Neural Network Learning
Theoretical Foundations
Authors: Martin Anthony, London School of Economics and Political Science; Peter L. Bartlett, Australian National University, Canberra
Published: November 1999
Availability: Available
Format: Hardback
ISBN: 9780521573535

Looking for an examination copy?

This title is not currently available for examination. However, if you are interested in the title for your course, we can consider offering an examination copy. To register your interest, please contact collegesales@cambridge.org, providing details of the course you are teaching.

This book describes theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Research on pattern classification with binary-output networks is surveyed, including a discussion of the relevance of the Vapnik–Chervonenkis dimension and estimates of the dimension for several neural network models. A model of classification by real-output networks is developed, and the usefulness of classification with a 'large margin' is demonstrated. The authors explain the role of scale-sensitive versions of the Vapnik–Chervonenkis dimension in large margin classification and in real prediction. They also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient constructive learning algorithms. The book is self-contained and is intended to be accessible to researchers and graduate students in computer science, engineering, and mathematics.
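
For readers unfamiliar with the quantities mentioned above, here is a brief sketch, stated in the standard form found in this literature rather than quoted from the book. The Vapnik–Chervonenkis dimension of a binary-output function class is the size of the largest set of inputs that functions in the class can label in all possible ways; a classical sample-complexity bound of the kind surveyed in Part I then reads:

\[
m(\epsilon, \delta) \;=\; O\!\left(\frac{1}{\epsilon}\left(d \ln\frac{1}{\epsilon} + \ln\frac{1}{\delta}\right)\right)
\]

That is, this many labelled examples suffice for a learning algorithm that outputs a hypothesis consistent with the sample, drawn from a class of VC-dimension d, to achieve classification error at most \(\epsilon\) with probability at least \(1 - \delta\).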

• Contains results that have not appeared in journal papers or other books
• Presents many recent results in a unified framework and, in many cases, with simpler proofs
• Self-contained: introduces the necessary background material on probability, statistics, combinatorics and computational complexity
• Suitable for graduate students as well as active researchers in the area (parts of it have already formed the basis of a graduate course)

Reviews & endorsements

"This book gives a thorough but nevertheless self-contained treatment of neural network learning from the perspective of computational learning theory." Mathematical Reviews

"This book is a rigorous treatise on neural networks that is written for advanced graduate students in computer science. Each chapter has a bibliographical section with helpful suggestions for further reading ... this book would be best utilized within an advanced seminar context where the student would be assisted with examples, exercises, and elaborative comments provided by the professor." Telegraphic Reviews


Product details

Published: November 1999
Format: Hardback
ISBN: 9780521573535
Length: 404 pages
Dimensions: 229 × 152 × 27 mm
Weight: 0.76 kg
Availability: Available

Table of Contents

• 1. Introduction
• Part I. Pattern Recognition with Binary-output Neural Networks:
• 2. The pattern recognition problem
• 3. The growth function and VC-dimension
• 4. General upper bounds on sample complexity
• 5. General lower bounds
• 6. The VC-dimension of linear threshold networks
• 7. Bounding the VC-dimension using geometric techniques
• 8. VC-dimension bounds for neural networks
• Part II. Pattern Recognition with Real-output Neural Networks:
• 9. Classification with real values
• 10. Covering numbers and uniform convergence
• 11. The pseudo-dimension and fat-shattering dimension
• 12. Bounding covering numbers with dimensions
• 13. The sample complexity of classification learning
• 14. The dimensions of neural networks
• 15. Model selection
• Part III. Learning Real-Valued Functions:
• 16. Learning classes of real functions
• 17. Uniform convergence results for real function classes
• 18. Bounding covering numbers
• 19. The sample complexity of learning function classes
• 20. Convex classes
• 21. Other learning problems
• Part IV. Algorithmics:
• 22. Efficient learning
• 23. Learning as optimisation
• 24. The Boolean perceptron
• 25. Hardness results for feed-forward networks
• 26. Constructive learning algorithms for two-layered networks.
Resources

• Authors' web page