

Pattern Recognition and Machine Learning (Information Science and Statistics) [Bishop, Christopher M.]
| Best Sellers Rank | #73,824 in Books; #12 in Computer Vision & Pattern Recognition; #42 in Probability & Statistics (Books); #200 in Artificial Intelligence & Semantics |
| Customer Reviews | 4.5 out of 5 stars (789) |
| Dimensions | 7.7 x 1.3 x 10.2 inches |
| ISBN-10 | 0387310738 |
| ISBN-13 | 978-0387310732 |
| Item Weight | 2.31 pounds |
| Language | English |
| Print length | 738 pages |
| Publication date | August 17, 2006 |
| Publisher | Springer |
S**S
If only all textbooks were this well-written
I was a big fan of Bishop's earlier "Neural Networks for Pattern Recognition" despite my not being particularly interested in neural networks (as opposed to other aspects of machine learning), and so I was pretty excited when I heard about this book. Reading it has not left me disappointed. Like his earlier book, this text is quite mathematically oriented, and not well-suited for people who aren't comfortable with calculus. However, also like in "NNPR", the writing style here is very clear, and everything past basic calculus and linear algebra is well-explained before it's needed. The appendices alone are a goldmine. (Appendix B is a great "cheat sheet" for commonly used probability distributions; Appendix C has lots of useful matrix properties you may have forgotten or never known; Appendix D quickly explains what you need to know about the calculus of variations; and Appendix E does the same for Lagrange multipliers.) The author also does an excellent job throughout the text of marrying math and intuition without giving either short shrift. However, note that the material covered is inherently pretty complex, so the book can still be intimidating in parts despite the excellent writing. It's more appropriate for, say, Ph.D. students and professional researchers in statistics or machine learning than people who just want to crank out code for a simple classifier. There is very little pseudocode (although copious MATLAB code will supposedly be made available in a companion book due out in 2008), and the book's overall approach to machine learning is basically hard-core Bayesian statistics. If you are not willing to scratch your head for a while over lots and lots of equations, this may not be the book for you. On the flip side, people who are already experts in machine learning may be mildly disappointed with the lack of coverage some of their pet topics get. 
For example, while the chapter on graphical models is excellent as far as it goes, it only mentions the problem of learning graphical model structures (one of my areas of interest) in passing. Reinforcement learning (another personal area of interest) is discussed briefly in the introduction and then written off as beyond the scope of the book. However, the book is already a fabulous resource as it stands; complaining there's not even more of it would be gauche. The cover may look like goat barf, and there are some innocuous missing words here and there (hey, it's a first edition), but if you're serious about machine learning and not afraid of a little math, you should definitely own this book. I can only imagine how much cooler my own thesis research might have been if this book had been around a few years earlier.
K**A
Excellent text
First of all, as some other reviewers have pointed out, the subtitle of the book should include the word 'Bayesian' in some form or other. This matters because the Bayesian approach, although an important one, is not adopted across the board in machine learning, and consequently an astonishing number of methods presented in the book (Bayesian versions of just about everything) are not mainstream. The recent Duda book gives a better idea of the mainstream in this sense, but because the field has evolved with such rapidity, it excludes massive recent developments in kernel methods and graphical models, which Bishop includes. Pedagogically, however, this book is almost uniformly excellent. I didn't like the presentation of some of the material (the first few sections on linear classification are relatively poor), but in general, Bishop does an amazing job. If you want to learn the mathematical basis of most machine learning methods in a practical and reasonably rigorous way, this book is for you. Pay attention in particular to the exercises, which are the best I've seen so far in such a text: involved, but not frustrating, and always aiming to further elucidate the concepts. If you want to really learn the material presented, you should, at the very least, solve all the exercises that appear in the sections of the text (about half of the total). I've gone through almost the entire text and done just that, so I can say that it's not as daunting as it looks. To judge your level, solve the exercises for the first two chapters (the second, a sort of crash course on probability, is quite formidable). If you can do these, you should be fine. The author has solutions for a lot of them on his website, so you can check there if you get stuck on some.
As far as the Bayesian methods are concerned, they are usually a lot more mathematically involved than their counterparts, so solving the equations representing them can only give you more practice. Seeing the same material in a different light can never hurt you, and I learned some important statistical/mathematical concepts from the book that I'd never heard of, such as the Laplace and evidence approximations. Of course, if you're not interested, you can simply skip the method altogether. From the preceding, it should be clear that the book is written with a certain kind of reader in mind. It is not for people who want a quick introduction to some method without the gory details behind its mathematical machinery. There is no pseudocode. The book assumes that once you get the math, the algorithm to implement the method should either become completely clear, or, in the case of some more complicated methods (SVMs, for example), you know where to head for details on an implementation. Therefore, the people who will benefit most from the book are those who will either be doing research in this area or will be implementing the methods in detail in lower-level languages (such as C). I know that sounds off-putting, but the good thing is that the level of math required to understand the methods is quite low: basic probability, linear algebra, and multivariable calculus. (Read the appendices in detail as well.) No knowledge is needed, for example, of measure-theoretic probability or function spaces (for kernel methods). Therefore the book is accessible to anyone with a decent engineering background who is willing to work through it. If you're one of the people the book is aimed at, you should seriously consider getting it. Edited to add: I've changed my rating from 4 stars to 5. Even now, 4-5 years later, there is simply no good substitute for this book.
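The Laplace approximation mentioned above is one of those simple-but-useful ideas the book covers: fit a Gaussian to a density by matching its mode and the curvature of the log density at that mode. A minimal sketch, using an unnormalized Gamma density as an illustrative toy target (my own example, not one from the book):

```python
import math

# Laplace approximation: replace a density p(x) with a Gaussian centred at
# the mode of p, with variance set by the curvature of log p at that mode.
# Toy target: unnormalized Gamma(k, 1) with shape k = 10.
k = 10.0
log_p = lambda x: (k - 1.0) * math.log(x) - x  # log density up to a constant

# The mode solves d/dx log p = (k - 1)/x - 1 = 0, i.e. x* = k - 1.
mode = k - 1.0

# Curvature: d^2/dx^2 log p = -(k - 1)/x^2, so the Gaussian variance is
# -1 / (second derivative at the mode) = mode^2 / (k - 1) = k - 1.
var = mode ** 2 / (k - 1.0)

print(f"Laplace fit: N(mean={mode}, var={var}); true Gamma variance is {k}")
```

For Gamma(k, 1) the Laplace variance comes out as k - 1 versus the true variance k, so the Gaussian fit tightens as the target becomes more Gaussian, which is the rough intuition behind the method.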
A**E
A Definitive Reference
This book is used in an undergraduate statistics course at the University of Chicago. As a textbook for a class, I believe it does its job well. It doesn't concern itself with silly 'plug and chug' examples and presents the mathematical derivations of each technique clearly and concisely. Obviously, you'll need a background in linear algebra and probability theory; to think that the five-page introduction to probability theory could possibly suffice is ridiculous. It also helps to have some programming experience so you can practice implementing the techniques directly. The book is written so as to be helpful as a reference. Just the other day, I found myself implementing a parallelized feed-forward neural network and quickly picked up the text to see what variations would be helpful. Each topic is generally presented in its basic form, with alternatives and optimizations toward the end of the chapter, which made it very easy to find what I wanted in seconds. Although the book covers most of the major techniques (binary classifiers, regression, support vector machines, neural networks, etc.) and their optimizations, it doesn't concern itself with optimal implementations. This is *not* a machine learning algorithms textbook. This is what you consult if you want to refresh yourself on a particular technique.
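For anyone wondering what "implementing the techniques directly" looks like at its simplest, here is a bare-bones forward pass for the kind of two-layer feed-forward network the book analyses; the weights below are made-up illustrative values, and this sketch ignores training and parallelization entirely:

```python
import math

# Two-layer feed-forward network: tanh hidden units, linear output
# (a regression-style network). All weights are illustrative.
def forward(x, W1, b1, W2, b2):
    # Hidden layer: each unit takes a weighted sum of inputs, then tanh.
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # Output layer: linear combination of the hidden activations.
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

W1 = [[0.5, -0.3], [0.8, 0.1]]   # 2 inputs -> 2 hidden units
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0]]               # 2 hidden units -> 1 output
b2 = [0.2]

y = forward([1.0, 2.0], W1, b1, W2, b2)
print(y)
```

Everything past this point (backpropagation, regularization, Bayesian treatments of the weights) is where the book's chapters earn their keep.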
P**L
This book is excellently written. It is not simply a reference bible; the author tells a chronological story and takes you along for the ride. The print quality of my copy is excellent: nice waxy paper, crisp text, nice and colourful. As you've probably read elsewhere online, you will need to have done prior courses in probability and linear algebra, as the introductory chapters here, although technically "self-contained", are very dense. Although Kevin Murphy's new 2022 book is also great, it feels like more of a reference on a zillion topics, whereas with PRML, Bishop is really trying to get you to understand the fundamentals.
U**E
This is a wonderful book. As a textbook on pattern recognition, I think it is outstanding. The principles and characteristics of pattern recognition, along with existing useful methods, are explained clearly. These make full use of statistical knowledge, but since the book builds up from the fundamentals, self-study is entirely possible. Also, because it is printed in full colour, the graphs and figures are very clean and easy to read. I would call it a book for beginner-to-intermediate researchers in pattern recognition.
A**X
There are a huge number of machine learning books now available. I own many of them. But I don't think any have had such an impact as Chris Bishop's effort here; I certainly count it as my favourite. The material covered is not exhaustive (although good for 2006), but it's a good springboard to many other advanced texts. (The moniker of ML 'Bible' has apparently been passed to Kevin Murphy's book.) What *is* covered is explained with exceptional clarity, with an eye for the intuition as well as the theory. If you are after a practitioner's guide, or a first ML book for self-study, this probably isn't ideal. It assumes significant familiarity with multivariate calculus, probability, and basic stats (identities, moments, regression, MLE, etc.). The pitch is probably early postgraduate level, but with a few stretching parts. If this is your background, I think it's a better first ML book than MacKay (Information Theory ...), Murphy (Machine Learning ...), or Hastie et al. (Elements of Statistical Learning), due to its coherence of topics and consistency of depth, though those books are all excellent in their own ways; Barber (Bayesian Reasoning ...) is also a good alternative. Most chapters are fairly self-contained, so once you've worked your way through the first couple of chapters, you can skip around as required. Particular highlights for me were the chapters on EM and variational methods (ch. 9 & 10); I think you'd be hard pressed to find a better explanation of either. Finally, it's worth pointing out that the book is unrepentantly Bayesian, and lacking some subtlety that may be grating for seasoned statisticians. Nevertheless, if the above sounds like what you're looking for, this is probably a good choice.
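To give a flavour of what the EM chapter is about, here is a minimal EM loop for a one-dimensional two-component Gaussian mixture; the synthetic data and the initialization are my own toy choices, and the book's derivation is, of course, far more careful:

```python
import math
import random

# Synthetic data: two well-separated Gaussian clusters.
random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(5.0, 1.0) for _ in range(200)])

def norm_pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

# Crude initialization: mixing weights, means at the data extremes, unit variances.
pi, mu, var = [0.5, 0.5], [min(data), max(data)], [1.0, 1.0]

for _ in range(50):
    # E-step: responsibility of each component for each point.
    resp = []
    for x in data:
        w = [pi[k] * norm_pdf(x, mu[k], var[k]) for k in range(2)]
        s = sum(w)
        resp.append([wk / s for wk in w])
    # M-step: re-estimate weights, means, and variances from responsibilities.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
        pi[k] = nk / len(data)

# The fitted means should settle near the true cluster centres, 0 and 5.
print([round(m, 2) for m in sorted(mu)])
```

The alternation above (soft assignments, then weighted re-estimation) is exactly the structure the chapter generalizes, and the variational-methods chapter then recasts as optimizing a lower bound.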
M**A
Target audience: graduate students and researchers relatively new to the field of Bayesian learning. (+) Clearly written. High-quality print (figure quality is much higher than that of your average textbook). Fresh approach to HMMs and the Kalman filter; yes, the Kalman filter/smoother equations make much more sense when derived from a graphical model. A quick Google search will yield some accompanying lecture videos from the author (on graphical models and sequential learning). Solutions for the "www"-marked exercises are available from the author's webpage. (neutral) Formula-heavy, so not for the faint of heart. (-) Little emphasis on computational/implementation aspects. There is no "official" (author's) code for the algorithms discussed in the book; however, there are some good-quality third-party implementations available on the web. Most of the exercises simply fill in missing steps in the algebraic derivations. There are no coding exercises. Some of the data sets used in the book don't seem to be available anymore (at least not at the URL given in the book). Highly recommended. Best when used in conjunction with the third-party (MATLAB/Python) code available on the web.
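As a taste of the sequential-estimation material this reviewer praises, here is a one-dimensional Kalman filter predict/update step; the model parameters below are illustrative guesses, not values from the book:

```python
# One-dimensional Kalman filter for a linear-Gaussian state-space model:
#   state:       x_t = A * x_{t-1} + process noise (variance Q)
#   observation: z_t = H * x_t     + measurement noise (variance R)
# All parameter values here are illustrative.
def kalman_step(mu, P, z, A=1.0, Q=0.1, H=1.0, R=0.5):
    # Predict: propagate the state mean and variance through the dynamics.
    mu_pred = A * mu
    P_pred = A * P * A + Q
    # Update: blend prediction and observation via the Kalman gain K.
    K = P_pred * H / (H * P_pred * H + R)
    mu_new = mu_pred + K * (z - H * mu_pred)
    P_new = (1.0 - K * H) * P_pred
    return mu_new, P_new

mu, P = 0.0, 1.0  # initial belief: mean 0, variance 1
for z in [1.0, 1.2, 0.9, 1.1]:
    mu, P = kalman_step(mu, P, z)

# After a few noisy observations near 1.0, the posterior mean approaches 1.0
# and the posterior variance shrinks.
print(round(mu, 3), round(P, 3))
```

The graphical-model derivation the reviewer likes shows that these two steps are just Gaussian message passing along a chain, which is what makes the filter and smoother fall out of the same machinery.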
B**Y
Very comprehensive; it will be relevant for a long time to come. There's a move toward learning coding libraries in order to solve problems, which is good, but one still needs a reliable reference to fill in the blanks or to learn the basics (and the advanced material!).