Theoretical Neuroscience: Computational And Mathematical Modeling of Neural Systems (Computational Neuroscience)
D**N
Good overview
This book is a detailed overview of the computational modeling of nervous systems, from the molecular and cellular level up to the standpoint of human psychophysics and psychology. The authors divide their conception of modeling into descriptive, mechanistic, and interpretive models. My sole interest was in Part 3, which covers the mathematical modeling of adaptation and learning, so my review will be confined to those chapters. The virtue of this book, and others like it, is its insistence on empirical validation of the models, rather than their justification by "thought experiments" and armchair reasoning, as is typically done in philosophy.

Part 3 begins with a discussion of synaptic plasticity and the degree to which it explains learning and memory. The goal here is to develop mathematical models of how experience and training modify neuronal synapses, and how these changes affect neuronal firing patterns and eventual behavior. The Hebb model is ubiquitous in this area of research, and the authors discuss it as a rule that synapses change in proportion to the correlation of the activities of the pre- and postsynaptic neurons. Experimental data illustrating long-term potentiation (LTP) and long-term depression (LTD) are presented early on. The authors concentrate mostly on models based on unsupervised learning in this chapter. The rules for synaptic modification are given as differential equations describing the rate of change of the synaptic weights with respect to the pre- and postsynaptic activity. The covariance and BCM rules are discussed: the first can drive plasticity on the basis of presynaptic or postsynaptic activity separately, while the second requires both to be active simultaneously. The authors consider ocular dominance in the context of unsupervised learning and study the effect of plasticity on multiple neurons. The last section of the chapter covers supervised learning, in which a set of inputs and the desired outputs are imposed during training.

In the next chapter, the authors consider reinforcement learning, beginning with a discussion of mathematical models of classical conditioning and introducing the temporal difference learning algorithm. They discuss the Rescorla-Wagner rule, a trial-by-trial learning rule for the weight adjustments, expressed in terms of the reward, the prediction, and the learning rate. They then discuss more realistic policies such as static action choice, where the reward or punishment immediately follows the action taken, and sequential action choice, where rewards may be delayed. The foraging behavior of bees is given as an example of static action choice, reduced to a stochastic two-armed bandit problem. The maze task for rats is discussed as an example of sequential action choice, which the authors treat with the "actor-critic algorithm." A generalized reinforcement learning algorithm is then discussed, with the rat water maze problem given as an example.
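These trial-by-trial rules are simple enough to sketch in code. Below is a minimal Python illustration, in my own naming conventions rather than the book's notation, of one common covariance-style Hebbian variant and the Rescorla-Wagner update described above; treat it as a sketch of the general idea, not the authors' exact formulation.

```python
import numpy as np

def covariance_update(w, u, v, v_theta, eps=0.01):
    """One covariance-style Hebbian variant: the weight change is
    proportional to presynaptic activity u times the deviation of
    postsynaptic activity v from a threshold v_theta, so activity
    above threshold gives LTP and activity below it gives LTD."""
    return w + eps * (v - v_theta) * u

def rescorla_wagner_update(w, u, r, eps=0.1):
    """Trial-by-trial Rescorla-Wagner rule: the prediction is v = w.u,
    and the weights move in proportion to the prediction error
    delta = r - v, scaled by the learning rate eps."""
    v = np.dot(w, u)      # predicted reward for this trial's stimuli
    delta = r - v         # prediction error (reward minus prediction)
    return w + eps * delta * u

# Example: partial reinforcement, where a stimulus is rewarded on half
# the trials; the weight converges toward the expected reward of 0.5.
w = np.zeros(1)
for trial in range(200):
    w = rescorla_wagner_update(w, u=np.array([1.0]), r=float(trial % 2))
print(w)  # approximately [0.5]
```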
Chapter 10 is an overview of what the authors call "representational learning", which, as they explain, is the study of neural representations from a computational point of view. The goal is to begin with sensory inputs and work out how representations are generated on their basis. That such representations are necessary is motivated by, for example, the visual system: the authors argue that what is presented at the retina is too crude to serve directly as an accurate representation of the visual world.

The main strategy in the chapter is to begin with a deterministic or probabilistic input and construct a recognition algorithm that gives an estimate of the causes underlying that input. The algorithms constructed are all based on unsupervised learning, so the existence and nature of the causes must be inferred from heuristics and the statistics of the input data. These two requirements are met by constructing first a generative model and then a recognition model. The familiar expectation-maximization (EM) algorithm is discussed as a method for optimizing the match between real data and the synthetic data produced by a generative model, and a detailed overview of EM is given in the context of density estimation. The authors then discuss causal models for density estimation, such as Gaussian mixtures, the K-means algorithm, factor analysis, and principal components analysis.

They then discuss sparse coding as a technique for dealing with the fact that cortical activity is not Gaussian, illustrating this with an experimental sample showing that the activity of a neuron in the inferotemporal area of the macaque brain follows an exponential distribution. The reader will recognize 'sparse' probability distributions as being 'heavy-tailed', i.e. usually taking values close to zero but occasionally values far from it. The authors emphasize the difficulty of computing the recognition distribution explicitly; the Olshausen/Field model is used to give a deterministic approximate recognition model for this purpose. The authors then give a fairly detailed overview of a two-layer, nonlinear 'Helmholtz machine' with binary inputs, and show how to express expectation maximization in terms of the Kullback-Leibler divergence. Learning in this model takes place via stochastic sampling and occurs in two phases, the so-called "wake and sleep" algorithm. The last section of the chapter discusses how recent interest in coding, transmitting, and decoding images has driven much more research into representational learning algorithms, covering multi-resolution decomposition and its relationship to the coding algorithms available.
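As an aside, the K-means algorithm mentioned above is a good way to see EM concretely: it is the hard-assignment limit of EM for a Gaussian mixture. A minimal sketch (my own code, not taken from the book):

```python
import numpy as np

def kmeans(x, k, n_iter=50, seed=0):
    """Hard-assignment limit of EM for a mixture model: the E-step
    assigns each data point to its nearest cluster mean, and the
    M-step re-estimates each mean as the centroid of its points."""
    rng = np.random.default_rng(seed)
    means = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # E-step: hard "responsibilities" via nearest-mean assignment
        dists = ((x[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # M-step: move each mean to the centroid of its assigned points
        for j in range(k):
            if np.any(labels == j):
                means[j] = x[labels == j].mean(axis=0)
    return means, labels

# Example: two well-separated Gaussian clouds in 2D
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
means, labels = kmeans(x, k=2)
print(means)  # one mean near (0, 0), the other near (5, 5)
```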
S**A
Majestic overview of the fundamentals; helped me fall in love with the subject
This book was an eye-opener for me. Scientists still don't fully understand how neurons "think" and "learn", but I was shocked to learn how much we _do know_. After reading significant chunks of this book I feel inspired, and I want to recommend it to others who have an interest in this subject. This book is a great overview of the field of Computational Neuroscience. The authors convincingly explain the most fundamental theoretical concepts in Computational Neuroscience and back them up by describing some of the major experiments in the field. It neither oversimplifies nor over-complicates, for a first introduction.

Coming to the specific merits of the book, what stands out is the quality of the prose and explanations. The book is tightly written, so it gets to the point fast and explains what it needs to without much ado. Because it is quite succinct (and does not "over explain"), you may need to do multiple readings of the chapters to understand the content. Actually, it was only on repeated readings that I came to appreciate the overall coherence of this book.

In this book you will find that complex math and derivations are often either relegated to the chapter appendix or left to the reader to cover independently. This approach actually makes the book less daunting, because you don't need to wade through dozens of pages of topics that are not really computational neuroscience but math! Lest someone get the impression that the book is too mathematical, I want to point out that a standard science/engineering background in calculus and differential equations, plus basic knowledge of physics/chemistry, should be fine. I personally only had a few problems in the area of dynamical systems, which makes its appearance in a few places in the book.

On the negative side (though this book definitely deserves its 5 stars), I feel the book lacks a little sparkle and personality and can be a bit dry in places. Luckily there are a lot of interesting MOOCs and videos on the Internet on Neuroscience that will provide the necessary background "excitement" and context you need while reading this book.

Another (subjective) thing: I love calculus, but I think it's slightly overdone here. If you're _really_ doing computational neuroscience, you're probably going to use a lot of summation, simulation, discrete math, data analysis, and algorithms, but this book loves showing things in terms of calculus. Yes, it's prettier with integrals, but you're going to have to translate that into algorithms eventually. So, ironically, this book on Computational Neuroscience needs to be a bit more "computational."

Finally, if you have some prior knowledge of Machine Learning you are likely to enjoy this book more. This was an unexpected bonus: I didn't realize there was so much overlap between Machine Learning and (real) neural systems before diving into the subject.
N**O
Good, and Useful
Yes, the book is heavy in mathematics. This is, after all, a book about COMPUTATIONAL neuroscience! Mathematics is a human language, like English or Mandarin. It happens to be a PRECISE language. Be prepared to embrace the math, and know that you need to understand enough of it that the math itself speaks to you, like English or Mandarin prose. If you are not prepared for that, then think twice about purchasing this book or taking a class based on it.

I know, I know, many people go into medicine in order to avoid math. I think that is to the eternal shame of the modern practitioner, but just know that computational neuroscience is not for you.

I don't give many reviews five stars, or even one star. Those stars are too many standard deviations from the norm for most work. This is a good book, better than merely competent. With my math background, I am finding it a very useful and understandable read.
W**F
New Title: Theoretical Neuroscience - Firing Rate Models
While I would like to say that this book is all-encompassing, it only briefly touches upon one of the very important camps of computational neuroscience: spiking models. Be warned that you will be viewing theoretical neuroscience through one lens, aimed mainly at firing rates. A brief distinction: spiking models incorporate the dynamics of individual neuronal spikes into neural models, and tend to focus on the contribution of the temporal and electrical components of action potentials as they move down axons and interact with other neurons. Firing-rate models condense this spiking behavior into a probability distribution governing the rate at which the neuron fires (think Hertz). This is a fantastically written book, but I would suggest Izhikevich's book as a companion.
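To make the distinction concrete, here is a minimal Python sketch (my own, with made-up parameter values) of the kind of firing-rate equation the book centers on, tau * dr/dt = -r + F(I), integrated with Euler steps. A spiking model would instead track the membrane voltage and emit individual action potentials.

```python
import numpy as np

def simulate_rate_neuron(I, tau=0.01, dt=0.001):
    """Euler integration of a firing-rate model, tau * dr/dt = -r + F(I).
    Spiking detail is abstracted into a rate r (think Hz); F is a
    rectified-linear activation so rates stay nonnegative."""
    r, rates = 0.0, []
    for i_t in I:
        f = max(i_t, 0.0)              # F(I): rectification
        r += (dt / tau) * (-r + f)     # one Euler step of the rate ODE
        rates.append(r)
    return np.array(rates)

# Example: a step of input current; the rate relaxes toward F(I)
# with time constant tau.
I = np.concatenate([np.zeros(100), 20.0 * np.ones(400)])
print(simulate_rate_neuron(I)[-1])  # close to 20.0 after ~0.4 s
```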
A**L
Excellent product
Excellent product. Good condition; I recommend it.
W**N
Four Stars
Great textbook for the field - but you better brush up on your maths first!
A**I
Great textbook!
Excellent book on theoretical neuroscience; suitable both for students and for PhDs. It engages in a concise and mathematical treatment of the basics of the functioning of the nervous system, with a number of applications involving learning, development, memory, etc. I highly recommend it!
ら**だ
A great book!
This is a foundational text for thinking about neural systems computationally. It is a very good text not only because it covers everything from the cellular level to the network level, but also because it carefully grounds material usually treated in information-science texts in the neuroscience itself. If you want to go deeper into the cellular level, read Christof Koch's Biophysics of Computation; if you want to go deeper into neural coding, Fred Rieke's Spikes would be a good choice.
T**O
For theoretical neuroscience, this should be your first stop.
It covers mathematical approaches to neuroscience very broadly, and is very good. For the machine-learning material from chapter 8 onward, partly because of space constraints, you are better off consulting other books, but up through chapter 7 it is self-contained and perfect. The explanations in the appendices are also careful.