Might a single neuron solve interesting machine learning problems through successive computations on its dendritic tree? (English) Zbl 1469.92032

Summary: Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how aspects of a dendritic tree, such as its branched morphology or its repetition of presynaptic inputs, determine neural computation beyond this apparent nonlinearity. Here we use a simple model in which the dendrite is implemented as a sequence of thresholded linear units. We manipulate the architecture of this model to investigate the impacts of binary branching constraints and repetition of synaptic inputs on neural computation. We find that models with such manipulations can perform well on machine learning tasks, such as Fashion-MNIST or Extended MNIST. We find that model performance on these tasks is limited by binary tree branching and dendritic asymmetry and is improved by the repetition of synaptic inputs to different dendritic branches. These computational experiments further neuroscience theory on how different dendritic properties might determine neural computation on clearly defined tasks.
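The model described in the summary — a binary tree of thresholded linear units, with input features optionally repeated across different branches — can be sketched roughly as follows. The ReLU nonlinearity, the random weight ranges, the tree depth, and the simple input-tiling scheme for "repetition of synaptic inputs" are illustrative assumptions here, not the authors' exact architecture.

```python
# Minimal sketch of a binary dendritic-tree model: inputs arrive at the
# leaves, and each internal node is a thresholded (ReLU) linear unit
# combining its two children, up to a single root ("somatic") output.
import random

def relu(x):
    return x if x > 0.0 else 0.0

def make_tree(depth, rng):
    """Random parameters for a full binary tree of the given depth.
    Each internal node holds two child weights and a bias."""
    n_internal = 2 ** depth - 1
    return [(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-0.1, 0.1))
            for _ in range(n_internal)]

def forward(tree, leaves, depth):
    """Propagate 2**depth leaf inputs to the root through successive
    thresholded linear units, one level of the tree at a time."""
    level = list(leaves)
    node = len(tree)                 # walk the weight list bottom-up
    for d in range(depth, 0, -1):
        node -= 2 ** (d - 1)         # first node of this level
        nxt = []
        for i in range(0, len(level), 2):
            wl, wr, b = tree[node + i // 2]
            nxt.append(relu(wl * level[i] + wr * level[i + 1] + b))
        level = nxt
    return level[0]                  # root activation

rng = random.Random(0)
depth = 3                            # 8 leaves, 7 internal nodes
tree = make_tree(depth, rng)

# "Repetition of synaptic inputs": a 4-feature input is tiled twice, so
# each feature reaches two different branches of the tree.
x = [0.2, -0.5, 0.9, 0.1]
leaves = x + x
y = forward(tree, leaves, depth)
```

Training such a tree (e.g., by gradient descent on its leaf and node weights) is what lets the architecture manipulations — branching depth, asymmetry, input repetition — be compared on classification benchmarks.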


92C20 Neural biology
68T05 Learning and adaptive systems in artificial intelligence
Full Text: DOI


[1] Agmon-Snir, H., Carr, C. E., & Rinzel, J. (1998). The role of dendrites in auditory coincidence detection. Nature, 393, 268-272.
[2] Ahrens, M. B., Huys, Q. J. M., & Paninski, L. (2006). Large-scale biophysical parameter estimation in single neurons via constrained linear regression. In Y. Weiss, B. Schölkopf, & J. Platt (Eds.), Advances in neural information processing systems, 18. Cambridge, MA: MIT Press.
[3] Antic, S. D., Zhou, W. L., Moore, A. R., Short, S. M., & Ikonomu, K. D. (2010). The decade of the dendritic NMDA spike. Journal of Neuroscience Research, 88(14), 2991-3001.
[4] Barlow, H. B., & Levick, W. R. (1965). The mechanism of directionally selective units in rabbit’s retina. Journal of Physiology, 178(3), 477-504.
[5] Bhumbra, G. S. (2018). Deep learning improved by biological activation functions. arXiv:1804.11237.
[6] Bliss, T. V. P., & Lomo, T. (1973). Long-lasting potentiation of synaptic transmission in the dentate area of the unanaesthetized rabbit following stimulation of the perforant path. Journal of Physiology, 232(2), 357-374.
[7] Branco, T., & Häusser, M. (2011). Synaptic integration gradients in single cortical pyramidal cell dendrites. Neuron, 69(5), 885-892.
[8] Brette, R. (2015). What is the most realistic single-compartment model of spike initiation? PLOS Computational Biology, 11(4), 1-13.
[9] Brette, R., Fontaine, B., Magnusson, A. K., Rossant, C., Platkiewicz, J., & Goodman, D. F. M. (2011). Fitting neuron models to spike trains. Frontiers in Neuroscience, 5, 1-8.
[10] Clanuwat, T., Bober-Irizar, M., Kitamoto, A., Lamb, A., Yamamoto, K., & Ha, D. (2018). Deep learning for classical Japanese literature. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), Advances in neural information processing systems, 31 (pp. 1-8). Red Hook, NY: Curran.
[11] Cohen, G., Afshar, S., Tapson, J., & Van Schaik, A. (2017). EMNIST: Extending MNIST to handwritten letters. In Proceedings of the International Joint Conference on Neural Networks (pp. 2921-2926). Piscataway, NJ: IEEE.
[12] Beniaguev, D., Segev, I., & London, M. (2019). Single cortical neurons as deep artificial neural networks. bioRxiv:613141.
[13] Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pretraining of deep bidirectional transformers for language understanding. arXiv:1810.04805.
[14] Dreiseitl, S., & Ohno-Machado, L. (2002). Logistic regression and artificial neural network classification models: A methodology review. Journal of Biomedical Informatics, 35(5-6), 352-359.
[15] Farhoodi, R., & Kording, K. P. (2018). Sampling neuron morphologies. bioRxiv.
[16] Federmeier, K. D., Kleim, J. A., & Greenough, W. T. (2002). Learning-induced multiple synapse formation in rat cerebellar cortex. Neuroscience Letters, 332(3), 180-184.
[17] FitzHugh, R. (1961). Impulses and physiological states in theoretical models of nerve membrane. Biophysical Journal, 1(6), 445-466.
[18] Frankle, J., & Carbin, M. (2019). The lottery ticket hypothesis: Finding sparse, trainable neural networks. In Proceedings of the 7th International Conference on Learning Representations (pp. 1-42).
[19] Gerstner, W., & Naud, R. (2009). How good are neuron models? Science, 326(5951), 379-380.
[20] Gidon, A., Zolnik, T. A., Fidzinski, P., Bolduan, F., Papoutsi, A., Poirazi, P., … Larkum, M. E. (2020). Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science, 367(6473), 83-87.
[21] Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. Cambridge, MA: MIT Press. · Zbl 1373.68009
[22] Goodfellow, I. J., Bulatov, Y., Ibarz, J., Arnoud, S., & Shet, V. (2014). Multi-digit number recognition from street view imagery using deep convolutional neural networks. In Proceedings of the 2nd International Conference on Learning Representations (pp. 1-13).
[23] Gouwens, N. W., Berg, J., Feng, D., Sorensen, S. A., Zeng, H., Hawrylycz, M. J., … Arkhipov, A. (2018). Systematic generation of biophysically detailed models for diverse cortical neuron types. Nature Communications, 9(1).
[24] Hastie, T., Tibshirani, R., & Friedman, J. (2001). The elements of statistical learning. New York: Springer-Verlag. · Zbl 0973.62007
[25] Hay, E., Hill, S., Schürmann, F., Markram, H., & Segev, I. (2011). Models of neocortical layer 5b pyramidal cells capturing a wide range of dendritic and perisomatic active properties. PLOS Computational Biology, 7(7).
[26] Hines, M. L., & Carnevale, N. T. (1997). The NEURON simulation environment. Neural Computation, 9(6), 1179-1209.
[27] Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117(4), 500-544.
[28] Huval, B., Wang, T., Tandon, S., Kiske, J., Song, W., Pazhayampallil, J., … Ng, A. Y. (2015). An empirical evaluation of deep learning on highway driving. arXiv:1504.01716.
[29] Huys, Q. J. M., Ahrens, M. B., & Paninski, L. (2006). Efficient estimation of detailed single-neuron models. Journal of Neurophysiology, 96(2), 872-890.
[30] Jones, I. S., & Kording, K. P. (2019). Quantifying the role of neurons for behavior is a mediation question. Behavioral and Brain Sciences, 42, E233.
[31] Jones, T. A., Klintsova, A. Y., Kilman, V. L., Sirevaag, A. M., & Greenough, W. T. (1997). Induction of multiple synapses by experience in the visual cortex of adult rats. Neurobiology of Learning and Memory, 68(1), 13-20.
[32] Kincaid, A. E., Zheng, T., & Wilson, C. J. (1998). Connectivity and convergence of single corticostriatal axons. Journal of Neuroscience, 18(12), 4722-4731.
[33] Koch, C., Poggio, T., & Torre, V. (1983). Nonlinear interactions in a dendritic tree: Localization, timing, and role in information processing. Proceedings of the National Academy of Sciences of the United States of America, 80, 2799-2802.
[34] Krizhevsky, A. (2009). Learning multiple layers of features from tiny images (Technical Report TR-2009). Toronto: University of Toronto.
[35] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, & K. Q. Weinberger (Eds.), Advances in neural information processing systems, 25 (pp. 1097-1105). Red Hook, NY: Curran.
[36] LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
[37] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
[38] Lee, K. J., Park, I. S., Kim, H., Greenough, W. T., Pak, D. T., & Rhyu, I. J. (2013). Motor skill training induces coordinated strengthening and weakening between neighboring synapses. Journal of Neuroscience, 33(23), 9794-9799.
[39] Legenstein, R., & Maass, W. (2011). Branch-specific plasticity enables self-organization of nonlinear computation in single neurons. Journal of Neuroscience, 31(30), 10787-10802.
[40] London, M., & Häusser, M. (2005). Dendritic computation. Annual Review of Neuroscience, 28(1), 503-532.
[41] McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5(4), 115-133. · Zbl 0063.03860
[42] Mel, B. (2016). Toward a simplified model of an active dendritic tree. In G. J. Stuart, N. Spruston, & M. Häusser (Eds.), Dendrites. Oxford Scholarship Online.
[43] Mel, B. W. (1993). Synaptic integration in an excitable dendritic tree. Journal of Neurophysiology, 70(3), 1086-1101.
[44] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., … Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533.
[45] Moldwin, T., Kalmenson, M., & Segev, I. (2020). The gradient clusteron: A model neuron that learns via dendritic nonlinearities, structural plasticity and gradient descent. bioRxiv.
[46] Moldwin, T., & Segev, I. (2019). Perceptron learning and classification in a modeled cortical pyramidal cell. bioRxiv:464826.
[47] Poirazi, P., Brannon, T., & Mel, B. W. (2003a). Arithmetic of subthreshold synaptic summation in a model CA1 pyramidal cell. Neuron, 37(6), 977-987.
[48] Poirazi, P., Brannon, T., & Mel, B. W. (2003b). Pyramidal neuron as two-layer neural network. Neuron, 37(6), 989-999.
[49] Poirazi, P., & Mel, B. W. (2001). Impact of active dendrites and structural plasticity on the memory capacity of neural tissue. Neuron, 29(3), 779-796.
[50] Rall, W. (1959). Physiological properties of dendrites. Annals of the New York Academy of Sciences, 96(4), 1071-1092.
[51] Schiller, J., Major, G., Koester, H. J., & Schiller, Y. (2000). NMDA spikes in basal dendrites. Nature, 404, 285-289.
[52] Segev, I. (2006). What do dendrites and their synapses tell the neuron? Journal of Neurophysiology, 95(3), 1295-1297.
[53] Stuchlik, A. (2014). Dynamic learning and memory, synaptic plasticity and neurogenesis: An update. Frontiers in Behavioral Neuroscience, 8, 1-6.
[54] Toni, N., Buchs, P., Nikonenko, I., Bron, C. R., & Muller, D. (1999). LTP promotes formation of multiple spine synapses between a single axon terminal and a dendrite. Nature, 402, 421-425.
[55] Tran-Van-Minh, A., Cazé, R. D., Abrahamsson, T., Gutkin, B. S., & DiGregorio, D. A. (2015). Contribution of sublinear and supralinear dendritic integration to neuronal computations. Frontiers in Cellular Neuroscience, 9, 1-15.
[56] Travis, K., Ford, K., & Jacobs, B. (2005). Regional dendritic variation in neonatal human cortex: A quantitative Golgi study. Developmental Neuroscience, 27(5), 277-287.
[57] Ujfalussy, B. B., Makara, J. K., Branco, T., & Lengyel, M. (2015). Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits. eLife, 4, 1-51.
[58] Wilson, D. E., Whitney, D. E., Scholl, B., & Fitzpatrick, D. (2016). Orientation selectivity and the functional clustering of synaptic inputs in primary visual cortex. Nature Neuroscience, 19(8), 1003-1009.
[59] Xiao, H., Rasul, K., & Vollgraf, R. (2017). Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747.
[60] Zador, A. M., Claiborne, B. J., & Brown, T. H. (1992). Nonlinear pattern separation in single hippocampal neurons with active dendritic membrane. In J. Moody, S. J. Hanson, & R. Lippmann (Eds.), Advances in neural information processing systems, 4 (pp. 51-58). San Mateo, CA: Morgan Kaufmann.
[61] Zador, A. M., & Pearlmutter, B. A. (1996). VC dimension of an integrate-and-fire neuron model. In Proceedings of the Ninth Annual Conference on Computational Learning Theory (pp. 10-18). New York: ACM.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.