May 27, 2022


Deep learning is a subfield of machine learning concerned with algorithms – sets of instructions for solving a problem or accomplishing a task, like a recipe – that are inspired by the structure and function of the brain.


Israeli scientists have now discovered that just one single neuron (nerve cell) is enough to achieve deep-learning results that previously required a complex artificial network consisting of thousands of connected neurons and synapses (structures that enable neurons to pass an electrical or chemical signal to another neuron). This discovery by researchers at Bar-Ilan University (BIU) near Tel Aviv is expected to have important implications for future artificial-intelligence hardware.


The brain is a complex network containing billions of neurons. Each of these neurons communicates simultaneously with thousands of others via their synapses and collects incoming signals through several extremely long, branched “arms” called dendritic trees. 


For the last 70 years, a basic assumption of neuroscience has been that brain learning occurs by modifying the strength of the synapses, following the relative firing activity of their connecting neurons. 

This hypothesis has been the basis for machine and deep learning algorithms that increasingly affect almost all aspects of our lives. But after seven decades, this long-lasting hypothesis has now been called into question.


In an article just published by the Nature group in Scientific Reports, the BIU researchers revealed that the brain learns completely differently than has been assumed since the 20th century. The new experimental observations suggest that learning is mainly performed in neuronal dendritic trees, where the trunk and branches of the tree modify their strength, as opposed to modifying solely the strength of the synapses (dendritic leaves), as was previously thought. 
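The difference between the two pictures of learning can be sketched schematically. The toy code below is not the researchers' model; it is only an invented contrast between a conventional unit, which assigns an adjustable strength to every synapse, and a "dendritic" unit, which groups synapses into branches and learns one strength per branch (all numbers and groupings here are made up for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
inputs = rng.random(12)                    # 12 incoming synaptic signals

# Conventional picture: one adjustable weight per synapse (12 parameters).
synaptic_w = rng.standard_normal(12)
out_synaptic = inputs @ synaptic_w

# Dendritic picture (schematic): 3 branches of 4 synapses each, with one
# adjustable strength per branch (only 3 parameters).
branches = inputs.reshape(3, 4)
branch_w = rng.standard_normal(3)
out_dendritic = branch_w @ branches.sum(axis=1)

print(out_synaptic, out_dendritic)
```

In this cartoon, learning in the dendritic picture means adjusting the few branch strengths rather than every individual synaptic weight.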


These observations also indicate that the neuron is actually a much more complex, dynamic and computational element than a binary element that fires or not. “We’ve shown that efficient learning on dendritic trees of a single neuron can artificially achieve success rates approaching unity for handwritten digit recognition. This finding paves the way for an efficient biologically inspired new type of artificial-intelligence (AI) hardware and algorithms,” said lead author Prof. Ido Kanter of BIU’s physics department and the Gonda (Goldschmied) Multidisciplinary Brain Research Center.
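The paper's dendritic-learning scheme is not reproduced here, but the general idea that a single trainable unit can learn to classify patterns can be illustrated with a minimal sketch: one logistic "neuron" trained by gradient descent on an invented two-pattern task (the 8-pixel prototypes, noise level and learning rate are all assumptions made for this toy example, not the authors' setup or data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for two "digit" patterns: 8-pixel prototypes plus noise.
proto0 = np.array([1, 1, 1, 0, 0, 1, 1, 1], dtype=float)
proto1 = np.array([0, 0, 1, 1, 1, 1, 0, 0], dtype=float)
X = np.vstack([proto0 + 0.2 * rng.standard_normal((50, 8)),
               proto1 + 0.2 * rng.standard_normal((50, 8))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# A single "neuron": weighted sum of inputs passed through a sigmoid.
w = np.zeros(8)
b = 0.0
lr = 0.5
for _ in range(200):                          # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted probability of class 1
    grad_w = X.T @ (p - y) / len(y)           # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((p > 0.5) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")
```

On this easily separable toy task the single unit reaches near-perfect accuracy; the paper's contribution is showing that comparable single-neuron learning, organized along dendritic trees, can handle a real benchmark like handwritten digits.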


“This simplified learning mechanism represents a step towards a plausible biological realization of backpropagation algorithms (which efficiently compute the derivatives, or gradients, used to update a network’s weights) that are currently the central technique in AI,” added Shiri Hodassman, a doctoral student and one of the key contributors to this work.
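Backpropagation is, at heart, the chain rule of calculus applied layer by layer. A minimal numeric sketch for a tiny invented two-weight network (all values chosen arbitrarily for illustration) shows the backward pass and checks it against a numerical derivative:

```python
import numpy as np

# Tiny network: y = w2 * tanh(w1 * x). Backpropagation applies the chain
# rule layer by layer to obtain d(loss)/d(weight) for every weight.
x, target = 0.5, 0.8
w1, w2 = 1.2, -0.7

h = np.tanh(w1 * x)            # forward pass: hidden activation
y = w2 * h                     # forward pass: output
loss = 0.5 * (y - target) ** 2

dy = y - target                # d(loss)/dy
dw2 = dy * h                   # chain rule: dy/dw2 = h
dh = dy * w2                   # propagate the error back through w2
dw1 = dh * (1 - h ** 2) * x    # through tanh'(w1*x) = 1 - h^2, then x

# Sanity check against a finite-difference derivative for w1.
eps = 1e-6
loss_eps = 0.5 * (w2 * np.tanh((w1 + eps) * x) - target) ** 2
numeric = (loss_eps - loss) / eps
print(dw1, numeric)
```

The analytic gradient from the backward pass matches the finite-difference estimate, which is exactly the property that lets networks with millions of weights be trained efficiently.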


The brain’s clock is a billion times slower than existing parallel graphics processing units (GPUs). The new demonstration of efficient learning on dendritic trees calls for new approaches in brain research, as well as for the generation of counterpart hardware aiming to implement advanced AI algorithms, the researchers concluded. “If one can implement slow brain dynamics on ultrafast computers, the sky is the limit.”