

A synergistic interplay of brain and neuro-inspired technologies: will the path to Artificial General Intelligence for machines stand the test of time?

Author: Aekta Shah, M.D., MRes (Translational Cancer Medicine), Associate Professor and Pathologist. Member: Neuro-oncology Disease Management Group, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai, India.
I would also like to acknowledge Anuj Shah for his contribution to Figure 2.

Since the latter part of the last century, man has tried to simulate the structure and functions of the brain by creating artificial neural networks, and has more recently begun the quest for Artificial General Intelligence (AGI) in machines. The idea is to create efficient computer systems, as unique as the human brain, with the capability to judge, learn skills dynamically, develop their own imaginative faculties and acquire cognitive abilities. Whether these systems will stand the test of time is the big question. Or will they threaten humanity altogether? This is something to ponder.

An Ode to the Brain

Within the safe confines of the skull bones I lie,

A complex physical scaffolding of jelly-like material.

Through a fascinating network of connectomes, I ally,

Amongst the billions of neurons with whom I identify.

Excitatory or inhibitory, synaptic signal so trivial be,

Initiator and driver of tasks so crucial, none will disagree.

From Artificial neural networks to Neuromorphic computing,

My every structure and function are worth emulating.

This race to simulate my being is like a double-edged sword,

Any grave mistake here man cannot afford.

My only dream and only hope,

Is let it be for the benefit of the human folk.

Indeed, man has long been intrigued by the complexity of the human brain. Besides being the driver of all functions in the body, it is the seat of creativity where ideas emanate, through vast seams of imagination, so exclusive to the human mind.

Let’s move back in time to the year 1943, when McCulloch and Pitts proposed the first mathematical model of an artificial neuron. Fifteen years later, in 1958, Frank Rosenblatt built on this idea to conceptualise the perceptron, along with a hardware implementation whose purpose was to serve the US Navy in performing image-analysis tasks. Ever since then, the pursuit of designing and developing both the hardware and software components of computer systems based on the central nervous system has gained traction. This later gave birth to a new method of computer engineering commonly referred to as neuromorphic computing (NC), first proposed in the 1980s by Caltech professor Carver Mead, who described the first analog silicon retina.

Circa 2008, an inevitable death sentence for Moore’s law, paralleled by a compelling need to develop energy-efficient, fault-tolerant and massively parallel processing systems, once again revived the popularity of NC. At the core of NC lies the memristor, developed from a thin film of titanium dioxide, which enables a degree of plasticity similar to that of the brain and essentially integrates the functions of capacitors, resistors, and inductors. The main way NC would differ from traditional computing is in its ability to take decisions logically and, like the brain, to be curious enough to garner new information. Besides using spiking neural networks (SNNs), which communicate through discrete spike events rather than continuous signals, NC uses a novel chip architecture in which memory and processing occur concurrently at the level of each neuron, very much unlike the von Neumann architecture of traditional computer systems.

These systems are inherently scalable, as already demonstrated by SpiNNaker, BrainScaleS (which enables neuroscience simulations at scale), Intel’s Loihi (about 131,000 neurons and 130 million synapses per chip) and Pohoiki Beach system (8.3 million neurons), IBM’s TrueNorth chip (1 million neurons and over 268 million synapses), and the Tianjic chip (40,000 neurons and 10 million synapses), to name a few. Besides enhancing efficiency through spike-driven computation, they also generate randomness within the system to bring forth an element of noise. Potential applications of NC include driverless cars, the Internet of Things, edge devices, smart-home applications, natural language understanding, data analytics, process optimization and robotics. Though still in its infancy, the market for NC is estimated to reach $22 billion by the year 2035.
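To make the idea of spike-driven computation concrete, below is a minimal sketch, in plain Python/NumPy, of a leaky integrate-and-fire neuron, the elementary unit of an SNN. The time constant, threshold and input drive here are illustrative assumptions, not the parameters of any of the chips mentioned above.

```python
import numpy as np

# A single leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks back toward rest, integrates the input current, and emits a
# discrete spike whenever it crosses a threshold.
def simulate_lif(input_current, dt=1e-3, tau=20e-3,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for i_t in input_current:
        v += (-(v - v_rest) + i_t) * (dt / tau)   # leaky integration
        if v >= v_threshold:                      # threshold crossing -> spike event
            spikes.append(1)
            v = v_reset                           # reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

# A noisy constant drive produces a sparse, event-driven spike train:
# information is carried by when spikes occur, not by a continuous value.
rng = np.random.default_rng(0)
current = 1.5 + 0.2 * rng.standard_normal(1000)
spike_train = simulate_lif(current)
print("spikes emitted over 1,000 steps:", int(spike_train.sum()))
```

In a neuromorphic chip, many such units run in parallel and communicate only when a spike occurs, which is where much of the claimed energy saving over clock-driven von Neumann designs comes from.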

Besides signalling, memory, and processing, brain-inspired computer systems also take inspiration from the complexity of the architecture of neuronal connections, bringing us to the interesting topic of neuronal wiring, or connectomics, which involves comprehensive mapping of the connections within the nervous system.

‘Neurons that fire together wire together’

— Donald Hebb

Neuropsychologist Donald Hebb, in 1949, in his ‘cell assembly’ theory, first proposed that a set of neurons which repeatedly receive and respond to the same stimulus connect with one another to form ‘neuronal ensembles’. The Hebbian theory states that these associations, mediated by synapses, play a vital role in learning, memory and recall. Any trivial stimulus to a select subset of neurons is enough to fire the entire ensemble. Why, then, do similar simulations fail so badly on computer systems? Possibly because of runaway excitation and the resulting instability. There are several theories to explain the possible mechanism. One such explanation, proposed by Wu and Zenke and termed ‘nonlinear transient amplification’, describes how short-term synaptic depression could bring stability to the strong positive excitatory feedback. Coming back to the wiring between neuronal connections, it is hypothesised that such connections determine who we are as individuals.
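As a toy illustration of why a naive Hebbian ensemble ‘explodes’ in simulation, the sketch below contrasts plain Hebbian weight growth with Oja’s rule, a standard textbook stabilisation. This is not the mechanism proposed by Wu and Zenke (their nonlinear transient amplification relies on short-term synaptic depression); it is simply a minimal way to see the instability and one way of taming it.

```python
import numpy as np

# Plain Hebbian learning for a single linear neuron: the weight change is
# proportional to the product of pre- and postsynaptic activity
# ("fire together, wire together").  With nothing to oppose it, the
# positive feedback makes the weights grow without bound.
def hebbian_update(w, x, lr=0.01):
    y = w @ x                       # postsynaptic activity
    return w + lr * y * x

# Oja's rule adds a decay term proportional to y**2, which keeps the
# weight vector bounded (used here purely for illustration).
def oja_update(w, x, lr=0.01):
    y = w @ x
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(1)
w_hebb = w_oja = 0.1 * rng.normal(size=4)
for _ in range(1000):
    x = rng.normal(size=4) + np.array([1.0, 1.0, 0.0, 0.0])  # correlated inputs
    w_hebb = hebbian_update(w_hebb, x)
    w_oja = oja_update(w_oja, x)

print("plain Hebbian weight norm:", np.linalg.norm(w_hebb))  # runaway growth
print("Oja-stabilised weight norm:", np.linalg.norm(w_oja))  # stays bounded
```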

So, what makes us who we are? Is it the genome or the connectome?

‘Identity lies not in our genes, but in the connections between our brain cells.’
― Sebastian Seung (Professor of Computer Science and Neuroscience)

The genome is something we are born with, along with our basic neural structure; the connectome, however, changes throughout our lives. Our interaction with the environment leads to continuous remodelling of the neuronal network, which determines and shapes our personalities. Seung defines reweighting, reconnection, rewiring and regeneration as the four Rs of connectome change. It is hypothesised that miswiring of these connections is the root cause of many psychiatric illnesses.

Will unravelling the connectome solve the mystery of why certain neurological disorders occur in the first place?

The circuitry of the brain is indeed so complex that unravelling the entire network of 86 billion neurons and 700 trillion synapses is no less than a herculean task. The connectome of Caenorhabditis elegans, comprising roughly 300 neurons and seven thousand synaptic connections, was mapped in the year 1986. Three decades later, the connectome of the Ciona intestinalis larva was mapped. Macroscopic visualisation of the human connectome is possible using several techniques such as Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), and Diffusion Tensor Imaging (DTI). At the microscopic scale, however, analysing electron-microscope images of brain tissue, billions of pixels at a time, would take an insurmountable amount of time and energy. Here comes the role of deep neural networks, which have grown by leaps and bounds in recent years, thanks in part to the gaming industry, which drove tremendous development of the Graphics Processing Unit (GPU). Deep learning can fast-track the identification of synaptic connections inside the brain, which can in turn inspire the development of neuromorphic computational programs and systems to perform skilled neurocognitive tasks.
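As a rough sketch of how deep learning enters this pipeline, the toy model below (PyTorch) classifies whether a small 2D electron-microscopy patch contains a synapse. The patch size, layer sizes and the binary ‘synapse / no synapse’ framing are illustrative assumptions; real connectomics pipelines rely on much larger 3D segmentation networks trained on labelled EM volumes.

```python
import torch
from torch import nn

# A deliberately small convolutional network that predicts whether a
# 64x64 grayscale electron-microscopy patch contains a synapse.
class SynapsePatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),                     # logit for "contains a synapse"
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Forward pass on a random batch, just to show the expected shapes;
# real use would train on labelled EM data with a binary cross-entropy loss.
model = SynapsePatchClassifier()
patches = torch.randn(8, 1, 64, 64)
logits = model(patches)
print(logits.shape)  # torch.Size([8, 1])
```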

Figure 1: Evolution of Neuromorphic Computing (Key events). The above figure is by no means exhaustive, and several other individuals and groups have contributed significantly to the evolution of NC.

Besides neuromorphic technologies, dramatic advances in the field of neural engineering have led to the development of Brain Computer Interface (BCI) systems and Deep Brain Stimulators (DBS). These technologies are used to augment, understand, repair, or replace brain function. DBS involves sending electrical impulses through electrodes implanted in the brain to help patients suffering from chronic depression, Parkinson’s disease, dystonia, and tremors. BCI systems facilitate communication between the brain and an output device to enable control of motor, autonomic or language functions, besides also providing external feedback to the user. Because continuous stimulation drains the battery, implantation and replacement of DBS devices involve repeated invasive surgical procedures. Through BCI-triggered DBS systems, patients would be able to operate DBS on their own, conserving energy, extending battery life and removing the need for repeat implantation. All this would be hunky-dory if things were that simple. The brain is the harbinger of thought and carries the entire database of our personal information.

What if such a system, harbouring the mental and psychological states of individuals, were accessed by a third party through a BCI?

This would be a major infringement of people’s privacy. Unlike other organ systems, the brain defines our individuality and identity, and thus raises far more ethical questions about privacy, identity, and jurisdiction.

What would happen if a prosthetic device controlled by a BCI malfunctions? Would it be covered under the ambit of the law, and if so, who would take the onus of responsibility? The patient, or the manufacturer of the device?

As these technologies are still in their nascent stages, there are no clear-cut answers.

Be it NC or neural-engineered devices, the range of functions these automated systems can perform is amazing; however, the fear that these machines will one day become autonomous is real and palpable.

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.”

Ray Kurzweil (Computer Scientist and futurist)

Being autonomous is just one aspect, but can AI really become sentient? Of late, claims that Google’s AI chatbot LaMDA (Language Model for Dialogue Applications) is sentient have generated a lot of heated debate. Though the company has dismissed the claims, and we may rest assured today, we do not know what lies ahead of us.

Figure 2: Will the path to Artificial General Intelligence stand the test of time?

In the words of the famous astrophysicist Stephen Hawking,

“The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Literary, celluloid, and digital portrayals of humanoids have long captured the human psyche. Isaac Asimov’s short story ‘The Last Question’ draws a close analogy with the above.

Is this a mere figment of imagination, or are we already headed into this kind of future? Do we want to live in a world where machines are as intelligent as human beings?

References

Liang, Feng M, Mario L, et al. 2022 roadmap on neuromorphic computing and engineering. Neuromorphic Computing and Engineering (IOP Publishing); 2022.

The Age of the Connectome: Q&A with Sebastian Seung. Scientific American Blogs. Accessed 26 September 2022. https://blogs.scientificamerican.com/cocktail-party-physics/the-age-of-connectome-qa-with-sebastian-seung/.

The Ethical Challenges of Neural Technology. Viterbi Conversations in Ethics. Accessed 26 September 2022. https://vce.usc.edu/volume-5-issue-2/the-ethical-challenges-of-neural-technology/.
