Review

Novel Directions for Neuromorphic Machine Intelligence Guided by Functional Connectivity: A Review

Institute for Digital Technologies, Loughborough University London, 3 Lesney Avenue, London E20 3BS, UK
*
Author to whom correspondence should be addressed.
Machines 2024, 12(8), 574; https://doi.org/10.3390/machines12080574
Submission received: 25 July 2024 / Revised: 11 August 2024 / Accepted: 16 August 2024 / Published: 20 August 2024
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Machines)

Abstract

As we move into the next stages of the technological revolution, artificial intelligence (AI) that is explainable and sustainable is becoming a key goal for researchers across multiple domains. Leveraging the concept of functional connectivity (FC) in the human brain, this paper provides novel research directions for neuromorphic machine intelligence (NMI) systems that are energy-efficient and human-compatible. It serves as an accessible introduction for multidisciplinary researchers to a range of concepts inspired by neuroscience and analogous machine learning research. These include possibilities to facilitate network integration and segregation in artificial architectures, and a novel learning representation framework inspired by two FC networks utilised in human learning. We also explore the functional connectivity underlying task prioritisation in humans and propose a framework for neuromorphic machines to improve their task-prioritisation and decision-making capabilities. Finally, we provide directions for key application domains such as autonomous driverless vehicles, swarm intelligence, and human augmentation, to name a few. Guided by how regional brain networks interact to facilitate cognition and behaviour, such as those discussed in this review, we move toward a blueprint for creating NMI that mirrors these processes.

1. Introduction

Neuromorphic Artificial Intelligence (NAI) encompasses the idea that we can build better AI systems by using neural network architectures and learning paradigms inspired by biology [1]. Functional Connectivity (FC) is the observation of how regional brain networks interact with one another to facilitate cognition and behaviour [2]. FC helps us understand how the stimuli we face are processed in the brain to drive a certain output. If we build on what is already known, the final product becomes clearer and gains a degree of explainability. We believe that, by paying attention to task-dependent FC models of the brain and supporting knowledge accessibility across different domains, we can develop NAI for machines efficiently and sustainably for a synergistic future for humans and AI.
As we move into the next stage of the technological revolution, there is a common goal that we aim to achieve: explainable, sustainable, and efficient intelligent machines. This evolution of machine learning (ML) poses fundamental changes to the society we know today, bringing about new ethical and environmental considerations [3]. Furthermore, we are trying to move toward AI that is human-like in nature for the purpose of collaboration, such as through the use of Large Language Models (LLMs), e.g., ChatGPT [4]. The use of LLMs alone, across various domains including software development [5], has demonstrated that we are moving toward a time where AI is as much a necessity as human intelligence. Human–Robot Interaction (HRI) is another example where a multitude of domains such as psychology and AI come together, as the demand for intelligent collaborative robots in the workplace grows [6]. Application domains include physical and mental healthcare, warehouse operations, and the military, to name a few [7,8]. However, the key challenge across all these domains is how to maximise the compatibility of explainable AI systems that collaborate with humans [9,10], whilst minimising negative environmental consequences.
To achieve the above, one particular system comes to mind: the human brain. The brain is an energy-efficient and automatic processing system. As it is a shared feature among humans, it enables us to form social relationships well. Nature has already given us the perfect biological machine. Replicating this requires comprehensive insight into how the brain defines humans, and synergy across key domains such as neuroscience, cognitive neuropsychology, and computer science [9,11]. Taking inspiration from nature has advanced many areas of AI, including reinforcement learning (RL), which stems from Pavlov's classical conditioning experiments [12]. Furthermore, advances in reservoir computing have seen the use of brain organoids (small versions of the brain made from stem cells) that have enabled unsupervised learning through the manipulation of FC [13], demonstrating the importance of network organisation and dynamics [14]. Biological networks encode human behaviour [15] and, by modelling AI systems inspired by FC in the brain, we can ensure that steps are taken toward neuromorphic machine intelligence (NMI): machines with the capacity to function as humans do.
We all somewhat understand what it means to be human, the ability to sympathise and relate to one another is a key aspect of human–human collaboration and is also a key need for the species [16]. The human need for affiliation [16] is the desire for social interaction and relationships with other people. It plays a key role in facilitating human–human collaboration, as satisfying this need activates reward-associated pathways in the brain [17], helping to maintain positive perceptions and efficient collaboration. Humanness is also an important feature for improving HRI. The way that humans, who do not regularly work with AI, perceive and interact with agents has a significant impact on how effective they will be when working as a team with agents [18]. Humans are more inclined to put blame on their AI team members than their human teammates, fostering negative perceptions that hinder teamwork. Intelligent machines should be able to integrate into human-centred environments for efficient collaboration [19] and be adaptive to dangerous situations, e.g., those caused by human error in the manufacturing sector [20], in order to preserve human safety.
AI is considered a black box because it is difficult to say exactly how models make decisions. Human society is built on trust [21] and, as we look to integrate machines into it, we consider the following question: why should we trust AI and the predictions it makes? Being able to trust others is a key pillar of society: we trust that drivers will stop at pedestrian crossings, that our leaders are making the best decisions, and that our surgeons will effectively carry out medical procedures. Without trust, society would crumble, and being able to interpret and explain AI, from safe autonomous vehicles to explainable disease prediction systems [22,23], is key to paving the way for NMI development.
Similarly, maintaining a sustainable approach to NMI is important to consider. The AI sector requires a lot of energy to sustain research and development of more complex and capable models. Model type and training demonstrate differences in energy consumption [24], with the training of one particular model, Meena, having the same carbon footprint as driving 242,231 miles in an average vehicle [25]. Facilitating NMI will, no doubt, require an immense amount of energy, and, with sustainability and preservation becoming a global urgency [26], it is imperative that we optimise the energy consumption of future AI developments.
While neuromorphic computing looks to aspects of FC on a cellular level, such as synaptic plasticity for memristor development [27], these works remain heavily technical. Likewise, breakthroughs in neuroscience regarding FC are daunting to read for computer scientists, who must first familiarise themselves with biological terminology. Achieving NMI that will have a place amongst humans requires research to be accessible across the domains of both neuroscience (cognitive and behavioural) and computer science.
In this paper, we present FC as a backbone for neuromorphic machine intelligence (NMI). In neuroscience, the brain is discussed in terms of six general regions that can be seen in Table 1, with five of these shown in Figure 1. We discuss research in various areas of neuroscience and link this to analogous brain-inspired machine learning concepts to highlight a direction for NMI that FC can inspire. The layout for this review comprises an introduction to areas of neuroscientific research on aspects of human cognition and the underlying FC driving those capabilities. We follow this with reviews of research in analogous areas of artificial intelligence to highlight the similarities between the fields. We then highlight possibilities for future NMI in the form of distinct hypotheses in each section. An outline of these and the sections in which we discuss attributes of human intelligence can be seen in Figure 2.
Key Contributions
  • We provide an accessible resource for researchers in the related fields of neuroscience and computer science that explains FC in terms of network organisation, multi-tasking, and learning methods.
  • This is, to our knowledge, the first review that provides a comprehensive background in key areas of neuroscience in order to chart novel directions for neuromorphic machine intelligence.
  • We discuss key neuroimaging techniques, giving computer scientists an insight into the types of data available for consideration in the development of neuromorphic AI.
  • The review also refers to key application domains, such as human augmentation, that will benefit from FC-inspired neural network design and artificial learning.

Organisation of the Review

The rest of the review is organised as follows:
  • Section 2: We begin by providing a background into key concepts in neuroscience and AI that are referred to throughout this review. We provide descriptive definitions for FC, executive function, and neuromorphic AI. In addition, we discuss popular neuroimaging techniques that are being used in both domains and how these have been used in combination to understand FC.
  • Section 3: We then look at how FC can inspire artificial network design and the potential of multi-network integration for computing systems. Modelling AI systems on known FC networks may be a step toward making AI explainable and more human-like in decision-making.
  • Section 4: We then discuss learning in humans and the role of FC in memory retrieval. After also discussing various learning paradigms in AI, we suggest research directions for improving the efficiency and memory capabilities of neuromorphic artificial learning.
  • Section 5: The next section covers the dilemmas faced by a robot trying to cook pasta. We discuss the human ability to multi-task and allocate attention in response to an unexpected stimulus and how FC could aid in areas of research relating to temporal abstraction, transfer learning, and swarm intelligence.
  • Section 6: Potential applications for NMI: we discuss areas around non-invasive neurotechnology, human augmentation, and collaborative robots across different domains.

2. Background

2.1. Explanations of Key Concepts in Neuroscience for This Review

This section contains descriptions of some key concepts needed by any multidisciplinary researcher looking to use FC for AI development. We have also included the functional networks discussed in this review, along with their abbreviations and anatomical names. The brain’s cognitive functions are underpinned by several key networks, each with distinct roles [28]. The frontoparietal network is critical for adaptive control and flexible thinking, enabling us to switch between tasks and manage complex behaviours. The dorsal attention network is responsible for directing our focus towards specific external stimuli, helping us maintain attention on relevant tasks. The default mode network (DMN) comes into play during rest, self-referential thinking, and mind-wandering, allowing for introspection and memory retrieval. The salience network is essential for detecting and filtering critical information, guiding the brain’s switch between the DMN and attention networks based on the demands of the situation. These networks are distinct, yet integrated, and facilitate various aspects of cognitive control through their interactions. Based on ref. [28], an overview of these can be seen in Table 2, with a diagram showing network regions on a superior view of the human brain in Figure 3.
Table 2. Table showing the anatomical and cognitive names of brain networks, the regions they include, and the sections in the review where they are discussed. This table was developed using information from a universal taxonomy of functional networks [29].
Frontoparietal Network (FPN)
  • Dorsolateral and rostral lateral prefrontal cortex
  • Inferior parietal lobule
  • Middle cingulate gyrus
  • Inferior temporal lobe
  • Thalamus
  • Striatum
Role: Executive functions such as working memory, goal-oriented cognition, inhibition, and task-switching.
Sections: Section 3, Section 4 and Section 5

Dorsal Attention Network (DAN)
  • Superior parietal lobule
  • Ventrolateral prefrontal cortex
  • Intraparietal sulcus
  • Ventral premotor cortex
Role: Typically involved in visuo-spatial attention and attention allocation.
Section: Section 5

Salience Network (SN)
  • Anterior insula
  • Anterior midcingulate cortex
  • Inferior parietal cortex
  • Sub-cortical regions such as the amygdala and hypothalamus
Role: Involved in determining whether information is important or not. Furthermore, it plays a role in individualistic goal-directed behaviour.
Section: Section 5

Default Mode Network (DMN)
  • Medial prefrontal cortex
  • Inferior frontal gyrus
  • Middle temporal gyrus
  • Parahippocampal cortex
Role: Active during cognitive functions that are focused on internal conditions rather than responses to the external environment. It plays a role in semantic cognition and language comprehension, as well as conscious thought and decision-making.
Sections: Section 3, Section 4 and Section 5
Figure 3. Diagram showing the brain regions, from a superior view, associated with each of the networks discussed in this review. FPN: frontoparietal network, SN: salience network, DAN: dorsal attention network, and DMN: default mode network. Made using the human Brainnetome Atlas [30]. Each colour in the diagram distinguishes the different brain regions that become activated as part of the defined networks.

2.1.1. Defining Functional Connectivity

FC encompasses the notion that cortical regions that are consistently activated together form a network, highlighting potential relationships between spatially distinct regions of the brain [2]. Hebb’s postulate of learning encapsulates these network activations at a synaptic level: impulses that activate pre- and post-synaptic neurons simultaneously strengthen those network connections [31]. Not to be confused with anatomical connectivity (the structure of the brain’s physical wiring, usually observed in vivo), FC is observed on a broader scale. It is also important to note that FC does not depend on structural connectivity; despite the lack of clear anatomical connections between cortical regions, there may still be strong FC exhibited when observing brain activation [32]. An example is the activation of cortical areas other than the visual cortex (VC) during the processing of visual stimuli [33]. The brain is commonly split into six distinct regions: the frontal, temporal, parietal, and occipital lobes, the cerebellum, and the brain stem. Together, these contain the primary cortices for visual, auditory, and somatosensory information processing, as well as regions for decision-making and movement [34]. However, there are also sub-cortical regions that play a role in computation in the brain, enabling the interaction of spatially distinct regions [2].

2.1.2. Executive Function and Cognitive Control

Executive functions (EFs) refer to the key network activity involved in facilitating cognitive control (CC); elements of CC include working memory maintenance, response selection/inhibition, and task-switching [35]. These can be observed in task-related neuroimaging studies using electroencephalography (EEG) or functional magnetic resonance imaging (fMRI). Cognitive control in adults has been observed using deep learning techniques to identify correlations between skill acquisition in adults and underlying brain activity [36]. This particular study demonstrated the link between the most significant brain regions and underlying EFs. The electrodes with the highest influence over the task performance metric covered overlapping frontal, central, and parietal regions [36]: all known key regions involved in EFs for higher-order cognitive processes. Various works in AI try to create EFs for machines, such as decision-making for learning agents or multi-tasking; we therefore include this background here.

2.2. Key Concepts in Computer Science for this Review

2.2.1. Neuromorphic Artificial Intelligence and Synaptic Models

Neuromorphic AI refers to artificial intelligence systems designed to mimic the neural architecture and functioning of the human brain [1]. This approach aims to create more efficient and adaptive computational models by leveraging principles of synaptic plasticity and neural network topology [1]. Neuromorphic AI systems use specialised hardware, such as spiking neural networks and memristors [37], to replicate the way biological neurons and synapses process and store information. Neuromorphic hardware plays an important role in the development of neuromorphic AI, an example of this being the use of solitonic units for artificial circuits that can enable models to learn as the brain does [38]. These circuits demonstrate a type of neural plasticity enabled by light-propagated waves that can self-modify depending on input patterns [38], similar to the way the neural plasticity of the brain enables the formation of synaptic connections that store information and enable learning [31]. Another example is the use of nanodevices composed of a semiconductor phonon waveguide and patterned ferromagnetic layer for reservoir computing for the training of an artificial network [39]. There also exist models that provide a mathematical basis for the neural underpinnings of synaptic activity. The Wilson–Cowan model [40] simulates the dynamics of prefrontal cortex activity by modelling the interactions between excitatory and inhibitory neuron populations, providing a framework that can be used directly in neuromorphic network design and in understanding and replicating cognitive processes. Models such as this are a step toward NMI that we can interpret: as they improve our understanding of biological neural networks, they bring us a step closer to understanding artificial ones.
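To make the Wilson–Cowan dynamics concrete, the sketch below integrates the classic coupled excitatory/inhibitory rate equations with a simple Euler scheme. It is a minimal illustration: the coupling constants, sigmoid parameters, and external drives P and Q are standard textbook values, not parameters taken from any of the works cited above.

```python
import numpy as np

def sigmoid(x, a, theta):
    # Logistic response function with gain a and threshold theta.
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan(T=200.0, dt=0.1, P=1.25, Q=0.0):
    """Euler integration of the Wilson-Cowan equations for one
    excitatory (E) and one inhibitory (I) population."""
    # Coupling strengths: E->E, I->E, E->I, I->I (illustrative values).
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
    tau_e, tau_i = 1.0, 1.0
    n = int(T / dt)
    E, I = np.zeros(n), np.zeros(n)
    for t in range(n - 1):
        dE = (-E[t] + sigmoid(c1 * E[t] - c2 * I[t] + P, a=1.3, theta=4.0)) / tau_e
        dI = (-I[t] + sigmoid(c3 * E[t] - c4 * I[t] + Q, a=2.0, theta=3.7)) / tau_i
        E[t + 1] = E[t] + dt * dE
        I[t + 1] = I[t] + dt * dI
    return E, I

E, I = wilson_cowan()
print(f"final activity: E={E[-1]:.3f}, I={I[-1]:.3f}")
```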

2.2.2. Neuromorphic Directions for Artificial Learning

Taking inspiration from the abilities of the human brain, neuromorphic developments have begun to work toward creating systems that work in the same way. Spiking neural networks (SNNs) [41] are a type of artificial neural network that more closely mimics the actions of the human brain when compared to traditional neural networks. Unlike conventional neural networks that use continuous values for neuron activations, SNNs operate with discrete events known as spikes, inspired by action potentials seen in biological neurons. These spikes are generated when a neuron’s membrane potential crosses a certain threshold, leading to the transmission of a signal to another neuron. This event-driven processing allows SNNs to be highly efficient in terms of computational resources and energy consumption [42]. For energy-efficient learning, one such model is the hybrid synergic learning model (HB) proposed by Wu et al. This model follows a meta-learning paradigm that integrates imitation learning (IL) and RL to enable quick adaptation to various tasks. The model was implemented on neuromorphic hardware and combined top–down and bottom–up regulatory methods that were also inspired by the bidirectional control mechanisms between networks in the brain [42].
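The following is a minimal sketch of the event-driven behaviour described above: a single leaky integrate-and-fire neuron, the basic unit of many SNNs, which emits a discrete spike whenever its membrane potential crosses a threshold. The time constants, threshold, and input statistics are arbitrary illustrative choices rather than values from the cited works.

```python
import numpy as np

def lif_spike_train(current, dt=1.0, tau=20.0, v_rest=0.0,
                    v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    toward rest, integrates input current, and emits a discrete spike
    when it crosses threshold."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(current):
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:          # threshold crossing -> spike event
            spikes.append(t * dt)
            v = v_reset            # reset after the spike
    return spikes

rng = np.random.default_rng(0)
input_current = rng.uniform(0.0, 2.5, size=500)  # noisy input drive
print(f"{len(lif_spike_train(input_current))} spikes emitted")
```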

2.3. Neuroimaging Techniques Used in Observing FC

2.3.1. Functional Magnetic Resonance Imaging

Functional magnetic resonance imaging (fMRI) has been used extensively to map FC in the brain [43]. fMRI measures brain activity by detecting changes in blood oxygenation and flow that occur in response to neural activity, a phenomenon known as the blood-oxygen-level-dependent (BOLD) response [44]. When a brain region is more active, it consumes more oxygen, leading to an increase in blood flow to that area. This changes the ratio of oxygenated to deoxygenated haemoglobin, which can be detected because oxygenated haemoglobin is diamagnetic (weakly repelled by a magnetic field) while deoxygenated haemoglobin is paramagnetic (attracted to it). The fMRI machine uses a strong magnetic field and radiofrequency pulses to detect these changes and create detailed images of brain activity. The technique provides high spatial resolution, allowing researchers to pinpoint active brain regions [45].

2.3.2. Electroencephalography

Electroencephalography is a popular non-invasive neuroimaging technique that has been used extensively in research toward interpreting brain activity [46]. It involves the placement of electrodes (dry or gel-based) at specific locations on the scalp to acquire electrical signals that reflect brain activity [47]. In doing this, we are able to observe electrical activity caused by voltage fluctuations between the electrodes. These fluctuations are generalised as measures of activation of brain areas. EEG is a popular method in the fields of neuroscience, psychology, and medicine; it is cost-effective and does not involve any harmful radiation, making it safe to use on most participants [48]. There are some common disadvantages to using EEG data: raw signals are affected by noise and artefacts, making it difficult to interpret brain activity directly from the raw signal [48]. Electrodes are typically placed according to an international standard placement known as the 10–20 system, and the relationship between channels and brain regions can be seen in Figure 4.

2.3.3. Microelectrode Recording and Arrays

Another method of reading the brain is microelectrode recording (MER), which involves the insertion of electrodes into the brain; microelectrodes are much smaller than the standard electrodes discussed previously. There are a number of factors to consider when using microelectrodes: their conductivity, strength, biocompatibility, and chemical resistance, to name a few. A number of recording and stimulating techniques are possible with MER, as microelectrodes enable two-way communication: the collection of electrical signals generated by neurons and, also, the ability to stimulate a particular neuron [49]. As electrodes can be placed at multiple sites, MER enables the spatial recording of the activity of cells in various regions, such as how the activation of a cell in region A affects the behaviour of another cell in region B. This simultaneous recording is key for gaining good insights into FC across the brain. Recording can be done extracellularly or intracellularly [49]; extracellular recordings detect action potentials generated by neurons as well as local field potentials (LFPs), providing moderate spatial resolution but good temporal resolution in the microsecond range.
The implantation of microelectrode arrays (MEAs) can be done in vitro to study neuron cultures on the MEA itself; this is heavily used in testing within the pharmaceutical industry to understand the effects of various drugs on neural activity [50]. Scientists are also able to create their own neural networks in a controlled setting as simple models for experimentation [51]. The use of MEAs can also guide our understanding of synaptic plasticity, a key aspect of FC, by detecting spike rates and changes in potential, alongside enabling precise stimulation for biological network creation [52]. In vivo, MEAs are implanted into the brain and are particularly used to understand how sensory information is encoded and processed in the brain, notably aiding work in epilepsy research [53].

2.3.4. Combining EEG and fMRI to Predict Functional Brain States

fMRI is a costly process and places certain constraints on subjects during data collection, such as having to remain still. EEG is a non-invasive and cheap method of neuroimaging and, while it does not give us specific insight into active regions like fMRI, the two techniques can be combined. One study did so by identifying EEG signal correlates of FC [54]. EEG gives indications of dynamic neural activity on a millisecond scale and, thus, is a good long-term indicator of brain activity associated with certain brain states. Abreu et al. took EEG microstates (dominant patterns of activity in a given time period) and used them to predict changing FC, as would be observed in fMRI. The results yielded good classification accuracy when using these microstates to investigate the dynamics of brain networks.

3. Functional Network Design for Neuromorphic Machine Intelligence

This section explores the role of integration and segregation of brain networks in supporting cognitive control (CC) and EF. We discuss how flexible functional hubs adapt to task demands, focusing on the salience network (SN), default mode network (DMN), and central executive network (CEN), and the role of neural plasticity in learning.
In relation to artificial intelligence (AI), we examine how biological network functions can inspire multiple network integration in deep learning. Hypernetworks (HNs) improve task performance and adaptability in AI, and their application in reinforcement and continual learning addresses challenges like catastrophic forgetting. Additionally, we explore brain organoids, lab-grown mini-brains that model human brain development. Insights from brain organoids could advance NMI, creating systems that learn and generalise more effectively in dynamic environments.

3.1. Network Organisation and Information Storage in the Brain

3.1.1. Network Integration and Segregation for Executive Functions in Humans

Looking at network activity more closely highlights the role of prefrontal cortex (PFC) networks in CC and EF. The network architecture in the brain contributes significantly to flexible cognitive control; the existence of flexible functional hubs that regularly update FC according to task demand is a significant observable phenomenon in the brain [55]. Involved networks include the salience network (SN), default mode network (DMN), and frontoparietal network (FPN), to name a few. Specifically, the segregation and integration of these networks were hypothesised to contribute to CC and EF [43]. Adults and children were found to have two dominant FC states involving the SN, DMN, and FPN: a segregated state and an integrated state. In children, the segregated states of the networks were less common than in adults, and children demonstrated weaker connectivity between networks [56]. This difference between adult and child FC implies a role for plasticity in childhood learning. Plasticity is important not only during structural development but also in network connectivity. This could be the key to why children have the ability to learn new things faster and with increased task variance (such as learning two languages at once) [57]. In Ryali’s study, global network integration could be observed during working memory tasks and served as an indicator of improved task performance.

3.1.2. Neural Plasticity in the Brain for Energy-Efficient Processing

In neuroscience, two primary types of synaptic plasticity are recognised: Hebbian and homeostatic plasticity [58]. Hebbian plasticity involves the time-dependent strengthening or weakening of synaptic connections. When a presynaptic neuron activates just before a postsynaptic neuron, the connection between them strengthens. Conversely, if the postsynaptic neuron activates before the presynaptic neuron, the connection weakens. Summarising Hebb’s theory, the repeated activation of a postsynaptic neuron by a presynaptic neuron enhances their synaptic connection. Neuroimaging studies have shown Hebbian principles in action through neuronal replay, where neural activation patterns are re-expressed during sleep or rest [59]. This phenomenon has been observed in hippocampal place cells in rodents, which are active during both navigation and rest, suggesting a mechanism for circuit strengthening and memory consolidation during sleep. Homeostatic plasticity, in contrast, regulates overall neuronal excitability to maintain optimal function and prevent cellular overuse. This form of plasticity operates on a global scale: synapses that are highly active are downscaled, while those that are less active are upscaled, helping to make the brain energy-efficient during processing.
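A pair-based spike-timing-dependent plasticity (STDP) rule is the standard computational abstraction of the Hebbian mechanism described above: pre-before-post spiking strengthens a synapse, post-before-pre weakens it. The sketch below is illustrative only; the amplitude and time-constant values are conventional placeholders rather than biologically fitted parameters.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: pre-before-post (dt > 0) potentiates the
    synapse; post-before-pre (dt < 0) depresses it, with an
    exponentially decaying dependence on the spike-time gap."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)    # Hebbian strengthening
    else:
        w -= a_minus * np.exp(dt / tau)    # weakening
    return float(np.clip(w, w_min, w_max))  # bounds act like homeostasis

w = 0.5
print(stdp_update(w, t_pre=10.0, t_post=15.0))  # pre leads post: w increases
print(stdp_update(w, t_pre=15.0, t_post=10.0))  # post leads pre: w decreases
```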

3.1.3. Synaptic Information Storage Capacity Measured with Information Theory

Learning and memory are driven by synaptic activity in the brain, with studies in neuroscience showing that information can be stored in synapses [60]. Shannon’s information theory [61] offers a way of quantifying the storage capacity of synapses from measurements of synaptic strength. This approach calculates the number of bits of information stored, and has revealed distribution gaps in synaptic strength, demonstrating variability in synaptic plasticity across different regions [60]. In Samavat’s study, synaptic strength was found to correlate with dendritic spine head volume (SHV). By measuring the differences in SHV of dendrite and axon pairs from a single neuron, they were able to estimate synaptic strength. This work helps us to quantitatively understand how information is stored in biological neural networks and gives us new considerations when designing neuromorphic systems.
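As a simplified illustration of this idea, the sketch below bins synthetic spine head volumes into a fixed number of distinguishable states and computes the Shannon entropy of the resulting distribution. The cited work uses a more careful procedure for identifying distinguishable synaptic states; the lognormal data and the choice of 24 states here are assumptions for demonstration only.

```python
import numpy as np

def synaptic_information_bits(spine_head_volumes, n_states):
    """Estimate bits stored per synapse by binning spine head volumes
    (a proxy for synaptic strength) into distinguishable states and
    computing the Shannon entropy of the resulting distribution."""
    counts, _ = np.histogram(spine_head_volumes, bins=n_states)
    p = counts / counts.sum()
    p = p[p > 0]                      # ignore empty states
    return -np.sum(p * np.log2(p))    # H = -sum p log2 p

rng = np.random.default_rng(1)
volumes = rng.lognormal(mean=0.0, sigma=0.5, size=1000)  # synthetic SHVs
print(f"~{synaptic_information_bits(volumes, n_states=24):.2f} bits per synapse")
```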

3.1.4. Networks for Skill Transfer in Humans

Another important human ability is the transfer of knowledge from one domain to another. For a robot, left and right arms must be trained individually, a lengthy and energy-consuming process; for humans, this is not necessary. A typical example of skill transfer is intermanual transfer: training a limb on one side of your body results in learning the task on the contralateral (untrained) side [62]. One study, conducted in a virtual reality environment in which participants trained finger movements in the right hand, found that incongruent visual feedback (training one hand whilst visualising the same movements in the opposite hand) yielded the strongest intermanual transfer [63]. Using fMRI, four regions of interest (ROIs) were examined during training: the right superior parietal lobe (R-SPL), the left superior parietal lobe (L-SPL), and the bilateral occipito-temporal visual regions (R-V and L-V). During incongruent training, activity in the left and right SPL was observed. Furthermore, Ossmy’s study identified a transfer-related network linking M1, the visual regions, and the SPL on the left side: subjects with stronger FC between these areas exhibited better performance in the left hand. The same was found for incongruent training on the right side [63]. The ROIs in this study are typically associated with the sensory–motor network for controlling our body parts and the SN [64].

3.1.5. Hypergraphs for Modelling Brain Connectivity from fMRI

Hypergraphs are models used to represent complex relationships within a network [65]. Ordinary graphs, such as those processed by Graph Neural Networks (GNNs), are composed of nodes and edges, with an edge joining only two nodes [65]; hyperedges, on the other hand, can connect multiple nodes at once to capture interactions in complex networks such as brain connectivity [66]. This makes hypergraphs well suited to constructing FC networks in the brain [67]. One such study presented a hypergraph learning method that integrates data from various fMRI paradigms to create a functional connectivity network (FCN), presenting this as an optimised method of estimating FCNs for neuroscientific research [67]. The use of hypergraphs to predict FC in neuroscience opens up the possibility of reusing these models for NMI.
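The sketch below shows the basic data structure involved: a hypergraph over brain regions encoded as a node-by-hyperedge incidence matrix, where a single hyperedge joins several regions at once. The region groupings are loose illustrative stand-ins for functional networks, not anatomical claims, and the construction is far simpler than the learning method of the cited study.

```python
import numpy as np

# Regions (nodes) and functional networks (hyperedges); each hyperedge
# joins several nodes at once, unlike an ordinary graph edge.
regions = ["dlPFC", "IPL", "insula", "ACC", "mPFC", "PCC"]
hyperedges = {
    "FPN-like": ["dlPFC", "IPL"],            # illustrative groupings,
    "SN-like":  ["insula", "ACC", "IPL"],    # not an anatomical claim
    "DMN-like": ["mPFC", "PCC"],
}

# Incidence matrix H: H[i, j] = 1 if region i belongs to hyperedge j.
H = np.zeros((len(regions), len(hyperedges)))
for j, members in enumerate(hyperedges.values()):
    for r in members:
        H[regions.index(r), j] = 1.0

# Node degrees count network memberships; shared membership (e.g., IPL)
# captures multi-way interactions a pairwise graph would flatten.
print(dict(zip(regions, H.sum(axis=1))))
```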

3.1.6. Subsection Summary

Understanding network connectivity is important, as the order of interaction of brain regions determines how information is encoded and processed. This means that the organisation of networks in the brain and their ability to integrate and segregate in different ways could influence the way that humans interpret information and the subsequent decisions we make. Furthermore, neural plasticity at a synaptic level is a key contributor to human learning ability and memory storage capacity.
Why is this important for AI? While data processing pipelines and novel model architectures are introduced, there is little work on integrating multiple networks and enabling these networks to interact during information processing. The following section covers some examples where multiple artificial networks have been used in the training of target networks. Additionally, humans can store information at a synaptic level and insight into this could guide us in finding neuromorphic solutions for obstacles in continual learning like catastrophic forgetting. To complete this section, we also discuss the potential areas in AI that could benefit from the above.

3.2. Perceiving FC from a Deep Learning Perspective

There are various examples where NN architectures have used a combination of multiple networks to train a target network [68,69,70]. The benefit of doing this is that NNs can be trained on different data distributions, improving training times and model performance in different task domains [68].
This section of the review looks at the potential for hypernetworks in ML models. We discuss the challenges of catastrophic forgetting in NNs and how hypernetworks could be used to improve continual learning methods and the generalisability of models.

3.2.1. Hypernetworks as Artificial Functional Networks

Hypernetworks (HNs) are smaller networks that generate weights for a main target network. While the main network maps raw input to a target, the hypernetwork exclusively takes in information about weight structure and generates weights for each layer [68]. Research has shown that integrating HNs into NNs achieves much better results in tasks such as image recognition when compared to vanilla methods [68]. They have been used alongside long short-term memory models, as in HyperLSTM [68], for image, text, and language tasks. Models using HNs have demonstrated competitive performance, even in cases where just a single HN is used.
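A minimal PyTorch sketch of the idea follows: a small hypernetwork maps a learned embedding to the full weight matrix and bias of a target linear layer, so the layer’s parameters are generated rather than stored directly. The layer sizes and embedding dimension are arbitrary choices for illustration, not values from the cited works.

```python
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    """A tiny hypernetwork: an MLP maps a layer-embedding vector to the
    weights and bias of a target linear layer, instead of learning
    those weights directly."""
    def __init__(self, in_dim, out_dim, emb_dim=8):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.embedding = nn.Parameter(torch.randn(emb_dim))
        self.hyper = nn.Sequential(
            nn.Linear(emb_dim, 64), nn.ReLU(),
            nn.Linear(64, in_dim * out_dim + out_dim),
        )

    def forward(self, x):
        params = self.hyper(self.embedding)          # generate parameters
        w = params[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = params[self.in_dim * self.out_dim:]
        return torch.nn.functional.linear(x, w, b)

layer = HyperLinear(in_dim=16, out_dim=4)
y = layer(torch.randn(2, 16))
print(y.shape)  # torch.Size([2, 4]); gradients flow into the hypernetwork
```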

3.2.2. Hypernetworks for Continual Learning

One element of the brain that is highly sought after in the field of deep learning is the ability to learn continually and adapt to changing input. In the case of neural networks, when models are exposed to new tasks, they forget what they have learned from previous tasks; this is referred to as catastrophic forgetting [70]. Humans, on the other hand, retain the ability to store memories by forming synaptic connections across brain networks [60]. This limitation of artificial networks means that reusing models in dynamic real-world scenarios is difficult. Continual learning [70] is a field of ML research that looks to make models more robust to new inputs, so that they can be used across various task domains. The end goal of continual learning is to make models generalisable and resource-efficient [70] by learning from different data distributions. Typically, for improved generalisability, neural networks are pre-trained on large datasets and then fine-tuned for task specificity [70]. In one study, a hypernetwork was used to fine-tune a pre-trained Vision Transformer (ViT) to reduce the effects of catastrophic forgetting [71]. This method showed better performance, avoiding forgetting when performing downstream continual learning tasks. The toy sketch below illustrates the forgetting problem itself.
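This is an assumption-laden illustration, not a reproduction of the cited study: a small network is trained sequentially on two conflicting linearly separable tasks with no replay or regularisation. Accuracy on the first task typically collapses after training on the second, which is exactly the failure that hypernetwork-based continual learning aims to avoid.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(w):
    # Linearly separable toy task defined by weight vector w.
    x = torch.randn(512, 2)
    y = (x @ w > 0).float().unsqueeze(1)
    return x, y

def accuracy(model, x, y):
    return ((model(x) > 0).float() == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=0.01)

task_a = make_task(torch.tensor([1.0, 1.0]))
task_b = make_task(torch.tensor([1.0, -1.0]))   # conflicting decision rule

for name, (x, y) in [("A", task_a), ("B", task_b)]:
    for _ in range(300):                  # sequential training with no
        opt.zero_grad()                   # replay of the earlier task
        loss_fn(model(x), y).backward()
        opt.step()
    print(f"after task {name}: acc A={accuracy(model, *task_a):.2f}, "
          f"acc B={accuracy(model, *task_b):.2f}")
```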

3.2.3. Transfer Learning

Another problem in ML is the limited availability of accurately labelled data for supervised learning methods; even methods such as semi-supervised learning, designed to ease this constraint, still depend on data availability. Transfer learning (TL) is a method for sharing knowledge across different domains in order to reduce the amount of data needed to train ML models. Inspired by the psychological model of the generalisation of experience, it is believed that humans generalise experiences, and this is what enables the use of skills learned from one task, such as playing the violin, to master another musical instrument [72]. In the same way, TL utilises knowledge from a source domain to minimise the data requirements and learning resources required to perform in a related target domain [73]. Multiple techniques are used to facilitate this, such as feature alignment, model control strategies (where the source model transfers knowledge to the target model during training), and parameter sharing from the source domain to the target [73]. Utilising HNs could aid in the context of transfer learning, making the training process more sustainable, for example, by having multiple highly trained networks available for parameter sharing with a single target network so that the target network becomes reusable across different domains.
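A common minimal instance of TL via parameter sharing is sketched below: a pretrained backbone is frozen and only a small task-specific head is trained on the target domain. Here the backbone weights are random stand-ins for a genuinely pretrained model, and all sizes and data are illustrative.

```python
import torch
import torch.nn as nn

# A "source" backbone pretrained elsewhere (weights here are random
# stand-ins for illustration).
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU())

# Freeze the source parameters: knowledge is transferred, not retrained.
for p in backbone.parameters():
    p.requires_grad = False

# Only the small task-specific head is trained on the target domain,
# cutting data and energy requirements.
head = nn.Linear(64, 3)
model = nn.Sequential(backbone, head)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

x, y = torch.randn(8, 32), torch.randint(0, 3, (8,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
opt.step()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # head only
```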

3.2.4. Organising Organic Neural Networks for Neuromorphic Machine Intelligence

A novel mechanism known as Brainoware is being used for reservoir computing to advance unsupervised learning methods [13]. Brain organoids are miniature “brains” made from pluripotent human stem cells (cells that can turn into any type of cell), and they have been used as part of an interface to enable reservoir computing [13]. The brain organoids were used within a reservoir computing framework comprising an input layer, a reservoir layer, and an output layer. Data inputs included audio clips and time series data that were converted into electrical stimulation for the organic reservoir layer. The organoids mapped these electrical pulses to a computational space, which was then fed into a decoding function for various tasks, such as classification. The power consumption of this hardware was low, and manipulating the FC of the organoids enabled task completion using unsupervised learning techniques [13]. Works such as this demonstrate that energy-efficient computing is possible and, while the maintenance of organic hardware is currently a significant limitation, it may provide the key to sustainable NMI.
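Silicon reservoir computing follows the same template that Brainoware instantiates organically: a fixed, high-dimensional dynamical system projects inputs into a rich state space, and only a cheap linear readout is trained. The echo state network sketch below uses a random recurrent matrix as the reservoir; in ref. [13], the organoid plays that role. All sizes and the sine-prediction task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed random reservoir: only the linear readout is trained, which is
# what makes reservoir computing cheap to fit.
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    states, x = [], np.zeros(n_res)
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.linspace(0, 8 * np.pi, 400)
u, target = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
W_out = np.linalg.lstsq(X, target, rcond=None)[0]   # train readout only
print(f"train MSE: {np.mean((X @ W_out - target) ** 2):.2e}")
```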

3.3. Research-Inspired Directions for Neuromorphic Machine Intelligence

This section demonstrates how machine learning concepts correlate with network implementation in the biological brain. Key points include brain networks and neural plasticity, hypernetworks in AI, and the use of brain organoids for neuromorphic machine intelligence.
In consideration of these works, we propose the following research avenues that can also be seen in Figure 5:
  • Given the importance of the human DMN and its integration with other networks, a neuromorphic default mode network using a hypergraph structure could create a base for artificial architectures to be built upon.
  • The use of brain organoid hardware in this hypergraph DMN may provide a sustainable path for energy-efficient neuromorphic systems, making the feasibility of NMI in the future coincide with environmental preservation goals in society.

4. Learning: The Key to Human-like Intelligence

Here, we discuss the ability of humans to learn in different ways and how these methods can be used to progress neuroscientific research and the development of more human-like AI systems. This section covers the following topics:
  • Errorless and trial-and-error learning in humans: FC dynamics facilitating learning in humans.
  • Memory retrieval in humans and how FC contributes to this ability.
  • Artificial learning methods such as reinforcement learning and brain-inspired methods of incorporating learning and memory into AI agents.

4.1. Learning in Humans

4.1.1. Errorless Learning and Trial-and-Error Learning

Errorless learning (EL) is a behavioural technique used to shape the way an individual learns without resorting to trial and error. It focuses entirely on the use of positive feedback to motivate an individual to learn tasks whilst incrementally increasing task difficulty over time. It was first developed for use in cognitive rehabilitation for the treatment of amnesia patients [74]. Conversely, trial-and-error learning (TEL) in humans is done by attempting a task repeatedly, observing the outcome of each response, and readjusting responses accordingly. This iterative process enables learning the best response over time and also encourages creative thinking and self-evaluation.
To explore these behavioural methods on a neuroscientific scale, FC can be observed during both to see how network connectivity relates to each method. Using functional magnetic resonance imaging (fMRI), the activity of the DMN and FPN was monitored during a colour-name association task in a study conducted by Yamashita et al. For TEL, it was hypothesised that the strength of DMN connectivity would be related to the retrieval and processing of events during learning. Similarly, the activity of the FPN was observed for both TEL and EL. This study found that the interconnectivity of the DMN was significantly higher during TEL when compared with resting-state brain activity and EL. For the FPN, inter-network connectivity was also significantly higher for EL and TEL when compared to the resting state. Another key observation was the between-network connectivity of the DMN and FPN, potentially related to the benefits of learning through EL or TEL [75].

4.1.2. Human Memory Models

Over the years, there have been multiple neuropsychological models of memory, distinguishing long-term memory (LTM) from short-term memory (STM) [76]. Traditional memory models [76] distinguish between types of LTM, such as non-declarative (implicit) memory, which operates unconsciously and includes procedural memory, enabling us to perform tasks like riding a bike without conscious thought. Another aspect of non-declarative memory is priming, which influences our responses based on prior exposure. These memory types guide our automatic behaviours and responses without conscious awareness [76]. On the other hand, declarative memory, or explicit memory, involves the conscious recall of information and is divided into episodic and semantic memory. Episodic memory relates to personal experiences and specific events, while semantic memory encompasses general knowledge and facts. These types of memory enable us to access and articulate information, playing a crucial role in learning and decision-making [76]. How these models link to FC has also been explored, with different sub-cortical brain regions, such as the hippocampus and amygdala, and cortical regions, like the prefrontal cortex, interacting in dynamic ways to facilitate different memory storage processes [76]. However, more research is needed to understand the dynamics between the different memory stores highlighted in these models [76]. By leveraging FC observations using neuroimaging techniques like fMRI, we can move toward understanding specific information storage in humans and mirror these approaches for more efficient NMI.

4.1.3. FC for Memory Retrieval

Difficulty in information retrieval is something commonly experienced by us all. Known as forgetting, it occurs as the knowledge we acquire becomes difficult to retrieve once we stop practising certain skills or recollecting information. After it was learned that the learning methods yielding the best short-term performance also led to the worst retention, work was done in behavioural psychology to explore the relationship between long-term performance and learning [77]. Findings from such studies have highlighted that retrieval difficulty has a significant impact on long-term performance and are key to the desirable difficulties framework proposed by Bjork in 1994. For learning to be most efficient, the challenge of memory retrieval must be present: skills learned under such conditions exhibit better performance when tested later on [78]. How FC correlates with retrieval has been explored, once again, through observation of the DMN and FPN in individuals with varying memory performance. For this work, the DMN was considered as two subnetworks: the main DMN (referred to here as DMN) and the medial temporal lobe and retrosplenial cortex (MTL-RSC-DMN). Buuren et al. found that lower network connectivity of the MTL-RSC-DMN and stronger between-network connectivity of the DMN and FPN were associated with better memory performance. This suggests that MTL-RSC-DMN interactions may be essential for good retrieval. The regions of connectivity can be seen in Figure 6. Furthermore, increased activation of the DMN and FPN was observed during successful memory retrieval, highlighting the role of these networks in effective retrieval processes [79]. Intrinsic connectivity between these two networks in particular has been demonstrated for complex learning and the ability to transfer learned representations from one task to another [80].

4.1.4. Subsection Summary

Humans have the ability to learn using various methods. The distinction between these methods and the ability to utilise them synchronously could be what makes humans superior in learning and adapting to complex and dynamic situations much more effectively than current AI and robotics. Human learning is a key behavioural concept and behavioural psychology has paid much attention to the different ways in which we learn. Combined with neuroimaging, capturing FC behind learning mechanisms has given us an observable and reproducible insight into how we could go about giving these abilities to machines.

4.2. Current Brain-Inspired Machine Learning Concepts for Artificial Learning

The following section covers the relevant machine learning literature for efficient learning systems and how these can be considered alongside neuromorphic computing of the future.

4.2.1. Prospective Learning vs. Backpropagation for Artificial Learning

The backpropagation algorithm remains a fundamental aspect of multilayered neural network (NN) design, learning feature representations of data in order to update weights within NNs [81]. NNs were initially modelled on the human brain and were believed to function in a way similar to synaptic weight updates in the human brain [82,83]. It has since been found that biological learning works very differently from backpropagation and, in order to understand the brain, other methods should be explored to model learning in the biological brain [84]. Prospective configuration (PC) is a proposed method of updating synaptic weights during learning. PC involves configuring neurons to a new state that accurately predicts the observed outcome before modifying the weights to consolidate that state. This method anticipates potential negative side-effects of weight changes, such as catastrophic interference, a common issue in backpropagation where learning new associations disrupts previously learned memories [84]. This research proposes the concept of prospective learning for deep neural networks, suggesting that this method offers greater efficiency and plasticity when compared with backpropagation. The work also addresses the problem of forgetting when learning new information, as an underlying network is learned for one specific output, leading to a more stable and robust model. This research is a key example of how biological processes in the brain can inspire new methods in ML, and how we could create AI systems that have the capacity to learn as efficiently as a human.
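The sketch below conveys the general flavour of inference-before-learning in a predictive-coding style: hidden activities are first relaxed toward a configuration consistent with the desired output, and only then are weights updated to consolidate that state. It is a heavily simplified illustration under our own assumptions (a two-layer network, squared prediction errors), not the exact algorithm of ref. [84].

```python
import numpy as np

rng = np.random.default_rng(3)

def prospective_step(x_in, y_target, W1, W2, n_infer=50,
                     lr_x=0.1, lr_w=0.01):
    """One learning step in the spirit of prospective configuration:
    relax the hidden activity h toward a state consistent with the
    desired output, then update weights to consolidate that state."""
    h = np.tanh(W1 @ x_in)                      # initial forward pass
    for _ in range(n_infer):                    # inference phase
        e_out = y_target - W2 @ h               # output prediction error
        e_hid = h - np.tanh(W1 @ x_in)          # hidden prediction error
        h += lr_x * (W2.T @ e_out - e_hid)      # settle activities first
    # Only now are weights changed, toward the settled configuration.
    e_out = y_target - W2 @ h
    W2 += lr_w * np.outer(e_out, h)
    grad_hid = (h - np.tanh(W1 @ x_in)) * (1 - np.tanh(W1 @ x_in) ** 2)
    W1 += lr_w * np.outer(grad_hid, x_in)
    return W1, W2

W1, W2 = rng.normal(0, 0.5, (8, 4)), rng.normal(0, 0.5, (2, 8))
x, y = rng.normal(size=4), np.array([1.0, -1.0])
for _ in range(200):
    W1, W2 = prospective_step(x, y, W1, W2)
print(np.round(W2 @ np.tanh(W1 @ x), 3))  # output moves toward the target
```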

4.2.2. Artificial Learning Methods: Reinforcement Learning

Reinforcement learning [12] is a learning paradigm in which an agent learns which actions to take in a given environment state by earning rewards. Better actions elicit better rewards from the environment, enabling the agent to learn the best action to take in each state. The agent learns through trial and error and seeks to maximise its reward over time. While there are multiple methods of learning a suitable policy [12], for action-value-based methods (policy-based methods operate quite differently), the following equation is employed:
$$\pi(a \mid s) = \underset{a \in A}{\operatorname{argmax}} \left[ r(s, a) + \gamma \max_{a' \in A} Q(s', a') \right]$$
where $r(s, a)$ is the immediate reward for taking action $a$ in state $s$, $s'$ is the resulting next state, and $\gamma$ is the discount factor, which weights immediate rewards more heavily than future rewards.
In the case of multiple agents cooperating to accomplish a common goal, the policy $\pi(a_1, \ldots, a_n \mid s)$ is defined over the joint action, and the action-value function $Q(s, a_1, \ldots, a_n)$ refers to the value of the joint action across the agents at a given state $s$:
$$\pi(a_1, \ldots, a_n \mid s) = \underset{a_1, \ldots, a_n \in A}{\operatorname{argmax}} \left[ r(s, a_1, \ldots, a_n) + \gamma \max_{a_1', \ldots, a_n' \in A} Q(s', a_1', \ldots, a_n') \right]$$
Imitation learning (IL), on the other hand, removes the need for coding explicit reward functions and utilises a human expert to demonstrate actions to a machine. There are various methods of IL being considered for the behaviour of agents in complex environments [85]. Combining these two methods has been found to yield better results in performance and robustness [86].
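The tabular Q-learning sketch below instantiates the single-agent value update from the equation above on a hypothetical five-state chain environment (reward only at the rightmost state); the environment, learning rate, and exploration settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 5-state chain: move left/right, reward at the rightmost state.
n_states, n_actions, gamma, alpha, eps = 5, 2, 0.9, 0.1, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, r

for _ in range(2000):
    s = 0
    for _ in range(20):
        # epsilon-greedy version of pi(a|s) = argmax_a Q(s, a)
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        # TD update toward r(s, a) + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))  # learned greedy policy: move right everywhere
```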

4.3. Research-Inspired Directions for Neuromorphic Machine Intelligence

In this section, we explore the parallels and distinctions between learning mechanisms in humans and machines, highlighting the unique and shared aspects of each. We discuss errorless learning, trial-and-error learning, and FC behind memory retrieval in humans. These mechanisms allow humans to adapt to complex and dynamic environments. Additionally, we discuss the development of advanced models like spiking neural networks (SNNs), which aim to emulate the efficiency and functionality of the human brain.
By examining these different learning paradigms, we give the following directions for NMI that can be seen in Figure 7:
  • A neuromorphic machine intelligence architecture with a task-specific context and episodic memory storage could limit the effects of catastrophic forgetting in continual learning systems.
  • Facilitating inter-network relationships when designing neuromorphic models with multiple networks could give machines the ability to learn faster. This would lead to reduced model training times and less energy consumption for sustainable NMI.

5. The Pasta Problem: How FC Can Inspire Task-Prioritisation Systems

In this section of the review, we look at the literature surrounding FC for the attention allocation that enables multi-tasking in humans. Here, multi-tasking means the ability to juggle multiple tasks and prioritise them accordingly. To look at this intuitively, we take the example of the seemingly simple task of cooking. Humans can allocate attentional resources to multiple tasks in the kitchen. The two tasks involved in reaching the goal of making a pasta dish are 1. chopping up vegetables and 2. making sure that the pasta does not overboil. As humans, we know that, while we are paying attention to the knife and our hand movements chopping the vegetables, if we are alerted to signs that the pasta pot is about to overboil, we can stop our current actions and divert our attention to the new task of turning the stove off, as this takes precedence in averting a dangerous outcome. Humans do indeed possess the ability to multi-task, to a certain extent, in a way that current machines cannot yet match; a minimal sketch of such prioritisation follows.
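As a playful sketch of the pasta problem, the snippet below lets a max-priority queue stand in for the salience network: tasks carry urgency scores, an unexpected salient stimulus enters the queue, and attention simply follows the most urgent item. The task names and urgency values are invented for illustration.

```python
import heapq

# Min-heap on negated urgency acts as a max-priority queue over tasks.
tasks = [(-0.6, "chop vegetables"), (-0.3, "watch pasta pot")]
heapq.heapify(tasks)

def on_stimulus(event, urgency):
    """An unexpected salient stimulus enters the task set and can
    pre-empt whatever is currently being attended to."""
    heapq.heappush(tasks, (-urgency, event))

def current_focus():
    return tasks[0][1]  # attention goes to the most urgent task

print(current_focus())                          # chop vegetables
on_stimulus("turn stove off (pot overboiling)", 0.95)
print(current_focus())                          # attention switches to the stove
```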

5.1. The Role of Different FC Networks in Human Multi-Tasking

The main brain networks discussed in this section include the frontoparietal network (FPN), default mode network (DMN), salience network (SN), and dorsal attention network (DAN). Here, we discuss key networks that work together to achieve the human ability to pay attention, ignore, and manage multiple tasks to complete a goal.

5.1.1. Dorsal Attention Network: For Goal-Directed Attention to Environmental Stimuli

The DAN is a key brain network involved in the voluntary, goal-directed allocation of attention to relevant stimuli in the environment. It is primarily composed of regions in the frontal and parietal lobes, including the frontal eye fields (FEFs) and the intraparietal sulcus (IPS) [43]. This network plays a crucial role in processes enabling individuals to focus on specific tasks or objects by modulating sensory processing to prioritise important information. The DAN is active during tasks that require sustained attention and spatial awareness, ensuring that cognitive resources are directed towards the most relevant stimuli [87]. By orchestrating the selection and prioritisation of sensory information, the dorsal attention network is fundamental for effective interaction with and navigation through complex environments.

5.1.2. Salience Network: A Biological Switch for Identifying Relevant Stimuli

The SN is regarded as canonical and plays a role in almost all cognitive functions [28]; its relevance with regard to the triple network model of psychopathology has been particularly studied [43]. The brain regions associated with the SN are the anterior cingulate, ventral anterior insular cortices, and frontal lobe regions [88]. In collaboration with the DMN and FPN, the SN serves as a switch between self-consciousness and task-related attention to external salient stimuli (unexpected, stand-out stimuli) [89]. The activation of the SN in response to certain tasks has also led to the belief that the SN plays a role in identifying relevant stimuli [88] and allocating attention to the most relevant stimuli. As a whole, the SN is viewed as the advisor to executive control: directing attention to what is important and lessening the impact of irrelevant stimuli.

5.1.3. The Default Mode Network: The Introspective Mind at Rest

Conversely, the DMN is a crucial brain network that is most active when an individual is at rest and not focused on the external environment. Comprising regions such as the medial prefrontal cortex, posterior cingulate cortex, and angular gyrus, the DMN is involved in introspective and self-referential activities and recalling personal memories [90]. Interestingly, the DMN decreases its activity when attention is directed towards external tasks, highlighting its role in the brain’s default state of inward-focused thought. Understanding the DMN is therefore essential for exploring how the brain balances internal cognitive processing with responses to salient stimuli [88].

5.1.4. The Frontoparietal Network: For Task-Planning and Decision-Making

Finally, the FPN, also known as the central executive network (CEN), is a critical brain network involved in high-level cognitive functions, including working memory, decision-making, and problem-solving. This network encompasses regions in the lateral prefrontal cortex and the posterior parietal cortex, which work together to manage and manipulate the information necessary for goal-directed behaviour [88]. The FPN is particularly active during tasks that require conscious control, such as planning, reasoning, and adapting to new situations. The FPN’s dynamic connectivity allows for the flexible adjustment of cognitive strategies in response to changing demands, making it essential for adaptive and efficient mental functioning [43].

5.2. FC to Enable Task-Switching and Multi-Task Completion

Multi-tasking (MT) can be defined as handling two or more tasks simultaneously or as the ability to switch between tasks [91]; we will discuss both methods and refer to them as dual-tasking (DT) or task-switching (TS). Lam et al. [92] examined the corresponding FC during completion of the multi-attribute task battery (MATB), a method for assessing the MT ability of pilots in a cockpit; an example is a pilot’s ability to carry out multiple tasks such as monitoring fuel levels and controls while maintaining general awareness. The findings demonstrated correlations between the activation of key networks during MT. DMN and DAN connectivity was notably stronger during MT than during single tasks [92]; this may represent the link between holding internal goals (DMN activity) and paying attention to the external environment (DAN). Lam et al. also found that the combination of having to MT and increasing task difficulty leads to performance deficits; however, these deficits only emerge once a difficulty threshold is crossed. The authors also put forward the following model of brain states during the MATB task:
  • Attention allocation during multi-tasking.
  • Performance evaluation during task completion.
  • Mind wandering due to feeling overwhelmed by multiple stimuli.
  • Imagining the task components in order to facilitate the state of attention allocation.
Finally, it has been observed that DMN–DAN connectivity is critical for MT. Collectively, these networks may support introspective thought for self-evaluation [93] and the resulting allocation of attention (DAN) needed to complete multiple tasks at a time. Another finding was confirmation of the performance cost of multi-tasking, as participants showed lower accuracy during MT than when completing a single task [92]. For switching between tasks and mediating inter-network FC, the SN is a key network [94]. The network switching facilitated by the SN is thought to mediate between DMN activity and FPN activity, enabling transitions between internally directed thought and engagement with external tasks [43]. The SN thereby supports cognitive flexibility and serves as a fast-acting mechanism [43], enabling attention allocation based on the continual re-evaluation of internal thoughts and external stimuli for the completion of multiple tasks.
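To make this switching dynamic concrete, the toy sketch below (in Python) scores incoming stimuli for salience and routes control between an internally directed (DMN-like) mode and external task engagement (an FPN-like mode). The event names, scores, and threshold are illustrative assumptions of ours, not quantities taken from the cited studies.

```python
def allocate_attention(stimuli, threshold=0.7):
    """Toy sketch of SN-mediated switching: each stimulus is scored for
    salience, and control alternates between an internally directed
    (DMN-like) mode and external task engagement (FPN-like mode)."""
    for name, salience in stimuli:
        mode = "external (FPN-like)" if salience > threshold else "internal (DMN-like)"
        yield name, mode

# Hypothetical pre-scored events from the cooking example
events = [("mind_wandering", 0.20), ("pot_boiling_over", 0.95), ("fridge_hum", 0.10)]
for name, mode in allocate_attention(events):
    print(f"{name}: {mode}")
```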

Subsection Summary

This subsection discussed the interaction and functions of key brain networks involved in attention, multi-tasking, and task-switching, focusing on the dorsal attention network (DAN), salience network (SN), default mode network (DMN), and frontoparietal network (FPN). We discussed FC and its relation to task-switching and attention allocation based on external and internal stimuli.
The above findings have potential applications in ML, for example, in training single NNs to complete multiple tasks. The next section explores several methods in ML that could benefit from drawing inspiration from neuroscience. We look at what multi-task learning means in ML, how it has been used in RL, and how self-attention mechanisms have served as a salience network for artificial agents.

5.3. Areas within Machine Learning That Could Benefit from Attention and Task Prioritisation

5.3.1. Human-Inspired Multi-Task Learning

MT learning is a training paradigm in ML in which a model learns to complete multiple tasks by learning representations shared between the tasks [95]. The ability of neural networks (NNs) to learn from multiple inputs is also explored in parallel learning (PLL), a method inspired by the ability of biological networks to deal with multiple inputs and learn information in parallel [96]. PLL has been explored as a low-storage method of learning, allowing networks to learn multiple patterns simultaneously through the hierarchical or parallel learning of patterns in input data [96]. MT learning for machines is, in fact, thought to reflect the human ability to integrate knowledge across task domains, for example, learning basic motor skills such as balancing while standing upright in infancy and applying the same balancing technique to more complex tasks in adulthood [95]. The generalisability this enables in human learning is believed to be essential for improving related fields in ML such as transfer learning [73] and continual learning [97].
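As a concrete illustration of learning shared representations, the following minimal sketch (assuming PyTorch is available) implements hard parameter sharing, one common formulation of MT learning [95]: a single shared trunk feeds task-specific heads, and gradients from every task shape the shared features. All dimensions and the placeholder data are our own choices.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Hard parameter sharing: one shared trunk, one small head per task."""
    def __init__(self, in_dim=32, hidden=64, n_tasks=2, out_dim=4):
        super().__init__()
        # Shared trunk: representations reused across tasks
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One task-specific output head per task
        self.heads = nn.ModuleList(nn.Linear(hidden, out_dim) for _ in range(n_tasks))

    def forward(self, x, task_id):
        return self.heads[task_id](self.trunk(x))

model = MultiTaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One optimisation step over a placeholder batch from each task;
# gradients from both tasks accumulate in the shared trunk.
for task_id in range(2):
    x, y = torch.randn(16, 32), torch.randn(16, 4)
    loss_fn(model(x, task_id), y).backward()
opt.step()
opt.zero_grad()
```

The shared trunk is where cross-task generalisation can emerge, while the separate heads keep task-specific outputs apart, loosely mirroring the reuse of a learned skill across task domains described above.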

5.3.2. Multi-Task Reinforcement Learning for Generalisability

Reinforcement learning (RL) is a branch of ML in which an agent interacts with an environment and receives rewards; based on these rewards, the agent learns an optimal policy that is continuously updated through interactions with the changing environment [12]. Another application of MT approaches is agent training for multi-task scenarios using RL. This approach was applied by Kumar et al. through a hypernetwork-enhanced multi-task actor–critic (HEMAAC) method [69]. Two networks were implemented: a retrieval network and a decision network. Within the decision network, the actor extracted features from states to predict actions in environments developed for multi-tasking scenarios, while the critic estimated action values. The role of the hypernetwork (HN) was to improve adaptability; it is referred to as a cognitive centre for the agent [69]. After predicting actions, the agent interacted with the environment to observe optimal rewards, and this was used in conjunction with a BERT module to retrieve relevant “memories”. Here, the use of the HN improved the sampling efficiency of the agent, enabling it to adapt to different tasks. The architecture mimics the role of the human hippocampus, performing context retrieval with the hypernetwork and parameter adaptation with the BERT module [69]. HEMAAC was optimised to understand natural language instructions given during the training phase, and the results demonstrated superior agent performance in decision-making for multi-tasking settings [69].
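The sketch below is not the HEMAAC architecture itself, whose details are given in [69]; it is a generic, minimal illustration (assuming PyTorch) of the underlying hypernetwork idea: a small network generates the weights of a task-conditioned layer from a learned task embedding, so behaviour can adapt per task without storing a separate head for each.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperHead(nn.Module):
    """A hypernetwork generates the weights of a task-specific layer
    from a learned task embedding, instead of storing one head per task."""
    def __init__(self, n_tasks, emb_dim=8, in_dim=64, out_dim=4):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.task_emb = nn.Embedding(n_tasks, emb_dim)
        # Hypernetwork: task embedding -> flattened weight matrix and bias
        self.hyper = nn.Linear(emb_dim, out_dim * in_dim + out_dim)

    def forward(self, features, task_id):
        params = self.hyper(self.task_emb(task_id))
        w = params[: self.out_dim * self.in_dim].view(self.out_dim, self.in_dim)
        b = params[self.out_dim * self.in_dim :]
        return F.linear(features, w, b)   # task-conditioned linear layer

head = HyperHead(n_tasks=3)
feats = torch.randn(16, 64)               # features from some shared encoder
q_values = head(feats, torch.tensor(1))   # output conditioned on task 1
```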

5.3.3. Salience Interest for Temporal Abstraction

In RL, agents can commit to different options over various time scales; these are known as temporal abstractions. For an RL agent to tackle a complex problem, it needs to look ahead for optimal solutions and to consider multiple choices. This has been demonstrated by introducing a self-attention mechanism [98], which can be compared with the attention-related functions of the DMN in humans. Self-attention generates values for certain characteristics to determine how much “attention” should be paid; values close to 1 indicate that more attention is needed for a given characteristic [99]. However, these methods led to slow backpropagation and overfitting problems; to solve them, Zhu et al. proposed the salience interest option critic (SIOC) algorithm [100]. SIOC uses a filtering method that focuses on a set of pre-trained initial states, without using backpropagation, and was reported to significantly improve learning, robustness, and flexibility in discrete and continuous tasks. The use of particle filters in this approach can be likened to the generalisable nature of human learning, as particle filters use approximate inference to estimate a representation of a state [100]; this makes the method well suited to partially observable environments, where the true states of a system are unknown.
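For readers unfamiliar with the mechanism, the following minimal sketch computes scaled dot-product self-attention weights in the spirit of [99], with the query, key, and value projections omitted for brevity (an assumption of ours). Each row of the weight matrix is a softmax, so entries near 1 mark the elements receiving the most attention.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a set of feature vectors.
    Returns the attended values and the row-stochastic weight matrix."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                          # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax per row
    return weights @ X, weights

X = np.random.randn(5, 16)         # e.g., 5 candidate options/states
attended, w = self_attention(X)
print(w.round(2))                  # attention each element pays to the others
```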

5.3.4. The Need for Task-Prioritisation in Co-Operative Multi-Agent Swarms

For swarm intelligence, the failure of a single agent should not prevent the goal completion of the swarm. In human–human teams, we can account for the shortcomings of a team member: in a relay race, for example, the poor performance of a single runner does not necessarily mean a total loss for the team. This concept has been demonstrated in multi-agent RL, a subfield of ML in which multiple autonomous agents act in a single environment. Each agent learns to interact with the others as well as with the environment, and optimises its own performance by either cooperating or competing with the other agents. During collaboration, several things can go wrong: for cooperative swarms of agents, achieving the group objective can be hindered by agent malfunctions, demonstrating the need for agents that can learn independently [101]. Pina et al. [101] evaluate the problem of agent malfunction when all other agents in the swarm expect each other to cooperate effectively. Their results demonstrated that, even with the malfunction of a single agent, group performance decreased significantly, showing the need for agents that can adapt task execution when a team member fails. The authors then demonstrate an independent deep-Q learning (IDQL) technique [101], which showed promise as a training method. Studies such as this one demonstrate the need for task-prioritisation systems and methods of updating task objectives adaptively, not only for swarms, but also for our cooking robot: if something in the environment becomes dangerous unless dealt with immediately, the robot would need to re-allocate its attention to prioritise the dangerous task.
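To illustrate the independent-learning idea in its simplest form, the sketch below gives each agent its own tabular Q-function, so one agent's malfunction does not corrupt the others' updates. This is a generic illustration under placeholder dynamics and constants, not the IDQL method evaluated in [101].

```python
import numpy as np

N_AGENTS, N_STATES, N_ACTIONS = 3, 10, 4
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

# One Q-table per agent: each agent learns from its own experience only,
# so a teammate acting unexpectedly does not corrupt its update rule.
q_tables = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AGENTS)]

def act(agent, state):
    if rng.random() < EPS:                        # epsilon-greedy exploration
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_tables[agent][state]))

def update(agent, s, a, r, s_next):
    td_target = r + GAMMA * np.max(q_tables[agent][s_next])
    q_tables[agent][s, a] += ALPHA * (td_target - q_tables[agent][s, a])

# Toy interaction: agent 0 "malfunctions" (acts randomly, learns nothing),
# yet agents 1 and 2 keep improving from their own rewards.
for step in range(100):
    s = int(rng.integers(N_STATES))
    for agent in range(1, N_AGENTS):
        a = act(agent, s)
        r, s_next = float(a == s % N_ACTIONS), int(rng.integers(N_STATES))
        update(agent, s, a, r, s_next)
```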

5.4. Research-Inspired Directions for Neuromorphic Machine Intelligence

Here, we have discussed the FC behind the human ability to prioritise tasks to reach a goal. We have outlined multiple definitions of multi-tasking ability, such as dual-tasking and task-switching. Attention, and the ability to direct it through a salience network, plays a key role in the human ability to remain adaptive when working on goals that require continual self- and environmental evaluation to balance multiple tasks. Looking back to the cooking analogy from the beginning of this section, for a machine to cook a pasta dish (a goal that requires handling multiple tasks), it would need analogous mechanisms: salience-driven switching, attention allocation across sub-tasks, and the continual re-evaluation of internal goals against external stimuli.
Considering the above, we propose the following directions for NMI, which can be seen in Figure 8:
  • Having an external attention mechanism to store task-specific context and a self-attention mechanism storing episodic memory, inspired by FPN and DMN, could give agents the ability to prioritise as intuitively as humans do whilst preserving task-related memory.
  • For neuromorphic machines, a dedicated decision network could fit into this architecture, for the final decision-making ability based on the evaluation of any unexpected situations in the environment made by the salience switch.
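The following is a deliberately simple, hypothetical sketch of how these two directions could fit together; every class name, field, and threshold here is our own illustrative assumption rather than a tested architecture. An episodic memory records task–stimulus pairs, task-specific context is held separately, and a salience check decides whether the decision step should pre-empt the current task.

```python
from collections import deque

class NMIAgent:
    """Hypothetical sketch of the proposed directions: task-specific context
    (FPN-like), episodic memory (DMN-like), a salience check (SN-like) and
    a final decision step. Stimuli are assumed to arrive pre-scored."""
    def __init__(self, salience_threshold=0.8):
        self.task_context = {}                    # external attention: task state
        self.episodic_memory = deque(maxlen=1000) # self-attention: past episodes
        self.threshold = salience_threshold

    def salience(self, stimulus):
        # Placeholder: a learned model would score how unexpected or
        # urgent a stimulus is; here we assume a pre-computed score.
        return stimulus.get("urgency", 0.0)

    def step(self, stimulus, current_task):
        self.episodic_memory.append((current_task, stimulus))
        if self.salience(stimulus) > self.threshold:
            return self.decide(stimulus)          # switch to the urgent task
        return current_task                       # otherwise stay on task

    def decide(self, stimulus):
        # Dedicated decision step acting on the salience switch's output
        return stimulus.get("required_task", "halt")

agent = NMIAgent()
print(agent.step({"urgency": 0.9, "required_task": "turn_off_stove"},
                 current_task="chop_vegetables"))   # -> 'turn_off_stove'
```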

6. Potential Application Domains for Neuromorphic Machine Intelligence

There are many applications that can benefit from the novel directions of research discussed in the previous sections of this paper. While we have touched on a few section-specific applications already, here, we would like to address some key areas of work being done that could benefit from considering multidisciplinary contributions to AI development. Specifically, we hypothesise potential future applications and research directions, as seen in Figure 9, that could be considered using FC of the brain as a blueprint for AI.

6.1. Autonomous Driverless Vehicles

Autonomous vehicles are seen as one of the most anticipated developments of modern society [102]. To get there, it is well known that AI will play a major role. In fact, several methods are already being used in current autonomous vehicles, although these are still far from achieving the futuristic level of full autonomy that is expected [103,104]. The literature studying the challenges that need to be addressed before reaching that point is extensive [103,104,105,106]. Amongst all the noted problems, in this review, we focus specifically on the impact that FC-inspired methods could have in creating agents that better adapt to unseen situations or react to unexpected events. For instance, in autonomous driving scenarios, the traffic rules to follow are usually hard-coded into the vehicles and cannot easily be changed on demand [107]. It is therefore important that driving systems are ready to adapt to rule changes and, additionally, to unexpected changes in road conditions such as the weather [108] or unexpected traffic jams [109]. Inspired by the principles of FC, we hypothesise that an element of neural plasticity and episodic memory could be used for continual learning based on learned experience. In addition, to react to unexpected scenarios, a task-prioritisation system could be leveraged to improve decision-making.
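As one hedged illustration of the episodic-memory element of this hypothesis, a buffer of past driving experiences could be replayed alongside new ones when updating the policy, to limit forgetting. The sketch below shows only the storage-and-sampling skeleton; the class name, capacity, and experience format are assumptions of ours.

```python
import random
from collections import deque

class EpisodicReplay:
    """Minimal sketch of an episodic memory for continual adaptation:
    store driving experiences and sample a mixture of old and new ones
    for each policy update."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)   # oldest entries drop off

    def store(self, experience):
        self.buffer.append(experience)         # e.g., (state, action, outcome)

    def sample(self, batch_size=32):
        pool = list(self.buffer)
        return random.sample(pool, min(batch_size, len(pool)))

memory = EpisodicReplay()
memory.store(("wet_road", "slow_down", "safe"))
memory.store(("traffic_jam", "reroute", "delayed"))
print(memory.sample(2))
```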

6.2. Human-Compatible Assistive Neuroprosthetics

Work on integrating a robotic third arm [110] has shown that tri-manual control, the use of a third artificial arm powered by a single human brain, is possible. Ventures such as this could open possibilities for surgery-assistance technologies that reduce the physical strain experienced by surgeons during minimally invasive surgeries [111]. In cases like this, real-time control with pinpoint accuracy is fundamental [112], and achieving it could be possible by giving the surgeon the ability to guide the robot’s actions using their brain [113]. Doing this non-invasively poses several challenges. With EEG-based robot control, for example, noise and signal variability mean that decoding a surgeon’s intended movements is slow and lacks the precision offered by invasive BCI approaches [113]. Similarly, surgeons’ time is limited, and having multiple assistive robots per surgeon is costly. We believe that FC-inspired neuromorphic architectures could produce neuroprosthetics that integrate better with their users, through an artificial default mode network that can adapt to multiple users. Furthermore, with episodic memory storage, brain profiles of individual users could be saved, so that, as the prosthetic learns to integrate with a new user, the previous user’s information remains accessible for future use.
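A minimal sketch of the per-user profile idea follows; the class and its fields are hypothetical placeholders for whatever calibration parameters a real system would learn, and no claim is made here about the underlying neural decoding.

```python
class BrainProfileStore:
    """Hypothetical sketch: per-user calibration profiles are kept so that
    adapting to a new user does not overwrite previous users' profiles."""
    def __init__(self):
        self._profiles = {}

    def load(self, user_id):
        # Return existing calibration parameters, or start a fresh profile
        return self._profiles.setdefault(user_id, {"calibration": None})

    def save(self, user_id, calibration):
        self._profiles[user_id] = {"calibration": calibration}

store = BrainProfileStore()
store.save("surgeon_A", calibration={"gain": 0.8})
store.load("surgeon_B")                     # new user, fresh profile
print(store.load("surgeon_A"))              # previous profile still intact
```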

6.3. Development and Integration of Energy-Efficient Sensors

The development of energy-efficient sensors has challenges that significantly impact their design and application. One major limitation is the trade-off between power consumption and performance; sensors need to operate with minimal energy while still maintaining high accuracy and responsiveness [114]. Achieving this balance is particularly difficult in battery-powered or remote applications where recharging or replacing power sources is impractical, such as with wearable health-monitoring devices [115]. The miniaturisation of sensors exacerbates this issue, as smaller devices have less room for power storage and typically require more sophisticated, energy-intensive components to maintain performance standards. Additionally, advanced data processing and communication protocols, necessary for real-time and reliable sensor operation, often consume substantial energy, further straining power resources [115]. Developing new materials and innovative design architectures that minimise energy consumption without sacrificing sensor capabilities is essential for the progression of sensor technology [114]. Integrating sensors with neural network architectures presents several challenges. Data compatibility and synchronisation issues require complex preprocessing, introducing latency [116]. High data volumes from sensors can overwhelm neural networks, causing processing bottlenecks and increasing energy consumption, which is problematic for real-time applications [116]. Additionally, maintaining adaptability and scalability is crucial, as evolving sensor technology requires systems to integrate new data sources without extensive reconfiguration. In exploring methods to reduce power consumption while maintaining performance, we propose the use of flexibly integrated, organic networks as a potential solution. These networks could facilitate the dynamic formation of artificial synapses between different system components, allowing for more adaptable and efficient communication. By adjusting connections based on real-time requirements, such networks might optimise data flow and reduce unnecessary energy expenditure.
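As a hedged example of trading data volume against energy, the sketch below applies simple event-driven gating: a reading is forwarded only when it changes by more than a threshold, loosely analogous to transmitting spikes rather than a dense stream. The threshold and synthetic signal are assumed purely for illustration.

```python
import numpy as np

def event_driven_readout(samples, threshold=0.05):
    """Forward a sensor reading only when it changes enough to matter,
    instead of streaming every sample; a simple energy-saving gate."""
    last_sent = None
    for s in samples:
        if last_sent is None or abs(s - last_sent) > threshold:
            last_sent = s
            yield s                         # only salient changes go out

readings = np.cumsum(np.random.randn(1000) * 0.01)   # slow synthetic drift
sent = list(event_driven_readout(readings))
print(f"transmitted {len(sent)} of {len(readings)} samples")
```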

6.4. Collaborative Robots for Human–Robot Interaction

Not every workspace that would benefit from automation is originally designed for that end, as humans are usually the pioneers in those spaces. Creating environments suitable for fully automated robot activities, such as warehouses, might require substantial structural changes [117]. Furthermore, this type of environment can be highly unpredictable. Unexpected events might happen, not only because of the uncertainty of the machines operating, but also because humans are not perfect and might also make mistakes [109]. For these reasons, it is essential that collaborative agents can adapt to and accommodate unexpected changes or unsuitable conditions. To accommodate machines in such environments, FC-inspired networks could be designed to facilitate their integration into spaces that were originally made for humans. When it comes to reacting to the uncertainty of the workspace, one solution is to adjust the priority of the tasks assigned to a machine. Depending on what occurs in the workspace, new urgent tasks might appear that require the agents to shift their attention to something different from what they were initially programmed to do at that moment. By taking advantage of different networks with distinct functions, FC-inspired machines could be better prepared to perceive the imminence of a catastrophic event that interrupts what they were programmed to do; their priorities must then change, which could be achieved by following an FC-inspired approach to switch their attention to the new urgent task.
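One way to realise this re-prioritisation is a priority queue whose ordering an urgent, salient event can pre-empt; the sketch below is a generic illustration, with all task names and priority values assumed by us.

```python
import heapq
import itertools

class TaskQueue:
    """Sketch of attention re-allocation as dynamic re-prioritisation:
    a newly detected urgent event pre-empts the robot's planned tasks."""
    def __init__(self):
        self._heap, self._counter = [], itertools.count()

    def add(self, task, priority):
        # Lower number = more urgent; the counter breaks ties stably
        heapq.heappush(self._heap, (priority, next(self._counter), task))

    def next_task(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = TaskQueue()
q.add("restock shelf", priority=5)
q.add("assist operator", priority=3)
q.add("clear spill near human", priority=0)    # salient, urgent event
print(q.next_task())                           # -> 'clear spill near human'
```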

6.5. Robust Swarms of Autonomous Machines

Collaborative robots, or cobots, are becoming a staple part of working environments [118]. From Amazon warehouses to car manufacturing plants [118], we rely increasingly on machinery to take over heavy-duty and high-risk tasks. However, machines are not 100% reliable and, like humans, they can fail. When machines operate as a swarm, a failure might affect only one of them but, since trust amongst swarm members is assumed, a failure in one machine is very likely to affect the behaviour of the swarm as a whole [101]. The operations being conducted will then also be affected, causing problems and, potentially, catastrophic damage. A straightforward solution would be to stop the entire floor until the malfunctioning machine is fixed or removed [119], which can be time-consuming. Instead, we hypothesise that these agents could independently and autonomously adapt when there is a malfunction within their swarm; for that, they could follow an FC-inspired approach, using a salience network that facilitates perceiving when other members of the swarm are negatively impacting the performance of the group. Another way of creating more robust swarm activity is through the reuse of knowledge across different tasks [120,121]. However, it can be infeasible to directly transfer this kind of information from one task to another, and retraining the agents incurs additional cost and time. Along these lines, and again following the principles of FC in the human brain, we propose another direction towards NMI: integrating memory-storage networks with networks that learn the context of new tasks while accessing episodic memory. This would increase the robustness of swarms, allowing them to generalise their experience across task domains.
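As a simple stand-in for such a salience-like check, the sketch below flags swarm members whose recent performance deviates strongly from the group's, so the others can adapt rather than halting the whole floor. The z-score criterion and threshold are our own illustrative assumptions.

```python
import numpy as np

def flag_underperformers(rewards, z_thresh=-2.0):
    """Flag agents whose recent reward deviates strongly below the
    group's, as a crude trigger for the swarm to adapt around them."""
    rewards = np.asarray(rewards, dtype=float)
    z = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    return np.where(z < z_thresh)[0]            # indices of likely faults

recent = [9.8, 10.1, 10.0, 2.3, 9.9, 10.2]      # agent 3 is malfunctioning
print(flag_underperformers(recent))             # -> [3]
```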

6.6. Customisable Non-Invasive Human Augmentation Tools Using BCI

Invasive neurotechnology such as the Neuralink [122] is a key example of how neuroscience can advance the technology that we have today. Brain–computer interfaces are commonly used for healthcare purposes [123,124], particularly for those who have lost limb control for reasons such as neuronal degradation and injury. For such individuals, invasive NeuroTech is a daunting prospect, as little is known about the long-term physical and ethical effects of implanting foreign artefacts into the brain [125]. Beyond prosthetics for those with disabilities, it also enables human augmentation to enhance the sensorimotor ability of humans, e.g., the use of supernumerary robotic limbs [126]. Current robotic limb designs can be worn using a harness and are usually rigid structures [126]. For limb control, various feedback interfaces exist, including those utilising EEG activity [127]. However, EEG–BCI-based robotic limbs can be ineffective due to noisy signals and low spatial resolution, making users’ intentions difficult to interpret [126]. Human augmentation also requires the real-time processing of signals, which demands significant computational resources and complex algorithms that must be calibrated for individual users [128]. In comparison, invasive brain-based augmentations achieve better user calibration and real-time actions [125]. However, little can be guaranteed about the long-term effects of these invasive procedures [129] and, for human augmentation to be widely accessible, more thought should be given to faster, non-invasive systems. Leveraging our findings around FC, we believe complex neuromorphic algorithms can be designed to facilitate human augmentation. An artificial default mode network such as the one we propose, a base network that can integrate and segregate across users, could reduce the computational resources and energy required to do so.

7. Conclusions

Artificial intelligence has long been influenced by insights from neuroscience. Logically, to enable machines to demonstrate human-like behaviour, it is crucial to understand how humans work and how our brains operate. However, interlinking advances across different fields remains challenging for research that is highly specialised in one area but lacks foundations in the other. With this paper, we have created a structured and organised guide that brings together several important concepts from developments in both AI and neuroscience. We believe that bridging the advances of these fields can be key to reaching what we have hereby introduced as neuromorphic machine intelligence (NMI).
In the same way that distinct, specialised areas of the human brain are connected, intelligent machines can have specialised parts that work in synergy. Throughout this paper, we motivated discussions following the principles of FC in the human brain as a way of improving how AI-controlled machines behave, and discussed how FC can be seen as a blueprint for improving their overall operation. We discussed challenges faced by both human–machine and machine–machine collaboration and the potential of FC-inspired solutions to address them. From our analysis, we hope that achieving NMI can contribute towards safer and more robust intelligent machines that demonstrate more human-like intelligence across multiple key application domains.
Overall, the review conducted in this paper enhances the existing synergy between the fields of neuroscience and AI. We proposed key directions for NMI, starting with an artificial default mode network, with a hypergraph structure, that can facilitate network integration and segregation for artificial architectures. Secondly, we proposed a way forward for novel learning representation schemes by introducing a potential framework inspired by two FC networks used during human learning. The final section of this review looked at FC behind the task-prioritisation ability in humans; we proposed a novel framework for neuromorphic machines that could improve their task-prioritisation and decision-making abilities.
We believe that every human capability is written in the functional connectivity of the brain, in the same way that every ability of an algorithm is written in the code. With this review, we argue that, in using FC as a blueprint for NMI, we can demonstrate what it means to act as a human, providing the key to unlocking the next steps of machine intelligence.

Author Contributions

Conceptualization of the review: M.I. and V.D.S.; writing and original draft preparation: M.I.; writing review and editing: M.I., R.P., V.D.S. and X.L.; creation of visualizations and tables: M.I. and V.D.S.; supervision: V.D.S. and X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the ATRACT project (A Trustworthy Robotic Autonomous System to Support Casualty Triage), EPSRC Reference EP/X028631/1.

Data Availability Statement

No datasets were analyzed or generated for the making of this review.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Frenkel, C.; Bol, D.; Indiveri, G. Bottom-Up and Top-Down Neural Processing Systems Design: Neuromorphic Intelligence as the Convergence of Natural and Artificial Intelligence. Proc. IEEE 2023. [Google Scholar] [CrossRef]
  2. Eickhoff, S.; Muller, V. Functional Connectivity. Brain Mapp. 2015, 2, 187–201. [Google Scholar] [CrossRef]
  3. McIntosh, T.; Susnjak, T.; Liu, T.; Watters, P.; Ng, A.; Halgamuge, M. A Game-Theoretic Approach to Containing Artificial General Intelligence: Insights from Highly Autonomous Aggressive Malware. IEEE Trans. Artif. Intell. 2024. [Google Scholar] [CrossRef]
  4. Wan, Q.; Hu, S.; Zhang, Y.; Wang, P.; Wen, B.; Lu, Z. “It Felt Like Having a Second Mind”: Investigating Human-AI Co-creativity in Prewriting with Large Language Models. Proc. ACM Hum.-Comput. Interact. 2024. [Google Scholar] [CrossRef]
  5. Rasnayaka, S.; Wang, G.; Shariffdeen, R.; Iyer, G. An Empirical Study on Usage and Perceptions of LLMs in a Software Engineering Project. In Proceedings of the 46th International Conference on Software Engineering, Lisbon, Portugal, 12–21 April 2024. [Google Scholar]
  6. Obrenovic, B.; Gu, X.; Wang, G.; Godinic, D.; Jakhongirov, I. Generative AI and human–robot interaction: Implications and future agenda for business, society and ethics. AI Soc. 2024. [Google Scholar] [CrossRef]
  7. Cheng, B.; Lin, H.; Kong, Y. Challenge or hindrance? How and when organizational artificial intelligence adoption influences employee job crafting. J. Bus. Res. 2023, 164, 113987. [Google Scholar] [CrossRef]
  8. Oniani, D.; Hilsman, J.; Peng, Y.; Poropatich, R.; Pamplin, J.; Legault, G.; Wang, Y. Adopting and expanding ethical principles for generative artificial intelligence from military to healthcare. npj Digit. Med. 2023, 6, 225. [Google Scholar] [CrossRef]
  9. Wang, D.; Churchill, E.; Maes, P.; Fan, X.; Shneiderman, B.; Shi, Y.; Wang, Q. From Human-Human Collaboration to Human-AI Collaboration: Designing AI Systems That Can Work Together with People. In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA ’20, Honolulu, HI, USA, 25–30 April 2020; pp. 1–6. [Google Scholar] [CrossRef]
  10. Wu, C.J.; Raghavendra, R.; Gupta, U.; Acun, B.; Ardalani, N.; Maeng, K.; Chang, G.; Aga, F.; Huang, J.; Bai, C.; et al. Sustainable AI: Environmental Implications, Challenges and Opportunities. Proc. Mach. Learn. Syst. 2022, 4, 795–813. [Google Scholar]
  11. Macpherson, T.; Churchland, A.; Sejnowski, T.; DiCarlo, J.; Kamitani, Y.; Takahashi, H.; Hikida, T. Natural and Artificial Intelligence: A brief introduction to the interplay between AI and neuroscience research. Neural Netw. 2021, 144, 603–613. [Google Scholar] [CrossRef]
  12. Sutton, R.; Barto, A. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  13. Cai, H.; Ao, Z.; Tian, C.; Wu, Z.; Liu, H.; Tchieu, J.; Gu, M.; Mackie, K.; Guo, F. Brain organoid reservoir computing for artificial intelligence. Nat. Electron. 2023, 6, 1032–1039. [Google Scholar] [CrossRef]
  14. Suarez, L.; Richards, B.; Lajoie, G.; Misic, B. Learning function from structure in neuromorphic networks. Nat. Mach. Intell. 2021, 3, 771–786. [Google Scholar] [CrossRef]
  15. Benisty, H.; Barson, D.; Moberly, A.; Lohani, S.; Tang, L.; Coifman, R.; Crair, M.; Mishne, G.; Cardin, J.; Higley, M. Rapid fluctuations in functional connectivity of cortical networks encode spontaneous behavior. Nat. Neurosci. 2024, 27, 148–158. [Google Scholar] [CrossRef] [PubMed]
  16. Maslow, A.H. A theory of human motivation. Psychol. Rev. 1943, 50, 370–396. [Google Scholar] [CrossRef]
  17. Rybnicek, R.; Bergner, S.; Gutschelhofer, A. How individual needs influence motivation effects: A neuroscientific study on McClelland’s need theory. Rev. Manag. Sci. 2019, 13, 443–482. [Google Scholar] [CrossRef]
  18. Demir, M.; McNeese, N.J.; Cooke, N.J. The Impact of Perceived Autonomous Agents on Dynamic Team Behaviors. IEEE Trans. Emerg. Top. Comput. Intell. 2018, 2, 258–267. [Google Scholar] [CrossRef]
  19. Hartikainen, M.; Spurava, G.; Vaananen, K. Human-AI Collaboration in Smart Manufacturing: Key Concepts and Framework for Design. Front. Artif. Intell. Appl. 2024, 386, 162–172. [Google Scholar]
  20. La Fata, C.; Adelfio, L.; Micale, R.; La Scalia, G. Human error contribution to accidents in the manufacturing sector: A structured approach to evaluate the interdependence among performance shaping factors. Saf. Sci. 2023, 161, 106067. [Google Scholar] [CrossRef]
  21. Hassija, V.; Chamola, V.; Mahapatra, A.; Singal, A.; Goel, D.; Huang, K.; Scardapane, S.; Spinelli, I.; Mahmud, M.; Hussain, A. Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cogn. Comput. 2024, 16, 45–74. [Google Scholar] [CrossRef]
  22. Pina, R.; Artaud, C.; Liu, X.; De Silva, V. Staged Reinforcement Learning for Complex Tasks Through Decomposed Environments. Intell. Syst. Pattern Recognit. 2023. [Google Scholar]
  23. Kaur, S.; Singla, J.; Nkenyereye, L.; Jha, S.; Prashar, D.; Prasad, G. Medical Diagnostic Systems Using Artificial Intelligence (AI) Algorithms: Principles and Perspectives. IEEE Access 2020. [Google Scholar] [CrossRef]
  24. Luccioni, S.; Jernite, Y.; Strubell, E. Power Hungry Processing: Watts Driving the Cost of AI Deployment? In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’24, Rio de Janeiro, Brazil, 3–6 June 2024; pp. 85–99. [Google Scholar] [CrossRef]
  25. EPA. United States Environmental Protection Agency Greenhouse Gas Equivalencies Calculator; EPA: New York, NY, USA, 2021.
  26. Greif, L.; Kimmig, A.; Bobbou, S.; Jurisch, P.; Ovtcharova, J. Strategic view on the current role of AI in advancing environmental sustainability: A SWOT analysis. Discov. Artif. Intell. 2024, 4, 45. [Google Scholar] [CrossRef]
  27. Wu, Y.; Moon, J.; Zhu, X.; Lu, W. Neural Functional Connectivity Reconstruction with Second-Order Memristor Network. Adv. Intell. Syst. 2021, 3, 2000276. [Google Scholar] [CrossRef]
  28. Uddin, L. Salience processing and insular cortical function and dysfunction. Nat. Rev. Neurosci. 2015, 16, 55–61. [Google Scholar] [CrossRef]
  29. Uddin, L.; Yeo, T.; Spreng, N. Towards a Universal Taxonomy of Macro-scale Functional Human Brain Networks. Brain Topogr. 2019, 32, 926–942. [Google Scholar] [CrossRef]
  30. Fan, L.; Li, H.; Zhuo, J.; Zhang, Y.; Wang, J.; Chen, L.; Yang, Z.; Chu, C.; Xie, S.; Laird, A.; et al. The Human Brainnetome Atlas: A New Brain Atlas Based on Connectional Architecture. Cereb. Cortex 2016, 26, 3508–3526. [Google Scholar] [CrossRef]
  31. Hebb, D.O. The Organization of Behavior: A Neuropsychological Theory; Psychology Press: London, UK, 2005. [Google Scholar]
  32. Eickhoff, S.B.; Jbabdi, S.; Caspers, S.; Laird, A.R.; Fox, P.T.; Zilles, K.; Behrens, T.E.J. Anatomical and functional connectivity of cytoarchitectonic areas within the human parietal operculum. J. Neurosci. 2010, 30, 6409–6421. [Google Scholar] [CrossRef]
  33. Bear, M.; Connors, B.W.; Paradiso, M. Neuroscience: Exploring the Brain, 3rd ed.; Lippincott Williams & Wilkins Publishers: Philadelphia, PA, USA, 2007; pp. 293–331. [Google Scholar]
  34. Javed, K.; Reddy, V.; Lui, F. Neuroanatomy, Cerebral Cortex; StatPearls: Treasure Island, FL, USA, 2023. Available online: https://www.ncbi.nlm.nih.gov/books/NBK537247/ (accessed on 26 March 2024).
  35. Diamond, A. Executive Functions. Annu. Rev. Psychol. 2013, 64, 135–168. [Google Scholar] [CrossRef] [PubMed]
  36. Vereshchaka, A.; Yang, F.; Suresh, A.; Olokodana, I.L.; Dong, W. Predicting Cognitive Control in Older Adults Using Deep Learning and EEG Data. In Proceedings of the 2020 International Conference on Social Computing, Behavioral-Cultural Modeling & Prediction and Behavior Representation in Modeling and Simulation (SBP-BRiMS 2020), Washington, DC, USA, 19–22 October 2020; pp. 19–22. [Google Scholar]
  37. Miranda, E.; Sune, J. Memristors for Neuromorphic Circuits and Artificial Intelligence Applications. Materials 2020, 13, 938. [Google Scholar] [CrossRef]
  38. Bile, A.; Tari, H.; Pepino, R.; Nabizada, A.; Fazio, E. Solitonic Neural Network: A novel approach of Photonic Artificial Intelligence based on photorefractive solitonic waveguides. In Proceedings of the EPJ Web of Conferences. EDP Sciences, Kaifeng, China, 21–23 April 2023; Volume 287, p. 13003. [Google Scholar]
  39. Yaremkevich, D.D.; Scherbakov, A.V.; De Clerk, L.; Kukhtaruk, S.M.; Nadzeyka, A.; Campion, R.; Rushforth, A.W.; Savel’ev, S.; Balanov, A.G.; Bayer, M. On-chip phonon-magnon reservoir for neuromorphic computing. Nat. Commun. 2023, 14, 8296. [Google Scholar] [CrossRef]
  40. Wilson, H.; Cowan, J. Excitatory and Inhibitory Interactions in Localized Populations of Model Neurons. Biophys. J. 1972, 12, 1–24. [Google Scholar] [CrossRef]
  41. Gerstner, W.; Kistler, W.M. Spiking Neuron Models: Single Neurons, Populations, Plasticity; Cambridge University: Cambridge, UK, 2002. [Google Scholar]
  42. Yamazaki, K.; Vo-Ho, V.K.; Bulsara, D.; Le, N. Spiking Neural Networks and Their Applications: A Review. Brain Sci. 2022, 12, 863. [Google Scholar] [CrossRef]
  43. Menon, V.; D’Esposito, M. The role of PFC networks in cognitive control and executive function. Neuropsychopharmacology 2022, 47, 90–103. [Google Scholar] [CrossRef]
  44. Hillman, E. Coupling Mechanism and Significance of the BOLD Signal: A Status Report. Annu. Rev. Neurosci. 2014, 37, 161–181. [Google Scholar] [CrossRef] [PubMed]
  45. Matthews, P.M.; Jezzard, P. Functional Magnetic Resonance Imaging. J. Neurol. Neurosurg. Psychiatry 2004, 75, 6–12. [Google Scholar]
  46. Scrivener, C.L.; Reader, A.T. Variability of EEG electrode positions and their underlying brain regions: Visualizing gel artifacts from a simultaneous EEG-fMRI dataset. Brain Behav. 2022, 12, e2476. [Google Scholar] [CrossRef]
  47. Schomer, D.; Silva, F.D. Niedermeyer’s Electroencephalography; Oxford University Press: Oxford, UK, 2017; Volume 1. [Google Scholar] [CrossRef]
  48. Mohammedi, M.; Omar, M.; Bouabdallah, A. Methods for detecting and removing ocular artifacts from EEG signals in drowsy driving warning systems: A survey. Multimed. Tools Appl. 2023, 82, 17687–17714. [Google Scholar] [CrossRef]
  49. Seyedkhani, S.; Mohammadpour, R.; Irajizad, A. Principles and Advancements of Microelectrode Arrays in Brain–Machine Interfaces; Intechopen: London, UK, 2024. [Google Scholar] [CrossRef]
  50. Bradley, J.; Luithardt, H.; Metea, M.; Stock, C.J. In vitro screening for seizure liability using microelectrode array technology. Toxicol. Sci. 2018, 163, 240–253. [Google Scholar] [CrossRef] [PubMed]
  51. Hales, C.; Rolston, J.; Potter, S. How to culture, record and stimulate neuronal networks on micro-electrode arrays (MEAs). J. Vis. Exp. 2010, 39, e2056. [Google Scholar] [CrossRef]
  52. Maccione, A.; Garofalo, M.; Nieus, T.; Tedesco, M.; Berdondini, L.; Martinoia, S. Multiscale functional connectivity estimation on low-density neuronal cultures recorded by high-density CMOS micro electrode arrays. J. Neurosci. Methods 2012, 207, 161–171. [Google Scholar] [CrossRef] [PubMed]
  53. Viventi, J.; Kim, D.H.; Vigeland, L.; Frechette, E.; Blanco, J.; Kim, Y.S. Flexible, foldable, actively multiplexed, high-density electrode array for mapping brain activity in vivo. Nat. Neurosci. 2011, 14, 1599–1605. [Google Scholar] [CrossRef]
  54. Abreu, R.; Jorge, J.; Leal, A. EEG Microstates Predict Concurrent fMRI Dynamic Functional Connectivity States. Brain Topogr. 2021, 34, 41–55. [Google Scholar] [CrossRef]
  55. Cole, M.; Bassett, D.; Power, J.; Braver, T.; Petersen, S. Intrinsic and task-evoked network architectures of the human brain. Neuron 2014, 83, 238–251. [Google Scholar] [CrossRef]
  56. Ryali, S.; Supekar, K.; Chen, T.; Kochalka, J.; Cai, W.; Nicholas, J.; Padmanabhan, A.; Menon, V. Temporal Dynamics and Developmental Maturation of Salience, Default and Central-Executive Network Interactions Revealed by Variational Bayes Hidden Markov Modeling. PLoS Comput. Biol. 2016, 12, e1005138. [Google Scholar] [CrossRef]
  57. Yamada, T.; Watanabe, T.; Sasaki, Y. Plasticity–Stability dynamics during post-training processing of learning. Trends Cogn. Sci. 2023. [Google Scholar] [CrossRef] [PubMed]
  58. Hebb, D. The Organization of Behaviour; John Wiley and Sons: Hoboken, NJ, USA, 1949. [Google Scholar]
  59. Ólafsdóttir, H.; Bush, D.; Barry, C. The role of Hippocampal Replay in Memory and Planning. Curr. Biol. 2018, 28, R37–R50. [Google Scholar] [CrossRef]
  60. Samavat, M.; Bartol, T.; Harris, K.; Sejnowski, T. Synaptic Information Storage Capacity Measured with Information Theory. Neural Comput. 2024, 36, 781–802. [Google Scholar] [CrossRef] [PubMed]
  61. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  62. Scripture, E.; Smith, T.; Brown, E. On the education of muscular control and power. Stud. Yale Psychol. Lab. 1894, 2, 114–119. Available online: http://echo.mpiwg-berlin.mpg.de/MPIWG:47EYCE88 (accessed on 15 August 2024).
  63. Ossmy, O.; Mukamel, R. Neural Network Underlying Intermanual Skill Transfer in Humans. Cell Rep. 2016, 17, 2891–2900. [Google Scholar] [CrossRef]
  64. Alahmadi, A.A.S. Investigating the sub-regions of the superior parietal cortex using functional magnetic resonance imaging connectivity. Insights Into Imaging 2021, 12, 47. [Google Scholar] [CrossRef]
  65. Agarwal, S.; Branson, K.; Belongie, S. Higher Order learning with graphs. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006. [Google Scholar]
  66. Serrano, N.; Jaimes-Reategui, R.; Pisarchik, A. Hypergraph of Functional Connectivity Based on Event-Related Coherence: Magnetoencephalography Data Analysis. Appl. Sci. 2024, 14, 2343. [Google Scholar] [CrossRef]
  67. Xiao, L.; Wang, J.; Kassani, P.; Zhang, Y.; Bai, Y.; Stephen, J.M.; Wilson, T.W.; Calhoun, V.D.; Wang, Y.P. Multi-Hypergraph Learning-Based Brain Functional Connectivity Analysis in fMRI Data. IEEE Trans. Med. Imaging 2020, 39, 1746–1758. [Google Scholar] [CrossRef] [PubMed]
  68. Ha, D.; Dai, A.; Le, Q.V. HyperNetworks. arXiv 2016, arXiv:1609.09106. [Google Scholar] [CrossRef]
  69. Kumar, P.; Kumar, P.B.; Prabhu, S.R.; Upadhyay, Y.; Teja, N.V.; Swamy, P.A. Advanced Multi-task Reinforcement Learning Utilising Task-Adaptive Episodic Memory with Hypernetwork Integration. In Proceedings of the 2024 2nd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT), Bengaluru, India, 4–6 January 2024. [Google Scholar]
  70. Wang, L.; Zhang, X.; Su, H. A Comprehensive Survey of Continual Learning: Theory, Method and Application. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 5362–5383. [Google Scholar] [CrossRef] [PubMed]
  71. Ding, F.; Xu, C.; Liu, H.; Zhou, B.; Zhou, H. Bridging pre-trained models to continual learning: A hypernetwork based framework with parameter-efficient fine-tuning techniques. Inf. Sci. 2024, 674, 120710. [Google Scholar] [CrossRef]
  72. Judd, C. Generalized Experience. In Psychology of Secondary Education; Ginn & Company: Boston, MA, USA, 1927. [Google Scholar] [CrossRef]
  73. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A comprehensive survey on transfer learning. Proc. IEEE 2020, 109, 43–76. [Google Scholar] [CrossRef]
  74. McKissock, S.; Ward, J. Do errors matter? Errorless and errorful learning in anomic picture naming. Neuropsychol. Rehabil. 2007, 17, 355–373. [Google Scholar] [CrossRef] [PubMed]
  75. Yamashita, M.; Shimokawa, T.; Peper, F.; Tanemura, R. Functional network activity during errorless and trial-and-error color-name association learning. Brain Behav. 2020, 10, e01723. [Google Scholar] [CrossRef] [PubMed]
  76. Zárate-Rochín, A.M. Contemporary neurocognitive models of memory: A descriptive comparative analysis. Neuropsychologia 2024, 108846. [Google Scholar] [CrossRef]
  77. Pyc, M.A.; Rawson, K. Testing the retrieval effort hypothesis: Does greater difficulty correctly recalling information lead to higher levels of memory? J. Mem. Lang. 2009, 60, 437–447. [Google Scholar] [CrossRef]
  78. Bjork, R. Memory and metamemory considerations in the training of human beings. In Metacognition: Knowing About Knowing; MIT Press: Cambridge, MA, USA, 1994. [Google Scholar]
  79. Van Buuren, M.; Wagner, I.; Fernandez, G. Functional network interactions at rest underlie individual differences in memory ability. Learn. Mem. 2019, 26, 9–19. [Google Scholar] [CrossRef] [PubMed]
  80. Gerraty, R.T.; Davidow, J.Y.; Wimmer, G.E.; Kahn, I.; Shohamy, D. Transfer of Learning Relates to Intrinsic Connectivity between Hippocampus, Ventromedial Prefrontal Cortex, and Large-Scale Networks. J. Neurosci. 2014, 34, 11297–11303. [Google Scholar] [CrossRef] [PubMed]
  81. Rojas, R. The Backpropagation Algorithm. In Neural Networks: A Systematic Introduction; Springer: Cham, Switzerland, 1996. [Google Scholar] [CrossRef]
  82. Sacramento, J.; Costa, R.P.; Bengio, Y.; Senn, W. Dendritic cortical microcircuits approximate the backpropagation algorithm. Adv. Neural Inf. Process. Syst. 2018, 31. [Google Scholar]
  83. Song, Y.; Lukasiewicz, T.; Xu, Z.; Bogacz, R. Can the brain do backpropagation? Exact implementation of backpropagation in predictive coding networks. Adv. Neural Inf. Process. Syst. 2020, 33, 22566–22579. [Google Scholar]
  84. Song, Y.; Millidge, B.; Salvatori, T.; Lukasiewicz, T.; Xu, Z.; Bogacz, R. Inferring neural activity before plasticity as a foundation for learning beyond backpropagation. Nat. Neurosci. 2023, 27, 348–358. [Google Scholar] [CrossRef] [PubMed]
  85. Zare, M.; Kebria, P.; Khosravi, A.; Nahavandi, S. A Survey of Imitation Learning: Algorithms, Recent Developments, and Challenges. IEEE Trans. Cybern. 2023. [Google Scholar] [CrossRef] [PubMed]
  86. Leiva, F.; Ruiz-del Solar, J. Combining RL and IL using a dynamic, performance-based modulation over learning signals and its application to local planning. arXiv 2024, arXiv:2405.09760. [Google Scholar] [CrossRef]
  87. Sestieri, C.; Shulman, G.; Corbetta, M. Orienting to the environment: Separate contributions of dorsal and ventral frontoparietal attention networks. In The Neuroscience of Attention: Attentional Control and Selection; Oxford Academic: Oxford, UK, 2012. [Google Scholar] [CrossRef]
  88. Seeley, W.; Menon, V.; Schatzberg, A.; Keller, J.; Glover, G.; Kenna, H. Dissociable intrinsic connectivity networks for salience processing and executive control. J. Neurosci. 2007, 27, 2349–2356. [Google Scholar] [CrossRef] [PubMed]
  89. Schimmelpfennig, J.; Topczewski, J.; Zajkowski, W.; Jankowiak-Siuda, K. The role of the salience network in cognitive and affective deficits. Front. Hum. Neurosci. 2023, 17, 1133367. [Google Scholar] [CrossRef] [PubMed]
  90. Greicius, M.; Krasnow, B.; Reiss, A.; Menon, V. Functional connectivity in the resting brain: A network analysis of the default mode hypothesis. Proc. Natl. Acad. Sci. USA 2003, 100, 253–258. [Google Scholar] [CrossRef]
  91. Koch, I.; Poljac, E.; Muller, H.; Kiesel, A. Cognitive structure, flexibility, and plasticity in human multitasking—An integrative review of dual-task and task-switching research. Psychol. Bull. 2018, 144, 557. [Google Scholar] [CrossRef] [PubMed]
  92. Lam, T.; Vartanian, O.; Hollands, J. The brain under cognitive workload: Neural networks underlying multitasking performance in the multi-attribute task battery. Neuropsychologia 2022, 174, 108350. [Google Scholar] [CrossRef] [PubMed]
  93. Garrison, K.; Scheinost, D.; Worhunsky, P.; Elwafi, H.; Thornhill, T.; Thompson, E.; Saron, C.; Desbordes, G.; Kober, H.; Hampson, M.; et al. Real-time fMRI links subjective experience with brain activity during focused attention. Neuroimage 2013, 81, 110–118. [Google Scholar] [CrossRef] [PubMed]
  94. Cushnie, A.; Tang, W.; Heilbronner, S. Connecting Circuits with Networks in Addiction Neuroscience: A Salience Network Perspective. Int. J. Mol. Sci. 2023, 24, 9083. [Google Scholar] [CrossRef] [PubMed]
  95. Crawshaw, M. Multi-task learning with deep neural networks: A survey. arXiv 2020, arXiv:2009.09796. [Google Scholar]
  96. Agliari, E.; Alessandrelli, A.; Barra, A.; Ricci-Tersenghi, F. Parallel learning by multitasking neural networks. J. Stat. Mech. 2023, 2023, 113401. [Google Scholar] [CrossRef]
  97. Parisi, G.; Kemker, R.; Part, J.; Kanan, C.; Wermter, S. Continual Learning with neural networks: A review. Neural Netw. 2018, 113, 54–71. [Google Scholar] [CrossRef]
  98. Wu, H.; Khetarpal, K.; Precup, D. Self-Supervised Attention-Aware Reinforcement Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021. [Google Scholar]
  99. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  100. Zhu, X.; Zhao, L.; Zhu, W. Salience Interest Option: Temporal abstraction with salience interest functions. Neural Netw. 2024, 176, 106342. [Google Scholar] [CrossRef] [PubMed]
  101. Pina, R.; De Silva, V.; Artaud, C. Towards Self-Adaptive Resilient Swarms Using Multi-Agent Reinforcement Learning. In Proceedings of the 13th International Conference on Pattern Recognition Applications and Methods (ICPRAM 2024), Rome, Italy, 24–26 February 2024. [Google Scholar]
  102. Bissell, D.; Birtchnell, T.; Elliott, A.; Hsu, E.L. Autonomous automobilities: The social impacts of driverless vehicles. Curr. Sociol. 2020, 68, 116–134. [Google Scholar] [CrossRef]
  103. Khan, M.A.; Sayed, H.E.; Malik, S.; Zia, T.; Khan, J.; Alkaabi, N.; Ignatious, H. Level-5 autonomous driving—Are we there yet? a review of research literature. ACM Comput. Surv. CSUR 2022, 55, 1–38. [Google Scholar] [CrossRef]
  104. Wang, J.; Huang, H.; Li, K.; Li, J. Towards the unified principles for level 5 autonomous vehicles. Engineering 2021, 7, 1313–1325. [Google Scholar] [CrossRef]
  105. Wong, K.; Gu, Y.; Kamijo, S. Mapping for autonomous driving: Opportunities and challenges. IEEE Intell. Transp. Syst. Mag. 2020, 13, 91–106. [Google Scholar] [CrossRef]
  106. Barabas, I.; Todoruţ, A.; Cordoş, N.; Molea, A. Current challenges in autonomous driving. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Barcelona, Spain, 14–16 July 2017; Volume 252, p. 012096. [Google Scholar]
  107. Lin, J.; Zhou, W.; Wang, H.; Cao, Z.; Yu, W.; Zhao, C.; Zhao, D.; Yang, D.; Li, J. Road traffic law adaptive decision-making for self-driving vehicles. In Proceedings of the 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau, China, 8–12 October 2022; pp. 2034–2041. [Google Scholar]
  108. Erkent, Ö.; Laugier, C. Semantic segmentation with unsupervised domain adaptation under varying weather conditions for autonomous vehicles. IEEE Robot. Autom. Lett. 2020, 5, 3580–3587. [Google Scholar] [CrossRef]
  109. Galántai, P. Assessment of dangerous traffic situations for autonomous vehicles. Period. Polytech. Transp. Eng. 2022, 50, 260–266. [Google Scholar] [CrossRef]
  110. Huang, Y.; Eden, J.; Ivanova, E.; Burdet, E. Can Training Make Three Arms Better Than Two Heads for Trimanual Coordination? IEEE Open J. Eng. Med. Biol. 2023, 4, 148–155. [Google Scholar] [CrossRef] [PubMed]
  111. Amirthanayagam, A.; Zecca, M.; Barber, S.; Singh, B.; Moss, E. Impact of minimally invasive surgery on surgeon health (ISSUE) study: Protocol of a single-arm observational study conducted in the live surgery setting. Br. Med. J. 2023, 13, e066765. [Google Scholar] [CrossRef] [PubMed]
  112. Rivero-Moreno, Y.; Echevarria, S.; Vidal-Valderrama, C.; Pianetti, L.; Cordova-Guilarte, J.; Navarro-Gonzalez, J.; Acevedo-Rodríguez, J.; Dorado-Avila, G.; Osorio-Romero, L.; Chavez-Campos, C.; et al. Robotic surgery: A comprehensive review of the literature and current trends. Cureus 2023, 15. [Google Scholar] [CrossRef] [PubMed]
  113. Kosmyna, N.; Hauptmann, E.; Hmaidan, Y. A Brain-Controlled Quadruped Robot: A Proof-of-Concept Demonstration. Sensors 2023, 24, 80. [Google Scholar] [CrossRef] [PubMed]
  114. Shajari, S.; Kuruvinashetti, K.; Komeili, A.; Sundararaj, U. The emergence of AI-based wearable sensors for digital health technology: A review. Sensors 2023, 23, 9498. [Google Scholar] [CrossRef] [PubMed]
  115. Rault, T.; Bouabdallah, A.; Challal, Y.; Marin, F. A survey of energy-efficient context recognition systems using wearable sensors for healthcare applications. Pervasive Mob. Comput. 2017, 37, 23–44. [Google Scholar] [CrossRef]
  116. Tawakuli, A.; Kaiser, D.; Engel, T. Synchronized preprocessing of sensor data. In Proceedings of the 2020 IEEE International Conference on Big Data (Big Data), Atlanta, GA, USA, 10–13 December 2020; pp. 3522–3531. [Google Scholar]
  117. Tubis, A.A.; Rohman, J. Intelligent Warehouse in Industry 4.0—Systematic Literature Review. Sensors 2023, 23, 4105. [Google Scholar] [CrossRef] [PubMed]
  118. Liu, L.; Guo, F.; Zou, Z.; Duffy, V. Application, Development and Future Opportunities of Collaborative Robots (Cobots) in Manufacturing: A Literature Review. Int. J. Hum. Comput. Interact. 2024, 40, 915–932. [Google Scholar] [CrossRef]
  119. Inam, R.; Fersman, E.; Raizer, K.; Souza, R.; Nascimento, A.; Hata, A. Safety for Automated Warehouse exhibiting collaborative robots. In Safety and Reliability—Safe Societies in a Changing World; CRC Press: Boca Raton, FL, USA, 2018; pp. 2021–2028. [Google Scholar]
  120. Shi, D.; Tong, J.; Liu, Y.; Fan, W. Knowledge Reuse of Multi-Agent Reinforcement Learning in Cooperative Tasks. Entropy 2022, 24, 470. [Google Scholar] [CrossRef]
  121. Gao, Z.; Xu, K.; Ding, B.; Wang, H. Knowru: Knowledge reuse via knowledge distillation in multi-agent reinforcement learning. Entropy 2021, 23, 1043. [Google Scholar] [CrossRef]
  122. Musk, E.; Neuralink. An Integrated Brain–Machine Interface Platform with Thousands of Channels. J. Med. Internet Res. 2019, 21, e16194. [Google Scholar] [CrossRef]
  123. Karikari, E.; Koshechkin, K. Review on brain–computer interface technologies in healthcare. Biophys. Rev. 2023, 15, 1351–1358. [Google Scholar] [CrossRef]
  124. Parui, S.; Samanta, D.; Chakravorty, N. An Advanced Healthcare System Where Internet of Things meets Brain-Computer Interface Using Event-Related Potential. In Proceedings of the 24th International Conference on Distributed Computing and Networking, Kharagpur, India, 4–7 January 2023; pp. 438–443. [Google Scholar] [CrossRef]
  125. Zhao, Z.P.; Nie, C.; Jiang, C.T.; Cao, S.H.; Tian, K.X.; Yu, S.; Gu, J.W. Modulating Brain Activity with Invasive Brain-Computer Interface: A Narrative Review. Brain Sci. 2023, 13, 134. [Google Scholar] [CrossRef]
  126. Prattichizzo, D.; Pozzi, M.; Baldi, T.L.; Malvezzi, M.; Hussain, I.; Rossi, S.; Salvietti, G. Human augmentation by wearable supernumerary robotic limbs: Review and perspectives. Prog. Biomed. Eng. 2021, 3, 042005. [Google Scholar] [CrossRef]
  127. Penaloza, C.; Hernandez-Carmona, D.; Nishio, S. Towards intelligent brain-controlled body augmentation robotic limbs. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 1011–1015. [Google Scholar]
  128. Zhou, Y.; Yu, T.; Gao, W.; Huang, W.; Lu, Z.; Huang, Q.; Li, Y. Shared three-dimensional robotic arm control based on asynchronous BCI and computer vision. IEEE Trans. Neural Syst. Rehabil. Eng. 2023. [Google Scholar] [CrossRef]
  129. Drew, L. Neuralink brain chip: Advance sparks safety and secrecy concerns. Nature 2024, 627, 19. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (A) The frontal lobe, (B) the parietal lobe, (C) the cerebellum, (D) the temporal lobe, and (E) the occipital lobe.
Figure 2. Figure to show the organisation of this review. By taking inspiration from neuroscience and combining it with known ML research, we discuss FC for select traits of human intelligence to present a blueprint for neuromorphic machine intelligence.
Figure 4. (A) shows an example of fMRI connectivity [30]. The colours red, blue and yellow on the image cover brain regions that are being activated during a certain activity. (B) shows an EEG headplot with the numbers representing channels that exist over various brain regions. (C) shows one example of viewing the EEG waveform as frequency and amplitude. (D) shows an example set up for microelectrode array equipment using neuronal slices, rendered using BioRender.com (2020). Retrieved from https://app.biorender.com/biorender-templates (accessed on 11 July 2024).
Figure 5. Here, we highlight the insights taken from human intelligence for network organisation and information storage that are facilitated by FC. By linking them to brain-inspired machine learning concepts, we propose novel directions for NMI network design.
Figure 6. Diagram to show the regional network connectivity for memory retrieval. x marks the location of the MTL-RSC-DMN, displaying essential connectivity with FPN regions for retrieval.
Figure 7. Here, we highlight the insights taken from human intelligence for learning that are facilitated by FC. By linking them to brain-inspired machine learning concepts, we propose directions for NMI through novel representation learning schemes. The diagram shows the FPN and DMN networks with other potential networks indicated by the letters a, b, c and d.
Figure 8. Here, we highlight the insights taken from human intelligence for task-prioritisation ability that are facilitated by FC. By linking them to brain-inspired machine learning concepts, we propose novel directions for NMI with better task-prioritisation capability.
Figure 9. Tabular summary of the future applications and research directions discussed for NMI.
Table 1. Human attributes associated with the key brain regions discussed in the neuroscience literature.
Brain Region | Attributes
Frontal Lobe | Plays a crucial role in higher cognitive functions such as reasoning, planning, decision-making, and voluntary movement.
Temporal Lobe | Involved in auditory processing, memory, language comprehension, and emotional responses.
Brain Stem | Regulates autonomic bodily functions such as breathing and digestion.
Parietal Lobe | Processes sensory information, spatial awareness, and perception, and integrates sensory input with motor function for coordinated movement and orientation in space.
Occipital Lobe | Processes visual information and integrates it with higher-level visual functions such as object recognition, colour discrimination, and depth perception.
Cerebellum | Coordinates voluntary movement, balance, and motor learning.
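For multidisciplinary readers approaching this review from the ML side, the region–attribute associations in Table 1 and the inter-network connectivity sketched in Figures 6 and 7 can be prototyped as simple data structures. The snippet below is a minimal, purely illustrative Python sketch; all identifiers and edge weights are hypothetical placeholders rather than empirical values or an established NMI API.

```python
# Purely illustrative sketch: encoding the region-attribute associations of
# Table 1 and a toy functional-connectivity (FC) graph as plain Python
# structures. All identifiers and weights are hypothetical placeholders.

REGION_ATTRIBUTES = {
    "frontal_lobe": ["reasoning", "planning", "decision-making", "voluntary movement"],
    "temporal_lobe": ["auditory processing", "memory", "language comprehension", "emotional responses"],
    "brain_stem": ["breathing", "digestion"],
    "parietal_lobe": ["sensory processing", "spatial awareness", "sensorimotor integration"],
    "occipital_lobe": ["visual processing", "object recognition", "colour discrimination", "depth perception"],
    "cerebellum": ["movement coordination", "balance", "motor learning"],
}

# Toy undirected FC graph between large-scale networks (cf. Figures 6 and 7);
# edge weights stand in for connectivity strength and are NOT empirical data.
FC_EDGES = {
    ("MTL-RSC-DMN", "FPN"): 0.8,  # retrieval-critical coupling (Figure 6)
    ("FPN", "DMN"): 0.6,          # learning-related coupling (Figure 7)
}

def connected_networks(network: str) -> list[tuple[str, float]]:
    """Return (neighbour, weight) pairs for the given network."""
    pairs = []
    for (a, b), w in FC_EDGES.items():
        if network in (a, b):
            pairs.append((b if network == a else a, w))
    return pairs

if __name__ == "__main__":
    print(connected_networks("FPN"))  # [('MTL-RSC-DMN', 0.8), ('DMN', 0.6)]
```

A weighted-graph encoding of this kind could serve as one starting point for the modular NMI architectures discussed above, where integration and segregation between sub-networks are tuned through such connection weights.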