Final Workshop Series ‘Neural Control: From Data to Machines’

Plan4Act – Predictive Neural Information for Proactive Actions: From Monkey Brain to Smart House Control

Project Abstract:
Planning and mental simulation of actions and their outcomes are a major cognitive trait of humans. We predict action consequences and perform goal-directed actions in proactive, forward-looking ways. By contrast, systems that lack predictive planning are reactive and dominated by reflex-like, cumbersome behaviors. Most existing brain-machine interfaces (BMIs) fall into this category.

Plan4Act sets out to go beyond this by inferring actions from the action-predicting neural activity of complex action sequences. Neurophysiology in non-human primates has recently revealed that such encoding is far more widespread than previously thought. The goal of the Plan4Act project is to record and understand predictive neural activity and use it to proactively control devices in a smart house.

The far-future vision behind this is to endow motor-impaired patients with the ability to plan a daily-life goal – like making coffee – and achieve it without having to invoke every single individual action one by one to reach this goal. To approach this complex problem, we record multi-unit, action-predicting activity in macaques (WP1), model it with adaptive neural networks (WP2), design therefrom an embedded (FPGA-based) controller (WP3), and interface it with a smart house (WP4) to control action sequences with a clear look-ahead property. The main outcome of this project is a system that integrates the above components at TRL4, for which we quantify the improved reaction speed and robustness of this type of proactive BMI control. The understanding and use of predictive neural signals for machine control is novel, and the methods, algorithms, and hardware developed to translate predictive planning from neural activity to technology create the major general impact of this project. Potential translational and commercial interests will be assessed by our industrial partner; specifically, the embedded controller and its smart house interface are expected to create near-future commercial impact, too.



05.11.2020 – Session 1 – BMI (Peripheral) – Chair: Alexander Gail
10:00–10:45 Dario Farina
10:45–11:30 Silvestro Micera

Abstract of Dario Farina

Alpha motor neurons receive synaptic input that they convert into the ultimate neural code of movement: the neural drive to muscles. The study of the behaviour of motor neurons therefore provides a window into the neural processing of movement. The spiking activity of motor neurons can be identified from recordings of the electrical activity of muscles using wearable sensors. Motor neurons are thus the only neural cells whose individual activities can be studied in humans during natural behaviour, without the need for surgical implants. The talk will give an overview of the technology for motor neuron interfacing and its potential for neural interfacing in neuroprosthetic applications. Examples of neural interfacing for assistive and rehabilitation devices in patients with spinal cord injury and limb amputation will be discussed.

Abstract of Silvestro Micera
i4LIFE: Intraneural stimulation to restore sensory, motor and autonomic neural functions
Bertarelli Foundation Chair in Translational NeuroEngineering, Institute of Bioengineering & Center for Neuroprosthetics, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
Translational Neural Engineering Area, The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
Neuroengineering is a novel discipline that combines engineering – including micro- and nanotechnology, electrical and mechanical engineering, and computer science – with cellular, molecular, and cognitive neuroscience, with two main goals: (i) to increase our basic knowledge of how the nervous system works; and (ii) to develop systems able to restore function in people affected by different types of neural disability. In the past years, several breakthroughs have been achieved by neuroengineers, in particular in the development of neurotechnologies able to restore sensorimotor functions in disabled people. Intraneural electrodes represent a potentially very attractive technology for peripheral stimulation because of their very good selectivity combined with limited invasiveness. In this presentation, I will provide several examples of how implantable intraneural interfaces can be used to restore sensory (tactile feedback for hand prostheses, vision), motor (locomotion and grasping), and autonomic functions (for type 2 diabetes and heart control).

Register here

12.11.2020 – Session 2 – BMI (Central) – Chair: Michael Fauth
17:00–17:45 Tonio Ball
17:45–18:30 Nick Ramsey

Abstract of Tonio Ball
Understanding Deep Learning Models for Brain Signals
Deep learning with convolutional neural networks (CNNs) is increasingly used for brain-signal analysis and interfacing. Little is known, however, about the internal representations of brain signals that emerge from the training of deep learning systems. Such knowledge might increase their value both for medical applications, where black-box algorithms are unacceptable, and for neuroscientific discovery. Here I summarize recent progress in understanding deep learning models for brain signals. Specifically, we studied how CNNs decode hand-movement parameters from intracranial EEG (iEEG) signals during a continuous motor task. Our findings reveal that individual CNN units became specialized in extracting either iEEG amplitude or phase information originating in the motor cortex in physiological frequency bands. This segregation into two functionally different neural populations became more distinct in the later network layers. Our study thus provides insights into the principles of how deep networks learn to represent brain signals, and may facilitate the development of more transparent deep learning models for neuroscience and technology.
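The amplitude pathway described above can be illustrated with a toy example. The following Python sketch is purely hypothetical (simulated signal, made-up sampling rate and frequencies, and a fixed rather than learned kernel); it only shows, in miniature, how a temporal convolution followed by an envelope stage can expose amplitude information that a linear readout then decodes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a movement parameter ("speed") modulates the
# amplitude of a 20 Hz oscillation on one simulated channel.
fs = 200                      # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)  # 60 s of signal
speed = 0.5 + 0.5 * np.sin(2 * np.pi * 0.2 * t)   # slow movement parameter
sig = speed * np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(t.size)

# "CNN-like" first layer: a fixed temporal convolution kernel tuned to 20 Hz.
win = np.arange(0, 0.1, 1 / fs)                   # 100 ms kernel
kernel = np.sin(2 * np.pi * 20 * win) * np.hanning(win.size)
band = np.convolve(sig, kernel, mode="same")

# Amplitude feature: rectify and smooth the band-passed signal.
env = np.convolve(np.abs(band), np.ones(fs // 4) / (fs // 4), mode="same")

# Linear readout decoding the movement parameter from the envelope.
X = np.c_[env, np.ones_like(env)]
w, *_ = np.linalg.lstsq(X, speed, rcond=None)
pred = X @ w
r = np.corrcoef(pred, speed)[0, 1]
print(f"decoding correlation r = {r:.2f}")
```

In a trained CNN the kernels and readout would of course be learned from data; here they are fixed so the example stays self-contained.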

Abstract of Nick Ramsey
Beyond Proof of Concept: Can Brain-Computer Implants Improve Daily Life in People with Locked-in Syndrome?

People with severe loss of motor control, such as complete paralysis, can suffer from an inability to communicate and are excluded from social interaction. Until recently, there was no solution to offer these patients. In November 2016 we presented the first case of an implanted Brain-Computer Interface system that enabled a late-stage ALS patient with Locked-in Syndrome to control spelling software at home, without help. Key to this system is the principle that the brain generates motor signals even when they do not reach the muscles; these signals can be detected and interpreted in real time. I will present the use of the BCI implant by the first participant at home over a period of almost five years, and progress with two more participants included since. In addition, I will discuss the envisioned next developments and the hurdles in moving BCI research for communication into the homes of end users.

Register here

19.11.2020 – Session 3 – Data Analysis I – Chair: Poramate Manoonpong
17:00–17:45 Juan Álvaro Gallego
17:45–18:30 Florentin Wörgötter, Alexander Gail, Christian Tetzlaff, Michael Fauth

Abstract of Juan Álvaro Gallego
Dept. of Bioengineering
Imperial College London

A neural population view on how the brain achieves both stable and rapidly adaptable behaviour 
The analysis of neural population activity in several cortical areas has consistently uncovered low-dimensional subspaces that capture a significant fraction of neural variability. These "neural manifolds" are spanned by specific patterns of correlated neural activity, whose activations are often called "latent dynamics". I will discuss a model of brain function in which these latent dynamics, rather than the independent modulation of single neurons, drive behaviour.
Animals readily execute learnt behaviours in a consistent manner. How does the brain achieve this stable control? We recorded from neural populations in premotor, primary motor, and somatosensory cortices for up to two years as monkeys performed the same task. Remarkably, despite the unavoidable changes in recorded neurons, the population latent dynamics remain stable. Such stability allows reliable decoding of behavioural features for the entire timespan, while fixed decoders based on the recorded neural activity degrade substantially. 
Another notable feature of animal behaviour is that movements can be adapted very rapidly, even after a single error. When monkeys have to learn to counteract a velocity-dependent force field, the activity of single motor and premotor cortical neurons changes in complex ways; such changes puzzled neuroscientists for many years. I will show that adopting a "population view" reveals that, despite these changes in single-neuron activity, the neural manifold remains stable. Interestingly, motor adaptation is paralleled by the formation of new motor plans that are associated with novel latent dynamics. These observations indicate that rapid learning need not be associated with fast synaptic changes.
A population view of how the brain works reveals both robust and rapidly changing patterns of neural activity that mediate behaviour. Given that neural manifolds are found throughout the brain, from prefrontal to visual cortex and even the hippocampus, similar principles may apply to non-motor functions.
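The notion of a neural manifold can be made concrete with a small simulation. The sketch below is illustrative only (random mixing weights and arbitrary latent signals, not real recordings): it generates a 100-neuron population driven by two shared latent signals and recovers the low-dimensional subspace with PCA, computed via an SVD:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy population: 100 "neurons" driven by 2 shared latent signals.
T = 1000
t = np.linspace(0, 10, T)
latents = np.c_[np.sin(2 * np.pi * 0.5 * t),
                np.cos(2 * np.pi * 0.3 * t)]            # (T, 2)
mixing = rng.standard_normal((2, 100))                  # fixed "neural mode" patterns
rates = latents @ mixing + 0.05 * rng.standard_normal((T, 100))

# PCA: the neural manifold is spanned by the leading principal components.
X = rates - rates.mean(axis=0)
_, s, vt = np.linalg.svd(X, full_matrices=False)
var_explained = (s**2) / np.sum(s**2)
print("variance explained by first 2 PCs:", var_explained[:2].sum())

# Projection onto the 2-D manifold: the recovered "latent dynamics".
latent_dynamics = X @ vt[:2].T
```

Because only two latent signals drive the population, the first two principal components capture nearly all of the variance, mirroring the low-dimensional structure found in real recordings.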

Abstract of Florentin Wörgötter, Alexander Gail, Christian Tetzlaff, Michael Fauth
Neural dynamics underlying planning of sequences of actions in freely moving monkeys
In daily life, actions occur as part of action sequences: walking towards a door, opening it, switching on the light, etc. Humans usually plan ahead, mentally creating an action plan for a follow-up action or an action sequence while still busy executing ongoing actions. If an action goal requires multiple steps to achieve, an agent usually knows the individual actions already at the start of the action sequence. However, the neuronal characteristics of sequence planning, and of co-occurring execution and planning, have remained unclear up to now. To address this problem, we developed a new framework that allows the investigation of the behavioural and neuronal dynamics of unrestrained monkeys performing different sequences of actions. Using this framework, we extracted and identified neuronal signatures of action planning from the monkeys' neuronal activity in parietal and premotor brain areas. Our results show that knowledge of a specific action to be executed is present already early during action-sequence execution and is passed down over time along several brain areas, suggesting a hierarchy of action planning and execution. Decoding planned actions during ongoing actions is an important prerequisite for the smooth proactive control of neuroprosthetic devices or ambient assisted living (AAL) environments.

Register here

26.11.2020 – Session 4 – Theory and Experiments I – Chair: Christian Tetzlaff
10:00–10:45 Tomoki Fukai
10:45–11:30 Petra Ritter

Abstract of Tomoki Fukai
Okinawa Institute of Science and Technology
Rate and temporal coding perspectives of motor processing in cortical microcircuits
Our understanding of motor information processing has progressed rapidly, but the details of motor coding in six-layered cortical microcircuits have yet to be fully clarified. I will show the results of our experimental and computational attempts to uncover the neural representations of behaviourally relevant information in cortical microcircuits. In particular, I will explore whether and how the firing rates of neurons, as well as the temporal features of single-cell and cell-assembly activities, participate in motor coding. Using a computational model, I will also discuss the possibility that somato-dendritic interactions enable cortical neurons to segment and learn hierarchical features of temporal input.

Abstract of Petra Ritter
Integrating neuroscience data through personalized brain simulation in protected cloud environments to infer multi-scale mechanisms of brain function and dysfunction
The challenge in studying the brain as a complex adaptive system is that complexity arises from the interactions of structure and function at different spatiotemporal scales. Modern neuroimaging can provide exquisite measures of structure and function separately, but misses the fact that brain complexity emerges from the intersection of the two. We can exploit the power of large-scale network models to integrate disparate neuroimaging data sources and evaluate the potential underlying biophysical network mechanisms. This approach became broadly feasible with the whole-brain simulation platform TheVirtualBrain (TVB). TVB integrates empirical neuroimaging data from different modalities to construct biologically plausible computational models of brain network dynamics. TVB is a generative model in which biophysical parameters at the level of cell-population activity and anatomical connectivity are optimized so that they generate an individual's observed data in humans, macaques, or rodents. Inferences about brain dynamics, complexity, and the relation to cognition are thus made at the level of the biophysical features (e.g., the balance of excitation and inhibition in a cell population) that generated the observed data, rather than particular features of the measured data. The Virtual Brain Cloud (TVB-Cloud) enables neuroscience data integration in the cloud through personalized brain simulation in compliance with the EU General Data Protection Regulation (GDPR). We will demonstrate the data-protection mechanisms implemented in our multi-scale simulation workflows and image-processing pipelines, including, e.g., authentication, encryption, and sandboxing.

Register here

03.12.2020 – Session 5 – Experiments II – Chair: Michael Berger
10:00–10:45 Valerio Mante
10:45–11:30 Hans Scherberger

Abstract of Valerio Mante
Residual population dynamics as a window into neural computation
Neural activity in frontal and motor cortices can be considered to be the manifestation of a dynamical system implemented by large neural populations in recurrently connected networks. The computations emerging from such population-level dynamics reflect the interaction between external inputs into a network and its internal, recurrent dynamics. Isolating these two contributions in experimentally recorded neural activity, however, is challenging, limiting the resulting insights into neural computations. I will present an approach to addressing this challenge based on response residuals, i.e. variability in the population trajectory across repetitions of the same task condition. A complete characterization of residual dynamics is well-suited to systematically compare computations across brain areas and tasks, and leads to quantitative predictions about the consequences of small, arbitrary causal perturbations.
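The core idea of residual dynamics can be sketched numerically. In the hypothetical toy example below (an assumed 2-D linear system, not data from the talk), trial-to-trial residuals around a shared condition-averaged trajectory evolve under a linear map, and a least-squares fit to the residuals recovers that map:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed "internal" residual dynamics: x[t+1] = A x[t] + noise.
A = np.array([[0.9, -0.2],
              [0.2,  0.9]])
n_trials, T = 200, 50
mean_traj = np.cumsum(rng.standard_normal((T, 2)), axis=0)  # shared condition mean

# Simulate many repetitions of one task condition.
trials = np.empty((n_trials, T, 2))
for k in range(n_trials):
    x = 0.5 * rng.standard_normal(2)            # trial-specific deviation
    for t in range(T):
        trials[k, t] = mean_traj[t] + x
        x = A @ x + 0.05 * rng.standard_normal(2)

# Residuals: subtract the across-trial mean at each time point.
resid = trials - trials.mean(axis=0, keepdims=True)         # (n_trials, T, 2)

# Least-squares fit of the one-step residual dynamics matrix.
X0 = resid[:, :-1].reshape(-1, 2)
X1 = resid[:, 1:].reshape(-1, 2)
A_hat, *_ = np.linalg.lstsq(X0, X1, rcond=None)
A_hat = A_hat.T
print("recovered dynamics matrix:\n", A_hat.round(2))
```

The point of the sketch is that the condition-averaged trajectory drops out entirely, so the fit characterizes only how deviations from the mean evolve, which is what the residual-dynamics approach exploits.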

Abstract of Hans Scherberger
Neurobiology Laboratory, German Primate Center (DPZ), Göttingen, Germany
Sensorimotor neuronal networks for grasping in the primate brain
Hand function plays an important role in all primate species, and its loss is associated with severe disability. Grasping movements are complex actions for which the primate brain integrates sensory and cognitive signals to generate meaningful behavior. To achieve this computation, specialized brain areas are functionally connected, in particular in the parietal cortex (anterior intraparietal area, AIP), premotor cortex (area F5), and primary motor cortex (M1 hand area). This presentation will highlight recent experimental results in non-human primates that characterize how individual neurons in these cortical areas interact to generate grasping movements on the basis of sensory signals, and how such neuronal population signals can be used to decode hand actions, e.g., for operating a neural prosthesis.

Register here

10.12.2020 – Session 6 – Technical – Chair: Rebeca I. Garcia Betances
10:00–10:45 Gordon Cheng
10:45–11:30 Poramate Manoonpong, Jan-Matthias Braun, Eugenio Gaeta

Abstract of Gordon Cheng
Less is More: neural-based event-driven control
In this talk, I will present a neural-based event-driven control scheme and show its effectiveness in handling large amounts of sensory data to yield meaningful, seamless human-robot interactions. I advocate that taking a neuroengineering approach can overcome long-standing problems in the control of highly complex robotic systems. Several examples will be given in this presentation to demonstrate the effectiveness of such an approach.

Abstract of Poramate Manoonpong / Jan-Matthias Braun / Eugenio Gaeta
Proactive brain-machine interface control: From Neural Information Decoding to Smart Device Control
In this talk, we will present the overall proactive brain-machine interface (BMI)-based control framework developed in our EU Plan4Act project. Specifically, we will show how predictive neural activity in the (macaque) brain is translated into commands for proactively controlling smart home devices. We have developed artificial neural networks for neural-activity processing and implemented them on an embedded FPGA-based hardware controller. Through this controller, we can robustly extract sequence-predicting neural activity and use it for real-time proactive control. We will also present the development of our smart home interfaces, based on semantic technologies and IoT standards with machine-understandable instructions, for proactive BMI control of home devices. All in all, the project has extended recent experimental results in the domain of BMI, which still lacks predictive planning for proactive device control. Our vision is to later exploit the developed technologies to improve the quality of life of people with disabilities in the far future. This will endow them with the capability to plan a daily-life task and robustly achieve it without the need to invoke each individual action to reach their goal.
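As a rough, hypothetical illustration of the decoding step only (simulated firing rates, arbitrary dimensions, and a deliberately simple nearest-centroid classifier instead of the project's trained neural networks and FPGA implementation), classifying a planned action from a rate vector could be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative dimensions, not the project's actual pipeline.
n_units, n_actions, n_trials = 64, 4, 40

# Simulated training data: each planned action evokes a distinct rate pattern.
templates = rng.uniform(5, 30, size=(n_actions, n_units))     # mean rates (Hz)
train = templates[:, None, :] + 2.0 * rng.standard_normal((n_actions, n_trials, n_units))
centroids = train.mean(axis=1)                                # (n_actions, n_units)

def decode(rates):
    """Return the index of the action whose template is closest."""
    dists = np.linalg.norm(centroids - rates, axis=1)
    return int(np.argmin(dists))

# Evaluate on fresh simulated trials.
correct = 0
for a in range(n_actions):
    for _ in range(25):
        test_rates = templates[a] + 2.0 * rng.standard_normal(n_units)
        correct += (decode(test_rates) == a)
acc = correct / (n_actions * 25)
print(f"decoding accuracy: {acc:.2f}")
```

A template-matching decoder like this is of course far simpler than the adaptive networks described in the abstract; it is meant only to show the shape of the problem (rate vector in, planned action out).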

Register here

17.12.2020 – Session 7 – Data Analysis II and Robotics – Chair: Florentin Wörgötter
10:00–10:45 Daniel Durstewitz
10:45–11:30 Yukie Nagai

Abstract of Daniel Durstewitz
Dept. of Theoretical Neuroscience & BCCN Heidelberg-Mannheim, Central Institute of Mental Health, Medical Faculty & Faculty of Physics and Astronomy, Heidelberg University
Deep Learning of Dynamical Systems from Neural Time Series
Computational processes in the brain, like working memory or decision making, are often conceived as being implemented in terms of their neural system dynamics. For instance, sequential-syntactical processes may be realized through neural trajectories that transit among attractor states or attractor ruins. Traditionally, in computational neuroscience, such neuro-dynamical models that can account for a set of physiological and behavioral observations are formulated by the theoretician and then compared to or tested on data. This is a laborious bottom-up process. In my talk, I describe an alternative route to scientific model building, namely inferring neuro-computational models directly from recorded time series data through statistical deep learning. For this we use a particular form of “dynamically interpretable” recurrent neural network, a discrete time version of a neural population model, embedded within a variational auto-encoder framework with specific regularization constraints. My talk will review this methodology, showcase it on various dynamical systems benchmarks, and illustrate some applications on neuroimaging and spike train data.
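The general idea of inferring a dynamical-systems model directly from a time series can be shown in miniature. The sketch below is a drastic simplification of the framework described in the talk (no variational auto-encoder, no regularization; just a tiny tanh population model fitted by stochastic gradient descent on the one-step prediction error, with simulated data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ground truth: a 2-unit discrete-time population model
# x[t+1] = tanh(W x[t]) + noise, whose parameters we try to recover
# from the observed time series alone.
W_true = np.array([[0.5, -0.6],
                   [0.9,  0.1]])
T = 2000
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = np.tanh(W_true @ x[t]) + 0.1 * rng.standard_normal(2)

# Infer W by stochastic gradient descent on the one-step prediction error.
W = np.zeros((2, 2))
lr = 0.05
for epoch in range(20):
    for t in range(T - 1):
        pred = np.tanh(W @ x[t])
        err = pred - x[t + 1]
        # Gradient of 0.5*||err||^2 w.r.t. W, via the tanh derivative.
        W -= lr * np.outer(err * (1 - pred**2), x[t])

print("recovered W:\n", W.round(2))
```

The deep-learning version replaces this hand-derived gradient and tiny model with a dynamically interpretable RNN trained inside an auto-encoder, but the principle is the same: the model of the dynamics is learned directly from the recorded time series.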

Abstract of Yukie Nagai
Cognitive Development Based on Predictive Coding
A theoretical framework called predictive coding suggests that the human brain works as a predictive machine. That is, the brain tries to minimize prediction errors by updating the internal model and/or by affecting the environment. We have been investigating to what extent the predictive coding theory accounts for human cognitive development in terms of the temporal continuity (i.e., from non-social to social development) and the individual diversity (i.e., differences between typical development and developmental disorders).
This talk presents computational neural networks we designed to examine whether and how the process of minimizing prediction errors leads to cognitive development. Our experiments using a robot demonstrated that various cognitive abilities, such as goal-directed action, imitation, estimation of others' intentions, and altruistic behavior, emerged in a staged manner, as observed in infants. Not only the characteristics of typical development but also those of developmental disorders such as autism spectrum disorder were generated as a result of aberrant prediction abilities. These results demonstrate that predictive coding provides a unified computational account of cognitive development (Nagai, Phil Trans B 2019).
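The error-minimization loop at the heart of predictive coding can be reduced to a few lines. The following sketch is a deliberately minimal caricature (a scalar belief updated by gradient descent on the sensory prediction error), not the networks used in the talk:

```python
import numpy as np

rng = np.random.default_rng(3)

# An agent holds an internal estimate mu of a hidden cause and updates it
# to minimize the error between predicted and observed sensory input.
true_cause = 2.0
mu = 0.0            # internal model's current belief
lr = 0.1            # update rate for error minimization

errors = []
for step in range(200):
    obs = true_cause + 0.1 * rng.standard_normal()  # noisy sensory sample
    pred = mu                                       # generative prediction
    err = obs - pred                                # prediction error
    mu += lr * err                                  # update the internal model
    errors.append(err**2)

print(f"belief after learning: {mu:.2f} (true cause = {true_cause})")
```

The belief converges to the hidden cause and the squared prediction error shrinks over time; in the full framework, the "belief" is a learned internal model and error minimization can also act on the environment, not just on the model.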

Register here