Statistical Machine Learning and Optimisation Challenges for Brain Imaging at a Millisecond Timescale
Understanding how the brain works in healthy and pathological conditions is considered one of the major challenges of the 21st century. After the first electroencephalography (EEG) measurements in 1929, the 1990s saw the birth of modern functional brain imaging, with the first functional MRI (fMRI) and full-head magnetoencephalography (MEG) systems. By noninvasively offering unique insights into the living brain, imaging has, over the last twenty years, revolutionized both clinical and cognitive neuroscience. After pioneering breakthroughs in physics and engineering, the field of neuroscience now faces major computational and statistical challenges. The size of the datasets produced by publicly funded population studies (the Human Connectome Project in the USA, the UK Biobank and Cam-CAN in the UK, etc.) keeps increasing, with hundreds of terabytes of data now made available for basic and translational research. New high-density neural electrode grids record signals over hundreds of sensors at thousands of Hz, which also yields large datasets of time series that are difficult to model and analyze: non-stationarity, high noise levels, heterogeneity of sensors, strong variability between individuals, and a lack of accurate models for the signals. In this talk I will present some recent statistical machine learning contributions applied to electrophysiological data, and illustrate how optimization, statistics, and advanced signal processing are used today to get the best out of such challenging, and sometimes massive, data.
Alexandre Gramfort has been a senior researcher in the Parietal Team at the INRIA Saclay Research Center and CEA Neurospin since 2017. He was formerly Assistant Professor at Telecom ParisTech, Université Paris-Saclay, in the image and signal processing department. His field of expertise is statistical machine learning, signal processing, and scientific computing, applied primarily to functional brain imaging data (EEG, MEG, fMRI). His work is strongly interdisciplinary, at the interface of statistics, computer science, software engineering, and neuroscience. He is known for his work on the scikit-learn open source software, to which he has contributed since 2010 at Inria, as well as the MNE-Python software, which he started while at Harvard in 2011. In 2015, he was awarded a Starting Grant by the European Research Council (ERC).
2019-09-16 at 3:00 pm (subject to variability)