We aim to understand how the brain manages to make sense of the world so rapidly and robustly, given the vast stream of noisy and ambiguous sensory input it receives.
Our guiding hypothesis is that the brain constructs predictive internal models of its environment through self-supervised learning, and compares the predictions from these models with incoming sensory inputs. This predictive processing strategy may enable efficient encoding of incoming signals, rapid and robust perceptual inference based on those signals, and continuous learning of the complex and hierarchical statistical regularities that exist in the world.
The ability of the human brain to generate predictions about as-yet-unseen data, and to update its models based on the mismatch between predicted and actual data, may be a fundamental building block underlying the intelligence of the human mind.
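The core idea of updating an internal model from the mismatch between predicted and actual input can be illustrated with a minimal sketch (all values here are hypothetical, not from any of our studies): a scalar prediction is nudged toward each noisy sensory sample in proportion to the prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-parameter "internal model": a running prediction of a
# scalar sensory signal, updated in proportion to the prediction error.
true_signal = 1.5     # the (unknown) quantity generating the input
prediction = 0.0      # the model's initial belief
learning_rate = 0.1

for _ in range(200):
    sample = true_signal + rng.normal(scale=0.3)  # noisy sensory input
    error = sample - prediction                   # prediction error
    prediction += learning_rate * error           # error-driven update

# After many samples, the prediction settles near the true signal.
```

The same error-driven logic, applied hierarchically and at scale, is one way to think about how predictive models could be learned from experience.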
We examine the computational and neural implementation of prediction in perception and cognition. We take an integrative and multidisciplinary approach, investigating and comparing predictive processing in different modalities (visual, auditory, language), under both constrained and naturalistic conditions, with complementary techniques (psychophysics, fMRI, MEG, AI-inspired computational modeling) and across species (mouse, monkey, human).
The aim of our research is to contribute fundamental knowledge of the general operating principles of the brain that enable us to understand our surroundings and successfully interact with them.
Below you can find some examples of our current main lines of research.
We are interested in understanding how sensory information and prior expectations are dynamically combined in the brain. Using a combination of neuroimaging tools (fMRI, MEG), computational models, and psychophysical paradigms, we study the form and neural implementation of such predictions in vision (e.g., Fritsche et al., 2020; Kok et al., 2017), audition (Kern et al., 2022; Todorovic et al., 2012) and language (Heilbron et al., 2022; Heilbron et al., 2020), in both experimentally controlled and more naturalistic environments.
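One standard formalization of how sensory information and prior expectations may be combined is precision-weighted Bayesian integration of two Gaussian sources. The sketch below is purely illustrative (the numbers are hypothetical), showing how the combined estimate is pulled toward whichever signal is more reliable:

```python
import numpy as np

def combine_gaussian(prior_mean, prior_sd, sensory_mean, sensory_sd):
    """Precision-weighted fusion of a Gaussian prior and Gaussian likelihood.

    Each source is weighted by its precision (inverse variance), so the
    posterior is dominated by the more reliable signal, and its uncertainty
    is smaller than that of either source alone.
    """
    prior_prec = 1.0 / prior_sd**2
    sensory_prec = 1.0 / sensory_sd**2
    post_prec = prior_prec + sensory_prec
    post_mean = (prior_prec * prior_mean
                 + sensory_prec * sensory_mean) / post_prec
    return post_mean, np.sqrt(1.0 / post_prec)

# A precise sensory cue dominates a broad prior:
mean, sd = combine_gaussian(prior_mean=0.0, prior_sd=2.0,
                            sensory_mean=1.0, sensory_sd=0.5)
```

How, and where in the brain, such weighting is implemented dynamically is exactly the kind of question the experiments above address.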
We explore how the brain learns about a variety of statistical regularities in the world to build internal predictive models that aid perception and cognition.
For example, we study differences between incidental and intentional learning of regularities (Ferrari et al., 2022), the importance of goal-directed attention (Richter et al., 2019), and the learning rules of statistical learning (Nazli et al., 2024).
Evidence from anatomical, physiological, and behavioral studies has shown that the visual cortex employs feedforward processing, lateral recurrence, and feedback recurrence during sensory information processing. However, the specific roles of these different types of information processing are still debated. We investigate the consequences of various types of recurrence in artificial neural networks (ANNs), and their alignment with biological neural networks. Moreover, we empirically study feedforward and feedback processes in the human brain using ultra-high-resolution fMRI at 7 Tesla to image layer-resolved neural activity.
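The distinction between a purely feedforward response and one refined by within-layer recurrence can be sketched in a few lines (the weights and step count here are arbitrary illustrations, not a model from our work):

```python
import numpy as np

rng = np.random.default_rng(1)

def feedforward(x, W):
    """Single feedforward sweep: input -> rectified layer activation."""
    return np.maximum(0.0, W @ x)

def lateral_recurrence(h, L, steps=5):
    """Iteratively refine activations via lateral connections.

    At each step, units receive additional input from other units in the
    same layer, a simple stand-in for lateral recurrence; feedback
    recurrence would analogously inject input from higher layers.
    """
    for _ in range(steps):
        h = np.maximum(0.0, h + 0.1 * (L @ h))
    return h

x = rng.normal(size=8)             # toy input
W = 0.5 * rng.normal(size=(4, 8))  # feedforward weights (hypothetical)
L = 0.1 * rng.normal(size=(4, 4))  # lateral weights (hypothetical)

h0 = feedforward(x, W)             # purely feedforward response
h = lateral_recurrence(h0, L)      # response after recurrent refinement
```

Comparing ANNs with and without such recurrent steps, in terms of both task behavior and alignment with neural data, is one way to probe what recurrence contributes.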
Empirical studies support the notion that prior knowledge strongly influences sensory and cognitive processes, yet the computational mechanisms underlying this influence remain poorly understood. To explore these mechanisms, we use a variety of neuroscience-inspired AI algorithms (artificial neural networks) as models of neural information processing.