Much of our intelligent behavior, and even our character, relies on memory: the ability to retain past experiences and to bind information scattered over time into a form readily deployable to guide our actions. Human memory has been the topic of numerous studies over the decades, producing an extensive description of its various forms, the relationships between them, the anatomical structures supporting each, and the conditions under which each is engaged. Yet we still lack computational models that can explain and predict many of the neural and behavioral observations made over the years. My lab seeks to develop models and algorithms that can explain, predict, and ultimately regulate brain responses (in the form of neural activity and the consequent behaviors) during visual tasks, especially ones that require short- and long-term memory. Our research builds on tools and theories developed in machine learning, neuroscience, and cognitive science. Current projects in the lab fall broadly along four directions:

  • Topographical models of visual cortex.

    Across the primate neocortex, neurons that perform similar functions tend to be spatially clustered. In high-level visual cortex, this principle manifests as distinct cortical patches of category-selective neurons, a phenomenon observed across diverse species, including macaque monkeys and humans. Units in artificial neural networks have been shown to provide a suitable scaffold for simulating neural activity, yet these models lack the topographical organization that is so widely observed in biological brains. Our goal is to build topographical neural network models that additionally reproduce the cortical topography of human and non-human primates.
  • Massively multitask neural network models for simulating the human prefrontal cortex.

    The prefrontal cortex in general, and the dorsolateral prefrontal cortex in particular, is deeply involved in cognitive abilities such as working memory and the goal-directed selection of information, which are flexibly deployed across a wide range of cognitive tasks. Our goal is to use neural network models trained to perform a large number of compositionally constructed multimodal tasks to simulate function in the primate dorsolateral prefrontal cortex.
  • Saccading neural network models replicating human saccadic behavior.

    Most sighted animals perceive their visual environment in segments by controlling the focus of their gaze. This is in stark contrast to how most artificial neural networks see, processing the full visual field in one shot. Our goal is to develop neural networks with controllable gaze, enabling them to explore visual environments in a manner akin to natural saccadic exploration. Ultimately, we aim to leverage these models to predict the neural underpinnings of saccadic behavior in humans, bridging the gap between artificial intelligence and human cognitive processes.
  • Building predictive models of primate hippocampus.

    The hippocampus is one of the most studied brain areas in neuroscience, known for its involvement in spatial navigation and episodic memory. Various computational models based on predictive coding have been proposed that, despite their differences, all replicate prior experimental observations from the rodent hippocampus. Yet it remains unclear whether any of these models describes the hippocampal code and function better than the others. We aim to use predictive benchmarking as a more precise approach for evaluating models of the hippocampus.
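As a concrete illustration of what we mean by predictive benchmarking, the sketch below scores a candidate model by how well a cross-validated linear mapping from its features predicts held-out neural responses; models can then be ranked by this score. The function name, data shapes, and ridge formulation here are illustrative assumptions, not our actual evaluation pipeline.

```python
import numpy as np

def _ridge_fit_predict(X_tr, Y_tr, X_te, alpha):
    # Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y.
    d = X_tr.shape[1]
    W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ Y_tr)
    return X_te @ W

def predictivity_score(model_features, neural_responses, n_folds=5, alpha=1.0):
    """Cross-validated neural predictivity of a candidate model.

    model_features:   (n_stimuli, n_features) model activations.
    neural_responses: (n_stimuli, n_neurons) recorded responses to the
                      same stimuli (both arrays are hypothetical here).
    Returns the mean held-out R^2 across neurons and folds.
    """
    idx = np.arange(len(model_features))
    np.random.default_rng(0).shuffle(idx)
    folds = np.array_split(idx, n_folds)
    scores = []
    for k in range(n_folds):
        te = folds[k]
        tr = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        pred = _ridge_fit_predict(model_features[tr], neural_responses[tr],
                                  model_features[te], alpha)
        # Per-neuron R^2 on the held-out stimuli, averaged over neurons.
        ss_res = ((neural_responses[te] - pred) ** 2).sum(0)
        ss_tot = ((neural_responses[te] - neural_responses[te].mean(0)) ** 2).sum(0)
        scores.append(np.mean(1.0 - ss_res / ss_tot))
    return float(np.mean(scores))
```

A model whose features carry the same information as the recorded population scores near 1, while an unrelated model scores near (or below) 0, which is what makes the benchmark discriminative.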

Developing computational models of neural processes is becoming increasingly important, both for advancing our understanding of the neural mechanisms supporting different behaviors and for paving the way toward translating neuroscience into life-changing applications.
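Returning to the topographical modeling direction above, one way such a constraint can be expressed is as a spatial regularizer that rewards units with similar responses for sitting close together on a simulated cortical sheet. The minimal sketch below is an illustrative formulation under assumed shapes, not our actual training objective.

```python
import numpy as np

def spatial_correlation_loss(activations, positions):
    """Topographic regularizer sketch (hypothetical formulation).

    activations: (n_stimuli, n_units) unit responses.
    positions:   (n_units, 2) assigned 2-D coordinates on a
                 simulated cortical sheet.
    Returns a scalar that is low when response similarity between
    units decreases with their cortical distance.
    """
    # Pairwise response correlations between units.
    corr = np.corrcoef(activations.T)
    # Pairwise Euclidean distances on the simulated sheet.
    diffs = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(-1))
    # Keep each unordered pair of units once (upper triangle).
    iu = np.triu_indices(len(positions), k=1)
    # Loss is the negative correlation between response similarity and
    # spatial proximity: minimizing it pulls similar units together,
    # so category-selective units end up forming spatial patches.
    proximity = 1.0 / (1.0 + dist[iu])
    return -np.corrcoef(corr[iu], proximity)[0, 1]
```

Added to a task loss during training, a term like this lets the optimizer trade off task performance against topographic organization, which is one route to models that reproduce cortical patchiness.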

Recurring questions

Why do you use predictive models?

Scientific models are often placed into one of two categories. 1. Explanatory models are used to test a hypothesis about a system by showing that they can explain various observations made from that system. 2. If an explanatory model can also explain future (unseen) observations, it is called predictive.

Regardless of this classification, all scientists desire causal models: models that can identify cause-effect relationships and distinguish genuine causes from mere correlates. In our research we are committed to constantly pushing toward causal models of brain function by revisiting the metrics we use to assess how precisely our models approximate the brain.

How do you use computational models to understand the brain?

Two essential properties of every scientific hypothesis are testability and falsifiability. The precision of the predictions made by mechanistic/mathematical models makes them ideal in that regard. Naturally, we view computational models as scientific hypotheses. By following the model generation-validation cycle, we can continuously sharpen our hypotheses. We believe this approach will ultimately lead to a precise understanding of the brain.

What form of understanding do we gain from complex models of the brain?

Many of the most accurate models of visual perception consist of artificial neural network components, themselves made of many units and millions of connections between them. Viewed this way, computational models can appear extremely complex, offering no insight into the system they approximate. However, their complex construction is often the product of a much lower-dimensional space of design choices, from architectural constituents to learning parameters and data distributions. It is this space of design choices that we find most informative about how computational models can lead to an understanding of the brain.

Current artificial neural networks are nothing like the brain, so why do you use them?

We are well aware that these networks are very different from their biological counterparts. However, they are still our preferred class of models, for the following reasons:

  • Empirical results in recent years have shown that the representations learned by these models, as well as their behavioral responses, bear remarkable similarities to observations from various regions of animal brains. This is a strong indication that, despite apparent differences in architecture and learning rules, the solutions found by these networks have nontrivial similarities to those implemented by neurons within the brain.
  • The brain is an interconnected network of neurons, which suggests that any model of the brain should itself be a network of interconnected units. Many current neural networks lack well-known functional or structural features of biological neural networks, such as feedback and recurrence. Nevertheless, neural network models provide a rich modeling framework for developing new ideas, and we are hard at work making them more brain-aligned.

Lab meetings

You can access the topic list for our weekly lab meetings here.