Much of our intelligent behavior, and even our characters, relies on memory: the ability to remember past experiences and to bind information scattered over time, readily deployable to guide our actions. Human memory has been the topic of numerous studies over the decades, yielding an extensive description of its various forms, the relationships between them, the anatomical structures supporting each, and the conditions under which each is engaged. Yet we still lack computational models that can explain and predict many of the neural and behavioral observations made over the years. My lab seeks to develop models and algorithms that can explain, predict, and ultimately regulate brain responses (in the form of neural responses and the consequent behaviors) during visual tasks, especially ones that require short- and long-term memory. Our research builds on tools and theories developed in machine learning, neuroscience, and cognitive science. We especially aim for research projects with prospective benefits for both machine learning and neuroscience.

Computational models are becoming increasingly important for advancing our understanding of the neural processes supporting different behaviors, and for paving the way toward translating neuroscience into life-changing applications.

Recurring questions

Why do you use predictive models?

Scientific models are often put into one of two categories. 1. Explanatory models are used to test a hypothesis about a system by showing that the hypothesis can explain various observations made from that system. 2. If an explanatory model can also explain future (unseen) observations, it is called predictive.
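The distinction can be made concrete with held-out data. The sketch below is a minimal illustration on a hypothetical linear system (the toy data and all variable names are ours, not drawn from any particular study): the same fitted model is scored once on the observations it was fit to (explanation) and once on observations it has never seen (prediction).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system: responses are a noisy linear function of a stimulus.
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=200)

# Split observations: the model is fit on one half ...
x_fit, y_fit = x[:100], y[:100]
# ... and judged on the half it has never seen.
x_new, y_new = x[100:], y[100:]

slope, intercept = np.polyfit(x_fit, y_fit, deg=1)

def r_squared(y_true, y_pred):
    """Fraction of response variance accounted for by the model."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Explanatory fit: how well the model accounts for the data it was fit on.
r2_explained = r_squared(y_fit, slope * x_fit + intercept)
# Predictive fit: how well the same model generalizes to unseen data.
r2_predicted = r_squared(y_new, slope * x_new + intercept)
```

A model that explains well but predicts poorly (high `r2_explained`, low `r2_predicted`) has likely overfit its observations; the predictive label is earned only by the second score.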

Regardless of this classification, all scientists desire causal models, which identify cause-effect relationships and distinguish causes from mere correlates. In our research we commit to pushing constantly toward causal models of brain function by revisiting the metrics we use to assess how precisely our models approximate it.

How do you use computational models to understand the brain?

Two essential properties of every scientific hypothesis are testability and falsifiability. The precision of the predictions made by mechanistic/mathematical models makes them ideal in that regard. Naturally, we view computational models as scientific hypotheses. By following the model generation-validation cycle, we can continuously improve the precision of our hypotheses. We believe this approach will ultimately lead to a precise understanding of the brain.

What form of understanding do we gain from complex models of the brain?

Many of the most accurate models of visual perception consist of artificial neural network components, which are themselves made of many units and millions of connections. Viewed this way, computational models appear extremely complex and seem to offer no insight into the system they approximate. However, their complex construction is often the product of a much lower-dimensional space of design choices, from architectural constituents to learning parameters and data distributions. It is this space of design choices that we find most informative about how computational models could lead to an understanding of the brain.

Current artificial neural networks are nothing like the brain, so why do you use them?

We are aware that these networks are very different from their biological counterparts. However, they remain our preferred class of models for the following reasons:

  • Empirical results in recent years have shown that the representations learned by these models, and the behaviors they produce, hold remarkable similarities to some aspects of neural firing patterns in the primate visual system.
  • The brain is an interconnected network of neurons, which suggests that any model of the brain should itself be a network of interconnected units. Many current instances of neural networks lack well-known functional or structural features of biological neural networks, such as feedback and recurrence. However, we emphasize that we do not hold a strong opinion on particular instances of these models; rather, we base our work on this class of models as a rich modeling framework within which different model instances can be developed.
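The model-brain similarities mentioned in the first point are often quantified with representational similarity analysis (RSA), which compares the geometry of two sets of responses rather than the responses themselves. The sketch below uses synthetic data, and all names (`rsa_score`, the toy "recordings") are our own illustration of the general technique, not a reference implementation or our lab's actual pipeline.

```python
import numpy as np

def rdm(responses):
    # Representational dissimilarity matrix: 1 - Pearson correlation
    # between the response patterns for every pair of stimuli.
    # `responses` has shape (n_stimuli, n_units).
    return 1.0 - np.corrcoef(responses)

def upper_triangle(m):
    # Off-diagonal entries only; the diagonal is trivially zero.
    return m[np.triu_indices_from(m, k=1)]

def rsa_score(model_features, neural_responses):
    # Similarity of the two representational geometries: the correlation
    # between the off-diagonal entries of the two RDMs.
    return np.corrcoef(upper_triangle(rdm(model_features)),
                       upper_triangle(rdm(neural_responses)))[0, 1]

# Synthetic example: model units and noisy "recordings" share a common
# latent stimulus structure, so their geometries should agree.
rng = np.random.default_rng(0)
n_stimuli = 20
latent = rng.normal(size=(n_stimuli, 5))
model = latent @ rng.normal(size=(5, 100))
neurons = (latent @ rng.normal(size=(5, 60))
           + 0.1 * rng.normal(size=(n_stimuli, 60)))
score = rsa_score(model, neurons)
```

Because RSA never maps individual model units onto individual neurons, it can compare systems with entirely different dimensionality, which is what makes such comparisons between networks and recordings possible at all.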