Modelling Neuronal Interactions Over Time and Between Neurons
Motivation
We train models to predict visual cortex responses in mice: a model takes an image or a video as input and predicts the neuronal responses. We treat part of the model weights as neuronal embeddings, where each embedding vector corresponds to an actual recorded cell. The models are built in the core-readout framework: the core is universal and can be shared across animals and datasets, while the readout is recording-specific. However, in the current models the readouts neither model interactions between neurons nor modulate a neuron's current response based on its previous activity, although both are known to occur in real neurons.
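To make the setup concrete, here is a minimal sketch of the core-readout split, assuming a PyTorch implementation; the class name, variable names, and shapes below are illustrative assumptions, not the lab's actual code.

```python
import torch
import torch.nn as nn

class CoreReadoutModel(nn.Module):
    """Illustrative core-readout model: a shared core extracts visual features,
    and a per-recording readout maps them to the responses of each recorded neuron."""

    def __init__(self, core: nn.Module, n_neurons: int, feature_dim: int):
        super().__init__()
        self.core = core  # shared across animals and datasets
        # Per-neuron embedding: one weight vector per recorded cell.
        self.neuron_embeddings = nn.Parameter(torch.randn(n_neurons, feature_dim) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_neurons))

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, channels, time, height, width)
        features = self.core(video)  # assumed (batch, time, feature_dim), pooled over space
        # Each neuron's response is a non-negative projection of the shared features
        # onto its own embedding vector -- note there is no interaction between neurons here.
        responses = torch.einsum("btf,nf->btn", features, self.neuron_embeddings) + self.bias
        return nn.functional.softplus(responses)
```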
Project
The idea is to take the current models and integrate divisive normalisation and neuron-neuron interactions. The starting point is to add simple recurrence for adjusting the neuronal response over time and self-attention over the neuronal embeddings, although other ways to implement the same high-level ideas are possible. We also have data on the neurons' positions in the real brain, which could be used for modelling.
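As one possible starting point, here is a hedged sketch of a readout in which neurons interact via self-attention over the neuronal embeddings; the class name, shapes, and the exact way attention modulates the responses are assumptions for illustration, not a prescribed design.

```python
import torch
import torch.nn as nn

class NeuronInteractionReadout(nn.Module):
    """Sketch of a readout where neurons interact via self-attention.
    Attention weights come from the neuronal embeddings; the attended values
    are the per-neuron responses predicted by the standard readout."""

    def __init__(self, n_neurons: int, feature_dim: int, attn_dim: int = 64):
        super().__init__()
        self.neuron_embeddings = nn.Parameter(torch.randn(n_neurons, feature_dim) * 0.01)
        self.query = nn.Linear(feature_dim, attn_dim)
        self.key = nn.Linear(feature_dim, attn_dim)
        self.gain = nn.Parameter(torch.ones(n_neurons))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, time, feature_dim) from the shared core.
        base = torch.einsum("btf,nf->btn", features, self.neuron_embeddings)  # independent responses
        q = self.query(self.neuron_embeddings)  # (n_neurons, attn_dim)
        k = self.key(self.neuron_embeddings)    # (n_neurons, attn_dim)
        attn = torch.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)  # (n_neurons, n_neurons)
        # Each neuron's response is adjusted by a weighted sum of the other neurons' responses.
        interaction = torch.einsum("mn,btn->btm", attn, base)
        return nn.functional.softplus(base + self.gain * interaction)
```

The attention matrix `attn` in this sketch is also what one would inspect when asking whether the learned interactions relate to the recorded neuron positions.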
No prior knowledge of biology is required.
Thesis
Within the context of this project, several research questions could be explored:
- How can we model neuron interactions? Can we model neuronal interactions using attention?
- Does it improve the model's performance?
- What do we see in the attention maps? Do they correspond in some way to the neurons' positions?
- Can we implement divisive normalisation for a single neuron over time, using only the model's predictions from previous time steps? A sketch is given after this list.
- Does it improve the model's performance? Does it work for all neurons, or does it depend on the signal-to-noise ratio?
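For the divisive normalisation question, a rough sketch of one possible formulation is given below: each neuron's prediction at time t is divided by a running estimate of its own past predicted activity. The parameterisation and names are assumptions to be refined in the project.

```python
import torch
import torch.nn as nn

class TemporalDivisiveNormalization(nn.Module):
    """Sketch: divisive normalisation of a neuron's response over time,
    using only the model's own predictions from previous time steps."""

    def __init__(self, n_neurons: int, alpha: float = 0.9):
        super().__init__()
        # Per-neuron semi-saturation constant; the exact parameterisation is a design choice.
        self.sigma = nn.Parameter(torch.ones(n_neurons))
        self.alpha = alpha  # decay of the running activity estimate

    def forward(self, responses: torch.Tensor) -> torch.Tensor:
        # responses: (batch, time, n_neurons), raw readout predictions.
        normalized = []
        history = torch.zeros_like(responses[:, 0])  # running estimate of past activity
        for t in range(responses.shape[1]):
            r_t = responses[:, t]
            # Divide the current prediction by accumulated past activity.
            normalized.append(r_t / (self.sigma.abs() + history))
            history = self.alpha * history + (1 - self.alpha) * r_t
        return torch.stack(normalized, dim=1)
```

Whether such a mechanism helps, and whether the benefit depends on a neuron's signal-to-noise ratio, could then be evaluated per neuron.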
Contact
To apply, please email Polina Turishcheva stating your interest in this project and detailing your relevant skills. Part of this project could also be done as a lab rotation.