Mamba-inspired architecture for neuron-to-neuron modeling

Mamba for response-to-response modeling

Motivation

State-space models, in particular Mamba (S6), have recently shown great potential to compete with transformers while requiring less memory, since they avoid quadratic attention. Moreover, these models are causal by design (i.e. they use only previous tokens to predict the next one, whereas attention can be non-causal and "look into the future"). This makes them a promising candidate for response-to-response models that predict the activity of some neurons from the others.
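To make the causality and memory argument concrete, here is a minimal illustrative sketch (an assumption for illustration, not the actual S6/Mamba implementation) of a linear state-space recurrence processed as a strictly left-to-right scan: each output depends only on past inputs, and the memory footprint is set by the state size rather than by the squared sequence length.

    import torch

    def causal_ssm_scan(x, A, B, C):
        # x: (seq_len, d_in), A: (d_state, d_state), B: (d_state, d_in), C: (d_out, d_state)
        h = torch.zeros(A.shape[0])
        outputs = []
        for x_t in x:               # strictly left-to-right: no access to future time steps
            h = A @ h + B @ x_t     # hidden state summarizes the past
            outputs.append(C @ h)   # read out the current prediction
        return torch.stack(outputs)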

Project

The idea is to take the Mamba architecture (paper, code) and our data (Sensorium 2023 dataset, dataloader) and make them work together. One could experiment with masking or neuronal tokenization and compare the results to the transformer-based solutions (these numbers can be provided by the team).
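As a starting point, the sketch below shows one way the two pieces could be wired together. It assumes the mamba_ssm package (the reference Mamba implementation, which requires a CUDA GPU); the linear tokenization of responses per time bin and all layer sizes are illustrative choices, not a prescribed setup.

    import torch
    import torch.nn as nn
    from mamba_ssm import Mamba

    class ResponseToResponse(nn.Module):
        def __init__(self, n_neurons_in, n_neurons_out, d_model=256):
            super().__init__()
            # one "token" per time bin: project the input neurons into the model dimension
            self.tokenize = nn.Linear(n_neurons_in, d_model)
            self.mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
            self.readout = nn.Linear(d_model, n_neurons_out)

        def forward(self, responses):
            # responses: (batch, time, n_neurons_in) -> (batch, time, n_neurons_out)
            x = self.tokenize(responses)
            x = self.mamba(x)   # causal mixing over the time dimension
            return self.readout(x)

    model = ResponseToResponse(n_neurons_in=800, n_neurons_out=200).cuda()
    predictions = model(torch.randn(4, 120, 800, device="cuda"))

Other tokenizations (for example, one token per neuron instead of one per time bin) and masking schemes would slot in at the tokenize step, and swapping the Mamba block for a causally masked transformer layer would give a directly comparable baseline.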

No prior knowledge in biology is required.

Thesis

Within the context of this project, several research questions could be explored:

  • Neuronal forecasting (predict future neuronal responses from the past)
  • Predicting other neurons (predict the activity of a held-out set of neurons given the others; see the sketch after this list)
  • What are the optimal parameters for the Mamba-inspired model?
  • Does a Mamba-based solution perform better than the transformer-based solution?
  • How does model performance scale with the number of neurons?
  • How does model performance depend on the number of neurons used as input vs. the number of neurons used as output?
    • How do correlations between neurons affect this?
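For the "predicting other neurons" and scaling questions, one simple setup (an illustrative assumption, not a prescribed protocol) is to hold out a random subset of neurons as the prediction target and feed the rest to the model; varying the split size then directly probes the input-vs-output dependency.

    import torch

    def split_neurons(responses, n_output, generator=None):
        # responses: (batch, time, n_neurons); returns input and target tensors plus the index split
        n_neurons = responses.shape[-1]
        perm = torch.randperm(n_neurons, generator=generator)
        output_idx, input_idx = perm[:n_output], perm[n_output:]
        return responses[..., input_idx], responses[..., output_idx], input_idx, output_idx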

Contact

To apply, please email Polina Turishcheva stating your interest in this project and detailing your relevant skills. Please note that this project is better suited for students who already have some minimal coding background.

Neural Data Science Group
Institute of Computer Science
University of Goettingen