Deep Embedding Clustering for visual cortex cell embeddings


Motivation

We train models to predict visual cortex responses in mice: a model takes an image or a video as input and predicts the neuronal responses. We treat part of the model weights as neuronal embeddings, where each embedding vector corresponds to an actual recorded cell. The idea is that unsupervised clustering on top of these cell embeddings could yield an objective cell-type classification. The issue is that classic clustering algorithms are vulnerable to the curse of dimensionality, so we have to come up with an alternative clustering approach, possibly involving model finetuning to make the clusters more separable.

Project

The idea is to test whether neuron predictive performance is related to the resulting clusters and to use Deep Embedded Clustering (DEC), which adjusts the weights to make the embeddings more separable. The original paper is based on autoencoders, while in our case the embeddings are part of the network weights. To evaluate the results, we would like to compare against classic k-means and GMM, as well as spectral clustering methods such as Spectral Embedded Clustering (Nie et al., 2011) and Local Discriminant Models and Global Integration (LDMGI; Yang et al., 2010).
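For reference, below is a minimal sketch of the DEC clustering objective from Xie et al. (2016), assuming the neuronal embeddings are stacked into a matrix z of shape (n_cells, embedding_dim) and the cluster centroids are initialised, e.g. with k-means; the function names and PyTorch implementation are illustrative assumptions, not the project's fixed codebase.

```python
import torch
import torch.nn.functional as F

def soft_assignments(z, centroids, alpha=1.0):
    # Student's t-kernel soft assignment q_ij of cell i to cluster j (DEC, Xie et al. 2016).
    dist_sq = torch.cdist(z, centroids) ** 2
    q = (1.0 + dist_sq / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    # Sharpened target p_ij that emphasises high-confidence assignments.
    weight = q ** 2 / q.sum(dim=0)
    return weight / weight.sum(dim=1, keepdim=True)

def dec_loss(z, centroids):
    # KL(P || Q): gradients pull the embeddings and centroids towards
    # more separable clusters; the target P is treated as a constant.
    q = soft_assignments(z, centroids)
    p = target_distribution(q).detach()
    return F.kl_div(q.log(), p, reduction='batchmean')
```

The baselines could then be run directly on the frozen embeddings, e.g. with scikit-learn's KMeans, GaussianMixture, and SpectralClustering estimators.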

Thesis

Within the context of this project, several research questions could be explored:
- Is neuron predictive performance connected to the resulting clustering?
- How should DEC be adjusted for rotation-equivariant models?
- Does DEC finetuning ruin the original model performance? If yes, how can a joint loss be designed to preserve it (see the sketch below)?
- Does DEC improve the clustering?
- What is the optimal number of clusters?
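As one possible starting point for the joint-loss question, a weighted combination of the response-prediction loss and the DEC term sketched above could be used; the weight gamma and the function names are purely illustrative assumptions.

```python
import torch

def joint_loss(response_loss, z, centroids, gamma=0.1):
    # Hypothetical trade-off: keep the model's predictive loss on the neural
    # responses while the DEC term (dec_loss from the sketch above) pulls the
    # neuronal embeddings towards more separable clusters.
    return response_loss + gamma * dec_loss(z, centroids)
```

The number of clusters could then be assessed on the resulting embeddings, for example with silhouette scores or BIC sweeps over GMM fits.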

Contact

To apply, please email Polina Turishcheva stating your interest in this project and detailing your relevant skills. Part of this project could also be done as a lab rotation.

Neural Data Science Group
Institute of Computer Science
University of Goettingen