Welcome

The Neural Data Science Group, led by Alexander Ecker, works at the interface of machine learning and computational neuroscience. We develop new methods and algorithms to make sense of large-scale neuroscience data. In addition, we work on novel approaches to computer vision based on insights gained from biological vision.

Currently, the Neural Data Science Group has 8 PhD students and 2 postdocs.

Data science is a highly collaborative endeavour. We work closely with a number of labs in Göttingen, elsewhere in Germany, and in the United States: Fabian Sinz (Uni Göttingen), Andreas Tolias (Baylor College of Medicine; Houston, TX, USA), Thomas Euler (University of Tübingen), Tim Gollisch (University Medical Center Göttingen), Christoph Kleinn (Uni Göttingen), Eberhard Bodenschatz (MPI for Dynamics and Self-Organization, Göttingen), Viola Priesemann (MPI for Dynamics and Self-Organization, Göttingen), Michael Wibral (Uni Göttingen), Michael Wilczek (Uni Bayreuth).


Workshop “Current Topics in Neural Data Science”

From August 16 to 19, 2022, the Neural Data Science Group organized the “Current Topics in Neural Data Science” workshop.


Teaching

Summer term 2023

Practical course on applying deep learning for image generation.

Alexander Ecker and Timo Lüddecke

Introduction to Machine Learning

Alexander Ecker

Winter term 2022/2023

Seminar where recent deep learning papers are presented and discussed.

Alexander Ecker, Laura Hansel, Richard Vogg, Polina Turishcheva and Timo Lüddecke

Summer term 2022

Seminar where recent computational neuroscience papers are presented and discussed.

Alexander Ecker, Laura Pede, Michaela Vystrčilová, Suhas Shrinivasan

Practical course on applying deep learning for image generation.

Alexander Ecker and Timo Lüddecke

Introduction to Machine Learning

Alexander Ecker

Bachelor’s and Master’s theses

General requirements

We expect prospective students to have substantial knowledge of machine learning, its mathematical foundations, and Python programming. We therefore strongly recommend that students interested in doing their thesis in our lab take our courses on Machine Learning and Deep Learning and complete the Fachpraktikum Data Science. Exceptions are possible if well motivated.

Further recommended lectures are:

Please note that our thesis supervision capacity is limited and we receive more inquiries than we are able to supervise, so we have to select candidates. If you are interested, please write an email with the subject “Master’s thesis” or “Bachelor’s thesis” to the supervisor stated below, containing one to three sentences about what you would like to work on, along with your study record.

We will get back to you within a few days. If you do not hear from us, do not hesitate to remind us :).

Thesis offers

3D Voxel Model
3D Voxel Model for Representation Learning of Neuronal Morphologies
Supervisor: Laura Hansel
Clustering vs. Continuum
Cluster tendency assessment metrics on high-dimensional data
Supervisor: Laura Hansel
Comparing Neurons Directly
Directed Tree Neural Networks
Supervisor: Martin Ritzert
CoreMon: CoreSet Selection for Training Robust Monkey Trackers in Real-World Environments
A coreset selection approach for training robust monkey tracking algorithms in the wild
Supervisor: Sharmita Dey
Embedding Unbranched Segments of Neuronal Dendrites
Embedding Unbranched Segments of Neuronal Dendrites for Neuron Clustering
Supervisor: Martin Ritzert
Lemur Accelerometer
Automatic Detection of Lemur Behaviors from Accelerometer Data
Supervisors: Dr Kaja Wierucka, Richard Vogg
Lemur Vocalization
Automatic Detection, Segmentation and Classification of Lemur Vocalizations
Supervisors: Dr Kaja Wierucka, Richard Vogg
Self-Supervised Pretraining for Training Robust Monkey Detection and Tracking Models
Leveraging self-supervised pretraining to improve the robustness of monkey detection and tracking models in diverse environmental conditions.
Supervisor: Sharmita Dey

Research

2023

Z. Ding, D. Tran, K. Ponder, E. Cobos, Z. Ding, P. Fahey, E. Wang, T. Muhammad, J. Fu, S. Cadena, others
Bipartite invariance in mouse primary visual cortex
bioRxiv, 2023
J. Fu, S. Shrinivasan, K. Ponder, T. Muhammad, Z. Ding, E. Wang, Z. Ding, D. Tran, P. Fahey, S. Papadopoulos, others
Pattern completion and disruption characterize contextual modulation in mouse visual cortex
bioRxiv, 2023
T. Jiang, M. Freudenberg, C. Kleinn, A. Ecker, N. Nölke
The Impacts of Quality-Oriented Dataset Labeling on Tree Cover Segmentation Using U-Net: A Case Study in WorldView-3 Imagery
Remote Sensing, 2023
E. Wang, P. Fahey, K. Ponder, Z. Ding, A. Chang, T. Muhammad, S. Patel, Z. Ding, D. Tran, J. Fu, others
Towards a foundation model of the mouse visual cortex
bioRxiv, 2023

2022

L. Hoefling, K. Szatko, C. Behrens, Y. Qiu, D. Klindt, Z. Jessen, G. Schwartz, M. Bethge, P. Berens, K. Franke, others
A chromatic feature detector in the retina signals visual context changes
bioRxiv, 2022
M. Goldin, B. Lefebvre, S. Virgili, M. Pham Van Cang, A. Ecker, T. Mora, U. Ferrari, O. Marre
Context-dependent selectivity to natural images in the retina
Nature Communications, 2022
S. Cadena, K. Willeke, K. Restivo, G. Denfield, F. Sinz, M. Bethge, A. Tolias, A. Ecker
Diverse task-driven modeling of macaque V4 reveals functional specialization towards semantic tasks
bioRxiv, 2022
T. Lüddecke, A. S. Ecker
Image Segmentation Using Text and Image Prompts
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
M. Weis, S. Papadopoulos, L. Hansel, T. Lüddecke, B. Celii, P. Fahey, J. Bae, A. Bodor, D. Brittain, J. Buchanan, D. Bumbarger, M. Castro, E. Cobos, F. Collman, N. da Costa, S. Dorkenwald, L. Elabbady, E. Froudarakis, A. Halageri, Z. Jia, C. Jordan, D. Kapner, N. Kemnitz, S. Kinn, K. Lee, K. Li, R. Lu, T. Macrina, G. Mahalingam, E. Mitchell, S. Mondal, S. Mu, B. Nehoran, S. Patel, X. Pitkow, S. Popovych, R. Reid, C. Schneider-Mizell, H. Seung, W. Silversmith, F. Sinz, M. Takeno, R. Torres, N. Turner, W. Wong, J. Wu, W. Yin, S. Yu, J. Reimer, A. Tolias, A. Ecker
Large-scale unsupervised discovery of excitatory morphological cell types in mouse visual cortex
bioRxiv, 2022
Neurons in the neocortex exhibit astonishing morphological diversity which is critical for properly wiring neural circuits and giving neurons their functional properties. The extent to which the morphological diversity of excitatory neurons forms a continuum or is built from distinct clusters of cell types remains an open question. Here we took a data-driven approach using graph-based machine learning methods to obtain a low-dimensional morphological bar code describing more than 30,000 excitatory neurons in mouse visual areas V1, AL and RL that were reconstructed from a millimeter scale serial-section electron microscopy volume. We found a set of principles that captured the morphological diversity of the dendrites of excitatory neurons. First, their morphologies varied with respect to three major axes: soma depth, total apical and basal skeletal length. Second, neurons in layer 2/3 showed a strong trend of a decreasing width of their dendritic arbor and a smaller tuft with increasing cortical depth. Third, in layer 4, atufted neurons were primarily located in the primary visual cortex, while tufted neurons were more abundant in higher visual areas. Fourth, we discovered layer 4 neurons in V1 on the border to layer 5 which showed a tendency towards avoiding deeper layers with their dendrites. In summary, excitatory neurons exhibited a substantial degree of dendritic morphological variation, both within and across cortical layers, but this variation mostly formed a continuum, with only a few notable exceptions in deeper layers.
A. Barreto Alcantara, L. Reifenrath, R. Vogg, F. Sinz, A. Mahlein
Using UAV-Imagery for Leaf Segmentation in Diseased Plants via Mask-Based Data Augmentation and Extension of Leaf-based Phenotyping Parameters
bioRxiv, 2022
In crop production, plant diseases cause significant yield losses. Therefore, the detection and scoring of disease occurrence is of high importance. The quantification of plant diseases requires the identification of leaves as individual scoring units. Diseased leaves are very dynamic and complex biological objects which constantly change in form and color after interaction with plant pathogens. To address the task of identifying and segmenting individual leaves in agricultural fields, this work uses unmanned aerial vehicle (UAV) multispectral imagery of sugar beet fields and deep instance segmentation networks (Mask R-CNN). Based on standard and copy-paste image augmentation techniques, we tested and compared five strategies for achieving robustness of the network while keeping the number of labeled images within reasonable bounds. Additionally, we quantified the influence of environmental conditions on the network performance. Metrics of performance show that multispectral UAV images recorded under sunny conditions lead to a drop of up to 7% of average precision (AP) in comparison with images under cloudy, diffuse illumination conditions. The lowest performance in leaf detection was found on images with severe disease damage and sunny weather conditions. Subsequently, we used Mask R-CNN models in an image-processing pipeline for the calculation of leaf-based parameters such as leaf area, leaf slope, disease incidence, disease severity, number of clusters, and mean cluster area. To describe epidemiological development, we applied this pipeline in time series in an experimental trial with five varieties and two fungicide strategies. Disease severity of the model with the highest AP results shows the highest correlation with the same parameter assessed by experts. Time-series development of disease severity and disease incidence demonstrates the advantages of multispectral UAV imagery for contrasting varieties for resistance, and the limits for disease control measurements. With this work we highlight key components to consider for automatic leaf segmentation of diseased plants using UAV imagery, such as illumination and disease condition. Moreover, we offer a tool for delivering leaf-based parameters relevant to optimizing crop production through automated disease quantification imaging tools.

2021

M. A. Weis, K. Chitta, Y. Sharma, W. Brendel, M. Bethge, A. Geiger, A. S. Ecker
Benchmarking Unsupervised Object Representations for Video Sequences
Journal of Machine Learning Research, 2021
K. Lurz, M. Bashiri, K. Willeke, A. Jagadish, E. Wang, E. Y. Walker, S. A. Cadena, T. Muhammad, E. Cobos, A. S. Tolias, A. S. Ecker, F. H. Sinz
Generalization in data-driven models of primary visual cortex
International Conference on Learning Representations, 2021
M. Burg, S. Cadena, G. Denfield, E. Walker, A. Tolias, M. Bethge, A. Ecker
Learning divisive normalization in primary visual cortex
PLOS Computational Biology, 2021
Divisive normalization (DN) is a prominent computational building block in the brain that has been proposed as a canonical cortical operation. Numerous experimental studies have verified its importance for capturing nonlinear neural response properties to simple, artificial stimuli, and computational studies suggest that DN is also an important component for processing natural stimuli. However, we lack quantitative models of DN that are directly informed by measurements of spiking responses in the brain and applicable to arbitrary stimuli. Here, we propose a DN model that is applicable to arbitrary input images. We test its ability to predict how neurons in macaque primary visual cortex (V1) respond to natural images, with a focus on nonlinear response properties within the classical receptive field. Our model consists of one layer of subunits followed by learned orientation-specific DN. It outperforms linear-nonlinear and wavelet-based feature representations and makes a significant step towards the performance of state-of-the-art convolutional neural network (CNN) models. Unlike deep CNNs, our compact DN model offers a direct interpretation of the nature of normalization. By inspecting the learned normalization pool of our model, we gained insights into a long-standing question about the tuning properties of DN that update the current textbook description: we found that within the receptive field oriented features were normalized preferentially by features with similar orientation rather than non-specifically as currently assumed.
M. A. Weis, L. Pede, T. Lüddecke, A. S. Ecker
Self-supervised Representation Learning of Neuronal Morphologies
arXiv, 2021
Understanding the diversity of cell types and their function in the brain is one of the key challenges in neuroscience. The advent of large-scale datasets has given rise to the need of unbiased and quantitative approaches to cell type classification. We present GraphDINO, a purely data-driven approach to learning a low dimensional representation of the 3D morphology of neurons. GraphDINO is a novel graph representation learning method for spatial graphs utilizing self-supervised learning on transformer models. It smoothly interpolates between attention-based global interaction between nodes and classic graph convolutional processing. We show that this method is able to yield morphological cell type clustering that is comparable to manual feature-based classification and shows a good correspondence to expert-labeled cell types in two different species and cortical areas. Our method is applicable beyond neuroscience in settings where samples in a dataset are graphs and graph-level embeddings are desired.
D. Kobak, Y. Bernaerts, M. Weis, F. Scala, A. Tolias, P. Berens
Sparse reduced-rank regression for exploratory visualisation of paired multivariate data
Journal of the Royal Statistical Society: Series C (Applied Statistics), 2021

2020

V. Benson, A. Ecker
Assessing out-of-domain generalization for robust building damage detection
NeurIPS 2020 Workshop on Artificial Intelligence for Humanitarian Assistance and Disaster Response (AI+HADR 2020), 2020
T. Lüddecke, A. Ecker
CNNs efficiently learn long-range dependencies
NeurIPS 2020 Workshop on Shared Visual Representations in Human & Machine Intelligence, 2020
T. Lüddecke, F. Wörgötter
Fine-grained action plausibility rating
Robotics and Autonomous Systems (RAS), 2020
M. Rolínek, V. Musil, A. Paulus, M. Vlastelica, C. Michaelis, G. Martius
Optimizing Rank-based Metrics with Blackbox Differentiation
Computer Vision and Pattern Recognition (CVPR), 2020
I. Ustyuzhaninov, S. A. Cadena, E. Froudarakis, P. G. Fahey, E. Y. Walker, E. Cobos, J. Reimer, F. H. Sinz, A. S. Tolias, M. Bethge, A. S. Ecker
Rotation-invariant clustering of functional cell types in primary visual cortex
International Conference on Learning Representations (ICLR), 2020
Z. Zhao, D. Klindt, A. M. Chagas, K. P. Szatko, L. Rogerson, D. Protti, C. Behrens, D. Dalkara, T. Schubert, M. Bethge, K. Franke, P. Berens, A. S. Ecker, T. Euler
The temporal structure of the inner retina at a single glance
Scientific Reports, 2020

2019

A. S. Ecker, F. H. Sinz, E. Froudarakis, P. G. Fahey, S. A. Cadena, E. Y. Walker, E. Cobos, J. Reimer, A. S. Tolias, M. Bethge
A rotation-equivariant convolutional neural network model of primary visual cortex
International Conference on Learning Representations (ICLR), 2019
C. Michaelis, B. Mitzkus, R. Geirhos, E. Rusak, O. Bringmann, A. S. Ecker, M. Bethge, W. Brendel
Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming
Machine Learning for Autonomous Driving Workshop, NeurIPS 2019, 2019
T. Lüddecke, T. Kulvicius, F. Wörgötter
Context-based Affordance Segmentation from 2D Images for Robot Action
Robotics and Autonomous Systems (RAS), 2019
S. A. Cadena, G. H. Denfield, E. Y. Walker, L. A. Gatys, A. S. Tolias, M. Bethge, A. S. Ecker
Deep convolutional models improve predictions of macaque V1 responses to natural images
PLoS Computational Biology, 2019
T. Lüddecke, A. Agostini, M. Fauth, M. Tamosiunaite, F. Wörgötter
Distributional Semantics of Objects in Visual Scenes in Comparison to Text
Artificial Intelligence, 2019
S. A. Cadena, F. H. Sinz, T. Muhammad, E. Froudarakis, E. Cobos, E. Y. Walker, J. Reimer, M. Bethge, A. Tolias, A. S. Ecker
How well do deep neural networks trained on object recognition characterize the mouse visual system?
NeurIPS Neuro AI Workshop, 2019
R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, W. Brendel
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
International Conference on Learning Representations (ICLR), 2019
E. Y. Walker, F. H. Sinz, E. Froudarakis, P. G. Fahey, T. Muhammad, A. S. Ecker, E. Cobos, J. Reimer, X. Pitkow, A. S. Tolias
Inception loops discover what excites neurons most using deep predictive models
Nature Neuroscience, 2019
Neural Data Science Group
Institute of Computer Science
University of Göttingen