Publications

2024

L. Höfling, K. Szatko, C. Behrens, Y. Deng, Y. Qiu, D. Klindt, Z. Jessen, G. Schwartz, M. Bethge, P. Berens, K. Franke, A. Ecker, T. Euler
A chromatic feature detector in the retina signals visual context changes
eLife, 2024
The retina transforms patterns of light into visual feature representations supporting behaviour. These representations are distributed across various types of retinal ganglion cells (RGCs), whose spatial and temporal tuning properties have been studied extensively in many model organisms, including the mouse. However, it has been difficult to link the potentially nonlinear retinal transformations of natural visual inputs to specific ethological purposes. Here, we discover a nonlinear selectivity to chromatic contrast in an RGC type that allows the detection of changes in visual context. We trained a convolutional neural network (CNN) model on large-scale functional recordings of RGC responses to natural mouse movies, and then used this model to search in silico for stimuli that maximally excite distinct types of RGCs. This procedure predicted centre colour opponency in transient suppressed-by-contrast (tSbC) RGCs, a cell type whose function is being debated. We confirmed experimentally that these cells indeed responded very selectively to Green-OFF, UV-ON contrasts. This type of chromatic contrast was characteristic of transitions from ground to sky in the visual scene, as might be elicited by head or eye movements across the horizon. Because tSbC cells performed best among all RGC types at reliably detecting these transitions, we suggest a role for this RGC type in providing contextual information (i.e. sky or ground) necessary for the selection of appropriate behavioural responses to other stimuli, such as looming objects. Our work showcases how a combination of experiments with natural stimuli and computational modelling allows discovering novel types of stimulus selectivity and identifying their potential ethological relevance.
@article{hoefling2024chromatic, title: {A chromatic feature detector in the retina signals visual context changes}, author: {L. Höfling, K. Szatko, C. Behrens, Y. Deng, Y. Qiu, D. Klindt, Z. Jessen, G. Schwartz, M. Bethge, P. Berens, K. Franke, A. Ecker, T. Euler}, year: {2024}, journal: {eLife}, }
M. Weis, S. Papadopoulos, L. Hansel, T. Lüddecke, B. Celii, P. Fahey, E. Wang, J. Bae, A. Bodor, D. Brittain, J. Buchanan, D. Bumbarger, M. Castro, F. Collman, N. da Costa, S. Dorkenwald, L. Elabbady, A. Halageri, Z. Jia, C. Jordan, D. Kapner, N. Kemnitz, S. Kinn, K. Lee, K. Li, R. Lu, T. Macrina, G. Mahalingam, E. Mitchell, S. Mondal, S. Mu, B. Nehoran, S. Popovych, R. Reid, C. Schneider-Mizell, H. Seung, W. Silversmith, M. Takeno, R. Torres, N. Turner, W. Wong, J. Wu, W. Yin, S. Yu, J. Reimer, P. Berens, A. Tolias, A. Ecker
An unsupervised map of excitatory neurons' dendritic morphology in the mouse visual cortex
bioRxiv, 2024
Neurons in the neocortex exhibit astonishing morphological diversity which is critical for properly wiring neural circuits and giving neurons their functional properties. However, the organizational principles underlying this morphological diversity remain an open question. Here, we took a data-driven approach using graph-based machine learning methods to obtain a low-dimensional morphological “bar code” describing more than 30,000 excitatory neurons in mouse visual areas V1, AL and RL that were reconstructed from the millimeter scale MICrONS serial-section electron microscopy volume. Contrary to previous classifications into discrete morphological types (m-types), our data-driven approach suggests that the morphological landscape of cortical excitatory neurons is better described as a continuum, with a few notable exceptions in layers 5 and 6. Dendritic morphologies in layers 2–3 exhibited a trend towards a decreasing width of the dendritic arbor and a smaller tuft with increasing cortical depth. Inter-area differences were most evident in layer 4, where V1 contained more atufted neurons than higher visual areas. Moreover, we discovered neurons in V1 on the border to layer 5 which avoided deeper layers with their dendrites. In summary, we suggest that excitatory neurons’ morphological diversity is better understood by considering axes of variation than using distinct m-types.
@article{weis2024unsupervised, title: {An unsupervised map of excitatory neurons' dendritic morphology in the mouse visual cortex}, author: {M. Weis, S. Papadopoulos, L. Hansel, T. Lüddecke, B. Celii, P. Fahey, E. Wang, J. Bae, A. Bodor, D. Brittain, J. Buchanan, D. Bumbarger, M. Castro, F. Collman, N. da Costa, S. Dorkenwald, L. Elabbady, A. Halageri, Z. Jia, C. Jordan, D. Kapner, N. Kemnitz, S. Kinn, K. Lee, K. Li, R. Lu, T. Macrina, G. Mahalingam, E. Mitchell, S. Mondal, S. Mu, B. Nehoran, S. Popovych, R. Reid, C. Schneider-Mizell, H. Seung, W. Silversmith, M. Takeno, R. Torres, N. Turner, W. Wong, J. Wu, W. Yin, S. Yu, J. Reimer, P. Berens, A. Tolias, A. Ecker}, year: {2024}, journal: {bioRxiv}, }
R. Vogg, T. Lüddecke, J. Henrich, S. Dey, M. Nuske, V. Hassler, D. Murphy, J. Fischer, J. Ostner, O. Schülke, P. Kappeler, C. Fichtel, A. Gail, S. Treue, H. Scherberger, F. Wörgötter, A. Ecker
Computer Vision for Primate Behavior Analysis in the Wild
arXiv preprint arXiv:2401.16424, 2024
Advances in computer vision as well as increasingly widespread video-based behavioral monitoring have great potential for transforming how we study animal cognition and behavior. However, there is still a fairly large gap between the exciting prospects and what can actually be achieved in practice today, especially in videos from the wild. With this perspective paper, we want to contribute towards closing this gap, by guiding behavioral scientists in what can be expected from current methods and steering computer vision researchers towards problems that are relevant to advance research in animal behavior. We start with a survey of the state-of-the-art methods for computer vision problems that are directly relevant to the video-based study of animal behavior, including object detection, multi-individual tracking, (inter)action recognition and individual identification. We then review methods for effort-efficient learning, which is one of the biggest challenges from a practical perspective. Finally, we close with an outlook into the future of the emerging field of computer vision for animal behavior, where we argue that the field should move fast beyond the common frame-by-frame processing and treat video as a first-class citizen.
@article{vogg2024computer, title: {Computer Vision for Primate Behavior Analysis in the Wild}, author: {R. Vogg, T. Lüddecke, J. Henrich, S. Dey, M. Nuske, V. Hassler, D. Murphy, J. Fischer, J. Ostner, O. Schülke, P. Kappeler, C. Fichtel, A. Gail, S. Treue, H. Scherberger, F. Wörgötter, A. Ecker}, year: {2024}, journal: {arXiv preprint arXiv:2401.16424}, }
M. Vystrčilová, S. Sridhar, M. Burg, T. Gollisch, A. Ecker
Convolutional neural network models of the primate retina reveal adaptation to natural stimulus statistics
bioRxiv, 2024
The diverse nature of visual environments demands that the retina, the first stage of the visual system, encodes a vast range of stimuli with various statistics. The retina adapts its computations to some specific features of the input, such as brightness, contrast or motion. However, it is less clear whether it also adapts to the statistics of natural scenes compared to white noise, the latter of which is often used to infer models of retinal computation. To address this question, we analyzed neural activity of retinal ganglion cells (RGCs) in response to both white noise and naturalistic movie stimuli. We performed a systematic comparative analysis of traditional linear-nonlinear (LN) and recent convolutional neural network (CNN) models and tested their generalization across stimulus domains. We found that no model type trained on one stimulus ensemble was able to accurately predict neural activity on the other, suggesting that retinal processing depends on the stimulus statistics. Under white noise stimulation, the receptive fields of the neurons were mostly lowpass, while under natural image statistics they exhibited a more pronounced surround resembling the whitening filters predicted by efficient coding. Together, these results suggest that retinal processing dynamically adapts to the stimulus statistics.
@article{vystrvcilova2024convolutional, title: {Convolutional neural network models of the primate retina reveal adaptation to natural stimulus statistics}, author: {M. Vystrčilová, S. Sridhar, M. Burg, T. Gollisch, A. Ecker}, year: {2024}, journal: {bioRxiv}, }
S. Cadena, K. Willeke, K. Restivo, G. Denfield, F. Sinz, M. Bethge, A. Tolias, A. Ecker
Diverse task-driven modeling of macaque V4 reveals functional specialization towards semantic tasks
PLOS Computational Biology, 2024
@article{cadena2024diverse, title: {Diverse task-driven modeling of macaque V4 reveals functional specialization towards semantic tasks}, author: {S. Cadena, K. Willeke, K. Restivo, G. Denfield, F. Sinz, M. Bethge, A. Tolias, A. Ecker}, year: {2024}, journal: {PLOS Computational Biology}, }
J. van Delden, J. Schultz, C. Blech, S. Langer, T. Lüddecke
Learning to Predict Structural Vibrations
NeurIPS 2024, 2024
@article{delden2024vibrating, title: {Learning to Predict Structural Vibrations}, author: {J. van Delden, J. Schultz, C. Blech, S. Langer, T. Lüddecke}, year: {2024}, journal: {NeurIPS 2024}, }
F. Müller, R. Görge, A. Bernzen, J. Pirk, M. Poretschkin
LLMs and Memorization: On Quality and Specificity of Copyright Compliance
2024 AAAI/ACM Conference on AI, Ethics, and Society, 2024
Memorization in large language models (LLMs) is a growing concern. LLMs have been shown to easily reproduce parts of their training data, including copyrighted work. This is an important problem to solve, as it may violate existing copyright laws as well as the European AI Act. In this work, we propose a systematic analysis to quantify the extent of potential copyright infringements in LLMs using European law as an example. Unlike previous work, we evaluate instruction-finetuned models in a realistic end-user scenario. Our analysis builds on a proposed threshold of 160 characters, which we borrow from the German Copyright Service Provider Act and a fuzzy text matching algorithm to identify potentially copyright-infringing textual reproductions. The specificity of countermeasures against copyright infringement is analyzed by comparing model behavior on copyrighted and public domain data. We investigate what behaviors models show instead of producing protected text (such as refusal or hallucination) and provide a first legal assessment of these behaviors. We find that there are huge differences in copyright compliance, specificity, and appropriate refusal among popular LLMs. Alpaca, GPT 4, GPT 3.5, and Luminous perform best in our comparison, with OpenGPT-X, Alpaca, and Luminous producing a particularly low absolute number of potential copyright violations. Code will be published soon.
@article{mueller2024llmsmemorizationqualityspecificity, title: {LLMs and Memorization: On Quality and Specificity of Copyright Compliance}, author: {F. Müller, R. Görge, A. Bernzen, J. Pirk, M. Poretschkin}, year: {2024}, journal: {2024 AAAI/ACM Conference on AI, Ethics, and Society}, }
F. Müller, J. Tanke, J. Gall
Massively Multi-Person 3D Human Motion Forecasting with Scene Context
ECCV 2024 Workshop and Competition on Affective Behavior Analysis in-the-wild, 2024
Forecasting long-term 3D human motion is challenging: the stochasticity of human behavior makes it hard to generate realistic human motion from the input sequence alone. Information on the scene environment and the motion of nearby people can greatly aid the generation process. We propose a scene-aware social transformer model (SAST) to forecast long-term (10s) human motion. Unlike previous models, our approach can model interactions between both widely varying numbers of people and objects in a scene. We combine a temporal convolutional encoder-decoder architecture with a Transformer-based bottleneck that allows us to efficiently combine motion and scene information. We model the conditional motion distribution using denoising diffusion models. We benchmark our approach on the Humans in Kitchens dataset, which contains 1 to 16 persons and 29 to 50 objects that are visible simultaneously. Our model outperforms other approaches in terms of realism and diversity on different metrics and in a user study.
@article{mueller2024massivelymultiperson3dhuman, title: {Massively Multi-Person 3D Human Motion Forecasting with Scene Context}, author: {F. Müller, J. Tanke, J. Gall}, year: {2024}, journal: {ECCV 2024 Workshop and Competition on Affective Behavior Analysis in-the-wild}, }
J. van Delden, J. Schultz, C. Blech, S. Langer, T. Lüddecke
Minimizing Structural Vibrations via Guided Diffusion Design Optimization
ICLR 2024 Workshop on AI4DifferentialEquations In Science, 2024
@inproceedings{delden2024minimizing, title: {Minimizing Structural Vibrations via Guided Diffusion Design Optimization}, author: {J. van Delden, J. Schultz, C. Blech, S. Langer, T. Lüddecke}, year: {2024}, booktitle: {ICLR 2024 Workshop on AI4DifferentialEquations In Science}, }
P. Turishcheva, L. Hansel, M. Ritzert, M. A. Weis, A. S. Ecker
MNIST-Nd: a set of naturalistic datasets to benchmark clustering across dimensions
NeurIPS 2024 NeurReps Workshop (extended abstract), 2024
@misc{turishcheva2024mnistndsetnaturalisticdatasets, title: {MNIST-Nd: a set of naturalistic datasets to benchmark clustering across dimensions}, author: {P. Turishcheva, L. Hansel, M. Ritzert, M. A. Weis, A. S. Ecker}, year: {2024}, journal: {NeurIPS 2024 NeurReps Workshop (extended abstract)}, }
F. Schmidt, S. Shrinivasan, P. Turishcheva, F. H. Sinz
Modeling dynamic neural activity by combining naturalistic video stimuli and stimulus-independent latent factors
NeurIPS 2024 NeurReps Workshop (extended abstract), 2024
@misc{schmidt2024modelingdynamicneuralactivity, title: {Modeling dynamic neural activity by combining naturalistic video stimuli and stimulus-independent latent factors}, author: {F. Schmidt, S. Shrinivasan, P. Turishcheva, F. H. Sinz}, year: {2024}, journal: {NeurIPS 2024 NeurReps Workshop (extended abstract)}, }
S. Sridhar, M. Vystrčilová, M. Khani, D. Karamanlis, H. Schreyer, V. Ramakrishna, S. Krueppel, S. Zapp, M. Mietsch, A. Ecker, T. Gollisch
Modeling spatial contrast sensitivity in responses of primate retinal ganglion cells to natural movies
bioRxiv, 2024
Retinal ganglion cells, the output neurons of the vertebrate retina, often display nonlinear summation of visual signals over their receptive fields. This creates sensitivity to spatial contrast, letting the cells respond to spatially structured visual stimuli, such as a contrast-reversing grating, even when no net change in overall illumination of the receptive field occurs. Yet, computational models of ganglion cell responses are often based on linear receptive fields. Nonlinear extensions, on the other hand, such as subunit models, which separate receptive fields into smaller, nonlinearly combined subfields, are often cumbersome to fit to experimental data, in particular when natural stimuli are considered. Previous work in the salamander retina has shown that sensitivity to spatial contrast in response to flashed images can be partly captured by a model that combines signals from the mean and variance of luminance signals inside the receptive field. Here, we extend this spatial contrast model for application to spatiotemporal stimulation and explore its performance on spiking responses that we recorded from retinas of marmosets under artificial and natural movies. We show how the model can be fitted to experimental data and that it outperforms common models with linear spatial integration, in particular for parasol ganglion cells. Finally, we use the model framework to infer the cells’ spatial scale of nonlinear spatial integration and contrast sensitivity. Our work shows that the spatial contrast model provides a simple approach to capturing aspects of nonlinear spatial integration with only few free parameters, which can be used to assess the cells’ functional properties under natural stimulation and which provides a simple-to-obtain benchmark for comparison with more detailed nonlinear encoding models.
@article{sridhar2024modeling, title: {Modeling spatial contrast sensitivity in responses of primate retinal ganglion cells to natural movies}, author: {S. Sridhar, M. Vystrčilová, M. Khani, D. Karamanlis, H. Schreyer, V. Ramakrishna, S. Krueppel, S. Zapp, M. Mietsch, A. Ecker, T. Gollisch}, year: {2024}, journal: {bioRxiv}, }
M. F. Burg, T. Zenkel, M. Vystrčilová, J. Oesterle, L. Höfling, K. F. Willeke, J. Lause, S. Müller, P. G. Fahey, Z. Ding, K. Restivo, S. Sridhar, T. Gollisch, P. Berens, A. S. Tolias, T. Euler, M. Bethge, A. S. Ecker
Most discriminative stimuli for functional cell type clustering
The Twelfth International Conference on Learning Representations, 2024
@inproceedings{burg2024most, title: {Most discriminative stimuli for functional cell type clustering}, author: {M. F. Burg, T. Zenkel, M. Vystrčilová, J. Oesterle, L. Höfling, K. F. Willeke, J. Lause, S. Müller, P. G. Fahey, Z. Ding, K. Restivo, S. Sridhar, T. Gollisch, P. Berens, A. S. Tolias, T. Euler, M. Bethge, A. S. Ecker}, year: {2024}, booktitle: {The Twelfth International Conference on Learning Representations}, }
N. Wu, I. Valera, F. Sinz, A. Ecker, T. Euler, Y. Qiu
Probabilistic neural transfer function estimation with Bayesian system identification
PLOS Computational Biology, 2024
@article{wu2024probabilistic, title: {Probabilistic neural transfer function estimation with Bayesian system identification}, author: {N. Wu, I. Valera, F. Sinz, A. Ecker, T. Euler, Y. Qiu}, year: {2024}, journal: {PLOS Computational Biology}, }
P. Turishcheva, M. Burg, F. H. Sinz, A. Ecker
Reproducibility of predictive networks for mouse visual cortex
NeurIPS 2024 (Spotlight), 2024
@misc{turishcheva2024reproducibilitypredictivenetworksmouse, title: {Reproducibility of predictive networks for mouse visual cortex}, author: {P. Turishcheva, M. Burg, F. H. Sinz, A. Ecker}, year: {2024}, journal: {NeurIPS 2024 (Spotlight)}, }
P. Turishcheva, P. G. Fahey, M. Vystrčilová, L. Hansel, R. Froebe, K. Ponder, Y. Qiu, K. F. Willeke, M. Bashiri, R. Baikulov, Y. Zhu, L. Ma, S. Yu, T. Huang, B. M. Li, W. D. Wulf, N. Kudryashova, M. H. Hennig, N. L. Rochefort, A. Onken, E. Wang, Z. Ding, A. S. Tolias, F. H. Sinz, A. S. Ecker
Retrospective for the Dynamic Sensorium Competition for predicting large-scale mouse primary visual cortex activity from videos
NeurIPS 2024 (Dataset and Benchmarks), 2024
@misc{turishcheva2024retrospectivedynamicsensoriumcompetition, title: {Retrospective for the Dynamic Sensorium Competition for predicting large-scale mouse primary visual cortex activity from videos}, author: {P. Turishcheva, P. G. Fahey, M. Vystrčilová, L. Hansel, R. Froebe, K. Ponder, Y. Qiu, K. F. Willeke, M. Bashiri, R. Baikulov, Y. Zhu, L. Ma, S. Yu, T. Huang, B. M. Li, W. D. Wulf, N. Kudryashova, M. H. Hennig, N. L. Rochefort, A. Onken, E. Wang, Z. Ding, A. S. Tolias, F. H. Sinz, A. S. Ecker}, year: {2024}, journal: {NeurIPS 2024 (Dataset and Benchmarks)}, }
M. Carbone, V. Peterhans, A. Ecker, M. Wilczek
Tailor-Designed Models for the Turbulent Velocity Gradient through Normalizing Flow
Phys. Rev. Lett. (Editors' Suggestion), 2024
@article{carbone2024tailor, title: {Tailor-Designed Models for the Turbulent Velocity Gradient through Normalizing Flow}, author: {M. Carbone, V. Peterhans, A. Ecker, M. Wilczek}, year: {2024}, journal: {Phys. Rev. Lett. (Editors' Suggestion)}, }

2023

N. Wu, I. Valera, A. Ecker, T. Euler, Y. Qiu
Bayesian Neural System Identification with Response Variability
arXiv preprint arXiv:2308.05990, 2023
@article{wu2023bayesian, title: {Bayesian Neural System Identification with Response Variability}, author: {N. Wu, I. Valera, A. Ecker, T. Euler, Y. Qiu}, year: {2023}, journal: {arXiv preprint arXiv:2308.05990}, }
Z. Ding, D. Tran, K. Ponder, E. Cobos, Z. Ding, P. Fahey, E. Wang, T. Muhammad, J. Fu, S. Cadena, others
Bipartite invariance in mouse primary visual cortex
bioRxiv, 2023
@article{ding2023bipartite, title: {Bipartite invariance in mouse primary visual cortex}, author: {Z. Ding, D. Tran, K. Ponder, E. Cobos, Z. Ding, P. Fahey, E. Wang, T. Muhammad, J. Fu, S. Cadena, others}, year: {2023}, journal: {bioRxiv}, }
A. Barreto, L. Reifenrath, R. Vogg, F. Sinz, A. Mahlein
Data Augmentation for Mask-Based Leaf Segmentation of UAV-Images as a Basis to Extract Leaf-Based Phenotyping Parameters
KI-Künstliche Intelligenz, 2023
In crop protection, disease quantification parameters such as disease incidence (DI) and disease severity (DS) are the principal indicators for decision making, aimed at ensuring the safety and productivity of crop yield. The quantification is standardized with leaf organs, defined as individual scoring units. This study focuses on identifying and segmenting individual leaves in agricultural fields using unmanned aerial vehicle (UAV), multispectral imagery of sugar beet fields, and deep instance segmentation networks (Mask R-CNN). Five strategies for achieving network robustness with limited labeled images are tested and compared, employing simple and copy-paste image augmentation techniques. The study also evaluates the impact of environmental conditions on network performance. Metrics of performance show that multispectral UAV images recorded under sunny conditions lead to a performance drop. Focusing on the practical application, we employ Mask R-CNN models in an image-processing pipeline to calculate leaf-based parameters including DS and DI. The pipeline was applied in time-series in an experimental trial with five varieties and two fungicide strategies to illustrate epidemiological development. Disease severity calculated with the model with highest Average Precision (AP) shows the strongest correlation with the same parameter assessed by experts. The time-series development of disease severity and disease incidence demonstrates the advantages of multispectral UAV-imagery in contrasting varieties for resistance, as well as the limits for disease control measurements. This study identifies key components for automatic leaf segmentation of diseased plants using UAV imagery, such as illumination and disease condition. It also provides a tool for delivering leaf-based parameters relevant to optimize crop production through automated disease quantification by imaging tools.
@article{barreto2023data, title: {Data Augmentation for Mask-Based Leaf Segmentation of UAV-Images as a Basis to Extract Leaf-Based Phenotyping Parameters}, author: {A. Barreto, L. Reifenrath, R. Vogg, F. Sinz, A. Mahlein}, year: {2023}, journal: {KI-Künstliche Intelligenz}, }
J. Schultz, J. van Delden, C. Blech, S. Langer, T. Lüddecke
Deep learning for frequency response prediction of a multimass oscillator
PAMM, 2023
@article{schultz2023deep, title: {Deep learning for frequency response prediction of a multimass oscillator}, author: {J. Schultz, J. van Delden, C. Blech, S. Langer, T. Lüddecke}, year: {2023}, journal: {PAMM}, }
K. Willeke, K. Restivo, K. Franke, A. Nix, S. Cadena, T. Shinn, C. Nealley, G. Rodriguez, S. Patel, A. Ecker, others
Deep learning-driven characterization of single cell tuning in primate visual area V4 unveils topological organization
bioRxiv, 2023
@article{willeke2023deep, title: {Deep learning-driven characterization of single cell tuning in primate visual area V4 unveils topological organization}, author: {K. Willeke, K. Restivo, K. Franke, A. Nix, S. Cadena, T. Shinn, C. Nealley, G. Rodriguez, S. Patel, A. Ecker, others}, year: {2023}, journal: {bioRxiv}, }
J. Fu, S. Shrinivasan, K. Ponder, T. Muhammad, Z. Ding, E. Wang, Z. Ding, D. Tran, P. Fahey, S. Papadopoulos, others
Pattern completion and disruption characterize contextual modulation in mouse visual cortex
bioRxiv, 2023
@article{fu2023pattern, title: {Pattern completion and disruption characterize contextual modulation in mouse visual cortex}, author: {J. Fu, S. Shrinivasan, K. Ponder, T. Muhammad, Z. Ding, E. Wang, Z. Ding, D. Tran, P. Fahey, S. Papadopoulos, others}, year: {2023}, journal: {bioRxiv}, }
M. A. Weis, L. Hansel, T. Lüddecke, A. S. Ecker
Self-Supervised Graph Representation Learning for Neuronal Morphologies
Transactions on Machine Learning Research, 2023
Unsupervised graph representation learning has recently gained interest in several application domains such as neuroscience, where modeling the diverse morphology of cell types in the brain is one of the key challenges. It is currently unknown how many excitatory cortical cell types exist and what their defining morphological features are. Here we present GraphDINO, a purely data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled large-scale datasets. GraphDINO is a novel transformer-based representation learning method for spatially-embedded graphs. To enable self-supervised learning on transformers, we (1) developed data augmentation strategies for spatially-embedded graphs, (2) adapted the positional encoding and (3) introduced a novel attention mechanism, AC-Attention, which combines attention-based global interaction between nodes and classic graph convolutional processing. We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings that are on par with manual feature-based classification by experts, but without using prior knowledge about the structural features of neurons. Moreover, it outperforms previous approaches on quantitative benchmarks predicting expert labels. Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets. It is applicable beyond neuroscience in settings where samples in a dataset are graphs and graph-level embeddings are desired.
@article{weis2023selfsupervised, title: {Self-Supervised Graph Representation Learning for Neuronal Morphologies}, author: {M. A. Weis, L. Hansel, T. Lüddecke, A. S. Ecker}, year: {2023}, journal: {Transactions on Machine Learning Research}, }
P. Turishcheva, P. Fahey, L. Hansel, R. Froebe, K. Ponder, M. Vystrčilová, K. Willeke, M. Bashiri, E. Wang, Z. Ding, others
The Dynamic Sensorium competition for predicting large-scale mouse visual cortex activity from videos
arXiv preprint arXiv:2305.19654, 2023
Understanding how biological visual systems process information is challenging due to the complex nonlinear relationship between neuronal responses and high-dimensional visual input. Artificial neural networks have already improved our understanding of this system by allowing computational neuroscientists to create predictive models and bridge biological and machine vision. During the Sensorium 2022 competition, we introduced benchmarks for vision models with static input. However, animals operate and excel in dynamic environments, making it crucial to study and understand how the brain functions under these conditions. Moreover, many biological theories, such as predictive coding, suggest that previous input is crucial for current input processing. Currently, there is no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we propose the Sensorium 2023 Competition with dynamic input. This includes the collection of a new large-scale dataset from the primary visual cortex of five mice, containing responses from over 38,000 neurons to over 2 hours of dynamic stimuli per neuron. Participants in the main benchmark track will compete to identify the best predictive models of neuronal responses for dynamic input. We will also host a bonus track in which submission performance will be evaluated on out-of-domain input, using withheld neuronal responses to dynamic input stimuli whose statistics differ from the training set. Both tracks will offer behavioral data along with video stimuli. As before, we will provide code, tutorials, and strong pre-trained baseline models to encourage participation. We hope this competition will continue to strengthen the accompanying Sensorium benchmarks collection as a standard tool to measure progress in large-scale neural system identification models of the entire mouse visual hierarchy and beyond.
@article{turishcheva2023dynamic, title: {The Dynamic Sensorium competition for predicting large-scale mouse visual cortex activity from videos}, author: {P. Turishcheva, P. Fahey, L. Hansel, R. Froebe, K. Ponder, M. Vystrčilová, K. Willeke, M. Bashiri, E. Wang, Z. Ding, others}, year: {2023}, journal: {arXiv preprint arXiv:2305.19654}, }
T. Jiang, M. Freudenberg, C. Kleinn, A. Ecker, N. Nölke
The Impacts of Quality-Oriented Dataset Labeling on Tree Cover Segmentation Using U-Net: A Case Study in WorldView-3 Imagery
Remote Sensing, 2023
@article{jiang2023impacts, title: {The Impacts of Quality-Oriented Dataset Labeling on Tree Cover Segmentation Using U-Net: A Case Study in WorldView-3 Imagery}, author: {T. Jiang, M. Freudenberg, C. Kleinn, A. Ecker, N. Nölke}, year: {2023}, journal: {Remote Sensing}, }
E. Wang, P. Fahey, K. Ponder, Z. Ding, A. Chang, T. Muhammad, S. Patel, Z. Ding, D. Tran, J. Fu, others
Towards a foundation model of the mouse visual cortex
bioRxiv, 2023
@article{wang2023towards, title: {Towards a foundation model of the mouse visual cortex}, author: {E. Wang, P. Fahey, K. Ponder, Z. Ding, A. Chang, T. Muhammad, S. Patel, Z. Ding, D. Tran, J. Fu, others}, year: {2023}, journal: {bioRxiv}, }

2022

L. Hoefling, K. Szatko, C. Behrens, Y. Qiu, D. Klindt, Z. Jessen, G. Schwartz, M. Bethge, P. Berens, K. Franke, A. Ecker, T. Euler
A chromatic feature detector in the retina signals visual context changes
bioRxiv, 2022
@article{hoefling2022chromatic, title: {A chromatic feature detector in the retina signals visual context changes}, author: {L. Hoefling, K. Szatko, C. Behrens, Y. Qiu, D. Klindt, Z. Jessen, G. Schwartz, M. Bethge, P. Berens, K. Franke, A. Ecker, T. Euler}, year: {2022}, journal: {bioRxiv}, }
M. Goldin, B. Lefebvre, S. Virgili, M. Pham Van Cang, A. Ecker, T. Mora, U. Ferrari, O. Marre
Context-dependent selectivity to natural images in the retina
Nature Communications, 2022
@article{goldin2022context, title: {Context-dependent selectivity to natural images in the retina}, author: {M. Goldin, B. Lefebvre, S. Virgili, M. Pham Van Cang, A. Ecker, T. Mora, U. Ferrari, O. Marre}, year: {2022}, journal: {Nature Communications}, }
S. Cadena, K. Willeke, K. Restivo, G. Denfield, F. Sinz, M. Bethge, A. Tolias, A. Ecker
Diverse task-driven modeling of macaque V4 reveals functional specialization towards semantic tasks
bioRxiv, 2022
@article{cadena2022diverse, title: {Diverse task-driven modeling of macaque V4 reveals functional specialization towards semantic tasks}, author: {S. Cadena, K. Willeke, K. Restivo, G. Denfield, F. Sinz, M. Bethge, A. Tolias, A. Ecker}, year: {2022}, journal: {bioRxiv}, }
T. Lüddecke, A. S. Ecker
Image Segmentation Using Text and Image Prompts
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
@article{lueddecke2021clipseg, title: {Image Segmentation Using Text and Image Prompts}, author: {T. Lüddecke, A. S. Ecker}, year: {2022}, journal: {2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, }
K. F. Willeke, P. G. Fahey, M. Bashiri, L. Hansel, M. F. Burg, C. Blessing, S. A. Cadena, Z. Ding, K. Lurz, K. Ponder, T. Muhammad, S. S. Patel, A. S. Ecker, A. S. Tolias, F. H. Sinz
Retrospective on the SENSORIUM 2022 competition
PMLR, 2022
The neural underpinning of the biological visual system is challenging to study experimentally, in particular as neuronal activity becomes increasingly nonlinear with respect to visual input. Artificial neural networks (ANNs) can serve a variety of goals for improving our understanding of this complex system, not only serving as predictive digital twins of sensory cortex for novel hypothesis generation in silico, but also incorporating bio-inspired architectural motifs to progressively bridge the gap between biological and machine vision. The mouse has recently emerged as a popular model system to study visual information processing, but no standardized large-scale benchmark to identify state-of-the-art models of the mouse visual system has been established. To fill this gap, we proposed the SENSORIUM benchmark competition. We collected a large-scale dataset from mouse primary visual cortex containing the responses of more than 28,000 neurons across seven mice stimulated with thousands of natural images, together with simultaneous behavioral measurements that include running speed, pupil dilation, and eye movements. The benchmark challenge ranked models based on predictive performance for neuronal responses on a held-out test set, and included two tracks for model input limited to either stimulus only (SENSORIUM) or stimulus plus behavior (SENSORIUM+). As a part of the NeurIPS 2022 competition track, we received 172 model submissions from 26 teams, with the winning teams improving our previous state-of-the-art model by more than 15%. Dataset access and infrastructure for evaluation of model predictions will remain online as an ongoing benchmark. We would like to see this as a starting point for regular challenges and data releases, and as a standard tool for measuring progress in large-scale neural system identification models of the mouse visual system and beyond.
@article{willeke2022bsensorium, title: {Retrospective on the SENSORIUM 2022 competition}, author: {K. F. Willeke, P. G. Fahey, M. Bashiri, L. Hansel, M. F. Burg, C. Blessing, S. A. Cadena, Z. Ding, K. Lurz, K. Ponder, T. Muhammad, S. S. Patel, A. S. Ecker, A. S. Tolias, F. H. Sinz}, year: {2022}, journal: {PMLR}, }
K. F. Willeke, P. G. Fahey, M. Bashiri, L. Pede, M. F. Burg, C. Blessing, S. A. Cadena, Z. Ding, K. Lurz, K. Ponder, T. Muhammad, S. S. Patel, A. S. Ecker, A. S. Tolias, F. H. Sinz
The Sensorium competition on predicting large-scale mouse primary visual cortex activity
arXiv preprint arXiv:2206.08666, 2022
The neural underpinning of the biological visual system is challenging to study experimentally, in particular as the neuronal activity becomes increasingly nonlinear with respect to visual input. Artificial neural networks (ANNs) can serve a variety of goals for improving our understanding of this complex system, not only serving as predictive digital twins of sensory cortex for novel hypothesis generation in silico, but also incorporating bio-inspired architectural motifs to progressively bridge the gap between biological and machine vision. The mouse has recently emerged as a popular model system to study visual information processing, but no standardized large-scale benchmark to identify state-of-the-art models of the mouse visual system has been established. To fill this gap, we propose the Sensorium benchmark competition. We collected a large-scale dataset from mouse primary visual cortex containing the responses of more than 28,000 neurons across seven mice stimulated with thousands of natural images, together with simultaneous behavioral measurements that include running speed, pupil dilation, and eye movements. The benchmark challenge will rank models based on predictive performance for neuronal responses on a held-out test set, and includes two tracks for model input limited to either stimulus only (Sensorium) or stimulus plus behavior (Sensorium+). We provide a starting kit to lower the barrier for entry, including tutorials, pre-trained baseline models, and APIs with one line commands for data loading and submission. We would like to see this as a starting point for regular challenges and data releases, and as a standard tool for measuring progress in large-scale neural system identification models of the mouse visual system and beyond.
@article{willeke2022sensorium, title: {The Sensorium competition on predicting large-scale mouse primary visual cortex activity}, author: {K. F. Willeke, P. G. Fahey, M. Bashiri, L. Pede, M. F. Burg, C. Blessing, S. A. Cadena, Z. Ding, K. Lurz, K. Ponder, T. Muhammad, S. S. Patel, A. S. Ecker, A. S. Tolias, F. H. Sinz}, year: {2022}, journal: {arXiv preprint arXiv:2206.08666}, }

2021

M. A. Weis, K. Chitta, Y. Sharma, W. Brendel, M. Bethge, A. Geiger, A. S. Ecker
Benchmarking Unsupervised Object Representations for Video Sequences
Journal of Machine Learning Research, 2021
@article{Weis2021, title: {Benchmarking Unsupervised Object Representations for Video Sequences}, author: {M. A. Weis, K. Chitta, Y. Sharma, W. Brendel, M. Bethge, A. Geiger, A. S. Ecker}, year: {2021}, journal: {Journal of Machine Learning Research}, }
K. Lurz, M. Bashiri, K. Willeke, A. Jagadish, E. Wang, E. Y. Walker, S. A. Cadena, T. Muhammad, E. Cobos, A. S. Tolias, A. S. Ecker, F. H. Sinz
Generalization in data-driven models of primary visual cortex
International Conference on Learning Representations, 2021
@inproceedings{lurz2021generalization, title: {Generalization in data-driven models of primary visual cortex}, author: {K. Lurz, M. Bashiri, K. Willeke, A. Jagadish, E. Wang, E. Y. Walker, S. A. Cadena, T. Muhammad, E. Cobos, A. S. Tolias, A. S. Ecker, F. H. Sinz}, year: {2021}, booktitle: {International Conference on Learning Representations}, }
M. Burg, S. Cadena, G. Denfield, E. Walker, A. Tolias, M. Bethge, A. Ecker
Learning divisive normalization in primary visual cortex
PLOS Computational Biology, 2021
Divisive normalization (DN) is a prominent computational building block in the brain that has been proposed as a canonical cortical operation. Numerous experimental studies have verified its importance for capturing nonlinear neural response properties to simple, artificial stimuli, and computational studies suggest that DN is also an important component for processing natural stimuli. However, we lack quantitative models of DN that are directly informed by measurements of spiking responses in the brain and applicable to arbitrary stimuli. Here, we propose a DN model that is applicable to arbitrary input images. We test its ability to predict how neurons in macaque primary visual cortex (V1) respond to natural images, with a focus on nonlinear response properties within the classical receptive field. Our model consists of one layer of subunits followed by learned orientation-specific DN. It outperforms linear-nonlinear and wavelet-based feature representations and makes a significant step towards the performance of state-of-the-art convolutional neural network (CNN) models. Unlike deep CNNs, our compact DN model offers a direct interpretation of the nature of normalization. By inspecting the learned normalization pool of our model, we gained insights into a long-standing question about the tuning properties of DN that update the current textbook description: we found that within the receptive field oriented features were normalized preferentially by features with similar orientation rather than non-specifically as currently assumed.
@article{burg_2021_learning_divisive_normalization, title: {Learning divisive normalization in primary visual cortex}, author: {M. Burg, S. Cadena, G. Denfield, E. Walker, A. Tolias, M. Bethge, A. Ecker}, year: {2021}, journal: {PLOS Computational Biology}, }
M. A. Weis, L. Pede, T. Lüddecke, A. S. Ecker
Self-supervised Representation Learning of Neuronal Morphologies
arXiv, 2021
Understanding the diversity of cell types and their function in the brain is one of the key challenges in neuroscience. The advent of large-scale datasets has given rise to the need of unbiased and quantitative approaches to cell type classification. We present GraphDINO, a purely data-driven approach to learning a low dimensional representation of the 3D morphology of neurons. GraphDINO is a novel graph representation learning method for spatial graphs utilizing self-supervised learning on transformer models. It smoothly interpolates between attention-based global interaction between nodes and classic graph convolutional processing. We show that this method is able to yield morphological cell type clustering that is comparable to manual feature-based classification and shows a good correspondence to expert-labeled cell types in two different species and cortical areas. Our method is applicable beyond neuroscience in settings where samples in a dataset are graphs and graph-level embeddings are desired.
@article{weis2021selfsupervised, title: {Self-supervised Representation Learning of Neuronal Morphologies}, author: {M. A. Weis, L. Pede, T. Lüddecke, A. S. Ecker}, year: {2021}, journal: {arXiv}, }
D. Kobak, Y. Bernaerts, M. Weis, F. Scala, A. Tolias, P. Berens
Sparse reduced-rank regression for exploratory visualisation of paired multivariate data
Journal of the Royal Statistical Society: Series C (Applied Statistics), 2021
@article{weis2021sparse, title: {Sparse reduced-rank regression for exploratory visualisation of paired multivariate data}, author: {D. Kobak, Y. Bernaerts, M. Weis, F. Scala, A. Tolias, P. Berens}, year: {2021}, journal: {Journal of the Royal Statistical Society: Series C (Applied Statistics)}, }

2020

V. Benson, A. Ecker
Assessing out-of-domain generalization for robust building damage detection
NeurIPS 2020 Workshop on Artificial Intelligence for Humanitarian Assistance and Disaster Response (AI+HADR 2020), 2020
@inproceedings{benson2020assessing, title: {Assessing out-of-domain generalization for robust building damage detection}, author: {V. Benson, A. Ecker}, year: {2020}, booktitle: {NeurIPS 2020 Workshop on Artificial Intelligence for Humanitarian Assistance and Disaster Response (AI+HADR 2020)}, }
C. Michaelis, M. Bethge, A. Ecker
Closing the generalization gap in one-shot object detection
arXiv preprint arXiv:2011.04267, 2020
@article{michaelis2020closing, title: {Closing the generalization gap in one-shot object detection}, author: {C. Michaelis, M. Bethge, A. Ecker}, year: {2020}, journal: {arXiv preprint arXiv:2011.04267}, }
T. Lüddecke, A. Ecker
CNNs efficiently learn long-range dependencies
NeurIPS 2020 Workshop on Shared Visual Representations in Human & Machine Intelligence, 2020
@inproceedings{luddeckecnns, title: {CNNs efficiently learn long-range dependencies}, author: {T. Lüddecke, A. Ecker}, year: {2020}, booktitle: {NeurIPS 2020 Workshop on Shared Visual Representations in Human & Machine Intelligence}, }
T. Lüddecke, F. Wörgötter
Fine-grained action plausibility rating
Robotics and Autonomous Systems (RAS), 2020
@article{lueddecke20, title: {Fine-grained action plausibility rating}, author: {T. Lüddecke, F. Wörgötter}, year: {2020}, journal: {Robotics and Autonomous Systems (RAS)}, }
M. Rolínek, V. Musil, A. Paulus, M. Vlastelica, C. Michaelis, G. Martius
Optimizing Rank-based Metrics with Blackbox Differentiation
Computer Vision and Pattern Recognition (CVPR), 2020
@inproceedings{Rolínek2019a, title: {Optimizing Rank-based Metrics with Blackbox Differentiation}, author: {M. Rolínek, V. Musil, A. Paulus, M. Vlastelica, C. Michaelis, G. Martius}, year: {2020}, booktitle: {Computer Vision and Pattern Recognition (CVPR)}, }
I. Ustyuzhaninov, S. A. Cadena, E. Froudarakis, P. G. Fahey, E. Y. Walker, E. Cobos, J. Reimer, F. H. Sinz, A. S. Tolias, M. Bethge, A. S. Ecker
Rotation-invariant clustering of functional cell types in primary visual cortex
International Conference on Learning Representations (ICLR), 2020
@inproceedings{Ustyuzhaninov2020a, title: {Rotation-invariant clustering of functional cell types in primary visual cortex}, author: {I. Ustyuzhaninov, S. A. Cadena, E. Froudarakis, P. G. Fahey, E. Y. Walker, E. Cobos, J. Reimer, F. H. Sinz, A. S. Tolias, M. Bethge, A. S. Ecker}, year: {2020}, booktitle: {International Conference on Learning Representations (ICLR)}, }
Z. Zhao, D. Klindt, A. M. Chagas, K. P. Szatko, L. Rogerson, D. Protti, C. Behrens, D. Dalkara, T. Schubert, M. Bethge, K. Franke, P. Berens, A. S. Ecker, T. Euler
The temporal structure of the inner retina at a single glance
Scientific Reports, 2020
@article{Z*2020a, title: {The temporal structure of the inner retina at a single glance}, author: {Z. Zhao, D. Klindt, A. M. Chagas, K. P. Szatko, L. Rogerson, D. Protti, C. Behrens, D. Dalkara, T. Schubert, M. Bethge, K. Franke, P. Berens, A. S. Ecker, T. Euler}, year: {2020}, journal: {Scientific Reports}, }

2019

A. S. Ecker, F. H. Sinz, E. Froudarakis, P. G. Fahey, S. A. Cadena, E. Y. Walker, E. Cobos, J. Reimer, A. S. Tolias, M. Bethge
A rotation-equivariant convolutional neural network model of primary visual cortex
International Conference on Learning Representations (ICLR), 2019
@inproceedings{ecker_2019, title: {A rotation-equivariant convolutional neural network model of primary visual cortex}, author: {A. S. Ecker, F. H. Sinz, E. Froudarakis, P. G. Fahey, S. A. Cadena, E. Y. Walker, E. Cobos, J. Reimer, A. S. Tolias, M. Bethge}, year: {2019}, journal: {International Conference on Learning Representations (ICLR)}, }
C. Michaelis, B. Mitzkus, R. Geirhos, E. Rusak, O. Bringmann, A. S. Ecker, M. Bethge, W. Brendel
Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming
Machine Learning for Autonomous Driving Workshop, NeurIPS 2019, 2019
@inproceedings{michaelis2019dragon, title: {Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming}, author: {C. Michaelis, B. Mitzkus, R. Geirhos, E. Rusak, O. Bringmann, A. S. Ecker, M. Bethge, W. Brendel}, year: {2019}, booktitle: {Machine Learning for Autonomous Driving Workshop, NeurIPS 2019}, }
T. Lüddecke, T. Kulvicius, F. Wörgötter
Context-based Affordance Segmentation from 2D Images for Robot Action
Robotics and Autonomous Systems (RAS), 2019
@article{lueddecke19a, title: {Context-based Affordance Segmentation from 2D Images for Robot Action}, author: {T. Lüddecke, T. Kulvicius, F. Wörgötter}, year: {2019}, journal: {Robotics and Autonomous Systems (RAS)}, }
S. A. Cadena, G. H. Denfield, E. Y. Walker, L. A. Gatys, A. S. Tolias, M. Bethge, A. S. Ecker
Deep convolutional models improve predictions of macaque V1 responses to natural images
PLoS Computational Biology, 2019
@article{Cadena2019, title: {Deep convolutional models improve predictions of macaque V1 responses to natural images}, author: {S. A. Cadena, G. H. Denfield, E. Y. Walker, L. A. Gatys, A. S. Tolias, M. Bethge, A. S. Ecker}, year: {2019}, journal: {PLoS Computational Biology}, }
T. Lüddecke, A. Agostini, M. Fauth, M. Tamosiunaite, F. Wörgötter
Distributional Semantics of Objects in Visual Scenes in Comparison to Text
Artificial Intelligence, 2019
@article{lueddecke19, title: {Distributional Semantics of Objects in Visual Scenes in Comparison to Text}, author: {T. Lüddecke, A. Agostini, M. Fauth, M. Tamosiunaite, F. Wörgötter}, year: {2019}, journal: {Artificial Intelligence}, }
S. A. Cadena, F. H. Sinz, T. Muhammad, E. Froudarakis, E. Cobos, E. Y. Walker, J. Reimer, M. Bethge, A. Tolias, A. S. Ecker
How well do deep neural networks trained on object recognition characterize the mouse visual system?
NeurIPS Neuro AI Workshop, 2019
@inproceedings{Cadena2019b, title: {How well do deep neural networks trained on object recognition characterize the mouse visual system?}, author: {S. A. Cadena, F. H. Sinz, T. Muhammad, E. Froudarakis, E. Cobos, E. Y. Walker, J. Reimer, M. Bethge, A. Tolias, A. S. Ecker}, year: {2019}, journal: {NeurIPS Neuro AI Workshop}, }
R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, W. Brendel
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
International Conference on Learning Representations (ICLR), 2019
@inproceedings{Geirhos2019a, title: {ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness}, author: {R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, W. Brendel}, year: {2019}, journal: {International Conference on Learning Representations (ICLR)}, }
E. Y. Walker, F. H. Sinz, E. Froudarakis, P. G. Fahey, T. Muhammad, A. S. Ecker, E. Cobos, J. Reimer, X. Pitkow, A. S. Tolias
Inception loops discover what excites neurons most using deep predictive models
Nature Neuroscience, 2019
@article{Walker2019, title: {Inception loops discover what excites neurons most using deep predictive models}, author: {E. Y. Walker, F. H. Sinz, E. Froudarakis, P. G. Fahey, T. Muhammad, A. S. Ecker, E. Cobos, J. Reimer, X. Pitkow, A. S. Tolias}, year: {2019}, journal: {Nature Neuroscience}, }

2018

G. H. Denfield, A. S. Ecker, T. J. Shinn, M. Bethge, A. S. Tolias
Attentional fluctuations induce shared variability in macaque primary visual cortex
Nature Communications, 2018
@article{Denfield2018, title: {Attentional fluctuations induce shared variability in macaque primary visual cortex}, author: {G. H. Denfield, A. S. Ecker, T. J. Shinn, M. Bethge, A. S. Tolias}, year: {2018}, journal: {Nature Communications}, }
S. A. Cadena, M. A. Weis, L. A. Gatys, M. Bethge, A. S. Ecker
Diverse feature visualizations reveal invariances in early layers of deep neural networks
The European Conference on Computer Vision (ECCV), 2018
@inproceedings{Cadena2018a, title: {Diverse feature visualizations reveal invariances in early layers of deep neural networks}, author: {S. A. Cadena, M. A. Weis, L. A. Gatys, M. Bethge, A. S. Ecker}, year: {2018}, journal: {The European Conference on Computer Vision (ECCV)}, }
M. Subramaniyan, A. S. Ecker, S. S. Patel, R. J. Cotton, M. Bethge, X. Pitkow, P. Berens, A. S. Tolias
Faster processing of moving compared with flashed bars in awake macaque V1 provides a neural correlate of the flash lag illusion
Journal of Neurophysiology, 2018
@article{Subramaniyan2018a, title: {Faster processing of moving compared with flashed bars in awake macaque V1 provides a neural correlate of the flash lag illusion}, author: {M. Subramaniyan, A. S. Ecker, S. S. Patel, R. J. Cotton, M. Bethge, X. Pitkow, P. Berens, A. S. Tolias}, year: {2018}, journal: {Journal of Neurophysiology}, }
T. S. A. Wallis, C. M. Funke, A. S. Ecker, L. A. Gatys, F. A. Wichmann, M. Bethge
Image content is more important than Bouma's Law for scene metamers
bioRxiv, 2018
@article{Wallis2018a, title: {Image content is more important than Bouma's Law for scene metamers}, author: {T. S. A. Wallis, C. M. Funke, A. S. Ecker, L. A. Gatys, F. A. Wichmann, M. Bethge}, year: {2018}, journal: {bioRxiv}, }
S. Schneider, A. S. Ecker, J. H. Macke, M. Bethge
Multi-Task Generalization and Adaptation between Noisy Digit Datasets: An Empirical Study
NeurIPS Continual Learning Workshop, 2018
@inproceedings{schneider2018multitask, title: {Multi-Task Generalization and Adaptation between Noisy Digit Datasets: An Empirical Study}, author: {S. Schneider, A. S. Ecker, J. H. Macke, M. Bethge}, year: {2018}, journal: {NeurIPS Continual Learning Workshop}, }
C. Michaelis, I. Ustyuzhaninov, M. Bethge, A. S. Ecker
One-Shot Instance Segmentation
arXiv, 2018
@article{Michaelis2018b, title: {One-Shot Instance Segmentation}, author: {C. Michaelis, I. Ustyuzhaninov, M. Bethge, A. S. Ecker}, year: {2018}, journal: {arXiv}, }
C. Michaelis, M. Bethge, A. S. Ecker
One-Shot Segmentation in Clutter
ICML, 2018
@inproceedings{Michaelis2018a, title: {One-Shot Segmentation in Clutter}, author: {C. Michaelis, M. Bethge, A. S. Ecker}, year: {2018}, booktitle: {ICML}, }
S. Schneider, A. S. Ecker, J. H. Macke, M. Bethge
Salad: A Toolbox for Semi-supervised Adaptive Learning Across Domains
NeurIPS Machine Learning Open Source Software Workshop, 2018
@inproceedings{schneider2018salad, title: {Salad: A Toolbox for Semi-supervised Adaptive Learning Across Domains}, author: {S. Schneider, A. S. Ecker, J. H. Macke, M. Bethge}, year: {2018}, journal: {NeurIPS Machine Learning Open Source Software Workshop}, }
F. H. Sinz, A. S. Ecker, P. G. Fahey, E. Y. Walker, E. Cobos, E. Froudarakis, D. Yatsenko, X. Pitkow, J. Reimer, A. S. Tolias
Stimulus domain transfer in recurrent models for large scale cortical population prediction on video
Advances in Neural Information Processing Systems 31, 2018
@inproceedings{Sinz2018a, title: {Stimulus domain transfer in recurrent models for large scale cortical population prediction on video}, author: {F. H. Sinz, A. S. Ecker, P. G. Fahey, E. Y. Walker, E. Cobos, E. Froudarakis, D. Yatsenko, X. Pitkow, J. Reimer, A. S. Tolias}, year: {2018}, booktitle: {Advances in Neural Information Processing Systems 31}, }

2017

T. S. A. Wallis, C. M. Funke, A. S. Ecker, L. A. Gatys, F. A. Wichmann, M. Bethge
A Parametric Texture Model Based on Deep Convolutional Features Closely Matches Texture Appearance for Humans
Journal of Vision, 2017
@article{wallis_parametric_2017, title: {A Parametric Texture Model Based on Deep Convolutional Features Closely Matches Texture Appearance for Humans}, author: {T. S. A. Wallis, C. M. Funke, A. S. Ecker, L. A. Gatys, F. A. Wichmann, M. Bethge}, year: {2017}, journal: {Journal of Vision}, }
L. A. Gatys, A. S. Ecker, M. Bethge, A. Hertzmann, E. Shechtman
Controlling Perceptual Factors in Neural Style Transfer
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017
@inproceedings{Gatys2017a, title: {Controlling Perceptual Factors in Neural Style Transfer}, author: {L. A. Gatys, A. S. Ecker, M. Bethge, A. Hertzmann, E. Shechtman}, year: {2017}, booktitle: {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, }
T. Lüddecke, F. Wörgötter
Learning to Segment Affordances
IEEE International Conference on Computer Vision Workshops (ICCVW), 2017
@inproceedings{lueddecke17, title: {Learning to Segment Affordances}, author: {T. Lüddecke, F. Wörgötter}, year: {2017}, booktitle: {IEEE International Conference on Computer Vision Workshops (ICCVW)}, }
D. Klindt, A. S. Ecker, T. Euler, M. Bethge
Neural system identification for large populations separating “what” and “where”
Advances in Neural Information Processing Systems 30, 2017
@inproceedings{Klindt*2017a, title: {Neural system identification for large populations separating “what” and “where”}, author: {D. Klindt, A. S. Ecker, T. Euler, M. Bethge}, year: {2017}, booktitle: {Advances in Neural Information Processing Systems 30}, }
C. M. Funke, L. A. Gatys, A. S. Ecker, M. Bethge
Synthesising Dynamic Textures using Convolutional Neural Networks
arXiv, 2017
@techreport{Funke2017, title: {Synthesising Dynamic Textures using Convolutional Neural Networks}, author: {C. M. Funke, L. A. Gatys, A. S. Ecker, M. Bethge}, year: {2017}, journal: {arXiv}, }
L. A. Gatys, A. S. Ecker, M. Bethge
Texture and art with deep neural networks
Current Opinion in Neurobiology, 2017
@article{Gatys2017b, title: {Texture and art with deep neural networks}, author: {L. A. Gatys, A. S. Ecker, M. Bethge}, year: {2017}, journal: {Current Opinion in Neurobiology}, }

2016

L. A. Gatys, A. S. Ecker, M. Bethge
Image Style Transfer Using Convolutional Neural Networks
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016
@inproceedings{Gatys2016a, title: {Image Style Transfer Using Convolutional Neural Networks}, author: {L. A. Gatys, A. S. Ecker, M. Bethge}, year: {2016}, booktitle: {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, }
A. S. Ecker, G. H. Denfield, M. Bethge, A. S. Tolias
On the Structure of Neuronal Population Activity under Fluctuations in Attentional State
Journal of Neuroscience, 2016
@article{Ecker2016, title: {On the Structure of Neuronal Population Activity under Fluctuations in Attentional State}, author: {A. S. Ecker, G. H. Denfield, M. Bethge, A. S. Tolias}, year: {2016}, journal: {Journal of Neuroscience}, }

2015

L. A. Gatys, A. S. Ecker, M. Bethge
A Neural Algorithm of Artistic Style
arXiv, 2015
@article{Gatys2015c, title: {A Neural Algorithm of Artistic Style}, author: {L. A. Gatys, A. S. Ecker, M. Bethge}, year: {2015}, journal: {arXiv}, }
X. Jiang, S. Shen, C. Cadwell, P. Berens, F. Sinz, A. S. Ecker, S. Patel, A. Tolias
Principles of connectivity among morphologically defined cell types in adult neocortex
Science, 2015
@article{Jiang2015a, title: {Principles of connectivity among morphologically defined cell types in adult neocortex}, author: {X. Jiang, S. Shen, C. Cadwell, P. Berens, F. Sinz, A. S. Ecker, S. Patel, A. Tolias}, year: {2015}, journal: {Science}, }
L. A. Gatys, A. S. Ecker, T. Tchumatchenko, M. Bethge
Synaptic unreliability facilitates information transmission in balanced cortical populations
Physical Review E, 2015
@article{Gatys2015a, title: {Synaptic unreliability facilitates information transmission in balanced cortical populations}, author: {L. A. Gatys, A. S. Ecker, T. Tchumatchenko, M. Bethge}, year: {2015}, journal: {Physical Review E}, }
L. A. Gatys, A. S. Ecker, M. Bethge
Texture Synthesis Using Convolutional Neural Networks
Advances in Neural Information Processing Systems 28, 2015
@inproceedings{Gatys2015b, title: {Texture Synthesis Using Convolutional Neural Networks}, author: {L. A. Gatys, A. S. Ecker, M. Bethge}, year: {2015}, booktitle: {Advances in Neural Information Processing Systems 28}, }

2014

A. S. Ecker, A. S. Tolias
Is there signal in the noise?
Nature Neuroscience, 2014
@article{Ecker2014a, title: {Is there signal in the noise?}, author: {A. S. Ecker, A. S. Tolias}, year: {2014}, journal: {Nature Neuroscience}, }
E. Froudarakis, P. Berens, A. S. Ecker, R. J. Cotton, F. H. Sinz, D. Yatsenko, P. Saggau, M. Bethge, A. S. Tolias
Population code in mouse V1 facilitates read-out of natural scenes through increased sparseness
Nature Neuroscience, 2014
@article{Froudarakis2014a, title: {Population code in mouse V1 facilitates read-out of natural scenes through increased sparseness}, author: {E. Froudarakis, P. Berens, A. S. Ecker, R. J. Cotton, F. H. Sinz, D. Yatsenko, P. Saggau, M. Bethge, A. S. Tolias}, year: {2014}, journal: {Nature Neuroscience}, }
A. S. Ecker, P. Berens, R. J. Cotton, M. Subramaniyan, G. H. Denfield, C. R. Cadwell, S. M. Smirnakis, M. Bethge, A. S. Tolias
State dependence of noise correlations in macaque primary visual cortex
Neuron, 2014
@article{2014a, title: {State dependence of noise correlations in macaque primary visual cortex}, author: {A. S. Ecker, P. Berens, R. J. Cotton, M. Subramaniyan, G. H. Denfield, C. R. Cadwell, S. M. Smirnakis, M. Bethge, A. S. Tolias}, year: {2014}, journal: {Neuron}, }

2013

M. Subramaniyan, A. S. Ecker, P. Berens, A. S. Tolias
Macaque monkeys perceive the flash lag illusion
PLoS ONE, 2013
@article{2013a, title: {Macaque monkeys perceive the flash lag illusion}, author: {M. Subramaniyan, A. S. Ecker, P. Berens, A. S. Tolias}, year: {2013}, journal: {PLoS ONE}, }

2012

P. Berens, A. S. Ecker, R. J. Cotton, W. J. Ma, M. Bethge, A. S. Tolias
A fast and simple population code for orientation in primate V1
Journal of Neuroscience, 2012
@article{Berens2012b, title: {A fast and simple population code for orientation in primate V1}, author: {P. Berens, A. S. Ecker, R. J. Cotton, W. J. Ma, M. Bethge, A. S. Tolias}, year: {2012}, journal: {Journal of Neuroscience}, }

2011

P. Berens, A. S. Ecker, S. Gerwinn, A. S. Tolias, M. Bethge
Reassessing optimal neural population codes with neurometric functions
Proceedings of the National Academy of Sciences of the United States of America, 2011
@article{Berens2011a, title: {Reassessing optimal neural population codes with neurometric functions}, author: {P. Berens, A. S. Ecker, S. Gerwinn, A. S. Tolias, M. Bethge}, year: {2011}, journal: {Proceedings of the National Academy of Sciences of the United States of America}, }
A. S. Ecker, P. Berens, A. S. Tolias, M. Bethge
The effect of noise correlations in populations of diversely tuned neurons
The Journal of Neuroscience, 2011
@article{Ecker2011a, title: {The effect of noise correlations in populations of diversely tuned neurons}, author: {A. S. Ecker, P. Berens, A. S. Tolias, M. Bethge}, year: {2011}, journal: {The Journal of Neuroscience}, }