ECVP tutorials – registration closed
There will be six tutorials on Sunday 28 August: three in the morning (9:30 to 12:30, including breaks) and three in the afternoon (14:00 to 17:00, including breaks); see below. Each tutorial can accommodate approximately 75 ECVP participants.
1. Visual Neuroscience meets Machine Learning (Tim Kietzmann, Kate Storrs, Adrien Doerig, Gemma Roig, Umut Guclu)
A central approach to understanding how our brains process visual information is computational modelling. Models provide a more abstract description of the ongoing processes, test hypotheses, and render implicit assumptions explicit. Following the success of deep learning in the domain of AI, recent years have seen an influx of deep neural network models into visual neuroscience, too, where they act as image-computable, task-performing, and normative models, in which millions of parameters are used to encode world knowledge. This tutorial will introduce the overall rationale behind this research program and highlight advances in how computational neuroscientists use deep neural networks to improve our understanding of biological function.
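A common way such networks are used as models of visual cortex is the encoding-model approach: activations from a network layer are regressed onto measured neural responses. The sketch below illustrates the idea with randomly generated stand-in features and simulated responses (all names and values here are illustrative assumptions, not part of the tutorial materials); in practice the features would come from a pretrained network and the responses from recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: "DNN activations" for 400 images (100 features each)
# stand in for the outputs of a pretrained network layer.
features = rng.standard_normal((400, 100))

# Simulated neural responses: a linear readout of the features plus noise.
true_weights = rng.standard_normal(100)
responses = features @ true_weights + 0.1 * rng.standard_normal(400)

def fit_ridge(X, y, alpha=1.0):
    """Ridge regression: w = (X'X + alpha*I)^{-1} X'y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

# Fit the encoding model and check how well it predicts the responses.
w = fit_ridge(features, responses)
predicted = features @ w
r = np.corrcoef(predicted, responses)[0, 1]
print(f"prediction correlation: {r:.3f}")
```

The same recipe scales up directly: replace the random matrix with real layer activations and the simulated vector with voxel or neuron responses.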
2. Eye movements recording, analysis, and modeling (Olga Shurygina & Nicolas Roth)
Vision is not a passive process: by moving our eyes, we actively sample information from the environment. In the first part of this tutorial, we will give an overview of how eye movement data is collected, preprocessed, and analyzed. We will briefly review the different types of eye movements and delve into the characteristics of rapid, foveating eye movements – saccades. Participants will get an impression of how to set up an eye-tracking experiment, extract saccades from the recording, and analyze them according to the purpose of the study. In the second part, we will talk about the mechanisms that drive our eye movements. How we decide when and where to move our eyes depends on a number of external factors and internal mechanisms. We will discuss what attracts our attention, what mechanisms underlie our eye movements, and how we can build computational models to simulate them. In the process, participants will get hands-on experience with implementing attentional mechanisms in Python and learn how these mechanisms drive our gaze behavior.
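A standard way to extract saccades from a gaze recording is a velocity threshold: samples whose angular speed exceeds some cutoff are labeled saccadic. The following is a minimal sketch of that idea on a synthetic trace; the sampling rate, threshold, and sigmoid displacement profile are illustrative assumptions, not the tutorial's actual pipeline.

```python
import numpy as np

def detect_saccades(x, y, fs=1000.0, vel_threshold=30.0):
    """Label saccades as runs of samples whose gaze speed (deg/s)
    exceeds a fixed threshold; return (onset, offset) sample indices."""
    vx = np.gradient(x) * fs          # horizontal velocity, deg/s
    vy = np.gradient(y) * fs          # vertical velocity, deg/s
    speed = np.hypot(vx, vy)
    fast = speed > vel_threshold
    # Rising/falling edges of the boolean mask mark onsets and offsets.
    edges = np.diff(fast.astype(int))
    onsets = np.flatnonzero(edges == 1) + 1
    offsets = np.flatnonzero(edges == -1) + 1
    return list(zip(onsets, offsets))

# Synthetic gaze trace at 1000 Hz: fixation at (0, 0), a 10-degree
# rightward saccade around sample 500, then fixation at (10, 0).
t = np.arange(1000)
x = 10.0 / (1.0 + np.exp(-(t - 500) / 10.0))  # sigmoid position profile
y = np.zeros_like(x)

saccades = detect_saccades(x, y)
print(saccades)
```

Real recordings additionally require blink removal, noise filtering, and minimum-duration criteria, which algorithms covered in eye-tracking courses (e.g., the Engbert–Kliegl adaptive threshold) address more carefully.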
3. Spatiotemporal signals in vision models: diving in with Jupyter Notebook (Lynn Schmittwilken, Joris Vincent)
Images and movies can be described mathematically as spatiotemporally varying signals with specific spectral components. Early visual mechanisms respond to these components, and computational models of early visual processes therefore often incorporate spectral analysis. Understanding the fundamentals of these techniques is thus relevant (1) for understanding early visual processes, and (2) for characterising low-level aspects of experimental stimuli used to study vision at all levels of the visual hierarchy. In this hands-on tutorial, we explain how (moving) images can be decomposed into their constituent spatial and temporal frequency components. Using the Jupyter Notebook interactive programming environment, we will explore how these different components make up visual stimuli ranging from simple to more naturalistic images. At the end of the tutorial, you will be able to use Jupyter Notebook to analyse (experimental) stimuli and their low-level visual features. One reason to decompose stimuli into their frequency components is to selectively remove individual components using spatiotemporal filters. The receptive fields of neurons in the early visual system are often characterised as such filters. Therefore, in the second part of this tutorial, we will implement early receptive fields and explore what spatiotemporal information the visual system is most sensitive to. The goal here is to build up a small, working model of early vision that captures common features such as the human contrast sensitivity function.
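Both halves of the tutorial can be previewed in a few lines of NumPy: a sinusoidal grating has exactly one spatial frequency, which the 2D Fourier transform recovers, and a difference-of-Gaussians receptive field passes mid frequencies while suppressing low and high ones. This is an illustrative sketch (grid size, filter widths, and frequencies are assumptions of ours, not the tutorial's notebooks).

```python
import numpy as np

# Build a vertical sine grating with 4 cycles across a 64x64 image.
size = 64
cycles = 4
yy, xx = np.mgrid[0:size, 0:size]
grating = np.sin(2 * np.pi * cycles * xx / size)

# The shifted 2D amplitude spectrum peaks at +-4 cycles along the
# horizontal frequency axis (DC sits at the center after fftshift).
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(grating)))
peak_row, peak_col = np.unravel_index(np.argmax(spectrum), spectrum.shape)
print("peak frequency offset:", peak_col - size // 2)

def dog_filter_response(freq_cycles, sigma_c=1.5, sigma_s=3.0):
    """Amplitude response of a difference-of-Gaussians receptive field
    at a given spatial frequency (the Fourier transform of a DoG is
    itself a difference of Gaussians in frequency)."""
    f = freq_cycles / size  # convert cycles/image to cycles/pixel
    return (np.exp(-2 * (np.pi * f * sigma_c) ** 2)
            - np.exp(-2 * (np.pi * f * sigma_s) ** 2))
```

Evaluating `dog_filter_response` across frequencies traces out a band-pass curve: zero response to uniform fields, a peak at intermediate frequencies, and attenuation of fine detail, qualitatively like the human contrast sensitivity function.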
4. Encoding and decoding models in neuroimaging (Janneke Jehee, Iris Groen, Serge Dumoulin, Ilona Bloem)
Encoding and decoding models are widely used in the analysis of brain data. This workshop will provide rationale for and hands-on experience with such models. Attendees will learn about techniques to extract population receptive fields (pRFs, Serge Dumoulin) and other cortical response properties (Ilona Bloem) from neuroimaging data, temporal models to predict neural activity measured with ECoG (Iris Groen), and probabilistic methods to decode the information contained in population-level responses on a trial-by-trial basis (Janneke Jehee).
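The pRF idea at the heart of this workshop can be sketched compactly: a voxel's receptive field is modeled as a 2D Gaussian over the visual field, and its predicted response to a stimulus is the overlap between the stimulus aperture and that Gaussian. The toy example below (grid extent, pRF positions, and sizes are our own illustrative assumptions, not workshop code) shows why a pRF in the right hemifield responds more to a right-half-field stimulus.

```python
import numpy as np

# Visual field grid in degrees of visual angle.
xs = np.linspace(-10, 10, 101)
X, Y = np.meshgrid(xs, xs)

def prf_response(stim, x0, y0, sigma):
    """Predicted response of a pRF modeled as an isotropic 2D Gaussian:
    the dot product of the stimulus aperture with the Gaussian field."""
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return float((stim * g).sum())

# Stimulus: an aperture covering the right half of the visual field.
bar = (X > 0).astype(float)

# A pRF centered at (5, 0) overlaps the stimulus far more than one at (-5, 0).
right = prf_response(bar, 5.0, 0.0, sigma=2.0)
left = prf_response(bar, -5.0, 0.0, sigma=2.0)
print(right > left)
```

Actual pRF mapping inverts this logic: given measured responses to many such apertures, one searches for the (x0, y0, sigma) that best predicts the time series of each voxel.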
5. Neuromodulation using Transcranial Ultrasound Stimulation (Lennart Verhagen)
To understand the mind and brain, we need an integrated approach of theory, measurement, and intervention. We have seen a revolution in theory and imaging, but we lack the proper tools to probe and modulate neural circuits. Current neuromodulation techniques are either highly invasive or restricted to the surface of the brain. Transcranial ultrasound has the potential to overcome these limitations. By focussing ultrasound through the skull, we can target small, deep brain structures that were previously inaccessible to non-invasive brain stimulation, such as the lateral geniculate nucleus. We will discuss how we can use low-intensity ultrasound to stimulate safely and with unprecedented precision. We will introduce the underlying physics, biophysical mechanisms, and safety considerations before diving into experimental design and current challenges, wrapping up with a roadmap for the adoption of this new technique to study the visual system.
6. Theory and practice of Bayesian inference using JASP (Johnny van Doorn)
This workshop will provide attendees with a gentle introduction to Bayesian statistics and demonstrate how to perform various Bayesian analyses (e.g., t-test, ANOVA, regression) using the JASP statistical software. Attendees will come away understanding the “why” and “how” of Bayesian estimation and hypothesis testing. The workshop is relevant to any student or researcher who wishes to draw conclusions from empirical data; no background in Bayesian statistics is required.
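JASP itself is point-and-click software, but the estimation logic it wraps can be seen in the simplest conjugate case: a normal prior on a mean combined with normally distributed data yields a normal posterior in closed form. The sketch below is our own minimal illustration of that update (the prior, noise variance, and simulated data are assumptions for the example, not anything computed by JASP).

```python
import numpy as np

def update_normal_mean(prior_mu, prior_var, data, noise_var):
    """Conjugate Bayesian update for the mean of a normal likelihood
    with known noise variance and a normal prior on the mean."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mu = post_var * (prior_mu / prior_var + np.sum(data) / noise_var)
    return post_mu, post_var

# Simulated data: 50 observations with true mean 2.0 and unit noise.
rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=50)

# Start from a vague prior centered at 0; the data pull the posterior
# toward the true mean and sharply shrink its variance.
mu, var = update_normal_mean(prior_mu=0.0, prior_var=10.0,
                             data=data, noise_var=1.0)
print(f"posterior mean {mu:.2f}, posterior sd {np.sqrt(var):.2f}")
```

The same prior-times-likelihood logic underlies the t-tests, ANOVAs, and regressions in JASP; the software simply handles the harder, non-conjugate integrals numerically.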