Workshop overview
As mentioned on the Welcome page, this workshop focuses on machine and deep learning within the context of neuroscience. The general idea is to split the time between theoretical background with a few examples and hands-on project work. For the former, we will provide a brief overview of central topics and important issues in the respective fields; for the latter, we will evaluate the feasibility of the corresponding approaches within participants' own research. Both aspects, as well as the setup, are explained below.
The framework and setup
The entire workshop will be conducted via the Jupyter ecosystem, utilizing the Python programming language for all examples, both within the theoretical background and the hands-on sessions. All materials will be provided within the Jupyter Book format you are currently looking at, free for everyone to check out, try, and utilize further. To help folks who don't have any experience with these resources, we compiled a set of tutorials that participants can go through within the prerequisite section. While this won't be enough to go beyond basic skills, we hope it will help you familiarize yourself with the core aspects needed during the workshop. Each of these tutorials, as well as the "main" materials, will be provided as Jupyter notebooks that mix explanations and code and will be presented as a "slide show" during the workshop. They can be simply viewed, run interactively via cloud instances (via mybinder), or run locally. Depending on a given participant's computational resources and infrastructure, we provide multiple ways to participate in the workshop, as outlined in the Setup for the workshop section.
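For participants who plan to run the notebooks locally, a minimal sketch of an environment check is shown below. The package names used here are only illustrative assumptions of a typical scientific Python stack; the authoritative list of requirements is given in the Setup for the workshop section.

```python
# Minimal, illustrative environment check; the actual requirements are listed
# in the "Setup for the workshop" section.
import importlib
import sys

print(f"Python version: {sys.version.split()[0]}")

# Example packages of a typical scientific Python stack (assumed, not the
# definitive workshop requirements)
for package in ["numpy", "pandas", "matplotlib", "sklearn"]:
    try:
        module = importlib.import_module(package)
        print(f"{package}: {getattr(module, '__version__', 'unknown version')}")
    except ImportError:
        print(f"{package}: not installed")
```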
The content I - theoretical background
Within the first half of the workshop, we will go through some core and important aspects, starting with "classic" machine learning, followed by deep learning. For each, the necessary vocabulary will be explained and central elements explored based on example datasets. This will entail an overview of different models, how to train, fit, and evaluate them, as well as some more advanced topics such as model biases and transfer learning. The idea and aim of this section is to create a foundation for everyone to actively participate in the second half of the workshop, the hands-on session.
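To give a flavor of the train/fit/evaluate workflow mentioned above, here is a minimal sketch using scikit-learn and its built-in iris dataset. Both the dataset and the model are illustrative assumptions; the actual examples used in the workshop may differ.

```python
# Minimal sketch of a train/fit/evaluate workflow (illustrative; not the
# exact datasets or models used in the workshop).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load an example dataset and split it into training and test sets
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Fit a simple classifier on the training data
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Evaluate the fitted model on held-out data
y_pred = model.predict(X_test)
print(f"Test accuracy: {accuracy_score(y_test, y_pred):.2f}")
```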
The content II - hands-on
This part will mark a clear difference to other comparable workshops, as we will spend the second half of the day solely on applying the content of the morning to the participants' datasets. That is, we will explore whether these methods actually make sense to apply, how the data should be prepared, whether pre-existing and pre-trained models are available, and how to start a potential analysis pipeline. Given that everyone will bring their own dataset and we will explore rather freely, we won't be able to share the respective materials here. However, we will of course try to provide a summary of the things we did and the questions or problems we encountered.