Welcome
Within these pages we would like to provide an (interactive) walkthrough of our publication “A large scale meta-analytic view on the organization of the auditory cortex: clustering, co-activation, functional profiles and gradient embedding” (please check the paper and the preprint). This walkthrough outlines the applied analysis steps and methods in more detail, including the code used and, where possible, results that can be rerun in an interactive manner. You can navigate between sections via the TOC on the left side and within sections via the TOC on the right side. The three symbols at the top allow you to enable full-screen mode, visit the underlying GitHub repository, and download the walkthrough as a PDF or Jupyter notebook, respectively. Some sections will additionally have a little rocket in that row which allows you to interactively rerun certain analyses via cloud computing. Additionally, we support public reviews and comments through a hypothes.is plugin with which you can interact on the right side. All of this awesomeness (talking about the infrastructure and resources) is made possible by the dedicated and second-to-none work of the Jupyter community, specifically the Executable/Jupyter Book and mybinder projects.
A large scale meta-analytic view on the auditory cortex
One major problem in neuroscience (and basically every other scientific field) is the tremendous variability of experimental paradigms and analysis methods, which together create a lot of degrees of freedom. While this may not sound so bad at first, one has to consider and incorporate two essential points within this discussion: the absence of a ground truth and the pronounced paradigm and analysis dependency. Concerning the first point, we usually don’t know the ground truth of a given cognitive process, cortical parcellation, etc. Thus, evaluating/comparing different paradigms and analysis methods and making corresponding claims is not possible, or only within a very limited scope. This directly relates to the second point: every little change in either the paradigm and/or the analysis method can and will change the outcomes, more or less drastically. Obviously, the auditory cortex is no exception to all of this. On the contrary, results concerning the auditory cortex appear to be less reliable and stable compared to, e.g., the visual and motor cortices [GCR+16]. The graphic below provides a very restricted overview of potential paradigm and analysis choices.
import pandas as pd
import numpy as np
import seaborn as sns
import plotly.graph_objects as go
from IPython.display import display, HTML
from plotly.offline import init_notebook_mode, plot
# (parent choice, child choice) pairs spanning the decision tree,
# from task design down to the analysis method
source_dest = [
['task', 'passive listening'],
['task', 'active task'],
['passive listening', 'entire stimulus'],
['passive listening', 'excerpt of stimulus'],
['active task', 'n-back'],
['active task', 'differentiation'],
['active task', 'imagination'],
['n-back', 'entire stimulus'],
['n-back', 'excerpt of stimulus'],
['differentiation', 'entire stimulus'],
['differentiation', 'excerpt of stimulus'],
['imagination', 'entire stimulus'],
['imagination', 'excerpt of stimulus'],
['entire stimulus', 'scrambled'],
['entire stimulus', 'intact'],
['entire stimulus', 'intelligible'],
['entire stimulus', 'unintelligible'],
['excerpt of stimulus', 'scrambled'],
['excerpt of stimulus', 'intact'],
['excerpt of stimulus', 'intelligible'],
['excerpt of stimulus', 'unintelligible'],
['scrambled', 'music'],
['scrambled', 'speech'],
['scrambled', 'singing'],
['scrambled', 'natural sounds'],
['scrambled', 'artificial sounds'],
['intact', 'music'],
['intact', 'speech'],
['intact', 'singing'],
['intact', 'natural sounds'],
['intact', 'artificial sounds'],
['intelligible', 'music'],
['intelligible', 'speech'],
['intelligible', 'singing'],
['intelligible', 'natural sounds'],
['intelligible', 'artificial sounds'],
['unintelligible', 'music'],
['unintelligible', 'speech'],
['unintelligible', 'singing'],
['unintelligible', 'natural sounds'],
['unintelligible', 'artificial sounds'],
['music', 'instrumental'],
['music', 'vocal'],
['music', 'familiar'],
['music', 'unfamiliar'],
['speech', 'mother tongue'],
['speech', 'foreign'],
['singing', 'mother tongue'],
['singing', 'foreign'],
['singing', 'familiar'],
['singing', 'unfamiliar'],
['natural sounds', 'environment'],
['natural sounds', 'animals'],
['natural sounds', 'city'],
['artificial sounds', 'tone bursts'],
['instrumental', 'encoding'],
['vocal', 'encoding'],
['familiar', 'encoding'],
['unfamiliar', 'encoding'],
['mother tongue', 'encoding'],
['foreign', 'encoding'],
['environment', 'encoding'],
['animals', 'encoding'],
['city', 'encoding'],
['artificial sounds', 'encoding'],
['tone bursts', 'encoding'],
['instrumental', 'decoding'],
['vocal', 'decoding'],
['familiar', 'decoding'],
['unfamiliar', 'decoding'],
['mother tongue', 'decoding'],
['foreign', 'decoding'],
['environment', 'decoding'],
['animals', 'decoding'],
['city', 'decoding'],
['artificial sounds', 'decoding'],
['tone bursts', 'decoding'],
['instrumental', 'connectivity'],
['vocal', 'connectivity'],
['familiar', 'connectivity'],
['unfamiliar', 'connectivity'],
['mother tongue', 'connectivity'],
['foreign', 'connectivity'],
['environment', 'connectivity'],
['animals', 'connectivity'],
['city', 'connectivity'],
['artificial sounds', 'connectivity'],
['resting state', 'connectivity'],
['tone bursts', 'connectivity'],
]
# One row per edge of the decision tree; each edge gets unit weight for the Sankey links
paradigm_links = pd.DataFrame(source_dest, columns=["Source", "Dest"])
paradigm_links["Count"] = np.ones(paradigm_links.shape[0])
paradigm_links.head()

# Map node labels to indices; .index() returns the first occurrence,
# so all links for a repeated label attach to the same node
all_nodes = paradigm_links.Source.values.tolist() + paradigm_links.Dest.values.tolist()
source_indices = [all_nodes.index(source) for source in paradigm_links.Source]
target_indices = [all_nodes.index(dest) for dest in paradigm_links.Dest]
# One colour per decision level; the counts match the number of node labels per level
palette = sns.color_palette("viridis", 8).as_hex()
node_colors = (2 * [palette[0]] + 4 * [palette[1]] + 6 * [palette[2]]
               + 8 * [palette[3]] + 20 * [palette[4]] + 15 * [palette[5]]
               + 30 * [palette[6]] + 3 * [palette[0]] + 30 * [palette[7]])
# Assemble the Sankey diagram: nodes are the individual choices,
# links connect each choice to its possible successors
fig = go.Figure(data=[go.Sankey(
    node = dict(
        pad = 20,
        thickness = 20,
        line = dict(color="black", width=1.0),
        label = all_nodes,
        color = node_colors,
    ),
    link = dict(
        source = source_indices,
        target = target_indices,
        value = paradigm_links['Count'],
    ))])
# Add a small legend: one text label and one coloured line per decision level
legend_entries = [
    ("Stimulation or rest", 0.9, 0.875, palette[0]),
    ("Stimulation type", 0.85, 0.825, palette[1]),
    ("task type", 0.8, 0.775, palette[2]),
    ("stimulus length", 0.75, 0.725, palette[3]),
    ("stimulus category", 0.7, 0.675, palette[4]),
    ("stimulus type", 0.628, 0.625, palette[5]),
    ("analysis method", 0.58, 0.575, "black"),
]
for text, y_text, y_line, color in legend_entries:
    fig.add_annotation(x=0.02, y=y_text, text=text, showarrow=False)
    fig.add_shape(dict(type="line", x0=-0.001, x1=0.018, y0=y_line, y1=y_line,
                       line=dict(color=color, width=6)))
# Styling: font, white background, title and fixed figure dimensions
fig.update_layout(
    font=dict(size=12, color='black'),
    plot_bgcolor='white', paper_bgcolor='white',
    title={'text': "Variability of paradigms and analyses",
           'y': 0.9, 'x': 0, 'xanchor': 'left', 'yanchor': 'top'},
    autosize=False, width=1000, height=600,
    margin=dict(l=10, r=10))
# Render the figure inline and also save it as a standalone HTML file
init_notebook_mode(connected=True)
plot(fig, filename='plots/paradigms_analyses.html')
display(HTML('plots/paradigms_analyses.html'))
So, what now? Just accept it? Nope, not cool and not enough. There are definitely ways to address this and provide a more holistic and generalizable view of the auditory cortex, its function and its organization. One route we decided to pursue here is to leverage the ever-increasing body of literature to perform a large-scale meta-analysis. In more detail, we utilized Neurosynth’s database and core tools to perform several complementary analyses based on over 14k studies. A respective overview can be found here and the distinct analyses can be checked out via the ToC on the left. A minimal sketch of the general mechanics is shown below.
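To give a flavour of how such term-based meta-analyses work in principle, here is a minimal sketch using the neurosynth Python package. Note that this is an illustration rather than the exact pipeline of our paper: it assumes the database (database.txt) and feature (features.txt) files have already been downloaded, and the term 'auditory' and the frequency threshold are example settings.

# Minimal sketch of a term-based meta-analysis with the neurosynth package.
# Assumes `database.txt` (activation coordinates) and `features.txt`
# (term frequencies per study) were downloaded beforehand; the term and
# threshold below are illustrative, not the exact settings of our pipeline.
from neurosynth.base.dataset import Dataset
from neurosynth import meta

# Load the coordinate database (~14k studies) and attach the term features
dataset = Dataset('database.txt')
dataset.add_features('features.txt')

# Select all studies that load on the example term "auditory"
ids = dataset.get_studies(features='auditory', frequency_threshold=0.001)
print(f'{len(ids)} studies associated with "auditory"')

# Run the meta-analysis across the selected studies and save the
# resulting statistical maps to disk
ma = meta.MetaAnalysis(dataset, ids)
ma.save_results('.', 'auditory')

The resulting statistical maps are written as NIfTI images and can be inspected with any standard neuroimaging viewer; the analyses in the following sections build on this kind of database-driven approach.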
Feedback & Questions
We would highly appreciate and value any feedback, ideas or questions you might have. Please don’t hesitate to get in touch with us. We would of course prefer that you use the public comment function via the hypothes.is plugin (on the right side), but depending on your inquiry, opening an issue in the GitHub repository or sending an email (herholz dot peer at gmail com) is also fine.
Thanks and Acknowledgment
We would like to thank the Jupyter community, specifically the Executable/Jupyter Book and mybinder projects, for enabling us to create this walkthrough. Furthermore, we are grateful to the entire open neuroscience community for the amazing support and resources it provides. This includes the community-driven development of data and processing standards, as well as the unbelievable number of software packages that made this project possible. We would additionally like to thank the NeuroDataScience-ORIGAMI lab for tremendously helpful discussions and pointers for various aspects of this project. Our deepest gratitude also goes to the funding agencies that, through their trust and financial support, enabled this study. This includes …
References
- GCR+16
Matthew F. Glasser, Timothy S. Coalson, Emma C. Robinson, Carl D. Hacker, John Harwell, Essa Yacoub, Kamil Ugurbil, Jesper Andersson, Christian F. Beckmann, Mark Jenkinson, Stephen M. Smith, and David C. Van Essen. A multi-modal parcellation of human cerebral cortex. Nature, 536(7615):171–178, 2016. URL: http://www.ncbi.nlm.nih.gov/pubmed/27437579, doi:10.1038/nature18933.