Auditory EEG Signal Processing (AESoP) Symposium

May 21 - 23, 2018
Leuven, Belgium
Local Organisation
Tom Francart
Jan Wouters

Steering Committee
Alain de Cheveigné (CNRS, France)
Andrew Dimitrijevic (Sunnybrook Research Institute, Toronto, Canada)
Tom Francart (KU Leuven, Belgium)
Ed Lalor (University of Rochester, New York, USA)
Jonathan Simon (University of Maryland, USA)
Jan Wouters (KU Leuven, Belgium)

Invited speakers
Pamela Abshire (University of Maryland, USA)
Alexander Bertrand (KU Leuven, Belgium)
Michael Cohen (Radboud University, Netherlands)
Stefan Debener (University of Oldenburg, Germany)
Alain de Cheveigné (CNRS, France)
Andrew Dimitrijevic (Sunnybrook Research Institute, Toronto, Canada)
Mounya Elhilali (Johns Hopkins, Maryland, USA)
Molly Henry (University of Western Ontario, Canada)
Preben Kidmose (Aarhus University, Denmark)
Ed Lalor (University of Rochester, New York, USA)
Robert Oostenveld (Radboud University, Netherlands)
Lucas Parra (CCNY, New York, USA)
Jonathan Simon (University of Maryland, USA)
Malcolm Slaney (Google Research, California, USA)

The symposium will be held under the auspices of [logo].

AESoP 2018 Sponsors
[sponsor logos]

Contents
 Organizers
 Sponsors
 AESoP introduction
 Practical information
 Program overview
 Speaker abstracts
      Signal processing
      Speech recognition
      Attention decoding
      Auditory prostheses
      Potpourri
      Fundamental EEG
      Age/attention/effort
 Poster abstracts
 Registered participants

AESoP introduction
Dear colleagues and friends,

It is our great pleasure to welcome you to Leuven. We hope you will not only enjoy
excellent science, but also great company, some history of our university and city,
and culinary delights.

The idea for this symposium was born from a desire to bring together researchers
from two fields that already interact, but could do so even more: signal processing
and neuroscience. With the wide availability of multi-channel EEG/MEG systems
and ever more powerful computers, it has become possible to analyse brain signals
more thoroughly than with the traditional visual assessment of peaks in the time do-
main, and to do so with a single presentation of natural running speech. A number
of powerful quantitative methods for EEG/MEG signal analysis are now available
and under intensive further development. The goals of the AESoP symposium are
to further encourage development of such methods, but also to make sure they are
widely applied to tackle important neuroscience questions. We therefore want to
bring together engineers and neuroscientists (or any combination within the same
person), for three days of intense scientific exchange.

We were pleasantly surprised by both the quality and quantity of abstracts we re-
ceived, and by the overwhelming interest in the symposium.

To stimulate discussion, we’ve allowed time for questions after each presentation,
plenty of breaks, and long poster sessions.

I would like to express my sincere thanks to everyone who contributed to the organ-
isation of this symposium: the steering committee, the enthusiastic Leuven team,
the invited speakers, and of course all participants for making this a success.

Yours sincerely,
Tom Francart
supported by the steering committee:
Jan Wouters
Alain de Cheveigné
Andrew Dimitrijevic
Ed Lalor
Jonathan Simon

Practical Information
Posters

To guarantee the smooth running of the poster sessions, we ask you to put up your
poster at the designated number as soon as you arrive. This number can be found
in the program book. Poster panels are 2 m high and 1 m wide. We kindly ask you
to remove your poster before leaving on Wednesday.

  • The authors of odd-numbered posters should be present at their poster during
    the Monday poster session.

  • The authors of even-numbered posters should be present at their poster during
    the Tuesday poster session.

Presentations

As a speaker, you are requested to upload your presentation no later than during
the break before your session starts (see program). Speakers are kindly requested
to respect their presentation time, to leave room for questions and to keep the
program on schedule.
The social activity (Tuesday, 22 May)
Interbrew, Aarschotsesteenweg 4, 3012 Leuven.

To reach Interbrew from the symposium location you can take bus 1 (direction
Kessel-Lo), 2 (direction Kessel-Lo) or 616 (direction Leuven) from bus stop
‘Heverlee Kantineplein’. In all cases you need to get off at stop ‘Leuven
Station’. From ‘Leuven Station’ you can then take one of the buses 630 (direction
Wijgmaal), 335 (direction Aarschot), 333 (direction Tremelo) or 334 (direction
Aarschot) from platform 11 and get off at stop ‘Leuven Vaartvest’. Alternatively,
from Leuven Station, you can take a 12 min walk via Diestsevest, Rederstraat and
Aarschotsesteenweg (see map on the left).

Dinner - De Hoorn, Sluisstraat 79, 3000 Leuven.

To get to the dinner venue from Interbrew you can take a 10 min walk (see
map below). To get from the train station to the dinner venue you can take bus 600
(from platform 11) and get off at stop ‘Leuven Vaartkom’.

Airport

To reach the airport from the symposium location you can take one of the buses
1 (direction Kessel-Lo), 2 (direction Kessel-Lo) or 616 (direction Leuven) from bus
stop ‘Heverlee Kantineplein’. In all cases you need to get off at stop ‘Leuven
Station’, from where you can take a train to Brussels Airport - Zaventem (cost 9.30
EUR including the Diabolo fee).
A bus ticket, valid for 60 minutes, costs 3 euros and can be bought on the bus (or
more cheaply from a machine or shop).

Program
                 Monday, 21 May
11:00            Registration
11:45 - 12:45 Lunch
12:45 - 12:50 Welcome, Tom Francart
12:50 - 14:40 Signal processing, chair: Alain de Cheveigné
          12:50 Edmund Lalor, Modeling the hierarchical processing of natu-
                ral auditory stimuli using EEG.
          13:15 Jonathan Simon, Recent advances in cortical representations
                of speech using MEG.
          13:40 Jieun Song, Exploring cortical and subcortical responses
                to continuous speech.
          14:00 Christina M. Vanden Bosch der Nederlanden, Phase-locking
                to speech and song in children and adults.
          14:20 Xiangbin Teng, Modulation spectra capture characteristic
                neural responses to speech signals.
14:40 - 15:10 Break
15:10 - 17:15 Speech recognition, chair: Tom Francart
          15:10 Jan Wouters, Session introduction: from modulations to
                speech.
          15:20 Lucas Parra, Can EEG be used to measure speech compre-
                hension?
          15:45 Andrew Dimitrijevic, Brain coherence to naturalistic environ-
                ments in CI users.
          16:10 Eline Verschueren, Semantic context influences neural enve-
                lope tracking.
          16:30 Lars Riecke, Modulating auditory speech recognition with
                transcranial current stimulation.
          16:50 Molly Henry, Neural synchronization during beat perception
                and its relation to psychophysical performance.
17:15 - 19:15 Poster session

Tuesday, 22 May
     9:00 - 10:15 Attention decoding, chair: Malcolm Slaney
              9:00 Malcolm Slaney, Introduction.
              9:10 Alain de Cheveigné, Multiway canonical correlation analysis.
              9:35 Octave Etard, Real-time decoding of selective attention from
                   the human auditory brainstem response to continuous speech.
              9:55 Waldo Nogueira, Towards Decoding Speech Sound Source Di-
                   rection from Single-Trial EEG Data in Cochlear Implant
                   Users.
 10:15 - 10:35 Auditory prostheses, part 1, chair: Andrew Dimitrijevic
             10:15 Alexander Bertrand, Auditory attention detection in real life?
                   - Effects of acoustics, speech demixing, and EEG miniatur-
                   ization.
 10:35 - 11:05 Break
 11:05 - 12:40 Auditory prostheses, part 2, chair: Andrew Dimitrijevic
             11:05 Anita Wagner, Cortico-acoustic alignment in EEG recordings
                   with CI users.
             11:25 Ben Somers, Speech envelope tracking in cochlear implant
                   users.
             11:50 Stefan Debener, Towards transparent EEG.
             12:15 Preben Kidmose, ASSR based hearing threshold estimation
                   based on Ear-EEG.
 12:40 - 13:25 Lunch
 13:25 - 14:25 Potpourri, chair: Andrew Dimitrijevic
             13:25 Tobias de Taillez, Artificial neural networks as analysis tool
                   for predicted EEG data in auditory attention tasks.
             13:45 James R. Swift, Passive functional mapping of language areas
                   using electrocorticographic signals in humans.
             14:05 Katrin Krumbholz, An automated procedure for evaluating
                   auditory brainstem responses based on dynamic time warping.
 14:25 - 16:25 Poster session
 17:00              Social activity, Interbrew, Aarschotsesteenweg 4, 3012
                    Leuven
 20:00              Dinner, De Hoorn, Sluisstraat 79, 3000 Leuven

Wednesday, 23 May
 9:00 - 11:05 Fundamental EEG, chair: Jonathan Simon
          9:00 Alain de Cheveigné, Introduction.
          9:05 Mounya Elhilali, Neurocomputational analysis of statistical
               inference in the brain.
          9:30 Pamela Abshire, Low cost, wireless, compressed sensing EEG
               platform: fidelity and power tradeoffs.
          9:55 Robert Oostenveld, Using Open Science to accelerate ad-
               vancements in auditory EEG signal processing.
         10:20 Mike Cohen, The origins of EEG.
         10:45 Jacques Pesnot-Lerousseau, Dynamic Attending Theory:
               testing a key prediction of neural entrainment in MEG.
11:05 - 11:35 Break
11:35 - 12:40 Age/attention/effort, chair: Jan Wouters
         11:35 Ed Lalor, Introduction.
         11:40 Lien Decruy, Disentangling the effects of age and hearing loss
               on neural tracking of the speech envelope.
         12:00 Bojana Mirkovic, Lighten the load - The effect of listening
               demands on continuous EEG in normal-hearing and aided
               hearing-impaired individuals.
         12:20 Brandon T. Paul, Single-trial EEG alpha activity as a cor-
               relate of listening effort during speech-in-noise perception in
               cochlear implant users.
12:40 - 12:45 Closing remarks, Tom Francart
12:45 - 13:45 Lunch

Speaker Abstracts

Monday, 21 May: Signal processing
12:50-14:40 Chair: Alain de Cheveigné

Modeling the hierarchical processing of natural auditory stim-
uli using EEG.
Edmund C. Lalor (1,2)
(1) University of Rochester, Rochester, NY, USA; (2) Trinity College Dublin, Dublin, Ireland
edmund_lalor@urmc.rochester.edu

Many of the sounds we hear in daily life display a natural hierarchy in their struc-
ture – with shorter acoustic units grouping to form longer, more meaningful phrases.
This is particularly obvious in speech, but also pertains to music and other sounds.
In this talk I will discuss our efforts to index the hierarchical processing of natural
sounds using EEG. Much of the talk will focus on our efforts to do this in the con-
text of speech. But I will also discuss recent work aimed at indexing the hierarchical
processing of music. I will also outline efforts to relate neural indices of low- and
high-level processing, and how these indices are differentially affected by attention
and visual input.

Recent advances in cortical representations of speech using
MEG.
Jonathan Z. Simon (1,2,3)
(1) University of Maryland, Department of Electrical & Computer Engineering; (2) University of
Maryland, Department of Biology; (3) University of Maryland, Institute for Systems Research
jzsimon@umd.edu

We investigate how continuous speech, whether clean, degraded, or masked by other
speech signals, is represented in human auditory cortex. We use magnetoencephalog-
raphy (MEG) to record the neural responses of listeners to continuous speech in a
variety of listening scenarios. The obtained cortical representations allow both the
prediction of a neural response from the speech stimulus, and the temporal envelope
of the acoustic speech stream to be reconstructed from the observed neural response
to the speech. This talk will emphasize recent results and methodological advances,
and techniques still under development.

Acknowledgements: Funding is gratefully acknowledged from NIDCD/NIH,
R01DC014085, and NSF, SMA1734892.
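
To make the forward-modelling idea concrete, the sketch below estimates a temporal response function (TRF) with ridge regression in Python. It is a minimal, illustrative sketch, not the authors' actual pipeline; the sampling rate, lag window, regularization value and simulated data are all invented for the example.

```python
import numpy as np

def lagged_matrix(stim, lags):
    """Design matrix of time-lagged copies of a 1-D stimulus."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:n - lag]
        else:
            X[:lag, j] = stim[-lag:]
    return X

def fit_trf(stim, eeg, fs, tmin=0.0, tmax=0.4, alpha=1e2):
    """Ridge-regression TRF predicting each EEG channel from the stimulus."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = lagged_matrix(stim, lags)
    # Closed-form ridge solution, one weight column per EEG channel
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)
    return lags / fs, w  # TRF time axis (s), weights (lags x channels)

# Hypothetical usage on simulated data (64 Hz, 1 minute, 4 channels)
fs = 64
rng = np.random.default_rng(0)
envelope = rng.standard_normal(fs * 60)
eeg = np.stack([np.convolve(envelope, rng.standard_normal(10), 'same')
                for _ in range(4)], axis=1)
times, trf = fit_trf(envelope, eeg, fs)
```

The backward direction (reconstructing the envelope from the EEG) swaps the roles of stimulus and response; a decoder sketch appears later in this book.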

Exploring cortical and subcortical responses to continuous
speech.
Jieun Song (1), Paul Iverson (1)
(1) University College London, Department of Speech, Hearing and Phonetic Sciences
jieun.song@ucl.ac.uk

Previous research has suggested that cortical entrainment to slow amplitude fluc-
tuations in the speech signal (i.e., amplitude envelope) plays an important role in
speech perception. However, it remains unclear how cortical entrainment to the
speech envelope relates to higher-level linguistic processes. The present study inves-
tigated how cortical entrainment to continuous speech differs depending on whether
or not the listener understands the linguistic content of the speech signal. EEG was
thus recorded from listeners with different linguistic backgrounds (i.e., native En-
glish and Korean speakers) while they heard continuous speech in three languages
(i.e., English, Korean, and Spanish) in passive and active listening tasks. Further-
more, some studies have suggested links between the brainstem response to speech
and higher-level speech processing or cognitive functions (e.g., attention), and recent
methods can allow for measurement of the brainstem response to single-trial con-
tinuous speech. The current study thus examined the frequency following response
(FFR) to continuous speech in the same experiment in an attempt to explore rela-
tionships between the FFR and higher-level speech processing. The results suggest
that cortical tracking of the envelope is mainly modulated by acoustic properties of
the speech signal rather than listeners’ higher-level linguistic processing. The results
of the FFR analysis will also be discussed in relation to the cortical entrainment re-
sults.

Phase-locking to speech and song in children and adults.
Christina M. Vanden Bosch der Nederlanden (1), Marc F. Joanisse (1),
Jessica A. Grahn (1)
(1) Western University, Brain and Mind Institute
cdernede@uwo.ca

Music and language both contain rhythmic information, but the way that rhyth-
mic information unfolds is different for speech compared to song. The main objec-
tive of the current work is to examine whether adults and children phase-lock to
low-frequency (delta/theta) rhythmic information better for the regular rhythms of
song compared to the irregular rhythms of speech. Further, we examined whether
reading and music abilities were related to the strength of an individual’s ability to
phase-lock to the slow rhythms of music and language.
EEG was recorded as 20 adults and 10 children heard spoken and sung excerpts while
watching a movie. Time-compressed versions of those utterances were included for
a more difficult listening condition. Cerebro-acoustic phase coherence was used as
a measure of phase-locking, which estimates the phase alignment of the EEG signal
to the amplitude envelope of each utterance in the time-frequency domain. Baseline
coherence was indexed using random pairings of stimulus envelope and EEG epochs.
Adults had two significant phase coherence peaks in the delta/theta band and the
beta band, but no difference in coherence between song and speech. There was
greater coherence for speech than song for compressed utterances in the delta band,
contrary to predictions. Preliminary evidence from children found the same peaks
in coherence for the delta/theta band and beta band, with a trend toward better
coherence in song than speech, particularly for poor readers.

Acknowledgements: BMI Cognitive Neuroscience Postdoctoral Fellowship awarded
to CMVBdN, NSERC funding to MJ and JAG
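
For readers unfamiliar with cerebro-acoustic phase coherence, the following minimal Python sketch computes one common variant: the length of the mean resultant vector of the phase differences between two narrow-band signals. It is illustrative only; the authors' exact estimator (per-utterance, time-frequency resolved, with a random-pairing baseline) is more elaborate, and all names below are invented.

```python
import numpy as np
from scipy.signal import hilbert

def phase_coherence(eeg_band, env_band):
    """Length of the mean resultant vector of phase differences (0 to 1).

    Both inputs are assumed band-pass filtered to the same band
    (e.g. delta/theta) and sampled at the same rate.
    """
    phi_eeg = np.angle(hilbert(eeg_band))
    phi_env = np.angle(hilbert(env_band))
    return np.abs(np.mean(np.exp(1j * (phi_eeg - phi_env))))

# Hypothetical usage: a 4 Hz "envelope" and a phase-locked noisy "EEG"
t = np.arange(0, 10.0, 0.01)
env = np.sin(2 * np.pi * 4 * t)
eeg = np.sin(2 * np.pi * 4 * t + 0.5) + 0.3 * np.random.randn(len(t))
print(phase_coherence(eeg, env))  # high (near 1) for phase-locked signals
```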

Modulation spectra capture characteristic neural responses
to speech signals.
Xiangbin Teng (1), Seyma Tuerk (2), David Poeppel (1,3)
(1) Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; (2) Goethe-Universität
Frankfurt, Institut für Psychologie, Frankfurt, Germany; (3) Department of Psychology, New York
University, New York, NY, USA
xiangbin.teng@gmail.com

Objectives: Natural sounds have distinct long-term modulation spectra, which often
show a 1/f pattern. For example, speech has a steeper spectrum, with a 1/f
exponent between 1 and 1.5. As speech is fundamental to human communication, here
we test whether the long-term modulation spectrum can capture characteristic re-
sponses of our auditory brain to speech signals, the answer to which may reveal a
speech-specific process.
Methods: Building on Teng et al. (2017), we generated broadband amplitude mod-
ulated noise having 1/f modulation spectra with exponents of 0.5, 0.75, 1, 1.5, and
2, to imitate the irregular dynamics of natural sounds. We also derived a long-term
modulation spectrum from 10-minute speech signals to generate modulated noise
with the speech modulation spectrum (random phases). We presented these mod-
ulated sounds to participants undergoing electroencephalography (EEG) recording
and extracted temporal receptive fields (TRF) to each type of sound.
Conclusion: The local TRFs extracted from different modulated sounds show dis-
tinct signatures and can be used to classify sounds with different long-term mod-
ulation spectra. The neural responses to speech signals can be best predicted by
the TRFs extracted from the stimuli with the speech modulation spectrum and 1/f
modulation spectra with exponent 1 and 1.5. The results demonstrate that the
long-term statistics of natural sounds shape characteristic neural responses of the
auditory system.
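
A minimal sketch of how stimuli of this general kind can be generated: noise whose modulation spectrum follows 1/f**exponent, built by shaping a random-phase modulation envelope and imposing it on a broadband carrier. This is an assumption-laden illustration, not the authors' stimulus code; the modulation band and duration are invented.

```python
import numpy as np

def one_over_f_envelope(n, fs, exponent, f_min=0.1, f_max=32.0):
    """Random envelope whose modulation spectrum follows 1/f**exponent."""
    freqs = np.fft.rfftfreq(n, 1 / fs)
    amp = np.zeros_like(freqs)
    band = (freqs >= f_min) & (freqs <= f_max)
    amp[band] = freqs[band] ** (-exponent)       # 1/f amplitude shaping
    phases = np.exp(2j * np.pi * np.random.rand(len(freqs)))
    env = np.fft.irfft(amp * phases, n)
    env -= env.min()                             # force non-negative
    return env / env.max()

# Hypothetical usage: 5 s noise carriers with the abstract's exponents
fs, n = 16000, 16000 * 5
carrier = np.random.randn(n)
stimuli = {a: carrier * one_over_f_envelope(n, fs, a)
           for a in (0.5, 0.75, 1.0, 1.5, 2.0)}
```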

Monday, 21 May: Speech recognition
15:10-17:15 Chair: Tom Francart

Can EEG amplitude tracking be used to measure speech
comprehension?
Ivan Iotzov (1), Lucas C. Parra (1)
(1) City University of New York
parra@ccny.cuny.edu

It is well established that EEG “tracks” speech. By tracking we mean that sound-
evoked potentials in the EEG correlate with the amplitude fluctuations in the speech
sound. The effect is not specific to speech, but it seems that speech is particularly
effective at driving evoked potentials. This has been used to determine whether a
listener is paying attention to a particular speaker, or whether a listener is correctly
detecting words, or even whether a listener correctly captured the semantics of a
sentence. We ask whether this phenomenon can be used to assess speech compre-
hension. In one experiment we measure the EEG of normal subjects as they listen
to continuous natural speech, and compare this to the EEG responses in minimally
conscious patients. In another experiment we manipulate comprehension in normal
subjects using congruent and incongruent audiovisual speech. In both instances the
speech sound is identical, but comprehension is altered. At this meeting we will
report whether EEG speech tracking indeed differs between the comprehended and
non-comprehended speech.

Brain coherence to naturalistic environments in CI users.
Andrew Dimitrijevic (1), Brandon Paul (1)
(1) Sunnybrook Hospital ENT University of Toronto
andrew.dimitrijevic@sunnybrook.ca

Objectives: The purpose of this study was to compare different methods for the
quantification of “neural tracking” or “entrainment” to auditory stimuli in cochlear
implant (CI) users while experiencing a natural environment. Methods: CI users
watched “The Office” television show with the sound delivered from a circular 8
speaker array. High density EEG recordings were used to quantify neural responses
while CI users watched the movie. Subjective reports included effort, demand and
frustration. We also asked listeners to rate how many words they perceived and
how much of the conversations they believed they followed. Results: The following
analyses were performed: (1) coherence between the audio envelope and energy; (2)
linear decoder based on the mTRF toolbox between the beamformer regions of in-
terest and the audio envelope; (3) simple cross-correlation between the EEG sensors
and the audio envelope. Coherence analysis and temporal response functions yielded
highest values in auditory ROIs. Cross-correlations were similar to the TRFs,
except that they typically had broader waveforms. The highest correlations between
electrophysiological measures and behavior were observed with coherence. Conclu-
sions: The results of this study show that audio-brain coherence can be measured
in cochlear implant users and these measures relate to behavior. Different types
of measures likely reflect different brain processes and therefore will have different
relationships with behavior.
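
To make the coherence analysis concrete, here is a minimal Python sketch using scipy.signal.coherence between an audio envelope and a single EEG channel. The variable names, sampling rate and band limits are invented for the example; the study's actual pipeline (beamforming, ROIs, mTRF decoding) is not reproduced here.

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical inputs: audio envelope and one EEG channel, resampled to a
# common rate and aligned in time (names and values invented).
fs = 128
rng = np.random.default_rng(1)
audio_env = rng.standard_normal(fs * 120)              # 2 min "envelope"
eeg_chan = 0.5 * audio_env + rng.standard_normal(fs * 120)

# Magnitude-squared coherence in 4 s Welch segments
f, Cxy = coherence(audio_env, eeg_chan, fs=fs, nperseg=4 * fs)

# Summarize in a band often used for speech tracking
band = (f >= 1) & (f <= 8)
print(f"mean 1-8 Hz coherence: {Cxy[band].mean():.3f}")
```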

Semantic context influences neural envelope tracking.
Eline Verschueren (1), Jonas Vanthornhout (1), Tom Francart (1)
(1) ExpORL, Dept. Neurosciences, KU Leuven
eline.verschueren@kuleuven.be

The speech envelope is known to be essential for speech understanding and can
be reconstructed from the electroencephalography (EEG) signal in response to run-
ning speech. Today, the exact features influencing this neural tracking of the speech
envelope are still under debate. Is envelope tracking exclusively encoding of the
acoustic information of speech or is it influenced by top-down processing? In or-
der to investigate this we compared envelope reconstruction and temporal response
functions (TRFs) for stimuli that contained different levels of semantic context.
We recorded the EEG in 19 normal-hearing participants while they listened to two
types of stimuli: Matrix sentences without contextual information and a coherent
story. Each stimulus was presented at different levels of speech understanding by
adding speech weighted noise. The speech envelope was reconstructed from the EEG
in both the delta (0.5-4 Hz) and the theta (4-8 Hz) band with the use of a linear
decoder and then correlated with the real speech envelope. We also conducted a
spatiotemporal analysis using TRFs.
For both stimulus types and filter bands the correlation between the speech enve-
lope and the reconstructed envelope increased with increasing speech understanding.
Correlations were higher for stimuli where semantic context was available, indicating
that neural envelope tracking is more than the encoding of acoustic information and
it can be enhanced by semantic top-down processing.

Acknowledgements: Project funded by the European Research Council (ERC) under
the European Union’s Horizon 2020 research and innovation program (No 637424 to
Tom Francart), the KU Leuven Special Research Fund (OT/14/119) and Research
Foundation Flanders (FWO) (PhD grant Jonas Vanthornhout & Eline Verschueren).
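
The following minimal Python sketch shows the general shape of a linear backward decoder of the kind described above: ridge regression from time-lagged EEG to the envelope, scored with a Pearson correlation. It is a simplified illustration, not the authors' code; band-pass filtering and cross-validation are omitted, and the lag window and regularization value are invented.

```python
import numpy as np

def fit_decoder(eeg, envelope, fs, max_lag_s=0.25, alpha=1e3):
    """Ridge-regression decoder reconstructing the envelope from lagged EEG."""
    lags = np.arange(0, int(max_lag_s * fs) + 1)
    n, n_ch = eeg.shape
    # Each row t holds all channels at t, t+1, ..., t+max_lag (the EEG
    # follows the stimulus, so the decoder looks "forward" in the EEG).
    X = np.zeros((n, n_ch * len(lags)))
    for j, lag in enumerate(lags):
        X[:n - lag, j * n_ch:(j + 1) * n_ch] = eeg[lag:, :]
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)
    return X, w

# Hypothetical usage (64 Hz, 2 min, 16 channels); in practice the decoder
# must be evaluated on held-out data, not on its training data.
fs = 64
rng = np.random.default_rng(2)
envelope = rng.standard_normal(fs * 120)
eeg = np.stack([np.roll(envelope, 5) + rng.standard_normal(fs * 120)
                for _ in range(16)], axis=1)
X, w = fit_decoder(eeg, envelope, fs)
print(np.corrcoef(X @ w, envelope)[0, 1])   # reconstruction correlation
```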

Modulating auditory speech recognition with transcranial cur-
rent stimulation.
Lars Riecke (1), Elia Formisano (1), Bettina Sorger (1), Deniz Baskent
(2), Etienne Gaudrain (2,3)
(1) Department of Cognitive Neuroscience, Maastricht University; (2) Department of Otorhinolaryn-
gology/Head and Neck Surgery, University of Groningen; (3) Lyon Neuroscience Research Center,
Université de Lyon
l.riecke@maastrichtuniversity.nl

Objectives: Speech-brain entrainment, the alignment of neural activity to the slow
temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element
of current theories of speech processing. Associations between speech-brain en-
trainment and acoustic speech signal, listening task, and speech intelligibility have
been observed repeatedly. However, a methodological bottleneck has so far pre-
vented clarifying whether speech-brain entrainment contributes functionally to speech
intelligibility or is merely an epiphenomenon of it.
Methods: To address this issue, we experimentally manipulated speech-brain en-
trainment without concomitant acoustic and task-related variations, using a brain-
stimulation approach that aims to modulate listeners’ neural activity with tran-
scranial currents carrying speech-envelope information. We applied this ‘envTCS’
approach in two experiments resembling respectively a cocktail party-like scenario
and a single-talker situation devoid of aural speech-amplitude envelope input.
Conclusions: Both experiments consistently revealed an effect on listeners’ speech-
recognition performance, demonstrating a causal role of speech-brain entrainment
in speech intelligibility. This implies that speech-brain entrainment is critical for au-
ditory speech comprehension and suggests that transcranial stimulation with speech
envelope-shaped currents can be utilized to modulate speech comprehension in im-
paired listening conditions.

Acknowledgements: Funded by NWO

Neural synchronization during beat perception and its rela-
tion to psychophysical performance.
Molly J. Henry (1)
(1) University of Western Ontario
molly.j.3000@gmail.com

The presence of rhythm in the environment influences neural dynamics, most no-
tably because neural activity becomes synchronized with the temporal structure of
rhythms. When this happens, the neural states that are associated with successful
auditory perception change relative to arrhythmic situations, and are determined
by the rhythms’ temporal structure. I’ll present work examining neural dynamics
underlying the seemingly unique sensitivity that humans show to the “beat” in mu-
sical rhythm. In particular, we use electroencephalography (EEG) to investigate
how synchronization of neural oscillations with auditory rhythms might give rise to
beat perception, and how synchronized neural oscillations might affect psychophys-
ical performance. I apply advanced analysis techniques to reveal multi-dimensional
neural states associated with successful auditory perception, and how those states
differ depending on whether a beat is perceived or not. I will describe recent steps
towards importing multivariate tools like representational similarity analysis from
the fMRI domain to understand nonlinear, multi-dimensional EEG data and its re-
lationship to auditory perception.

Tuesday, 22 May: Attention decoding
09:00-10:15 Chair: Malcolm Slaney

Multiway canonical correlation analysis.
Alain de Cheveigné (1, 2, 3)
(1) Centre National de la Recherche Scientifique (CNRS, France); (2) Ecole normale supérieure
(ENS, France); (3) University College London (UCL, UK)
Alain.de.Cheveigne@ens.fr

Subjects differ in brain anatomy, and in the position and orientation of brain sources.
This makes it hard to compare responses between subjects, and to summarize re-
sponses across subjects. Cross-subject averaging might work adequately for re-
sponses with broad topographies, but responses with more local and intricate spa-
tial patterns may cancel each other and wash out. Multiway canonical correlation
analysis (MCCA) addresses this issue by allowing data to be merged over subjects
on the assumption that responses are temporally, if not spatially, similar. Analogous
to standard canonical correlation analysis (CCA), MCCA finds a linear transform
applicable to each data-set such that columns of the transformed data matrix “line
up” optimally between subjects. Applied to data from multiple subjects in response
to the same stimulation, MCCA can be used to isolate components of stimulus-
evoked activity that are shared. It can also be used more mildly as a denoising
tool to suppress activity strongly divergent from other subjects, by applying the
transform, discarding high-order components (i.e. with low intersubject correlation)
and projecting back. MCCA is particularly useful in the search for stimulus features
predictive of cortical responses, as it does not require prior knowledge of a descriptor
for those features, as required for example by a linear systems (e.g. TRF) model. I
will give a brief overview of the method and present some examples of how it can
be put to use.
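
As a concrete illustration, one standard way to compute MCCA is to whiten each subject's data and then run a PCA on the channel-wise concatenation; the sketch below follows that route on simulated data. This is a simplified reading of the method, not the speaker's own implementation, and all sizes and parameters are invented.

```python
import numpy as np

def mcca(datasets, keep=None):
    """Multiway CCA as PCA of per-subject whitened data.

    datasets: list of (time x channels) arrays, one per subject, equal length.
    Returns per-subject spatial transforms and the shared component time
    courses, ordered by decreasing intersubject correlation.
    """
    whitened, whiteners = [], []
    for X in datasets:
        X = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        W = Vt.T / s                 # X @ W has orthonormal columns
        whiteners.append(W)
        whitened.append(X @ W)
    Y = np.concatenate(whitened, axis=1)     # time x (total channels)
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    V = Vt.T[:, :keep] if keep else Vt.T
    transforms, offset = [], 0
    for W in whiteners:              # split V into per-subject blocks
        k = W.shape[1]
        transforms.append(W @ V[offset:offset + k, :])
        offset += k
    return transforms, Y @ V         # transforms and summary components

# Hypothetical usage: 3 "subjects" sharing one 5 Hz component
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 5 * np.arange(1000) / 100)[:, None]
data = [shared @ rng.standard_normal((1, 8))
        + 0.5 * rng.standard_normal((1000, 8)) for _ in range(3)]
transforms, summary = mcca(data)   # summary[:, 0] ~ the shared component
```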

Real-time decoding of selective attention from the human au-
ditory brainstem response to continuous speech.
Octave Etard (1), Mikolaj Kegler (1), Chananel Braiman (2), Antonio
Elia Forte (1), Tobias Reichenbach (1)
(1) Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South
Kensington Campus, SW7 2AZ, London, U.K.; (2) Tri-Institutional Training Program in Computa-
tional Biology and Medicine, Weill Cornell Medical College, New York, NY 10065, U.S.A
reichenbach@imperial.ac.uk

Humans are highly skilled at analysing complex acoustic scenes. The segregation
of different acoustic streams and the formation of corresponding neural representa-
tions is mostly attributed to the auditory cortex. Decoding the focus of selective
attention from neuroimaging has therefore focussed on cortical responses to sound.
Here, we show that the auditory brainstem response to speech is modulated by at-
tention as well, a result that we achieve through developing a novel mathematical
method for measuring the brainstem response to the pitch of continuous speech [1].
Although this auditory brainstem response has a smaller magnitude than that of the
auditory cortex, it occurs at much higher frequencies, is therefore much less affected
by recording artifacts and can be measured from a few channels only. We thus
demonstrate that the attentional modulation of the brainstem response to speech
can be employed to decode the attentional focus of a listener from a small number of
recording channels and in real time, from short measurements of ten seconds or less
in duration. These results open up new avenues for investigating the neural mech-
anisms for selective attention in the brainstem and for developing efficient auditory
brain-computer interfaces.
[1] A. E. Forte, O. Etard and T. Reichenbach (2017) The human auditory brainstem
response to running speech reveals a subcortical mechanism for selective attention,
eLife 6:e27203.

Acknowledgements: This research was supported by EPSRC grant EP/M026728/1
to T.R., by Wellcome Trust grant 108295/Z/15/Z, as well as in part by the National
Science Foundation under Grant No. NSF PHY-1125915.

Towards decoding speech sound source direction from single-
trial EEG data in cochlear implant users.
Waldo Nogueira (1), Irina Schierholz (1), Andreas Büchner (1), Stefan
Debener (2), Martin Bleichner (2), Bojana Mirkovic (2), Giulio Cosatti
(1)
(1) Department of Otolaryngology, Hannover Medical School, Germany; Cluster of Excellence Hear-
ing4all; (2) Applied Neurocognitive Psychology, Carl-von-Ossietzky-University, Oldenburg, Germany;
Cluster of Excellence Hearing4all
nogueiravazquez.waldo@mh-hannover.de

The goal of this study is to investigate whether selective attention can be decoded
in CI users from single-trial EEG. First, experiments in NH listeners using original
and vocoded sounds were conducted to investigate if spectral smearing decreases
accuracy in detecting selective attention. Next, experiments in a group of CI users
were conducted to assess whether the artefact decreases selective attention accu-
racy. 12 normal hearing (NH) listeners and 12 bilateral CI users participated in
the study. Speech from two audio books was presented through inner ear phones
to the NH listeners and via direct audio cable to the CI users. Participants were
instructed to attend to one out of the two concurrent speech streams presented
while a 96 channel EEG was recorded. For NH listeners, the experiment was re-
peated using a noise-vocoder. Speech reconstruction from single-trial EEG data was
obtained by training decoders using a regularized least square estimation method.
Decoding accuracy was defined as the percentage of accurately reconstructed tri-
als for each subject. Results show the feasibility of decoding selective attention by
means of single-trial EEG not only in NH with a vocoder simulation, but also in CI
users. It seems that the limitations in detecting selective attention in CI users are
more influenced by the lack of spectral resolution than by the artifact caused by CI
stimulation. Possible implications for further research and application are discussed.

Acknowledgements: This work was supported by the DFG Cluster of Excellence
EXC 1077/1 “Hearing4all”.

Tuesday, 22 May: Auditory prostheses
10:15-12:40 Chair: Andrew Dimitrijevic

Auditory attention detection in real life? - Effects of acous-
tics, speech demixing, and EEG miniaturization.
Alexander Bertrand (1)
(1) KU Leuven, Dept. of Electrical Engineering, ESAT-STADIUS
alexander.bertrand@esat.kuleuven.be

People with hearing impairment often have difficulty understanding speech in noisy
environments, which is why hearing aids are equipped with noise reduction algo-
rithms. However, in multi-speaker scenarios a fundamental problem appears: how
can the algorithm decide which speaker the listener aims to attend to, and which
speaker(s) should be treated as noise? Recent research has shown that it is possi-
ble to infer auditory attention from EEG recordings. This opens the door towards
neuro-steered hearing prostheses where neural signals are used to steer noise reduc-
tion algorithms to extract the attended speaker and suppress unattended sources.
In this talk, we give an overview of our ongoing research in which we tackle several
of the important hurdles on the road towards deploying such neuro-steered hear-
ing devices in real life. We explain how auditory attention detection (AAD) can
be performed when only speech mixtures are available from a hearing aid’s micro-
phones. We also investigate the boundary conditions to bootstrap an AAD system
in terms of background noise and speaker locations. Finally, we show some recent
findings on the impact of EEG miniaturization on AAD performance in the context
of EEG sensor networks. In particular, we demonstrate that the AAD performance
using galvanically separated short-distance EEG measurements is comparable to
long-distance EEG measurements if the electrodes are optimally positioned on the
scalp.

Acknowledgements: This work was carried out at the ESAT and ExpORL labo-
ratory of KU Leuven. It was partially supported by a research gift of Starkey
Hearing Technologies, and has received funding from KU Leuven Special Research
Fund C14/16/057, FWO project nrs. 1.5.123.16N and G0A4918N.

Cortico-acoustic alignment in EEG recordings with CI users.
Anita Wagner (1,2), Natasha Maurits (2,3), Deniz Başkent (1,2)
(1) Department of Ear, Nose and Throat Head & Neck Surgery at the University Medical Center
Groningen, University of Groningen, Groningen, The Netherlands; (2) Graduate School of Medical
Sciences, School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen,
The Netherlands; (3) Department of Neurology, University Medical Center Groningen, University of
Groningen, Groningen, The Netherlands
a.wagner@umcg.nl

Temporal amplitude modulations of speech entrain neural oscillations, and such
cortico-acoustic alignments seem to play a role in speech processing. In EEG record-
ings with cochlear implant (CI) users cortico-acoustic alignment is also a device-
induced artifact. We study the sources of cortico-acoustic alignment in EEG record-
ings with CI users by combining cortico-acoustic coherence with online measures of
speech processing, and with recordings of individual’s CI artefacts, as recorded with
a dummy head. EEG recordings of experienced CI users are compared to normal-
hearing (NH) controls, and to NH listeners presented with acoustic CI simulations.
The stimuli are sentences of 3 to 6 sec duration, of which the first 2 sec are used to
compute cortico-acoustic coherence in the range of 2-20 Hz. The endings of the sen-
tences elicit ERPs of phonological and semantic processing. Individuals’ ERPs and
cortico-acoustic coherence are ranked based on their performance in experiments of
speech-processing fluency. For NH, we found significant coherence in the delta and
theta ranges (2-8 Hz) for midline electrodes and on the temporal lobes. For NH
with vocoded speech, coherence was smaller in magnitude. For CI users, greater co-
herence in the theta range on temporal electrodes was attributed to the CI artifact.
Pairings with behavioral and ERP measures showed that CI users who are fluent
perceivers show increased coherence in the delta range on midline electrodes.

Speech envelope tracking in cochlear implant users.
Ben Somers (1), Eline Verschueren (1), Tom Francart (1)
(1) ExpORL, Dept. Neurosciences, KU Leuven
ben.somers@med.kuleuven.be

Objective: The temporal envelope of speech is encoded in oscillating patterns of the
normal hearing brain and can be decoded from neural recordings. This envelope-
tracking response is hypothesized to also be present during electrical stimulation in
cochlear implant users but is difficult to measure because of electrical stimulation
artifacts. The objective of this study is to measure neural tracking of the speech
envelope in CI users. Methods: EEG was measured from CI users while they lis-
tened to natural running speech. A novel technique to suppress the stimulation
artifacts was applied during preprocessing. The artifact-free EEG was used to train
and test a linear speech envelope decoder. The strength of neural envelope tracking
is measured by correlating reconstructed and real envelope. EEG recorded during
stimulation below the CI user’s threshold levels was used to assess the effective-
ness of artifact removal. Conclusions: Significant correlations between reconstructed
and real speech envelope are found during audible stimulation but not during sub-
threshold stimulation, indicating that the envelope reconstruction is not dominated
by artifacts. This study demonstrates that electrical stimulation causes a neural
response that tracks the speech envelope, and that this response can be measured
using a novel CI artifact suppression technique. The demonstrated measure for the
neural encoding of speech may support future objective measures of hearing with a
CI.

Acknowledgements: ERC H2020 grant No. 637424, and FWO PhD grant 1S46117N

Towards transparent EEG.
Stefan Debener (1,2)
(1) University of Oldenburg, Department of Psychology, Oldenburg, Germany; (2) Cluster of Excel-
lence Hearing4all, Oldenburg, Germany
stefan.debener@uol.de

Most technologies for the recording of human brain activity do not tolerate mo-
tion during signal acquisition very well. Unfortunately, recently developed mobile
EEG systems, while portable, are not necessarily mobile, that is, they do not fea-
ture motion-robust signal acquisition. Moreover, these systems are clearly visible
and therefore cannot be used in daily-life situations. A transparent EEG would not
only be portable and motion-tolerant, it would also feature low visibility and gener-
ally minimal interference with daily-life activities. The recording of brain-electrical
activity from the outer ear and around the ear may be an important step towards
reaching this ambitious goal. Ear-EEG may also play a role in future cognitive hear-
ing aids, by delivering information on current brain-states and listener demands. I
will report on the development and validation of the cEEGrids, flex-printed elec-
trodes placed around the ear. We have conducted several validation studies, among
which some compare cEEGrids with concurrently recorded high-density EEG sig-
nals for decoding auditory attention. In another study, we explore the possibility of
combining live experimental hearing aid processing with ear-EEG acquisition. Cur-
rent work aiming towards long-term capturing of brain-states with ear-EEG will
also be reported. I will name remaining limitations and suggest possible next steps
for developing transparent EEG technology.

Acknowledgements: This work is supported by the Cluster of Excellence Hearing4all
Oldenburg, Germany

ASSR based hearing threshold estimation based on Ear-EEG.
Christian Bech Christensen (1), Preben Kidmose (1)
(1) Department of Engineering, Aarhus University
pki@eng.au.dk

Objectives: To provide an overview of developments in the ear-EEG technology
and methods; and to present and discuss recent results related to objective hear-
ing threshold (HT) estimation based on auditory steady-state responses (ASSR)
recorded from ear-EEG.
Methods: To investigate the feasibility of ear-EEG based HT estimation we con-
ducted a study with chirp based ASSRs in a population of normal hearing subjects.
HTs were estimated at 0.5, 1, 2 and 4 kHz with modulation frequencies around 90 Hz.
These recordings were compared to both behavioral thresholds and to scalp EEG
based ASSR thresholds. In a subsequent study, ASSR thresholds were estimated
from subjects with sensorineural hearing loss using the same experimental setup as
for the normal hearing listeners, and HTs were compared for both normal hearing
and hearing impaired listeners. In a third study, chirp based ASSRs from both
scalp EEG and ear-EEG were recorded at 12 different chirp repetition rates rang-
ing from 20 to 95 Hz. The corresponding ASSR repetition rate transfer functions
were compared and discussed.
Conclusions: HTs estimated from ear-EEG showed low threshold offsets relative to
behavioral thresholds, and with inter-subject variations comparable to conventional
scalp EEG thresholds. The SNR of the ear-EEG based ASSRs was relatively con-
stant across repetition rates, thus favoring high repetition rates as these are less
influenced by attention. In conclusion, ear-EEG is feasible for ASSR based HT es-
timation.
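
For illustration, ASSR detection is often framed as comparing spectral power at the modulation frequency against neighbouring "noise" bins (an F-ratio style statistic). The minimal Python sketch below implements that idea on simulated data; it is not the authors' detection method, and the recording length, neighbour count and threshold logic are invented.

```python
import numpy as np

def assr_fratio(eeg, fs, mod_freq, n_neighbors=60):
    """Power at the modulation frequency versus mean neighbouring-bin power."""
    spec = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    target = int(np.argmin(np.abs(freqs - mod_freq)))
    half = n_neighbors // 2
    neighbors = [i for i in range(max(target - half, 1), target + half + 1)
                 if i != target and i < len(spec)]
    return spec[target] / np.mean(spec[neighbors])

# Hypothetical usage: a weak 90 Hz ASSR buried in noise (60 s at 1 kHz)
fs, f_mod = 1000, 90.0
t = np.arange(0, 60, 1 / fs)
eeg = 0.1 * np.sin(2 * np.pi * f_mod * t) + np.random.randn(len(t))
print(assr_fratio(eeg, fs, f_mod))  # values >> 1 suggest a response
```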

Tuesday, 22 May: Potpourri
13:25-14:25 Chair: Andrew Dimitrijevic

Artificial neural networks as analysis tool for predicted EEG
data in auditory attention tasks.
Tobias de Taillez (1), Bojana Mirkovic (2), Birger Kollmeier (1), Bernd
T. Meyer (1)
(1) University of Oldenburg, Germany, Department of Medical Physics; (2) University of Oldenburg,
Germany, Department of Neuropsychology
tobias.de.taillez@uni-oldenburg.de

Objectives: To further our understanding of auditory attention decoding based
on analyzing EEG data evoked by continuous speech using artificial neural net-
works. Method: Participants listened to two simultaneously presented stories for an
hour that were simulated using virtual acoustics at ±45° azimuth while EEG was
recorded with 84 channels. EEG data was down-sampled to 250 Hz and band-pass
filtered (1-125 Hz). Speech envelope was extracted and down-sampled to 250 Hz.
An artificial neural network was trained to predict the EEG response related to the
attended and unattended speech envelopes, both of which are used as input feature
to the net. Attention-related auditory processing properties of the net are analyzed
by measuring the impulse response. Spectral characteristics of the simulated EEG
were also analyzed. Outcome: The simulated EEG impulse response of the attended
stream is comparable to the temporal response functions presented in recent liter-
ature and as such can be compared to auditory evoked responses. Also, the impulse
response of the unattended stream shows a suppressed activity at the P2 latency in
contrast to the attended stream’s response. The spectral analysis indicates a gap
in simulated EEG activity at 5 Hz between attended and unattended simulations.
Conclusion: A neural network trained to predict EEG from continuous speech
produces impulse responses that are physiologically plausible since they resemble
classical auditory evoked responses.
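
A minimal PyTorch sketch of the general idea, a small network mapping lagged speech-envelope windows to EEG samples, trained with mean-squared error, is given below. The architecture, sizes and training loop are invented for illustration and are not the authors' network; the "impulse response" probe at the end mirrors the analysis described above only in spirit.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: windows of the lagged envelope -> one EEG sample
# per channel (sizes invented; not the authors' architecture).
n_samples, n_lags, n_channels = 10000, 63, 84
envelope_windows = torch.randn(n_samples, n_lags)
eeg = torch.randn(n_samples, n_channels)

model = nn.Sequential(            # small nonlinear forward model
    nn.Linear(n_lags, 32),
    nn.Tanh(),
    nn.Linear(32, n_channels),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(envelope_windows), eeg)
    loss.backward()
    optimizer.step()

# Probe an "impulse response": feed a unit pulse at each lag position
# and read out the predicted EEG.
with torch.no_grad():
    impulse_responses = model(torch.eye(n_lags))   # (n_lags, n_channels)
```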

Passive functional mapping of language areas using electro-
corticographic signals in humans.
J.R. Swift (1,2,6), W.G. Coon (1,4,5,6), C. Guger (1), P. Brunner (3,6),
M. Bunch (3), T. Lynch (3), B. Frawley (3), A.L. Ritaccio (3,6), G. Schalk
(2,3,6)
(1) g.tec neurotechnology USA, Rensselaer, NY, USA; (2) Dept. of Biomedical Sciences, State Uni-
versity of New York at Albany, Albany, NY, USA; (3) Dept. of Neurology, Albany Medical College,
Albany, NY, USA; (4) Division of Sleep Medicine, Harvard Medical School, Boston, MA, USA; (5)
Dept. of Psychiatry, Massachusetts General Hospital, Boston, MA, USA; (6) National Ctr. for
Adaptive Neurotechnologies, Wadsworth Center, NY State Department of Health, Albany, NY, USA

Objective: To validate the use of passive functional mapping using electrocortico-
graphic (ECoG) signals for identifying receptive language cortex in a large-scale
study.Methods: We mapped language function in 23 patients using high gamma
electrocorticography (ECoG) and using electrical cortical stimulation (ECS) in a
subset of 15 subjects. Results: The qualitative comparison between cortical sites
identified by ECoG and ECS show a high concordance. A quantitative comparison
indicates good sensitivity (95%) but a lower level of specificity (59%). Further anal-
ysis reveals that 82% of all cortical sites identified by ECoG were within 1.5 cm of
a site identified by ECS. Conclusions: These results show that passive functional
mapping reliably localizes receptive language areas, and that there is a substantial
concordance between the ECoG- and ECS-based methods. They also lend a greater
understanding of the differences between ECoG- and ECS-based mappings. This
refined understanding helps to clarify the instances in which the two methods dis-
agree and can explain why neurosurgical practice has established the concept of a
“safety margin.” Significance: Passive functional mapping using ECoG signals pro-
vides a robust, reliable, and fast method for identifying receptive language areas
while eliminating many of the risks and limitations associated with ECS.

Acknowledgements: This work was supported by the NIH (P41-EB018783, P50-
MH109429), the US Army Research Office (W911NF-14-1-0440), and Fondazione
Neurone.
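
Passive high-gamma mapping rests on extracting the high-gamma power envelope per electrode and contrasting task versus baseline. As a hedged illustration (not the authors' pipeline), a band-power extraction step might look like the following; the band edges and filter order are conventional choices, not taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_power(ecog, fs, band=(70.0, 170.0)):
    """High-gamma power envelope per channel (time x channels input).

    fs must exceed twice the band's upper edge; the band edges and the
    4th-order Butterworth filter are assumptions for illustration.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    filtered = filtfilt(b, a, ecog, axis=0)
    return np.abs(hilbert(filtered, axis=0)) ** 2

# Hypothetical usage: 10 s of simulated 1200 Hz ECoG, 8 channels
fs = 1200
ecog = np.random.randn(10 * fs, 8)
hg = high_gamma_power(ecog, fs)
```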

An automated procedure for evaluating auditory brainstem
responses based on dynamic time warping.
Katrin Krumbholz (1), Jessica de Boer (1), Alexander Harding (1,2)
(1) MRC Institute of Hearing Research, School of Medicine, University of Nottingham; (2) School of
Psychology, University of Nottingham
Katrin.Krumbholz@nottingham.ac.uk

Auditory brainstem responses (ABRs) play an important role in diagnosing hearing
loss, and may also indicate “hidden” forms of hearing damage without audiometric
loss [1]. ABR waves are typically evaluated by manually picking the wave peaks and
following troughs. Manual ABR peak picking can be difficult when the responses
are weak, and may become prohibitively labor-intensive with many subjects and
multiple conditions. This study was aimed at designing an automated ABR
peak picking procedure that would mimic manual picking behavior. The proce-
dure uses a form of dynamic time warping (DTW), similar to those used previously
for analyzing speech movements [2]. A nonlinear, twice-differentiable time warping
function was computed by maximizing the correlation between individual ABRs and
appropriate time-warped jack-knife averages. Here, the procedure is demonstrated
on a click ABR data set acquired with the “derived-band” method, which reveals
differential response contributions from specific cochlear regions [3]. The automated
picking results were similar to a gold-standard set of manual picking results, per-
formed independently by three pickers and cross-validated in cases of disagreement.
The DTW procedure was more reliable than a linear procedure involving only time
shifting and scaling.
[1] Liberman & Kujawa (2017) Hear Res 349:138-47. [2] Lucero et al. (1997) J
Speech Lang Hear Res 40:1111-7. [3] Don & Eggermont (1978) J Acoust Soc Am
63:1084-92.

Acknowledgements: This work was funded by the MRC intramural grant
MC_U135097128.
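
For orientation, the classic dynamic time warping recursion is sketched below in Python. Note that it differs from the abstract's method in an essential way: the authors use a nonlinear, twice-differentiable warping function obtained by maximizing correlation, whereas this textbook recursion allows arbitrary monotone step paths; the sketch is illustrative only.

```python
import numpy as np

def dtw_path(x, y):
    """Classic dynamic time warping between two 1-D waveforms.

    Returns the accumulated-cost matrix and the optimal alignment path.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Trace back from (n, m) to recover the warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[1:, 1:], path[::-1]
```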

Wednesday, 23 May: Fundamental EEG
9:00-11:05 Chair: Jonathan Simon

Neurocomputational analysis of statistical inference in the
brain.
Mounya Elhilali (1)
(1) Johns Hopkins University, Department of Electrical and Computer Engineering
mounya@jhu.edu

The brain’s ability to extract statistical regularities from sound is an important tool
in auditory scene analysis, necessary for object detection and recognition, structural
learning (in speech or music), or texture perception. Traditionally, the study of
statistical structure of sound patterns has focused on first-order regularities; partic-
ularly mean and variance which can be easily assessed using physiological measures.
In this talk, we will examine insights from EEG recordings using complex sound pat-
terns and present an integrated neuro-computational analysis of statistical tracking
in the brain.

Low cost, wireless, compressed sensing EEG platform:
fidelity and power tradeoffs.
Bathiya Senevirathna (1,2), Pamela Abshire (1,2)
(1) Department of Electrical and Computer Engineering, University of Maryland, College Park, MD,
USA; (2) Institute for Systems Research, University of Maryland, College Park, MD, USA
pabshire@umd.edu

We discuss the fidelity and power tradeoffs for a low cost, mobile EEG system with
on board compressed sensing. The EEG system comprises an analog front end, mi-
crocontroller, and wireless transceiver. A novel implementation was pursued in order
to reduce costs (about $200 USD) and support local signal compression, allowing more
channels to be transmitted wirelessly. The hardware allows six channels of raw data
to be transmitted wirelessly. In order to transmit 16 channels, a minimum compres-
sion ratio of 2.67 is required. Signal compression was performed using single and
multi-channel compressed sensing (CS). The rakeness CS approach shows improved
performance for higher compression rates. We measured the power consumption
of the system under a variety of conditions and developed simple models for the
power consumption of each component. We find that the costs of transmission and
computation are roughly equivalent. Reconstruction performance depends strongly
on the compression ratio and weakly on the method of spatiotemporal encoding.
Performance was evaluated using spontaneous and evoked EEG datasets recorded
during moderate movement. The EEG system provides a low cost platform that is
useful for obtaining high quality, ambulatory recordings of multi-channel EEG data.

Acknowledgements: We acknowledge support from the University of Maryland Brain
and Behavior Initiative’s 2017 seed grant program.
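
To illustrate the compressed sensing idea (not the authors' hardware or reconstruction code), the sketch below senses a DCT-sparse signal with a random matrix at roughly the 2.67 compression ratio mentioned above and recovers it with orthogonal matching pursuit from scikit-learn. The block length, sparsity level and basis are invented for the example.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n = 512                       # samples per block (invented)
m = int(n / 2.67)             # measurements at a ~2.67 compression ratio

# A signal that is sparse in the DCT domain (a few oscillatory components)
coeffs = np.zeros(n)
coeffs[rng.choice(n, size=8, replace=False)] = rng.standard_normal(8)
signal = idct(coeffs, norm='ortho')

# Random sensing on the "device"; Theta maps DCT coefficients to
# measurements, so recovery becomes a sparse regression problem.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ signal
Theta = Phi @ idct(np.eye(n), norm='ortho', axis=0)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False)
recovered = idct(omp.fit(Theta, y).coef_, norm='ortho')
print(np.corrcoef(signal, recovered)[0, 1])   # near 1 for sparse signals
```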

Using Open Science to accelerate advancements in auditory
EEG signal processing.
Robert Oostenveld (1)
(1) Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, NL
r.oostenveld@donders.ru.nl

My presentation gives an overview of how signal processing methods for EEG research
have evolved and how open research methods, such as Open Source toolboxes, have
contributed. I will discuss how experimental questions, research methodologies and
analysis tools develop hand-in-hand with experimental research. Using the Field-
Trip toolbox as an example, I will provide arguments for more open research methods.
Since open science and open data are not only expected from us by our funding agen-
cies, but actually start making more and more sense from the perspective of the
individual researcher, I will introduce BIDS as a new initiative to organize and share
EEG data.

The origins of EEG (hint: I don’t know and neither does
anyone else, but we’re working on it).
Mike Cohen (1,2)
(1) Radboud University Medical Center; (2) Donders Centre for Neuroscience
mikexcohen@gmail.com

EEG has been one of the most widely used tools for investigating human brain function
and dysfunction for nearly a century. It is remarkable that we have almost no idea
where this signal comes from and how to interpret it. To be clear, we know several
things with certainty: EEG comes from the head (brain+muscles+noise); the neu-
ral part of EEG comes from populations of temporally coherent and geometrically
aligned neurons; spatial, spectral, and temporal features of EEG are linked to cog-
nitive, perceptual, and behavioral phenomena; and aberrant EEG patterns can be
used to diagnose clinical disorders such as epilepsy or tumors. From a physics and
engineering perspective, “where EEG comes from” is solved to a reasonable degree
of accuracy (Maxwell’s equations and ADC/amplifiers). But I will argue that from
a neuroscience perspective, we have no clue how to interpret features of cognitive
EEG in terms of the neural circuitry that implements the computational building
blocks of cognition. My lab is now heavily focused on empirically addressing this
question using large-scale and multi-scale recordings in rodents. I don’t (yet) have
any simple answers, but I will outline our nascent adventures in trying to address
what I consider to be one of the most important questions in 21st century neu-
roscience: How do dynamics at one spatiotemporal scale affect dynamics at other
spatiotemporal scales, and what are the effects of such cross-scale interactions
for cognition?

Dynamic Attending Theory: testing a key prediction of neu-
ral entrainment in MEG.
Jacques Pesnot-Lerousseau (1), Daniele Schön (1)
(1) Aix Marseille University, Inserm, INS, Institut Neuroscience Système, Marseille, France
jacques.pesnot@hotmail.fr

The Dynamic Attending Theory, proposed by M.R. Jones in the 1980s, is one of the
most influential theories in the auditory research field. It is based on the concept of
entrainment, and characterized by two fundamental properties: (H1) the coupling
of internal oscillators in phase and frequency with rhythmical external stimuli and
(H2) a self-sustained oscillatory activity, even after the disappearance of the stim-
ulus. Showing (H2) is crucial because it allows one to disentangle simple evoked
activity from proper entrainment. While some studies have already tested (H2) in
human behavior, none has directly addressed this question from the neural point
of view. We recorded neural brain activity with MEG in healthy adults and sEEG
in epileptic patients. Auditory stimuli consisted of 16 tones, with a fixed SOA of
390 ms. Classical analysis tools, namely source-reconstruction and time-frequency
decomposition, allowed us to reproduce the results of Fujioka et al. (2012) during
the presentation of the sounds. These classical analyses failed to capture systematic
biases in the silence following the presentation of the sounds, a time window crucial
for testing (H2). Strong inter-individual differences and a poor signal-to-noise ratio led
us to use more sophisticated analyses. Preliminary results indicate that fitting encoding
models, in particular temporal response functions, does not suffer from these
limits and reveals systematic oscillatory activity in the silence, consistent with (H2).

Wednesday, 23 May: Age/attention/effort
11:35-12:40 Chair: Jan Wouters

Disentangling the effects of age and hearing loss on neural
tracking of the speech envelope.
Lien Decruy (1), Jonas Vanthornhout (1), Tom Francart (1)
(1) ExpORL, Dept. Neurosciences, KU Leuven
lien.decruy@kuleuven.be

Objectives: Since the speech envelope is an important cue for speech understanding,
we believe that measuring neural tracking of the envelope can offer objective and
complementary information to behavioral speech audiometry. As the clinical pop-
ulation mainly consists of older hearing impaired persons, our aim is to study the
effects of age and hearing loss on the processing of speech. Methods: We recorded
the EEG of 49 normal-hearing adults (17-82 years) and 6 adults with sensorineural
hearing loss who were provided with linear amplification. During the EEG, partic-
ipants were asked to recall Matrix sentences at multiple SNRs to obtain a direct
link with behavioral speech audiometry. Two maskers were used: speech weighted
noise and a competing talker. Envelope tracking was estimated by training a linear
decoder to reconstruct the envelope from EEG and correlating it with the original
envelope. To compare with related studies, tone pips with a 500 Hz carrier were
presented at a rate of 1.92 Hz to study the processing of non-speech stimuli. Con-
clusions: Higher envelope tracking was found for older adults at intelligibility levels
>40%. This may suggest that older adults use more resources starting from levels
around the speech reception threshold. Furthermore, hearing loss seems to result
in an increase in envelope tracking in addition to aging. The responses to the tone
pips, however, suggest the opposite as we found lower amplitudes for older adults
and no effect of hearing loss.

Acknowledgements: This project is funded by the ERC (637424) and KU Leuven
Special Research Fund (OT/14/119). Research of Jonas Vanthornhout is funded by
a PhD grant of the Research Foundation Flanders (FWO).
