Research Article
Open access
Published Online: 23 April 2013

The Virtual Brain Integrates Computational Modeling and Multimodal Neuroimaging

Publication: Brain Connectivity
Volume 3, Issue Number 2

Abstract

Brain function is thought to emerge from the interactions among neuronal populations. Apart from traditional efforts to reproduce brain dynamics from the micro- to macroscopic scales, complementary approaches develop phenomenological models of lower complexity. Such macroscopic models typically generate only a few selected—ideally functionally relevant—aspects of the brain dynamics. Importantly, they often allow an understanding of the underlying mechanisms beyond computational reproduction. Adding detail to these models will widen their ability to reproduce a broader range of dynamic features of the brain. For instance, such models allow for the exploration of consequences of focal and distributed pathological changes in the system, enabling us to identify and develop approaches to counteract those unfavorable processes. Toward this end, The Virtual Brain (TVB) (www.thevirtualbrain.org), a neuroinformatics platform with a brain simulator that incorporates a range of neuronal models and dynamics at its core, has been developed. This integrated framework allows the model-based simulation, analysis, and inference of neurophysiological mechanisms over several brain scales that underlie the generation of macroscopic neuroimaging signals. In this article, we describe how TVB works, and we present the first proof of concept.

Introduction

Cognition is thought to emerge through integrated activity of neuronal populations throughout the brain, following the principles of segregation and integration. Ongoing reconfigurations of distantly interacting local neuronal subsets or modules yield the continuous flow of perceptual, cognitive, and behavioral functions. Observing the signals emitted by neurons and neuronal populations is a common approach in neuroscience for gaining insight into brain function. While details about the physiological behavior at microscopic levels are important for understanding neuronal computation, it is at the whole-brain or large scale where the actual integration occurs and cognition emerges. Noninvasive neuroimaging methods such as electroencephalography (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and positron-emission tomography provide a global macroscopic picture of neuronal dynamics. Macroscopic signal features emerge from the interaction of neuronal populations at local and global scales. Their explicit link to function—the neural code—still remains unclear. Instead of trying to understand neuronal interactions in every detail, a reasonable first step is to model functionally relevant dynamics at coarser levels.
In this article, we describe a newly developed neuroinformatics platform—The Virtual Brain (TVB; www.thevirtualbrain.org)—and we provide the proof of concept for our proposed approach. The platform integrates the large-scale structure of brain connectivity spanning salient brain regions, each being equipped with a regional neuronal model that is able to capture relevant features of brain activity at the mesoscopic scale, that is, the scale of cortical columns, nuclei, and populations comprising up to several hundreds of neurons. Thus, the regional dynamics can be evaluated in the context of long-range interactions, enabling the principles of segregation and integration to be modeled explicitly.
Model-based multimodal data integration approaches have many advantages over conventional approaches such as direct data fusion or converging evidence (Horwitz and Poeppel, 2002; Valdes-Sosa et al., 2009). Direct data fusion aims for the direct comparison of data sets from different imaging modalities using mathematical or statistical assessment criteria. However, these approaches often rest on the assumption of identical neuronal signal generators or of compatible signal features across modalities. Spatiotemporal source activity patterns need not overlap across modalities, nor do high statistical correlations between activation levels necessarily point to the true site of the signal source (Schulz et al., 2004). Model-based approaches, on the other hand, are able to integrate information from different modalities in a complementary manner, while avoiding limitations arising from purely statistical comparisons that do not incorporate structural and dynamical knowledge about the distinct underlying neurophysiological sources that generate the respective signals. Thus, in our view, the most reasonable way to integrate multimodal neuroimaging data together with existing knowledge about brain functioning and physiology is by means of a computational model that describes the emergence of high-level phenomena on the basis of bottom-up interactions between the elementary brain-processing units.
Large-scale neural network models can be built to incorporate prior knowledge about brain physiology and can be combined with forward models that allow the simulation of neuroimaging modalities (e.g., fMRI and EEG). This approach promotes the direct formulation and evaluation of hypotheses on how cognitive processes are generated by neuronal interaction. This can take the form of specific features of the model network structure, that is, interaction strength and delay between neural elements, or biophysical properties such as resting potentials, membrane permeability, plasticity effects, and so on. The data generated by these models can be directly compared to the signals from the respective imaging modality, and model parameters can be adjusted to the observed data. In an iterative manner, the fit between model output and empirical data is evaluated and used to improve the unobservable model components such as parameter settings or the model structure itself. The careful selection of local mesoscopic models and the iterative refinement of model network structure and parameter optimization lead to systematic improvements in model validity and thereby in knowledge about brain physiology and mechanisms underlying cognitive processing. Cognitive and behavioral experiments can then be interpreted in the light of model behavior that directly points to the underlying neuronal processes. For example, analyzing the variation of parameter settings in response to experimental conditions can deliver insights on the role of the associated structural or dynamical aspects in a certain cognitive function. Further, the relevance of neurobiological features for the emergence of self-organized criticality and neuronal information processing can be tested.
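The iterative loop described above (simulate with candidate parameters, score the output against observed data, keep the best setting) can be made concrete with a deliberately minimal sketch. The damped-oscillator "model" and its frequency grid below are illustrative assumptions; in TVB the simulation step would be a full network run and the score a functional connectivity comparison.

```python
import numpy as np

# Toy version of the iterative fit loop: simulate with a candidate parameter,
# score against "observed" data, keep the best setting. The damped oscillator
# and the unknown frequency are purely illustrative stand-ins.
rng = np.random.default_rng(7)
t = np.linspace(0, 10, 500)

def simulate(freq):
    """Model output for a candidate parameter value (here: oscillation frequency)."""
    return np.exp(-0.1 * t) * np.sin(2 * np.pi * freq * t)

# Surrogate "empirical" data: the model at freq=1.3 plus measurement noise.
observed = simulate(1.3) + 0.05 * rng.standard_normal(t.size)

# Coarse grid search: score every candidate by its correlation with the data.
candidates = np.linspace(0.5, 2.0, 61)
scores = [np.corrcoef(simulate(f), observed)[0, 1] for f in candidates]
best_freq = candidates[int(np.argmax(scores))]
```

In practice the scored quantity, the search strategy (grid search, gradient-based, or Bayesian), and the stopping criterion would all be adapted to the model and data at hand.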
In summary, the model-based nexus of experimental and theoretical evidence allows the systematic inference of determinants of the generation of neuronal dynamics. Model-based integration of different neuroimaging modalities enables the exploration of consequences of biophysically relevant parameter changes in the system as they occur in changing brain states or during pathology. Hence, TVB helps us to uncover (1) the fundamental neuronal mechanisms that give rise to typical features of brain dynamics and (2) how intersubject variability in the brain structure results in differential brain dynamics.
The purpose of this article is to describe the main concept and to provide a proof of principle based on empirical data and example simulations. The article is structured as follows:
In the first section Modeling Large-Scale Brain Dynamics, we describe the principles governing the construction of large-scale brain models in TVB. This includes a description of the neuronal source model and the forward models used to translate the signals from individual sources into brain imaging signals. We describe the local mesoscopic models representing the local dynamics of the brain regions as well as the anatomical connectivity constraints coming from tract length, capacity, and directionality. Finally, we describe how the mean field activity is translated into signals comparable to those measured by brain-imaging methods such as EEG and fMRI.
Next, in the section Identification of Spatiotemporal Motifs, we describe the proposed steps for the integration of functional empirical and simulated data. This includes establishing spatial registration between data sets and reducing the parameter space to minimize the computation time. We detail the employed parameter estimation algorithm and discuss the possibilities provided by Building a dictionary of dynamical regimes, which lists the collection of parameter settings that produce different large-scale dynamics.
Finally, in the section Benefits of TVB Platform, we discuss examples of how TVB can be used to obtain insights into brain function that could not be obtained by either empirical or theoretical approaches alone and to help formulate new testable hypotheses.

Modeling Large-Scale Brain Dynamics

A choice of local mesoscopic dynamics

The brain contains ∼10^11 neurons linked by ∼10^15 connections, with each neuron receiving inputs on the order of 10^5. The complex and highly nonlinear neuronal interaction patterns are only poorly understood, and the number of degrees of freedom of a microscopic model attempting to describe every neuron, every connection, and every interaction is astronomically large and therefore far too high to fit directly to the recorded macroscopic data. The gap between the microscopic sources of scalp potentials at the cell membranes and the recorded macroscopic potentials can be bridged by an intermediate mesoscopic description (Nunez and Silberstein, 2000). Mesoscopic dynamics describe the activity of populations of neurons organized as cortical columns or subcortical nuclei. Several features of mesoscopic and macroscopic electric behavior, for example, dynamic patterns such as synchrony of oscillations or evoked potentials, show good correspondence to certain cognitive functions, for example, resting-state activity, sleep patterns, or event-related activity.
Common assumptions in the neural mass or mean-field modeling are that explicit structural features or temporal details of neuronal networks (e.g., spiking dynamics of single neurons) are irrelevant for the analysis of complex mesoscopic dynamics, and the emergent collective behavior is only weakly sensitive to the details of individual neuron behavior (Breakspear and Jirsa, 2007). Basic mean field models capture changes of the mean firing rate (Brunel and Wang, 2003), whereas more sophisticated mean field models account for parameter dispersion in the neurons and the subsequent richer behavioral repertoire of the mean field dynamics (Assisi et al., 2005; Jirsa and Stefanescu, 2011; Stefanescu and Jirsa, 2008, 2011). These approaches demonstrate a relatively new concept from statistical physics that macroscopic physical systems obey laws that are independent of the details of the microscopic constituents they are built of (Haken, 1975). These and related ideas have been exploited in neurosciences (Buzsaki, 2006; Kelso, 1995). Thus, our main interest lies in deriving the mesoscopic laws that drive the observed dynamical processes at the macroscopic scale in a systematic manner.
In the framework of TVB, we incorporate biologically realistic large-scale coupling of neural populations at salient brain regions that is mediated by long-range neural fiber tracts as identified with diffusion tensor imaging (DTI)-based tractography together with mean-field models as local node models. Various mean-field models are available in TVB, reproducing typical features of the mesoscopic population dynamics. For the simulations in the present work, we use the Stefanescu–Jirsa model (Stefanescu and Jirsa, 2008), which is based on the mean-field dynamics of a heterogeneous network of Hindmarsh–Rose neurons capable of displaying various spiking and bursting behaviors.
The Stefanescu–Jirsa neural population model provides a low-dimensional description of complex neural population dynamics, including synchronization and random firings of neurons, multiclustered behaviors, bursting, and oscillator death. The conditions under which these behaviors occur are shaped by specific parameter settings, including, for example, connectivity strengths and neuronal membrane excitability. A characteristic feature of the Stefanescu–Jirsa model is that it takes into account the parameter dispersion (i.e., nonidentical neurons in the population), giving rise to a rich repertoire of behaviors.
A neural population model \(f ( x_i ( t ) ) = { \dot x}_i ( t )\) describes the local dynamics at each of the nodes i of the large-scale network. The six coupled first-order differential equations of the Stefanescu–Jirsa model are a reduced representation of the mean-field dynamics of populations of fully connected neurons that are clustered into excitatory and inhibitory pools (see Fig. 1A).
FIG. 1. (A) Schematic of the local source node network architecture underlying mean field modeling. Excitatory (red circles) and inhibitory (black squares) neurons occupy a volume (left). Couplings are indicated by black connecting lines. Conceptually, both neuron types can be clustered in two subpopulations (middle). Each subpopulation can then be characterized by a mean behavior under certain conditions (right) and a mean connectivity (K11, K21, and K12) [Figure courtesy: Stefanescu and Jirsa (2008)]. (B) Effects of changing the coupling strength parameter K11. Left: mainly excitatory coupling (n=0.5) leads to synchronization between neurons in the excitatory (red) and inhibitory (black) subpopulations. Right: mainly inhibitory coupling (n=1.5) leads to small oscillations in the inhibitory subpopulation and chaotic oscillations in the excitatory neurons [Figure courtesy: Stefanescu and Jirsa (2008)]. (C) Dynamical regimes of a dominantly inhibitory neural population: contour map of the mean-field amplitude calculated for a fixed ratio n=2.5 of inhibitory and excitatory coupling. Simulation databases are generated by sweeping values of n, K21/K11, K12, and σ and storing the resulting waveforms [Figure courtesy: Stefanescu and Jirsa (2008)].
Since the reduced system is composed of three components or modes, each variable and parameter is either a column vector with three rows or a 3×3 square matrix:
\begin{align*} & { \dot {x}}_i = y_i - ax_i^3 + bx_i^2 - z_i + [ K_{11} ( X_1 - x_i ) - K_{12} ( X_2 - x_i ) ] + IE_i \\& { \dot y}_i = c_i - dx_i^2 - y_i \\& { \dot z}_i = rsx_i - rz_i - m_i \\& { \dot w}_i = v_i - aw_i^3 + bw_i^2 - u_i + K_{21} ( X_1 - w_i ) + II_i \\& { \dot v}_i = h_i - p_iw_i^2 - v_i \\& { \dot u}_i = rsw_i - ru_i - n_i \tag{1} \end{align*}
This dynamical system describes the state evolution of two coupled populations of excitatory (variables x, y, and z) and inhibitory neurons (variables w, v, and u). In its original single-neuron formulation—which is known for its good reproduction of burst and spike activity and other empirically observed patterns—the variable x(t) encodes the neuron membrane potential at time t, while y(t) and z(t) account for the transport of ions across the membrane through ion channels. The spiking variable y(t) accounts for the flux of sodium and potassium through fast channels, while z(t), called the bursting variable, accounts for the inward current through slow-ion channels (Hindmarsh and Rose, 1984). Although it is not justified to directly extend this interpretation of variables and parameters from the single-neuron formulation to their mean-field counterparts, the mean-field terms are nevertheless related to some of the underlying biophysical properties of the system [see Stefanescu and Jirsa (2008)]. Several biophysical parameters are permanently subject to ongoing fluctuations that are directly related to qualitative changes in the dynamics of the network. It is noteworthy that the mathematical structure of the single-neuron model is reflected in the mathematical structure of the population model. The population parameters comprise the contributions of the couplings and the distribution of the parameters.
Each of the three modes of the Stefanescu–Jirsa model (Eq. 1) reflects distinct dynamical behaviors depending on the range of membrane excitabilities of the neuron cluster (Stefanescu and Jirsa, 2008).
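To make Equation 1 concrete, the following sketch integrates a single node of the system with a forward Euler scheme. The values a=1, b=3, d=5, r=0.006, s=4 are the classic Hindmarsh–Rose settings; the per-mode constants, the coupling strengths K11, K12, K21, the input currents, and the mode weights below are placeholder assumptions for illustration, not the dispersion-derived values of the original Stefanescu–Jirsa model.

```python
import numpy as np

# Forward-Euler sketch of the Stefanescu-Jirsa system (Eq. 1) for one node.
# a, b, d, r, s follow the classic Hindmarsh-Rose settings; everything else
# (per-mode constants, couplings, inputs, mode weights) is an assumed placeholder.
a, b, d, r, s = 1.0, 3.0, 5.0, 0.006, 4.0
K11, K12, K21 = 0.5, 0.15, 0.15
c_i = np.array([1.0, 1.0, 1.0])          # per-mode constants of the y-equation
h_i = np.array([1.0, 1.0, 1.0])          # inhibitory counterpart of c_i
p_i = np.array([5.0, 5.0, 5.0])          # inhibitory counterpart of d
m_i = np.full(3, -1.6 * r * s)           # chosen so the z-equation matches HR bursting
n_i = np.full(3, -1.6 * r * s)
IE, II = 3.1, 2.8                        # input currents (assumed)
mode_w = np.array([0.6, 0.3, 0.1])       # assumed mode weights for the mean fields

def sj_rhs(state):
    x, y, z, w, v, u = state             # each entry is a length-3 vector (one per mode)
    X1, X2 = mode_w @ x, mode_w @ w      # excitatory / inhibitory mean fields
    dx = y - a * x**3 + b * x**2 - z + K11 * (X1 - x) - K12 * (X2 - x) + IE
    dy = c_i - d * x**2 - y
    dz = r * s * x - r * z - m_i
    dw = v - a * w**3 + b * w**2 - u + K21 * (X1 - w) + II
    dv = h_i - p_i * w**2 - v
    du = r * s * w - r * u - n_i
    return np.stack([dx, dy, dz, dw, dv, du])

dt, steps = 0.01, 5000
rng = np.random.default_rng(0)
state = rng.standard_normal((6, 3)) * 0.1
trace = np.empty((steps, 3))
for t in range(steps):
    state = state + sj_rhs(state) * dt   # deterministic Euler step; noise could be added
    trace[t] = state[0]                  # record the excitatory mean potential x, per mode
```

A stochastic term and a stiffer integration scheme (e.g., Heun) would be added for production runs; the sketch only shows how the six coupled equations map onto code.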

Full-brain model: coupling mesoscopic models with individual connectivity features

When moving up to the large-scale network, the population dynamics described by Equation 1 are connected using coupling parameters derived from DTI tractography. Each node of the large-scale network is now governed by its own intrinsic dynamics, generated by the six equations, plus the activity of the coupled nodes, which is summed into the local mean-field potential \(x_i\). This yields the following evolution equation for the time course \(t = 1 , \ldots , T\) of the network mean-field potential \(x_i(t)\) at node i:
\begin{align*}x_i ( t + 1 ) = x_i ( t ) + f ( x_i ( t ) ) dt + c \mathop\sum_{j = 1}^N w_{ij}x_j ( t - \Delta t_{ij} ) \,dt+ \eta ( t ) . \tag{2}\end{align*}
The equation describes the numerical integration of a network of N connected neural populations \(i =1, \ldots , N\). The large-scale network is described by connection weights \(w_{ij}\), where \(w_{ij}\) quantifies the influence that node j exerts on node i. Connection weights are derived from fiber-tract sizes. The time delays for information transmission \(\Delta t_{ij} = d_{ij} / \nu\) depend on the distance matrix \(d_{ij}\) and a constant conduction speed ν. Weights are scaled by a constant c. Additive noise is introduced by the term η(t). The coupling of the large-scale network is constrained by individual DTI-based tractography data combined with directionality data from the CoCoMac database [http://cocomac.org (Bezgin et al., 2011)]; examples of structural connectivity (SC) matrices of 96 regions reflecting fiber-tract lengths, fiber-tract sizes, and directionality of the neuronal pathways are shown in Figure 2A, and the anatomical names of the 96 regions are listed in Table 1.
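Equation 2 can be sketched numerically as an Euler scheme with a history buffer for the transmission delays. In the sketch below, the local dynamics, the weights, and the tract lengths are random stand-ins for the Stefanescu–Jirsa system and the DTI-derived matrices; only the delayed-coupling bookkeeping mirrors the equation.

```python
import numpy as np

# Euler integration of a delayed network (Eq. 2) with a history buffer.
# W, D, and the local dynamics f are illustrative stand-ins, not real data.
rng = np.random.default_rng(42)
N, dt, steps = 8, 0.1, 2000
c, speed, sigma = 0.05, 10.0, 0.01                 # global coupling, conduction speed, noise

W = rng.random((N, N)); np.fill_diagonal(W, 0.0)   # connection weights w_ij
D = rng.uniform(10, 80, (N, N))                    # fiber-tract lengths d_ij
delay_steps = np.rint(D / speed / dt).astype(int)  # delta_t_ij = d_ij / nu, in Euler steps
max_delay = delay_steps.max()

def f(x):
    return -x + np.tanh(x)                         # toy local dynamics, stands in for Eq. (1)

# History buffer: one row per time step, seeded with small random activity.
buf = rng.standard_normal((max_delay + steps + 1, N)) * 0.01

for t in range(max_delay, max_delay + steps):
    x = buf[t]
    # delayed[i, j] = x_j(t - delta_t_ij), via fancy indexing into the buffer
    delayed = buf[t - delay_steps, np.arange(N)]
    coupling = c * np.sum(W * delayed, axis=1)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(N)
    buf[t + 1] = x + (f(x) + coupling) * dt + noise

activity = buf[max_delay:]                          # (steps + 1, N) network time series
```

The history buffer must be at least as long as the largest delay; a ring buffer would be used instead when simulations are long relative to memory.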
FIG. 2. (A) Exemplary connectivity strength and distance matrices of a single subject obtained by diffusion tensor imaging tractography and directionality matrix of the macaque obtained from the CoCoMac database. Shown are normalized capacities between different source nodes that represent the weight of individual node interactions estimated by the size of the fiber tract connecting the nodes (left) and distances between nodes (middle) estimated by the fiber-tract length between source node voxels. In addition, directionality information (black: explicitly no connection) obtained from macaque axon-tract tracing data of the CoCoMac database is shown (right). Area names of the 96 regions are listed in Table 1; the ordering of regions is identical in the table and the matrices. (B) Empirical and simulated functional connectivity (FC) of functional magnetic resonance imaging (fMRI) data. By a coarse numerical parameter space search, we identified parameter settings that yielded functional blood oxygen level-dependent (BOLD) connectivity that significantly correlated with the FC of empirical fMRI data (in the shown example, a correlation of r=0.47 has been achieved). The window length of the empirical BOLD data used to calculate FC was 38 scans=73.72 sec (repetition time=1.94 sec; sampling rate 0.5155 Hz). Altogether, the empirical BOLD data collected for a single subject had a length of 666 scans. We slid the window for the calculation of the BOLD FC over the entire empirical time series in steps of a single volume, resulting in 666−38=628 different empirical FCs per subject. Simulated BOLD data were obtained by convolving a simulated mean field time series of the excitatory population (duration: 200 sec; sampled at 250 Hz) with the hemodynamic response function. This signal was subsequently downsampled to the fMRI scan frequency, resulting in a time series of 86 samples. Simulated FC was calculated over an identical time window of 38 scans as in the empirical data.
By sliding the window over the simulated time series in one-scan steps, we obtained 86−38=48 different simulated FCs. All simulated FCs were compared to all empirical FCs by calculating correlation coefficients, resulting in 628×48=30,144 r values per subject and parameter setting. In this initial coarse parameter space search, we tested 78 different parameter settings by varying the parameters GC, structural connectivity (SC), v_cond, K11, K21, K12, and the six noise terms (abbreviations explained in Table 2). See examples of the distributions of resulting r values in (C). The parameter values yielding the shown example of simulated data are listed in Table 2. The area names of the 96 regions for which FC is shown can be found in Table 1, in the identical order as depicted here. (C) Individual structure predicts function. BOLD FC matrices of nine subjects (represented by the different panels) were correlated with simulated FC matrices based on the SC of a single subject. Shown are the distributions of correlation coefficients (CCs) for 30,144 comparisons (628 empirical FCs×48 simulated FCs). The red-boxed panel contains the data where SC and FC come from the same subject, resulting in notably higher CCs.
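The sliding-window comparison described in the caption can be sketched as follows. The surrogate "empirical" and "simulated" time series here are random; only the window bookkeeping (628 empirical and 48 simulated windows, 628 × 48 r values) mirrors the text.

```python
import numpy as np

# Sliding-window FC comparison: FC matrices over fixed-length windows, then
# every simulated window correlated with every empirical window.
rng = np.random.default_rng(0)
n_regions, win = 96, 38
emp = rng.standard_normal((666, n_regions))   # surrogate empirical BOLD (666 scans)
sim = rng.standard_normal((86, n_regions))    # surrogate simulated BOLD (86 samples)

def sliding_fc(ts, win):
    """Upper-triangle FC vector for every window start position."""
    iu = np.triu_indices(ts.shape[1], k=1)
    out = []
    for start in range(ts.shape[0] - win):
        fc = np.corrcoef(ts[start:start + win].T)   # (regions, regions) correlation
        out.append(fc[iu])
    return np.array(out)

emp_fc = sliding_fc(emp, win)    # 666 - 38 = 628 empirical FC vectors
sim_fc = sliding_fc(sim, win)    # 86 - 38 = 48 simulated FC vectors

# Correlate each simulated FC with each empirical FC: 628 x 48 r values.
r = np.corrcoef(np.vstack([emp_fc, sim_fc]))[:len(emp_fc), len(emp_fc):]
best_r = r.max()
```

With random surrogates the best r stays near chance level; with real data, the maximum over windows and parameter settings is the quantity reported in the text (r=0.47 in the shown example).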
Table 1. Area Names of the 96 Regions Represented in the Model and Their Abbreviations
No. | Area name | Abbreviation
1 | Temporal polar cortex | RM-TCpol_R
2 | Superior temporal cortex | RM-TCs_R
3 | Amygdala | RM-Amyg_R
4 | Orbitoinferior prefrontal cortex | RM-PFCoi_R
5 | Anterior insula | RM-IA_R
6 | Orbitomedial prefrontal cortex | RM-PFCom_R
7 | Central temporal cortex | RM-TCc_R
8 | Orbitolateral prefrontal cortex | RM-PFCol_R
9 | Inferior temporal cortex | RM-TCi_R
10 | Parahippocampal cortex | RM-PHC_R
11 | Gustatory cortex | MM82a-G_R
12 | Ventrolateral premotor cortex | RM-PMCvl_R
13 | Anterior visual area, ventral part | RM-VACv_R
14 | Posterior insula | RM-Ip_R
15 | Prefrontal polar cortex | RM-PFCpol_R
16 | Hippocampus | RM-HC_R
17 | Subgenual cingulate cortex | RM-CCs_R
18 | Ventrolateral prefrontal cortex | RM-PFCvl_R
19 | Visual area 2 (secondary visual cortex) | RM-V2_R
20 | Medial prefrontal cortex | RM-PFCm_R
21 | Ventral temporal cortex | RM-TCv_R
22 | Anterior visual area, dorsal part | RM-VACd_R
23 | Visual area 1 (primary visual cortex) | RM-V1_R
24 | Centrolateral prefrontal cortex | RM-PFCcl_R
25 | Secondary auditory cortex | RM-A2_R
26 | Retrosplenial cingulate cortex | RM-CCr_R
27 | Posterior cingulate cortex | RM-CCp_R
28 | Anterior cingulate cortex | RM-CCa_R
29 | Secondary somatosensory cortex | RM-S2_R
30 | Primary somatosensory cortex | RM-S1_R
31 | Primary auditory cortex | RM-A1_R
32 | Primary motor cortex | RM-M1_R
33 | Inferior parietal cortex | RM-PCi_R
34 | Medial parietal cortex | RM-PCm_R
35 | Dorsomedial prefrontal cortex | RM-PFCdm_R
36 | Intraparietal cortex | RM-PCip_R
37 | Superior parietal cortex | RM-PCs_R
38 | Frontal eye field | RM-FEF_R
39 | Dorsolateral prefrontal cortex | RM-PFCdl_R
40 | Medial premotor cortex | RM-PMCm_R
41 | Dorsolateral premotor cortex | RM-PMCdl_R
42 | Thalamic ROI with major frontal connections | TM-F_R
43 | Thalamic ROI with major temporal connections | TM-T_R
44 | Thalamic ROI with major occipitoparietal connections | TM-OP_R
45 | Caudate nucleus | BG-Cd_R
46 | Putamen | BG-Pu_R
47 | Pallidum | BG-Pa_R
48 | Accumbens nucleus | BG-Acc_R
49 | Temporal polar cortex | RM-TCpol_L
50 | Superior temporal cortex | RM-TCs_L
51 | Amygdala | RM-Amyg_L
52 | Orbitoinferior prefrontal cortex | RM-PFCoi_L
53 | Anterior insula | RM-IA_L
54 | Orbitomedial prefrontal cortex | RM-PFCom_L
55 | Central temporal cortex | RM-TCc_L
56 | Orbitolateral prefrontal cortex | RM-PFCol_L
57 | Inferior temporal cortex | RM-TCi_L
58 | Parahippocampal cortex | RM-PHC_L
59 | Gustatory cortex | MM82a-G_L
60 | Ventrolateral premotor cortex | RM-PMCvl_L
61 | Anterior visual area, ventral part | RM-VACv_L
62 | Posterior insula | RM-Ip_L
63 | Prefrontal polar cortex | RM-PFCpol_L
64 | Hippocampus | RM-HC_L
65 | Subgenual cingulate cortex | RM-CCs_L
66 | Ventrolateral prefrontal cortex | RM-PFCvl_L
67 | Visual area 2 (secondary visual cortex) | RM-V2_L
68 | Medial prefrontal cortex | RM-PFCm_L
69 | Ventral temporal cortex | RM-TCv_L
70 | Anterior visual area, dorsal part | RM-VACd_L
71 | Visual area 1 (primary visual cortex) | RM-V1_L
72 | Centrolateral prefrontal cortex | RM-PFCcl_L
73 | Secondary auditory cortex | RM-A2_L
74 | Retrosplenial cingulate cortex | RM-CCr_L
75 | Posterior cingulate cortex | RM-CCp_L
76 | Anterior cingulate cortex | RM-CCa_L
77 | Secondary somatosensory cortex | RM-S2_L
78 | Primary somatosensory cortex | RM-S1_L
79 | Primary auditory cortex | RM-A1_L
80 | Primary motor cortex | RM-M1_L
81 | Inferior parietal cortex | RM-PCi_L
82 | Medial parietal cortex | RM-PCm_L
83 | Dorsomedial prefrontal cortex | RM-PFCdm_L
84 | Intraparietal cortex | RM-PCip_L
85 | Superior parietal cortex | RM-PCs_L
86 | Frontal eye field | RM-FEF_L
87 | Dorsolateral prefrontal cortex | RM-PFCdl_L
88 | Medial premotor cortex | RM-PMCm_L
89 | Dorsolateral premotor cortex | RM-PMCdl_L
90 | Thalamic ROI with major frontal connections | TM-F_L
91 | Thalamic ROI with major temporal connections | TM-T_L
92 | Thalamic ROI with major occipitoparietal connections | TM-OP_L
93 | Caudate nucleus | BG-Cd_L
94 | Putamen | BG-Pu_L
95 | Pallidum | BG-Pa_L
96 | Accumbens nucleus | BG-Acc_L
The order of names represents the order of nodes in the connectivity matrices depicted throughout the article.
The brain connectivity of TVB distinguishes region-based and surface-based connectivity. In the former case, the networks comprise discrete nodes and connectivity, in which each node models the neural population activity of a brain region, and the connectivity is composed of inter-regional fibers. Region-based networks typically contain 30–200 network nodes. In the latter case, the cortical and subcortical areas are modeled on a finer scale by 5000–150,000 points, in which each point represents a neural population model. This approach allows a detailed spatial sampling, in particular of the cortical surface, resulting in a spatially continuous approximation of the neural activity known as neural field modeling (Amari, 1977; Jirsa and Haken, 1996; Nunez, 1974; Robinson, 1997; Wilson and Cowan, 1972). Here, the connectivity is composed of local intracortical and global intercortical fibers.
An example of the mean-field potential of two different brain areas, as generated by the Stefanescu–Jirsa model, is given in Figure 3A. As visible in the power spectral densities (PSD) next to the traces, the upper mean-field potential has a spectral peak in the alpha band, while the lower one is dominated by delta activity. Figure 3B shows 15 seconds of simulated EEG activity for a selection of six channels. In Figure 3C, a comparison of simulated and empirical BOLD activity (averaged over region voxels) for low and high functional connectivity is shown. Figure 3D compares the PSD of simulated mean-field source, EEG, and BOLD activity.
FIG. 3. (A) Exemplary local field source activity of two different nodes. Left: Fifteen seconds of local mean field activity. Middle: Power spectra. While in the upper panel we see dominant alpha-oscillations, the source in the lower panel exhibits a dominant delta-activity. Right: Topography of the alpha- and delta-rhythms. (B) Simulated electroencephalography (EEG). Displayed are 15 sec of approximated EEG activity for six selected channels referenced to channel FCz. (C) BOLD timeseries of regions of interest (ROIs) exhibiting high FC (top row) and low FC (bottom row). Left: Simulated BOLD signal. Right: Empirical BOLD signal. Timecourses of identical ROIs are shown for empirical and simulated data. (D) Frequency spectra of mean-field source activity, EEG, and BOLD signals of exemplary nodes/channels. Note the peaks in the delta-/alpha-range for the electrophysiological simulations and in the <0.1-Hz range for BOLD.
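The BOLD forward step used above (mean-field activity convolved with a hemodynamic response function, then downsampled to the scan rate) can be sketched as follows. The double-gamma HRF shape parameters are common defaults assumed here for illustration, not values taken from the article; the source signal is random.

```python
import numpy as np

# Sketch of the BOLD forward step: convolve a 250-Hz neural time series with
# a canonical double-gamma HRF, then downsample to the scanner TR (1.94 s).
fs, tr = 250.0, 1.94

def double_gamma_hrf(fs, duration=30.0):
    """Canonical-style double-gamma HRF; shape parameters are assumed defaults."""
    t = np.arange(0, duration, 1.0 / fs)
    peak = t ** 5 * np.exp(-t)          # early positive lobe (peaks near 5 s)
    under = t ** 15 * np.exp(-t)        # later undershoot (peaks near 15 s)
    peak /= peak.max()
    under /= under.max()
    return peak - under / 6.0

rng = np.random.default_rng(1)
neural = rng.standard_normal(int(200 * fs))          # 200 s of surrogate source activity
bold_hi = np.convolve(neural, double_gamma_hrf(fs))[: len(neural)]
step = int(round(tr * fs))                           # high-rate samples per scan
bold = bold_hi[::step]                               # BOLD series at the scan frequency
```

In the article the simulated series additionally has initial transients handled before windowing; the sketch only shows the convolution and downsampling themselves.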

Forward model

Upon simulating brain activity in the simulator core of TVB, the generated neural source activity time courses from both region- and surface-based approaches are projected into EEG, MEG, and blood oxygen level-dependent (BOLD) contrast space using a forward model (Bojak et al., 2010). The first neuroinformatic integration of these elements was performed by Jirsa and colleagues (2002), demonstrating neural field modeling in an event-related paradigm. Homogeneous connectivity along the lines of Jirsa and Haken (1996) was implemented. Neural field activity was simulated on a spherical surface for computational efficiency and then mapped upon the convoluted cortical surface with its gyri and sulci. The forward solutions of EEG and MEG signals showed that a surprisingly rich complexity is observable in the EEG and MEG space, despite simplicity in the neural field dynamics. In particular, neural field models (Amari, 1977; Jirsa and Haken, 1996; Nunez, 1974; Robinson, 1997; Wilson and Cowan, 1972) have a spatial symmetry in their connectivity, which is always reflected in the symmetry of the resulting neural source activations, even though it may be significantly less apparent (if at all) in the EEG and MEG space. This led to the conclusion that the integration of tractographic data is imperative for future large-scale brain modeling attempts (Jirsa et al., 2002), since the symmetry of the connectivity constrains the solutions of the neural sources.

Computing EEG/MEG signals

The forward problem of the EEG and MEG is the calculation of the electric potential V(x) on the skull and the magnetic field B(x) outside the head from a given primary current distribution D(x,t), as described by Jirsa and colleagues (2002), which we summarize in the following. The sources of the electric and magnetic fields are both primary and return currents. The situation is complicated by the fact that the conductivities of the tissues involved, such as brain tissue and the skull, differ by about two orders of magnitude. Three-compartment volume conductor models are constructed from structural MRI data; the surfaces for the interfaces between the gray matter, cerebrospinal fluid, and white matter are approximated with triangular meshes (see Fig. 4). For EEG predictions, volume conduction models for the skull and scalp surfaces are incorporated. Here it is assumed that the electric source activity can be well approximated by the fluctuation of equivalent current dipoles generated by excitatory neurons, which have dendritic trees oriented roughly perpendicular to the cortical surface and constitute the majority of neuronal cells (∼85% of all neurons). We neglect dipole contributions from inhibitory neurons, since they are present in smaller numbers (∼15%) and their dendrites fan out spherically. Therefore, dipole strength can be assumed to be roughly proportional to the average membrane potential of the excitatory population (Bojak et al., 2010).
FIG. 4. EEG forward modeling—calculating the EEG channel signals yielded by the neuronal source activity at the cortical nodes. (A) Triangulation of the Montreal Neurological Institute brain surface constructed from 81,920 vertices. The regions representing the cortical and subcortical nodes are color coded. In this example, the large-scale model comprises 96 regions; however, 14 of them represent subcortical structures that do not directly contribute to the channel-space signals and are therefore represented by a single color here. (B) The vertex normals represent the orientation of the dipoles and serve for the calculation of the lead field matrix that describes the transformation from the signals at the cortical sources to the EEG channel space.
Dipole locations are assumed to be identical to the source locations used for DTI tractography, while orientations are inferred as the normals of the triangulation of the segmented anatomical MR images (Fig. 4). Besides amplitude, every dipole has six additional degrees of freedom describing its position and orientation within the cortical tissue. At this stage, we have a representation of the current distribution in the three-dimensional physical space \(x \in { \rm R}^3\) and its evolution over time t given by the neural source modeling. To make the proper comparison with experimental data, the forward solutions of the scalar electric potential V(x) on the skull surface and of the magnetic field vector B(x) at the detector locations have to be calculated. Here it is useful to divide the current density vector J(x) produced by neural activity into two components. The volume or return current density \({ \bf J}^{ v} ( x ) = \sigma ( x ) { \bf E}\) is passive and results from the macroscopic electric fields E(x) acting on the charge carriers in the conducting medium with the macroscopic conductivity σ(x). The primary current density marks the sites of the sources of brain activity and is approximately identical to the neural field activity: although the conversion of chemical gradients into current is driven by diffusion, the primary currents are determined largely by the cellular-level details of conductivity. The current flow is perpendicular to the cortical surface due to the perpendicular alignment and elongated shape of pyramidal neurons, as discussed above. In the quasistatic approximation of the Maxwell equations, the electric field becomes \({ \bf E} = - \triangledown V\) , where \(\triangledown\) is the Nabla operator \(( \ldots \partial /\partial x \ldots ) ^T\) .
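In practice, the forward calculation is applied as a linear gain (lead-field) operator acting on the source amplitudes. The sketch below assumes a precomputed lead-field tensor (here random; a real one would come from a volume-conductor solution) and fixes each dipole's orientation to its local surface normal, as described above.

```python
import numpy as np

# Lead-field projection sketch: EEG = gain matrix x source amplitudes.
# The lead-field tensor L is a random stand-in for a solver output.
rng = np.random.default_rng(3)
n_channels, n_sources, n_times = 64, 82, 500

L = rng.standard_normal((n_channels, n_sources, 3))        # (channel, source, xyz)
normals = rng.standard_normal((n_sources, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)  # unit dipole orientations

# Fixed-orientation gain matrix: project the xyz lead field onto each normal.
G = np.einsum('csk,sk->cs', L, normals)                    # (n_channels, n_sources)

psi = rng.standard_normal((n_sources, n_times))            # source amplitudes psi(t)
eeg = G @ psi                                              # (n_channels, n_times)
eeg -= eeg.mean(axis=0)                                    # re-reference to the average
```

The same pattern applies to MEG with a magnetometer/gradiometer lead field; only the gain computation upstream differs.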
The current density J is
\begin{align*}{ \bf J} ( x ) = \psi ( x , t ) { \bf n} ( x ) + \sigma ( x ) { \bf E} ( x ) = \psi ( x , t ) { \bf n} ( x ) - \sigma ( x ) \triangledown {V} ( x ) \tag{3}\end{align*}
where n(x) is the cortical surface normal vector at location x.
Following the lines of Hamalainen et al. (1989, 1993) and using the Ampere–Laplace law, the forward MEG solution is obtained by the volume integral
\begin{align*} { \bf B } ( x ) = \frac { \mu_ { \rm o } } { 4\pi } \int ( \psi ( x { \prime } , t ) { \bf n } ( x { \prime } ) + V ( x { \prime } ) \triangledown { \prime } \sigma ( x { \prime } ) ) \times \frac { x - x^ { \prime } } { \mid x - x^ { \prime } \mid ^3 } d \nu { \prime } \tag { 4 } \end{align*}
where dν′ is the volume element, \(\triangledown \prime\) the Nabla operator with respect to x′, and μo the magnetic vacuum permeability.
The forward EEG solution is given by the boundary problem
\begin{align*}\triangledown \cdot ( \sigma ( x ) \triangledown { V} ( x ) ) = \triangledown \cdot\, ( \psi ( x , t ) { \bf n} ( x ) ) \tag{5}\end{align*}
which is to be solved numerically for an arbitrary head shape, typically using boundary element techniques as presented by Hamalainen (1992) and Nunez and Srinivasan (2006). In particular, these authors showed that for the computation of neuromagnetic and neuroelectric fields arising from cortical sources, it is sufficient to replace the skull by a perfect insulator, and, therefore, to model the head as a bounded brain-shaped homogeneous conductor. Three surfaces S1S2S3 have to be considered at the scalp–air, the skull–scalp, and the skull–brain interface, respectively, whereas the latter provides the major contribution to the return currents. The three-dimensional geometry of these surfaces is obtained from MRI scans. There are various methods to solve the boundary problem above [see for instance, Hamalainen (1992) and Oostendorp et al. (2000)], but all of them compute some form of a transfer matrix acting upon the distributed neural sources and providing the forward EEG at electrode locations. The boundary-element (BEM) models are based on meshes that form a closed triangulation of the compartment surfaces from segmented T1-weighed MRI surfaces. Finite element method models consist of multiple tetrahedra, allowing to model tissue anisotropy that is physiologically more realistic at the cost of computational resources.
In the current study, we used the software packages spm8 (www.fil.ion.ucl.ac.uk/spm/software/spm8/) and Fieldtrip (http://fieldtrip.fcdonders.nl) to generate a transfer matrix for EEG, a so-called lead-field matrix (LFM) to be used for the forward modeling of EEG data (Litvak et al., 2011). In a first step, we loaded the electrode positions of our 64-channel MR-compatible EEG cap (Easy Cap) from a standard file containing 97 positions of the International 10–20 system, subsequently deleting the positions not represented by our cap, resulting in 61 remaining positions (the remaining three channels were used for recording of electrocardiography required to correct the heartbeat-related artifacts in the EEG caused by the static field B0 field of the MR scanner and for Electrooculography). Three fiducial points (Nasion, left- and right preauricular points) were used for alignment of the sensor positions to the head model. Next, we selected the canonical mesh/triangulation of the Montreal Neurological Institute (MNI) head model (SPM8) consisting of 20,484 vertices and 40,960 faces. Alternatively, one can calculate individual meshes using Fieldtrip with the option to choose the desired degree of detail. Distances between electrodes and scalp surface are minimized manually. Subsequently, the actual forward model is computed. For this, first, we defined all vertices of the cortex as sources (dipoles) and setting their orientation to normal, that is, choosing the vertex normal of each vertex point—pointing out of the cortex (Fig. 4). The normals are calculated by Fieldtrip's built-in function normals(). After that, the LFM is computed by using the Fieldtrip function ft_prepare_lead field(), providing a 61×20,484 matrix, describing the projection of each source (vertex) onto the electrodes.
EEG signals can now be calculated by applying the resulting LFM to the mean field source activity generated by the Stefanescu–Jirsa model (example shown in Fig. 3B). The corresponding frequency spectrum of the thus generated EEG is shown in Figure 3D.

Computing fMRI BOLD signals

Subsequent to the fitting of electric activity with the model, we want to compare model predictions of BOLD contrast with actual recorded fMRI timeseries data to deduce and integrate further constrains from and into the model and to perform analyses on the coupling between neural activity and BOLD contrast fluctuation.
Yet, the relation between neuronal activity and the BOLD signal is far from being elucidated (Bojak et al., 2010). Some major questions are not yet sufficiently answered, such as which neuronal and glial processes contribute to the BOLD signal? Via which molecular mechanisms do they influence the BOLD signal? What are the precise space–time properties of such a neurohemodynamic coupling?
Empirical and theoretical evidence indicates that different types of neuronal processes, such as action potentials, synaptic transmission, neurotransmitter cycling, and cytoskeletal turnover, have differential effects on the BOLD signal. For example, synaptic transmission and transmitter cycling are associated with higher metabolic demands than the conduction of action potentials (Attwell and Laughlin, 2001; Creutzfeldt et al., 1975; Dunning and Wolff, 1937). This is in line with a study showing a tighter relation of the BOLD signal to local field potentials that mainly represent postsynaptic potentials than to single- and multiunit activity, which represents action potentials (Logothetis et al., 2001). With respect to the molecular mechanism, the excitatory neurotransmitter glutamate has been shown to be a major modulator of the BOLD signal (Giaume et al., 2010; Petzold et al., 2008). Due to conflicting findings, there is a longstanding debate whether and if so how inhibitory neuronal activity can be detected by fMRI (Ritter et al., 2002). The issue gets complicated since likely multiple effects contribute at varying degrees in different brain regions and under different conditions (Iadecola, 2004).
The temporal properties of blood oxygenation/BOLD signal changes are modeled by either convolution of the approximated neuronal activity with the canonical hemodynamic response function assuming neurohemodynamic coupling to be linear and time invariant (Wobst et al., 2001), using a parametric set of response functions to account for variations in coupling and for nonlinearities (Friston et al., 1998), or by using the more elaborated biophysical hemodynamic model referred to as the Ballon–Windkessel model (Buxton et al., 1998; Friston et al., 2000, 2003; Mandeville et al., 1999). The latter approach is a state model with four state variables: synaptic activity, flow, volume, and deoxygenated hemoglobin. It assumes a three-stage coupling cascade: (1) a linear coupling between synaptic activity and blood flow; (2) a nonlinear coupling between blood flow and blood oxygenation and blood volume due the elastic properties of the venous vessels—hence, the name Ballon–Windkessel model; and (3) a nonlinear transformation of the variable flow and volume into the BOLD signal (Friston et al., 2003).
In a full-brain model comprising oscillators that in contrast to the here used Stefanescu–Jirsa model were less biophysically founded, local neural activity was approximated as the derivative of the state variable and used as an input for a neurohemodynamic coupling model (Ghosh et al., 2008). Approaches that are based on spiking neuron models use firing rates as input for BOLD signal estimation. Authors motivated that by the fact that in most cases, the firing rate is well correlated with excitatory synaptic activity (Deco and Jirsa, 2012). In the present study, however, the Stefanescu-Jirsa model provides six state variables with different biophysical counterparts; futhermore, each of them is described by the sum of the activity of three modes which model the activity of different neuronal subclusters of the inhibitory and the excitatory populations. For the simulated BOLD data shown here, we follow the line of Bojak and colleagues (2010) in assuming that BOLD contrast is primarily modulated by glutamate release. Hence, we approximate the BOLD signal time courses from the mean fields of the excitatory populations at each node. The mean-field amplitude time courses are convolved with a canonical hemodynamic response function as included in the SPM software package (www.fil.ion.ucl.ac.uk/spm). Since evidence exists that also inhibitory activity influences blood oxygenation, different neuronal input functions, including those representing a weighed sum of excitatory and inhibitory mean field activity, will be tested in the future. This will allow us to explore different neurohemodynamic coupling scenarios for their ability to predict empirical data.
In Figure 3C, we show the approximated BOLD signal dynamics for regions of interest (ROIs) that exhibit high and low functional connectivity (FC). In Figure 3C, we also show empirical BOLD signal time courses extracted from the same cortical nodes by averaging the BOLD signal over all voxels contained by the respective ROI.
The frequency spectrum in Figure 3D indicates that the simulated BOLD signals exhibit peak frequencies below 0.1 Hz; that is, they exhibit the typical low-frequency oscillations reported previously for the resting-state BOLD signal.
There is potential for further advancement of the neurohemodynamic model, for example, by including assumptions about how astrocytes mediate the signal from neurons that cause dilatation of arterioles and resulting increase in blood flow considering the finite range of astrocyte projections (Drysdale et al., 2010). It is pointed out by the authors of this article that the distribution of effective connectivity between neurons and flow regulation points has not been measured experimentally yet, but is an area of active research interest [see also Aquino and associates (2012)]. Incorporating more details on the spatiotemporal aspects of neurohemodynamic coupling would considerably increase the value of the BOLD signal in model identification.

Identification of Spatiotemporal Motifs

Fitting computational models with actual recorded neuroimaging data requires several methodological considerations. Fitting the model with short epochs of empirical EEG-fMRI data yields individual model instances (parameter values). Thereby, we determine parameter values that are related to particular features of the recorded EEG-fMRI timeseries. Parameter estimation is challenging due to the high number of free parameters (number of free parameters of all six equations for the excitatory and inhibitory population times the number of sources) and the resulting high model complexity. There exist various approaches to address this challenge. Model inversion approaches [e.g., Dynamic Causal Modeling (Daunizeau et al., 2009; Friston et al., 2003), Bayesian Inversion, or Bayesian Model Comparison-based approaches] build on the inversion of a generative model of only a few sources (up to about 10). Although in principle suitable, at the current stage of development, these approaches are intractable in our case due to the high number of contributing neural sources and the resulting combinatorial explosion of parameter combinations. Bayesian methods are able to handle noisy or uncertain measurements and have the important advantage of inferring the whole probability distribution of the parameters than just point estimates. However, their main limitation is computational, since Bayesian methods are based on the marginalization of the likelihood that requires the solution of a high-dimensional integration problem. Hence, in the context of TVB, the parameter values are planned to be systematically inferred using an estimation strategy that is guided by different principles:
• Parameter ranges are constrained and informed by biophysical priors (Breakspear et al., 2003; Larter et al., 1999; Scherg and Berg, 1991; Sotero and Trujillo-Barreto, 2008).
• Dictionaries are created that associate specific parameter settings with resulting model dynamics. This provides a convenient way to fit the model with experimental data using previously identified parameter settings as priors. Aside from dictionaries that associate parameter settings with mean-field amplitudes of large-scale nodes, we will create dictionaries that associate typical mean-field configurations of nodes with the resulting EEG and BOLD topologies (see Section below).
• Parameter estimation is informed by integrating multiple simultaneously recorded modalities on different spatiotemporal scales by iteratively applying forward and inverse modeling.
• Inverse modeling of decoupled source activity is used to estimate local node parameters for each of the nodes individually at a lower computational cost than estimating parameters for the full system.
• Forward modeling of the full system is used to compare emerging macroscopic properties on long temporal scales (e.g., FC patterns) to experimental data and to infer global coupling parameters of the large-scale system.
• Complementary information obtained from different signals is used for refinement of ambiguous parameter estimates.

Empirical data

Different neuroimaging methods capture different subsets of neuronal activity, and their combination of several imaging modalities provides complementary information that is central to TVB. Observed data serve as a blue print for the model's supposed output. Empirical data are considered when initially selecting a certain model type. The data provide also a reference when fine-tuning the model parameters. Finally, the empirical data serve for model validation. That is, one tests whether a model makes predictions of the behavior of new empirical data.
EEG and fMRI data complement each other with respect to their spatial and temporal resolution. In terms of temporal resolution, fMRI's limitations are twofold: First, the acquisition time of a whole volume, that is, scans of the full brain, lies in the order of seconds. Second, the deoxygenation response that follows changes of neuronal activity and that builds the basis of the BOLD contrast provides a delayed and blurred image of neuronal activity. The spatial resolution, however, can go up to the order of cubic millimeters. fMRI is capable to assess activity in the entire brain, including subcortical structures. EEG provides an excellent temporal resolution in the order of milliseconds and hence captures also fast aspects of neuronal activity up to frequencies around 600 Hz (Freyer et al., 2009; Ritter et al., 2008). Due to volume conduction, the spatial resolution of EEG is limited to the order of cubic centimeters, and the inverse problem prevents an assumption-free localization of the underlying neuronal sources.
Here we use the synergistic information that the two imaging methods provide by employing simultaneously acquired EEG and fMRI data (Ritter and Villringer, 2006) of nine healthy subjects (mean age 24.6 years, five men) during the resting-state; that is, the subjects were lying in the bore of the MR scanner with their eyes closed being asked to relax and not to fall asleep. Functional EEG-fMRI data were acquired for the duration of 22 min. For fMRI, we employed an echo planar imaging (EPI) sequence (repetition time=1.94 sec, 666 volumes, 32 slices, voxel size 3×3×3 mm3).
For EEG recording, we used an MR-compatible 64-channel system (BrainAmp MR Plus; Brain Products) and an MR-compatible EEG cap (Easy Cap) using ring-type sintered silver chloride electrodes with iron-free copper leads. Sixty-two scalp electrodes were arranged according to the International 10–20 System with the reference located at electrode position FCz. In addition, two electrocardiogram channels were recorded. Impedances of all electrodes were kept below 15 kohm. Each electrode was equipped with an impedance of 10 kohm to prevent heating during switching of magnetic fields. The EEG amplifier's recording range was±16.38 MV at a resolution of 0.5 μV; the sampling rate was 5 kHz, and data were low-pass filtered by a hardware 250-Hz filter. EEG sampling clock was synchronized to the gradient-switching clock of the MR scanner (Freyer et al., 2009).
In addition, for each subject, we run different MR sequences to obtain structural data: diffusion-weighed MRI (DTI), T1-weighed MRI with an MPRAGE sequence (TR 1900 msec/echo time (TE) 2.32 msec; field of view (FOV) 230 mm; 192 slices sagital; 0.9×0.9×0.9-mm3 voxel size), and a T2 (TA: 5:52 min; voxel size 1×1×1 mm3; FoV 256 mm, TR: 5000 ms; TE: 502 ms) sequence.
Preprocessing of EEG (i.e., image-acquisition artifact and ballistocardiogram correction) and of fMRI data (motion correction, realignment, and smoothing) has been described elsewhere (Becker et al., 2011; Freyer et al., 2009; Ritter et al., 2007, 2008, 2009, 2010).
Brain region parcellation has been performed on a monkey brain surface (Bezgin et al., 2012) following the mapping rules of Kotter and Wanke (2005). The resulting parcellation was deformed to the MNI human brain template using landmarks defined in Caret (www.nitrc.org/projects/caret) (Van Essen et al., 2001; Van Essen and Dierker, 2007). Following the lines of Zalesky and Fornito (2009), we extracted two types of connectivity measures from the diffusion-weighed data: capacities and distances. There from, we derived two N×N structural adjacency matrices (SAMs) that quantify the relative connectivity between all pairs of the N cortical regions for each subject. The matrices approximate the macroscopic aspects of white matter connectivity with a network model, where cortical regions serve as vertices and fiber bundles as edges. Since directionality of fiber bundles cannot be inferred using DTI, connections are considered unidirectional, and SAMs are hence symmetrical. Capacities are intended to reflect the maximum rate of information transmission of a fiber bundle, which is assumed to be constricted by its minimum cross-sectional area, since this diameter in turn limits the maximum number of axons it can support. Thereby, it is implicitly assumed that axons cannot be more closely packed at a bottleneck, and that the capacity of each axon is proportional to its cross-sectional area. However, several physiological aspects such as pathologies or degree of myelination might considerably influence connectivity. Capacities are derived by constructing a 3D scaffolding graph that contains a link between each pair of white matter voxels in a 26-voxel neighborhood around each voxel for which their two principal eigenvalues form a sufficiently small angle. Connectivity between two regions is then measured as the number of link-disjoint paths that connects the two regions in the white matter graph, that is, the number of paths between two regions that have no links in common. 
Further, we derived the distances between the brain regions by calculating the mean length of all tracts found to connect two regions in the subjects' native space. Individual tracts are represented as edges on a graph, and their lengths were calculated using the boost C++ graph library. Before tractography and metrics extraction, raw diffusion-weighed Digital Imaging and Communications in Medicine images were converted to the Neuroimaging Informatics Technology Initiative format, and gradient and b-vectors were computed using the MRIcron package. A brain mask was generated using the Brain Extraction Tool from the FSL package. FSL eddy current correction was used to correct for distortions caused by eddy currents and head motion. Diffusion tensor models were fitted to the data using DTIFIT from the FSL package. ANTS was used for coregistration of subjects' T1 images to the standard MNI brain (MNI_152_T1_1 mm) and for coregistration of subjects' T1 images to their respective DTI images. T2 images were acquired to improve the accuracy of the registration from T1 to DTI images. Therefore, a linear transform was computed to register T1 data to T2 data and a nonlinear transform for registering T2 to DTI images, and both transforms were concatenated. Further, T1-to-MNI and T1-to-DTI transforms were concatenated to produce an MNI-to-DTI transform for implementation of our parcellation scheme. White matter and gray matter segmentation was performed on T1 data using N3. Alternatively, all image preprocessing steps as well as fiber tracking can be integratively performed with software MRTrix. The connectivity matrices can be subsequently estimated using the connectome mapper pipeline, which is part of the connectome-mapping toolkit.

The advantage of operating in source space

The dimensionality of the parameter space of the full model is equal to the number of sources (96, in the present example) times the number of free parameters per source (12) yielding, together with two additional global parameters, 1154 free parameters in total for the coupled large-scale model. Certainly, the number of free parameters of the coupled system is far too high to be fitted concurrently. However, uncoupling sources and inverting their local equations, we only need to fit 12 parameters at a time. The dynamic equations of the uncoupled nodes of the large-scale model are used as inverse models that are individually fitted with the EEG data to yield parameter sets for local nodes. The mean-field output of a single node with recorded EEG can be fitted in the source space by first applying a source reconstruction scheme to the EEG data and then subtracting the impact of coupled nodes from local mean-field activity of each node. The full large-scale model, on the other hand, is used as a forward model and fitted with a simultaneously acquired BOLD signal to recreate its spatial topology pattern and slow wave dynamics and to decide among ambiguous solutions that were computed in the previous parameter estimation step. As a result, instead of estimating all parameters for the full large-scale model at once, we only have to calculate a source reconstruction once for the EEG data and are then able to effectively infer the local parameter sets. Thereby, parameters that need to be fitted reduce from k*n to n, where n is the number of parameters of a local node and k the number of nodes.

Backward solution: estimating source time courses from empirical large-scale signals

Noninvasive neuroimaging signals in general constitute the superimposed representations of the activity of many sources leading to high ambiguity in the mapping between internal states and observable signals; that is, the pairing between internal states of the neural network and observed neuroimaging signals is surjective, but not bijective. As a consequence, the EEG and MEG backward solution is underdetermined. Hence, while the forward problem of EEG has a unique solution, the inverse problem of EEG, that is, the estimation of a tomography of neural sources from EEG channel data, is an ill-conditioned problem lacking a unique solution (Helmholtz, 1853; Long et al., 2011). We address this ill-posedness by the introduction of the aforementioned constraints, namely, realistic, subject-specific head models segmented from anatomical MR images, physiological priors, and source space-based regularization schemes and constraints. A commonly used prior is to restrict neural sources using the popular equivalent current dipole model (Scherg and Von Cramon, 1986) reducing the backward problem to the estimation of one or a few dipole locations and orientations. This approach is straightforward and fairly realistic, since the basis of the described modeling approach rests on the assumption that significant parts of a recorded EEG timeseries are generated by the interaction of large-scale model sources. Consequently, we can incorporate the location and orientation information of these sources as priors, thereby improving the reliability of the backward solution in this modeling scenario.
More general source imaging approaches that attempt to estimate source activity over every point of the cortex rest on more realistic assumptions, but need further constraining to be computationally tractable (Michel et al., 2004). Nevertheless, current density-field maps are not necessarily closer to reality than dipole source models, since the spatial spread is due to imprecision in the source estimation method rather than a direct reconstruction of the potential field of the actual source (Plummer et al., 2008).
Source activity is estimated from a recorded EEG timeseries, by inversion of an LFM that is based on an equivalent current dipole approach. Source space projection can be done with commonly used software packages (e.g., the open-source Matlab toolbox FieldTrip or BrainStorm or the commercial products BrainVoyager or Besa) by inverting a forward solution on the basis of given source dipole positions and orientations and a volume conductor model; all of the above can be individually estimated from anatomical MRI data of subjects (Chan et al., 2009). Further, priors derived from BOLD contrast fluctuation can be exploited to inform source imaging [cf. technical details and reviews: (Grech et al., 2008; He and Liu, 2008; Liu and He, 2008; Michel et al., 2004; Pascual-Marqui, 1999)].

Reducing parameter space by operating with uncoupled local dynamics

The parameter space can be considerably reduced by regularizing the model from coupled dynamics to uncoupled dynamics by disentangling all coupled interactions. That is, for a given node, the incoming potentials from all other nodes are subtracted from each time series. In TVB, the long-range input received by a node i from all coupled nodes \(j \neq i\) is modeled by adding the weighed and time-delayed sum of all excitatory mean-field potentials \(\left( \sum_{j = i} \omega_{ij}x_j ( t - \Delta t_j ) \right)\) of all coupled nodes as described in Equation 1 to its excitatory uncoupled regional mean field xi as described in Equation 2.
This summation operation is reversible and therefore allows the reconstruction of intrinsic source activity by inverting the assumptions upon which forward modeling is based. For the sake of simplicity, the noise term η(t) is omitted in the reverse procedure. Since \(x_i ( t + 1 ) , x_i ( t )\) and \(\left( \sum_{j = i} \omega_{ij}x_j ( t - \Delta t_j ) \right)\) can be empirically measured for each time-step t=1,…, N−1, with N being the total number of data points, and approximated by source imaging, inserting their respective values, rearranging the equation, and solving for \({ \dot x}_i ( t )\) yield the mean-field change for the next future time step:
\begin{align*} f(\dot{x}_i(t)) = x_i( t+ 1 ) - x_i ( t ) - \left( \sum_{j \neq i} \omega_{i j} x_{j} (t - {\Delta t}_j))dt \right) / dt) \tag{6} \end{align*}
Now, since a subset of the state vector [xi(t) and \({ \bar x}_i ( t )]\) is known for each timestep, model parameters can be obtained for time snippets of the uncoupled source timeseries by using a procedure for estimating parameters of dynamical systems as described below.

Initial coarse scanning of the parameter space

To get acquainted with model dynamics, we start by laying a coarse grid over the parameter space and simulate mean-field activity for grid points (Fig. 1B, C). Subsequently, the resolution of the grid can be increased according to desired accuracy or space and runtime constraints (some other approaches are outlined below). Each resulting waveform is classified according to several criteria that discern specific dynamical regimes. This could be, for example, the dynamic behavior of the system's state variables such as fixed points or limit cycles—switching between the two has been shown, for example, for the posterior alpha-rhythm (Freyer et al., 2009, 2011, 2012). Another feature could be the expression of certain spatial patterns such as resting-state networks (RSNs) known from fMRI (Greicius et al., 2003; Gusnard and Raichle, 2001; Mantini et al., 2007; Raichle et al., 2001) or characteristic spectral distributions (brain chords). With respect to the power spectrum, a feature would be the 1/f characteristics or the alpha-peak (Buzsaki, 2006).
During this initial parameter space scanning, only coupling parameters (K11, K12, K21) and distribution parameters of membrane excitabilities (mean m and dispersion σ) are varied, since these are the main determinants of the resulting dynamic regime of the mean field (Fig. 1). Parameters that correspond to biophysical properties of the respective neuron populations (p1p12) are optimized in subsequent steps.
In Figure 2B, we show an example of how the coarse grid method has identified parameter settings that yield patterns/topologies of coherent BOLD activity very similar to the FC we observe in empirical data (r=0.47).

Snippets, motifs, and dictionaries

Since the discovery of EEG, several recurrent EEG features and patterns have been identified and associated with various conditions. Among many other instances, waveform snippets have been termed and cataloged as, for example, spike-and-wave complexes, burst suppression patterns, alpha-waves, lambda-waves, and so on (Stern and Engel, 2004). However, all of these temporal motifs have been identified by subjective visual inspection of EEG traces and not by a principled and automated search. Nor have these macroscopic neuronal dynamics been classified by means of the internal behavior of a system that models the underlying microscopic and mesoscopic physiological mechanisms. Mueen and associates (2009) extend the notion of timeseries motifs to EEG timeseries with a motif being an EEG snippet that contains a recurrent (and potentially meaningful) pattern in a set of longer timeseries. They designed and tested an algorithm for automated motif identification and construction of dictionaries that was able to identify typical meaningful EEG patterns such as K-complexes. We incorporate the notion of a motif as a recurring prototypic activity pattern, but extend it and conceptualize a neural motif as an activity pattern that corresponds to the dynamic flow of a state variable on an underlying geometrical object in the state space of a neuronal ensemble, that is, a structured flow on a manifold (Jirsa and Pillai, 2012). In short, a neural motif is a snippet of a neuronal timeseries that can be reproduced by a neuronal model under fixed parameter settings. Further, we extend the ideas of Prinz and colleagues (2003) and Mueen and colleagues (2009) of building a dictionary of waveform patterns. 
Upon expressing timeseries by a series of motifs and adding unobserved underlying processes (i.e., parameter settings and the flow of unobservable state variables), we ultimately associate these microscopic mechanisms with emerging macroscopic properties of the system, thereby we identify emerging macroscopic system dynamics such as RSNs with the local and global interaction of the timeseries motifs, that is, the system's flow on low-dimensional manifolds in the state space of the model. Exemplary one-second motif patterns of source activity and corresponding model fits that resulted from a preliminary random search (see the section Stochastic optimization) are shown in Figure 5.
FIG. 5. Exemplary motif-fitting results. Shown are one-second snippets of source estimates from EEG (red) and corresponding model fits (blue). Parameters were obtained using a random Monte Carlo search routine that optimized waveform correlation after running for one hour on a single core of a 2.13-GHz Intel Core 2 Duo processor. Despite high correlation coefficients (r=0.78 for the best fit), some waveform features were not captured appropriately. We expect more sophisticated parameter estimation approaches (cf. section A, choice of parameter estimation algorithms) to produce even better fits.

Building a dictionary of dynamical regimes

Automatic parameter optimization heuristics such as evolutionary algorithms are often able to find acceptable solutions to complex optimization problems. However, it is very likely that model parameters that correspond to variable biophysical entities are subject to ongoing variations throughout the course of even short timeseries, and it hence becomes necessary to refit the parameters for every time segment. Further, a wide range of EEG patterns or timeseries motifs are highly conserved and repeatedly appear over the course of an EEG recording (Stern and Engel, 2004). Therefore, it is unreasonable to perform time-consuming searches through the very large parameter space of the coupled network (the number of model instances is 10^96 for a parcellation of the brain into 96 regions and 10 free parameters per node) for every short epoch of experimental data. Besides source activity decoupling, we plan to ameliorate this expensive search problem by storing estimation results in a dictionary that associates parameter settings with dynamical regimes.
This dictionary has three major purposes: First, it will help to define priors for subsequent parameter fitting routines and hence reduce computational cost. Second, it is in itself a knowledge database that relates microscopic and mesoscopic biophysical properties—as defined by model parameters—with macroscopic neuronal dynamics. Third, it enables continuous refinement of previous motif candidates. We assume that motifs are highly conserved across subjects, since they resemble instances of prototypic population activity. However, waveform patterns will be subject to small variations, for example, due to noise from ongoing background activity. After generating an initially large number of similar well-fitting model waveforms, each waveform is evaluated for its fitness to explain motif patterns across a large number of subjects and signal features. Upon exposure to an increasing number of motif patterns from different subjects and elimination of poorly fitting dictionary entries, only the most generic motif instances will remain in the dictionary: those motif patterns from a class of similarly shaped waveforms that have the highest explanatory power.
Such a dictionary maps specific parameter settings to the emerging prototypical model dynamics. Besides waveform, timeseries snippets are classified according to a variety of dynamical features that have relevance for cognition. These features may be geometric objects in the state space, that is, flows and manifolds; a certain succession of relevant signal features, for example, a trajectory of the relative power of frequency bands in each source node; other variables of interest such as RSN patterns; or the presence of prominent features of EEG activity such as sleep spindles, alpha-bursts, or K-complexes. An illustration of the proposed operational flow is shown in Figure 6.
FIG. 6. Model-based knowledge generation. The Virtual Brain estimates parameter sets that enable the model-based replication of short empirical source activity timeseries. Upon emulation of observable timeseries, internal model state variables can then be analyzed to infer knowledge about unobservable system states. Empirical EEG and BOLD data are used to estimate electrical source activity. Employing individual structural priors (fiber-tract capacities and distances), each node's activity can be disentangled from the influences of the other nodes. Spatial and temporal properties of the resulting data are compared to the model output. Parameter settings yielding the best fit are identified. A central point is the identification of spatiotemporal motifs, that is, the identification of similar dynamical behaviors in simulated and empirical data. Matching empirical and model-based structured flows on manifolds are stored in a dictionary, for example, in the form of prototypical source timecourses. Priors, that is, initial parameter settings known to yield specific classes of dynamics observed in empirical data, are taken from the dictionary for subsequent simulations. Taking advantage of the pool of existing knowledge increasingly reduces the cost of parameter optimization for different empirical dynamical scenarios. In other words, an integrative method of induction and deduction serves our model optimization procedure. Statistical analysis of the parameter sets stored in the dictionary yields the desired insight about which biophysical and/or mathematical settings are related to the different observed brain states.
When fitting new timeseries, instead of re-estimating the parameters for a dynamic regime that might have been inferred before, a dictionary search is conducted, and observed dynamics are related to probable parameter settings. The idea of generating simulation databases was previously successfully applied to several neuroscience models (Calin-Jageman et al., 2007; Doloc-Mihu and Calabrese, 2011; Günay et al., 2008, 2009; Günay and Prinz, 2010; Lytton and Omurtag, 2007; Prinz et al., 2003).
Dictionary-based signal separation was previously successfully applied to several neuroscience problems (Zibulevsky and Pearlmutter, 2001).
To recreate experimental time-series with TVB, we compare variables of interest stored in the dictionary with empirical data. The database can be screened for models that reproduce observed neuronal behavior, and dictionary entries with minimal discrepancy are used as priors for subsequent parameter estimation to keep the search space small; that is, they are used as initial conditions and for the definition of search space boundaries. Further, statistics over the database can give an insight into how specific model configurations relate to the observed neuronal dynamics. To construct the database, the results of previous parameter estimation runs, that is, parameter settings, are stored along with the generated mean-field waveform and metrics of interest that quantify the dynamics of this waveform. Database searches can be based, for example, on waveform correlation or other similarity metrics such as mutual information or spectral coherence, as well as combinations thereof.
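How a dictionary entry could seed a subsequent estimation run can be sketched as follows. The data structure, function name, and the ±20% search margin are hypothetical illustrative choices, not the actual TVB implementation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class DictionaryEntry:
    params: np.ndarray    # parameter setting that produced the waveform
    waveform: np.ndarray  # simulated mean-field timeseries

def priors_from_dictionary(target, entries, margin=0.2):
    """Screen the dictionary for the entry most similar to `target`
    (by waveform correlation) and derive a prior for the next fit:
    an initial parameter vector plus search-space bounds."""
    best = max(entries, key=lambda e: np.corrcoef(target, e.waveform)[0, 1])
    lower = best.params * (1.0 - margin)
    upper = best.params * (1.0 + margin)
    return best.params, (lower, upper)

# Toy demo with two stored regimes at different dominant frequencies.
t = np.linspace(0.0, 1.0, 200)
entries = [
    DictionaryEntry(np.array([1.0, 2.0]), np.sin(2 * np.pi * 4 * t)),
    DictionaryEntry(np.array([3.0, 4.0]), np.sin(2 * np.pi * 9 * t)),
]
x0, (lo, hi) = priors_from_dictionary(np.sin(2 * np.pi * 9 * t), entries)
```

The returned initial point and bounds would then constrain the stochastic search described below in the section A choice of parameter estimation algorithms.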

Model fitting

The benefits of tuning a model to multimodal empirical data

Simultaneous EEG-fMRI recordings enable us to fit model activity with two different empirical target variables that exist on two different spatiotemporal scales.
On the one hand, the estimated spatiotemporal BOLD activity emerging from fully coupled large-scale model simulations is fitted to match the empirically measured slow hemodynamic BOLD activity. On the other hand, fast electric dynamics of individual large-scale nodes are fitted to match the corresponding localized source activity derived from EEG source imaging.
Accordingly, we use different metrics to quantify the goodness of fit for the two different spatiotemporal scales. In the case of localized source activity, our target is the reproduction of salient features of the mean-field waveform of source activity. Therefore, we aim to optimize the metrics of interest between empirical and model timeseries such as the least-squares fit between waveforms. Other optimization targets include waveform correlation or coherence and other nonlinear dependence measures such as mutual information. In the case of simulated BOLD signals, we aim for reproducing the global spatial interaction topology. To match spatial FC patterns, the correlation between experimental and simulated FC matrices is maximized.
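The two goodness-of-fit targets can be made concrete as follows, assuming a least-squares waveform misfit for the fast scale and an upper-triangle FC correlation for the slow scale; the function names are ours.

```python
import numpy as np

def waveform_sse(empirical, simulated):
    """Least-squares misfit between empirical and simulated timeseries
    (the fast, source-activity scale)."""
    return float(np.sum((empirical - simulated) ** 2))

def fc_similarity(fc_emp, fc_sim):
    """Correlation between the upper triangles of empirical and simulated
    functional-connectivity matrices (the slow, BOLD scale); the diagonal
    and lower triangle are redundant and therefore excluded."""
    iu = np.triu_indices_from(fc_emp, k=1)
    return float(np.corrcoef(fc_emp[iu], fc_sim[iu])[0, 1])

# Demo: identical signals and FC matrices give perfect scores.
rng = np.random.default_rng(0)
x = rng.standard_normal(100)
fc = np.corrcoef(rng.standard_normal((4, 200)))
sse = waveform_sse(x, x)
sim = fc_similarity(fc, fc.copy())
```

Correlation, coherence, or mutual information could be substituted for the sum of squared errors without changing the overall scheme.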
On the one hand, this enables the forward modeling-based exploration of conditions under which certain FC patterns emerge. On the other hand, we use the fMRI signal to refine ambiguous parameter estimates in situations where it is not possible to determine a unique parameter set that reproduces source activity. This twofold strategy is motivated by the fact that the set of parameters of the full large-scale model can be divided into global and local parameters. While global parameters specify the interaction between distant brain regions, each set of local parameters governs the dynamics of its associated node. To reproduce brain activity by a model, it is necessary to emulate both the uncoupled local dynamics and the dynamics emerging from the global interactions between sources. By decoupling the source activity according to the global coupling parameters that were obtained during the forward estimation step, we obtain uncoupled source activity. Subsequently, each local node is tuned to reproduce this uncoupled source activity. Note that there is no experimental counterpart of this activity, since directly measuring uncoupled intrinsic region activity is impossible as long as brain regions are connected and interacting.
However, if uncoupled local populations are able to reproduce this virtual source activity estimated from empirical data, the full large-scale model will reproduce the originally measured activity after recoupling. This uncoupling scheme amounts to a straightforward inversion of the coupling scheme implemented in the model. After recoupling sources and performing forward simulations with both estimated parameter sets, the measured EEG-fMRI data are generated on both spatiotemporal scales, preserving both local source activity, which translates to fast EEG activity, and global source interaction patterns, which translate to slow BOLD patterns.
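A minimal sketch of the uncoupling step, assuming linear additive coupling with per-connection weights and integer delays (the exact coupling function inverted must, of course, be the one used in the simulation):

```python
import numpy as np

def decouple_sources(x, weights, delays, g):
    """Subtract network input from observed source activity, assuming
    node i receives g * sum_j weights[i, j] * x[j, t - delays[i, j]].
    Returns the estimated intrinsic (uncoupled) activity."""
    n, T = x.shape
    intrinsic = x.copy()
    for i in range(n):
        for j in range(n):
            if weights[i, j] == 0.0:
                continue
            d = int(delays[i, j])
            intrinsic[i, d:] -= g * weights[i, j] * x[j, :T - d]
    return intrinsic

# Demo: node 0 is driven by a delayed copy of node 1; decoupling
# recovers the intrinsic signals exactly.
rng = np.random.default_rng(2)
s = rng.standard_normal((2, 100))
x = s.copy()
x[0, 3:] += 0.5 * 1.0 * s[1, :97]            # g=0.5, w[0,1]=1.0, delay=3
weights = np.array([[0.0, 1.0], [0.0, 0.0]])
delays = np.array([[0, 3], [0, 0]])
recovered = decouple_sources(x, weights, delays, g=0.5)
```

The weights and delays correspond to the individual structural priors (fiber-tract capacities and distances) mentioned above.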

A choice of parameter estimation algorithms

In the following, we review three different parameter estimation approaches we consider most important for accomplishing this task, together with a method for refinement of solutions based on the integration of the fMRI data. The three approaches are based on stochastic optimization, state observers, and dimensionality reduction. Up to now, only the first approach has been used for parameter estimation in the context of TVB, yet we aim to employ the others in the future as well.
Stochastic optimization
Stochastic optimization (Spall, 2003) is a Monte Carlo parameter estimation strategy that is able to conquer high-dimensional search spaces using random variables. In contrast to a fully randomized search that chooses parameters without any further constraints, during stochastic search, parameters are not generated in an entirely random fashion: multivariate Gaussian distributions are centered on some initial values, and the algorithm draws a large number of initial parameter sets from this distribution for evaluation. Then, if new points are found to generate better matching results, the multivariate distribution is moved and centered at the new best points in the parameter space. Further, the variance of the distribution is contracted at the end of each iteration, so that a smaller part of the search space is searched more thoroughly. Multiple instances of this search engine can be initialized at different starting points and evaluated to increase the chances of finding a global minimum and to decrease the probability of getting stuck in local minima. Examples of how this algorithm fits different motifs found in the source space with the Stefanescu–Jirsa model are shown in Figure 5.
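A minimal version of this contracting-Gaussian search can be written as follows. This is our own sketch; the shrink factor, draw counts, and iteration budget are illustrative, not the values used in TVB.

```python
import numpy as np

def contracting_gaussian_search(loss, x0, sigma0, n_iter=30, n_draws=50,
                                shrink=0.9, rng=None):
    """Monte Carlo search: draw candidates from a multivariate Gaussian
    centred on the current best point, recentre on improvements, and
    contract the proposal variance after each iteration."""
    rng = np.random.default_rng(rng)
    best = np.asarray(x0, dtype=float)
    best_loss = loss(best)
    sigma = float(sigma0)
    for _ in range(n_iter):
        candidates = best + sigma * rng.standard_normal((n_draws, best.size))
        losses = np.array([loss(c) for c in candidates])
        k = int(np.argmin(losses))
        if losses[k] < best_loss:
            best, best_loss = candidates[k], losses[k]
        sigma *= shrink   # contract: search a smaller region more thoroughly
    return best, best_loss

# Demo: minimize a 2-D quadratic with optimum at (2, -1).
quad = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
best, best_loss = contracting_gaussian_search(quad, x0=[0.0, 0.0],
                                              sigma0=1.0, rng=0)
```

Running several instances of this routine from different starting points, as described above, turns it into a crude multistart global optimizer.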
More sophisticated stochastic methods mimic strategies from biological evolution [Evolutionary computation (Fogel, 2005)], cooling of metals [Simulated annealing (Brooks and Morgan, 1995)], and other biological or physical phenomena [e.g., Ant Colony Optimization, Taboo Search, and particle swarm methods (Dorigo and Di Caro, 1999)].
In the present case, evolutionary approaches are well suited for both parameter sets: the forward model-based estimation of the set of global parameters on longer time-scales and the inverse model-based estimation of local parameters for short time-scales.
Further, the two approaches can be used in a complementary manner to inform each other about good starting values that correspond to macroscopic phenomena in the case of source space fits or to microscopic phenomena in the case of large-scale fits. For instance, an optimization routine that fits local parameters to snippets that show dynamics on a short time scale can be initialized with parameter values that were obtained during forward simulations that yielded the emergence of the RSNs observed along with these snippets during the experiment.
Although metaheuristics cannot guarantee to find the global optimum, they often compute the vicinity of global solutions in modest computation times and are therefore the method of choice for large parameter estimation problems (Moles et al., 2003).
State observers
In the framework of control theory, the challenge of parameter estimation in complex dynamical systems is approached by the implementation of state observers, which aim to provide estimates of the internal states of a system, given measurements of its input and output. Recently, several state observer techniques have been developed and successfully applied to biological systems, and the use of extended and unscented Kalman filtering methods has become a de facto standard for nonlinear state estimation (Lillacci and Khammash, 2010; Quach et al., 2007; Sun et al., 2008; Wang et al., 2009). When parameters are assumed to be constant, they are considered as additional state variables with a rate of change equal to zero. By incorporating the specific structure of the problem using this state extension, the problem of parameter identification is converted into a problem of state estimation, viz., determining the initial conditions that generate an observed output.
State observers are typically two-step procedures: first, the process state and covariance are estimated from the model. Then, feedback from noisy measurements is used to improve the previous prediction, and the process is repeated until convergence. However, convergence of state estimation is not guaranteed in the nonlinear case if the initial estimates are chosen badly. Even worse, the process of state extension can introduce nonuniqueness of the solution, that is, several sets of parameters or ranges of parameters that produce equally good fits. In this case, the model is referred to as being nonidentifiable.
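The state-extension idea can be illustrated on a toy scalar system, x[t+1] = a·x[t] + noise: the unknown parameter a is appended to the state with zero dynamics and estimated jointly by an extended Kalman filter. This is our own sketch, not TVB code; noise levels and the initial guess are illustrative.

```python
import numpy as np

def ekf_joint_estimate(ys, a0=0.5, q_x=0.01, q_a=1e-6, r=0.01):
    """Extended Kalman filter on the augmented state z = [x, a]: the
    unknown parameter a is treated as an extra state with a rate of
    change of zero.  System: x[t+1] = a*x[t] + noise, y[t] = x[t] + noise."""
    z = np.array([ys[0], a0])        # initialize x with the first observation
    P = np.diag([1.0, 1.0])
    Q = np.diag([q_x, q_a])
    for y in ys[1:]:
        x, a = z
        z = np.array([a * x, a])             # predict augmented state
        F = np.array([[a, x], [0.0, 1.0]])   # Jacobian of the prediction
        P = F @ P @ F.T + Q
        s = P[0, 0] + r                      # innovation variance (H = [1, 0])
        K = P[:, 0] / s                      # Kalman gain
        z = z + K * (y - z[0])               # correct with the measurement
        P = P - np.outer(K, P[0, :])
    return z[1]                              # final estimate of a

# Demo: simulate the system with a = 0.9 and recover the parameter.
rng = np.random.default_rng(1)
a_true, x = 0.9, 1.0
ys = []
for _ in range(500):
    ys.append(x + 0.1 * rng.standard_normal())
    x = a_true * x + 0.1 * rng.standard_normal()
a_hat = ekf_joint_estimate(np.array(ys))
```

Note that the poor-initialization and nonidentifiability caveats discussed above apply directly: with a bad a0 or an uninformative signal, this filter need not converge.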
Dimensionality reduction
A third approach that we consider relevant is dimensionality reduction, in particular for generating a dictionary of dynamic motifs, since it corresponds to the state space equivalent of the timeseries motifs: flows on low-dimensional manifolds in a higher-dimensional state space.
A data vector of length d can be regarded as a point that is embedded in a d-dimensional space. However, that does not necessarily mean that d is the actual dimension of the data. The intrinsic dimensionality (ID) of a data set equals the minimum number of free variables needed to represent the data without loss of information. Equivalently, the ID of that data vector is equal to M if all its elements lie within an M-dimensional subspace. Each extra dimension in regression space considerably hardens parameter estimation (curse of dimensionality); therefore, we seek a low-dimensional description of our data. Manifolds can be informally described as generalizations of surfaces into higher dimensions. Signals and other data can often be naturally described as points and flows on a manifold; that is, they are manifold-valued (Perdikis et al., 2011). Several physiologically motivated models of neuronal activity indicate that neuronal activity patterns have an underlying structure that is inherently lower dimensional and contained on the surface (or tightly around the surface) of a low-dimensional manifold that is embedded within the higher-dimensional data space (Deco et al., 2010; Freyer et al., 2009, 2011, 2012). Therefore, we view manifolds as geometric representations of meaningful system dynamics. To reconstruct manifolds, viz., manifold learning, one seeks a mapping that projects higher-dimensional input data onto such a low-dimensional embedding while preserving important characteristics of the data. Therefore, describing the system in terms of a dynamic model is equivalent to learning the structure of state space manifolds. The properties of the manifold on which neuronal activity is embedded directly correspond to the intrinsic dynamics of the system. It follows that a model that is intended to capture neuronal dynamics must necessarily reproduce flows on this underlying manifold.
Thus, structured flows on manifolds can be regarded as a compact description of the system's dynamics and therefore resemble the state space equivalent of our concept of timeseries motifs.
We outline a three-step procedure for a manifold-based parameter estimation scheme that is based on dimensionality reduction, manifold learning, and subsequent parameter estimation, as proposed by Ohlsson and colleagues (2007). The goal of this procedure is to infer a low-dimensional mapping of data points: when data points are constrained to a lower-dimensional manifold, so are the regression vectors, and the dimensionality of the regression problem is reduced accordingly.
Step 1: Dimensionality estimation:
The first step in this process is to estimate the ID of the data. We assume that observed data lie on a limited part of space, and that the dimensionality of this manifold corresponds to the ID of the data and to the degrees of freedom of the underlying system from which they are generated. It follows that the dimensionality of the data points toward the type and order of a model that can be used for describing the system, or toward the number of parameters that need to be estimated if the structure of the model is already known. Consequently, the target dimension for the subsequent manifold learning step, which must be provided for most approaches, is set to the ID of the dataset. Dimensionality estimation (DE) methods can be classified as local or global (Camastra, 2003; Van der Maaten, 2007). Local methods use the information contained in neighborhoods of data points. Specifically, they rest on the observation that the number of data points within a hypersphere of radius r around a data point grows proportionally to r^d, with d being the ID of the data. Thus, d can be estimated by counting and comparing the numbers of data points in hyperspheres with different radii. Global DE methods estimate the dimensionality of the full dataset. For instance, principal component analysis (PCA) can be used to estimate the dimensionality of a dataset by defining a cutoff value for the amount of variance that is explained by principal components. Then, the ID of a dataset is equal to the number of principal components that lie above the threshold, since there are typically d components that explain a large amount of variance (where d is the ID of the dataset), whereas the remaining eigenvalues are typically small and only account for noise in the data.
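Both flavors of DE can be sketched compactly: a global estimator using a PCA variance cutoff, and a local estimator comparing point counts in hyperspheres of radius r and 2r. The thresholds and radii below are arbitrary illustrative choices.

```python
import numpy as np

def id_pca(data, var_cutoff=0.95):
    """Global estimate: number of principal components needed to explain
    `var_cutoff` of the total variance."""
    centred = data - data.mean(axis=0)
    eigvals = np.linalg.svd(centred, compute_uv=False) ** 2
    ratios = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratios, var_cutoff) + 1)

def id_local(data, r):
    """Local estimate: point counts in hyperspheres grow ~ r**d, so
    compare pair counts at radii r and 2r."""
    diffs = data[:, None, :] - data[None, :, :]
    D = np.linalg.norm(diffs, axis=-1)
    n_r = (D < r).sum() - len(data)          # exclude self-distances
    n_2r = (D < 2 * r).sum() - len(data)
    return np.log(n_2r / n_r) / np.log(2.0)

# Demo: a 2-D latent sheet embedded linearly in 5-D.
rng = np.random.default_rng(3)
latent = rng.standard_normal((400, 2))
mixing = np.array([[1.0, 0.0, 0.0, 1.0, 0.0],
                   [0.0, 1.0, 1.0, 0.0, 0.0]])
data = latent @ mixing
d_global = id_pca(data)
d_local = id_local(data, r=0.5)
```

The local estimate is biased by boundary effects and nonuniform density, so it is typically read as an approximate value rather than an integer.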
Step 2: Manifold learning:
During this step, one aims to find a transformation of the data into a new coordinate representation on a manifold, termed intrinsic or embedded coordinates, that is, a mapping from the high-dimensional space down to the lower-dimensional space that retains the geometry of the data as far as possible. Many algorithms for manifold learning and dimensionality reduction exist and are, apart from parameter estimation, valuable tools for classification, compression, and visualization of high-dimensional data (Van der Maaten et al., 2009); however, they seldom produce an explicit mapping from the high-dimensional space to intrinsic coordinates. Instead, the algorithm has to be rerun as new data are introduced. However, Ohlsson and colleagues (2007) propose a remedy for this problem by linearly interpolating between implicit mappings. In the remainder of this section, we list a few approaches to exemplify the general ideas manifold learning techniques are based on, and leave it to the reader to pick a suitable method for the specific properties of their data from the referenced literature. For dimensionality reduction algorithms, one commonly distinguishes between linear techniques, including traditional methods such as PCA and linear discriminant analysis (LDA), and nonlinear techniques, as well as between global and local techniques. The purpose of global techniques is to retain global properties of the data, while local techniques typically only maintain properties of small neighborhoods of the data. Similar to PCA and LDA, global nonlinear techniques preserve global properties of the data, but are, in contrast, able to generate a nonlinear transformation between the high- and low-dimensional data.
Multidimensional scaling (MDS) (Cox and Cox, 2000) seeks a low-dimensional representation that preserves pairwise distances between data points as much as possible by minimizing a stress function that measures the error between the pairwise distances in the low-dimensional and the high-dimensional representation. Stochastic proximity embedding (Agrafiotis, 2003) also aims at minimizing the MDS raw stress function, but differs in the efficient rule that it employs to update the current estimate of the low-dimensional data representation. Isomap (Tenenbaum, 1998) overcomes a weakness of MDS, which is based on Euclidean distances, by accounting for the distribution of neighboring data points using geodesic distances. If the high-dimensional points lie on a curved surface, the Euclidean distance underestimates their displacement, while the geodesic (or curvilinear) distance measures their actual stretch over the manifold. Maximum variance unfolding (Weinberger et al., 2004) aims at unfolding data manifolds by maximizing the Euclidean distance between data points while keeping the local geometry, that is, all pairwise distances in a neighborhood around each point, fixed. Diffusion maps (Lafon and Lee, 2006) are constructed by performing random walks on the graph of the data. This yields a measure of the proximity of data points, since walking to a nearby point is more likely than walking to one that is far away. Using this measure, diffusion distances are computed with the aim of retaining them in the low-dimensional representation. Kernel PCA is an extension of PCA incorporating the kernel trick (Schölkopf et al., 1998). In contrast to traditional linear PCA, kernel PCA computes the principal eigenvectors of a kernel matrix, instead of those of the covariance matrix. The kernel matrix is computed by applying a kernel function to pairs of data points; this implicitly maps the data into a higher-dimensional feature space and thereby yields a nonlinear mapping.
Generalized discriminant analysis (Baudat and Anouar, 2000) is the kernel-based reformulation of LDA that attempts to maximize the Fisher criterion (similar to LDA) in the higher-dimensional kernel space. Multilayer autoencoders (Hinton and Salakhutdinov, 2006) are feed-forward networks with an odd number of hidden layers that are trained to minimize the mean-squared error between the input and output, which are ideally equal. Consequently, upon minimization, the middle layer constitutes a coded, low-dimensional representation of the data. Local nonlinear techniques include local linear embedding (LLE), Laplacian eigenmaps, Hessian LLE, and local tangent space analysis (LTSA). LLE (Roweis and Saul, 2000) assumes local linearity, since it represents a data point as a linear combination of its nearest neighbors by fitting a hyperplane through these points. During Hessian LLE (Donoho and Grimes, 2003), the curviness of the high-dimensional manifold is additionally minimized when embedding it into a low-dimensional space. Similarly, LTSA (Zhang and Zha, 2004) represents high-dimensional data points by linearly mapping them to the local tangent space of a data point. Thereby, it provides the coordinates of the low-dimensional data representations, along with a linear mapping of those coordinates to the local tangent space of the high-dimensional data. The Laplacian eigenmaps technique (Belkin and Niyogi, 2001) preserves pairwise distances between neighboring points by finding a representation that minimizes the distances between a data point and its k nearest neighbors. Apart from strictly global or local methods and extensions and variants thereof [cf. Van der Maaten et al. 
(2009) for further reading], there also exist hybrid techniques that combine both types of methods by computing several linear local models along with a global alignment of these local embeddings, for example, locally linear coordination (Teh and Roweis, 2002), manifold charting (Brand, 2003), and coordinated factor analysis (Verbeek, 2006).
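As one concrete instance from the list above, classical (Torgerson) MDS admits a closed-form solution via eigendecomposition of the double-centred squared-distance matrix. The following is a self-contained sketch:

```python
import numpy as np

def classical_mds(D, m=2):
    """Classical (Torgerson) MDS: embed points in m dimensions so that
    the pairwise Euclidean distances in D are preserved as well as
    possible.  D is an n x n matrix of pairwise distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centred Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:m]       # top-m eigenpairs
    L = np.sqrt(np.maximum(eigvals[idx], 0.0))
    return eigvecs[:, idx] * L                # scale eigenvectors

# Demo: 2-D points are recovered (up to rotation) from their distances.
rng = np.random.default_rng(4)
points = rng.standard_normal((10, 2))
D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
Y = classical_mds(D, m=2)
D_rec = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
```

Isomap follows the same recipe but first replaces D with graph-based geodesic distances, which is what allows it to unfold curved manifolds.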
Step 3: Manifold-based gray box identification:
Manifold-based regression (Ohlsson, 2008) can be combined with prior physical knowledge about the dynamics of the data in the form of a dynamical model. Ohlsson and Ljung (2009) recently developed a technique for manifold-based gray box system identification that they dub weight determination by manifold regularization. They propose a new regression algorithm that enables the inclusion of prior knowledge in high-dimensional regression problems, that is, gray-box system identification. The physical knowledge expressed by a dynamical model is introduced into the regression problem by adding a regularization term to the optimization function that incurs high cost if the regressors, expressed in the coordinates of the manifold, do not behave according to the physical model. The manifold-generating optimization problem then also minimizes over the states of the state space model and hence tries to find a coordinatization of the manifold that can be well described by the assumed model while at the same time fitting well with the manifold assumption and the measured output. Apart from this specific implementation, a manifold-based approach offers exciting possibilities for the integration of empirical data with physical models, as well as for visualization and analysis of dynamical properties of the system under study, that should be explored further in the future.
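Schematically, and under our reading of this approach (the notation below is ours, not Ohlsson and Ljung's), the regularized problem couples the manifold-learning cost with a model-consistency penalty:

```latex
\min_{Z,\,\theta} \; J_{\mathrm{manifold}}(Z; X) \;+\; \lambda \sum_{t} \left\| z_{t+1} - f(z_t, \theta) \right\|^{2}
```

where X denotes the measured high-dimensional data, Z = {z_t} their intrinsic (manifold) coordinates, f the assumed dynamical model with parameters θ, and λ a weight that trades geometric fidelity against consistency with the physical model.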

Identifiability: capitalizing on multimodal empirical data

When the number of unknown parameters is very large, parameter estimation schemes often find several equally fitting parameter sets, or ranges of values, instead of a unique optimal solution. In these cases, the model is qualified as nonidentifiable. Due to the large extent of the parameter space of the full model, the solution of parameter estimation will be characterized by large uncertainties and nonuniqueness, with a high probability that infinitely many parameter sets are equally likely to be correct. Given one or several sets of model parameters that generate equally good estimates, one may ask: what is the probability that the model parameters are correct? The parameters of the underlying natural system, that is, the biophysical properties that relate to our model parameters, only have a single realization at a particular time.
In a model situation, the likelihood of a parameter set is the probability that an observed data set is generated given this parameter set. However, the computation of likelihoods is very costly when the dimensionality of the data is high. As a remedy, it is possible to include additional information about the system under study that was not used for fitting. To improve the probability of finding the correct solution, we introduce additional criteria that discern plausible from implausible estimates. In our case, we plan to obtain this additional information from an fMRI signal that was simultaneously acquired with the EEG.
In the context of biological systems, this problem has been addressed by an approach that combines extended Kalman filtering with a posteriori calculated measures of accuracy of the estimation process based on a χ² variance test on measurement noise (Lillacci and Khammash, 2010). The core idea is to examine the reliability of estimates after they have been computed, using additional information gathered from the noise statistics of the experiment, to ensure that the estimated parameters are consistent with all available empirical data or otherwise defined constraints. After each estimation step, the algorithm checks whether the estimate satisfies several constraints. This step can be cast as a separate optimization problem. If the estimate fulfills the constraints, the algorithm was able to recover the unique solution to the problem and continues with this estimate. If not, most likely no unique solution exists; the estimate is then replaced by the solution of the constraint satisfaction problem, and the parameter search continues on the basis of the thereby refined estimate.
Following these lines, we propose an additional a posteriori test for refinement of candidate parameter sets that were obtained during the estimation steps using one of the above-listed parameter estimation approaches. In our case, we exploit the existence of two simultaneously acquired data sets that capture partially complementary dynamics on two different time scales. While fitting the model to the fast dynamics, we use the slow BOLD dynamics for verifying the previous estimates and selecting among ambiguous estimates. The time resolution of fMRI observations is much lower than that of EEG observations, with typically one datapoint every 2 sec. In principle, this setup could be modeled by directly fusing both signals using a time-varying observation matrix integrating EEG and fMRI into a single-observation model (Purdon et al., 2010). Moreover, it would also be possible to directly integrate BOLD-derived inequality constraints by using constrained Kalman filtering approaches (Simon and Chia, 2002; Simon and Simon, 2003) or to extend approaches for slow–fast systems. However, BOLD observations need not be integrated directly into the observation model; instead, they can be used, along the lines of Lillacci and Khammash (2010), for selection between ambiguous estimates.
What additional information can we extract from the BOLD data to allow a further refinement of initial estimates? It is well recognized that EEG mainly reflects the superimposed and spatiotemporally smoothed electrical activity of pyramidal neurons in the superficial layers of the cortex that are oriented perpendicular to the scalp (Buzsáki et al., 2012). Since we fitted the model with the source-projected EEG data, it is reasonable to assume that only the activity of excitatory neurons is captured. However, there is evidence estimating that the activity of inhibitory neurons, which comprise about 15%–20% of cortical neurons, accounts for 10%–15% of the total oxidative metabolism (Attwell and Iadecola, 2002). In contrast to pyramidal cells, interneurons, which represent the majority of the inhibitory cell fraction, do not have a dipolar configuration and hence do not contribute to the EEG signal, while they are nonetheless presumed to have a metabolic demand. Although there is ongoing debate whether and how inhibitory activity influences the BOLD signal (Ritter et al., 2002), consensus exists that fMRI sees, albeit indirectly, a fraction of neuronal activity that EEG is not able to detect. Therefore, it becomes possible to use this information, unobservable with EEG, to refine previous fits of the model to EEG data. As a simple example, suppose that several candidate parameter sets producing equally well-fitting, similarly shaped electric waveforms have been identified, and that the model has in addition generated mean-field timeseries of the inhibitory population. This additional information can then be incorporated into forward BOLD data simulation, and the resulting BOLD waveform and FC topology can be used as an additional criterion for discerning between parameter candidates.
Another refinement strategy is based on the robustness of a parameter set in explaining (1) increasingly longer snippets and (2) more snippets across different subjects. We assume that the motif patterns constitute distinct activity profiles of neural population dynamics that appear highly conserved in different subjects, but exhibit slight variations due to background activity. Therefore, the quality of a parameter set can be quantified by its robustness in explaining a noisy motif pattern across a large number of different subjects. Thus, it is possible to assign scores to parameter sets that rank their robustness in predicting motif patterns across different subjects. Equally fitting parameter sets can be initially stored in the dictionary and subsequently dismissed as the model is fitted to longer snippets and to new data sets from different subjects.
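The cross-subject robustness criterion can be sketched as a simple mean-correlation score. This is hypothetical; real scoring would combine several similarity metrics and longer snippets.

```python
import numpy as np

def robustness_score(candidate, subject_snippets):
    """Score a dictionary candidate by how well it explains the same
    motif across subjects: mean waveform correlation over all subjects."""
    rs = [np.corrcoef(candidate, s)[0, 1] for s in subject_snippets]
    return float(np.mean(rs))

# Demo: noisy realizations of a 5-Hz motif from several "subjects".
rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 200)
motif = np.sin(2 * np.pi * 5 * t)
subjects = [motif + 0.2 * rng.standard_normal(t.size) for _ in range(8)]
good = robustness_score(motif, subjects)
bad = robustness_score(np.sin(2 * np.pi * 8 * t), subjects)
```

Ranking candidates by such a score, and pruning low-ranked ones as new subjects arrive, implements the dictionary-dismissal step described above.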
Despite the existence of Kalman filter approaches for filtering systems with multiple timescales (i.e., slow–fast systems) and of approaches that directly incorporate inequality constraints, we prefer the proposed two-step estimation–refinement procedure for several reasons. First, it constitutes a convenient way to add further constraints that are unknown at the time of the estimation but become available later. Therefore, we store all parameter settings that allow the reproduction of a certain waveform in a dictionary and prune the results when new physiological constraints become available. Second, besides the fact that an additional BOLD model enlarges the already-huge space of unknowns that need to be fitted and thereby hampers the estimation process, we are uncertain about the actual neurovascular coupling, and this uncertainty might bias the estimator and introduce more disadvantages than benefits when fitting the model to data. Facing the lack of knowledge about the exact interrelationship between neuronal activity and hemodynamic signal changes, our agenda is to make as few assumptions as possible about electrovascular coupling and to stay flexible, using this information only for the refinement of fits of electrical activity. Third, the forward BOLD model might change. Since we use a very simple BOLD model at present and it is very likely that better spatiotemporal BOLD models will appear in the near future, we want to keep the flexibility to incorporate new models without the need to repeat the estimation runs. Fourth, we are actually interested in ambiguous solutions, since parameter settings that allow the emergence of certain dynamics directly correspond to physiological ranges and conditions under which a certain behavior is possible in the real system.
This allows an in silico study of physiological properties of the neural populations that permit the emergence of certain activity patterns. This in turn enables us to formulate hypotheses that can be verified in vitro or in vivo later on.

Benefits of TVB Platform

Predicting unobservable internal states and parameters: model-based formulation of new hypotheses

By identifying parameter trajectories, we also identify phase flows of unobservable system components; that is, we build a computational microscope that allows the inference of internal states and processes of the system that lie below the resolution of the imaging device. For example, it is impossible to observe the neuronal source activity of the entire human brain simultaneously. At best, recordings from patients with implanted single electrodes or grids cover a certain region, so that only local field potentials or multiunit activity can be assessed at one time. With our modeling approach, we are able to predict the electric source activity that underlies the observed EEG and fMRI signals throughout the brain simultaneously. An example of source activity in 96 regions of the brain is shown in Figure 3A (right).
In a similar vein, fitting model parameters to empirical data leads to the prediction of certain biophysical properties of the system units, given the observed functional imaging data. For example, altering the coupling factors K11, K12, and K21 between the inhibitory and excitatory neuronal populations of the mesoscopic node modifies the properties of the resulting source activity. This leads to quantitative hypotheses about the role of the balance between inhibitory and excitatory local coupling that are testable by means of pharmacological interventions using selective gamma-aminobutyric acid or glutamate antagonists/agonists.
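How such coupling factors shape local dynamics can be illustrated with a minimal two-population rate model. This is a sketch only, a Wilson–Cowan-style toy system rather than the Stefanescu–Jirsa model; the gains, time step, and tanh activation are all assumptions for illustration.

```python
import numpy as np

def simulate_ei(k12, k21, k11=0.5, steps=4000, dt=0.01):
    """Euler integration of a toy excitatory-inhibitory rate model.
    k11: E->E gain, k12: E->I gain, k21: I->E gain (names mirror Table 2)."""
    e, i = 0.1, 0.1
    trace = np.empty(steps)
    for t in range(steps):
        de = -e + np.tanh(k11 * e - k21 * i + 1.0)  # excitatory population
        di = -i + np.tanh(k12 * e)                  # inhibitory population
        e, i = e + dt * de, i + dt * di
        trace[t] = e
    return trace

weak = simulate_ei(k12=0.1, k21=0.15)    # gains as in Table 2
strong = simulate_ei(k12=1.5, k21=2.0)   # much stronger inhibitory loop
# Strengthening the inhibitory loop lowers steady-state excitatory activity.
print(strong[-1] < weak[-1])
```

Sweeping `k12` and `k21` over a grid and recording the resulting waveform features is, in miniature, the kind of in silico manipulation that generates the pharmacologically testable hypotheses mentioned above.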
Another issue addressable by TVB is the relation between electric neuronal activity and the BOLD signal. This is important because fMRI is a widely used brain-imaging tool, yet the link between the BOLD signal and electrophysiological signals is far from clear. In particular, the link between fast oscillatory activity, a dominant feature of electrophysiological data, and the BOLD signal is not resolved (Riera and Sumiyoshi, 2010). While the strength of those oscillations, which is thought to reflect the degree of synchrony in the underlying neuronal populations, is correlated with the BOLD signal (Moosmann et al., 2003; Ritter et al., 2008, 2009), the biophysical principles of this coupling remain unknown. Yet, gaining more insight into this relation is essential, since evidence attributes important functional roles to neuronal oscillations and synchrony. Changes in these variables are indicators of functionally relevant brain state alterations (Becker et al., 2011; Freyer et al.; Reinacher et al., 2009). TVB offers the possibility to explore both the interrelations of different types of neuronal activity and the neurohemodynamic coupling. The former is relevant for examining the neuronal mechanisms underlying brain function, such as the highly debated question of whether phase resetting of ongoing oscillations is a common principle in the brain (Ritter and Becker, 2009). The latter is relevant for clarifying how oscillations, their phase, and synchrony behavior relate to the BOLD signal. The dictionary approach allows us to identify and store the parameter sets that yield the best model fits to multimodal empirical data. Such information can be collected for large numbers of data sets. Post hoc statistics over the collected parameter sets and the corresponding goodness of fit allow us to identify mechanistic scenarios that reproduce our empirical observations with high reliability.
The unobservable states predicted by such identified models can then be used to formulate new hypotheses, for example, about underlying molecular mechanisms that can be tested by means of appropriate methods such as intracranial recordings, pharmacological interventions, or in vitro research.

Revealing mechanisms that yield specific features of large-scale dynamics

EEG waveforms recorded on the scalp are a linear superposition of microcurrent sources. However, the mechanisms of source interaction from which dynamic signals emerge remain mostly unknown. In the past, it has been shown that time delays of signal transmission between large-scale brain regions, which arise from the specific underlying large-scale connectivity structure and finite transmission speeds, can have a profound impact on the dynamic properties of the system (Ghosh et al., 2008; Knock et al., 2009). Ghosh and colleagues demonstrated that in large-scale models, besides realistic long-range connectivity, the addition of noise and time delays enables the emergence of fast neuroelectric rhythms in the 1–100-Hz range and slow hemodynamic oscillations in the ultraslow regime <0.1 Hz (Ghosh et al., 2008). These two studies demonstrate how large-scale modeling helps to reveal mechanisms that could not have been recovered from empirical data alone. The same large-scale modeling approaches employed in both studies now build important foundations of TVB.
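The delays themselves follow directly from the tract-length matrix and a finite conduction velocity, as with the `v_cond` parameter listed in Table 2. A minimal sketch (units assumed here to be mm and mm/ms):

```python
import numpy as np

def delay_matrix(distances_mm, v_cond=10.0):
    """Pairwise transmission delays (ms) from DTI-derived tract lengths (mm),
    assuming a single conduction velocity v_cond (mm/ms), as in Table 2."""
    return np.asarray(distances_mm, dtype=float) / v_cond

# Toy 2-region example: an 80 mm tract at 10 mm/ms gives an 8 ms delay.
distances = np.array([[0.0, 80.0],
                      [80.0, 0.0]])
print(delay_matrix(distances))
```

In a full network model, each entry of this matrix shifts the coupling term between the corresponding node pair back in time, which is exactly the ingredient shown to enable the emergence of realistic rhythms.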
TVB accounts for individual differences in SC. Hence, it can be used to investigate the effects of variation in coupling (time delays and connection strengths) on emerging brain dynamics by comparing predictions and real data across different subjects. Figure 2C illustrates how TVB can reveal individual structure–function relations. SC in terms of fiber-tract sizes and lengths was determined for a single subject (No. 7). A Stefanescu–Jirsa full-brain model was constructed based on the features of the SC of this subject. Simulations were run for 80 different parameter settings. The resulting 96×96 simulated BOLD FC matrices were compared with the empirical BOLD FC matrices of this subject and of eight others. The distribution of correlation coefficients between the simulated FC (based on the connectivity of subject No. 7) and the empirical FC (of all nine subjects) for a single tested parameter setting (Table 2) is displayed in Figure 2C. Each subplot represents the distribution of correlation coefficients obtained for the empirical FCs of an individual subject. In this example, model predictions match the empirical data of subject No. 7, that is, the subject whose SC was used, far better than those of the other subjects. A systematic evaluation of these results will be published elsewhere. This example illustrates that TVB enables us to assess the individual structure–function relationship of the brain. Statistical evaluations over a larger number of subjects will provide insights into which structural parameters give rise to which features of the space–time structure of brain activity, and hence help us understand interindividual differences.
Table 2. Parameter Values of the Full-Brain Model Yielding the Exemplary Results Shown in Figure 3A (Mean Field Source Activity), Figure 3B (Electroencephalography), Figure 3C (Blood Oxygen Level-Dependent Activity), Figure 3D (Power Spectra of Mean Field Source, Electroencephalography, and Blood Oxygen Level-Dependent Activity), Figure 2B (Blood Oxygen Level-Dependent Functional Connectivity), and Figure 7 (Functional Connectivity of Mean Field Source Activity)
Parameter  Value   Comment

Large-scale connectivity
csf        0.0672  Capacities scaling factor (used to rescale DTI-derived capacities between brain regions)
v_cond     10      Conduction velocity (used to estimate time delays from DTI-derived distances between brain regions)

Original Hindmarsh–Rose model parameters
r          0.006   Controls the speed of variation of the slow variables (i.e., z and u)
s          4       Adaptation variable
x0         −1.6    Membrane resting potential
a          1       Models behavior of fast ion channels
b          3       Models behavior of fast ion channels
c          1       Models behavior of fast ion channels
d          5       Models behavior of fast ion channels

Local population and local coupling parameters
K11        0.5     Connectivity strength within the excitatory subpopulations
K12        0.1     Connectivity strength from excitatory to inhibitory populations (default setting 0.15)
K21        0.15    Connectivity strength from inhibitory to excitatory subpopulations
mu         2.2     Mean of the distribution of membrane excitabilities for each subpopulation
sigma      0.3     Dispersion of membrane excitabilities for each subpopulation
Nv         1500    Resolution of excitatory distributions
Nu         1500    Resolution of inhibitory distributions

Noise for the six state equations
Qx         0.001   Mean of noise distribution for state variable x
Qy         0.001   Mean of noise distribution for state variable y
Qz         0.001   Mean of noise distribution for state variable z
Qw         0.001   Mean of noise distribution for state variable w
Qv         0.001   Mean of noise distribution for state variable v
Qu         0.001   Mean of noise distribution for state variable u
DTI, diffusion tensor imaging.
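The comparison of simulated and empirical FC described above reduces to correlating the off-diagonal entries of two 96×96 matrices. A minimal sketch with synthetic stand-in data (the region count and noise levels are assumptions for illustration):

```python
import numpy as np

def fc_matrix(ts):
    """Functional connectivity: region-by-region Pearson correlation of
    time series with shape (n_regions, n_samples)."""
    return np.corrcoef(ts)

def fc_similarity(fc_sim, fc_emp):
    """Correlate the upper triangles (excluding the diagonal) of two FC
    matrices, as when matching simulated to empirical BOLD FC."""
    iu = np.triu_indices_from(fc_sim, k=1)
    return np.corrcoef(fc_sim[iu], fc_emp[iu])[0, 1]

rng = np.random.default_rng(7)
sim = rng.standard_normal((96, 500))                   # stand-in simulated BOLD
emp_same = sim + 0.5 * rng.standard_normal((96, 500))  # "subject whose SC was used"
emp_other = rng.standard_normal((96, 500))             # "different subject"
print(fc_similarity(fc_matrix(sim), fc_matrix(emp_same)) >
      fc_similarity(fc_matrix(sim), fc_matrix(emp_other)))
```

Repeating `fc_similarity` over all parameter settings and all subjects yields exactly the kind of distribution of correlation coefficients shown in Figure 2C.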
The role of several features concerning the relation between structure and dynamics, as well as the relation between source activity and the signals observed with brain-imaging devices, can be addressed by TVB. To mention just a few, the role of time delays, the directionality of information flow, and synchrony in certain frequency bands or between regions can be systematically investigated. As an illustration, Figure 7 shows how different frequency bands of electric source activity reflect the functional BOLD connectivity observed in empirical data.
FIG. 7. Simulated FC of individual frequency bands of mean-field activity and its correlation with the empirical BOLD FC shown in Figure 2B. Area names of the 96 regions are listed in Table 1; the ordering of regions is the same in the table and the matrices.
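The band-specific analysis behind Figure 7 can be sketched as follows. This toy example uses a brick-wall FFT filter and synthetic data in which an alpha-band rhythm drives the "empirical" FC; the band limits, sampling rate, and region count are all assumptions for illustration.

```python
import numpy as np

def band_fc(source_ts, fs, band):
    """FC of source activity restricted to one frequency band via a
    brick-wall FFT filter. source_ts: (n_regions, n_samples), band in Hz."""
    spec = np.fft.rfft(source_ts, axis=1)
    freqs = np.fft.rfftfreq(source_ts.shape[1], d=1.0 / fs)
    spec[:, ~((freqs >= band[0]) & (freqs <= band[1]))] = 0
    filtered = np.fft.irfft(spec, n=source_ts.shape[1], axis=1)
    return np.corrcoef(filtered)

def band_bold_match(source_ts, bold_fc, fs, bands):
    """Correlate each band-specific source FC with the empirical BOLD FC
    (upper triangles only), as in the Figure 7 analysis."""
    iu = np.triu_indices(source_ts.shape[0], k=1)
    return {name: float(np.corrcoef(band_fc(source_ts, fs, b)[iu],
                                    bold_fc[iu])[0, 1])
            for name, b in bands.items()}

# Toy data: 4 regions; regions 0 and 1 share a 10 Hz rhythm that also
# dominates the stand-in "empirical" FC.
rng = np.random.default_rng(3)
fs, n = 250.0, 5000
common = np.sin(2 * np.pi * 10 * np.arange(n) / fs)
source = rng.standard_normal((4, n))
source[:2] += common
bold_fc = np.corrcoef(source)
bands = {"theta": (4, 8), "alpha": (8, 12), "beta": (13, 30)}
match = band_bold_match(source, bold_fc, fs, bands)
print(max(match, key=match.get))
```

With this construction the alpha-band FC matches the stand-in BOLD FC best, mirroring the kind of band-by-band comparison the figure presents for real data.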
Following the reconstruction of model dynamics from empirical timeseries, several neurobiological questions can be addressed. State and parameter space trajectories can be related to experimental conditions and behavior. Thus, the proposed approach allows for the direct association of low-level neuronal processes with top-level cognitive processes. It identifies metrics that quantify the functional relevance of dynamical features for cognition and behavior under normal and pathological conditions. Ultimately, we believe that this tool enables us to gain further insights into the mechanisms and principles of neural computation.

Conclusions

In this article, we have described the development of a computational platform, TVB, that aims to model large-scale brain dynamics as accessible by brain-imaging methods such as EEG and fMRI. We chose the Stefanescu–Jirsa model for the simulation of local dynamics at 96 cortical regions. These regions or nodes were connected to form a network. Time delays and interaction strengths between the nodes were based on real tractography data of individual subjects.
This is the first open-source computational platform that provides a complete and customizable toolbox to model large-scale brain phenomena of individual subjects. It enables the identification of mechanisms underlying the space–time structure of the brain and the generation of new hypotheses that are not accessible through experiments alone. Thereby, it integrates theory-driven and data-driven research approaches into a single framework.
While we have provided here a proof of concept for the functioning of TVB, TVB is a community project. Only the contributions of many sites will lead to a major breakthrough in our understanding of the brain. It is the systematic and standardized exploration of model dynamics on multiple spatiotemporal scales and the organized storage of new knowledge in the TVB dictionary that finally will give us an understanding of the principles that underlie the complex space–time structure of our brain.

Acknowledgments

The authors acknowledge the support of the James S. McDonnell Foundation (Brain Network Recovery Group JSMF22002082) granted to P.R., A.R.M., and V.J.; the German Ministry of Education and Research (Bernstein Focus State Dependencies of Learning 01GQ0971) granted to P.R.; and the Max-Planck Society (Minerva Program) granted to P.R.

References

Agrafiotis DK. 2003. Stochastic proximity embedding. J Comput Chem 24:1215–1221.
Amari S. 1977. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 27:77–87.
Aquino KM, Schira MM, et al. 2012. Hemodynamic traveling waves in human visual cortex. PLoS Comput Biol 8:e1002435.
Assisi CG, Jirsa VK, et al. 2005. Synchrony and clustering in heterogeneous networks with global coupling and parameter dispersion. Phys Rev Lett 94:018106.
Attwell D, Iadecola C. 2002. The neural basis of functional brain imaging signals. Trends Neurosci 25:621–625.
Attwell D, Laughlin SB. 2001. An energy budget for signaling in the grey matter of the brain. J Cereb Blood Flow Metab 21:1133–1145.
Balocchi R, Menicucci D, et al. 2003. Empirical mode decomposition to approach the problem of detecting sources from a reduced number of mixtures. IEEE.
Baudat G, Anouar F. 2000. Generalized discriminant analysis using a kernel approach. Neural Comput 12:2385–2404.
Becker R, Reinacher M, et al. 2011. How ongoing neuronal oscillations account for evoked fMRI variability. J Neurosci 31:11016–11027.
Belkin M, Niyogi P. 2001. Laplacian eigenmaps and spectral techniques for embedding and clustering. Adv Neural Inf Process Syst 14:585–591.
Bezgin G, Vakorin VA, et al. 2012. Hundreds of brain maps in one atlas: registering coordinate-independent primate neuro-anatomical data to a standard brain. Neuroimage 62:67–76.
Bezgin GY, Gibson E, et al. 2011. What structure underlies the human brain function? Combining human DTI data with axonal tract tracing animal studies. Annual Meeting of the Society for Neuroscience, Washington DC.
Bojak I, Oostendorp TF, et al. 2010. Connecting mean field models of neural activity to EEG and fMRI data. Brain Topogr 23:139–149.
Brand M. 2003. Charting a manifold. Advances in Neural Information Processing Systems 15 (NIPS'02), (S. Becker, S. Thrun, and K. Obermayer, eds.), pp. 961–968, MIT Press.
Breakspear M, Jirsa V. 2007. Neuronal dynamics and brain connectivity. In: Jirsa VK, McIntosh AR (eds). Handbook of Brain Connectivity. Berlin Heidelberg, New York: Springer; pp. 3–64.
Breakspear M, Terry JR, et al. 2003. Modulation of excitatory synaptic coupling facilitates synchronization and complex dynamics in a biophysical model of neuronal dynamics. Network 14:703–732.
Brooks S, Morgan B. 1995. Optimization using simulated annealing. Statistician 44:241–257.
Brunel N, Wang XJ. 2003. What determines the frequency of fast network oscillations with irregular neural discharges? I. Synaptic dynamics and excitation-inhibition balance. J Neurophysiol 90:415–430.
Buxton RB, Wong EC, et al. 1998. Dynamics of blood flow and oxygenation changes during brain activation: the balloon model. Magn Reson Med 39:855–864.
Buzsaki G. 2006. Rhythms of the Brain. New York: Oxford University Press.
Buzsáki G, Anastassiou CA, et al. 2012. The origin of extracellular fields and currents—EEG, ECoG, LFP and spikes. Nat Rev Neurosci 13:407–420.
Calin-Jageman RJ, Tunstall MJ, et al. 2007. Parameter space analysis suggests multi-site plasticity contributes to motor pattern initiation in Tritonia. J Neurophysiol 98:2382–2398.
Camastra F. 2003. Data dimensionality estimation methods: a survey. Pattern Recognit 36:2945–2954.
Chan H, Chen YS, et al. 2009. Lead field space projection for spatiotemporal imaging of independent brain activities. In: Yu W, He H, Zhang N (eds). Advances in Neural Networks. Wuhan, China: ISNN; pp. 512–519.
Cox TF, Cox MAA. 2000. Multidimensional Scaling. London: Chapman & Hall/CRC.
Creutzfeldt OD, Ingvar DH, et al. 1975. Neurophysiological correlates of different functional states of the brain. In: Ingvar DH, Lassen NA (eds). Brain Work: The Coupling of Function, Metabolism and Blood Flow in the Brain. Munksgaard: Copenhagen, Denmark; pp. 21–46.
Daunizeau J, David O, et al. 2011. Dynamic causal modelling: a critical review of the biophysical and statistical foundations. Neuroimage 58:312–322.
Deco G, Jirsa VK. 2012. Ongoing cortical activity at rest: criticality, multistability, and ghost attractors. J Neurosci 32:3366–3375.
Deco G, Jirsa VK, et al. 2010. Emerging concepts for the dynamical organization of resting-state activity in the brain. Nat Rev Neurosci 12:43–56.
Doloc-Mihu A, Calabrese RL. 2011. A database of computational models of a half-center oscillator for analyzing how neuronal parameters influence network activity. J Biol Phys 37:263–283.
Donoho DL, Grimes C. 2003. Hessian eigenmaps: locally linear embedding techniques for high-dimensional data. Proc Natl Acad Sci U S A 100:5591–5596.
Donoho DL, Huo X. 2001. Uncertainty principles and ideal atomic decomposition. IEEE Trans Inf Theory 47:2845–2862.
Dorigo M, Di Caro G. 1999. New Ideas in Optimization. McGraw-Hill.
Drysdale PM, Huber JP, et al. 2010. Spatiotemporal BOLD dynamics from a poroelastic hemodynamic model. J Theor Biol 265:524–534.
Dunning HS, Wolff HG. 1937. The relative vascularity of various parts of the central and peripheral nervous system of the cat and its relation to function. J Comp Neurol 67:433–450.
Fogel DB. 2005. Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. Hoboken, New Jersey: Wiley-IEEE Press.
Freyer F, Aquino K, et al. 2009. Bistability and non-Gaussian fluctuations in spontaneous cortical activity. J Neurosci 29:8512–8524.
Freyer F, Becker R, et al. 2009. Ultrahigh-frequency EEG during fMRI: pushing the limits of imaging-artifact correction. Neuroimage 48:94–108.
Freyer F, Becker R, et al. 2013. State-dependent perceptual learning. J Neurosci 33:2900–2907.
Freyer F, Roberts JA, et al. 2011. Biophysical mechanisms of multistability in resting-state cortical rhythms. J Neurosci 31:6353–6361.
Freyer F, Roberts JA, et al. 2012. A canonical model of multistability and scale-invariance in biological systems. PLoS Comput Biol 8:e1002634.
Friston KJ, Harrison L, et al. 2003. Dynamic causal modelling. Neuroimage 19:1273–1302.
Friston KJ, Jezzard P, et al. 1998. The analysis of functional MRI time-series. Hum Brain Mapp 2:69–78. Wiley.
Friston KJ, Mechelli A, et al. 2000. Nonlinear responses in fMRI: the Balloon model, Volterra kernels, and other hemodynamics. Neuroimage 12:466–477.
Ghosh A, Rho Y, et al. 2008. Noise during rest enables the exploration of the brain's dynamic repertoire. PLoS Comput Biol 4:e1000196.
Giaume C, Koulakoff A, et al. 2010. Astroglial networks: a step further in neuroglial and gliovascular interactions. Nat Rev Neurosci 11:87–99.
Grech R, Cassar T, et al. 2008. Review on solving the inverse problem in EEG source analysis. J Neuroeng Rehabil 5:25.
Greicius MD, Krasnow B, et al. 2003. Functional connectivity in the resting brain: a network analysis of the default mode hypothesis. Proc Natl Acad Sci U S A 100:253–258.
Günay C, Edgerton JR, et al. 2008. Channel density distributions explain spiking variability in the globus pallidus: a combined physiology and computer simulation database approach. J Neurosci 28:7476–7491.
Günay C, Edgerton JR, et al. 2009. Database analysis of simulated and recorded electrophysiological datasets with PANDORA's toolbox. Neuroinformatics 7:93–111.
Günay C, Prinz AA. 2010. Model calcium sensors for network homeostasis: sensor and readout parameter analysis from a database of model neuronal networks. J Neurosci 30:1686–1698.
Gusnard DA, Raichle ME. 2001. Searching for a baseline: functional imaging and the resting human brain. Nat Rev Neurosci 2:685–694.
Haken H. 1975. Cooperative phenomena in systems far from thermal equilibrium and in nonphysical systems. Rev Mod Phys 47:67.
Hämäläinen M, Sarvas J. 1989. Realistic conductivity geometry model of the human head for interpretation of neuromagnetic data. IEEE Trans Biomed Eng 36:165–171.
Hämäläinen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV. 1993. Magnetoencephalography—theory, instrumentation and applications to noninvasive studies of the working human brain. Rev Mod Phys 65:413–497.
Hamalainen MS. 1992. Magnetoencephalography: a tool for functional brain imaging. Brain Topogr 5:95–102.
He B, Liu Z. 2008. Multimodal functional neuroimaging: integrating functional MRI and EEG/MEG. IEEE Rev Biomed Eng 1:23–40.
Helmholtz H. 1853. Ueber einige Gesetze der Vertheilung elektrischer Ströme in körperlichen Leitern mit Anwendung auf die thierisch-elektrischen Versuche. Annalen der Physik 165:211–233.
Hindmarsh J, Rose R. 1984. A model of neuronal bursting using three coupled first order differential equations. Proc R Soc Lond B Biol Sci 221:87–102.
Hinton GE, Salakhutdinov RR. 2006. Reducing the dimensionality of data with neural networks. Science 313:504–507.
Horwitz B, Poeppel D. 2002. How can EEG/MEG and fMRI/PET data be combined? Hum Brain Mapp 17:1–3.
Iadecola C. 2004. Neurovascular regulation in the normal brain and in Alzheimer's disease. Nat Rev Neurosci 5:347–360.
Jirsa VK, Huys R, Pillai AS, Perdikis M. 2012. Connectivity and dynamics of neural information processing. In: Rabinovich MI, Friston KJ, Varona P (eds). Principles of Brain Dynamics: Global State Interactions.
Jirsa VK, Haken H. 1996. Field theory of electromagnetic brain activity. Phys Rev Lett 77:960–963.
Jirsa VK, Jantzen KJ, et al. 2002. Spatiotemporal forward solution of the EEG and MEG using network modeling. IEEE Trans Med Imaging 21:493–504.
Jirsa VK, Stefanescu RA. 2011. Neural population modes capture biologically realistic large scale network dynamics. Bull Math Biol 73:325–343.
Kelso JAS. 1995. Dynamic Patterns: The Self-Organization of Brain and Behavior. Cambridge: The MIT Press.
Knock S, McIntosh A, et al. 2009. The effects of physiologically plausible connectivity structure on local and global dynamics in large scale brain models. J Neurosci Methods 183:86–94.
Kotter R, Wanke E. 2005. Mapping brains without coordinates. Philos Trans R Soc Lond B Biol Sci 360:751–766.
Lafon S, Lee AB. 2006. Diffusion maps and coarse-graining: a unified framework for dimensionality reduction, graph partitioning, and data set parameterization. IEEE Trans Pattern Anal Mach Intell 28:1393–1403.
Larter R, Speelman B, et al. 1999. A coupled ordinary differential equation lattice model for the simulation of epileptic seizures. Chaos 9:795–804.
Lillacci G, Khammash M. 2010. Parameter estimation and model selection in computational biology. PLoS Comput Biol 6:e1000696.
Litvak VMattout J et al.2011. EEG and MEG data analysis in SPM8Comput Intell Neurosci. Litvak V, Mattout J, et al. 2011. EEG and MEG data analysis in SPM8. Comput Intell Neurosci.
Liu ZHe B2008. fMRI-EEG integrated cortical source imaging by use of time-variant spatial constraintsNeuroimage391198-1214. Liu Z, He B. 2008. fMRI-EEG integrated cortical source imaging by use of time-variant spatial constraints. Neuroimage 39:1198–1214.
Logothetis NKPauls J et al.2001. Neurophysiological investigation of the basis of the fMRI signalNature412150-157. Logothetis NK, Pauls J, et al. 2001. Neurophysiological investigation of the basis of the fMRI signal. Nature 412:150–157.
Long CJPurdon PL et al.2011. State-space solutions to the dynamic magnetoencephalography inverse problem using high performance computingAnn Appl Stat51207-1228. Long CJ, Purdon PL, et al. 2011. State-space solutions to the dynamic magnetoencephalography inverse problem using high performance computing. Ann Appl Stat 5:1207–1228.
Lytton WWOmurtag A2007. Tonic-clonic transitions in computer simulationJ Clin Neurophysiol: official publication of the American Electroencephalographic Society24175. Lytton WW, Omurtag A. 2007. Tonic-clonic transitions in computer simulation. J Clin Neurophysiol: official publication of the American Electroencephalographic Society 24:175.
Mandeville JBMarota JJ et al.1999. Evidence of a cerebrovascular postarteriole windkessel with delayed complianceJ Cereb Blood Flow Metab19679-689. Mandeville JB, Marota JJ, et al. 1999. Evidence of a cerebrovascular postarteriole windkessel with delayed compliance. J Cereb Blood Flow Metab 19:679–689.
Mantini DPerrucci MG et al.2007. Electrophysiological signatures of resting state networks in the human brainProc Natl Acad Sci U S A10413170-13175. Mantini D, Perrucci MG, et al. 2007. Electrophysiological signatures of resting state networks in the human brain. Proc Natl Acad Sci U S A 104:13170–13175.
Michel CMMurray MM et al.2004. EEG source imagingClin Neurophysiol1152195-2222. Michel CM, Murray MM, et al. 2004. EEG source imaging. Clin Neurophysiol 115:2195–2222.
Mijovic BDe Vos M et al.2010. Source separation from single-channel recordings by combining empirical-mode decomposition and independent component analysisIEEE Trans Biomed Eng572188-2196. Mijovic B, De Vos M, et al. 2010. Source separation from single-channel recordings by combining empirical-mode decomposition and independent component analysis. IEEE Trans Biomed Eng 57:2188–2196.
Moles CGMendes P et al.2003. Parameter estimation in biochemical pathways: a comparison of global optimization methodsGenome Res132467-2474. Moles CG, Mendes P, et al. 2003. Parameter estimation in biochemical pathways: a comparison of global optimization methods. Genome Res 13:2467–2474.
Moosmann MRitter P et al.2003. Correlates of alpha rhythm in functional magnetic resonance imaging and near infrared spectroscopyNeuroimage20145-158. Moosmann M, Ritter P, et al. 2003. Correlates of alpha rhythm in functional magnetic resonance imaging and near infrared spectroscopy. Neuroimage 20:145–158.
Mueen A, Keogh E, et al. 2009. Exact discovery of time series motifs. In: Proceedings of the 2009 SIAM International Conference on Data Mining (SDM).
Nunez PL. 1974. The brain wave equation: a model for the EEG. Mathematical Biosciences 21:279–297.
Nunez PL, Silberstein RB. 2000. On the relationship of synaptic activity to macroscopic measurements: does co-registration of EEG with fMRI make sense? Brain Topogr 13:79–96.
Nunez PL, Srinivasan R. 2006. Electric Fields of the Brain: The Neurophysics of EEG. New York: Oxford University Press.
Ohlsson H. 2008. Regression on manifolds with implications for system identification. Linköping University Electronic Press.
Ohlsson H, Ljung L. 2009. Gray-box identification for high-dimensional manifold constrained regression. In: Proceedings of the 15th IFAC Symposium on System Identification (SYSID).
Ohlsson H, Roll J, et al. 2007. Using manifold learning for nonlinear system identification. In: Proceedings of the 7th IFAC Symposium on Nonlinear Control Systems (NOLCOS), Pretoria, South Africa.
Oostendorp TF, Delbeke J, et al. 2000. The conductivity of the human skull: results of in vivo and in vitro measurements. IEEE Trans Biomed Eng 47:1487–1492.
Pascual-Marqui RD. 1999. Review of methods for solving the EEG inverse problem. Int J Bioelectromagn 1:75–86.
Perdikis D, Huys R, et al. 2011. Time scale hierarchies in the functional organization of complex behaviors. PLoS Comput Biol 7:e1002198.
Petzold GC, Albeanu DF, et al. 2008. Coupling of neural activity to blood flow in olfactory glomeruli is mediated by astrocytic pathways. Neuron 58:897–910.
Plummer C, Harvey AS, et al. 2008. EEG source localization in focal epilepsy: where are we now? Epilepsia 49:201–218.
Prinz AA, Billimoria CP, et al. 2003. Alternative to hand-tuning conductance-based models: construction and analysis of databases of model neurons. J Neurophysiol 90:3998–4015.
Purdon P, Lamus C, et al. 2010. A state space approach to multimodal integration of simultaneously recorded EEG and fMRI. In: Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Dallas, Texas, USA.
Quach M, Brunel N, et al. 2007. Estimating parameters and hidden variables in non-linear state-space models based on ODEs for biological networks inference. Bioinformatics 23:3209–3216.
Raichle ME, MacLeod AM, et al. 2001. A default mode of brain function. Proc Natl Acad Sci U S A 98:676–682.
Reinacher M, Becker R, et al. 2009. Oscillatory brain states interact with late cognitive components of the somatosensory evoked potential. J Neurosci Methods.
Riera JJ, Sumiyoshi A. 2010. Brain oscillations: ideal scenery to understand the neurovascular coupling. Curr Opin Neurol 23:374–381.
Ritter P, Becker R. 2009. Detecting alpha rhythm phase reset by phase sorting: caveats to consider. Neuroimage 47:1–4.
Ritter P, Becker R, et al. 2007. Evaluating gradient artifact correction of EEG data acquired simultaneously with fMRI. Magn Reson Imaging 25:923–932.
Ritter P, Becker R, et al. 2010. EEG quality: the image acquisition artefact. In: Mulert C, Lemieux L (eds) EEG-fMRI Physiology, Technique and Application. Heidelberg: Springer; pp. 153–171.
Ritter P, Freyer F, et al. 2008. High-frequency (600 Hz) population spikes in human EEG delineate thalamic and cortical fMRI activation sites. Neuroimage 42:483–490.
Ritter P, Moosmann M, et al. 2009. Rolandic alpha and beta EEG rhythms' strengths are inversely related to fMRI-BOLD signal in primary somatosensory and motor cortex. Hum Brain Mapp 30:1168–1187.
Ritter P, Villringer A. 2006. Simultaneous EEG-fMRI. Neurosci Biobehav Rev 30:823–838.
Ritter P, Villringer A, et al. 2002. Inhibition and functional magnetic resonance imaging. In: Tomita M, Kanno I, Hamel E (eds) Brain Activation and CBF Control. Elsevier; pp. 213–222.
Robinson PA, Rennie CJ, et al. 1997. Propagation and stability of waves of electrical activity in the cerebral cortex. Physical Review E 56:826.
Roweis ST, Saul LK. 2000. Nonlinear dimensionality reduction by locally linear embedding. Science 290:2323–2326.
Scherg M, Berg P. 1991. Use of prior knowledge in brain electromagnetic source analysis. Brain Topogr 4:143–150.
Scherg M, Von Cramon D. 1986. Evoked dipole source potentials of the human auditory cortex. Electroencephalogr Clin Neurophysiol 65:344–360.
Schölkopf B, Smola A, et al. 1998. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput 10:1299–1319.
Schulz M, Chau W, et al. 2004. An integrative MEG-fMRI study of the primary somatosensory cortex using cross-modal correspondence analysis. Neuroimage 22:120–133.
Simon D, Chia TL. 2002. Kalman filtering with state equality constraints. IEEE Trans Aerosp Electron Syst 38:128–136.
Simon D, Simon DL. 2003. Aircraft turbofan engine health estimation using constrained Kalman filtering. ASME Journal of Engineering for Gas Turbine and Power 127:323–328.
Sotero RC, Trujillo-Barreto NJ. 2008. Biophysical model for integrating neuronal activity, EEG, fMRI and metabolism. Neuroimage 39:290–309.
Spall JC. 2003. Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. Hoboken, New Jersey: Wiley-Interscience.
Stefanescu RA, Jirsa VK. 2008. A low dimensional description of globally coupled heterogeneous neural networks of excitatory and inhibitory neurons. PLoS Comput Biol 4:e1000219.
Stefanescu RA, Jirsa VK. 2011. Reduced representations of heterogeneous mixed neural networks with synaptic coupling. Phys Rev E Stat Nonlin Soft Matter Phys 83(2 Pt 2):026204.
Stern JM, Engel J Jr. 2004. An Atlas of EEG Patterns. Philadelphia: Lippincott Williams & Wilkins.
Sun X, Jin L, et al. 2008. Extended Kalman filter for estimation of parameters in nonlinear state-space models of biochemical networks. PLoS One 3:e3758.
Teh YW, Roweis S. 2002. Automatic alignment of local representations. Adv Neural Inf Process Syst 15:841–848.
Tenenbaum JB. 1998. Mapping a manifold of perceptual observations. In: Advances in Neural Information Processing Systems 10. MIT Press; pp. 682–688.
Valdes-Sosa PA, Sanchez-Bornot JM, et al. 2009. Model driven EEG/fMRI fusion of brain oscillations. Hum Brain Mapp 30:2701–2721.
Van der Maaten L. 2007. An introduction to dimensionality reduction using Matlab. Report 1201:07–07.
Van der Maaten L, Postma E, et al. 2009. Dimensionality reduction: a comparative review. J Mach Learn Res 10:1–41.
Van Essen DC, Dierker DL. 2007. Surface-based and probabilistic atlases of primate cerebral cortex. Neuron 56:209–225.
Van Essen DC, Drury HA, et al. 2001. An integrated software suite for surface-based analyses of cerebral cortex. J Am Med Inform Assoc 8:443–459.
Verbeek J. 2006. Learning nonlinear image manifolds by global alignment of local linear models. IEEE Trans Pattern Anal Mach Intell 28:1236–1250.
Wang Z, Liu X, et al. 2009. An extended Kalman filtering approach to modeling nonlinear dynamic gene regulatory networks via short gene expression time series. IEEE/ACM Trans Comput Biol Bioinform 6:410–419.
Weinberger KQ, Sha F, et al. 2004. Learning a kernel matrix for nonlinear dimensionality reduction. In: Proceedings of the Twenty-First International Conference on Machine Learning (ICML), Banff, Canada. ACM.
Wilson HR, Cowan JD. 1972. Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal 12:1–24.
Wobst P, Wenzel R, et al. 2001. Linear aspects of changes in deoxygenated hemoglobin concentration and cytochrome oxidase oxidation during brain activation. Neuroimage 13:520–530.
Zalesky A, Fornito A. 2009. A DTI-derived measure of cortico-cortical connectivity. IEEE Trans Med Imaging 28:1023–1036.
Zhang Z, Zha H. 2004. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. J Shanghai Univ (English Edition) 8:406–424.
Zibulevsky M, Pearlmutter BA. 2001. Blind source separation by sparse decomposition in a signal dictionary. Neural Comput 13:863–882.

Information

Published In

Brain Connectivity, Volume 3, Issue 2, 2013
Pages: 121–145
PubMed: 23442172

History

Published online: 23 April 2013
Published in print: 2013
Published ahead of production: 26 February 2013

Authors

Affiliations

Petra Ritter
Minerva Research Group Brain Modes, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
Bernstein Focus State Dependencies of Learning & Bernstein Center for Computational Neuroscience, Berlin, Germany.
Department of Neurology, Charité—University Medicine, Berlin, Germany.
Berlin School of Mind and Brain & Mind and Brain Institute, Humboldt University, Berlin, Germany.
Michael Schirner
Bernstein Focus State Dependencies of Learning & Bernstein Center for Computational Neuroscience, Berlin, Germany.
Department of Neurology, Charité—University Medicine, Berlin, Germany.
Anthony R. McIntosh
Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Canada.
Viktor K. Jirsa
Institut de Neurosciences des Systèmes UMR INSERM 1106, Aix-Marseille Université Faculté de Médecine, Marseille, France.

Notes

Address correspondence to: Petra Ritter, Department of Neurology, Charité Universitaetsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany
E-mail: [email protected]

Author Disclosure Statement

No competing financial interests exist.
