Professional Education


  • Ph.D., UC Berkeley, Theoretical Physics (2004)
  • M.A., UC Berkeley, Mathematics (2004)
  • M.Eng., MIT, Electrical Engineering and Computer Science (1998)
  • B.S., MIT, Mathematics (1998)
  • B.S., MIT, Physics (1998)
  • B.S., MIT, Electrical Engineering and Computer Science (1998)

Research & Scholarship

Current Research and Scholarly Interests


Theoretical / computational neuroscience

Publications

Journal Articles


  • Investigating the role of firing-rate normalization and dimensionality reduction in brain-machine interface robustness. Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) Kao, J. C., Nuyujukian, P., Stavisky, S., Ryu, S. I., Ganguli, S., Shenoy, K. V. 2013; 2013: 293-298

    Abstract

    The intraday robustness of brain-machine interfaces (BMIs) is important to their clinical viability. In particular, BMIs must be robust to intraday perturbations in neuron firing rates, which may arise from several factors including recording loss and external noise. Using a state-of-the-art decode algorithm, the Recalibrated Feedback Intention Trained Kalman filter (ReFIT-KF) [1], we introduce two novel modifications: (1) a normalization of the firing rates, and (2) a reduction of the dimensionality of the data via principal component analysis (PCA). We demonstrate in online studies that a ReFIT-KF equipped with normalization and PCA (NPC-ReFIT-KF) (1) achieves comparable performance to a standard ReFIT-KF when at least 60% of the neural variance is captured, and (2) is more robust to the undetected loss of channels. We present intuition as to how both modifications may increase the robustness of BMIs, and investigate the contribution of each modification to robustness. These advances, which lead to a decoder achieving state-of-the-art performance with improved robustness, are important for the clinical viability of BMI systems.

    View details for DOI 10.1109/EMBC.2013.6609495

    View details for PubMedID 24109682
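
A minimal sketch of the two preprocessing steps named in the abstract above, on synthetic data (all sizes, the 60% threshold's use here, and variable names are illustrative, not the authors' implementation):

```python
import numpy as np

# Hypothetical firing-rate matrix: trials x channels.
rng = np.random.default_rng(0)
rates = rng.poisson(lam=5.0, size=(200, 96)).astype(float)

# (1) Normalize each channel's firing rate (z-score across trials).
mu, sd = rates.mean(axis=0), rates.std(axis=0)
sd[sd == 0] = 1.0  # guard against silent channels
z = (rates - mu) / sd

# (2) Reduce dimensionality with PCA, keeping just enough components to
# capture at least 60% of the neural variance (the threshold reported above).
zc = z - z.mean(axis=0)
_, s, vt = np.linalg.svd(zc, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(explained, 0.60)) + 1
low_dim = zc @ vt[:k].T  # trials x k; such scores would feed the Kalman filter
```

The appeal of the low-dimensional representation for robustness is that losing one channel perturbs each principal component only a little, rather than zeroing out an entire input to the decoder.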

  • A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models FRONTIERS IN NEURAL CIRCUITS Hanuschkin, A., Ganguli, S., Hahnloser, R. H. 2013; 7

    Abstract

    Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, by allowing the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including the necessity of sensorimotor gating and the selectivity of auditory responses to the bird's own song (BOS) stimuli.

    View details for DOI 10.3389/fncir.2013.00106

    View details for Web of Science ID 000320922000001

    View details for PubMedID 23801941
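
A toy sketch of the idea in the abstract above, under strong simplifying assumptions not in the paper: the "vocal apparatus" is a fixed orthogonal linear map G from motor to sensory activity arriving after a loop delay tau, the eligibility trace is just a delayed copy of the motor activity, and learning is a running Hebbian average (no heterosynaptic competition needed in this linear case):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
n, tau, eta, T = 8, 5, 0.001, 20000
G, _ = np.linalg.qr(rng.standard_normal((n, n)))  # motor -> sensory "plant"
W = np.zeros((n, n))                              # sensory -> motor weights
buf = deque([np.zeros(n)] * tau, maxlen=tau)      # motor commands in flight

for _ in range(T):
    m = rng.standard_normal(n)   # random (non-stereotyped) motor exploration
    m_delayed = buf[0]           # eligibility trace: motor activity tau steps ago
    buf.append(m)
    s = G @ m_delayed            # delayed sensory consequence of the past command
    # Hebbian rule pairing the eligibility trace with the current sensory input.
    W += eta * (np.outer(m_delayed, s) - W)

# W converges to G^T = G^{-1}: a causal inverse model. Driving motor neurons
# through W during pure listening reproduces the command that caused the
# sound, which is the mirrored response.
```

The random motor code is what makes the learned correlation matrix invertible into a causal inverse; a stereotyped exploration sequence would instead imprint its own temporal structure on W, giving the predictive inverse the abstract describes.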

  • Statistical mechanics of complex neural systems and high dimensional data JOURNAL OF STATISTICAL MECHANICS-THEORY AND EXPERIMENT Advani, M., Lahiri, S., Ganguli, S. 2013
  • Spatial Information Outflow from the Hippocampal Circuit: Distributed Spatial Coding and Phase Precession in the Subiculum JOURNAL OF NEUROSCIENCE Kim, S. M., Ganguli, S., Frank, L. M. 2012; 32 (34): 11539-11558

    Abstract

    Hippocampal place cells convey spatial information through a combination of spatially selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit.

    View details for DOI 10.1523/JNEUROSCI.5942-11.2012

    View details for Web of Science ID 000308140500004

    View details for PubMedID 22915100

  • Compressed Sensing, Sparsity, and Dimensionality in Neuronal Information Processing and Data Analysis ANNUAL REVIEW OF NEUROSCIENCE Ganguli, S., Sompolinsky, H. 2012; 35: 485-508

    Abstract

    The curse of dimensionality poses severe challenges to both technical and conceptual progress in neuroscience. In particular, it plagues our ability to acquire, process, and model high-dimensional data sets. Moreover, neural systems must cope with the challenge of processing data in high dimensions to learn and operate successfully within a complex world. We review recent mathematical advances that provide ways to combat dimensionality in specific situations. These advances shed light on two dual questions in neuroscience. First, how can we as neuroscientists rapidly acquire high-dimensional data from the brain and subsequently extract meaningful models from limited amounts of these data? And second, how do brains themselves process information in their intrinsically high-dimensional patterns of neural activity as well as learn meaningful, generalizable models of the external world from limited experience?

    View details for DOI 10.1146/annurev-neuro-062111-150410

    View details for Web of Science ID 000307960400024

    View details for PubMedID 22483042
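
A toy sketch of the sparse-recovery setting this review covers: a k-sparse signal in n dimensions recovered from m << n random linear measurements, here via orthogonal matching pursuit (one standard recovery algorithm; the sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 100, 40, 4
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x                                     # m noiseless measurements

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit by least squares on the chosen columns.
residual, chosen = y.copy(), []
for _ in range(k):
    chosen.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
    residual = y - A[:, chosen] @ coef
x_hat = np.zeros(n)
x_hat[chosen] = coef
# With m of order k log n, x_hat typically matches x exactly.
```

This is the "combating dimensionality" point in miniature: although the system of equations y = A x is badly underdetermined, sparsity plus incoherent random measurements makes the recovery well posed.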

  • Feedforward to the Past: The Relation between Neuronal Connectivity, Amplification, and Short-Term Memory NEURON Ganguli, S., Latham, P. 2009; 61 (4): 499-501

    Abstract

    Two studies in this issue of Neuron challenge widely held assumptions about the role of positive feedback in recurrent neuronal networks. Goldman shows that such feedback is not necessary for memory maintenance in a neural integrator, and Murphy and Miller show that it is not necessary for amplification of orientation patterns in V1. Both suggest that seemingly recurrent networks can be feedforward in disguise.

    View details for DOI 10.1016/j.neuron.2009.02.006

    View details for Web of Science ID 000263816300004

    View details for PubMedID 19249270

  • Memory traces in dynamical systems PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA Ganguli, S., Huh, D., Sompolinsky, H. 2008; 105 (48): 18970-18975

    Abstract

    To perform nontrivial, real-time computations on a sensory input stream, biological systems must retain a short-term memory trace of their recent inputs. It has been proposed that generic high-dimensional dynamical systems could retain a memory trace for past inputs in their current state. This raises important questions about the fundamental limits of such memory traces and the properties required of dynamical systems to achieve these limits. We address these issues by applying Fisher information theory to dynamical systems driven by time-dependent signals corrupted by noise. We introduce the Fisher Memory Curve (FMC) as a measure of the signal-to-noise ratio (SNR) embedded in the dynamical state relative to the input SNR. The integrated FMC indicates the total memory capacity. We apply this theory to linear neuronal networks and show that the capacity of networks with normal connectivity matrices is exactly 1 and that of any network of N neurons is, at most, N. A nonnormal network achieving this bound is subject to stringent design constraints: It must have a hidden feedforward architecture that superlinearly amplifies its input for a time of order N, and the input connectivity must optimally match this architecture. The memory capacity of networks subject to saturating nonlinearities is further limited, and cannot exceed √N. This limit can be realized by feedforward structures with divergent fan out that distributes the signal across neurons, thereby avoiding saturation. We illustrate the generality of the theory by showing that memory in fluid systems can be sustained by transient nonnormal amplification due to convective instability or the onset of turbulence.

    View details for DOI 10.1073/pnas.0804451105

    View details for Web of Science ID 000261489100065

    View details for PubMedID 19020074
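
The capacity-1 result for normal connectivity can be checked numerically. A minimal sketch, using a small symmetric (hence normal) network and unit isotropic noise; the network size, spectral radius, and truncation depth are arbitrary choices:

```python
import numpy as np

def fisher_memory_curve(W, v, kmax=400):
    """J(k) for the linear network x(t+1) = W x(t) + v s(t) + noise,
    with unit isotropic noise; truncates the infinite sums at kmax."""
    n = W.shape[0]
    C = np.zeros((n, n))        # noise covariance C = sum_k W^k (W^k)^T
    M = np.eye(n)
    for _ in range(kmax):
        C += M @ M.T
        M = W @ M
    Cinv = np.linalg.inv(C)
    u, J = v.copy(), []
    for _ in range(kmax):
        J.append(u @ Cinv @ u)  # J(k) = (W^k v)^T C^{-1} (W^k v) for symmetric W
        u = W @ u
    return np.array(J)

rng = np.random.default_rng(3)
S = rng.standard_normal((5, 5)); S = (S + S.T) / 2
W = 0.8 * S / np.max(np.abs(np.linalg.eigvalsh(S)))  # normal, spectral radius 0.8
v = rng.standard_normal(5); v /= np.linalg.norm(v)   # unit-norm input weights
J = fisher_memory_curve(W, v)
total = J.sum()  # the memory capacity: 1 for any normal W and unit-norm v
```

Varying the eigenvalues or the input direction v redistributes J(k) across delays but leaves the integrated curve pinned at 1, which is the paper's point: normal networks can only trade memory depth against fidelity, never gain total capacity.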

  • One-dimensional dynamics of attention and decision making in LIP NEURON Ganguli, S., Bisley, J. W., Roitman, J. D., Shadlen, M. N., Goldberg, M. E., Miller, K. D. 2008; 58 (1): 15-25

    Abstract

    Where we allocate our visual spatial attention depends upon a continual competition between internally generated goals and external distractions. Recently it was shown that single neurons in the macaque lateral intraparietal area (LIP) can predict the amount of time a distractor can shift the locus of spatial attention away from a goal. We propose that this remarkable dynamical correspondence between single neurons and attention can be explained by a network model in which generically high-dimensional firing-rate vectors rapidly decay to a single mode. We find direct experimental evidence for this model, not only in the original attentional task, but also in a very different task involving perceptual decision making. These results confirm a theoretical prediction that slowly varying activity patterns are proportional to spontaneous activity, pose constraints on models of persistent activity, and suggest a network mechanism for the emergence of robust behavioral timing from heterogeneous neuronal populations.

    View details for DOI 10.1016/j.neuron.2008.01.038

    View details for Web of Science ID 000254946200006

    View details for PubMedID 18400159
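
A toy illustration of the mechanism proposed in the abstract above: in a linear recurrent network with one slow mode, generic high-dimensional firing-rate vectors rapidly collapse onto that mode, so the late dynamics are effectively one-dimensional (the decay rates and network size here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
u = rng.standard_normal(n); u /= np.linalg.norm(u)  # the single slow mode
P = np.outer(u, u)
# Slow decay (0.95) along u, fast decay (0.30) on the orthogonal complement.
W = 0.95 * P + 0.30 * (np.eye(n) - P)

x = rng.standard_normal(n)  # an arbitrary high-dimensional firing pattern
for _ in range(20):
    x = W @ x               # after a few time constants, only the slow mode survives

overlap = abs(u @ x) / np.linalg.norm(x)  # alignment of activity with u
```

After the transient, every trajectory is proportional to u regardless of its starting point, which is the sense in which heterogeneous populations can produce a single, robustly timed behavioral variable.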

  • Function constrains network architecture and dynamics: A case study on the yeast cell cycle Boolean network PHYSICAL REVIEW E Lau, K., Ganguli, S., Tang, C. 2007; 75 (5)

    Abstract

    We develop a general method to explore how the function performed by a biological network can constrain both its structural and dynamical network properties. This approach is orthogonal to prior studies which examine the functional consequences of a given structural feature, for example a scale free architecture. A key step is to construct an algorithm that allows us to efficiently sample from a maximum entropy distribution on the space of Boolean dynamical networks constrained to perform a specific function, or cascade of gene expression. Such a distribution can act as a "functional null model" to test the significance of any given network feature, and can aid in revealing underlying evolutionary selection pressures on various network properties. Although our methods are general, we illustrate them in an analysis of the yeast cell cycle cascade. This analysis uncovers strong constraints on the architecture of the cell cycle regulatory network as well as significant selection pressures on this network to maintain ordered and convergent dynamics, possibly at the expense of sacrificing robustness to structural perturbations.

    View details for DOI 10.1103/PhysRevE.75.051907

    View details for Web of Science ID 000246890100094

    View details for PubMedID 17677098
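
A sketch of the class of models analyzed above: a synchronous threshold Boolean network, iterated until its state repeats (i.e., an attractor is reached). The wiring here is random with 11 nodes for flavor only; it is not the actual yeast cell-cycle network, and the maximum-entropy sampling machinery of the paper is not shown:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 11
# Signed regulatory links: +1 activating, -1 inhibiting, 0 absent.
Wb = rng.choice([-1, 0, 1], size=(n, n), p=[0.2, 0.6, 0.2])

def step(state):
    h = Wb @ state
    nxt = state.copy()   # ties (h == 0) keep their previous value
    nxt[h > 0] = 1
    nxt[h < 0] = 0
    return nxt

def trajectory(state):
    seen, path = set(), []
    while tuple(state) not in seen:
        seen.add(tuple(state))
        path.append(state)
        state = step(state)
    return path          # ends when a state repeats (attractor hit)

# Distribution of transient lengths from random initial conditions.
lengths = [len(trajectory(rng.integers(0, 2, size=n))) for _ in range(50)]
```

Statistics such as these transient lengths, or the fraction of initial states converging to one attractor, are the kind of dynamical properties the paper compares between function-constrained ensembles and unconstrained null models.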

  • E10 Orbifolds JOURNAL OF HIGH ENERGY PHYSICS Brown, J., Ganguli, S., Ganor, O., Helfgott, C. 2005; 06 (057)
  • Twisted six dimensional gauge theories on tori, matrix models, and integrable systems JOURNAL OF HIGH ENERGY PHYSICS Ganguli, S., Ganor, O. J., Gill, J. 2004
  • Holographic protection of chronology in universes of the Gödel type PHYSICAL REVIEW D Boyda, E. K., Ganguli, S., Horava, P., Varadarajan, U. 2003; 67 (10)

Books and Book Chapters


  • Vocal learning with inverse models Principles of Neural Coding Hahnloser, R., Ganguli, S. CRC Press. 2013

Conference Proceedings


  • A memory frontier for complex synapses Neural Information Processing Systems (NIPS) Lahiri, S., Ganguli, S. 2013
  • Learning hierarchical category structure in deep neural networks Proceedings of the Cognitive Science Society Saxe, A., McClelland, J., Ganguli, S. 2013: 1271-1276
  • Short-term memory in neuronal networks through dynamical compressed sensing Neural Information Processing Systems (NIPS) Ganguli, S., Sompolinsky, H. 2010
