NSF Report of the Workshop on:

Synergistic
Neuroscience and Mathematics/Physics/Engineering
Approaches for Understanding
Information Processing in
Biological and Artificial Intelligent Systems

(Held on: April 8-10, 1996; Arlington, VA)

Report Date: August 15, 1996

Any opinions, findings, conclusions, or recommendations expressed in this report are those of the participants.

Preface

On April 8-10, 1996, NSF organized the workshop on:

Synergistic Neuroscience and Mathematics/Physics/Engineering Approaches for Understanding

INFORMATION PROCESSING IN BIOLOGICAL AND ARTIFICIAL INTELLIGENT SYSTEMS

to bring together neuroscientists, physicists, mathematicians, and researchers in AI, Control Theory, and Bioengineering to discuss new multidisciplinary research opportunities in modeling and understanding information processing in biological systems, including the brain, both to advance the study of biological systems per se and to apply such knowledge to artificial intelligent systems.

Neural networks have been an effective approach to deriving a formal understanding of information processing in neurobiological systems, and simulation methods are gaining wide acceptance. In addition, physicists, mathematicians, control engineers, and neurobiologists are exploring methods from mathematics and physics to understand neurobiological systems. It is recognized that more work along these lines would be desirable.

Theoretical physicists working in nuclear, high energy, and condensed matter physics have developed an array of methods and tools to model and understand complex systems with many degrees of freedom and complex state and phase spaces. To deal with such systems, physicists have developed and applied a number of approaches: modeling, statistical and thermodynamic methods, scaling, and renormalization, and have used collective states and effective interactions to reduce the complexity of the theoretical models and solve energy minimization problems. Numerical analysis, probability and statistics, topology, optimization, group theory, and graph and network theory are some of the mathematical tools that also are used. Likewise, control theory researchers have developed analysis and design tools for the optimization of complex dynamic systems. The question is to what extent such methods can be of use for neurobiosystems modeling.

It has been suggested that the neurobiological community needs to develop a "new culture," with more collaboration between theoreticians and experimentalists. There seems to be a strong belief that theorists need practical applications and practitioners need theory: experimentalists need help in interpreting the data they collect, and theorists need closer ties with applications.

In addition to building on current successes of single-cell modeling, there is a need for collaboration in developing models that can help us understand how large populations of neurons interact and how information is processed. Together with simulation approaches, there is a need to support more development of higher level models, including systems models, and to help the community appreciate what such models offer beyond simulation. While simulations are very useful and provide a complementary approach to understanding system behavior, the absence of higher level models is a hindering factor. An analogous case is atmospheric modeling, where successful simulations are based on macroscopic models rather than molecular level models of the atmosphere. There is the potential of achieving more rapid progress by bringing together groups that have the experience to deal with such approaches.

At the same time, emphasis on theory should not be to the exclusion of experiment. Among other areas with expressed need for collaborative work is the development of algorithms for analysis of the data; for example, recent advances in optical imaging systems and multi-electrode recorders, capable of recording 10,000 signals, have significantly increased the ability to monitor signals in biological systems and also created the need for new tools to analyze these data. In addition there is an expressed need for means for sharing algorithms used to analyze the data and means for sharing the data.

Biologists are rich in data; mathematicians, physicists, and engineers are rich in methods and tools. Under the current mode of funding, such collaborations are not adequately fostered. The existing experience from the involvement of physicists, mathematicians, systems control theorists, and neurobiologists in multidisciplinary neurobiosystems research can be used to assess the potential and the needs of furthering this type of collaboration. There is also a need to determine what educational curricula need to be developed to encourage and enable a new generation of students to acquire the cross-disciplinary training that would facilitate such multidisciplinary collaborations.

The workshop took place in Arlington, VA, on April 8-10, 1996, and brought together scientists and engineers who discussed the problems, the needs, and the possibilities of such multidisciplinary research and education. This report summarizes those discussions.

Workshop co-chairs
Michael Arbib, University of Southern California
John Hopfield, Caltech

Working Group Chairs:
Stephen P. DeWeerth, Georgia Tech
Larry Sirovich, Rockefeller University
Ralph Linsker, T. J. Watson IBM Research Center

Workshop Steering Committee
Michael Arbib, University of Southern California
Fred Delcomyn, University of Illinois, Urbana-Champaign
John Hopfield, Caltech
K. S. Narendra, Yale University
Karen Sigvardt, University of California, Davis
Larry Sirovich, Rockefeller University
Peter Wolynes, University of Illinois Urbana-Champaign

NSF Coordinating Committee
Frederica Darema, CISE Directorate, Chair
Radhakishan Baheti, ENG Directorate
John Cherniavski, CISE Directorate
W. Otto Friesen, BIO Directorate
Larry Reeker, CISE Directorate
Bruce Umminger, BIO Directorate
Paul Werbos, ENG Directorate


Table of Contents

1. Types of Interdisciplinary Linkage
2. The Impact of Computing on the Modeling of Neural Systems
3. The analysis of the interaction among network dynamics at multiple time scales
4. The Scope of the Working Groups
4.1. Small Systems
4.2. Dynamics of the Cortex
4.3. Large Populations of Neurons
5. Opportunities for Cross-Disciplinary Interactions
5.1. Small Systems
5.1.1. The effect of external influences on the function and organization of networks
5.1.2. Real-time hardware implementations of neuromorphic sensorimotor systems
5.2. Dynamics of the Cortex
5.2.1. Tools for Data Analysis
5.2.2. Tools for Multiple Neuron Recording
5.2.3. Neural Coding and Representation
5.3. Large Populations of Neurons
5.3.1. New approaches to parallel distributed processing (PDP)
5.3.2. Neural representations
5.3.3. New schemes for feature discovery and learning
5.3.4. Signal and noise in neural systems
5.3.5. Plasticity and Self-Organization
6. Education
7. Methods for Stimulating Interdisciplinary Collaboration
Appendix 1: Agenda
Appendix 2: Participants
Appendix 3: '1-page thoughts' (pre-workshop)
Appendix 4: '1-page thoughts' (post-workshop)

Executive Summary

The workshop organized by NSF on "Information Processing in Biological and Artificial Intelligent Systems" took place at the Holiday Inn Ballston, adjacent to NSF, on April 8-10, 1996. The workshop discussed strategies for integrating biological and physical science/engineering approaches to neurobiological systems. It brought together neuroscientists, physicists, mathematicians, and researchers in AI, Control Theory, and Bioengineering (see Appendix 2 for the list of participants) to discuss multidisciplinary research opportunities in modeling and in understanding information processing in the brain. Neural networks have been an effective approach to deriving a formal understanding of some aspects of information processing in neurobiological systems, and a number of physicists, mathematicians, control engineers, and neurobiologists are exploring methods from mathematics and physics to understand neurobiological systems, thus laying the foundations for the field of Neurobio-Informatics Science and Engineering (NISE).

The consensus from the Workshop was that such methods can be of increasing use for neurobiosystems modeling, but that the neurobiological community needs to develop a "new culture", with greater collaboration between theoreticians and experimentalists. There is great potential for achieving more rapid progress by encouraging the formation of groups that have the multi-disciplinary experience to deal with such approaches. At the same time, emphasis on theory should not be to the exclusion of experiment. Another opportunity for collaborative work is the development of algorithms for analysis of the data; for example, recent advances in optical imaging systems and multi-electrode recorders, capable of recording 10,000 signals, have significantly increased the ability to monitor signals in biological systems but also created the need for new tools to analyze these data. In addition there is an expressed need for means for sharing data and the algorithms used to analyze them.

Biology is rich in data; physicists and engineers are rich in methods and tools. Under the current mode of funding, such collaborations are not adequately fostered. The Workshop analyzed the experience of physicists, mathematicians, control theorists, and neurobiologists in multidisciplinary neurobiosystems research to date in order to suggest effective means for extending this type of collaboration, and to suggest criteria for educational curricula that would encourage and enable a new generation of students to acquire the training needed for such multidisciplinary collaborations.

The Workshop combined a few plenary presentations, which laid out the issues and posed the questions (see Appendix 1 for the full agenda), with multiple sessions in which each of the three Working Groups debated the issues in depth. We wish to record our debt to the Chairs of the three Working Groups: Stephen P. DeWeerth, Chair, Working Group on Small Systems; Larry Sirovich, Chair, Working Group on the Dynamics of Cerebral Cortex; and Ralph Linsker, Chair, Working Group on Large Populations of Neurons, whose leadership contributed much to the fruitfulness of the Workshop, and whose written reports contributed much to the text of this overview.

Michael A. Arbib
John Hopfield
Frederica Darema

August 15, 1996


1. Types of Interdisciplinary Linkage

By the nature of their goals and key problems, interdisciplinary linkages exist between neurobiology on the one hand, and mathematics, physics, and engineering ("MPE") on the other, at several levels:

Goals and subject matter that are common to both fields: Neural mediation of behavior requires exploiting regularities present in the environment, learning some of these regularities by experience, predicting and inferring environmental states, and generating effective motor outputs. These tasks are important components of the MPE disciplines of control theory, statistical inference, pattern recognition, learning and generalization theory, information theory, and robotics. Interplay at this level is of great value in both directions. Practical goals, such as a face recognizer, a method for separating interfering acoustic sources, or a semiautonomous robot, drive invention. The solutions invented by workers who are also cognizant of the relevant neurobiology will, in some cases, be "biologically plausible"; that is, the style of the solution will be compatible with known or likely biological mechanisms. Such solutions can inspire a search for mechanisms that may serve a similar or analogous function in neurobiology. Conversely, knowledge of what is accomplished by biological neural systems provides at least an "existence proof," and sometimes much more detailed guidance, concerning how to build an artificial system having desired properties.
Results of MPE research on more abstract problems, or on conceptually related problems in different domains, can provide insights for neurobiology. Some examples:
Optimization theory: Relationships between environmental state and appropriate behavior can often be cast as problems of optimization of resources subject to costs and constraints. Directly relevant MPE disciplines include: optimization theory (numerical methods, recent work on "rugged landscapes"), the physics of spin glasses, and protein folding regarded as a constrained optimization problem.

Statistical mechanics: In systems comprising many components, there may be important aspects of behavior that are governed by a small number of statistical properties of the system. It may also be possible to describe the system's dynamics in a "scale-invariant" manner, so that the description of the system is reduced to that of a much smaller system. These types of reduction underlie statistical mechanics and the subfield of renormalization methods and critical phenomena. There are several reasons to expect that our understanding of neural processing may benefit from insights in statistical physics: (i) There are already significant links between statistical mechanics and the field of statistical inference. (ii) Multiscale structures, having scale-invariant properties, are present in visual and auditory natural input. (iii) An important theme in learning theory is the reduction of dimensionality of the space of input patterns by exploiting statistical properties of the input ensemble.

Dynamical systems theory: Nonlinear mathematical and physical systems, even those described by only a few variables, can exhibit strikingly complex (including chaotic) properties under certain conditions. It has been proposed that the dynamics of chaotic systems play a key role in neural information processing. While this remains an open question, methods from dynamical systems theory are being fruitfully used to characterize and analyze activity patterns in neural populations. Another important challenge to understanding neurobiological systems is the identification of the biological parameters that are critically important to the operation of a given system. This identification is necessary for appropriate parameter reductions so that representations can be abstracted and models can be simplified. The application of dynamical systems theory and bifurcation analysis provides methods for identifying parameters that are crucial in either stabilizing or sensitively altering the behavior of a system. These methods can also identify parameters to which the system is insensitive. Many biologists, however, are unaware of the power of such mathematical techniques. The interaction between neuroscientists and mathematicians would facilitate the use of these powerful parameter distillation techniques in neural modeling efforts, ultimately providing much more efficient models of a variety of complex biological systems.
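
To make the bifurcation-analysis point concrete, the following minimal sketch (Python with NumPy; the parameter values are illustrative assumptions, not drawn from any particular preparation) scans one parameter of the FitzHugh-Nagumo equations, a standard two-variable caricature of an excitable neuron. The jump from near-zero to finite post-transient oscillation amplitude as the injected current increases locates a Hopf-like bifurcation, i.e., a parameter to which the system's behavior is critically sensitive.

    import numpy as np

    def simulate_v(I, a=0.7, b=0.8, eps=0.08, dt=0.05, steps=40_000):
        """Forward-Euler integration of the FitzHugh-Nagumo equations."""
        v, w = -1.0, -0.5
        trace = np.empty(steps)
        for t in range(steps):
            dv = v - v**3 / 3.0 - w + I          # fast (voltage-like) variable
            dw = eps * (v + a - b * w)           # slow (recovery) variable
            v, w = v + dt * dv, w + dt * dw
            trace[t] = v
        return trace

    # Scan the injected current and record the post-transient amplitude of v.
    # A jump from near zero to a finite value marks the onset of oscillation.
    for I in np.linspace(0.0, 1.0, 11):
        tail = simulate_v(I)[20_000:]            # discard the transient
        print(f"I = {I:4.2f}   amplitude = {tail.max() - tail.min():5.2f}")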

Novel tools from MPE domains for neurobiology: Striking examples include new imaging methods such as positron emission tomography, functional magnetic resonance imaging, and optical imaging of brain activity (using fluorescent dyes or directly measuring changes in local blood flow). Multielectrode arrays for electrophysiology are an important tool that will benefit from improvements in materials, design, and signal processing. Hybrid preparations comprise an intracellular electrode that communicates bidirectionally with a computer and allows artificial "ion channels" having arbitrary dynamics to be added to a real neuron.

Important experimental tools from domains other than MPE include pharmacologic manipulations and methods from molecular biology, including genetic "knockout" experiments, in which cells and/or neural circuits are manipulated to study structure/function relationships.

MPE methods provide analytic tools for neurobiology: Examples include methods for analysis of neurobiological data, circuit and other simulation tools, and the like. Linkages of this type are predominantly one way, in that they benefit neurobiology by borrowing tools from MPE, but may be less likely to lead directly to progress on MPE goals.


2. The Impact of Computing on the Modeling of Neural Systems

Numerical simulation and other computations have become indispensable for the investigation of models of neural system function. Yet the success has been piecemeal, and further progress will require intense MPE-neurobiology collaborations. Many groups have developed neural simulators, but none of the present systems seems to fit the needs of the neurosciences community sufficiently well to become a dominant tool and de facto standard. Nor has there been a concerted effort to develop mechanisms for easy interchange of data among different simulation packages. This state of affairs is confusing for neuroscientists who would like to engage in modeling efforts but have limited computational skills. They confront both divergent opinions about which tools to use and barriers to the successful installation of the package that they choose. None of the tools seems to be flexible enough to encompass most of the problems that neuroscientists want to study. Significant advances in software engineering tools can facilitate the development of better tools and of data interchange standards, but the production of robust software requires careful management and close attention to the needs of users. For example, in order to attract the broadest group of users, the software would have to have a user-friendly interface, allow for the modeling of several neurons (membrane properties and morphology), and allow for interconnections defined by the experimenter. On the other hand, it is also important to include tutorials on the techniques of good modeling. Without interdisciplinary education in good modeling practice, and in what can and cannot be concluded from a given type of model, it is too easy to abuse the methods. Modeling requires insight and understanding not just of the system being modeled, but also of the principles and algorithms that underlie the modeling tools themselves.
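
To illustrate the kind of computational core such a tool must expose, here is a deliberately minimal sketch (Python with NumPy; all parameter values are hypothetical) of leaky integrate-and-fire membranes coupled through an experimenter-defined connection matrix. Production simulators add compartmental morphology, channel kinetics, and a user interface on top of a loop of essentially this form.

    import numpy as np

    def run_network(weights, i_ext, v_rest=-65.0, v_thresh=-50.0,
                    v_reset=-70.0, tau_m=20.0, dt=0.1, t_stop=500.0):
        """Leaky integrate-and-fire network. weights[i, j] is the voltage
        kick (mV) delivered to neuron i when neuron j spikes."""
        n = weights.shape[0]
        v = np.full(n, v_rest)
        spikes = []
        for step in range(int(t_stop / dt)):
            fired = v >= v_thresh
            spikes += [(step * dt, i) for i in np.where(fired)[0]]
            v[fired] = v_reset
            v += dt * ((v_rest - v) / tau_m + i_ext)  # membrane relaxation
            v += weights @ fired.astype(float)        # synaptic kicks
        return spikes

    # Experimenter-defined circuit: two mutually excitatory cells with
    # different subthreshold drives; coupling pushes the weaker cell over
    # threshold.
    W = np.array([[0.0, 2.0],
                  [2.0, 0.0]])
    print(run_network(W, i_ext=np.array([0.8, 0.7]))[:5])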

Computation also has much to offer the neurosciences beyond simulation. Data assimilation is an area of intense interest in other applications of high performance computing. In the neurosciences, there is great interest in building databases from imaging and electrophysiological measurements, but the assimilation of these data into modeling environments has only recently begun to receive attention. Improved network communications create clear opportunities for the sharing and publication of extensive data sets. Systems for doing so will encourage much more extensive use of data from diverse sources in modeling studies. Algorithmic and software development for the incorporation of these data into modeling tools is needed.

There is also a need for software tools that confront the problems of large dimensions and model reduction. We are continually confronted with the need to base models on incomplete and imprecise information about model parameters. As the complexity of models grows, so does the number of parameters that need to be fitted. The expense of simulation is related both to the dimension of the underlying models and to their stiffness as systems of differential equations. Therefore, parameter identification will be greatly facilitated by the development of systematic tools for the reduction of models to ones that involve only essential variables, for sensitivity analysis that identifies critical parameters, and for bifurcation analysis that directly identifies regions of parameter space in which substantial changes in the dynamics take place.
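
As a sketch of the sensitivity-analysis step (Python with NumPy; the model and all parameter values are illustrative assumptions), the code below forms central-difference estimates of the normalized sensitivity of one scalar output, the firing rate of a leaky integrate-and-fire cell, with respect to each of its parameters. Parameters with consistently small normalized sensitivity are candidates for elimination in a reduced model; abrupt changes in such estimates as a parameter is varied can flag the bifurcations mentioned above.

    import numpy as np

    def firing_rate(params, dt=0.01, t_max=1000.0):
        """1 / (time to first spike from reset) for a leaky
        integrate-and-fire cell; a convenient scalar model output."""
        tau_m, v_thresh, i_ext = params
        v, t = 0.0, 0.0
        while v < v_thresh and t < t_max:
            v += dt * (-v / tau_m + i_ext)
            t += dt
        return 1.0 / t

    p0 = np.array([20.0, 15.0, 1.0])      # tau_m, v_thresh, i_ext
    f0 = firing_rate(p0)
    for k, name in enumerate(["tau_m", "v_thresh", "i_ext"]):
        h = 0.01 * p0[k]                  # +/- 1% perturbation
        p_hi, p_lo = p0.copy(), p0.copy()
        p_hi[k] += h
        p_lo[k] -= h
        # normalized (logarithmic) sensitivity: d ln f / d ln p
        s = (firing_rate(p_hi) - firing_rate(p_lo)) / (2 * h) * p0[k] / f0
        print(f"{name:9s}  normalized sensitivity = {s:+.2f}")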


3. The analysis of the interaction among network dynamics at multiple time scales

Neural systems have a hierarchy of time scales, ranging from the millisecond timescale of the action potential to developmental changes that take place over years. In order to understand the operation of neural systems, it is critical that the relationships between events on these multiple time scales be explored. For example, much of the communication within neural systems is conveyed by the timing and frequency of action potentials, so it is imperative to understand how these quantities are modulated by synaptic plasticity and other longer term interactions. Another example of multiple time scale interaction is the bursting oscillations (with periods on the order of one second) observed in many systems, from the central pattern generators of invertebrates to the thalamocortical rhythms of mammals.

The mathematical foundations for the analysis of multiple time scale dynamics lie within the domain of singular perturbation theory. Geometric caricatures of the observed multiple time scale behavior have been drawn, but the theory still fails to encompass many observed phenomena. In particular, existing theory deals almost exclusively with systems in which the fastest time scales relax to slowly changing equilibria, or systems in which the rapid time scales are integrable. The observations of neural systems fit in neither of these contexts. Extensions of existing mathematical theories are needed to enhance our intuition about the dynamical behavior of these multiple time scale systems.
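
For reference, the singularly perturbed ("fast-slow") form at issue can be written, in one common convention (the designation of x as fast and y as slow is a notational assumption):

    \begin{aligned}
      \frac{dx}{dt} &= f(x, y)  &&\text{(fast variables)} \\
      \frac{dy}{dt} &= \varepsilon\, g(x, y), \qquad 0 < \varepsilon \ll 1  &&\text{(slow variables)}
    \end{aligned}

Classical theory treats the regime in which the fast subsystem relaxes to a slowly drifting equilibrium f(x, y) = 0, or the regime in which the fast dynamics are integrable and can be averaged. A bursting neuron, whose fast subsystem alternates between rest and oscillation as the slow variables drift, falls into neither regime; this is precisely the gap identified above.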

On a more abstract level, neural systems combine discrete and continuous components. The neural processing of our brains translates continuous signals of electrical activity into discrete concepts and then back again into continuous signals that drive our vocal cords when we put those thoughts into words. Dynamical systems theory has developed models of both continuous-time and discrete systems, but largely separately. Dynamical models of brain function that encompass what we do will need to combine the two in a coherent framework.

Much attention has also been paid to the need for describing and interpreting the temporal patterns of activity among groups of neurons. This is also known as the problem of ensemble coding: how are different sensory stimuli encoded into patterns of neural activity among groups of neurons and how are such patterns distinguished or decoded at subsequent stages? The problems of decoding, transforming and encoding such patterns are more approachable in small systems with a limited number of elements and with well specified connectivity than in large, unbounded systems where the connectivity can only be poorly specified or specified only statistically. New mathematical methods are needed for describing the temporal patterns among even small groups of neurons. Mathematical and computer modeling is certainly necessary for understanding the cellular and circuit features that are essential for the decoding and encoding of activity patterns among neurons.
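
As a concrete, highly simplified instance of the encoding/decoding problem, the sketch below (Python with NumPy; the tuning model and all parameters are illustrative assumptions) encodes a stimulus angle in the Poisson spike counts of a population of cosine-tuned cells and decodes it with the classical population-vector estimator.

    import numpy as np

    rng = np.random.default_rng(0)
    n_cells = 64
    preferred = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)

    def encode(theta, r_max=30.0, window=0.5):
        """Poisson spike counts from rectified cosine tuning curves."""
        rates = r_max * np.clip(np.cos(theta - preferred), 0.0, None)
        return rng.poisson(rates * window)

    def decode(counts):
        """Population vector: count-weighted sum of preferred directions."""
        x = counts @ np.cos(preferred)
        y = counts @ np.sin(preferred)
        return np.arctan2(y, x) % (2 * np.pi)

    theta = 1.3                       # true stimulus angle (radians)
    estimates = [decode(encode(theta)) for _ in range(100)]
    print(f"true = {theta:.3f}   decoded mean = {np.mean(estimates):.3f}")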


4. The Scope of the Working Groups

We now outline the scope of the three working groups. In the process, we further characterize the promise for integrating biological and physical science/engineering approaches to neurobiological systems.

4.1. Small Systems

Within the discipline of neuroscience, the investigation of "small systems" has contributed enormously to the current understanding of the cellular mechanisms underlying information processing in the brain. "Small systems" are subsystems within the nervous systems of many animals, from invertebrates to primates, that are more analyzable than larger, more complex parts of the nervous system. The small system distinction is centered on the idea that a quantitative understanding of the system's operation is accessible. These subsystems are relatively self-contained, the component neurons are accessible to intracellular analysis, and, in many cases, the system produces a quantifiable motor output so that the neurons can be studied in a functional context. For example, many small nervous systems when removed from the organism can continue to produce organized outputs which are remarkably similar to the output of the intact animal.

Numerous examples exist of small systems that have been well studied. These examples come from animals as diverse as invertebrates (e.g., the control of movements or the processing of sensory information in leeches, lobsters, and crickets), lower vertebrates (e.g., sensation in electric fish and swimming in lamprey), and primates (e.g., the control of eye movements or spinal reflexes). Thus, these systems are not at all defined by the complexity of the animal in which they reside, but by the presence of a rigorously quantified behavior, the cellular and circuit basis of which may be understood using neurophysiological experimental techniques combined with modeling and quantitative analysis.

The primary motivation for studying small systems is that these systems provide a context in which to address a number of issues that cannot be nearly as easily studied in larger, more complex systems. Small systems are considerably more tractable than their larger counterparts, facilitating the study of system architecture in semantic context and at a level of detail that is often unattainable in large and/or intact systems. Small systems are a necessary intermediate step in the study of nervous system function, between the level of molecules and membranes and the level of large systems of neurons or the behaving organism. Insights and principles have been, and continue to be, uncovered in the constrained world of small systems that can then be used as building blocks in understanding higher levels of organization.

One of the major issues being addressed using a small systems approach is the relationship between structure/dynamics and function, with the goal of determining whether a direct relationship between specific organizational structures and particular behavior patterns can be identified. For example, do feedback loops of a particular length or strength produce some patterns which are more or less stable than others? Are some feedback/feedforward relationships necessary for stability? Although neuroscientists are making progress in defining the architectures that underlie specific functions and dynamics, there is at present very little intuition about how these relationships generalize. Quantitative and computational methods, such as those used in physics, mathematics, and engineering, have the potential to provide the means for deriving these generalizations. Collaboration between neuroscientists and MPE researchers in the modeling of small neural systems offers the potential for taking the first steps toward understanding the relationship between neural organization and function. These collaborations will certainly reveal principles that are novel and perhaps even counterintuitive to both groups (e.g., the importance of the remarkable amount of positive feedback in the nervous system is not intuitively understood by either community).

The processes that control self-organization can also be studied directly in small systems. Small systems such as slices or cultured neurons are well suited for establishing the properties of cellular and synaptic plasticity. Once such properties are established, the question arises as to whether they are also necessary and sufficient for explaining the full range of learning-related phenomena at higher levels of analysis, including the cognitive and behavioral levels. The question can also be asked in the reverse direction: what cellular or synaptic properties are required to explain learning-related phenomena at higher levels of analysis? In general, modeling and mathematics can be used to study the implications of cellular properties for multicomponent systems and higher order learning phenomena, and to develop the large scale implications of learning rules discovered in small systems. One example is provided by a recent modeling prediction that has subsequently been verified: some properties of neurons that are lost when they are removed from a network redevelop over a period of days. In other words, the neuron "knows" what it is supposed to do and can get there without circuit interactions.

In the exploration of all of these issues, one of the essential goals is the determination of general organizational principles that span a wide variety of systems types and magnitudes regardless of how they are instantiated. This goal is perhaps the primary motivation for studying small systems, providing the potential for modeling tractable (i.e. "small") systems with the ultimate result of illuminating the basis for computation in much larger systems. This exploration is impossible without the direct interaction between neurophysiologists, modelers, and theoreticians, each group playing a major role in the overall process.

4.2. Dynamics of the Cortex

The cerebral cortex of vertebrates is a vast and complex structure that has attracted attention over the years because it is closely linked to important sensory, motor, and cognitive behaviors. Several technologies are providing a great deal of specific information on cortical function. Patch-clamp recordings from neurons in cortical slices are providing information on the biophysics of cortical neurons. Optical imaging and multisite recording methods are providing information on the spatial and temporal dynamics of populations of neurons in different regions of the cortex. Imaging methods such as PET and functional MRI are beginning to extend our knowledge of cortical function to humans performing specific behavioral tasks. Ultimately, it will be important to relate events at the cellular level to the behavior of populations of neurons engaged in specific behavioral tasks. This is probably impossible without the aid of several types of cortical models.

Models are essential for the assimilation and understanding of the results of cortical experiments. This task offers challenging research opportunities to scientists with skills in developing mathematical models, for example physicists and mathematicians as well as neuroscientists experienced in cortical research.

Many classes of models are needed to serve well-defined purposes beyond just fitting and explaining existing data. Examples of challenges facing cortical research are the following:

Provide frameworks for developing new experiments to characterize and understand specific neural mechanisms.
Predict neural behavior, qualitatively and quantitatively, and create testable hypotheses.
Create dynamical models of interacting populations that account for modalities served by populations of neurons.
Build models containing parallel networks, and explore how individual neurons are recruited into populations.
Use detailed biophysical models of cortical neurons (implemented via simulations or analog VLSI hardware) to relate cellular and molecular processes to the functional properties of cortical circuits.
Use dynamical models of interacting populations of neurons to study the functional organization of neurons involved in common tasks but located in different cortical areas; such models will be essential to the interpretation of data collected using imaging and multisite recording methods.

There is also a need for imaginative (perhaps impressionistic) models, not necessarily tied closely to experiments. Such models should suggest new paradigms for understanding the function of the nervous system and could serve as the basis for the generation of new technologies, which might create hardware capable of learning, self-organization, and the exhibition of near-real behaviors. Possible areas of emphasis:

The system as an active agent: the initiation and execution of motor action, and the active interpretation of sensory data.
Learning, homeostatic, and self-organizing systems.

4.3. Large Populations of Neurons

The broad goals of neurobiology are to explain how brain functions mediate behavior in the context of the animal's internal and external environment, and how these brain functions are implemented by lower-level mechanisms. "Large populations of neurons" poses key open problems in neurobiology including:

What are the "neural codes," i.e., the relationships between biologically relevant information and the physical properties of neuronal interaction and signaling?
What does the local neocortical circuit do, from an information processing standpoint?
What principles underlie multistage processing in sensory and motor systems? How does each stage acquire its functional properties, what are the learning rules, what are the roles of feedback between stages, etc.?
What principles underlie the segregation and integration of processing streams?
What is the higher level organization of the brain? What roles (including nonmotor) does the cerebellum play, and how? How are "cognitive" and "emotional" processing linked? What are the neurobiologically distinct types of memory and how are they implemented?


5. Opportunities for Cross-Disciplinary Interactions

Having outlined in general the scope of the three Working Groups, we now review in more detail the opportunities for Neurobiology-MPE interaction identified by each group in turn.

5.1. Small Systems

5.1.1. The effect of external influences on the function and organization of networks

Many small neuronal systems afford the advantage of reliably staining identified neurons or classes of neurons and of recording and manipulating their activity. This advantage permits a definition of the anatomical limits of neuronal connectivity and the delineation of functional, state-dependent circuitry. The state dependence of such functional differences results from differences in sensory input, neuromodulatory input, and/or hormonal levels. One of the challenges that face scientists who work on these small systems is to understand fully how these influences sculpt functional neuronal circuits from anatomical networks. We have come to realize through myriad experimental studies that these influences act both on the membrane properties of component neurons (through modulation of ionic conductance mechanisms) and on synaptic interactions. Because modulatory influences are often multifaceted, these experimental studies often fail to identify the critical parameters that are modulated for the emergence of a functional neuronal network. Moreover, even when critical modulated parameters are identified, true understanding is elusive because the multitude of interactions of synaptic and membrane properties in even the simplest functional circuit outstrips intuition. Given these limits, it is not surprising that there is no substantive theoretical framework for understanding the emergence of functional neuronal networks from anatomical circuits.

Modeling at both the realistic (conductance based neuronal and circuit modeling) and the abstract level provides the means for expanding our current understanding of neural systems. These models will be more insightfully developed through the interaction of experimental neuroscientists with computer scientists, engineers, and mathematicians. For example, dynamical systems analysis has proved a fruitful approach toward understanding how rhythmically active neurons and limited networks function and is currently being expanded to encompass changes in critical parameters by modulatory influences. This mathematical approach has led to the development of theoretical frameworks for understanding operationally similar neuronal circuits that differ in many anatomical and physiological details. Computer tools for such analysis need to be more fully developed and packaged to better facilitate the interaction between mathematicians and neuroscientists. Detailed models will be essential for defining which details are important in conferring the functionality to a particular neuronal circuit and in pointing out how theoretical frameworks must be expanded.

Another area where interaction among the disciplines will aid small systems analysis is in the development of better hybrid systems for circuit analysis. Interfacing computer models with neuronal circuits has already begun, but should be continued so that whole neurons or parts of circuits can be simulated on-line and interfaced through microelectrodes to neuronal circuits. By varying the parameters in the simulated components and observing the response in the biological circuit, new insights should result. Such hybrid systems could also be used for parameter optimization in realistic models and in defining the useful parameter space of the biological circuit. Biological signals could be compared with on-line simulation and parameter correction implemented. Such ideas will necessitate implementation of new combinations of hardware and software resources as well as new methods for error estimation and the use of error estimation in parameter delineation.
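
The following schematic shows the shape of such a hybrid-system loop, often called a dynamic clamp: at each cycle the measured membrane potential updates a simulated "ion channel," and the resulting current is injected back into the cell. This is a sketch only (Python with NumPy); read_voltage and inject_current stand in for a real data-acquisition interface, and the channel parameters are hypothetical.

    import numpy as np

    def dynamic_clamp(read_voltage, inject_current, g_max=10.0, e_rev=-80.0,
                      tau_gate=50.0, v_half=-45.0, k=5.0, dt=0.1,
                      t_stop=1000.0):
        """Couple one simulated conductance to a live cell in real time."""
        m = 0.0                                      # gating variable
        for _ in range(int(t_stop / dt)):
            v = read_voltage()                       # sample the real neuron
            m_inf = 1.0 / (1.0 + np.exp(-(v - v_half) / k))
            m += dt * (m_inf - m) / tau_gate         # first-order gating
            inject_current(g_max * m * (e_rev - v))  # Ohmic artificial channel

    # Offline test with a passive stand-in for the biological neuron.
    class FakeCell:
        v, i = -60.0, 0.0
        def read(self):
            self.v += 0.1 * ((-60.0 - self.v) / 20.0 + self.i / 20.0)
            return self.v
        def inject(self, current):
            self.i = current

    cell = FakeCell()
    dynamic_clamp(cell.read, cell.inject)
    print(f"potential with artificial channel: {cell.v:.1f} mV")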

5.1.2. Real-time hardware implementations of neuromorphic sensorimotor systems

One promising approach for furthering our understanding of neural information processing and control lies in developing real-time implementations of small sensorimotor systems in functional contexts. To understand how biological neural circuits implement robust, adaptive solutions to complex real world control problems, it is essential to view the nervous system as one component in an integrated system that involves the coupled dynamics of the neural controller, the physical plant, and the external environment. For practical reasons, experimental research on small neural systems tends to focus on the dynamics of isolated neural circuits rather than the coupled dynamics of complete sensory-motor systems. One way to shift the emphasis toward integrated systems level considerations is to bring together biologists, engineers, and dynamical systems scientists to work together on the design and implementation of biologically inspired artificial systems that carry out functional tasks in the real world. A few of the cross-disciplinary research areas relevant to the synthesis of biomorphic robotic systems include the development and analysis of new "design algorithms" for creating neural architectures that meet particular functional goals (evolutionary algorithms, developmental algorithms, learning algorithms, etc.), real-time emulation of neural circuits in silicon (both analog and digital implementations), and engineering analysis of active sensing, sensory feedback, and adaptive motor control systems found in biology.

One specific area in which much progress has been made is the development of analog VLSI (very large-scale integrated) circuits that model neurobiological systems. One of the goals of this research is the development of silicon neurons that ultimately incorporate all of the essential computational features found in biological neurons, resulting in large numbers of model neurons fabricated on small pieces of silicon. The second portion of this research, the capability of interconnecting large numbers of silicon neurons, is also under development. The interconnection systems already in use permit easy implementation of arbitrary connections between model neurons, sensory elements, and actuators. In addition, the parameters that set dynamics, spike firing threshold, and synaptic weight, to name a few, are readily programmable through a standard computer interface. Therefore, these systems have the potential to model, in real time, the signal processing capability found in the nervous systems of simple animals, facilitating the implementation of complex neuromorphic sensorimotor systems.

One of the primary challenges in assembling these artificial nervous systems is the development of the correct architectures and connectivity. It is not at all clear how the interconnects and dynamics needed to support the desired behavior can be designed, in the engineering sense, from basic principles or experience. If not by the process of design, then how will these systems of artificial neurons be put together? A fruitful approach may be to apply developmental and evolutionary principles known, or thought, to exist in biology. Collaborations between developmental neurobiologists and neuromorphic engineers, as well as between researchers studying evolution and neuromorphic engineers, should be encouraged and supported. This could result in advances in all three fields by providing an experimental system that, although very different from biological organisms, allows the testing of theories and ideas on how complex systems evolve and develop. The resulting systems would also present unique opportunities to study computation in a sensorimotor "nervous" system in which all state variables are completely and readily available.

The implementation of such neuromorphic sensorimotor systems is only possible through intensive collaboration between engineers and neuroscientists. However, if such an endeavor is to be successful, it is also necessary to include theory and computer modeling in both the development and the analysis of the hardware implementations. The complexity of designing hardware mandates that predictions of system operation be made through theoretical and simulation means before the system is implemented. Also, once the system is implemented, the inevitable complexity of its structure and operation must be studied in much the same way as its biological counterparts, again requiring the inclusion of mathematical techniques. In the end, the implementation of such systems provides an unprecedented opportunity for collaboration among neuroscientists, computer modelers, theoreticians, and engineers.

5.2. Dynamics of the Cortex

5.2.1. Tools for Data Analysis

Recent advances in multi-site and imaging recordings permit us to go well beyond single-neuron interpretations of cortical function and open the door to the investigation of the spatio-temporal nature of the neural code. However, they have created the need for new tools to analyze and understand the information contained in massive data sets of 10^10 to 10^12 bytes. Such immense datasets, which at first sight seem mind-numbing, create vast opportunities for research which of necessity will have to be multi-disciplinary. Specific goals are:

Correlate cortical states with behavioral states and the statistics of the environment.
Identify latent topological structures in the data sets.
Develop dimensionality-reduction techniques based on general optimization principles for lossless and lossy compression (a minimal sketch follows this list).
Apply dynamical systems theory to analyze reduction algorithms for stability, convergence properties, and bifurcations.
Develop tools and techniques for the identification, modeling, and reduction of noise.
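
The promised sketch of the dimensionality-reduction item (Python with NumPy; the surrogate data, standing in for a multichannel recording, are an assumption): principal component analysis via the singular value decomposition, retaining only the components needed to explain 95% of the variance.

    import numpy as np

    rng = np.random.default_rng(1)
    # Surrogate recording: 100 channels driven by 3 latent sources plus noise.
    t = np.linspace(0.0, 10.0, 2000)
    sources = np.stack([np.sin(3 * t), np.sin(7 * t + 1.0), np.cos(11 * t)])
    mixing = rng.normal(size=(100, 3))
    data = mixing @ sources + 0.2 * rng.normal(size=(100, t.size))

    x = data - data.mean(axis=1, keepdims=True)   # center each channel
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    var = s**2 / np.sum(s**2)                     # variance per component
    k = int(np.searchsorted(np.cumsum(var), 0.95)) + 1
    print(f"{k} of {data.shape[0]} components explain 95% of the variance")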

5.2.2. Tools for Multiple Neuron Recording

Sensory input and intended motor output are represented in the activity of many neurons. Thus we must advance the technologies used to gather data from populations of neurons. Many of the technological goals will require joining skills from engineering and physics as well as from experimental neuroscience. A summary of the goals and opportunities, organized by spatial scale, follows:

Spatial scale: Single cell
Temporal rate: ~10^4 Hz
Methods: intracellular electrical recording; extracellular electrical recording; voltage-sensitive dyes with optical scanning
Future needs: miniaturized hardware for multisite recording; hardware and software for recording from large populations; scanning technology for in vivo recording; genetic introduction of optical probes

Spatial scale: Small-scale spatially averaged populations (~100 μm)
Temporal rate: ~10^2 Hz
Methods: voltage or Ca++ dyes with imaging; local field potential; magnetic imaging (MEG)
Future needs: fast cameras with large dynamic range; multiple-site recording techniques; models of cortical conductivity; low-cost SQUID technology; improved analysis methods

Spatial scale: Large-scale populations (1 mm^2 to 30 mm^2)
Temporal rate: 0.1 Hz to 100 Hz
Methods: intrinsic optical imaging; fMRI; metabolic imaging; voltage-sensitive dyes
Future needs: models of hemodynamic and neuronal coupling; characterization of biological sources of variation; genetic, site-specific probes; new dyes with high signal-to-noise ratio and neuron specificity

In another vein, MEG and VEP (visual evoked potentials) are examples of techniques that attempt to infer cortical information from noninvasive external detection. Recovering the actual cortical activity sensed by such detectors is an inverse problem of considerable difficulty. The challenge of solving this problem represents an important research opportunity for a diversely trained group of engineers and scientists.

5.2.3. Neural Coding and Representation

Many questions in this area require investigation:

How are the features extracted by any of these methods coded and relayed to higher areas of visual cortex? What is the interplay between sparse coding, code complexity, and code description length? How is the binding of several objects in a scene accomplished in the coding of the compound representation?

In characterizing cortical responses there is a need for:

The use of more complex and more natural stimuli or motor behaviors.
Response measures that allow assessment of neural coding on behavioral and finer time scales.
Animals must respond within short times (on the order of one hundred milliseconds) on the basis of single neural responses, despite great variability in cortical responses to a given stimulus.
The role and precision of temporal coding are presently unknown.
Systematic means of exploring a stimulus space, guided by neural responses. (To find regions that best drive cells.)
Examine context dependent neuronal responses, both spatial and temporal.
Develop stimulus ensembles that are experimentally practicable yet capture salient aspects of natural stimuli.
Develop mathematical models of the structure of natural stimuli.


5.3. Large Populations of Neurons

5.3.1. New approaches to parallel distributed processing (PDP)

PDP has been largely predicated on the use of processing units (artificial "neurons") that implement a sigmoidal or threshold output function. Current and future lines of development should include:

Units (representing neurons) having more realistic dynamics and ion-channel properties.
Units that represent processing at higher levels of integration, e.g., local cortical circuits, or units that represent "maps," such as the ocular dominance and orientation maps of primary visual cortex, appropriately parametrized.
Better understanding of the roles of feedback connections between processing stages.
Novel learning algorithms.
Modeling of subsystems (cortical and subcortical) and their relationships.
More insightful linkages between biologically identified subsystems and the functionally defined subsystems that are studied as part of cognitive science. (Such linkages are studied under the rubric of "cognitive neuroscience.")

5.3.2. Neural representations

Sensory regions in cortex embody multiple distributed representations (or "maps") of the incoming signals. These representations emphasize different aspects of the input, reflect different levels of abstraction of input features, and are modulated by interactions within and among regions. What determines which aspects of the input are represented, and what functional transformations are implemented by the projections from one region to another? Are there optimization principles that generate the observed transformations, and that have predictive power? Can mathematical methods new to neuroscience (e.g., the functional analysis methods of mathematical physics, which are important in formulations of quantum field theory) bear usefully on the analysis of neural maps and their transformations?

These distributed representations are in some cases modifiable by neural experience. Various principles have been proposed to govern learning and plasticity. The time is ripe for deeper collaboration between theorists and experimental neurobiologists in this area, owing to recent progress in: the experimental methods for measuring neural activity; the understanding of mechanisms for Hebbian learning; the elucidation of the importance of plasticity in adult animals; the understanding of the statistical properties of realistic ensembles of natural sensory data; and the computing power available to analyze the consequences of applying learning principles to such data.
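
As one concrete example of the learning principles at issue, the sketch below (Python with NumPy; the input ensemble and learning rate are illustrative assumptions) implements Oja's rule, a stabilized Hebbian rule under which a single linear unit's weight vector converges, up to sign, to the first principal component of its input ensemble.

    import numpy as np

    rng = np.random.default_rng(2)
    # Input ensemble: 2-D patterns correlated along the (1, 1) direction.
    cov = np.array([[1.0, 0.8],
                    [0.8, 1.0]])
    patterns = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

    w = rng.normal(size=2)
    eta = 0.01                          # learning rate
    for x in patterns:
        y = w @ x                       # post-synaptic output
        w += eta * y * (x - y * w)      # Hebbian growth minus decay (Oja)
    print("learned weights:", np.round(w, 3))   # ~ +/-(0.707, 0.707)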

Open questions related to neural representations are fundamental both for neurobiological understanding and for the design of practical machines (such as an autonomous walking robot) that can perform sensorimotor tasks with animal-like flexibility and sophistication. In addition to those posed above, these questions include: How are representations integrated across sensory modalities? How do these representations guide motor behavior? How are motor "features" (such as gait, but at various levels of abstraction) learned, used in motor planning, then translated into specific instantiations of motor output that vary depending on environmental details?

5.3.3. New schemes for feature discovery and learning

Much more attention should be given to the statistics and structure of real sensory input (visual, auditory, etc.). Mammalian brains are remarkable at using real-world structure to learn and to perform biologically relevant tasks. What properties of this structure may be essential for the learning process? Some candidate properties include the fact that input features exist at multiple scales (in space, time, frequency, etc.), and that input statistics are typically non-Gaussian. Many researchers ignore real-world statistical properties, perhaps on the assumption that animals are capable of "general-purpose" learning, so that the type of structured input is unimportant, or perhaps in the belief that such data is unavailable or too difficult to obtain. Clues obtained from an understanding of real-world statistics may, however, be vital both for understanding biological learning and performance, and for implementation in artificial systems.
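
A small numeric illustration of the non-Gaussianity point (Python with NumPy; the Laplacian distribution is a stand-in assumption for heavy-tailed natural filter responses, not a measurement):

    import numpy as np

    rng = np.random.default_rng(3)

    def excess_kurtosis(x):
        """Fourth standardized moment minus 3; zero for a Gaussian."""
        z = (x - x.mean()) / x.std()
        return np.mean(z**4) - 3.0

    gaussian = rng.normal(size=100_000)
    heavy_tailed = rng.laplace(size=100_000)  # stand-in for natural inputs
    print(f"Gaussian:     {excess_kurtosis(gaussian):+.2f}")     # ~ 0
    print(f"heavy-tailed: {excess_kurtosis(heavy_tailed):+.2f}")  # ~ +3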

5.3.4. Signal and noise in neural systems

Neural activity comprises irregular, apparently random, behaviors commonly considered as noise. Important issues include:

Does apparently random activity (e.g., the precise timing of spikes) in fact encode information used by the brain?
Does noise aid information processing or control, as in the case of stochastic resonance? (A minimal sketch follows this list.)
What role does the optimization of signal-to-noise ratio play in sensory processing, both at the periphery (e.g., retinal processing) and at later stages? Links between neural processing, optimal encoding, and information theory have been proposed and successfully tested by analyses of this type.
Do deterministic (e.g., chaotic) dynamics underlie at least part of the observed irregular activity? If so, do they play an information processing role, or are they an unimportant epiphenomenon?
The separation of signal from noise can be generalized to the problem of separating a desired signal from a background of other signals that may also be meaningful. This is a problem of practical importance (e.g., separation of multiple acoustic sources, segmentation of visual scenes), having implications for the neurobiology of attention, sensory processing, and learning. Relevant disciplines include information and control theory, optimal coding methods, psychophysics, and signal processing. As emphasized above, statistical properties of real-world input may be vital in solving such problems.
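
Here is the promised stochastic-resonance sketch (Python with NumPy; thresholds, frequencies, and noise levels are illustrative assumptions): a periodic signal too weak to cross a detection threshold on its own is transmitted best at an intermediate noise level, as measured by the phase-locked power of the threshold crossings relative to the flat background expected from unlocked, Poisson-like events.

    import numpy as np

    rng = np.random.default_rng(4)
    f0, dt = 0.1, 0.01
    t = np.arange(0.0, 500.0, dt)
    signal = 0.5 * np.sin(2 * np.pi * f0 * t)   # subthreshold: threshold is 1

    for sigma in [0.1, 0.5, 3.0]:
        x = signal + sigma * rng.normal(size=t.size)
        cross = (x[:-1] < 1.0) & (x[1:] >= 1.0)  # upward threshold crossings
        times = t[1:][cross]
        # Power of the crossing train at the drive frequency, relative to
        # the background expected from the same number of unlocked events.
        locked = np.abs(np.exp(-2j * np.pi * f0 * times).sum()) ** 2
        snr = locked / max(cross.sum(), 1)
        print(f"sigma = {sigma:3.1f}  events = {cross.sum():5d}  SNR ~ {snr:8.1f}")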

5.3.5. Plasticity and Self-Organization

The problems of learning are at the heart of the problems of intelligence.

What learning rules lead to effective computational results, given the world we live in?
What learning rules can account for development and/or adult plasticity? What are the distinctions between different learning rules?
What is the role of learning in mature function of sensory and motor systems?
How do network activity dynamics and learning dynamics interact?
How can sensory motor learning and temporal sequence learning be accomplished?
Can biochemically realistic models be developed for LTP, LTD, and other plasticity rules? How do these compare with simplified models?
How do cells and networks of plastic neurons achieve stability or homeostasis, staying within their operating range?


6. Education

A key element to aid research that integrates neurobiology and MPE should be the training of new scientists. To build expertise in interdisciplinary research, more is needed than to train students in multiple disciplines. Students need to develop an independent style of thinking, and judgment in choosing problems that are important and tractable. It is easy, and dangerous, for interdisciplinary students to choose problems whose criteria for success are vague because the problem fails to address goals that are central to either discipline. For example, there have been many neural network projects that have yielded a "biologically inspired" classification method that is neither an improvement over other classification algorithms, nor adds to biological understanding.

Specific scenarios:

Renewable training grants (graduate student or postdoctoral) that may be used at any institution.
Support for faculty leaves in complementary fields.
Sponsorship for neuroscience workshops directed toward mathematicians, physicists and engineers.
Support for the development of interdisciplinary books, Websites, and other educational material bridging neuroscience and mathematics/physics/engineering.
Independence, maturity, and strong quantitative skills are particularly important to trainees in newly established cross-disciplinary studies.
Interactions should be encouraged between students in different disciplines. Programs should consider adopting a "buddy" system, in which students from different disciplines are paired up and assigned a common problem to work on.
Core curricula should be defined for a key set of disciplinary constellations: If students have some shared understanding, it is much easier for them to interact with one another productively, even if their backgrounds are diverse.

The success of any collaboration requires that the members of the collaborative team learn at least the basics of each other's languages and techniques. Without a common language and an appreciation of the background and the challenges of each discipline, there is no collaboration. Therefore, any interdisciplinary program should require that a person trained in physics, dynamical systems, or computer science spend a significant amount of time in the laboratory of the neurobiologist, and vice versa. However, each student must achieve a "critical mass" of expertise in some core discipline (whether already existing or yet to emerge) if a true contribution is to be made to the collaborative effort.


7. Methods for Stimulating Interdisciplinary Collaboration

There are a number of potential efforts, in addition to the educational initiatives described in the previous section, that could address the development of an understanding of neurobiological systems from a computational and theoretical perspective. All of these programs center around encouraging and promoting interdisciplinary interaction. Establishing new interdisciplinary collaborations first requires that students and senior researchers from different disciplines meet in a relatively informal environment to learn about problems common to both. NSF is in a unique position to support such interactions. Formal workshops such as this one, which bring together experts in the field, should be followed by support for workshops in other venues for both students and faculty to learn about multidisciplinary computational and theoretical approaches to neuroscience. This could be done in summer workshops or by establishing extended training programs at academic institutions. The latter would be established at institutions where expertise exists. Senior researchers could be invited to give lectures and become involved in the program, and interested postdocs and graduate students from around the country could receive support for spending a year there.

The recent explosion of the use of the internet provides a new opportunity for establishing collaborations. Web sites could be created for "advertising" intriguing problems that individuals or groups of neurobiologists see as needing input from other disciplines. Similarly, funding agencies could use the Web to advertise new interdisciplinary programs.

A critical mass of funding is crucial for neurobiology-MPE collaborations. We strongly believe that new interdisciplinary programs that fund only a handful of the submitted proposals discourage collaborations rather than nurture them. The best potential applicants will shy away from new programs if the effort required to put together an application has little chance of payback. Interdisciplinary collaborations should receive interdisciplinary support from the Directorates most closely related to the disciplines of the collaborators.

Proper review of interdisciplinary applications requires ad hoc panels chosen very carefully to represent the disciplines of the applicants. It is also critical that the individuals chosen for these panels appreciate the value of multidisciplinary research. Too often, experts in a particular field fail to appreciate the difficulty of the collaborative approach and view the fact that only part of the proposal is related to their expertise as a negative factor, rather than positively evaluating the effort to place their area of expertise in an effective interdisciplinary framework.

Programs should be organized around a set of goals, rather than focusing on too narrowly defined skill sets. In a new cross-disciplinary field the prediction of results is particularly difficult, directions can change quickly, and students need to be prepared for a wide class of future problems. It may take considerable time for new teams of researchers from different disciplines to learn enough of each other's approaches to make a truly effective team effort. This can be recognized programmatically by NSF providing longer-term grants for interdisciplinary projects (e.g., 5 years).


Appendix 1: Workshop Agenda

Monday, April 8
8:00-8:30 Welcome and introduction (Darema, Arbib)
8:30-9:00 NSF ADs and Directors

Tuesday, April 9
8:30 - 9:00 Arbib and Hopfield: Formal introduction
9:00 - 9:30 Ron Calabrese: Small Systems
9:30 - 10:00 Barry Richmond: Cortical Dynamics
10:00 - 10:30 Break
10:30 - 11:00 Ehud Kaplan: Large Populations of Neurons
11:00 - 11:30 Misha Mahowald: Open Problems in Neurobiology
11:30 - 12:00 Harel Shouval: LTP, LTD and Cortical Receptive Fields: Is there a Unifying Principle for Synaptic Modification?
12:00 - 12:15 Harel Shouval: Education Experiences
12:15 - 1:10 Lunch
1:10 - 1:25 Shankar Sastry: Lessons from the NSF workshop on Biosystems Analysis and Control
1:25 - 1:40 John Guckenheimer: Summary of the NSF Workshop on Computational Biology
1:40 - 2:00 Charges to the WGs (Arbib and Hopfield)
2:00 - 3:30 WGs Breakout Session
4:00 - 6:00 WGs Breakout Session

Wednesday, April 10
8:30 - 10:00 WG chairs present interim summaries
10:00 - 10:30 Break
10:30 - 12:00 Working Groups Breakout Session
12:00 - 1:00 Working Lunch/Working Groups Breakout Session
1:00 - 2:30 Summaries presented by WG chairs
2:30 - 4:00 General Discussion, Summaries by Arbib and Hopfield
4:00 Workshop Adjourns

Working Groups

WG1: Small Systems; Chair: Stephen P. DeWeerth
WG2: Dynamics of Cerebral Cortex; Chair: Larry Sirovich
WG3: Large Populations of Neurons; Chair: Ralph Linsker


Appendix 2: Participants

NSF NEUROINFOSCIENCES WORKSHOP

APRIL 8-10, 1996

HOLIDAY INN BALLSTON, ARLINGTON, VIRGINIA


PARTICIPANTS' LIST


Dr. Stephen Adler
Institute for Advanced Study
School of Natural Sciences
Olden Lane
Princeton, NJ 08540
Telephone: (609) 734-8051
Fax: (609) 924-8399
E-Mail: adler@sns.ias.edu

Dr. Michael Arbib
University of Southern California
Center for Neural Engineering
Los Angeles, CA 90089-2520
Telephone: (213) 740-9220
Fax: (213) 740-5687
E-Mail: arbib@pollux.usc.edu

Dr. Joseph Atick
Computational Neuroscience Lab
Rockefeller University
1230 York Avenue
New York, NY 10021
Telephone: (212) 327-7421
Fax: (212) 327-7422
E-Mail: atick@venezia.rockefeller.edu

Dr. Curtis Bell
R.S. Dow Neurological Sciences Institute
1120 N.W. 20th Avenue
Portland, OR 97209
Telephone: (503) 413-7222
Fax: (503) 413-7229
E-Mail: bellc@lhs.org

Dr. Ronald Calabrese
Department of Biology
Emory University
1510 Clifton Road
Atlanta, GA 30322
Telephone: (404) 722-0319
Fax: (404) 727-2880
E-Mail: rcalabre@biology.emory.edu

Dr. Avis Cohen
Department of Zoology
Zoo/Psy Building
University of Maryland
College Park, MD 20742
Telephone: (301) 405-0069
Fax: (301) 314-9358
E-Mail: ac61@umail.umd.edu

Dr. Michael Creutz
Physics Department
Brookhaven National Laboratory
Bldg. 510A
P.O. Box 5000
Upton, NY 11973-5000
Telephone: (516) 344-3871
Fax: (516) 344-5568
E-Mail: creutz@bnl.gov

Dr. Fred Delcomyn
Department of Entomology
University of Illinois
320 Morrill Hall
505 South Goodwin
Urbana, IL 61801
Telephone: (217) 333-8793
Fax: (217) 244-3499
E-Mail: delcomyn@uiuc.edu

Dr. Stephen DeWeerth
School of Electrical & Computer Engr.
Georgia Institute of Technology
Atlanta, GA 30332-0950
Telephone: (404) 894-4738
Fax: (404) 894-9959
E-Mail: steve.deweerth@ece.gatech.edu

Dr. Rolf Eckmiller
Division of Neuroinformatics
Department of Computer Science
University of Bonn
Roemerstr. 164
D-53117 Bonn (F.R. Germany)
Telephone: 011-49-228-73-4422
Fax: 011-49-228-73-4425
E-Mail: eckmiller@nero.uni-bonn.de

Dr. John Elias
Department of Electrical Engineering
University of Delaware
Newark, DE 19716
Telephone: (302) 831-1163
Fax: (302) 831-4316
E-Mail: elias@ee.udel.edu

Dr. Stephanie Forrest
Department of Computer Science
University of New Mexico
Albuquerque, NM 87131-1386
Telephone: (505) 277-7104
Fax: (505) 277-6927
E-Mail: forrest@cs.unm.edu

Dr. Bijoy K. Ghosh
Systems Science and Mathematics Dept.
Campus Box 1040
Washington University
One Brookings Drive
Saint Louis, MO 63130
Telephone: (314) 935-6039
Fax: (314) 935-6121
E-Mail: ghosh@zach.wustl.edu

Dr. John Guckenheimer
Department of Mathematics
Cornell University
Center for Applied Mathematics
657 Rhodes Hall
Ithaca, NY 14853
Telephone: (607) 255-4336
Fax: (607) 255-9860
E-Mail: gucken@cam.cornell.edu

Dr. John Hopfield
CALTECH (139-74)
Pasadena, CA 91125
Telephone: (818) 395-2808
Fax: (818) 792-7402
E-Mail: john@hope.caltech.edu

Dr. James Houk
Department of Physiology
Northwestern University Medical School
303 E. Chicago Avenue
Ward Building 5-003, M211
Chicago, IL 60611
Telephone: (312) 503-8219
Fax: (312) 503-5101
E-Mail: houk@casbah.acns.nwu.edu

Dr. Ehud Kaplan
Ophthalmology Department
Mount Sinai School of Medicine
One Gustave Levy Place
New York, NY 10029
Telephone: (212) 327-8392
Fax: (212) 289-5945
E-Mail: kaplane@rockvax.rockefeller.edu

Dr. Michael Kirby
Department of Mathematics
Engineering E. Wing
Colorado State University
Ft. Collins, CO 80524
Telephone: (970) 491-6850
Fax: (970) 491-2161
E-Mail: kirby@ritz2.math.colostate.edu

Dr. David Kleinfeld
Department of Physics
University of California - San Diego
9500 Gilman Drive, Dept. 0319
La Jolla, CA 92093-0319
Telephone: (619) 534-4818
Fax: (619) 534-7697
E-Mail: dk@physics.ucsd.edu

Dr. Aurel Lazar
Department of Electrical Engineering
Columbia University (CTR)
530 West 120th Street, Room 801
New York, NY 10027-6699
Telephone: (212) 854-1747
Fax: (212) 316-9068
E-Mail: aurel@ctr.columbia.edu

Dr. Ralph Linsker
IBM
T.J. Watson Research Center
P.O. Box 218 (Room 35-110)
Yorktown Heights, NY 10598
Telephone: (914) 945-1077
Fax: (914) 945-4454
E-Mail: linsker@watson.ibm.com

Dr. Alexander Lukashin
University of Minnesota Medical School
Brain Sciences Center (11B)
VA Medical Center
One Veterans Drive
Minneapolis, MN 55417
Telephone: (612) 725-2000, ext. 5568
Fax: (612) 725-2283
E-Mail: lukas001@maroon.tc.umn.edu

Dr. Misha Mahowald
Institute fuer Neuroinformatik
Gloria Strasse 32
CH-8006 Zuerich
SWITZERLAND
Telephone: 41-1-257-2661
Fax: 41-1-257-6983
E-Mail: misha@neuroinf.ethz.ch

Dr. Kenneth Miller
Department of Physiology
University of California-San Francisco
513 Parnassus
San Francisco, CA 94143-0444
Telephone: (415) 476-8217
Fax: (415) 476-4929
E-Mail: ken@phy.ucsf.edu

Dr. Willard Miranker
81 Meadow Road
Briarcliff Manor, NY 10510
Telephone/Fax: (914) 941-5435
E-Mail: miranker@cs.yale.edu

Dr. Sanjoy Mitter
Massachusetts Institute of Technology
LIDS
77 Massachusetts Avenue
Room/Bldg. 35-308
Cambridge, MA 02139
Telephone: (617) 253-2160
Fax: (617) 253-3578
E-Mail: mitter@lids.edu

Dr. Ron Mohler
Department of Elec. & Computer Engnr.
Oregon State University
Corvallis, OR 97331
Telephone: (541) 737-3470
Fax: (541) 737-3617
E-Mail: mn@ece.orst.edu

Dr. Mark Nelson
Beckman Institute
University of Illinois
405 North Mathews
Urbana, IL 61801
Telephone: (217) 244-1371
Fax: (217) 244-5180
E-Mail: m-nelson@uiuc.edu

Dr. Jose N. Onuchic
Department of Physics
University of California - San Diego
9500 Gilman Drive, Dept. 0319
La Jolla, CA 92093-0319
Telephone: (619) 534-7067
Fax: (619) 534-7697
E-Mail: jonuchic@ucsd.edu

Dr. Tomaso Poggio
Massachusetts Institute of Technology
Building E25, Room 201
45 Carlton Street
Cambridge, MA 02142
Telephone: (617) 253-5230
Fax: (617) 253-2964
E-Mail: tp-temp@ai.mit.edu

Dr. Barry Richmond
Laboratory of Neuropsychology
National Institute of Mental Health
Building 49, Room 1B-80
Bethesda, MD 20892
Telephone: (301) 496-5625 ext.225
Fax: (301) 402-0046
E-Mail: bjr@ln.nimh.nih.gov

Dr. Shankar Sastry
Dept. of Elec. Engnr. & Computer Sciences
261 M Cory Hall
University of California-Berkeley
Berkeley, CA 94720-1770
Telephone: (510) 642-1857
Fax: (510) 642-1341
E-Mail: sastry@eecs.berkeley.edu

Dr. Harel Shouval
Brown University
Box 1843
Providence, RI 02912
Telephone: (401) 863-3920
Fax: (401) 863-3494
E-Mail: hzs@cns.brown.edu

Dr. Ralph Siegel
Center for Molecular & Behavioral Neuroscience
Rutgers
197 University Avenue
Newark, NJ 07102
Telephone: (201) 648-1080 ext.3261
Fax: (201) 648-1272
E-Mail: axon@cortex.rutgers.edu

Dr. Karen Sigvardt
Center for Neuroscience
University of California - Davis
1544 Newton Court
Davis, CA 95616
Telephone: (916) 754-5022
Fax: (916) 757-8827
E-Mail: sigvardt@violot.berkeley.edu

Dr. Eero Simoncelli
Computer & Information Science Department
University of Pennsylvania
3401 Walnut Street
Room 335-C
Philadelphia, PA 19104
Telephone: (215) 898-0376
Fax: (215) 573-2048
E-Mail: eero@central.cis.upenn.edu

Dr. Larry Sirovich
Laboratory of Applied Mathematics
Mount Sinai School of Medicine
Box 1023
New York, NY 10029-6574
Telephone: (212) 241-3994
Fax: (212) 369-7516
E-Mail: chico@camelot.mssm.edu

Dr. Warren Smith
Biomedical Engineering Program
California State University - Sacramento
6000 J Street
Sacramento, CA 95816-6019
Telephone: (916) 278-6458
Fax: (916) 278-5949
E-Mail: smithwd@csus.edu

Dr. David Tank
Biological Computation Research Department
Bell Laboratories
Murray Hill, NJ 07974
Telephone: (908) 582-7058
Fax: (908) 582-2451
E-Mail: dwt@physics.att.com

Dr. Philip Ulinski
Department of Organismal Biology
University of Chicago
1027 East 57th Street
Chicago, IL 60637
Telephone: (312) 702-8081
Fax: (312) 702-0037
E-Mail: pulinski@midway.uchicago.edu

Dr. Alan Yuille
Smith-Kettlewell Eye Research Institute
2232 Webster Street
San Francisco, CA 94115
Telephone: (415) 561-1620
Fax: (415) 561-1610
E-Mail: yuille@skivs.ski.org


OBSERVERS AND NSF STAFF LIST

Dr. Radhakishan Baheti
ENG/ECS Room 675
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1339
E-Mail: rbaheti@nsf.gov

Dr. Martin Beckerman
Computer Science and Mathematics Division
Oak Ridge National Laboratory
P.O. Box 2008
Oak Ridge, TN 37831 -6364
Telephone: (423) 574-7514
Fax: (423) 574-0680
E-Mail: beckermanm@ornl.gov

Dr. Joseph Bordogna
ENG/OAD Room 505
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1033
E-Mail: jbordogn@nsf.gov

Dr. Linda Bushnell
U.S. Army Research Office
P.O. Box 12211
Res. Triangle Park, NC 27709
Telephone: (919) 549-4319
Fax : (919) 549-4354
Email: bushnell@aro-emh1.army.mil

Dr. John Cherniavsky
CISE/CDA Room 1160
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1980
E-Mail: jchernia@nsf.gov

Dr. Mary Clutter
BIO/BIO Room 605
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1400
E-Mail: mclutter@nsf.gov

Dr. Frederica Darema
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1900
Fax: (703) 306-0577
E-Mail: fdarema@nsf.gov

Dr. Otto Friesen
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
E-Mail: wfriesen@nsf.gov

Dr. Lawrence Goldberg
ENG/ECS Room 675
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1339
E-Mail: lgoldber@nsf.gov

Dr. Genevieve Haddad
Air Force Office of Scientific Research
110 Duncan Avenue
Suite B115
Bolling AFB, DC 20332-8080
Telephone: (202) 767-5023
Fax: (202) 404-7475
E-Mail: gen.haddad@afosr.af.mil

Dr. Maryanna Henkart
BIO/MCB Room 655
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1440
E-Mail: mhenkart@nsf.gov

Dr. Peter Katona
The Whitaker Foundation
1700 N. Moore Street
Suite 2200
Rosslyn, VA 22209
Telephone: (703) 528-2430
Fax: (703) 528-2431
E-Mail: p00019@psilink.com

Dr. Stephen Koslow
Director
Division of Neuroscience and Behavioral
Science
National Institute of Mental Health
5600 Fishers Lane
Room 11-103
Rockville, MD 20857
Telephone: (301) 443-3563
Fax: (301) 443-1731
E-Mail: koz@helix.nih.gov

Dr. Ranganathan Krishnan
School of Natural Sciences
Institute for Advanced Study
Olden Lane
Princeton, NJ 08540
Telephone: (609) 734-8174
Fax: (609) 924-8399
E-Mail: rangasns@ias.edu

Dr. Herbert Levitan
EHR/DUE Room 835
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1666
E-Mail: hlevitan@nsf.gov

Dr. Richard Nakamura
Director
Office of Science Policy and
Program Planning
National Institute of Mental Health
Room 17C26
5600 Fishers Lane
Rockville, MD 20857
Telephone: (301) 443-4335
Fax: (301) 443-3225
E-Mail: nrn@helix.nih.gov

Dr. Larry Reeker
CISE/IRI Room 1115
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1926
E-Mail: lreeker@nsf.gov

Dr. Howard Shrobe
Assistant Director, Intelligent Systems and Software
Information Technology Office
ARPA
3701 N. Fairfax Drive
Arlington, VA 22201
Telephone: (703) 696-4466
Fax: (703) 696-2202
E-Mail: hshrobe.arpa-4@darpa.mil

Dr. Kamal Shukla
BIO/MCB Room 655
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1444
E-Mail: kshukla@nsf.gov

Dr. Rolf Sinclair
MPS/PHY Room 1015
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1809
E-Mail: rsinclai@nsf.gov

Dr. Michael Steuerwalt
MPS/DMS Room 1025
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1878
E-Mail: msteuerw@nsf.gov

Dr. Fred Stollnitz
IBN/BIO Room 685
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1419
E-Mail: fstollni@nsf.gov

Dr. John Tangney
AFOSR/NL
110 Duncan Avenue
Suite B115
Bolling AFB, DC 20332-8080
Telephone: (202) 767-8075
Fax: (202) 404-7475
E-Mail: john.tangney@afosr.af.mil

Dr. Bruce Umminger
BIO/IBN Room 685
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1420
E-Mail: bumminge@nsf.gov

Dr. Paul Werbos
ENG/ECS Room 668
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1339
E-Mail: pwerbos@nsf.gov

Dr. Hollis Wickman
MPS/DMR Room 1065
National Science Foundation
4201 Wilson Boulevard
Arlington, VA 22230
Telephone: (703) 306-1818
E-Mail: hwickman@nsf.gov

Appendix 3: '1-page thoughts' (pre-Workshop)

Prior to this multidisciplinary workshop we asked the participants to submit one page of their thoughts on the important issues, research problems, and research directions that such multidisciplinary discussions should address. Nearly all the invited participants provided such 1-pagers. While many of the ideas were addressed in the two-day workshop and are reflected in the summary above, we felt it useful to include this compendium in the report as well.


Open Problems in Neurobiology:

I am very optimistic about the progress of neurobiology and the prospects for collaborative research, particularly for understanding the visual cortex. Workers in computer vision are developing useful practical systems which demonstrate good engineering and computational understanding of visual processing. Similarly neurobiologists and psychophysicists are obtaining fascinating experimental results. There is still the need, however, to link the computational theories to experiment and to design mathematical theories of vision which are capable of encompassing all known visual phenomena (rather than just having a "bunch of tricks"). Hopefully understanding of the visual cortex will give insights into understanding the rest of the brain.

I think theories of vision based on statistical estimation, particularly Bayesian techniques, offer the best chance of a mathematical framework for all vision. Theories of this type (for example, work by Mumford, Ullman, and Poggio) appear to map nicely onto the known visual cortex, but considerable work remains to match these, or other, theories to experimental data. Recent experimental results coming from Schiller's laboratory at MIT suggest that computational theories of this type may even be partially implemented as early as V1. These theories, in turn, seem to agree closely with psychophysical experiments by Nakayama and others.

Open problems in vision therefore include: (i) understanding image segmentation and the role of V1, (ii) understanding the roles of MT and MST in motion processing, (iii) understanding object recognition and scene understanding, and in particular the role of the feedback and lateral connections in the visual cortex, (iv) developing a mathematical framework for vision, and (v) investigating how computational models can be implemented using realistic neurons. All these problems require an interdisciplinary approach combining mathematics, engineering, psychophysics, and neurophysiology.


Issues on 'Small' Systems:

Question 1:

Some of us study simple, or "small" systems because we take them as models for large systems, with the hope that we can develop principles that will apply to mammalian systems. Is this appropriate? Or are there qualitative differences between systems of such different scales? Are there changes that occur at some point simply as a function of scaling up, which have nothing to do with changes in the underlying mechanisms of action? If so, what could they be? Can we identify them? Which types of principles are most vulnerable to such scale changes?

Are there types of connectivity patterns that will tend to bring out these differences? For example, are some recurrent loops, or loops of some length, more apt to produce true differences between large systems, which have many diverging and converging pathways, and small systems, which are necessarily more limited in their circuitry?

Question 2:

Given the evidence that circuits can be reconfigured from moment to moment, and that neurons can qualitatively alter their patterns of activity abruptly as their states change, single-cell recording methods are going to be inadequate to understand any neural circuit, whether it be a "simple system", a cortical circuit, or a large neuronal circuit. These realities will often be obscured if one cannot record from several closely related neurons simultaneously under a variety of conditions, and/or if neurons of the circuit are not identifiable from one animal to another.

How then do we study the details of circuit function in any system more complex than the STG? This question clearly applies to both motor and sensory systems, and to the spinal cord as well as the cortex. Can mathematicians help us to identify the salient features of singularities that might provide clues to the underlying properties of a system? Can engineers or physicists help us develop new methodologies for recording stably from large systems of neurons? Can the mathematicians then help us find ways to pull out the important information from the masses of data we record?


Open Issues in Neuro-Info-Science and Engineering

Neuronal codes in sensory systems have frequently been interpreted as if they were unidimensional, i.e., as if some simple spike counter were adequate to describe the information that they carry. What dynamics did occur were studied in relation to the dynamics of the stimulus. Over the past several years we and others have found that there is stimulus-related information in the pattern as well as in the strength of neuronal responses in the visual system. Eckhorn and his collaborators (1989) and Gray and Singer (1989) found oscillatory activity that was synchronized across different locations within the visual cortex and, following an idea first proposed by von der Malsburg (1986), they suggested that synchronous oscillatory activity could be used by higher centers to combine neural information about different spatial aspects of a perceptually unified stimulus.
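
A standard first tool for quantifying the synchrony described above is the cross-correlogram of two simultaneously recorded spike trains. The following is a minimal sketch on surrogate data (the rates, the shared-drive construction, and all parameter values are illustrative assumptions, not measurements):

    import numpy as np

    # Minimal sketch: cross-correlogram of two binned spike trains.
    # A shared drive injects coincident spikes, mimicking synchrony.
    rng = np.random.default_rng(0)
    dt, T = 1e-3, 10.0                      # 1 ms bins, 10 s of data
    n = int(T / dt)

    shared = rng.random(n) < 0.02           # assumed common input
    train_a = shared | (rng.random(n) < 0.01)
    train_b = shared | (rng.random(n) < 0.01)

    max_lag = 50                            # examine lags of +/- 50 ms
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = [np.sum(train_a[max(0, -k):n - max(0, k)] &
                    train_b[max(0, k):n - max(0, -k)]) for k in lags]
    print("coincidence peak at lag (ms):", lags[int(np.argmax(xcorr))])

Synchronous trains produce a peak at zero lag; an oscillatory common drive would additionally produce satellite peaks at the oscillation period.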

While it has long been known that spinal and brainstem neurons in motor systems carry dynamic movement signals, the dynamics of cortical signals have not been as clearly related to movements. Schwartz has recently shown that several aspects of the dynamics of practiced movements are simultaneously encoded in the dynamics of primary motor cortex neurons. Again this leads to the conclusion that information is carried in the multidimensional dynamics of neuronal activity.

Given that the dynamics are important, it becomes important to know at what scale they operate. Several studies have found evidence for precise timing of neural impulses within the visual system (Strehler and Lestienne, 1986) and in frontal cortex (Vaadia et al., 1995). Vaadia et al. reported that this type of precise firing pattern becomes significantly more frequent during directed attentive activity. Both Abeles (1991), with his synfire proposal, and Lestienne and Strehler (1987) have proposed mechanisms for such occurrences. Others have debated the need for precise timing as a coding mechanism (Softky, Shadlen).

All of these findings and discussions leave us with at least the following questions: What is the precision of neural codes? Can we decode the different dimensions of the responses, i.e., can we give them any simple interpretation? How are the codes from adjacent neurons related? How does the relationship between neurons constrain the function of networks of these neurons?

All of these questions must be answered to understand cortical function. All of them require data related to the issue, reliable methods of analyzing these data quantitatively, which becomes more and more difficult as the dimensionality of the data increases, and solid theoretical frameworks in which to interpret the results. This requires that the experimentalist provide suitable measurements for the theoreticians and that theoreticians provide frameworks that are presented in terms of reliably measurable variables.


Systems and Control Problems that Arise in Visual Attention and Ocular Control:

Biologists and neuroscientists have long been interested in the science of perception. Their investigations have come to a point where we have begun to understand the various subsystems that are responsible for recognizing a visual signal and generating the appropriate control commands to the muscles in the coordination of, for example, the eye and neck as a control system. For example, we now understand how the eye is kept focused on an object while the neck is in motion. Various psychological studies have also been made for touch and audio signals, and one has a fairly general idea of how this information is integrated in the visual cortex.

Research in vision has revealed a multitude of strategies biological systems use to achieve extremely efficient real-time operation. One of the major objectives of these systems is to provide sophisticated dynamic control of the flow of visual information and the allocation of resources, achieving the desired level of performance for the task at hand with minimal extraneous computation. Such a system can be viewed as being composed of two major subsystems. The first is a highly focused attentive system that provides detailed processing of a highly restricted region of the available visual information in a serial fashion. The second system, operating in parallel, carries out a more global survey of the scene, which is used to alert the animal to potentially new and dangerous objects as well as to guide the allocation of the more expensive focal attentive processing.

Interest in vision from the point of view of a control scientist is somewhat more recent, with a major emphasis on machine vision as applied to problems in robotics. One of the major problems here is visual servoing, with vision in the feedback loop, where the objective is to visually navigate a robotic manipulator for the purpose of tracking and grasping in a dynamic and uncertain environment. The visual sensor usually chosen is a CCD camera.

While experiments continue to fine-tune the underlying paradigms in visual signal processing that are common between neuroscientists and control scientists, it is equally important to understand the underlying mathematical structure, both at a higher component level and at a lower neural level, and to understand how the models obtained from these two viewpoints interact. For example, at a higher component level an important problem is to study the rotation of the eye and neck in stabilizing vision and directing gaze towards a moving object. This can be viewed as a subclass of nonlinear control problems. In this paradigm, one replaces the eye and its muscle attachments by a nonlinear differential equation with the muscle tensions as the actuating control commands. The nonlinear dynamical system so obtained can be used for gaze control as an example of nonlinear regulation and tracking. The neck and its muscle attachments can be similarly replaced by a nonlinear differential equation, and one can study the 'neck control' problem likewise. Our grand challenge would be to consider the artificial 'gaze-neck' as a nonlinear control problem, with a pair of eyes placed on a neck mechanism and actuating signals derived from visual signals. From the point of view of neural modelling, it is important to obtain models of attentional mechanisms for the purpose of forming position, scale, and motion invariant object representations. In this paradigm, one would use control neurons to dynamically modify the synaptic strengths of intra-cortical connections.
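
To fix notation for the component-level paradigm just described, a generic form (illustrative only; the specific functions are not given in this report) is

    J \ddot{\theta} + B(\theta, \dot{\theta})\,\dot{\theta} + K(\theta) = \tau(u),

where \theta is the gaze (or neck) angle, K and B capture the elastic and viscous properties of the plant, u collects the muscle tensions, and \tau(u) is the torque they generate. Gaze control is then the nonlinear tracking problem of driving the error e(t) = \theta(t) - \theta_{ref}(t) to zero for a moving target, with a second equation of the same form for the neck, coupled to the first through the head-fixed reference frame.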

Very little has been done on component-level modeling of biosystems from a dynamical systems point of view. We are only beginning to understand appropriate models of the eye and neck as a control problem. Even less is understood about how component-level models would interact with neural models that control and actuate muscles.


What are the open problems in neurobiology and how might mathematicians/physicists/engineers help advance knowledge in the field?

The big questions in neurobiology -- at least at the systems, behavioral and psychobiological levels -- remain largely unanswered. These are the questions that attracted most of us to the field and include such questions as: What is consciousness? What is the relation between thought processes and neural processes? What occurs in the brain during the consideration, initiation and carrying out of a motor act? How is a decision to act reached? How are complex sensory stimuli represented? How are various forms of memory stored and retrieved? How are emotional or motivational processes represented and linked to more cognitive processes? Hypotheses and suggestions have been advanced to answer such questions and some relevant experimental results have been obtained, but we are still far away from satisfying answers. Mathematicians/physicists/engineers do not seem to be any better equipped than anyone else for resolving these very general and high-level questions. However, as a neurobiologist working on more limited questions, I can see a number of ways in which mathematicians etc. might be able to help with theory and techniques.

In the realm of theory, one of the major problems is that of understanding the operation of multiple centers or regions that are reciprocally or recurrently interconnected. I wonder if there is a mathematics that is adequate for such richly interconnected centers. The problem is made much more difficult, perhaps even insoluble, by the fact that neural centers are not connected by single lines conveying a simple scalar quantity but by thousands of parallel lines conveying nerve impulses. How can such patterns of nerve impulses be described and understood? Recent work indicates that the relative timing of impulses, down to the microsecond level, conveys important information. A complete description of the temporal relations among n neurons appears to require an n-dimensional space, but spaces of more than 3 dimensions are hard to work with and grasp intuitively.

The problem of large numbers of neurons or fibers is made a little more tractable by the modular character of many brain regions. It would seem that theorists or mathematicians might be able to help understand the function carried out by a small region (the canonical circuit) of the neocortex, the cerebellar cortex, the tectum, the olfactory bulb, the thalamus, or the retina. They might help suggest the best way of describing the transformations that occur between the input and the output of such regions, quite independently of the specific information being processed. Again, hypotheses and suggestions have been made, but a clear understanding of local circuit processing and its function does not seem to have been attained for any region.

Mathematicians appear to have a special ability to stand back from a set of findings and uncover an underlying structure of categories or relationships that might or might not fit some mathematical structure. The mathematical analysis of hierarchy among visual cortical areas in a recent Science article is an example of this process. This ability of mathematicians and the independence that they have from the bias of a particular preparation or system (in contrast to experimentalists) could enhance the conceptual base of neurobiology.

Most of my daily frustration as an experimentalist comes from technical limitations such as obtaining good intracellular recordings from small but important cell types or in analyzing large bodies of data. I only occasionally sense the limitations of my conceptual or theoretical tools. Any help that mathematicians/physicists/engineers can provide with the technical problems of obtaining and analyzing data is welcome (improved voltage sensitive dyes and methods for analysing the data that they yield, for example).


Issues in Neuro-info-Science & Engineering:

One important set of open problems springs from the concept of "population encoding". Information in much of the brain seems to be represented using groups of cells whose tunings (in a classical receptive field sense) collectively cover some range of parameter values. Computational researchers can help to advance knowledge in this area by building our understanding of this type of representation. In particular, one would like to have answers to the following types of questions:

How can we recognize whether two cells are part of the same population (in the sense that they jointly encode the same information)?
Given a population of neurons, what measurements (using single or multiple-cell recordings) need to be carried out in order to characterize its behavior?
What characteristics of a computational population are responsible for behaviors (or psychophysical performance) on tasks such as detection or discrimination? Can we predict performance in such tasks from population characteristics?
How does a population respond to superposition of stimuli? Can we account for psychophysical effects such as masking or adaptation?
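
One concrete instance of a population code, useful for making the above questions precise, is a set of cosine-tuned cells read out by a population vector. The sketch below is illustrative only; the tuning model, cell count, and noise model are all assumptions:

    import numpy as np

    # Minimal sketch: cosine-tuned population encoding a direction,
    # decoded by the population-vector method. Parameters are assumed.
    rng = np.random.default_rng(0)
    n_cells = 64
    preferred = rng.uniform(0, 2 * np.pi, n_cells)  # preferred directions

    def population_response(stim, base=10.0, gain=8.0):
        """One trial of Poisson spike counts for a stimulus direction."""
        rates = base + gain * np.cos(stim - preferred)
        return rng.poisson(rates)

    def population_vector(counts):
        """Sum preferred-direction vectors weighted by rate deviation."""
        w = counts - counts.mean()
        x = np.sum(w * np.cos(preferred))
        y = np.sum(w * np.sin(preferred))
        return np.arctan2(y, x) % (2 * np.pi)

    stim = 1.2
    est = population_vector(population_response(stim))
    print(f"true {stim:.2f} rad, decoded {est:.2f} rad")

Within such a toy model the questions above become sharp: two cells belong to the same population if they share this joint tuning structure, and detection or discrimination performance can be computed directly from the population's noise statistics.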


Central Problems in Neurobiology:

Determine the Composition, Organization and Dynamics of functional neuronal ensembles.
Discover the communication codes used by neurons and neuronal ensembles.
Discover the mechanisms for the creation and modulation of neuronal ensembles.
INTERACTION BETWEEN MATHEMATICIANS, PHYSICISTS AND NEUROBIOLOGISTS:
Develop new methods to collect and analyze data from multi-element recordings (multiunits, many pixels from video frames, multiple EEG electrodes or magnetic detectors, etc.). This includes methods for spike sorting, new ways to display multi-detector data, methods for exploring and displaying the (usually nonlinear) interactions among ensemble members, methods for compressing and searching large databases, applications of tools from nonlinear dynamics to neuronal function and structure, and so on.
Develop objective, robust and efficient methods to distinguish between signal and artifact.
Investigate the mathematical and informational aspects of neural codes.
Develop improved computational and mathematical approaches to modeling of neuronal ensembles and single neurons.
Study the topology of functional organization of the nervous system.


Some Open Questions in Neuroscience:

The number of open questions in Neuroscience is immense, yet it is very important to choose the problems we study carefully in order to maximize the benefits of our research. I will choose to concentrate on several major problems. I believe that these problems are of major conceptual importance. Furthermore we have reached a stage in which the experimental and theoretical tools at our disposal may be sufficient for tackling them.

One of the main problems is to study the multi-neuronal code exhibited in firing rates. Multi-channel recordings at high temporal resolution have made it possible to obtain spike train data from many neurons simultaneously. It is thus possible to study real spatio-temporal codes. However, since such a code is high-dimensional, it is very difficult to visualize or analyze. Modern statistical and neural network techniques, with the aid of powerful computers, have a chance of cracking this code. Some recent work has started addressing these questions; however, the complexity of these types of problems should not be underestimated, and it may take many more years until they are resolved. We expect that the nature of the codes used reflects the properties of the neural machinery as well as (statistical) properties of the environment.
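
As one example of the kind of statistical technique meant here, principal component analysis can compress simultaneously recorded spike counts into a few population variables. A minimal sketch on surrogate data (the latent-signal construction and all sizes are illustrative assumptions):

    import numpy as np

    # Minimal sketch: PCA on a trials x neurons matrix of spike counts.
    rng = np.random.default_rng(0)
    n_trials, n_neurons = 200, 50

    # Surrogate data: one shared latent signal drives all neurons, plus noise.
    latent = rng.normal(size=(n_trials, 1))
    loadings = rng.normal(size=(1, n_neurons))
    counts = latent @ loadings + 0.5 * rng.normal(size=(n_trials, n_neurons))

    X = counts - counts.mean(axis=0)             # center each neuron
    cov = X.T @ X / (n_trials - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    explained = eigvals[::-1] / eigvals.sum()
    print("variance explained by top 3 components:", explained[:3])

    scores = X @ eigvecs[:, -1]                  # 1-D population summary per trial

Real spatio-temporal codes are unlikely to be linear or one-dimensional, which is exactly why the problem is hard; this sketch only illustrates the simplest member of the family of techniques.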

Another major question is that of neural plasticity, which is closely related to the first question. Plasticity can be considered a change in the neural code in response to the environment. Experiments have revealed plasticity both at the system level and at the cellular level. The experiments are becoming detailed enough for us to determine the functional form of the plasticity. Different details of the form of synaptic plasticity reflect different features that are singled out as significant by the cortex. When analyzing high-dimensional data such as that obtained from multi-electrode recording, we are faced with the question: which features in these complex data are important? This is similar to the question faced by our brains when confronted with the high-dimensional data conveyed by our senses. Plasticity is the method used by our brains to sort out these data and to single out the important features in them. The contribution of physical and mathematical scientists to this domain is to analyze and simulate different learning rules in order to figure out which ones fit better with experimental results, as well as what the information processing implications of different rules are.
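
A minimal example of "analyzing and simulating a learning rule" in this sense is the Hebbian rule with Oja's normalization, which extracts the principal component of its input statistics, i.e., it singles out the dominant feature in the data. Everything below (input model, rates, parameters) is an illustrative assumption:

    import numpy as np

    # Minimal sketch: Oja's rule for a single linear unit y = w . x.
    # For small eta, w converges toward the leading principal
    # component (the dominant direction) of the input covariance.
    rng = np.random.default_rng(0)
    n_inputs, eta = 10, 0.01
    w = rng.normal(scale=0.1, size=n_inputs)

    dominant = rng.normal(size=n_inputs)         # assumed dominant input direction
    dominant /= np.linalg.norm(dominant)

    for _ in range(5000):
        x = 2.0 * rng.normal() * dominant + 0.3 * rng.normal(size=n_inputs)
        y = w @ x
        w += eta * y * (x - y * w)               # Hebbian term with Oja decay

    print("alignment with dominant direction:",
          abs(w @ dominant) / np.linalg.norm(w))

Comparing the fixed points and stability of such rules against measured forms of synaptic plasticity (e.g., rate- or timing-dependent LTP and LTD) is the kind of analysis proposed here.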

Another problem of major interest concerns the mechanisms underlying the binding of entities, both on a short time scale, such as combining a few words into a sentence, and on a longer one, such as binding the telephone to the desk it sits on. We have very little idea of how mental functions related to binding are performed, or whether there is a common mechanism for language, vision, and other difficult brain tasks, but these questions are of immense importance both for understanding human brains and for constructing somewhat intelligent machines.


Dynamics for the Neurosciences:

Here is a list of three problem areas within dynamical systems theory that I believe are critical to understanding the dynamical properties of neural networks (and many other systems):

1. The qualitative dynamics of singularly perturbed and hybrid dynamical systems.

Neural systems make evident use of multiple time scales in their function. "Bursting" oscillations that combine slow rhythmic variations with action potentials during a portion of the slow oscillation are common. Singular perturbation models dependent upon two infinitely separated time scales have been formulated for such processes, but the systematic mathematical theory for systems with finitely separated time scales concentrates on cases in which the fast attractors within the system are equilibrium points. Furthermore, there has been little attempt to look at the dynamics of multiple time scale systems on times that are long relative to the slow time scale. The best developed parts of the theory examine periodic trajectories of multiple time scale systems. Further development of these areas of mathematics is feasible and presents rich, challenging problems for dynamical systems theory. The results of such work should contribute directly to our insight into biological motor control and other dynamical processes of neural systems.
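
In the standard singular perturbation setting alluded to above, a fast-slow system is written (a generic textbook form, not a specific model from the workshop) as

    \dot{x} = f(x, y), \qquad \varepsilon\,\dot{y} = g(x, y), \qquad 0 < \varepsilon \ll 1,

with x the slow and y the fast variables. Bursting then corresponds to the fast subsystem being carried back and forth, as x drifts, between an equilibrium (quiescent) attractor and a limit-cycle (spiking) attractor; the mathematical difficulty described above is precisely that the existing theory is most complete when the fast attractors are equilibria.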

2. The numerical analysis of bifurcations in multiparameter vector fields.

The application of dynamical systems theory to other fields frequently depends upon the dynamical analysis of models that depend upon many parameters. Fitting parameters within the models is hindered by our inability to measure many parameters directly. Use of the models is hindered by our inability to readily calculate bifurcations; i.e., those parameters at which qualitative changes in the dynamics of the system occur. Classification of generic bifurcations has been one of the major themes within dynamical systems theory. Numerical computations with many examples have demonstrated that automated calculation of bifurcation diagrams is a difficult and challenging problem. Such computations are a necessary step in determining the biological implications of models. Even more challenging is the problem of optimizing the fit between dynamical models and experimental data as model parameters are varied. The numerical analysis of dynamical systems and their bifurcations is an active research area that deserves increased attention and support. It provides algorithmic underpinnings important in the analysis of large classes of simulation models.
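
As a scalar caricature of what automated bifurcation calculation involves, one can track an equilibrium branch by Newton continuation in a parameter and detect the fold where it disappears. This toy sketch (for the normal form dx/dt = mu - x^2, which has a saddle-node at mu = 0) is an illustration, not a production algorithm:

    import numpy as np

    # Toy sketch: continuation of an equilibrium of f(x, mu) = mu - x**2.
    # The stable branch x* = sqrt(mu) folds at the saddle-node mu = 0,
    # where df/dx at the equilibrium passes through zero.
    f  = lambda x, mu: mu - x**2
    fx = lambda x, mu: -2.0 * x                  # df/dx

    x = 1.0                                      # start on the branch at mu = 1
    for mu in np.arange(1.0, -0.05, -0.01):
        for _ in range(20):                      # Newton iteration for f = 0
            x -= f(x, mu) / fx(x, mu)
        if abs(fx(x, mu)) < 0.05:                # vanishing df/dx flags the fold
            print(f"saddle-node detected near mu = {mu:.2f}, x = {x:.3f}")
            break

Real problems require the same ingredients (root tracking plus a test function for each bifurcation type) in many dimensions and many parameters, which is where the difficulty lies.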

3. The relationship between the network architecture of systems of coupled oscillators and the resulting dynamics.

The only hope that we have for an intelligent understanding of the dynamics of vertebrate neural systems is that the hierarchical organization of such systems will allow us to decompose them into components with a relatively small number of degrees of freedom. We simply cannot cope with the complexity of systems with large numbers of degrees of freedom unless there are principles that lead to significant simplification of the relevant parts of the system dynamics. We know almost nothing about how the structure of a multi-compartment system constrains its behavior (beyond analysis of the role of symmetry when it is present).


Dynamics and Thermodynamics on Complex Landscapes: The Protein Folding Problem:

I am sure that during the discussions in WG3, "Large populations of neurons," kinetic and search problems on complex landscapes will be an important topic. This problem, however, has been explored in several other fields of biophysics in addition to neural computation. Therefore, I believe this program should be more open to related problems and not be restricted to purely "neuronal" questions. Below I give a short personal description of the protein folding problem in this context.

Protein folding is a collective self-organization process, conventionally described as a chemical reaction. However, this process generally does not occur by an obligate series of discrete intermediates, a "pathway," but by a multiplicity of routes down a folding funnel [1,2,3,4,5,6,7]. Dynamics within a folding funnel involves the progressive formation of an ensemble of partially ordered structures, which is transiently impeded in its flow toward the folded structure by trapping in local minima on the energy landscape. As one proceeds down the folding funnel, the different ensembles of partially ordered structures can be conveniently described by one or more collective reaction coordinates or order parameters. Thermodynamically this funnel is characterized by a free energy that is a function of the reaction coordinate and is determined by the competition between the energy and the entropy. A crucial feature of the funnel description is the concerted change in both the energy and the entropy as one moves along the reaction coordinate. As the entropy decreases, so does the energy. The gradient of the free energy determines the average drift up or down the funnel. Superimposed on this drift is a stochastic motion whose statistics depend on the jumps between local minima. To first approximation this process can be described as diffusion. Folding rates are determined both by the free energy profile of the funnel and by the stochastic dynamics of the reaction coordinates.
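
In the simplest formal rendering of this picture (standard energy-landscape notation, not taken verbatim from the cited work), the funnel is summarized by a free energy along a collective reaction coordinate Q,

    F(Q) = \bar{E}(Q) - T\,S(Q),

whose gradient gives the average drift, while trapping in local minima enters through a configurational diffusion coefficient D(Q). To first approximation the population P(Q, t) then obeys a drift-diffusion (Smoluchowski) equation,

    \frac{\partial P}{\partial t} = \frac{\partial}{\partial Q}\left[ D(Q)\left( \frac{\partial P}{\partial Q} + \frac{1}{k_B T}\,\frac{\partial F}{\partial Q}\,P \right) \right],

and folding rates follow from this free energy profile together with D(Q), which are exactly the two ingredients named above.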


Small Systems:

Every nervous system is composed of networks of interconnected single elements called neurons. These networks allow an animal to integrate information about its external and internal environment and produce behavior that is appropriate to that context. How the nervous system performs this integration, transforming particular inputs into useful output, is what we are trying to understand. The focus of this working group is on the fact that the neurons comprising these networks are not simple elements - each neuron within any network is most likely different from every other neuron if examined carefully. Over the past thirty years, neuroscientists, with the help of continual technical advances, have been able to gather amazing amounts of data about the biophysical and anatomical properties of individual neurons and their interconnections. It has been found that neurons can vary in morphology (number of dendritic branches, number of dendritic spines or input sites, and diameter and area of the soma, dendrites, dendritic spines and axons) and in passive properties (membrane resistivity, input resistance, time constants, and space constant). Sodium, potassium, calcium and chloride ions are the electric charge carriers for most neurons. A single neuron may have five to ten different types of ion channels. The opening and closing of ion channels in the neuronal cell membrane is stochastic; the probability of opening and closing for different channels is influenced by voltage, chemicals released from other neurons and the neuron's previous activity. Therefore, neurons also vary in the types of ion channels, the conductances and kinetics of these channels and the channel distributions. All of the above properties endow individual neurons with particular integrative characteristics and specific firing properties.
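
Even the simplest caricature of these properties, a leaky integrate-and-fire unit, already shows how the passive parameters set a cell's integrative behavior: the membrane resistance and capacitance fix the time constant over which inputs are summed. A minimal sketch, with all parameter values assumed for illustration:

    import numpy as np

    # Minimal sketch: leaky integrate-and-fire neuron (parameters assumed).
    dt, T = 0.1e-3, 0.2                          # time step and duration (s)
    R, C = 100e6, 200e-12                        # 100 MOhm, 200 pF -> tau = 20 ms
    tau = R * C
    v_rest, v_thresh, v_reset = -70e-3, -54e-3, -70e-3
    I = 200e-12                                  # constant input current (A)

    v, spike_times = v_rest, []
    for step in range(int(T / dt)):
        v += dt / tau * (v_rest - v + R * I)     # passive integration
        if v >= v_thresh:                        # threshold crossing -> spike
            spike_times.append(step * dt)
            v = v_reset

    print(f"{len(spike_times)} spikes in {T * 1e3:.0f} ms")

Each biophysical feature listed above (additional channel types, stochastic gating, dendritic structure) replaces one of these caricatured assumptions with something richer, which is precisely what makes the variability among real neurons so hard to summarize.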

Despite our tremendous knowledge about neuronal properties, the number of insights into how variations in neuronal properties influence the behavior of the neurons has remained, in comparison, relatively limited. One of the limitations has been our own brain, which, for most of us, has a limited ability to integrate this ever-increasing amount of data. We have been aided in recent years by computational approaches in which mathematical analysis and computer simulation are used to model the structure and function of the nervous system. Again, however, the number of new insights remains limited. The neurobiologists in this working group will outline for the other participants the types and complexity of our data set, as outlined above, and how the output of small networks of these neurons, as exemplified by the presentation of Dr. Calabrese, depends on the properties of its elements. The goal is to discover mathematical, statistical, and theoretical approaches that physicists and engineers may have used to address or answer analogous issues in their disciplines.


Neurobiology:

These are very numerous, but some that seem particularly promising for interaction of theory and experiment include:

  1. Neural coding:
    1. what aspects of neural responses carry information about the world and/or about the animal's state, and how can useful experiments be formulated that can quantitatively address this question?
    2. how *should* the nervous system represent information, according to optimization or computational principles (e.g. redundancy reduction), and how can such considerations be used to derive experimentally testable hypotheses?
    3. How can the representations found in areas like V1 be quantitatively related to our psychophysical performance; what variables (e.g. statistics of responses, not just response selectivity) need to be measured to test such hypotheses?

  2. Neural circuits and dynamics: can we analyze and understand the behavior of models having sufficient realism to capture relevant time scales of neural cells and circuits? In particular, some of the issues that arise in attempting to understand the cerebral cortical circuitry include:
    1. inputs are conductances, having reversal potentials and altering the time constant of the cell, rather than currents (see the sketch following this list);
    2. excitatory and inhibitory neurons have different dynamical properties, e.g. in cerebral cortex excitatory cells show spike rate adaptation whereas most inhibitory neurons do not;
    3. synaptic inputs cause conductance changes on at least two time scales: there are both fast and slow excitatory and inhibitory synapses;
    4. synapses themselves have rapid dynamics, e.g. paired pulse facilitation or depression depending on the synapse, that lead to large changes in their efficacies.
    5. many favorite tools of modelers, such as short-range or feature-specific excitation and long-range or feature-nonspecific inhibition, lack any obvious anatomical substrate; these effects, if they exist, must emerge physiologically and/or dynamically, and must be *derived* from the network properties rather than assumed.

  3. Many-cell recording: There is a need for better probes, better spike-sorting methods, and better methods of storing and analysing the resulting data.

  4. Organization and control: how is attention mediated, how are declarative memories stored and accessed, what are the relative roles of 'feedforward' and 'feedback' connections in thalamocortical and corticocortical processing, what is the flow of information in the brain and how is it organized and controlled? Here, we are very much in the dark; abstract theories are not likely to be of much help at this stage, but theoretical attempts to grapple with existing data and thus formulate experimental strategies might be fruitful.

  5. Learning and development: there are many general unknowns. A few of these include:
    1. What learning rules lead to effective representations given the world we evolved in?
    2. Conversely, what might be the computational function of the correlation-based learning rules that seem to exist, what forms of these rules might be consistent with experimental observations to date yet have interesting computational power, and how can the existence of such forms be tested?
    3. What, biologically, makes learning competitive, so that cells come to respond to some patterns at the expense of losing responses to others; and so that different cells come to respond to different patterns, perhaps in such a way that patterns become represented proportionally to their occurrence in the environment?
    4. How can diffuse global reinforcement be utilized in neurally plausible and computationally useful ways, and what are tests for such proposals?
    5. How can circuits be dynamically modified and yet continue to carry "readable" meaning for circuits downstream; or must the entire system from sensory to motor be "co-modified"?
    6. There are a host of specific problems about accounting for the development of specific systems; here, the LGN and primary visual cortex seem to be the best testbeds.

      The key issue is to connect either the *mechanisms* or the optimization/computational *principles* embodied in a given model with experimental tests. Too often, models are judged by comparing the pictures they produce with experimental pictures, but the details of the pictures often depend on inessential aspects of the model, or on issues that the modeler purposely did not address, or even on nonbiological rather than biological elements of the model. Instead, we must use models to understand how different mechanisms and/or principles lead to different outcomes, and, thus, to formulate experiments that can test and distinguish between proposed mechanisms or principles.
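
To make the first point under item 2 concrete (as promised there), the sketch below contrasts a synaptic input modeled as an injected current with the same input modeled as a conductance with a reversal potential: the conductance both drives the cell and shortens its effective time constant. All parameter values are illustrative assumptions:

    import numpy as np

    # Minimal sketch: current-based vs. conductance-based synaptic input
    # to a passive membrane. Parameter values are assumed.
    dt, T = 0.1e-3, 0.1
    C, g_leak, E_leak = 200e-12, 10e-9, -70e-3   # capacitance, leak, rest
    g_syn, E_syn, I_inj = 20e-9, 0.0, 0.4e-9     # synapse and injected current

    v_cur = v_cond = E_leak
    for _ in range(int(T / dt)):
        # current input: fixed drive, membrane time constant unchanged
        v_cur += dt / C * (g_leak * (E_leak - v_cur) + I_inj)
        # conductance input: drive toward E_syn, tau_eff = C / (g_leak + g_syn)
        v_cond += dt / C * (g_leak * (E_leak - v_cond) + g_syn * (E_syn - v_cond))

    print(f"tau: rest {C / g_leak * 1e3:.0f} ms, "
          f"synapse open {C / (g_leak + g_syn) * 1e3:.1f} ms")
    print(f"steady state: current {v_cur * 1e3:.1f} mV, "
          f"conductance {v_cond * 1e3:.1f} mV")

The current-based cell settles at a voltage independent of the synaptic state, while the conductance-based cell saturates below the synaptic reversal potential and responds faster, one reason the distinction matters for cortical circuit models.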

Possibilities for collaborative research between neurobiologists and mathematicians/physicists/engineers:

The key point here I think is that, to be useful, such collaborations must be deeply rooted in data and in experimental work. The theorist must think about some particular system on which experimentalists work and get to know that system, understand our current knowledge and our current experimental capabilities and limitations. This can only happen through deep and frequent interaction with experimentalists. Given such interaction, the theorist can bring his or her tools to bear, hopefully to better and more systematically understand the system. Such interaction leads to new points of view and new questions that may uncover real structure that was not apparent. But in the absence of such interaction, theoretical work is not likely to deeply influence our understanding of the brain -- it will stand apart.


Open Problems in Neurobiology:

Information Processing in Single Neurons:

Obviously, we need to understand how much processing is achieved by single neurons or small neuron assemblies. This is still terra incognita, even though it is so basic.

Neural Development:

One should seek extremely well defined systems, e.g., the development of the lateral geniculate nucleus, and attempt a convergence of mathematical modelling, molecular neurobiology, and neuroanatomy to provide a complete explanation of the developmental strategies of a few brain structures.

Brain Maps and Computational Geometry:

Brain maps associate neurons with tasks and can capture topological characteristics of task (vision, auditory, motion, language) spaces through synaptic connections. Concepts can be borrowed from computational geometry. This might benefit theoretical neurobiology as much as learning theory benefitted it earlier. Neural network theory, insofar as it deals with algorithms, should become a branch of the theory of algorithms and avoid as much as possible its fuzzy roots in neurobiology.

Neural Coding:

A central question remains how the brain codes information. Brain areas for which maps are well established can serve to achieve further advances. Still of particular interest is firing synchronicity for binding of information.

Engineering Approaches to Theoretical Neurobiology:

To test the practical validity of concepts, theoretical neurobiology should adopt engineering approaches for the study of integrative brain functions, e.g., visuo-motor coordination. Integrative brain functions are also a means to study how several brain capacities, e.g., vision and motion, work together.

Brain Implants:

Brain implants are becoming a practical means for the partial restoration of damaged brain functions. The question of how implants should present (e.g., acoustical) information to provide an optimal input to brain tissue should be pursued.

Consciousness:

Forget it for a while.


Themes and Problems of Neurobiology:

Themes:

Recording from very large neural populations (MRI, Optical Imaging, PET, SQUID, Multiple electrodes, etc.)

Analysis of large scale neural databases.

Dynamical models of neural populations.

Architecture, function and dynamics.

Mobilization of neurons by networks.

General organizing principles.

Problems:

Analysis and role of stochasticity.

Extraction of small signals from an active background.

Extension of single neuron activity to dynamics of large populations.

Realistic models.

Neural codes and information content.

New recording instruments with improved spatiotemporal resolution.

Investigation of the above topics requires expertise in a broad range of topics: statistical mechanics; stochastic analysis; dynamical systems; chaos theory; information and coding theory; large scale computations; signal analysis; and more. Perhaps new physical and biological principles and new mathematical concepts will be required. The vastness of the topics and goals will demand the joint effort of biologists, engineers, mathematicians and physicists.


Synthesis of Artificial Neuromorphic Systems as a Tool for Understanding Biological Information Processing and Control:

One of the most fruitful approaches for advancing our understanding of biological intelligence lies in fostering multidisciplinary research efforts between neurobiologists, physicists, engineers, and mathematicians aimed at the SYNTHESIS of adaptive, intelligent machines and devices which are designed and constructed with a level of biological homology that permits these systems to be used as tools for improving our understanding of biological information processing and control systems.

This synthetic approach is usually seen as falling outside the purview of traditional neuroscience research which historically emphasizes the ANALYSIS of biological neural systems and the extraction of basic organizational and functional principles, but does not extend to applying those principles to the creation of artificial systems. Nor is this approach widely embraced by the engineering community, largely because neurally-inspired solutions are not yet sufficiently advanced to routinely outperform more traditional adaptive signal processing and adaptive control techniques. Thus there are currently only a few research laboratories worldwide that are making serious attempts to design and construct artificial neuromorphic systems with the goal of generating insights into biological function.

When considering the complementary approaches of analysis and synthesis of intelligent systems, many of the scientific issues that arise are similar, but other issues are unique to one domain or the other. For example, a neurobiologist concerned with the analysis of a neural circuit involved in pattern generation during locomotion might ask "what are the underlying biophysical interactions that give rise to and modulate the observed phase relationships between individual neurons in this circuit?" On the other hand, a neuromorphic design engineer who wants to synthesize a locomotor control circuit for a robot might ask "what are the essential dynamics required to construct a network of N neuron-like oscillators with M adjustable phase relationships?" Often the questions that arise when considering the synthesis of neuromorphic systems tend to be of a broader and more general nature than those that arise in the analysis of one particular biological model system. Thus synthetic approaches can both broaden and deepen our understanding of functional and organizational principles in biological systems.

A few cross-disciplinary research areas relevant to the synthesis of neuromorphic systems include:
evolutionary algorithms and coding schemes for evolving complex systems
developmental algorithms and analysis of developmental dynamics for self-organization of artificial neural circuits
design principles for constructing/developing/evolving network architectures that meet particular dynamic specifications
real-time emulation of neural circuits in silicon - both analog and digital implementations
biologically-inspired robotics - active sensing, adaptive motor control, sensorimotor and cross-modality integration, adaptive behavior, etc.


Summary of the Issues:

1. Patterns of Collaboration
1.1. Mathematical Structure as a Liberating Perspective

1.2. Multi-Level Modeling
1.3. Sociological Obstructions to Collaboration
1.4. Underemphasized Issues
Clinical Applications
Neural Mechanisms for Social Interactions

2. Small Systems
2.1. Detailed Analysis of Single Cell Function

2.2. Polyfunctional Circuitry
2.4. Canonical Circuits

3. Large Populations of Neurons
3.1. Coding in Neural Systems

Neuronal codes in sensory systems
Neuronal codes in motor systems
Population Encoding
Multi-Neuronal Code Exhibited in Firing Rates
Information vs. Meaning in Neural Systems
3.2. Cortical Dynamics
Cortex as a Dynamical System
Cortical Memory
3.3. Multiple Brain Regions
The Binding Problem
3.4. Nonlinear Dynamics for the Neurosciences and Analogous Subjects
Chaos Theory
Multiple Time Scales
Numerical Analysis of Bifurcations
Protein Folding Problem: The Dynamics and Thermodynamics on Complex Landscapes
Relationship Between Network Architecture/System Decomposition and Dynamics
Statistical Mechanics

4. Systems Perspectives
4.1. Cognitive Neuroscience

From Neuroethology to Cognition
Relating Neural Networks to Behavioral Outcomes
Processing Information in Context
4.2. Self-Organization: The Brain as a Continually Self-Organizing System
Neural Development
Dynamics of Neural Codes
Brain Maps and Computational Geometry
4.3. System-Level Models of Visuomotor Coordination
Control of Saccades
Prism Adaptation for Throwing

5. Autonomous Robots/Brain Synthesis/Neuromorphic Engineering
5.1. Analog VLSI

5.2. From aVLSI to Functioning Robots
5.3. Biologically-Inspired Robotics
5.4. Brain Implants
5.5. Brain Theory <--> Autonomous Robots
Action-Oriented Perception
Styles of Learning
Hand-Arm Robots
Mobile Robots
The Robotic Salamander
Rana Robotrix
Autonomous Flying Vehicles
5.6. Socio-Robotics
5.7. Organizational Principles for Neuroethology, Cognitive Architecture and Autonomous Robots: Principles for the Design of Intelligent Agents
Principle 1: Action-Oriented Perception
Principle 2: Cooperative computation of schemas
Principle 3: Interaction of partial representations
Principle 4: Evolution and modulation
Principle 5: Schema-based Learning
Principle 6: The "Great Move" in the evolution of intelligence
Principle 7: Distributed goal-directed planning
Principle 8: Intelligence is Social

6. Data Analysis and Data/Model Sharing
6.1. Brain Models on the Web: Confronting Models & Data

Putting a Model into Brain Models on the Web
Towards Standards for Neural Simulation Languages
6.2. Imaging Patterns of Neural Activity
6.3. Monitoring a Patient's Brain State


Remarks on Neural Network Modeling:

I want to make a few remarks about the adequacy of the standard neural network formalism. I will argue that too little thought has been given to the computational level associated with the formalism of excitatory and inhibitory network algorithms, and that this can prevent us from being able to map neural circuitry onto desired behavioral outcomes in a simple and plausible way.

My remarks will be exemplified by a dependency structure known in statistics as a 'Wermuth condition' and also referred to as the 'explaining-away' phenomenon in the AI literature. I will make use of an example due to Judea Pearl. Suppose that you have an Alarm system installed in your home. Alarms can be triggered by Burglars. To model this piece of knowledge we might invoke a two-node network, with a node for Burglar, a node for Alarm, and an excitatory link between the nodes. We also want to be able to do inductive inference, that is, to infer that Burglar is more likely if Alarm is observed, so we assume some neurally-plausible version of backpropagation that can go backward along the excitatory link. (We could also make use of Bayes' rule.) This process will have an effective excitatory effect and drive the activation of Burglar higher if the activation of Alarm is increased.

Now imagine that you are driving home because someone has told you that your Alarm is going off, and you are worried about Burglars. On the way home you turn on your radio and discover that there has been an Earthquake in your area. Earthquakes can shake the ground and trigger Alarms, so we add to our circuitry another node for Earthquake with a positive link to Alarm. We also invoke the ability to go backwards, which again has an effective excitatory effect. Note that the entire circuitry is excitatory, in both the forward and backward directions. Thus, hearing about the Earthquake will make Alarm even more likely, which will increase the belief in Burglar. However, the behavioral facts of the matter are clearly that as soon as one learns about the Earthquake one stops worrying about the Burglar. The Earthquake "explains away" the Burglar.

But how can we capture this in our circuitry? Where is the inhibition? It makes no sense to add inhibition between Burglar and Earthquake, because (a) as children we learned about Burglars and Earthquakes at different times, so there was no opportunity for correlative learning; (b) leaving learning aside, if we have to make Burglar inhibit Earthquake then we have to make essentially everything inhibit everything else; and (c) if anything, Burglars and Earthquakes are marginally positively correlated (cf. the looting associated with Earthquakes). So simple fixed inhibition is not adequate. The other standard neural network fix, introducing hidden units, is also not very appealing, because the connections to these hidden units would have to be learned, and that would require going home and finding that indeed there was no Burglar. Moreover, with proper use of probability theory one can INFER that the probability of Burglar should decrease if Earthquake is observed, given only the excitatory influences described above; this does not need to be LEARNED.
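To make the arithmetic of explaining away concrete, here is a minimal Python sketch; the network structure follows Pearl's example, but all of the numerical probabilities are hypothetical, chosen only to exhibit the effect:

    # Pearl's Burglar/Earthquake/Alarm example by brute-force enumeration.
    # All probabilities below are hypothetical, chosen only to illustrate
    # the "explaining away" effect.

    P_B = 0.01      # prior probability of a Burglar
    P_E = 0.001     # prior probability of an Earthquake

    def p_alarm(b, e):
        """Hypothetical table for P(Alarm=1 | Burglar=b, Earthquake=e)."""
        if b and e: return 0.99
        if b:       return 0.95
        if e:       return 0.30
        return 0.001

    def posterior_burglar(evidence_e=None):
        """P(Burglar=1 | Alarm=1 [, Earthquake=evidence_e]) by enumeration."""
        num = den = 0.0
        for b in (0, 1):
            for e in (0, 1):
                if evidence_e is not None and e != evidence_e:
                    continue
                pb = P_B if b else 1 - P_B
                pe = P_E if e else 1 - P_E
                joint = pb * pe * p_alarm(b, e)   # joint with Alarm=1
                den += joint
                if b:
                    num += joint
        return num / den

    print(posterior_burglar())               # P(B | A=1):      about 0.88
    print(posterior_burglar(evidence_e=1))   # P(B | A=1, E=1): about 0.03

With only the excitatory links encoded in the conditional table, observing the Earthquake drives the posterior probability of Burglar down by more than an order of magnitude; nothing was learned, only inferred.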

There are message-passing algorithms that can handle the explaining-away phenomenon. Pearl's original work was restricted to acyclic graphs, but subsequent work has removed that restriction. Some of the currently available algorithms have a "neural" flavor; others do not. The basic concept needed is that of a "Markov blanket." Algorithms that take into account the full Markov blanket of a node are in a position to handle the explaining-away semantics readily; algorithms that do not take the Markov blanket into account make accounting for the phenomenon overly difficult.

There are other dependency phenomena that are as behaviorally sensible as the explaining-away phenomenon. I would argue that these are the kinds of phenomena one should be aiming at in thinking about the computational properties of neural circuits. Simply accepting the linear-plus-sigmoid network with excitatory and inhibitory weights as a computationally adequate formalism, either because of its Turing equivalence or because of its putative relationship to anatomy and physiology, misses the boat (indeed, the standard layered network activation rules do NOT take into account the entire Markov blanket of each node). There needs to be a better computational theory in place to guide research on network algorithms. My own view is that this computational theory is available, at least in part, in the statistical literature on graphical models, where graphs and probability theory are married.


Tools for Data Reduction and the Modeling of High-Dimensional Systems

The understanding of complicated and high-dimensional processes is one of the great challenges in the physical and biological sciences. The enormous increases in widely available computational power have actually made this situation worse. Simulations of complex systems routinely produce gigabytes of data in a matter of seconds. In addition, tools for data acquisition have progressed to the point that information may be collected much faster than it can be processed, or even stored in a memory system. Researchers in scientific computing are creating information much faster than they can comprehend it and, as a consequence, raw data processing capabilities have become an essential rate-determining step in endeavors to understand the world around us.

Data reduction methods are thus essential in attempts to digest the large amounts of information associated with the investigation of high-dimensional processes. In recent years, many promising methods have been developed or adapted for analyzing massive data sets consisting, for example, of sequences of high-resolution images or computer dumps of numerical simulations. In addition, these methods provide a means for remodeling redundant macroscopic models into reduced systems of equations which reflect the intrinsic dimensionality of the process in question. While these tools are still being developed, they may already provide significant benefits to researchers who are currently in need of data reduction methods. Also, the developers of these techniques would benefit greatly from involvement in interdisciplinary research efforts which emphasize applications to challenging real-world problems and their concomitant constraints, rather than basing method design on the performance of the approaches on toy problems.
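As one concrete illustration of the kind of tool in question, the following minimal Python sketch applies the singular value decomposition (the workhorse behind principal component analysis and proper orthogonal decomposition) to a synthetic snapshot matrix; the data and sizes are invented purely for illustration:

    # Sketch: reducing a snapshot matrix to a few dominant modes via the
    # singular value decomposition. The data are synthetic: a
    # 1000-dimensional "field" that secretly lives near a 2-D subspace.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 500)
    modes = rng.standard_normal((1000, 2))
    coeffs = np.stack([np.sin(2*np.pi*3*t), np.cos(2*np.pi*5*t)])
    X = modes @ coeffs + 0.01 * rng.standard_normal((1000, 500))

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 0.99)) + 1   # modes for 99% of variance
    X_reduced = U[:, :k].T @ X                   # k x 500 reduced description
    print(f"{k} modes capture 99% of the variance")   # expect k == 2

The reduced description is hundreds of times smaller than the raw snapshot matrix, which is precisely the kind of compaction the text calls for when the intrinsic dimensionality of a process is far below the ambient dimension of the measurements.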

The human brain has staggering information processing and storage capabilities. The brain's ability to accomplish tasks such as image recognition suggests that it embodies powerful data reduction tools. Mathematically derived procedures for optimal data reduction are still far behind in overall general performance. There will be an enormous payoff if an understanding of biological information processing techniques can be translated into computational algorithms for modeling and model reduction. Efforts in this direction might be best pursued by research groups with appropriately diverse backgrounds in Mathematics, Engineering, Physics and Biology.

Such an approach suggests more than collecting teams of scientists from different academic departments and industry. While this is an essential component, it is necessary that we educate future scientists, engineers and mathematicians with a core curriculum that permits the rapid exchange of ideas across diverse fields. One of the major arguments for teaching a mainstream calculus course, rather than teaching a specialized version for the pure mathematicians and an applied-flavored one for the engineers, is that students should develop a common knowledge base. In addition, special courses which integrate research into the classroom, targeting a broad audience from many disciplines, will go a long way toward remedying the Tower of Babel situation that currently exists across research as a whole.


Open Problems in Neurobiology:

At the intersection of neurobiology, mathematics, physics, and engineering lies a critical problem: the nature and origin of meaning in a physical system. In an engineered system, such as a digital computer, it is straightforward to assign meaning to the processes of the computer because an engineer has designed the hardware to perform specific logical functions and a programmer has written the program and ascribed significance to the input and output. The entire specification is imposed top-down, and the transistors play only bit parts. In the digital system, the meaning of the computation rests outside of the machine. This dissociation between the machine and the meaning is characteristic of explicitly engineered systems. In contrast, evolution is a bottom-up process in which meaning arises from the needs of the organism.

There is a gap in the theoretical tools available to deal with meaningful information. Measures of information transfer have been used in analysis of the bandwidth of photoreceptors in the fly's eye. This kind of analysis is mostly based on the idea that all information is equally accessible and equally valuable. It does not, for example, deal well with "detectors" like the "fly detector" of the frog's retina, which, with a small number of action-potential bits, communicates very significant information to the animal. Thus, the relationship between meaningful information and information transmission in the physical sense remains to be developed.
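For the "physical sense" of information invoked here, a toy Python sketch of Shannon mutual information between a binary stimulus and a small spike count may help; the joint distribution is entirely hypothetical, and serves only to show what the formal measure does and does not capture:

    # Toy sketch: Shannon mutual information between a binary stimulus
    # ("fly present" or not) and a spike count of 0, 1, or 2. The joint
    # distribution below is hypothetical. The measure quantifies how many
    # bits the spikes carry about the stimulus; it says nothing about how
    # behaviorally vital those bits are to the frog.
    import numpy as np

    # Rows: stimulus (0 = no fly, 1 = fly); columns: spike count 0..2.
    joint = np.array([[0.45, 0.04, 0.01],
                      [0.02, 0.08, 0.40]])
    px = joint.sum(axis=1, keepdims=True)      # stimulus marginal
    py = joint.sum(axis=0, keepdims=True)      # response marginal
    mask = joint > 0
    mi = np.sum(joint[mask] * np.log2(joint[mask] / (px @ py)[mask]))
    print(f"I(stimulus; spikes) = {mi:.3f} bits")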

Evolved biological systems include a number of features of self-organization not found in digital computers. First, neuronal systems must maintain their own integrity; homeostatic functions thus exist at all levels, from the subcellular to the network level. It is unclear how the requirement for metabolic optimization influences information processing strategies. It has been proposed that the bandwidth constraint of the optic nerve, for example, dictates a specific information encoding by the retinal circuits. In this example, the metabolic/physical constraint of low bandwidth interacts with computation. In the cortex, a simple homeostatic problem is how to preserve electrical stability in the face of recurrent excitatory connections. Second, the brain "programs" itself in three distinct ways: through development, during learning, and when it simply executes routine functions of the organism. There are any number of important open issues in this area; for example, the problem of local learning rules for recurrent networks. In addition, neural network research has done little to address the problem of dynamic programmability of a network: programmability that does not rely on long-term synaptic changes. A hallmark of this kind of programmability is the use of the same neurons in different contexts to perform different functions. A simple example is found in the parietal cortex, where neurons shift their receptive fields in anticipation of an eye movement.

How information is processed in a context is also an open question. For example, there is no circuit-level description of how psychophysically described processes such as motion capture and perceptual grouping occur. At a cognitive level, an analogous problem arises when ambiguous symbols, such as letters common to Cyrillic and Roman text, are interpreted as belonging to a particular script. This phenomenon shows a switching dynamic similar to the perceptual switching in binocular rivalry. The role of attention in isolating features from their context is also an open problem. It has been demonstrated that the response of neurons in V4 to a stimulus in the presence of a distractor is reduced, and that this reduction can be reversed if the animal attends to the stimulus. By what neuronal mechanism is this function realized?

Meaning is defined only in a context. The method of traditional science and engineering has been to isolate the problem from its context. In these disciplines, the observer is not part of the system, the specifications are not changed by the material, and the animal is anesthetized! The challenge of contemporary science is to find tools to incorporate what has been left out.


Open Issues in Neuro-info-Science & Engineering:

I have the feeling that ongoing discussions about possible opportunities for physical and mathematical scientists to work in neurobiology are in many ways misleading. There is a very long history of scientists who have received all or part of their formal training in the physical sciences going on to make substantial contributions to our understanding of the nervous system. The key to their success -- I believe -- is that they became interested in some particular problem and then worked on the problem. A good example is Walter Heiligenberg, who studied with both Heisenberg and Lorentz and went on to become the dominant worker in the neuroethology of electric fish. There is no doubt that his physical science training helped him in reaching the understanding of, for example, population coding that is the hallmark of his contributions. It is actually difficult to think of significant problems in neurobiology that would not profit from quantitative analysis, so I think physical scientists who are interested should find the problems that most interest them and then go to work.

What are the problems? I do not believe that there is a finite list of unresolved problems in neurobiology. However, a basic division is developing in what remains to be done in cellular neurobiology. A dominant theme in recent years has been the growing realization that many mechanisms at the cellular level are conserved between types of cells and organisms. Our understanding of synaptic transmission and voltage-gated channels, for example, is heavily focused on problems that are general to many types of cells and that increasingly are becoming problems in structural biology. There is a growing need for scientists with very good quantitative skills to work on these problems, but they are -- in many ways -- not problems that are characteristic of the nervous system.

The future of work on the nervous system -- I believe -- clearly lies in understanding how animals (including people) use the nervous system to execute the various categories of behavior. The small groups established for this workshop are a reasonable representation of the problems that we will face as we continue thinking about nervous systems as dynamic entities. The fundamental problems are how animals carry out specific tasks: how do they see, how do they learn, how do they generate movements, and so on.


Neurobiology:

What are the "neural codes" (i.e., the relationships between biologically relevant information and the physical properties of neuronal interaction & signaling)?

What does the local neocortical circuit do (re: information processing)?

What principles underlie multi-stage processing?

What principles underlie the segregation and integration of processing streams?

What are the higher-level organizing principles of mammalian brains? (E.g. cerebellar function; linking of "cognitive" with "emotional" processing; types of memory & how implemented)

Some Methodological Issues:

What new experimental tools/methods can facilitate:

Understanding of local anatomical and functional relationships?

Example of relationships: neuronal connectivity, activity correlations, role of specific receptors and cell types in neuronal circuitry, etc.

Examples of existing methods: biochemical and (more recently) viral tracers of connectivity, multielectrode and optical recording, genetic 'knockout' experiments, etc.

Understanding of system-wide functional relationships?

E.g., correlation of regional activity with behavior, relationship between distinct processing streams

Existing methods include functional MRI, PET

Artificial implementations to increase neurobiological understanding

Computer simulation of known or putative mechanisms

Implementation of putative principles -- not necessarily "simulation". What can such implementation and operation teach us about:

whether the principles are sufficient to account for biologically observed information processing capabilities;

whether an implementation is "biologically plausible";

proposed experimental comparisons to determine whether biological systems function along similar lines?

Collaborative Areas:

Some instances of collaborative research between neurobiology and mathematics, physics, and engineering that can help advance neurobiological knowledge

Experimental tools and methods:

Advances in multielectrode probe systems (hardware design and fabrication, signal processing)

Advances in optical stimulation and activity recording (dye and intrinsic signal based)

Advances in imaging methods (in & beyond fMRI, PET)

Theoretical approaches:
Optimality criteria for neural processing?

Links with information theory, control theory, optimal estimation methods, as applicable to sensory processing, prediction, motor control

In what ways is it biologically desirable to encode neural information (e.g., using spike coding, principal components, factorial coding, etc.)?

What processing architectures are preferred (e.g., on the basis of a putative optimization principle)? Choice of architectures and algorithms:

affects how much training data is required for a learning task;

affects signal-to-noise enhancement under various conditions;

is related to variations across species and environment.

Taking the problem of multiple local optima seriously

Given a putative optimization principle, what developmental or algorithmic considerations determine how well an optimum can be realized?

More abstractly: possible role of rugged landscape theory (Kauffman) for biological learning?

Dynamics

Oscillations -- role & significance in neural processing

Precision of spike intervals within networks (Abeles)


Open Issues in Neuroscience:

Chaos theory is appearing throughout the sciences and is beginning to be applied to the study of the brain (Skarda and Freeman, 1987). Chaos theory is more properly known as non-linear dynamical theory, and as such, deals with the properties of systems in which non-linearities play an important role in the system's temporal evolution. For example, a weakly driven pendulum behaves in a periodic, indeed clocklike, manner; however, when damping and strong periodic forcing bring the pendulum's non-linear restoring force into play, its behavior can become erratic. Similarly in neural tissue, a periodically firing pacemaker neuron can undergo drastic changes in its behavior when a non-linearity, e.g. a pharmacological agent altering a particular conductance, is introduced.
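A minimal numerical sketch of the driven, damped pendulum makes the point concrete; the parameter values below are standard textbook choices for the chaotic regime, not values from any work cited here:

    # Sketch: damped, periodically driven pendulum,
    #   theta'' + gamma*theta' + sin(theta) = A*cos(omega_d * t).
    # For weak drive the motion is periodic; for the (standard,
    # illustrative) parameters below it is chaotic.
    import numpy as np

    gamma, A, omega_d = 0.5, 1.2, 2.0/3.0   # textbook chaotic regime

    def deriv(state, t):
        theta, omega = state
        return np.array([omega,
                         -gamma*omega - np.sin(theta) + A*np.cos(omega_d*t)])

    def rk4(state, t, dt):
        k1 = deriv(state, t)
        k2 = deriv(state + 0.5*dt*k1, t + 0.5*dt)
        k3 = deriv(state + 0.5*dt*k2, t + 0.5*dt)
        k4 = deriv(state + dt*k3, t + dt)
        return state + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

    # Two nearby initial conditions diverge rapidly -- the signature of
    # sensitive dependence on initial conditions.
    s1, s2, dt = np.array([0.2, 0.0]), np.array([0.2001, 0.0]), 0.01
    for i in range(60000):
        t = i*dt
        s1, s2 = rk4(s1, t, dt), rk4(s2, t, dt)
    print("angular separation after t = 600:", abs(s1[0] - s2[0]))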

Similar examples can be drawn from the level of the neuronal system. Neuronal systems, from single cells to the entire complex of the central nervous system, are capable of complex behaviors in both time and space. Neurons have been described that oscillate rhythmically while other neurons fire with great irregularity. At the system level, distributed patterns of activity across the cortical surface have been described using optical and multielectrode methods. Can these many disparate behaviors be unified in a single theoretical context?

Chaos theory may prove to be the branch of mathematics that provides this unified theoretical underpinning. It utilizes a host of techniques, ranging from simple graphics to arcane numerical measures, and has provided physicists with insight into fundamental mechanisms. It is important to emphasize that it is not the mere presence of chaotic temporal behavior that is important. Irregular activities abound in large-scale neuronal systems, and some of them will be chaotic. It is the principles of the theory of chaos that may ultimately prove of value to the neuroscientist studying the spatio-temporal activity of large collections of neurons. Among these principles are the ability to produce a greatly reduced representation of the data, the ability to predict the future activity of complicated dynamics, and the assertion that transitions in global behavior depend on mathematical as well as biological constraints.

Both real and modeled cortex appear to have some of the properties of dynamical systems. Data from physiological single unit recordings from area 17 of the cat have been used to demonstrate the presence of complicated temporal patterns and transitions in activity that are explicable in the context of non-linear dynamical theory (Siegel, 1990).

In a modeling study, layer II/III of primary visual cortex was modeled to explore the range of deterministic dynamics supported by a laterally connected functional architecture. The Hodgkin-Huxley equations were used to simulate the neuronal elements so as to provide a modicum of similarity to real cortical neurons. Activity could propagate through the functional architecture; depending on the synaptic kinetics, the system could settle down into quiescence, oscillations, or seemingly random behavior. Order could be found in this last behavior by the application of techniques from chaos theory. Furthermore, the range and transitions of the temporal patterns in the modeled collection of neurons can be accounted for by the mathematical laws of non-linear system theory.
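For readers unfamiliar with the neuronal elements used in such studies, here is a minimal single-compartment sketch of the Hodgkin-Huxley equations with the standard squid-axon parameters; forward Euler integration is used for brevity rather than accuracy:

    # Sketch: classic Hodgkin-Huxley equations, one compartment,
    # standard squid-axon parameters, driven by a constant current step.
    import numpy as np

    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3     # uF/cm^2, mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4           # mV

    def a_m(V): return 0.1*(V+40)/(1-np.exp(-(V+40)/10))
    def b_m(V): return 4.0*np.exp(-(V+65)/18)
    def a_h(V): return 0.07*np.exp(-(V+65)/20)
    def b_h(V): return 1.0/(1+np.exp(-(V+35)/10))
    def a_n(V): return 0.01*(V+55)/(1-np.exp(-(V+55)/10))
    def b_n(V): return 0.125*np.exp(-(V+65)/80)

    V, m, h, n = -65.0, 0.05, 0.6, 0.32        # approximate resting state
    dt, I_ext = 0.01, 10.0                     # ms, uA/cm^2
    spikes = 0
    for step in range(int(100/dt)):            # 100 ms of simulated time
        INa = gNa*m**3*h*(V-ENa)
        IK  = gK*n**4*(V-EK)
        IL  = gL*(V-EL)
        V_new = V + dt*(I_ext - INa - IK - IL)/C
        m += dt*(a_m(V)*(1-m) - b_m(V)*m)
        h += dt*(a_h(V)*(1-h) - b_h(V)*h)
        n += dt*(a_n(V)*(1-n) - b_n(V)*n)
        if V < 0.0 <= V_new:                   # upward zero crossing = spike
            spikes += 1
        V = V_new
    print("spikes in 100 ms:", spikes)

Coupling many such elements through model synapses, as in the study described above, is what opens the door to quiescence, oscillation, or seemingly random collective behavior.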


Understanding and Technical Exploitation of Information Processing in Neural Systems

Thesis 1) Humans are high-dimensional complex systems with low-dimensional cognition. Accordingly, their scientific theories, tools, and methods may tell more about their brain functions than about realities and laws of the universe. ---So let us all be humble---

Thesis 2) Typical neuroscience and -engineering related research programs world-wide suffer from several obstacles:

  1. Because the facts of thesis 1 are ignored, no real efforts are made to increase the cognitive dimensionality through hybrid teamwork.
  2. Unorthodox research outside the mainstream as defined by the (mostly old) Gurus or study section members is effectively discouraged.
  3. The 'critical mass' in time, manpower, and funds is not provided to perform 'breakthrough' research.
Thesis 3) Possible solutions for the dilemmas stated in theses 1 and 2 may be:
  1. Identification of 'breakthrough' research topics, e.g.:
    1. Bi-directional communication between neural systems in vivo and computers via implanted neural interfaces.
    2. Why and how is stable and robust information processing possible in networks with nonlinear and chaotic neural elements?
    3. Why it is hopeless to understand neural systems dynamics and information processing by starting at the molecular or cellular level!
  2. Training of young hybrid scientists in teams of several tech experts (e.g. mathematics, systems engineering) and bio-med experts (e.g. electrophysiology, cell biology, neurology), by working on 'breakthrough' topics.
  3. Soliciting both advice and collateral funds from industry with corresponding hybrid interests.

Since I am coordinating a research project on Retina Implants, funded by the German Federal Research Ministry (BMBF) at DM 10 million over 4 years, within an interdisciplinary consortium of 14 expert teams from retina surgery, neurobiology, and various technology fields (microsystems, neural networks, microelectronics, etc.), I would be prepared to give some information about the new Neurotechnology Program, including Retina Implant.


Motor Pattern Generating Networks: Experiments, Modeling And Silicon:

In most motor pattern-generating networks, synaptic interactions among neurons and intrinsic membrane properties both contribute to rhythmicity. In all of these networks, reciprocal inhibitory synaptic interactions between neurons or groups of neurons are found, and oscillations derive from the interplay of membrane properties with reciprocal inhibition. Plateau formation upon release from hyperpolarization may account for the postinhibitory rebound often observed in these networks. Several different types of conductance mechanisms can account for plateau formation, including the T-type Ca2+ conductance or an NMDA-mediated conductance. Sag potentials, slow depolarizations associated with the activation of Ih by inhibition (or hyperpolarization), are also prevalent. Activation of Ih allows the inhibited cell(s) of the reciprocally inhibitory network to escape from that inhibition and terminate activity in the opposite cell(s), thus forcing the transition necessary for oscillation.

The heartbeat central pattern generator of the leech is a useful model system for exploring how intrinsic membrane properties interact with reciprocal inhibition to produce oscillations. Elemental oscillators, consisting of pairs of reciprocally inhibitory heart interneurons, pace this rhythm. Each neuron of the pair possesses, in addition to currents associated with action potentials, slow currents that support plateaus (a persistent Na+ current, IP, and low-threshold Ca2+ currents, ICa's) and a hyperpolarization-activated inward current, Ih. Inhibitory transmission between the neurons consists of both a spike-mediated and a graded component. Extensive computer modeling studies have been used to elucidate the interactions between intrinsic membrane currents and synaptic currents that promote oscillation. These studies indicate that oscillation is produced when inhibition causes delayed activation of Ih, which restores the membrane potential to a level where plateau formation is supported by IP and the ICa's. The inhibition also removes inactivation from the ICa's, thus priming the cells for plateau formation.
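The escape mechanism described above can be caricatured in a few lines. The sketch below is a rate-based half-center oscillator, not the conductance-based leech model itself: each cell inhibits the other, and a slow variable q, standing in for Ih, builds up while a cell is suppressed until that cell escapes. All parameters are illustrative:

    # Rate-based caricature of a half-center oscillator: two mutually
    # inhibitory cells, each with a slow "escape" variable q that plays
    # the role of the hyperpolarization-activated current Ih.
    import numpy as np

    def f(u):                        # smooth firing-rate nonlinearity
        return 1.0 / (1.0 + np.exp(-4.0*u))

    I0, w, g = 1.0, 2.0, 3.0         # drive, mutual inhibition, Ih-like gain
    tau_x, tau_q, dt = 1.0, 20.0, 0.01

    x = np.array([0.8, 0.1])         # firing rates of the two cells
    q = np.array([0.0, 0.0])         # slow escape variables
    for step in range(int(200/dt)):
        inp = I0 - w*x[::-1] + g*q           # each cell inhibited by the other
        x += dt*(-x + f(inp))/tau_x
        q += dt*((1.0 - x) - q)/tau_q        # q builds while a cell is suppressed
        if step % 1000 == 0:
            print(f"t={step*dt:6.1f}  x1={x[0]:.2f}  x2={x[1]:.2f}")
    # The printout shows the two cells alternating in antiphase, with the
    # slow time constant tau_q setting the period -- the escape dynamic
    # described in the text, stripped to its essentials.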

We have now begun a collaboration with electrical engineers (laboratory group of Dr. Stephen P. DeWeerth - College of Computing, Georgia Tech) to construct analog Very Large Scale Integrated circuit (aVLSI) representations of neurons that embody the electrical properties of leech heart interneurons, for use in analog simulations of multisegmental pattern generating networks such as those that underlie axial locomotion (swimming) in fish, tadpoles and leeches. The use of aVLSI should permit us to do real-time analysis of these models that would not be possible with computer models. Through these studies we hope to gain insights both into intersegmental coordination in pattern generating networks and into aVLSI design.


Preliminary Remarks on Workshop Topics:

My own experience, appropriate to the topic of neurobiology and neural-network applications, has been in control and system modeling with applications to engineering and biology -- in particular, large electric power systems, highly maneuverable aircraft and immunology. Based on my limited knowledge, traditional artificial neural networks are natural dynamical representations for many immune functions and are accurate nonlinear approximations for the engineering systems studied. For adaptive/intelligent control, however, it was found that the learning/training process had to be performed off line. Consequently, there is a need for special neural-based VLSI chips such as the recent N:1000 (with 1024 sparsely connected neurons, 256 inputs and 64 outputs), intended primarily for pattern recognition.

I believe that nonlinearly (and non-dynamically) coupled bilinear dynamic systems arise naturally for coupled cellular and molecular dynamics (as for the immune system) and are natural models here. Furthermore, past research on bilinear systems provides a good basis for research on new methodologies for optimal training/learning and control.

On a broader scale, there remains a definite need for closer cooperation between biologists, life scientists, mathematicians and engineers: to plan and conduct more systematic and efficient experiments, to derive appropriate methods of nonlinear analysis of experimental data, to develop more effective neural-based computing architectures and chips (including the concept of neural-based automata), and to research and develop new modeling and control methodologies and algorithms based on the above. Of course, all aspects should include an integrated multi-disciplinary approach. The biggest hindrance to this integration is the lack of a basic common knowledge of biology, mathematics and engineering, which points to a special educational need.


Issue on Complex Engineering Problems:

The understanding and subsequent application of neurobiological architectures obviously offers great promise for the development of novel solutions to complex engineering problems in areas such as autonomous systems, signal processing, and neural prosthetics. Although physiologists have made substantial strides toward understanding neural systems, traditional physiological techniques are often not sufficient to attack the problem of understanding the relationship between neural architectures and system-level behaviors. Modeling, in conjunction with experimentation, provides an approach for developing a much better understanding of this relationship. In combination, modeling and experimentation have a synergistic relationship in which experimentation suggests models that can be implemented and tested, in turn, suggesting further experimentation to validate the modeling results.

Although a number of quantitative and modeling approaches offer great promise for understanding neural systems, my colleagues and I are particularly interested in the modeling of biological sensorimotor systems using analog very large-scale integrated (VLSI) circuits. In general, VLSI circuits provide a method for the creation of real-time models of complex neural systems. Specifically, the modeling of sensorimotor architectures using VLSI circuits yields neuromorphic systems that can directly interact with real-world environments. These systems can thus be studied in their closed-loop response to biologically relevant sensory stimuli, eliminating much of the need for subjective evaluation of the resulting system performance. A number of the interesting motor and sensorimotor systems are also of sufficiently limited complexity ('small systems') that models can be developed which implement a large portion of a system while also containing low-level (e.g. membrane) properties.

The development of neuromorphic VLSI sensorimotor systems has the potential to aid in the answering of a number of general neurobiological questions, including: What is the impact of membrane and neuron properties on system-level functionality? How are properties/functions such as passive compliance, reflexes, pattern generation, and cortical control integrated to generate the rich variety of intricate movements exhibited by animals? How can sensory information be used to adapt synaptic connectivity and thus modify motor behaviors? Is the control of motor behaviors hierarchical? How is sensory information mapped into motor space?


Open Problems in Neurobiology: Memory and Learning:

Learning has long been a central problem in understanding intelligence and developing intelligent machines. Until recently, however, the key role of plasticity and learning has not been fully recognized in systems neuroscience. I believe that there is now an opportunity for major advances in the understanding of how networks of neurons underlie perception and action. In particular, I believe that exploiting this opportunity will require developing neurobiological theories of visual and motor learning in close contact with psychophysical and physiological experiments. In more detail:

1) That the brain is not hard-wired but rather can be modified by experience is one of the most remarkable features of nervous system function. Recent findings have revealed that experience modifies the structure and function of both single cells and neural circuits, not only in the developing brain but also in that of the mature adult. Such modifications provide the basis for learning and memory. At the systems level, the view of how neurons of the cerebral cortex mediate perception and action is undergoing a paradigm shift: rather than being hard-wired, the functional anatomy of the cortex appears to display activity-dependent plasticity on a time-scale sufficiently short to play a role in most perceptual and motor tasks. Even simple brain functions such as the definition of visual acuity seem to involve a short-term learning phase and hence to depend upon experience. Computational methods have supported these findings and have provided models in which perceptual and recognition tasks are learned by simple experiential modifications of networks of cells. Thus, activity-dependent and task-dependent plasticity seems likely to be a key aspect of how networks of neurons mediate perception and action. The multi-neuron units involved in this plasticity may well be defined during the next three to ten years.

2) A complete description of the mechanistic basis of intelligent behavior -- for example, of how our brain recognizes specific objects in the visual world, such as faces -- will require some years to solve. Nonetheless, efforts to solve such fundamental problems must be supported now. Such efforts will require the analysis not only of networks of relatively few neurons -- hundreds or thousands -- but also of systems that are several orders of magnitude more complex, such as the neural architecture responsible for object recognition, which involves multiple areas of the cortex as well as other areas of the brain. An understanding of such complex neural systems will no doubt require an understanding of the mechanisms of neural plasticity, since the problem of learning is at the heart of the problem of intelligence.


Thoughts on Neurophysiologic Problems and Collaborative Research:

As a biomedical engineer, I have been working for over 15 years with anesthesiologists on ways to monitor a patient's depth of general anesthesia during surgery. The anesthesiologist wants to use no more anesthetic than necessary to ensure that the patient is adequately anesthetized, that is, to prevent the patient from moving, experiencing pain, or being aware during the surgery and from remembering the surgery afterward. Muscle relaxants are used to prevent the patient from moving and reduce the need for anesthetic, but the hazard is that a patient when paralyzed can be fully conscious and suffering the pain of surgery and yet be totally unable to communicate that fact.

A search is on for a monitor that can predict whether or not a patient is adequately anesthetized. Monitors based on cardiovascular, pupillary, electroencephalographic, electromyographic, and other variables have been tried. So far, no ideal monitor has been found.

Relevant to the proposed NSF workshop, I see the following needs in the search for an anesthetic depth monitor:

  1. What are the neurophysiologic correlates of the brain states associated with awareness and the ability to perceive pain and form memories?
  2. How can these neurophysiologic correlates be monitored to predict the patient's brain state?
Of benefit in monitoring anesthetic depth would be improved methods of imaging patterns of neural activity within the brain and patterns of neural activity emanating from the brain. Two possibilities:

  1. The surface of the visual cortex can exhibit patterns of optical reflectance that depend on the target being viewed by the subject. Also, optical sensors on the surface of the forehead can measure the degree of hemoglobin saturation within the cerebral cortex. If neural activity affects optical reflectance, and presumably transmittance, and if light can penetrate through the skull into the cortex, can optical methods be developed to image brain neural activity, similar to x-ray computerized tomography?
  2. The conducting medium of the body is an effective shield to electric fields, but magnetic fields are relatively unaffected. Can SQUID arrays using high temperature superconductors be developed to generate surface maps of motor nerve activity emanating from the brain?


Comments on Cortical Memory:

Layered computational structures are known to convey significant algorithmic performance advantages, especially if the information flow is reciprocal. Since cortical memory is arranged in a reciprocating layered hierarchy, the computations it performs in storing and retrieving information are likely enhanced in ways reminiscent of the corresponding enhancements in computation and algorithms. While in computing the payoff is usually speed, here we expect that speed may be traded off for improvements in memory capacity and stability (robustness). We also expect such structure to impart advantages in qualitative (cognitive) memory functions such as inference and the ability to generalize.

A model of cortical memory must capture key biological features, for we expect that these are what give rise to the remarkable behavior and performance of cortical memory. These details include not only the reciprocating layered structure, but also a ramified arrangement of both excitatory and inhibitory synapses. The Hebbian dynamics should also model a complete set of biological qualities, such as non-local effects, dissipation (forgetting) and stochasticity, among others. Such properties affect the learning rates, the capacity, and, indirectly, the totality of memory performance.

The measurement of capacity and performance for such models needs to be studied in two ways. One is the customary (complexity) style, such as the production of analytic results like the N/log N capacity of Hopfield nets. The second is empirical, where memory capacity is measured through simulation with respect to sets of realistic signals and cues (a small simulation sketch in this style appears at the end of this section). A central question is: how (what) does the cortex compute? The neural system performs computations of at least two qualitatively different varieties: explicit (local to a neuron) and implicit (global in the network). The explicit computation is the execution of the Hebbian dynamics and recall dynamics. The implicit computation needs to be uncovered through simulation and analysis. Here are some methods for doing this.

Principal Component Analysis: This analysis should be extended to a reciprocating layered model. These features will have an impact on the information-storage compaction properties of PCA (that is, of cortex). Stochasticity of inputs throughout the hierarchy could be modeled through corresponding stochastic hypotheses in the detailed properties of the neurons and the network. The non-local dissipative properties of the Hebb rules which underlie PCA must also be devised for a hierarchy.
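As a concrete instance of a Hebb rule whose dissipative (decay) term yields PCA, here is a sketch of Oja's rule for a single linear unit on synthetic data; this is the classical non-hierarchical case, while the hierarchical extension called for above remains open:

    # Sketch: Oja's rule -- Hebbian growth plus a dissipation term --
    # extracting the principal component of its inputs. Data synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    # Zero-mean 2-D inputs with most variance along the direction (1, 1).
    C = np.array([[3.0, 2.0], [2.0, 3.0]])
    X = rng.multivariate_normal([0, 0], C, size=5000)

    w, eta = rng.standard_normal(2), 0.005
    for x in X:
        y = w @ x
        w += eta * y * (x - y * w)        # Hebbian term minus decay term

    top = np.linalg.eigh(C)[1][:, -1]     # true principal eigenvector
    print("learned direction:", w / np.linalg.norm(w))
    print("alignment |cos|:", abs(w @ top) / np.linalg.norm(w))  # near 1.0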

Hebb rules and residual correction: A Hebb rule may be interpreted as a local form of supervised learning (in particular, of reinforcement learning). Non-local variants of this can be organized using residual computation in various ways. For example, the duration of neuronal inputs, and correspondingly the duration of the I/O correlation, can be made non-local in time, thereby allowing for the expression of that correlation in terms of non-local synaptic weights. This is in fact a residual of a formal requirement of agreement, as in the conventional Hebb rule, and the correction of that residual in terms of changes in the augmented set of weights is a non-local Hebb rule.

Topotopic Maps: Cortical organization in the context of information processing has a considerable topotopic character. A key feature of this is the winner-take-all behavior of neuronal net processing, and, in particular, of lateral inhibition. The impact of a reciprocating hierarchical architecture on this topotopic character of neural mapping (such as edge detection, center surround cells, grandmother cells, etc.) should be studied.
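Returning to the empirical style of capacity measurement mentioned earlier, the following sketch stores random patterns in a Hopfield net with the outer-product Hebb rule and watches recall collapse as the load grows; the network size and loads are illustrative:

    # Sketch of "empirical" capacity measurement: store random patterns
    # with the Hebbian outer-product rule and measure recall quality.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 200                                    # neurons

    def recall_overlap(P):
        pats = rng.choice([-1, 1], size=(P, N))
        W = (pats.T @ pats).astype(float) / N  # Hebbian outer-product weights
        np.fill_diagonal(W, 0.0)
        s = pats[0].copy()                     # start exactly at pattern 0
        for _ in range(20):                    # synchronous updates
            s = np.sign(W @ s)
            s[s == 0] = 1
        return np.mean(s == pats[0])           # fraction of bits recalled

    for P in (5, 10, 20, 30, 40, 60):
        print(f"{P:3d} patterns: overlap {recall_overlap(P):.2f}")
    # Recall stays near 1.0 up to roughly 0.14*N patterns, then degrades
    # sharply -- the classical capacity result alluded to above. The same
    # harness, run on realistic signals and cues rather than random
    # patterns, is the empirical measurement the text calls for.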


Appendix 4: '1-page thoughts' (post-Workshop)

At the conclusion of the workshop, we also asked the participants on a voluntary basis to provide any additional thoughts; we received a couple and include them here.


NSF Neuroinfosciences Workshop -- Now What Can We Do?

The emphasis of the Workshop, as structured in its working groups, was quite properly on research areas and trends, innovative approaches, what kinds of collaborative work might increase progress, and so forth. These matters will be covered by our group reports. However, I think that there are issues worth addressing that transcend specific areas of research, particularly concerning the nature of innovative and of collaborative research. I think that any attempt to foster innovation and collaborations without taking account of the special features that set them apart from what most of us do most of the time will be less effective than it might otherwise be.

I seriously doubt that anyone would not consider the promotion of innovative research and research collaborations a worthwhile goal. However, there is no agreement as to how to reach this goal. With regard to innovation, the difficulty I see is that it is not always easy to distinguish between a novel idea that has great potential and a crackpot idea that will never go anywhere. Successful innovations can only be identified with certainty after the fact, by the new results or greater understanding of a phenomenon that they generate. Personally, I believe it is not productive to spend a great deal of time trying to identify new and innovative approaches in a whole area of science. New ideas come about as a consequence of hard thought by one or two individuals over some period of time. Michael Arbib's remark that the only surprise of the Workshop was that there was no surprise is pertinent here, since, even allowing for a little hyperbole, it speaks directly to the inability of any group to come up with approaches that everyone will immediately recognize as new.

So do we give it up? Not at all. I strongly favor the casting-bread-upon-the-waters approach: if support is available for 'innovative' approaches, innovative approaches will be forthcoming. However, and this is a big however, the group reviewing the proposals should then reward bold ideas, even if they seem unworkable, as long as the proposer(s) has/have laid out a well-thought-out plan of action. Certainly some projects will fail, but I'd also bet that some will succeed, with benefits far beyond even the proposer's original vision. The intense competition for funds at present makes it almost impossible for a really novel project to be funded, because there are so many other excellent projects that offer much greater hope of success.

With regard to collaborations, my personal experience as PI on a collaborative project leads me to point out some of the features and pitfalls of collaborative projects. One "secret" of an effective collaboration is that all members of the collaboration be fully committed to the project; sometimes members see their contribution as an extension of their own particular research interests that happen to intersect with the interests of other members of the team. This approach may lead to fragmentation of effort and, worse, to the loss of the main benefit of a collaboration: a true cross-fertilization of ideas and techniques. In my opinion, the evaluation of an interdisciplinary project that involves collaboration should therefore include an assessment of whether members of the group have prior experience working together and are focused on a truly joint project, or whether they seem to have come together for expediency only.

Both innovative and collaborative research take time to develop. Therefore, I also believe that any agency that desires to foster them needs to think about a time frame longer than the traditional three years. If five years seems too long to commit to on a hope and a prayer, perhaps a mechanism of interim project review could be instituted. This would be less than a full competitive review, with its need for a show of progress (i.e., publications), but more than is required now for continuation of a grant. As long as evidence is presented that the collaborative team is functioning well as a unit, or that an innovative idea still shows promise, the grant could be continued for a longer time.

There are a lot of extremely bright and committed people out there. It seems to me that one of the main tasks before us in neuroscience is to come up with some mechanism that will free them to exert at least some of their efforts in new and untraditional directions.


Analog VLSI:

In recent years there has been a persistent effort in a number of laboratories to develop analog VLSI neurons (neuromorphs) that incorporate features found in biological neurons that, in the view of many neuroscientists, are essential for information processing. For example, silicon dendrites have been developed that enable the processing of spatial and temporal components in sensory inputs. And silicon analogs of voltage-sensitive channels have been developed that mimic the complex dynamics of cortical neurons. Research is continuing in these labs, as well as others, to develop silicon neurons that ultimately incorporate all essential computational features found in biological neurons. We must assume that these research efforts will succeed and that large numbers of neuromorphs that can mimic the computational capability of biological neurons will be fabricated on small pieces of silicon.

The same research groups developing neuromorphs are also rapidly approaching, or have already attained, the capability of interconnecting large numbers of silicon neurons. The interconnection systems already in use permit easy implementation of arbitrary connections between neuromorphs, sensory elements, and actuators. In addition, the parameters that set dynamics, spike-firing threshold, and synaptic weight, to name a few, are readily programmable through a standard computer interface. Therefore, these labs have, or will soon have, the potential to create artificial nervous systems that may closely match the signal-processing capability found in the nervous systems of simple animals. The means for achieving this goal, however, is lacking.

The basic question is this: how does one assemble, or have assembled, artificial nervous systems that exhibit basic animal-like behaviors such as avoidance and escape? I do not believe that one can design, in the engineering sense, the interconnects and dynamics needed to support the desired behavior from basic principles or experience. If not by the process of design, then just how will these systems of artificial neurons be put together? A fruitful approach may be to apply ontogenetic and evolutionary principles known, or thought, to exist in biology. Collaborations between developmental neurobiologists and neuromorphic engineers, as well as collaborations between researchers studying evolution and neuromorphic engineers, should be encouraged and supported. This could result in advances in all three fields by providing an experimental system that, although very different from biological organisms, allows the testing of theories and ideas on how complex systems evolve and develop.

One might criticize this approach by saying that, if it is successful, we will end up with artificial systems that are as enigmatic as the biological systems they mimic. This, more than likely, would be the case. But I would argue that this situation presents a unique opportunity to study computation in a nervous system in which all state variables are completely and readily available. If we can't figure out how the artificial system works, where the entire state is available, there is little hope that we will be able to understand computation in biological systems.


Some Afterthoughts:

I was particularly impressed by how some details of nerve function, such as the low-level chemistry, are fairly well known, while others, such as just how information is stored, are wide open. Furthermore, it is remarkable how diverse many of the open problems are. Many issues are indeed natural for a theoretical physicist to think about. At one end is the question of whether dynamical-systems ideas are helpful for a few interconnected neurons satisfying, say, the Hodgkin-Huxley equations. At the opposite extreme are questions of applying ideas from statistical physics in the limit of large numbers of neurons.

Some of my past thoughts have been on simple cellular automaton systems. They can provide simple models for critical phenomena, but are also analogous to very simple neural networks. It is natural to wonder whether such systems are capable of exhibiting learning. At a very theoretical level this must be possible, since it has been proven that even simple two-state-per-cell two-dimensional systems are capable of universal computation. How much can one gain from more states per cell, from increasing connectivity in a fixed dimension, or from increasing dimensionality? Are there substantial gains to be made by going to continuous variables, such as would be natural in an analog rather than a digital circuit?
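The standard witness for that universality result is Conway's Game of Life, a two-state two-dimensional cellular automaton; the sketch below propagates its simplest moving structure, the glider:

    # Sketch: Conway's Game of Life, a two-state-per-cell two-dimensional
    # system proven capable of universal computation. A glider translates
    # itself one cell diagonally every four steps.
    import numpy as np

    grid = np.zeros((12, 12), dtype=int)
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:   # glider
        grid[r + 1, c + 1] = 1

    def step(g):
        # Count the eight neighbors of every cell (toroidal wrap-around).
        n = sum(np.roll(np.roll(g, dr, 0), dc, 1)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0))
        # Birth on exactly 3 neighbors; survival on 2 or 3.
        return ((n == 3) | ((g == 1) & (n == 2))).astype(int)

    for _ in range(4):
        grid = step(grid)
    print(grid)   # the same glider, shifted one cell down and right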

Actually, the issue of analog circuitry is rather intriguing and undoubtedly quite controversial. Given the current sophistication of digital technology, it is hard to believe that modeling of small systems could be easier with such circuits. On the other hand, from an electrical engineering point of view, the control of large systems of analog elements presents a challenging problem, interesting in its own right.

Scale invariance is a useful concept in many areas of physics. In critical phenomena, important structures are seen at all scales. Recently a class of systems has been described for which a self-organized criticality takes place. Does such scale invariant organization occur in the brain? Physical structures of a wide range of sizes do appear to be present; however, it is less clear what is a good quantity to measure at the neuronal level.

Since the brain appears to have numerous back connections, it is perhaps amusing to speculate on the connection with recursive programming ideas. Indeed, such concepts often fascinate physicists, computer scientists, and mathematicians. Using functions that call themselves sometimes produces algorithms substantially more efficient than more naive approaches. Perhaps the most famous of these is the fast Fourier transform technique. Do such ideas play an important role in the functioning of the brain? This is a rather vague idea to recurse on further.
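For concreteness, here is the recursive radix-2 form of the fast Fourier transform: a function that calls itself on half-size subproblems, reducing O(N^2) work to O(N log N):

    # Sketch: recursive radix-2 Cooley-Tukey FFT. The function calls
    # itself on the even- and odd-indexed halves of the input.
    import cmath

    def fft(x):
        """Recursive FFT of a sequence whose length is a power of two."""
        N = len(x)
        if N == 1:
            return list(x)
        even = fft(x[0::2])                   # recurse on even-indexed samples
        odd = fft(x[1::2])                    # recurse on odd-indexed samples
        out = [0j] * N
        for k in range(N // 2):
            tw = cmath.exp(-2j * cmath.pi * k / N) * odd[k]   # twiddle factor
            out[k] = even[k] + tw
            out[k + N // 2] = even[k] - tw
        return out

    # Two unit impulses spaced N/2 apart: spectrum magnitudes 2,0,2,0,...
    print([round(abs(v), 6) for v in fft([1, 0, 0, 0, 1, 0, 0, 0])])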