Report Date: August 15, 1996
Synergistic Neuroscience and Mathematics/Physics/Engineering Approaches for Understanding
INFORMATION PROCESSING IN BIOLOGICAL AND ARTIFICIAL INTELLIGENT SYSTEMS
The goal of the Workshop was to bring together neuroscientists, physicists, mathematicians, and researchers in AI, control theory, and bioengineering to discuss new multidisciplinary research opportunities for modeling and understanding information processing in biological systems, including the brain, both for the sake of studying biological systems per se and for applying such knowledge to artificial intelligent systems.
Neural networks have been an effective approach to deriving a formal understanding of information processing in neurobiological systems, and simulation methods are becoming widely adopted. In addition, physicists, mathematicians, control engineers, and neurobiologists are exploring methods from mathematics and physics to understand neurobiological systems. It is recognized that more work along these lines would be desirable.
Theoretical physicists working in nuclear, high energy, and condensed matter physics have developed an array of methods and tools to model and understand complex systems with many degrees of freedom and complex state and phase spaces. To deal with such systems, physicists have developed and applied a number of approaches: modeling, statistical and thermodynamic methods, scaling, and renormalization, and have used collective states and effective interactions to reduce the complexity of the theoretical models and solve energy minimization problems. Numerical analysis, probability and statistics, topology, optimization, group theory, and graph and network theory are some of the mathematical tools that are also used. Likewise, control theory researchers have developed analysis and design tools for the optimization of complex dynamic systems. The question is to what extent such methods can be of use for neurobiosystems modeling.
It has been suggested that for the neurobiological community a "new culture" needs to be developed, where there is more collaboration between theoreticians and experimentalists. There seems to be a strong belief that the theoretical community needs practical applications and the practitioners need theory, that experimentalists need help in interpreting the data they collect, and theorists need closer ties with applications.
In addition to building on current successes of single-cell modeling, there is a need for collaboration in developing models that can help us understand how large populations of neurons interact and how information is processed. Together with simulation approaches, there is a need to support more development of higher-level models, including systems models, and to help the community realize the benefits of the distinction between simulation and higher-level modeling. While simulations are very useful and provide a complementary approach to understanding system behavior, the absence of higher-level models is a hindering factor. An analogous case is atmospheric modeling, where successful simulations are based on macroscopic models rather than molecular-level models of the atmosphere. There is potential for achieving more rapid progress by bringing together groups that have the experience to deal with such approaches.
At the same time, emphasis on theory should not be to the exclusion of experiment. Among other areas with expressed need for collaborative work is the development of algorithms for analysis of the data; for example, recent advances in optical imaging systems and multielectrode recorders, capable of recording 10,000 signals, have significantly increased the ability to monitor signals in biological systems and also created the need for new tools to analyze these data. In addition there is an expressed need for means for sharing algorithms used to analyze the data and means for sharing the data.
Biologists are rich in data; mathematicians, physicists, and engineers are rich in methods and tools. Under the current mode of funding, such collaborations are not adequately fostered. The existing experience from the involvement of physicists, mathematicians, systems control theorists, and neurobiologists in multidisciplinary neurobiosystems research can be used to assess the potential and the needs of furthering this type of collaboration. There is also a need to determine what educational curricula should be developed to encourage and enable a new generation of students to acquire the cross-disciplinary training that would facilitate such multidisciplinary collaborations.
The workshop took place in Arlington, VA, on April 8-10, and brought together scientists and engineers who discussed the problems, the needs, and the possibilities of such multidisciplinary research and education. This report summarizes the discussions that took place during the workshop.
Workshop Co-chairs:
Michael Arbib, University of Southern California
John Hopfield, Caltech

Working Group Chairs:
Stephen P. DeWeerth, Georgia Tech
Larry Sirovich, Rockefeller University
Ralph Linsker, T. J. Watson IBM Research Center
Workshop Steering Committee:
Michael Arbib, University of Southern California
Fred Delcomyn, University of Illinois, Urbana-Champaign
John Hopfield, Caltech
K. S. Narendra, Yale University
Karen Sigvardt, University of California, Davis
Larry Sirovich, Rockefeller University
Peter Wolynes, University of Illinois, Urbana-Champaign
NSF Coordinating Committee:
Frederica Darema, CISE Directorate, Chair
Randakishan Baheti, ENG Directorate
John Cherniavski, CISE Directorate
W. Otto Friesen, BIO Directorate
Larry Reeker, CISE Directorate
Bruce Umminger, BIO Directorate
Paul Werbos, ENG Directorate
The consensus from the Workshop was that such methods can be of increasing use for neurobiosystems modeling, but that the neurobiological community needs to develop a "new culture", with greater collaboration between theoreticians and experimentalists. There is great potential for achieving more rapid progress by encouraging the formation of groups that have the multidisciplinary experience to deal with such approaches. At the same time, emphasis on theory should not be to the exclusion of experiment. Another opportunity for collaborative work is the development of algorithms for analysis of the data; for example, recent advances in optical imaging systems and multielectrode recorders, capable of recording 10,000 signals, have significantly increased the ability to monitor signals in biological systems but also created the need for new tools to analyze these data. In addition there is an expressed need for means for sharing data and the algorithms used to analyze them.
Biology is rich in data; physicists and engineers are rich in methods and tools. Under the current mode of funding, such collaborations are not adequately fostered. The Workshop analyzed the experience to date of physicists, mathematicians, control theorists, and neurobiologists in multidisciplinary neurobiosystems research to suggest effective means for extending this type of collaboration, and to suggest criteria for educational curricula to be developed to encourage and enable a new generation of students to acquire the training necessary to facilitate such multidisciplinary collaborations.
The Workshop combined a few plenary presentations (see Appendix 1 for the full Agenda) to discuss the issues and pose the questions with multiple sessions for each of three Working Groups to debate the issues in depth. We wish to record our debt to the Chairs of the three Working Groups: Stephen P. DeWeerth: Chair, Working Group on Small Systems; Larry Sirovich: Chair, Working Group on the Dynamics of Cerebral Cortex; and Ralph Linsker: Chair, Working Group on Large Populations of Neurons, whose leadership contributed much to the fruitfulness of the Workshop, and whose written reports contributed much to the text of this overview.
Michael A. Arbib
John Hopfield
Frederica Darema
August 15, 1996
Goals and subject matter that are common to both fields: Neural mediation of behavior requires exploiting regularities present in the environment, learning some of these regularities by experience, predicting and inferring environmental states, and generating effective motor outputs. These tasks are important components of the MPE disciplines of control theory, statistical inference, pattern recognition, learning and generalization theory, information theory, and robotics. Interplay at this level is of great value in both directions. Practical goals, such as a face recognizer, a method for separating interfering acoustic sources, or a semi-autonomous robot, drive invention. The solutions invented by workers who are also cognizant of the relevant neurobiology will, in some cases, be "biologically plausible"; that is, the style of the solution will be compatible with known or likely biological mechanisms. Such solutions can inspire a search for mechanisms that may serve a similar or analogous function in neurobiology. Conversely, knowledge of what is accomplished by biological neural systems provides at least an "existence proof," and sometimes much more detailed guidance, concerning how to build an artificial system having desired properties.
Results of MPE research on more abstract problems, or on conceptually related problems in different domains, can provide insights for neurobiology. Some examples:


Novel tools from MPE domains for neurobiology: Striking examples include new imaging methods such as positron emission tomography, functional magnetic resonance imaging, and optical imaging of brain activity (using fluorescent dyes or directly measuring changes in local blood flow). Multielectrode arrays for electrophysiology are an important tool that will benefit from improvements in materials, design, and signal processing. Hybrid preparations comprise an intracellular electrode that communicates bidirectionally with a computer, allowing artificial "ion channels" with arbitrary dynamics to be added to a real neuron.
Important experimental tools from domains other than MPE include pharmacologic manipulations and methods from molecular biology, including genetic "knockout" experiments, in which cells and/or neural circuits are manipulated to study structure/function relationships. 

MPE methods provide analytic tools for neurobiology: Examples include methods for analysis of neurobiological data, circuit and other simulation tools, and the like. Linkages of this type are predominantly one way, in that they benefit neurobiology by borrowing tools from MPE, but may be less likely to lead directly to progress on MPE goals. 
Computation also has much to offer the neurosciences beyond simulation. Data assimilation is an area of intense interest in other applications of high performance computing. In the neurosciences, there is great interest in building databases from imaging and electrophysiological measurements, but the assimilation of these data into modeling environments has only recently begun to receive attention. There are apparent opportunities inherent in improved network communications for the sharing and publication of extensive data sets. Systems for doing so will encourage much more extensive use of data from diverse sources in modeling studies. Algorithmic and software development for the incorporation of these data into modeling tools is needed.
There is also a need for software tools that confront the problems of large dimensions and model reduction. We are continually confronted with the need to base models on incomplete and imprecise information about model parameters. As the complexity of models grows, so does the number of parameters that need to be fitted. The expense of simulation is related to both the dimension of the underlying models and their stiffness as systems of differential equations. Therefore, parameter identification for models will be greatly facilitated by the development of systematic tools for the reduction of models to ones that involve only the essential variables, for sensitivity analysis that identifies critical parameters, and for bifurcation analysis that directly identifies regions of parameter space in which substantial changes in the dynamics take place.
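The kind of systematic sensitivity tooling described above can be sketched in a few lines. The following Python example computes central-difference sensitivities of a toy leaky-integrator neuron model with respect to its two parameters; the model, the function names, and all parameter values are illustrative assumptions, not tools from the Workshop.

```python
import numpy as np

def simulate(tau, I, E=-65.0, V0=-65.0, dt=0.1, T=50.0):
    """Euler-integrate a leaky integrator dV/dt = -(V - E)/tau + I."""
    V = V0
    for _ in range(int(T / dt)):
        V += dt * (-(V - E) / tau + I)
    return V

def sensitivity(f, params, eps=1e-4):
    """Central-difference sensitivity of f's output to each parameter."""
    base = np.asarray(params, dtype=float)
    sens = []
    for i in range(len(base)):
        hi = base.copy(); hi[i] += eps
        lo = base.copy(); lo[i] -= eps
        sens.append((f(*hi) - f(*lo)) / (2 * eps))
    return np.array(sens)

# Sensitivity of the final voltage to tau and to the input current I.
s = sensitivity(simulate, [10.0, 2.0])
```

For this toy model the sensitivities single out the input current as the dominant parameter near steady state; the same finite-difference machinery applies unchanged to models with many more parameters, which is where ranking parameters by sensitivity becomes useful.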
The mathematical foundations for the analysis of multiple time scale dynamics lie within the domain of singular perturbation theory. Geometric caricatures of the observed multiple time scale behavior have been drawn, but the theory hardly encompasses many observed phenomena. In particular, existing theory deals almost exclusively with systems in which the fastest time scales relax to slowly changing equilibria, or systems in which the rapid time scales are integrable. The observations of neural systems fit in neither of these contexts. Extensions of existing mathematical theories are needed to enhance our intuition about the dynamical behavior of these multiple time scale systems.
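A minimal fast-slow system makes this setting concrete. The sketch below Euler-integrates a FitzHugh-Nagumo-style relaxation oscillator in which the fast variable repeatedly jumps between branches of a cubic nullcline while the slow variable drifts; the particular equations, parameter values (eps, a), and step size are illustrative choices, not taken from the report.

```python
import numpy as np

def fast_slow(eps=0.05, a=0.3, dt=1e-3, T=40.0, v0=2.0, w0=0.0):
    """Euler-integrate eps * dv/dt = v - v^3/3 - w (fast variable)
    and dw/dt = v + a (slow variable): a relaxation oscillator."""
    steps = int(T / dt)
    vs = np.empty(steps)
    v, w = v0, w0
    for i in range(steps):
        v += dt * (v - v**3 / 3 - w) / eps   # fast: rapid jumps between branches
        w += dt * (v + a)                    # slow: gradual drift along a branch
        vs[i] = v
    return vs

vs = fast_slow()
```

With eps small the trajectory alternates between slow drift along the outer branches of the cubic and nearly instantaneous jumps between them, which is exactly the regime where standard averaging arguments break down and singular perturbation methods are needed.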
On a more abstract level, neural systems combine discrete and continuous components. The neural processing of our brains translates continuous signals of electrical activity into discrete concepts and then back again into continuous signals that drive our vocal cords when we put those thoughts into words. Dynamical systems theory has concerned itself with modeling both continuous time and discrete systems, but it has done so individually. Dynamical models of brain function that encompass what we do need to put these together into a coherent framework.
Much attention has also been paid to the need for describing and interpreting the temporal patterns of activity among groups of neurons. This is also known as the problem of ensemble coding: how are different sensory stimuli encoded into patterns of neural activity among groups of neurons and how are such patterns distinguished or decoded at subsequent stages? The problems of decoding, transforming and encoding such patterns are more approachable in small systems with a limited number of elements and with well specified connectivity than in large, unbounded systems where the connectivity can only be poorly specified or specified only statistically. New mathematical methods are needed for describing the temporal patterns among even small groups of neurons. Mathematical and computer modeling is certainly necessary for understanding the cellular and circuit features that are essential for the decoding and encoding of activity patterns among neurons.
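As a toy version of the decoding problem described above, the following sketch generates Poisson spike-count patterns for a handful of stimuli and decodes each test trial with a nearest-centroid rule. The number of neurons, the firing rates, and the trial counts are all fabricated for illustration; real ensemble coding involves temporal structure that spike counts discard.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_neurons, n_trials = 3, 10, 50
rates = rng.uniform(2.0, 20.0, size=(n_stimuli, n_neurons))  # tuning per stimulus

def poisson_trials(rates, n_trials, rng):
    """Simulated spike counts: one population pattern per trial per stimulus."""
    return np.stack([rng.poisson(r, size=(n_trials, r.size)) for r in rates])

train = poisson_trials(rates, n_trials, rng)   # shape (stimulus, trial, neuron)
test = poisson_trials(rates, n_trials, rng)
centroids = train.mean(axis=1)                 # mean population pattern per stimulus

# Decode each test trial as the stimulus whose centroid is nearest.
correct = 0
for s in range(n_stimuli):
    for trial in test[s]:
        guess = np.argmin(np.linalg.norm(centroids - trial, axis=1))
        correct += (guess == s)
accuracy = correct / (n_stimuli * n_trials)
```

Even this crude decoder performs far above chance when tuning curves are distinct, which is why the harder and more interesting questions concern codes that such count-based methods cannot see, such as precise temporal patterns.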
Numerous examples exist of small systems that have been well studied. These examples come from systems in animals ranging from invertebrates (e.g., the control of movements or the processing of sensory information in leeches, lobsters, and crickets) through lower vertebrates (e.g., sensation in electric fish and swimming in lamprey) to primates (e.g., the control of eye movements or spinal reflexes). Thus, these systems are not at all defined by the complexity of the animal in which they reside, but by the presence of a rigorously quantified behavior, the cellular and circuit basis of which may be understood using neurophysiological experimental techniques combined with modeling and quantitative analysis.
The primary motivation for studying small systems is that these systems provide a context in which to address a number of issues that cannot be nearly as easily studied in larger, more complex systems. Small systems are considerably more tractable than their larger counterparts, facilitating the study of system architecture in semantic context and at a level of detail that is often unattainable in large and/or intact systems. Small systems are a necessary intermediate step in the study of nervous system function, between the level of molecules and membranes and the level of large systems of neurons or the behaving organism. Insights and principles have been, and continue to be, uncovered in the constrained world of small systems that can then be used as building blocks in understanding higher levels of organization.
One of the major issues being addressed using a small systems approach is the relationship between structure/dynamics and function, with the goal of determining whether a direct relationship between specific organizational structures and particular behavior patterns can be identified. For example, do feedback loops of a particular length or strength produce some patterns which are more or less stable than others? Are some feedback/feedforward relationships necessary for stability? Although neuroscientists are making progress in defining the architectures that underlie specific functions and dynamics, there is at present very little intuition about how these relationships generalize. Quantitative and computational methods, such as those used in disciplines such as physics, mathematics, and engineering, have the potential to provide the means for deriving these generalizations. The collaboration between neuroscientists and MPE researchers in the modeling of small neural systems offers the potential for developing the first steps in understanding the relationship between neural organization and function. These collaborations will certainly reveal principles that are novel and perhaps even counterintuitive to both groups (e.g., the importance of the remarkable amount of positive feedback in the nervous system is not intuitively understood by either community).
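The question of how feedback strength affects stability can be posed very simply for a linear rate network dx/dt = -x + W x: the rest state is stable exactly when every eigenvalue of W - I has negative real part. The sketch below, with fabricated connectivity and illustrative gain values, checks this criterion as the overall feedback gain is scaled up.

```python
import numpy as np

def max_growth_rate(W):
    """Largest real part of the eigenvalues of dx/dt = -x + W x;
    negative means perturbations decay, positive means they grow."""
    A = W - np.eye(W.shape[0])
    return float(np.max(np.linalg.eigvals(A).real))

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
J = (M + M.T) / 2                    # symmetric random connectivity (real spectrum)
J /= np.linalg.eigvalsh(J).max()     # scale so the leading eigenvalue is exactly 1

weak = max_growth_rate(0.5 * J)      # overall feedback gain 0.5: stable
strong = max_growth_rate(2.0 * J)    # overall feedback gain 2.0: unstable
```

Because the eigenvalues scale linearly with the gain, the network crosses from stable to unstable exactly at gain 1 in this normalization: a one-line instance of the kind of structure-to-stability generalization the text asks for.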
The processes that control self-organization can also be studied directly in small systems. Small systems such as slices or cultured neurons are well suited for establishing the properties of cellular and synaptic plasticity. Once such properties are established, the question arises as to whether they are also necessary and sufficient for explaining the full range of learning-related phenomena at higher levels of analysis, including the cognitive and behavioral levels. The question can also be asked in the reverse direction: what cellular or synaptic properties are required to explain learning-related phenomena at higher levels of analysis? In general, modeling and mathematics can be used to study the implications of cellular properties for multicomponent systems and higher-order learning phenomena, and to develop the large-scale implications of learning rules discovered in small systems. One example is provided by a recent modeling prediction that has subsequently been verified: it has been demonstrated that some properties of neurons that are lost when they are removed from a network are redeveloped over a period of days. In other words, the neuron "knows" what it is supposed to do and can get there without circuit interactions.
In the exploration of all of these issues, one of the essential goals is the determination of general organizational principles that span a wide variety of systems types and magnitudes regardless of how they are instantiated. This goal is perhaps the primary motivation for studying small systems, providing the potential for modeling tractable (i.e. "small") systems with the ultimate result of illuminating the basis for computation in much larger systems. This exploration is impossible without the direct interaction between neurophysiologists, modelers, and theoreticians, each group playing a major role in the overall process.
Models are essential for the assimilation and understanding of the results of cortical experiments. This task offers challenging research opportunities to scientists with skills in developing mathematical models, for example physicists and mathematicians as well as neuroscientists experienced in cortical research.
Many classes of models are needed to serve well-defined purposes beyond just fitting and explaining existing data. Examples of challenges facing cortical research include the following:
Provide frameworks for developing new experiments to characterize and understand specific neural mechanisms.
Predict neural behavior (qualitatively and quantitatively) and create testable hypotheses.
Create dynamical models of interacting populations, and account for modalities that are served by populations of neurons.
Build models containing parallel networks, and explore the means by which individual neurons are recruited into populations.
Use detailed biophysical models (implemented via simulations or analog VLSI hardware) of cortical neurons to relate cellular and molecular processes to the functional properties of cortical circuits.
Use dynamical models of interacting populations of neurons to study the functional organization of neurons involved in common tasks but located in different cortical areas; these models will be essential to the interpretation of data collected using imaging and multisite recording methods.
There is also a need for imaginative (perhaps impressionistic) models. Such models should suggest new paradigms for understanding the function of the nervous system and could serve as the basis for the generation of new technologies, which might create hardware capable of learning, self-organization, and near-real behaviors. Possible areas of emphasis:

What are the "neural codes", i.e., the relationships between biologically relevant information and the physical properties of neuronal interaction and signaling?  
What does the local neocortical circuit do, from an information processing standpoint?  
What principles underlie multistage processing in sensory and motor systems? How does each stage acquire its functional properties, what are the learning rules, what are the roles of feedback between stages, etc.?  
What principles underlie the segregation and integration of processing streams?  
What is the higher level organization of the brain? What roles (including nonmotor) does the cerebellum play, and how? How are "cognitive" and "emotional" processing linked? What are the neurobiologically distinct types of memory and how are they implemented? 
Modeling at both the realistic (conductance-based neuronal and circuit modeling) and the abstract level provides the means for expanding our current understanding of neural systems. These models will be more insightfully developed through the interaction of experimental neuroscientists with computer scientists, engineers, and mathematicians. For example, dynamical systems analysis has proved a fruitful approach toward understanding how rhythmically active neurons and limited networks function, and is currently being expanded to encompass changes in critical parameters by modulatory influences. This mathematical approach has led to the development of theoretical frameworks for understanding operationally similar neuronal circuits that differ in many anatomical and physiological details. Computer tools for such analysis need to be more fully developed and packaged to better facilitate the interaction between mathematicians and neuroscientists. Detailed models will be essential for defining which details are important in conferring functionality on a particular neuronal circuit and in pointing out how theoretical frameworks must be expanded.
Another area where interaction among the disciplines will aid small systems analysis is in the development of better hybrid systems for circuit analysis. Interfacing computer models with neuronal circuits has already begun, but should be continued so that whole neurons or parts of circuits can be simulated online and interfaced through microelectrodes to neuronal circuits. By varying the parameters in the simulated components and observing the response in the biological circuit, new insights should result. Such hybrid systems could also be used for parameter optimization in realistic models and in defining the useful parameter space of the biological circuit. Biological signals could be compared with online simulation and parameter correction implemented. Such ideas will necessitate implementation of new combinations of hardware and software resources as well as new methods for error estimation and the use of error estimation in parameter delineation.
One specific area in which much progress has been made is the development of analog VLSI (very large-scale integration) circuits that model neurobiological systems. One of the goals of this research is the development of silicon neurons that ultimately incorporate all of the essential computational features found in biological neurons, resulting in large numbers of model neurons fabricated on small pieces of silicon. The second portion of this research, the capability of interconnecting large numbers of silicon neurons, is also under development. The interconnection systems already in use permit easy implementation of arbitrary connections between model neurons, sensory elements, and actuators. In addition, the parameters that set dynamics, spike firing threshold, and synaptic weight, to name a few, are readily programmable through a standard computer interface. Therefore, these systems have the potential to model, in real time, the signal processing capability found in the nervous systems of simple animals, facilitating the implementation of complex neuromorphic sensorimotor systems.
One of the primary challenges to assembling these artificial nervous systems is the development of the correct architectures and connectivity. It is not at all clear how the interconnects and dynamics needed to support the desired behavior can be designed, in the engineering sense, from basic principles or experience. If not by the process of design, then just how will these systems of artificial neurons be put together? A fruitful approach may be to apply developmental and evolutionary principles known, or thought, to exist in biology. Collaborations between developmental neurobiologists and neuromorphic engineers, as well as collaborations between researchers studying evolution and neuromorphic engineers, should be encouraged and supported. This could result in advances in all three fields by providing an experimental system that, although very different from biological organisms, allows the testing of theories and ideas on how complex systems evolve and develop. The resulting systems would also present unique opportunities to study computation in a sensorimotor "nervous" system in which all state variables are completely and readily available.
The implementation of such neuromorphic sensorimotor systems is only possible through intensive collaboration between engineers and neuroscientists. However, if such an endeavor is to be successful, it is also necessary to include theory and computer modeling in both the development and the analysis of the hardware implementations. The complexity of designing hardware mandates that predictions of system operation be made through theoretical and simulation means prior to the system being implemented. Also, once the system is implemented, the inevitable complexity of its structure and operation must be studied in much the same way as its biological counterparts, again requiring the inclusion of mathematical techniques. In the end, the implementation of such systems provides an unprecedented opportunity for collaboration among neuroscientists, computer modelers, theoreticians, and engineers.
Correlate cortical states with behavioral states and the statistics of the environment.
Identify latent topological structures in the data sets.
Develop dimensionality reduction techniques based on general optimization principles for lossless and lossy compression.
Apply dynamical systems theory to the analysis of reduction algorithms for stability, convergence properties, and bifurcations.
Develop tools and techniques for the identification, modeling, and reduction of noise.
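One concrete instance of the dimensionality reduction item above is principal component analysis used as lossy compression: project the data onto its top-k components and measure the variance retained. The data below are synthetic, generated from two latent degrees of freedom plus noise, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 2))              # two true degrees of freedom
mixing = rng.standard_normal((2, 10))
X = latent @ mixing + 0.05 * rng.standard_normal((500, 10))

def pca_compress(X, k):
    """Lossy compression: project onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    X_hat = (Xc @ Vt[:k].T) @ Vt[:k]                # reconstruct from k components
    retained = 1.0 - np.sum((Xc - X_hat) ** 2) / np.sum(Xc ** 2)
    return X_hat, retained

X_hat, frac = pca_compress(X, 2)   # two components suffice for this data
```

When the data really live near a low-dimensional subspace, as here, a handful of components retains nearly all the variance; for neural population data the interesting question is how large k must be before that happens.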
Spatial Scale                                          Temporal Rate      Method   Future Needs
Single cell                                            10^4 Hz
Small-scale spatially averaged populations (~100 μm)   10^2 Hz
Large-scale populations (1 mm^2 to 30 mm^2)            0.1 Hz to 100 Hz
In another vein, MEG and VEP are examples of techniques that attempt to infer cortical information from noninvasive external detection techniques. A true knowledge of the actual cortical activity being sensed by such detectors is an inverse problem of considerable difficulty. The challenge of solving this problem represents an important research opportunity for a diversely trained group of engineers and scientists.
How are features extracted by any of the methods being coded and relayed to higher areas of visual cortex? What is the interplay between sparse coding, code complexity and code description length? How is binding of several objects in the scene done in terms of coding the compound representation? 
The use of more complex and more natural stimuli or motor behaviors.  
Response measures that allow assessment of neural coding on behavioral and finer time scales.  
Animals must respond within short times (on the order of one hundred milliseconds) to single neural responses, despite great variability in cortical responses to given stimuli.  
The role and precision of temporal coding are presently unknown.  
Systematic means of exploring a stimulus space, guided by neural responses. (To find regions that best drive cells.)  
Examine context dependent neuronal responses, both spatial and temporal.  
Develop stimulus ensembles that are experimentally practicable yet capture salient aspects of natural stimuli.  
Develop mathematical models of the structure of natural stimuli. 
Units (representing neurons) having more realistic dynamics and ionchannel properties.  
Units that represent processing at higher levels of integration e.g., local cortical circuits, or units that represent "maps" such as the ocular dominance and orientation maps of primary visual cortex, appropriately parametrized.  
Better understanding of the roles of feedback connections between processing stages.  
Novel learning algorithms.  
Modeling of subsystems (cortical and subcortical) and their relationships.  
More insightful linkages between biologically identified subsystems and the functionally defined subsystems that are studied as part of cognitive science. (Such linkages are studied under the rubric of "cognitive neuroscience.") 
These distributed representations are in some cases modifiable by neural experience. Various principles have been proposed to govern learning and plasticity. The time is ripe for deeper collaboration between theorists and experimental neurobiologists in this area, owing to recent progress in: the experimental methods for measuring neural activity; the understanding of mechanisms for Hebbian learning; the elucidation of the importance of plasticity in adult animals; the understanding of the statistical properties of realistic ensembles of natural sensory data; and the computing power available to analyze the consequences of applying learning principles to such data.
Open questions related to neural representations are fundamental both for neurobiological understanding and for the design of practical machines (such as an autonomous walking robot) that can perform sensorimotor tasks with animallike flexibility and sophistication. In addition to those posed above, these questions include: How are representations integrated across sensory modalities? How do these representations guide motor behavior? How are motor "features" (such as gait, but at various levels of abstraction) learned, used in motor planning, then translated into specific instantiations of motor output that vary depending on environmental details?
Does apparently random activity (e.g., the precise timing of spikes) in fact encode information used by the brain? Does noise aid information processing or control (as in the case of stochastic resonance)?  
What role does the optimization of signaltonoise ratio play in sensory processing, both at the periphery (e.g., retinal processing) and at later stages?  
Links between neural processing, optimal encoding, and information theory have been proposed and successfully tested by analyses of this type.  
Do deterministic (e.g., chaotic) dynamics underlie at least part of the observed irregular activity? If so, do they play an information processing role, or are they an unimportant epiphenomenon?  
The separation of signal from noise can be generalized to the problem of separating a desired signal from a background of other signals that may also be meaningful. This is a problem of practical importance (e.g., separation of multiple acoustic sources, segmentation of visual scenes), having implications for the neurobiology of attention, sensory processing, and learning. Relevant disciplines include information and control theory, optimal coding methods, psychophysics, and signal processing. As emphasized above, statistical properties of real-world input may be vital in solving such problems. 
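The stochastic-resonance question raised above can be made concrete with a toy simulation (a sketch only: the threshold detector, signal amplitude, and noise levels below are arbitrary choices, not a model of any particular neuron). A subthreshold periodic signal never crosses threshold on its own, but an intermediate amount of added noise carries it across, preferentially near its peaks:

```python
import math
import random

def threshold_detections(noise_sd, n=5000, theta=1.0, amp=0.6, seed=0):
    """Count threshold crossings of a subthreshold sinusoid plus Gaussian noise."""
    rng = random.Random(seed)
    count = 0
    for t in range(n):
        signal = amp * math.sin(2 * math.pi * t / 100.0)  # peak 0.6 < theta
        if signal + rng.gauss(0.0, noise_sd) > theta:
            count += 1
    return count

quiet = threshold_detections(0.0)   # no noise: the signal never crosses
noisy = threshold_detections(0.5)   # moderate noise: many crossings
```

With zero noise the detector is silent; with moderate noise it fires many times, so in this caricature the noise genuinely aids detection of the signal.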
What learning rules lead to effective computational results, given the world we live in?  
What learning rules can account for development and/or adult plasticity? What are the distinctions between different learning rules?  
What is the role of learning in mature function of sensory and motor systems?  
How do network activity dynamics and learning dynamics interact?  
How can sensorimotor learning and temporal sequence learning be accomplished?  
Can biochemically realistic models be developed for LTP, LTD, and other plasticity rules? How do these compare with simplified models?  
How do cells and networks of plastic neurons achieve stability or homeostasis, staying within their operating range? 
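One way to make the stability question concrete: Oja's rule, sketched below, adds a normalizing decay term to a plain Hebbian update so that the weight vector stays bounded while the neuron learns the dominant correlation in its input. This is an illustrative toy (the input statistics and learning rate are arbitrary), not a claim about the biophysical mechanism.

```python
import random

def oja_learn(n_steps=40000, eta=0.005, seed=1):
    """Hebbian learning with Oja's normalizing decay term:
    dw = eta * y * (x - y * w), which keeps |w| near 1 (homeostasis)."""
    rng = random.Random(seed)
    w = [0.5, 0.5]
    for _ in range(n_steps):
        # toy 2-D input whose first component has the larger variance
        x = [rng.gauss(0.0, 2.0), rng.gauss(0.0, 0.5)]
        y = w[0] * x[0] + w[1] * x[1]          # postsynaptic activity
        w = [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

w = oja_learn()
norm = (w[0] ** 2 + w[1] ** 2) ** 0.5  # stays near 1 despite Hebbian growth
```

The pure Hebbian term alone would grow without bound; the decay term both stabilizes the weights and leaves them aligned with the input's principal direction.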
Specific scenarios:
Renewable training grants (graduate student or postdoc) that may be used at any institution.  
Support for faculty leaves in complementary fields.  
Sponsorship for neuroscience workshops directed toward mathematicians, physicists and engineers.  
Support for development of interdisciplinary books, Websites, and other educational material between neuroscience and mathematics/physics/engineering.  
Independence, maturity, and strong quantitative skills are particularly important to trainees in newly established cross-disciplinary studies.  
Interactions should be encouraged between students in different disciplines. Programs should consider adopting a "buddy" system, in which students from different disciplines are paired up and assigned a common problem to work on.  
Core curricula should be defined for a key set of disciplinary constellations: If students have some shared understanding, it is much easier for them to interact with one another productively, even if their backgrounds are diverse. 
The success of any collaboration requires that the members of the collaborative team learn at least the basics of the language and techniques of all its members. Without a common language and an appreciation of the background and the challenges of each discipline, there is no collaboration. Therefore, any interdisciplinary program should require that a person trained in physics, dynamical systems or computer science spend a significant amount of time in the laboratory of the neurobiologist and vice versa. However, each student must achieve a "critical mass" of expertise in some core discipline (whether already existing or yet to emerge) if a true contribution is to be made to the collaborative effort.
The recent explosion of the use of the internet provides a new opportunity for establishing collaborations. Web sites could be created for "advertising" intriguing problems that individuals or groups of neurobiologists see as needing input from other disciplines. Similarly, funding agencies could use the Web to advertise new interdisciplinary programs.
A critical mass of funding is crucial for neurobiology-MPE collaborations. We strongly believe that new interdisciplinary programs that fund only a handful of the submitted proposals discourage collaborations rather than nurture them. The best potential applicants will shy away from new programs if the effort required to put together an application has little chance of payback. Interdisciplinary collaborations should receive interdisciplinary support from the Directorates most related to the disciplines of the collaborators.
Proper review of interdisciplinary applications requires ad hoc panels chosen very carefully to represent the disciplines of the applicants. It is also critical that the individuals chosen for these panels have an appreciation of the value of multidisciplinary efforts in research. Too often experts in a particular field fail to appreciate the difficulty of the collaborative approach, and view the fact that only part of the proposal is related to their expertise as being a negative factor, rather than positively evaluating the effort to place the area of their expertise in an effective interdisciplinary framework.
Programs should be organized around a set of goals, rather than focusing on too narrowly defined skill sets. In a new cross-disciplinary field the prediction of results is particularly difficult, directions can quickly change, and students need to be prepared for a wide class of future problems. It may take considerable time for new teams of researchers from different disciplines to learn enough of each other's approaches to make a truly effective team effort. This can be recognized programmatically by NSF providing longer term grants for interdisciplinary projects (e.g., 5 years).
8:00-8:30  Welcome and introduction (Darema, Arbib) 
8:30-9:00  NSF ADs and Directors 
Tuesday, April 9
8:30-9:00  Arbib and Hopfield: Formal introduction 
9:00-9:30  Ron Calabrese: Small Systems 
9:30-10:00  Barry Richmond: Cortical Dynamics 
10:00-10:30  Break 
10:30-11:00  Ehud Kaplan: Large Populations of Neurons 
11:00-11:30  Misha Mahawald: Open Problems in Neurobiology 
11:30-12:00  Harel Shouval: LTP, LTD and Cortical Receptive Fields: Is there a Unifying Principle for Synaptic Modification? 
12:00-12:15  Harel Shouval: Education Experiences 
12:15-1:10  Lunch 
1:10-1:25  Sankar Shastry: Lessons from the NSF Workshop on Biosystems Analysis and Control 
1:25-1:40  John Guckenheimer: Summary of the NSF Workshop on Computational Biology 
1:40-2:00  Charges to the WGs (Arbib and Hopfield) 
2:00-3:30  WGs Breakout Session 
4:00-6:00  WGs Breakout Session 
Wednesday, April 10
8:30-10:00  WG chairs present interim summaries 
10:00-10:30  Break 
10:30-12:00  Working Groups Breakout Session 
12:00-1:00  Working Lunch/Working Groups Breakout Session 
1:00-2:30  Summaries presented by WG chairs 
2:30-4:00  General Discussion, Summaries by Arbib and Hopfield 
4:00  Workshop Adjourns 
NSF NEUROINFOSCIENCES WORKSHOP
APRIL 8-10, 1996
HOLIDAY INN BALLSTON, ARLINGTON, VIRGINIA
PARTICIPANTS' LIST
I think theories of vision based on statistical estimation, particularly Bayesian techniques, offer the best chance of a mathematical framework for all vision. Theories of this type (for example, work by Mumford, Ullman, and Poggio) appear to map nicely onto the known visual cortex, but considerable work remains to match these, or other, theories to experimental data. Recent experimental results coming from Shiller's laboratory at MIT suggest that computational theories of this type may even be partially implemented as early as V1. These theories, in turn, seem to closely agree with psychophysical experiments by Nakayama and others.
Open problems in vision therefore consist of: (i) understanding image segmentation and the role of V1, (ii) understanding the roles of MT and MST in motion processing, (iii) understanding object recognition and scene understanding, in particular the role of the feedback and lateral connections in the visual cortex, (iv) developing a mathematical framework for vision, and (v) investigating how computational models can be implemented using realistic neurons. All these problems require an interdisciplinary approach combining mathematics, engineering, psychophysics, and neurophysiology.
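A small, self-contained piece of the Bayesian framework advocated above is optimal cue combination: with Gaussian likelihoods and a flat prior, the posterior over a scene property is itself Gaussian, with a precision-weighted mean and a variance smaller than that of either cue alone. The sketch below is a textbook illustration, not drawn from the cited work:

```python
def fuse_gaussian_cues(mu1, var1, mu2, var2):
    """Posterior mean and variance for two independent Gaussian cues
    about the same scene property, under a flat prior (Bayes' rule)."""
    p1, p2 = 1.0 / var1, 1.0 / var2          # precisions
    mu = (p1 * mu1 + p2 * mu2) / (p1 + p2)   # precision-weighted average
    var = 1.0 / (p1 + p2)                    # sharper than either cue alone
    return mu, var

mu_eq, var_eq = fuse_gaussian_cues(0.0, 1.0, 1.0, 1.0)    # equally reliable
mu_un, var_un = fuse_gaussian_cues(0.0, 1.0, 1.0, 0.25)   # second cue better
```

The fused variance is always smaller than either input variance, so combining cues can only help an ideal observer, and the more reliable cue dominates the estimate.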
Some of us study simple, or "small" systems because we take them as models for large systems, with the hope that we can develop principles that will apply to mammalian systems. Is this appropriate? Or are there qualitative differences between systems of such different scales? Are there changes that occur at some point simply as a function of scaling up, which have nothing to do with changes in the underlying mechanisms of action? If so, what could they be? Can we identify them? Which types of principles are most vulnerable to such scale changes?
Are there types of connectivity patterns that will tend to bring out these differences? For example, are some recurrent loops, or loops of some length more apt to produce true differences between large systems which have many diverging and converging pathways versus small systems which are necessarily more limited in their circuitry?
Question 2:
Given the evidence that circuits can be reconfigured from moment to moment, and that neurons can qualitatively alter their patterns of activity abruptly as their states change, single-cell recording methods are going to be inadequate to understand any neural circuit, whether it be a "simple system", a cortical circuit, or a large neuronal circuit. These realities will often be obscured if one cannot record from several closely related neurons simultaneously under a variety of conditions, and/or if neurons of the circuit are not identifiable from one animal to another.
How then do we study the details of circuit function in any system more complex than the STG? This question clearly applies to both motor and sensory systems, and the spinal cord as well as the cortex. Can mathematicians help us to identify the salient features of singularities that might provide clues to the underlying properties of a system? Can engineers or physicists help us develop new methodologies for recording from large systems of neurons stably? Can the mathematicians then help us find ways to pull out the important information from the masses of data we record?
While it has long been known that spinal and brainstem neurons in motor systems carry dynamic movement signals, the dynamics of cortical signals have not been as clearly related to movements. Schwartz has recently shown that several aspects of the dynamics of practiced movements are simultaneously encoded in the dynamics of primary motor cortex neurons. Again this leads to the conclusion that information is carried in the multidimensional dynamics of neuronal activity.
Given that the dynamics are important, it becomes important to know at what scale they operate. Several studies have found evidence for precise timing of neural impulses within the visual system (Strehler and Lestienne, 1986) and in frontal cortex (Vaadia et al., 1995). Vaadia et al. reported that this type of precise firing pattern became significantly more frequent during directed attentive activity. Both Abeles (1991), with his synfire proposal, and Lestienne and Strehler (1987) have proposed mechanisms for such occurrences. Others have debated the need for precise timing as a coding mechanism (Softky, Shadlen).
All of these findings and discussions leave us with at least the following questions: What is the precision of neural codes? Can we decode the different dimensions of the responses, i.e., can we give them any simple interpretation? How are the codes from adjacent neurons related? How does the relationship between neurons constrain the function of networks of these neurons?
All of these questions must be answered to understand cortical function. All of them require data related to the issue, reliable methods of analyzing these data quantitatively, which becomes more and more difficult as the dimensionality of the data increases, and solid theoretical frameworks in which to interpret the results. This requires that the experimentalist provide suitable measurements for the theoreticians and that theoreticians provide frameworks that are presented in terms of reliably measurable variables.
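One standard tool for the question of how the codes of adjacent neurons are related is the cross-correlogram: a histogram of spike-time differences between two trains. The sketch below builds synthetic trains at 1 ms resolution in which train B fires 5 ms after roughly half of train A's spikes, then recovers that lag; all numbers are invented for illustration.

```python
import random

def cross_correlogram(a, b, max_lag=20):
    """Histogram of spike-time differences (t_b - t_a), in 1-ms bins."""
    hist = {lag: 0 for lag in range(-max_lag, max_lag + 1)}
    for ta in a:
        for tb in b:
            d = tb - ta
            if -max_lag <= d <= max_lag:
                hist[d] += 1
    return hist

rng = random.Random(0)
train_a = sorted(rng.sample(range(100000), 800))          # 800 spikes in 100 s
train_b = [t + 5 for t in train_a if rng.random() < 0.5]  # follows A at +5 ms
hist = cross_correlogram(train_a, train_b)
peak_lag = max(hist, key=hist.get)  # recovers the imposed 5 ms relation
```

The sharp peak stands well above the flat background of chance coincidences; with real data the same construction must be corrected for firing-rate covariations, which is where the harder statistical questions begin.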
Research in vision has revealed a multitude of strategies biological systems utilize to achieve extremely efficient real-time operation. One of the major objectives of these systems is to provide sophisticated dynamic control of the flow of visual information and allocation of resources to achieve the desired level of performance for the task at hand with minimal extraneous computations. Such a system can be viewed as being composed of two major subsystems. The first is a highly focused attentive system that provides detailed processing on a highly restricted region of the available visual information in a serial fashion. The second system, operating in parallel, carries out a more global survey of the scene, which is utilized to alert the animal to potentially new and dangerous objects as well as to provide guidance for the allocation of the more expensive focal attentive processing.
Interest in vision from the point of view of a control scientist is somewhat more recent, with a major emphasis in machine vision as applied to problems in robotics. One of the major problems here is visual servoing, with vision in the feedback loop, where the objective is to visually navigate a robotic manipulator for the purpose of tracking and grasping in a dynamic and uncertain environment. The visual sensor usually chosen is a CCD camera.
While experiments are still going on in fine-tuning the underlying paradigms in visual signal processing that are common to neuroscientists and control scientists, it is equally important to understand the underlying mathematical structure from a higher component level and from a lower neural level. It is equally important to understand how the models obtained from these two viewpoints interact. For example, at a higher component level an important problem is to study the rotation of eye and neck in stabilizing vision and directing gaze towards a moving object. This could be viewed as a subclass of a nonlinear control problem. In this paradigm, one replaces the eye and the muscle attachment by a nonlinear differential equation with the muscle tensions as the actuating control commands. The nonlinear dynamical system, so obtained, can be used for the purpose of gaze control as an example of nonlinear regulation and tracking. The neck and muscle attachments to it can be similarly replaced by a nonlinear differential equation and one can study the 'neck control' problem likewise. Our grand challenge would be to consider the artificial 'gaze-neck' system as a nonlinear control problem with a pair of eyes placed on a neck mechanism with actuating signals derived from visual signals. From the point of view of neural modelling, it is important to obtain models of attentional mechanisms for the purpose of forming position, scale and motion invariant object representations. In this paradigm, one would use control neurons to dynamically modify the synaptic strengths of intracortical connections.
Very little has been done on component level modeling of BioSystems from a dynamical systems point of view. We are only beginning to understand the appropriate models of Eye and Neck as a control problem. Even less is understood as to how component level models would interact with Neural Models that control and actuate muscles.
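To make the 'eye as a control problem' paradigm concrete at the component level, the toy sketch below treats the eye plant as a linear second-order system driven by muscle torque and closes the loop with a proportional-derivative law that regulates gaze toward a target angle. The plant constants and gains are invented; a realistic oculomotor plant would of course be nonlinear, which is exactly where the open problems lie.

```python
def track_gaze(target=0.5, T=1.0, dt=0.001, kp=40.0, kd=8.0):
    """PD regulation of a toy eye plant  J*theta'' + b*theta' = u,
    with muscle torque u as the actuating control command."""
    J, b = 1.0, 2.0                  # inertia and viscous damping (arbitrary)
    theta, omega = 0.0, 0.0          # gaze angle and angular velocity
    for _ in range(int(T / dt)):
        u = kp * (target - theta) - kd * omega   # control law
        omega += ((u - b * omega) / J) * dt      # Euler integration
        theta += omega * dt
    return theta

final_angle = track_gaze()  # settles near the 0.5 rad target within 1 s
```

Replacing the linear plant with nonlinear muscle dynamics, and the PD law with a nonlinear regulator driven by visual error, turns this cartoon into the gaze-control problem described above.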
In the realm of theory, one of the major problems is that of understanding the operation of multiple centers or regions that are reciprocally or recurrently interconnected. I wonder if there is a mathematics that is adequate for such richly interconnected centers. The problem is made much more difficult, perhaps even insoluble, by the fact that neural centers are not connected by single lines conveying a simple scalar quantity but by thousands of parallel lines conveying nerve impulses. How can such patterns of nerve impulses be described and understood? Recent work indicates that the relative timing of impulses, down to the microsecond level, conveys important information. A complete description of the temporal relations among n neurons appears to require an n-dimensional space, but spaces of more than 3 dimensions are hard to work with and grasp intuitively.
The problem of large numbers of neurons or fibers is made a little more tractable by the modular character of many brain regions. It would seem that theorists or mathematicians might be able to help understand the function that is carried out by a small region (the canonical circuit) of the neocortex, the cerebellar cortex, the tectum, the olfactory bulb, the thalamus, or the retina. They might help suggest the best way of describing the transformations that occur between the input and the output of such regions, quite independently of the specific information being processed. Again, hypotheses and suggestions have been made but a clear understanding of local circuit processing and its function does not seem to have been attained for any region.
Mathematicians appear to have a special ability to stand back from a set of findings and uncover an underlying structure of categories or relationships that might or might not fit some mathematical structure. The mathematical analysis of hierarchy among visual cortical areas in a recent Science article is an example of this process. This ability of mathematicians and the independence that they have from the bias of a particular preparation or system (in contrast to experimentalists) could enhance the conceptual base of neurobiology.
Most of my daily frustration as an experimentalist comes from technical limitations such as obtaining good intracellular recordings from small but important cell types or in analyzing large bodies of data. I only occasionally sense the limitations of my conceptual or theoretical tools. Any help that mathematicians/physicists/engineers can provide with the technical problems of obtaining and analyzing data is welcome (improved voltage sensitive dyes and methods for analysing the data that they yield, for example).
How can we recognize whether two cells are part of the same population (in the sense that they jointly encode the same information)?  
Given a population of neurons, what measurements (using single- or multiple-cell recordings) need to be carried out in order to characterize its behavior?  
What characteristics of a computational population are responsible for behaviors (or psychophysical performance) on tasks such as detection or discrimination? Can we predict performance in such tasks from population characteristics?  
How does a population respond to superposition of stimuli? Can we account for psychophysical effects such as masking or adaptation? 
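As a minimal example of predicting task-relevant readout from population characteristics, the sketch below implements population-vector decoding: each cell is given a cosine tuning curve, and the stimulus direction is read out as the response-weighted sum of preferred directions. Tuning shape, cell count, and noise level are arbitrary choices for the demonstration.

```python
import math
import random

def population_vector_decode(stim, n_cells=60, noise_sd=1.0, seed=0):
    """Decode a direction (radians) from a population of cosine-tuned
    neurons by summing preferred-direction vectors weighted by responses."""
    rng = random.Random(seed)
    vx = vy = 0.0
    for i in range(n_cells):
        pref = 2.0 * math.pi * i / n_cells              # preferred direction
        rate = 10.0 * (1.0 + math.cos(stim - pref))     # cosine tuning curve
        response = rate + rng.gauss(0.0, noise_sd)      # noisy observed count
        vx += response * math.cos(pref)                 # vote along pref. dir.
        vy += response * math.sin(pref)
    return math.atan2(vy, vx) % (2.0 * math.pi)

estimate = population_vector_decode(1.0)  # close to the true direction 1.0
```

Varying the cell count or noise level in this toy directly changes the decoding error, which is the simplest version of predicting psychophysical performance from population characteristics.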
Determine the Composition, Organization and Dynamics of functional neuronal ensembles.  
Discover the communication codes used by neurons and neuronal ensembles.  
Discover the mechanisms for the creation and modulation of neuronal ensembles. 
Develop new methods to collect and analyze data from multielement recordings (multiunits, many pixels from video frames, multiple EEG electrodes or magnetic detectors, etc.). This includes methods for spike sorting, new ways to display multidetector data, methods for exploring and displaying the (usually nonlinear) interactions among ensemble members, methods for compressing and searching large databases, applications of tools from nonlinear dynamics to neuronal function and structure, and so on.  
Develop objective, robust and efficient methods to distinguish between signal and artifact.  
Investigate the mathematical and informational aspects of neural codes.  
Develop improved computational and mathematical approaches to modeling of neuronal ensembles and single neurons.  
Study the topology of functional organization of the nervous system. 
One of the main problems is to study the multineuronal code exhibited in firing rates. Multi-channel recordings at high temporal resolution have made it possible to obtain spike train data from many neurons simultaneously. It is thus possible to study real spatiotemporal codes. However, since such a code is high-dimensional, it is very difficult to visualize or analyze. Modern statistical and neural network techniques, with the aid of powerful computers, have a chance of cracking this code. Some recent work has started addressing these questions; however, the complexity of these problems should not be underestimated, and it may take many more years until they are resolved. We expect that the nature of the codes used reflects the properties of the neural machinery as well as the (statistical) properties of the environment.
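A deliberately simplified illustration of how statistical techniques can tame such high-dimensional data: if many neurons share a common one-dimensional drive, the population covariance is nearly rank one, and the leading principal component recovers the shared signal. The sketch builds synthetic trial-by-neuron data and extracts the top eigenvalue by power iteration; every number is invented for the demo.

```python
import random

def leading_pc_fraction(n_trials=400, n_neurons=20, seed=0):
    """Fraction of total variance captured by the first principal
    component of synthetic population activity with a shared latent."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_trials):
        latent = rng.gauss(0.0, 1.0)                      # shared 1-D drive
        data.append([2.0 * latent + rng.gauss(0.0, 0.3)   # plus private noise
                     for _ in range(n_neurons)])
    means = [sum(col) / n_trials for col in zip(*data)]
    X = [[row[j] - means[j] for j in range(n_neurons)] for row in data]
    C = [[sum(X[t][i] * X[t][j] for t in range(n_trials)) / n_trials
          for j in range(n_neurons)] for i in range(n_neurons)]
    v = [1.0] * n_neurons                                  # power iteration
    for _ in range(100):
        w = [sum(C[i][j] * v[j] for j in range(n_neurons))
             for i in range(n_neurons)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    top = sum(v[i] * sum(C[i][j] * v[j] for j in range(n_neurons))
              for i in range(n_neurons))
    return top / sum(C[i][i] for i in range(n_neurons))
```

Real multi-neuron data are of course not this cooperative, but the same logic (look for low-dimensional structure before interpreting individual channels) underlies much of the proposed statistical program.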
Another major question is that of neural plasticity, which is closely related to the first question. Plasticity can be considered as a change in the neural code in response to the environment. Experiments have revealed plasticity both at the system level and at the cellular level. The experiments are becoming detailed enough for us to determine what the functional form of the plasticity is. Different details of the form of synaptic plasticity reflect different features that are singled out as significant by the cortex. When analyzing high-dimensional data such as that obtained from multi-electrode recording, we are faced with the question: which features in this complex data are important? This is similar to the question faced by our brains when confronted with the high-dimensional data conveyed by our senses. Plasticity is the method used by our brains to sort out this data and to single out its important features. The contribution of physical and mathematical scientists to the domain is to analyze and simulate different learning rules in order to figure out which ones fit better with experimental results, as well as what the information processing implications of different rules are.
Another problem of major interest concerns the mechanisms underlying binding of entities, both on a short term, such as combining a few words together into a sentence, and on a longer term, such as binding the telephone to the desk it sits on. We have very little idea about how mental functions related to binding are performed, or whether there is a common mechanism for language, vision, and other difficult brain tasks, but these questions are of immense importance both to the understanding of human brains and for the construction of somewhat intelligent machines.
1. The qualitative dynamics of singularly perturbed and hybrid dynamical systems.
Neural systems make evident use of multiple time scales in their function. "Bursting" oscillations that combine slow rhythmic variations with action potentials during a portion of the slow oscillations are common. Singular perturbation models dependent upon two infinitely separated time scales have been formulated for such processes, but the systematic mathematical theory for systems with finitely separated time scales concentrates on cases in which the fast attractors within the system are equilibrium points. Furthermore, there has been little attempt to look at dynamics of multiple time scale systems on times that are long relative to the slow time scale. The best developed parts of the theory examine periodic trajectories of multiple time scale systems. Further development of these areas of mathematics is feasible and presents rich, challenging problems for dynamical systems theory. The results of such work should contribute directly to our insight into biological motor control and other dynamical processes of neural systems.
2. The numerical analysis of bifurcations in multiparameter vector fields.
The application of dynamical systems theory to other fields frequently depends upon the dynamical analysis of models that depend upon many parameters. Fitting parameters within the models is hindered by our inability to measure many parameters directly. Use of the models is hindered by our inability to readily calculate bifurcations; i.e., those parameters at which qualitative changes in the dynamics of the system occur. Classification of generic bifurcations has been one of the major themes within dynamical systems theory. Numerical computations with many examples have demonstrated that automated calculation of bifurcation diagrams is a difficult and challenging problem. Such computations are a necessary step in determining the biological implications of models. Even more challenging is the problem of optimizing the fit between dynamical models and experimental data as model parameters are varied. The numerical analysis of dynamical systems and their bifurcations is an active research area that deserves increased attention and support. It provides algorithmic underpinnings important in the analysis of large classes of simulation models.
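The flavor of these computations can be seen on the simplest possible example, the saddle-node normal form x' = mu - x^2: naive numerical continuation tracks the stable equilibrium with Newton's method as mu is swept, and the Jacobian's approach to zero signals the bifurcation. (Real problems replace the scalar Newton step with large linear solves and careful step-length control; this is only a cartoon.)

```python
def continue_equilibrium(f, df, x0, mus, newton_iters=50):
    """Track an equilibrium of x' = f(x, mu) across a parameter sweep,
    recording (mu, x*, f'(x*)); f'(x*) -> 0 flags a bifurcation."""
    branch, x = [], x0
    for mu in mus:
        for _ in range(newton_iters):
            x = x - f(x, mu) / df(x, mu)   # Newton step, warm-started
        branch.append((mu, x, df(x, mu)))
    return branch

f = lambda x, mu: mu - x * x                # saddle-node normal form
df = lambda x, mu: -2.0 * x
mus = [1.0 - 0.01 * k for k in range(100)]  # sweep mu from 1.0 down to 0.01
branch = continue_equilibrium(f, df, 1.0, mus)
# the stability eigenvalue f'(x*) = -2*sqrt(mu) shrinks toward 0 as mu -> 0
```

Warm-starting each Newton solve from the previous parameter value is the essence of continuation; the hard parts in practice are exactly the ones this cartoon omits (branch switching, folds, and high-dimensional linear algebra).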
3. The relationship between the network architecture of systems of coupled oscillators and the resulting dynamics.
The only hope that we have for intelligently understanding the dynamics of vertebrate neural systems is that the hierarchical organization of such systems will allow us to decompose them into components with a relatively small number of degrees of freedom. We simply cannot cope with the complexity of systems with large numbers of degrees of freedom unless there are principles that lead to significant simplification of the relevant parts of the system dynamics. We know almost nothing about how the structure of a multicompartment system constrains its behavior (beyond analysis of the role of symmetry when it is present).
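Even the most reduced description, a population of globally coupled phase oscillators (the Kuramoto model), already shows how collective dynamics depend on a single architectural parameter: below a critical coupling strength the phases drift incoherently, above it they lock. The sketch below (arbitrary frequency spread and network size) measures the usual order parameter r:

```python
import math
import random

def kuramoto_order(K, N=50, T=20.0, dt=0.01, seed=0):
    """Simulate N globally coupled Kuramoto phase oscillators and return
    the final order parameter r (0 = incoherent, 1 = fully synchronized)."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    omega = [rng.gauss(0.0, 0.5) for _ in range(N)]   # natural frequencies
    for _ in range(int(T / dt)):
        mx = sum(math.cos(t) for t in theta) / N
        my = sum(math.sin(t) for t in theta) / N
        r, psi = math.hypot(mx, my), math.atan2(my, mx)
        # mean-field form of the all-to-all Kuramoto coupling
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    mx = sum(math.cos(t) for t in theta) / N
    my = sum(math.sin(t) for t in theta) / N
    return math.hypot(mx, my)

weak, strong = kuramoto_order(0.0), kuramoto_order(3.0)
```

Replacing the all-to-all coupling with a sparse or structured connectivity matrix is precisely where the open architecture-versus-dynamics questions above begin.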
Protein folding is a collective self-organization process, conventionally described as a chemical reaction. However, this process generally does not occur by an obligate series of discrete intermediates, a "pathway," but by a multiplicity of routes down a folding funnel [1,2,3,4,5,6,7]. Dynamics within a folding funnel involves the progressive formation of an ensemble of partially ordered structures, which is transiently impeded in its flow toward the folded structure by trapping in local minima on the energy landscape. As one proceeds down the folding funnel, the different ensembles of partially ordered structures can be conveniently described by one or more collective reaction coordinates or order parameters. Thermodynamically this funnel is characterized by a free energy that is a function of the reaction coordinate which is determined by the competition between the energy and entropy. A crucial feature of the funnel description is the concerted change in both the energy and the entropy as one moves along the reaction coordinate. As the entropy decreases so does the energy. The gradient of the free energy determines the average drift up or down the funnel. Superimposed on this drift is a stochastic motion whose statistics depends on the jumps between local minima. To first approximation this process can be described as diffusion. Folding rates are determined both by the free energy profile of the funnel and the stochastic dynamics of the reaction coordinates.
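The 'drift plus diffusion down a funnel' picture sketched above can be mimicked in one dimension with overdamped Langevin dynamics on a toy free-energy profile: a quadratic funnel plus a small rugged term supplying the local minima that transiently trap the trajectory. All constants below are invented for illustration; nothing here is fitted to a real protein.

```python
import math
import random

def fold(steps=20000, dt=1e-3, temp=0.2, seed=0):
    """Overdamped Langevin motion  dq = -F'(q) dt + sqrt(2*T*dt) * xi
    on the rugged funnel  F(q) = 2*(q-1)^2 + 0.3*cos(12*pi*q)."""
    rng = random.Random(seed)
    q = 0.0                                  # start at the 'unfolded' end
    for _ in range(steps):
        grad = (4.0 * (q - 1.0)
                - 0.3 * 12.0 * math.pi * math.sin(12.0 * math.pi * q))
        q += -grad * dt + math.sqrt(2.0 * temp * dt) * rng.gauss(0.0, 1.0)
    return q

# average several runs: trajectories drift to the funnel bottom near q = 1,
# pausing transiently in the local minima (the 'traps') along the way
mean_final = sum(fold(seed=s) for s in range(5)) / 5.0
```

Raising the ruggedness amplitude relative to the temperature slows the descent dramatically, which is the one-dimensional cartoon of how trap depth controls folding rates.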
Despite our tremendous knowledge about neuronal properties, the number of insights into how variations in neuronal properties influence the behavior of neurons has remained relatively limited. One of the limitations has been our own brain, which, for most of us, has a limited ability to integrate this ever increasing amount of data. We have been aided in recent years by computational approaches in which mathematical analysis and computer simulation are used to model the structure and function of the nervous system. Again, however, the number of new insights remains limited. The neurobiologists in this working group will outline for the other participants the types and complexity of our data set, as outlined above, and how the output of small networks of these neurons, as exemplified by the presentation of Dr. Calabrese, depends on the properties of its elements. The goal is to discover mathematical, statistical and theoretical approaches that physicists and engineers may have used to address or answer analogous issues in their disciplines.
The key issue is to connect either the *mechanisms* or the optimization/computational *principles* embodied in a given model with experimental tests. Too often, models are judged by comparing the pictures they produce with experimental pictures, but the details of the pictures often depend on inessential aspects of the model, or on issues that the modeler purposely did not address, or even on nonbiological rather than biological elements of the model. Instead, we must use models to understand how different mechanisms and/or principles lead to different outcomes, and, thus, to formulate experiments that can test and distinguish between proposed mechanisms or principles.
The key point here I think is that, to be useful, such collaborations must be deeply rooted in data and in experimental work. The theorist must think about some particular system on which experimentalists work and get to know that system, understand our current knowledge and our current experimental capabilities and limitations. This can only happen through deep and frequent interaction with experimentalists. Given such interaction, the theorist can bring his or her tools to bear, hopefully to better and more systematically understand the system. Such interaction leads to new points of view and new questions that may uncover real structure that was not apparent. But in the absence of such interaction, theoretical work is not likely to deeply influence our understanding of the brain  it will stand apart.
Obviously, we need to understand how much processing is achieved by single neurons or small neuron assemblies. This is still terra incognita, even though it is so basic.
Neural Development:
One should seek extremely well defined systems, e.g., the development of the lateral geniculate nucleus, and attempt a convergence of mathematical modelling, molecular neurobiology, and neuroanatomy to provide a complete explanation of developmental strategies of a few brain structures.
Brain Maps and Computational Geometry:
Brain maps associate neurons with tasks and can capture topological characteristics of task (vision, auditory, motion, language) spaces through synaptic connections. Concepts can be borrowed from computational geometry. This might benefit theoretical neurobiology as much as learning theory benefited it earlier. Neural network theory, insofar as it deals with algorithms, should become a branch of the theory of algorithms and avoid as much as possible its fuzzy roots in neurobiology.
Neural Coding:
A central question remains how the brain codes information. Brain areas for which maps are well established can serve to achieve further advances. Still of particular interest is firing synchronicity for binding of information.
Engineering Approaches to Theoretical Neurobiology:
To test the practical validity of concepts Theoretical Neurobiology should adopt engineering approaches for the study of integrative brain functions, e.g., for the study of visuomotor coordination. Integrative brain functions are also a means to study how several brain capacities, e.g., vision and motion, work together.
Brain Implants:
Brain implants are becoming a practical means for partial restoration of damaged brain functions. The question of how implants should present (e.g., acoustical) information to provide an optimal input to brain tissues should be pursued.
Consciousness:
Forget it for a while.
Recording from very large neural populations (MRI, Optical Imaging, PET, SQUID, Multiple electrodes, etc.)
Analysis of large scale neural databases.
Dynamical models of neural populations.
Architecture, function and dynamics.
Mobilization of neurons by networks.
General organizing principles.
Problems:
Analysis and role of stochasticity.
Extraction of small signals from an active background.
Extension of single neuron activity to dynamics of large populations.
Realistic models.
Neural codes and information content.
New recording instruments with improved spatiotemporal resolution.
Investigation of the above topics requires expertise in a broad range of topics: statistical mechanics; stochastic analysis; dynamical systems; chaos theory; information and coding theory; large scale computations; signal analysis; and more. Perhaps new physical and biological principles and new mathematical concepts will be required. The vastness of the topics and goals will demand the joint effort of biologists, engineers, mathematicians and physicists.
This synthetic approach is usually seen as falling outside the purview of traditional neuroscience research, which historically emphasizes the ANALYSIS of biological neural systems and the extraction of basic organizational and functional principles, but does not extend to applying those principles to the creation of artificial systems. Nor is this approach widely embraced by the engineering community, largely because neurally inspired solutions are not yet sufficiently advanced to routinely outperform more traditional adaptive signal processing and adaptive control techniques. Thus there are currently only a few research laboratories worldwide that are making serious attempts to design and construct artificial neuromorphic systems with the goal of generating insights into biological function.
When considering the complementary approaches of analysis and synthesis of intelligent systems, many of the scientific issues that arise are similar, but other issues are unique to one domain or the other. For example, a neurobiologist concerned with the analysis of a neural circuit involved in pattern generation during locomotion might ask "what are the underlying biophysical interactions that give rise to and modulate the observed phase relationships between individual neurons in this circuit?" On the other hand, a neuromorphic design engineer who wanted to synthesize a locomotor control circuit for a robot might ask "what are the essential dynamics required to construct a network of N neuron-like oscillators with M adjustable phase relationships?" The questions that arise when considering the synthesis of neuromorphic systems tend to be of a broader and more general nature than those that arise in the analysis of one particular biological model system. Thus synthetic approaches can both broaden and deepen our understanding of functional and organizational principles in biological systems.
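The engineer's question above can be made concrete with a minimal sketch, entirely my own construction rather than anything from this report: a network of neuron-like phase oscillators whose pairwise phase relationships are programmed through a coupling matrix, in the spirit of Kuramoto-type models of locomotor pattern generation.

```python
# A minimal sketch (my construction, not from the report) of N neuron-like
# phase oscillators whose pairwise phase relationships are programmed
# through a coupling matrix, in the spirit of Kuramoto-type models of
# locomotor pattern generation.
import math

def step(phases, omega, K, offsets, dt=0.01):
    """One Euler step of d(theta_i)/dt = omega_i + sum_j K_ij*sin(theta_j - theta_i - offsets_ij)."""
    n = len(phases)
    return [phases[i] + dt * (omega[i] + sum(
                K[i][j] * math.sin(phases[j] - phases[i] - offsets[i][j])
                for j in range(n) if j != i))
            for i in range(n)]

# Two oscillators asked to lock half a cycle (pi) apart, as in an
# alternating left-right gait.
omega = [2 * math.pi, 2 * math.pi]        # 1 Hz intrinsic frequencies
K = [[0, 2], [2, 0]]                      # symmetric coupling strengths
offsets = [[0, -math.pi], [math.pi, 0]]   # programmed phase offsets
phases = [0.0, 0.1]                       # start nearly in phase
for _ in range(20000):
    phases = step(phases, omega, K, offsets)
diff = (phases[1] - phases[0]) % (2 * math.pi)
# diff settles near the programmed separation of pi
```

Starting nearly in phase, the two units drift to the programmed half-cycle separation; adding units and editing the offset matrix yields other phase patterns, which is exactly the "M adjustable phase relationships" design question.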
A few crossdisciplinary research areas relevant to the synthesis of neuromorphic systems include:
evolutionary algorithms and coding schemes for evolving complex systems
developmental algorithms and analysis of developmental dynamics for self-organization of artificial neural circuits
design principles for constructing/developing/evolving network architectures that meet particular dynamic specifications
real-time emulation of neural circuits in silicon, in both analog and digital implementations
biologically inspired robotics: active sensing, adaptive motor control, sensorimotor and cross-modality integration, adaptive behavior, etc.
1.2. Multi-Level Modeling
1.3. Sociological Obstructions to Collaboration
1.4. Underemphasized Issues
  Clinical Applications
  Neural Mechanisms for Social Interactions
2. Small Systems
2.1. Detailed Analysis of Single Cell Function
2.2. Polyfunctional Circuitry
2.4. Canonical Circuits
3. Large Populations of Neurons
3.1. Coding in Neural Systems
  Neuronal Codes in Sensory Systems
  Neuronal Codes in Motor Systems
  Population Encoding
  Multi-Neuronal Code Exhibited in Firing Rates
  Information vs. Meaning in Neural Systems
3.2. Cortical Dynamics
  Cortex as a Dynamical System
  Cortical Memory
3.3. Multiple Brain Regions
  The Binding Problem
3.4. Nonlinear Dynamics for the Neurosciences and Analogous Subjects
  Chaos Theory
  Multiple Time Scales
  Numerical Analysis of Bifurcations
  Protein Folding Problem: The Dynamics and Thermodynamics on Complex Landscapes
  Relationship Between Network Architecture/System Decomposition and Dynamics
  Statistical Mechanics
4. Systems Perspectives
4.1. Cognitive Neuroscience
  From Neuroethology to Cognition
  Relating Neural Networks to Behavioral Outcomes
  Processing Information in Context
4.2. Self-Organization: The Brain as a Continually Self-Organizing System
  Neural Development
  Dynamics of Neural Codes
  Brain Maps and Computational Geometry
4.3. System-Level Models of Visuomotor Coordination
  Control of Saccades
  Prism Adaptation for Throwing
5. Autonomous Robots/Brain Synthesis/Neuromorphic Engineering
5.1. Analog VLSI
5.2. From aVLSI to Functioning Robots
5.3. Biologically Inspired Robotics
5.4. Brain Implants
5.5. Brain Theory <> Autonomous Robots
  Action-Oriented Perception
  Styles of Learning
  Hand-Arm Robots
  Mobile Robots
  The Robotic Salamander
  Rana Robotrix
  Autonomous Flying Vehicles
5.6. Socio-Robotics
5.7. Organizational Principles for Neuroethology, Cognitive Architecture and Autonomous Robots: Principles for the Design of Intelligent Agents
  Principle 1: Action-Oriented Perception
  Principle 2: Cooperative Computation of Schemas
  Principle 3: Interaction of Partial Representations
  Principle 4: Evolution and Modulation
  Principle 5: Schema-Based Learning
  Principle 6: The "Great Move" in the Evolution of Intelligence
  Principle 7: Distributed Goal-Directed Planning
  Principle 8: Intelligence is Social
6. Data Analysis and Data/Model Sharing
6.1. Brain Models on the Web: Confronting Models & Data
  Putting a Model into Brain Models on the Web
  Towards Standards for Neural Simulation Languages
6.2. Imaging Patterns of Neural Activity
6.3. Monitoring a Patient's Brain State
My remarks will be exemplified by a dependency structure known in statistics as a 'Wermuth condition' and also referred to as the 'explaining away' phenomenon in the AI literature. I will make use of an example due to Judea Pearl. Suppose that you have an Alarm system installed in your home. Alarms can be triggered by Burglars. To model this piece of knowledge we might invoke a two-node network, with a node for Burglar, a node for Alarm, and an excitatory link between the nodes. We also want to be able to do inductive inference, that is, to infer that Burglar is more likely if Alarm is observed, so we assume some neurally plausible version of backpropagation that can go backward along the excitatory link. (We could also make use of Bayes' rule.) This process will have an effective excitatory effect and drive the activation of Burglar higher if the activation of Alarm is increased. Now imagine that you are driving home because someone has told you that your Alarm is going off and you are worried about Burglars. On the way home you turn on your radio and discover that there has been an Earthquake in your area. Earthquakes can shake the ground and trigger Alarms, thus we add to our circuitry another node for Earthquake with a positive link to Alarm. We also invoke the ability to go backwards, which again has an effective excitatory effect. Note that the entire circuitry is excitatory, both in the forward and backward directions. Thus, hearing about the Earthquake will make Alarm even more likely, which will increase the belief in Burglar. However, the behavioral facts of the matter are clearly that as soon as one learns about the Earthquake one stops worrying about the Burglar. The Earthquake ``explains away'' the Burglar. But how can we capture this in our circuitry? Where is the inhibition?
It makes no sense to add inhibition between Burglar and Earthquake, because (a) as children we learned about Burglars and Earthquakes at different times, so there was no opportunity for correlative learning; (b) leaving learning aside, if we have to make Burglar inhibit Earthquake then we have to make essentially everything inhibit everything else; (c) if anything, Burglars and Earthquakes are marginally positively correlated (cf. the looting associated with Earthquakes). So simple fixed inhibition is not adequate. The other standard neural network fix, introducing hidden units, is also not very appealing, because one would require that the connections to these hidden units be learned, and this would require going home and finding that indeed there was no Burglar. Moreover, the fact is that with proper use of probability theory one can {\em infer} that the probability of Burglar should decrease if Earthquake is observed, given the excitatory influences I've described above; this doesn't need to be {\em learned}.
There are message-passing algorithms that can handle the explaining-away phenomenon. Pearl's original work was restricted to acyclic graphs, but subsequent work has removed that restriction. Some of the currently available algorithms have a ``neural'' flavor; others do not. The basic concept needed is that of a ``Markov blanket.'' Algorithms that take into account the full Markov blanket of a node are in a position to handle the explaining-away semantics readily; algorithms that do not take the Markov blanket into account make accounting for the phenomenon overly difficult.
There are other dependency phenomena that are as sensible behaviorally as the explaining-away phenomenon. I would argue that these are the kinds of phenomena one should be aiming at in thinking about the computational properties of neural circuits. Simply accepting the linear-plus-sigmoid network with excitatory and inhibitory weights as a computationally adequate formalism, either because of its Turing equivalence or because of its putative relationship to anatomy and physiology, misses the boat (indeed the standard layered network activation rules do {\em not} take into account the entire Markov blanket of each of the nodes). There needs to be a better computational theory in place to guide research on network algorithms. My own view is that this computational theory is available, at least in part, in the statistical literature on graphical models, where graphs and probability theory are married.
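To make the explaining-away semantics concrete, here is a minimal sketch of Pearl's Burglar/Earthquake/Alarm network, computed by brute-force enumeration over the joint distribution rather than by any message-passing algorithm; the conditional probability values are illustrative choices of mine, not figures from this report.

```python
# Explaining away in Pearl's Burglar/Earthquake/Alarm example, computed by
# brute-force enumeration over the joint distribution. The probability
# values below are illustrative assumptions, not from the report.
from itertools import product

P_B = 0.001   # prior probability of Burglar
P_E = 0.002   # prior probability of Earthquake

# P(Alarm = 1 | Burglar, Earthquake)
P_A = {(1, 1): 0.95, (1, 0): 0.94, (0, 1): 0.29, (0, 0): 0.001}

def joint(b, e, a):
    """Joint probability P(Burglar=b, Earthquake=e, Alarm=a)."""
    pb = P_B if b else 1 - P_B
    pe = P_E if e else 1 - P_E
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    return pb * pe * pa

def prob_burglar(observed):
    """P(Burglar=1 | observed), where observed fixes 'E' and/or 'A'."""
    num = den = 0.0
    for b, e, a in product((0, 1), repeat=3):
        vals = {'B': b, 'E': e, 'A': a}
        if any(vals[k] != v for k, v in observed.items()):
            continue
        p = joint(b, e, a)
        den += p
        if b:
            num += p
    return num / den

p_alarm_only = prob_burglar({'A': 1})           # Alarm observed
p_alarm_quake = prob_burglar({'A': 1, 'E': 1})  # Earthquake also observed
# Hearing about the Earthquake *lowers* the belief in Burglar, even
# though every link in the network is "excitatory".
assert p_alarm_quake < p_alarm_only
```

With these numbers, observing the Alarm alone raises P(Burglar) to roughly 0.37, while additionally observing the Earthquake drops it below 0.01: the inference itself, not any inhibitory wiring, produces the effective inhibition.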
Data reduction methods are thus essential in attempts to digest the large amounts of information associated with the investigation of highdimensional processes. In recent years, many promising methods have been developed or adapted for analyzing massive data sets consisting, for example, of sequences of high-resolution images or computer dumps of numerical simulations. In addition, these methods provide a means for remodeling redundant macroscopic models into reduced systems of equations which reflect the intrinsic dimensionality of the process in question. While these tools are still being developed, they may already provide significant benefits to researchers who are currently in need of data reduction methods. Also, the developers of these techniques would benefit greatly from involvement in interdisciplinary research efforts which emphasize applications to challenging real world problems and concomitant constraints, rather than basing method design on the performance of the approaches on toy problems.
The human brain has staggering information processing and storage capabilities. The brain's ability to accomplish tasks such as image recognition suggests powerful data reduction tools. Mathematically derived procedures for optimal data reduction are still far behind in overall general performance. There will be an enormous payoff if an understanding of biological information processing techniques can be translated into computational algorithms for modeling and model reduction. Efforts in this direction might be best pursued by research groups with appropriately diverse backgrounds in Mathematics, Engineering, Physics and Biology.
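As one example of the mathematically derived data reduction procedures discussed above, here is a minimal sketch, assuming NumPy and using synthetic data of my own construction, of principal component analysis recovering the intrinsic dimensionality of a nominally high-dimensional process.

```python
# A minimal sketch (assuming NumPy; data synthetic and of my own
# construction) of one standard data reduction method: principal
# component analysis via the singular value decomposition, which
# recovers the intrinsic dimensionality of a high-dimensional process.
import numpy as np

rng = np.random.default_rng(0)

# 500 samples of a 50-dimensional "measurement" driven by 2 latent sources.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 50))
data = latent @ mixing + 0.01 * rng.normal(size=(500, 50))

centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = (s**2) / (s**2).sum()   # fraction of variance per component

reduced = centered @ Vt[:2].T       # 500 x 2 reduced representation
# The first two components account for essentially all the variance,
# exposing the 2-dimensional structure hidden in 50 measured channels.
```

The same decomposition underlies proper orthogonal decomposition of simulation dumps: the leading singular vectors supply the basis for a reduced system of equations.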
Such an approach suggests more than collecting teams of scientists from different academic departments and industry. While this is an essential component, it is necessary that we educate future scientists, engineers and mathematicians with a core curriculum that permits the rapid exchange of ideas across diverse fields. One of the major arguments for teaching a mainstream calculus course, rather than a specialized version for the pure mathematicians and an applied-flavored one for the engineers, is that students should develop a common knowledge base. In addition, special courses which integrate research into the classroom, targeting a broad audience from many disciplines, will go far toward remedying the Tower of Babel situation that currently exists across research as a whole.
There is a gap in the theoretical tools available to deal with meaningful information. Measures of information transfer have been used in analysis of the bandwidth of photoreceptors in the fly's eye. This kind of analysis is mostly based on the idea that all information is equally accessible and equally valuable. It does not, for example, deal well with ``detectors'' like the ``fly detector'' of the frog's retina, which, with a small number of action-potential bits, communicates very significant information to the animal. Thus, the relationship between meaningful information and information transmission in the physical sense remains to be developed.
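A toy calculation, with probabilities invented here purely for illustration, makes the Shannon-style measure underlying such bandwidth analyses concrete, and also makes its limitation plain: the measure counts bits but is silent about how valuable those bits are to the animal.

```python
# A toy calculation (probabilities invented for illustration) of the
# Shannon measure underlying bandwidth analyses: mutual information
# between a binary stimulus ("fly present?") and a binary spike response.
# The measure counts bits; it says nothing about their value to the frog.
import math

def mutual_information(p_stim, p_spike_given_stim):
    """I(S;R) in bits for a discrete stimulus S and binary response R."""
    mi = 0.0
    for s, ps in enumerate(p_stim):
        for r in (0, 1):
            pr_given_s = p_spike_given_stim[s] if r else 1 - p_spike_given_stim[s]
            # marginal P(R = r)
            pr = sum(p_stim[s2] * (p_spike_given_stim[s2] if r
                                   else 1 - p_spike_given_stim[s2])
                     for s2 in range(len(p_stim)))
            if pr_given_s > 0:
                mi += ps * pr_given_s * math.log2(pr_given_s / pr)
    return mi

# A reliable detector: spikes 95% of the time a fly is present.
mi_reliable = mutual_information([0.5, 0.5], [0.05, 0.95])  # about 0.71 bits
# An unreliable detector transmits far less, however vital the stimulus.
mi_noisy = mutual_information([0.5, 0.5], [0.4, 0.6])       # about 0.03 bits
```

Both numbers are "just bits"; nothing in the formalism distinguishes the life-or-death fly-detector bit from an equally reliable but behaviorally irrelevant one, which is exactly the gap noted above.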
Evolved biological systems include a number of features of self-organization not found in digital computers. Firstly, neuronal systems must maintain their own integrity. Thus, homeostatic functions exist at all levels, from the subcellular to the network level. It is unclear how the requirement for metabolic optimization influences information processing strategies. It has been proposed that the bandwidth constraint of the optic nerve, for example, dictates a specific information encoding by the retinal circuits. In this example, the metabolic/physical constraint of low bandwidth interacts with computation. In the cortex, a simple homeostatic problem is how to preserve electrical stability in the face of recurrent excitatory connections. Secondly, the brain "programs" itself in three distinct ways: through development, during learning, and when it simply executes routine functions of the organism. There are any number of important open issues in this area; for example, the problem of local learning rules for recurrent networks. In addition, neural network research has done little to address the problem of dynamic programmability of a network; programmability that does not rely on long-term synaptic changes. A hallmark of this kind of programmability is the use of the same neurons in different contexts to perform different functions. A simple example of this is found in the parietal cortex, where neurons shift their receptive fields in anticipation of an eye movement.
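The stability problem posed above by recurrent excitatory connections can be made concrete with a two-population rate model; the sketch below is my own construction with arbitrary parameter values, not a model taken from this report.

```python
# A minimal sketch (my construction; parameters arbitrary) of the cortical
# stability problem: a recurrent excitatory population with loop gain
# above one runs away, while feedback inhibition restores a stable
# operating point.
def simulate(steps, w_ei, w_ee=1.2, w_ie=1.0, inp=1.0, dt=0.1):
    """Euler-integrate dE/dt = -E + w_ee*E - w_ei*I + inp and dI/dt = -I + w_ie*E."""
    E = I = 0.0
    for _ in range(steps):
        dE = -E + w_ee * E - w_ei * I + inp
        dI = -I + w_ie * E
        E = min(E + dt * dE, 1e6)   # cap to keep the runaway case finite
        I += dt * dI
    return E

runaway = simulate(500, w_ei=0.0)  # no inhibition: excitation diverges
stable = simulate(500, w_ei=1.0)   # feedback inhibition stabilizes the circuit
# At the stable fixed point, E = inp / (1 + w_ei*w_ie - w_ee) = 1.25.
```

With the excitatory loop gain w_ee above one, the isolated population grows without bound; the inhibitory feedback loop shifts the effective gain below one and the circuit settles at a finite rate.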
The general method for processing information in a context is also an open question. For example, there is no circuit-level description of how psychophysically described processes such as motion capture and perceptual grouping occur. At a cognitive level, an analogous problem arises when ambiguous symbols, such as letters common to Cyrillic and Roman text, are interpreted as belonging to a particular script. This phenomenon shows a switching dynamic similar to the perceptual switching in binocular rivalry. The role of attention in isolating features from their context is also an open problem. It has been demonstrated that the response of neurons in V4 to a stimulus in the presence of a distractor is reduced, and that this reduction can be reversed if the animal attends to the stimulus. By what neuronal mechanism is this function realized?
Meaning is defined only in a context. The method of traditional science and engineering has been to isolate the problem from its context. In these disciplines, the observer is not part of the system, the specifications are not changed by the material, and the animal is anesthetized! The challenge of contemporary science is to find tools to incorporate what has been left out.
What are the problems? I do not believe that there is a finite list of unresolved problems in neurobiology. However, a basic division is developing in what remains to be done in cellular neurobiology. A dominant theme in recent years has been the growing realization that many mechanisms at the cellular level are conserved between types of cells and organisms. Our understanding of synaptic transmission and voltage-gated channels, for example, is heavily focused on problems that are general to many types of cells and increasingly are becoming problems in structural biology. There is a growing need for scientists with very good quantitative skills to work on these problems, but they are, in many ways, not problems that are characteristic of the nervous system.
The future of work on the nervous system clearly lies, I believe, in understanding how animals (including people) use the nervous system to execute the various categories of behavior. The small groups established for this workshop are a reasonable representation of the problems that we will face as we continue thinking about nervous systems as dynamic entities. The fundamental problems are how animals carry out specific tasks. How do they see, how do they learn, how do they generate movements, etc.