    NEUROSCIENCE

    Road Maps for Understanding the Human Brain

    PROJECT LEADER
    Arthur Toga, Professor of Neurology, Director of the Laboratory of Neuro Imaging, UCLA

    A traffic light turning red sends a signal from your eye through the optic nerve to the visual centers of the brain. From the visual centers, other parts of your brain interpret the image as an instruction to stop your car. At this point, your brain tells your feet to step on the brake pedal. All the while, you're singing along with the radio and deciding whether to roll the window down. To understand how the brain allows humans to perform such tasks, and to investigate changes in the brain related to pathological conditions and disease, a team of NPACI neuroscientists is creating tools to produce road maps of the structures and connections in the brain.

    The team, led by Arthur M. Toga of UCLA, is producing software that can analyze data collected from magnetic resonance imaging (MRI) scans, digitized images of cryosectioned brains--brains frozen and cut into thin slices--and high-powered microscopes (Figure 1). The software can then re-create a 3-D model of the brain and generate a road map that allows a particular brain to be compared against other brains.


    Figure 1: Brain Slices

    The Laboratory of Neuro Imaging at UCLA, led by Arthur Toga, collects high-resolution brain data from MRI scans, digitized slices of cryosectioned brains, and light microscopes. Images such as the slices shown here are processed to create 3-D data sets ranging from 100 megabytes to 10 gigabytes in size.



    MAP MAKERS

    "I'm a mapmaker," said Michael I. Miller, professor in the Biomedical Engineering and Electrical and Computer Engineering Departments at Johns Hopkins University and a participant on the NPACI project. "As in classical mapping, we're producing a road map, but the roads in this case are the interconnecting nerves of the white matter, and the cities are regions of the brain."

    However, simply describing the regions, folds, and surfaces of a particular brain, no matter how accurately, is not enough. To see how a diseased brain differs from a normal one, or how two normal yet unique brains differ, neuroscientists must be able to compare one road map against another. But the brain is a complex 3-D structure that varies from individual to individual; there is no straightforward equivalent to the latitude and longitude of geographic maps.

    The maps that Miller and Toga are generating rely on methods from an emerging field called computational anatomy, formalized by Ulf Grenander of Brown University and Miller. Computational anatomy uses computers to analyze the shapes of brain structures and to produce representations that capture the variation between individuals while still allowing comparisons among them.

    "We're interested in methods for describing topography and surfaces, as geographers map mountains and valleys," said Miller, director of the multi-institution Center for Imaging Studies, based at Johns Hopkins, which includes Washington University, Harvard, MIT's Lincoln Laboratory, and the University of Texas at Austin and at El Paso. "Mapping methods are for comparing areas, shapes, distances, and volumes. They allow us to compare a diseased brain to a normal brain, or look at correspondences between normal brains." (Figure 2)

    To compare the structures of different brains, the structures must be stretched or warped to a standard 3-D template. "These are statistical operations that look at shapes," Toga said. "Warping allows subject-to-subject comparisons, comparisons between modalities for a single subject, or comparisons of a subject to a population."
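    The warping step can be pictured with a toy resampling sketch. This is an illustration only: the function names and the simple affine map are assumptions for the example, not the project's actual statistical warping methods, which fit far richer nonlinear deformations.

    ```python
    import numpy as np
    from scipy.ndimage import affine_transform

    def warp_to_template(volume, matrix, offset, template_shape):
        """Resample `volume` onto a template grid under an affine map.

        `matrix` and `offset` map template coordinates back to subject
        coordinates (the pull-back convention used by affine_transform).
        """
        return affine_transform(volume, matrix, offset=offset,
                                output_shape=template_shape, order=1)

    # Toy example: a subject volume half the size of the template grid.
    subject = np.random.rand(32, 32, 32)
    scale = np.diag([0.5, 0.5, 0.5])   # template voxel -> subject voxel
    warped = warp_to_template(subject, scale, offset=0.0,
                              template_shape=(64, 64, 64))
    print(warped.shape)  # (64, 64, 64)
    ```

    Once every subject lives on the same template grid, voxel-by-voxel and surface-by-surface comparisons become simple array operations.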

    This project is also developing algorithms to combine 3-D data sets from multiple samples of the same or different classes of data, at different levels of spatial resolution. The groups tap different aspects of brain structure and function seen using MRI or PET scans or light microscope images. The same mapping methods also permit the researchers to align data for entry into a common database or to make 3-D comparisons.

    "The brain data from Toga's lab is massive in size," Miller said. "The algorithms we're using can't run on workstations at the full resolution of the brain data. So we're beginning a collaboration with KeLP to run our mapping methods on full-resolution data." KeLP, a project led by Scott Baden of UC San Diego, in NPACI's Programming Tools and Environments thrust area, provides code libraries that allow applications scientists to more easily take advantage of parallel computers for data represented on grids, such as 3-D maps.

    Other collaborators on this project include neuroscientists at Washington University in the laboratories of Marcus Raichle and David Van Essen, post-doctoral researcher Colin Holmes in Toga's lab, and Johns Hopkins graduate student Cui Jing.


    Figure 2: Mapping the Brain

    Michael Miller and graduate student Cui Jing developed software to extract the grey-white matter cortical surface from high-resolution, 3-D human brain data obtained from UCLA. Dynamic programming methods were used to define the maximal contour of the grey-white surface from a quarter-resolution version of the UCLA data. Processing of the full-resolution data will require NPACI infrastructure.



    HIGH-RESOLUTION BRAINS

    Along with data from the Washington University groups, Miller's group uses data from the Laboratory of Neuro Imaging at UCLA, led by Toga, because Toga's group is one of the few that collects such high-resolution data from MRI scans, digitized slices of cryosectioned brains, and light microscopes. Currently, data sets from an individual brain reach 10 gigabytes, or from 10 to 50 times the resolution of the Visible Human data set. Future work will incorporate even higher-resolution data derived using the electron microscope by Mark Ellisman's group at UC San Diego. Data from a single brain collected at this future resolution will reach petabyte levels.

    In addition to warping and mapping methods, Toga's group is working on distortion correction algorithms to correct any deformation imposed in the collection of data.

    Part of the NPACI support for this project has led to the implementation of large disk caches at UC San Diego, UCLA, and Washington University and to high-speed network connections among the sites. "The data are really only part of the problem," Toga said. "The way we organize the data is also a challenge. Can we manipulate the data sufficiently to allow modeling, comparisons of data from the same subject, from different subjects, from different species, or between different modalities? We're providing an organizational schema to move the data around efficiently." These projects are integrated with the Federating Brain Data project to accomplish this goal.

    Key to the collaboration among UCLA, Washington University, UC San Diego, and Johns Hopkins is the ability to exchange and organize data. The question becomes how to index and query a 10-gigabyte database record that includes surfaces and voxels and still allow navigation through the data, independent of variations in the performance of networks, disks, and infrastructure in general.

    "We need a database approach in which records are not limited to a fixed maximum size," Toga said. "To transfer such information, we must prioritize the data that gets sent and only provide the necessary information to satisfy the query. This allows efficiency. Such a capability applies to any domain that uses large volume data." So far, the data are exchanged via files. The UCLA group has designed, but not yet implemented, a data structure for exchanging data via databases.

    One approach is to organize the data hierarchically by anatomical structure. For example, the data might be organized to allow macroscale structures to be requested and delivered. Further queries might zoom in on smaller structures. Alternatively, the data can be organized hierarchically by voxel resolution. Thus as the user zooms in on a region, higher resolution voxel maps would be retrieved.
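    The voxel-resolution hierarchy described above can be sketched as a mean-pooled pyramid. The function names and the 2x2x2 pooling scheme are assumptions for this minimal illustration, not the group's implemented data structure.

    ```python
    import numpy as np

    def build_pyramid(volume, levels):
        """Build a resolution hierarchy by 2x2x2 mean-pooling; level 0
        is full resolution, higher levels are coarser overviews."""
        pyramid = [volume]
        for _ in range(levels - 1):
            v = pyramid[-1]
            # Trim to even dimensions, then average 2x2x2 neighborhoods.
            v = v[:v.shape[0] // 2 * 2, :v.shape[1] // 2 * 2, :v.shape[2] // 2 * 2]
            v = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2,
                          v.shape[2] // 2, 2).mean(axis=(1, 3, 5))
            pyramid.append(v)
        return pyramid

    def fetch(pyramid, level, region):
        """Serve a query: coarse levels first, finer voxels as the user zooms."""
        return pyramid[level][region]

    vol = np.arange(64**3, dtype=float).reshape(64, 64, 64)
    pyr = build_pyramid(vol, 3)
    print([p.shape for p in pyr])  # [(64, 64, 64), (32, 32, 32), (16, 16, 16)]
    ```

    A query for an overview touches only the small top level; the full-resolution voxels travel over the network only for the region actually being inspected, which is the prioritization Toga describes.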

    FROM VISUAL TO MICROSCOPIC SCALE

    A third set of algorithms and codes is being developed by Mark Ellisman and colleagues at UC San Diego. Ellisman, leader of the Neuroscience thrust area and director of the National Center for Microscopy and Imaging Research at San Diego, is working on codes for electron microscope tomography. This is a computationally intensive approach used to derive high-resolution, 3-D data on biological structures from electron microscope images.
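    The principle behind tomographic reconstruction can be illustrated with a simple unfiltered back-projection in 2-D: each projection is smeared back across the grid at its tilt angle and the results are accumulated. This is a toy sketch of the general idea, not the group's electron-tomography codes, which work in 3-D with filtering and alignment corrections.

    ```python
    import numpy as np
    from scipy.ndimage import rotate

    def project(image, angles):
        """Forward-project: sum the image along one axis at each tilt
        angle, mimicking a microscope tilt series."""
        return [rotate(image, a, reshape=False, order=1).sum(axis=0)
                for a in angles]

    def back_project(sinogram, angles, size):
        """Unfiltered back-projection: smear each 1-D projection back
        across the grid at its angle and accumulate."""
        recon = np.zeros((size, size))
        for proj, a in zip(sinogram, angles):
            smear = np.tile(proj, (size, 1))
            recon += rotate(smear, -a, reshape=False, order=1)
        return recon / len(angles)

    angles = np.linspace(0.0, 180.0, 36, endpoint=False)
    phantom = np.zeros((64, 64))
    phantom[24:40, 24:40] = 1.0          # a bright square "specimen"
    sino = project(phantom, angles)
    recon = back_project(sino, angles, 64)
    print(recon.shape)  # (64, 64)
    ```

    The reconstruction is brightest where the phantom was, which is why many projections from many angles suffice to recover 3-D structure; the computational cost of doing this at microscope resolution is what makes the problem a natural fit for parallel machines.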

    "We work at the gross morphology, visual scale and the light microscope scale," Toga said. "The work by Ellisman's group goes down the spatial scale. The ultimate goal would be to link the structures revealed at the electron microscope scale with those at the light microscope and gross morphology scales to provide a continuous picture of the brain."

    Ellisman's lab is collaborating with projects from NPACI's Metasystems thrust area. The Globus system offers a toolkit to distribute the codes across networked systems, and the AppLeS scheduler helps in running the code (see ENVISION, July–September 1998). Together, the collaborators have implemented an initial version of a distributed, heterogeneous parallel program in which the tomographic computation is performed on a cluster of networked workstations.

    "To produce these road maps for the human brain, we have to be able to access, analyze, and measure such large data sets," Miller said. "Understanding the road maps--the connections and structures--reveals how the brain functions. The tools that the Neuroscience thrust area is producing are invaluable for the studies we're doing."--DH END