METASYSTEMS

Globus: Around the World and onto Your Desktop

Carl Kesselman, Research Associate Professor, Computer Science Department, University of Southern California; Project Leader at USC Information Sciences Institute; Visiting Associate, Computer Science Department, Caltech

Ian Foster, Senior Scientist, Mathematics and Computer Science Division, Argonne National Laboratory; Head, Distributed Systems Laboratory, ANL; Associate Professor, Department of Computer Science, University of Chicago

The Internet of the future will connect not only computers, but also large data warehouses and advanced scientific instruments, such as high-powered electron microscopes. Biologists, for example, will mail their specimens to the lab where the microscope resides, perhaps on the far side of the globe, but examine them by controlling the microscope from workstations in their offices. As the biologists view their slides, the system will store digitized images in a large database or generate high-resolution 3-D images on a collection of high-end computers. All the components will operate together as a seamless environment for scientific discovery.

"With advances in Internet technologies, which allow us to connect a variety of resources, we have the opportunity to create a new type of computational environment, which we call computational grids," said Carl Kesselman, co-leader of the Globus project. "By providing consistent and uniform access to distributed resources, much like the power grid provides electricity, we have the potential to change the way scientists and engineers use computers."

To realize the potential of such metasystems, Kesselman and co-principal investigator Ian Foster of Argonne National Laboratory started the Globus project. The objective of Globus is to develop the fundamental infrastructure needed for grid applications and to enable distributed high-performance computing--using not only supercomputers, but also other advanced resources like scientific instruments, massive data archives, or virtual reality environments.

Kesselman, a project leader at the University of Southern California's Information Sciences Institute (ISI), and Foster, a senior scientist at Argonne and an associate professor at the University of Chicago, have been collaborating on Globus since its beginning. Globus has its roots in the I-WAY demonstration at SC95 and has since worked to move from demonstration to persistent environment. In just two years, the Globus project has chalked up an impressive set of achievements, winning awards and proving its capabilities in large-scale experiments. The software is currently deployed at more than 40 institutions around the world, including two NPACI international affiliates: the Parallel Computing Center at the Royal Institute of Technology in Stockholm, Sweden, and the University of Lecce in Italy.





Figure 1: Globus and Telemicroscopy

The Globus project at the University of Southern California and UC San Diego's AppLeS group are working with researchers in the Neuroscience thrust area to render and compute tomography data sets in real time by coupling unique scientific instruments, supercomputers, and visualization tools. The project is developing multi-resolution volume visualization techniques that display tomographic data in real time on a range of display devices. The top image depicts a tomography volume of a mitochondrion with 9 million elements; the downsampled volume, shown in a VRML browser, contains only 10,000 primitives.


Globus is a joint project of ISI, the Mathematics and Computer Science Division of Argonne National Laboratory, and The Aerospace Corporation. The Globus environment is also a focus of research in the NPACI Metasystems thrust area, as well as a participant in the NSF's Partnerships for Advanced Computational Infrastructure through NCSA. The Globus project also participates in the DOE ASCI program as a member of the Center for Simulating the Dynamic Response of Materials at Caltech and the Center for Astrophysical Thermonuclear Flashes at the University of Chicago.

Globus provides the fundamental technology needed to let an application integrate geographically distributed instruments, displays, and computational and information resources. Such computations may link tens or hundreds of these resources, as in a recent battlefield simulation demonstration. Globus consists of a collection of core services--including resource management, security, and communication--that enables many systems to be linked and to interoperate. A toolkit layered on top of these services includes utilities such as the Message Passing Interface (MPI) for parallel computing.

"The toolkit is structured as a set of largely orthogonal components," Foster said. "Programmers can select just those services required to meet the needs of a particular application without having to restructure the application in terms of a particular programming model." Local services are kept simple to facilitate deployment, while the more global interfaces are designed to promote the management, rather than the hiding, of heterogeneity. This decoupling of global and local services allows an application to be deployed incrementally, making use of whatever interfaces are currently available. Tools include MPI, a remote I/O library, and a library called Nimrod that facilitates parameter studies.


The main Globus testbed is called the Globus Ubiquitous Supercomputing Testbed (GUSTO). GUSTO currently has 27 participating sites and over 2.5 teraflops of compute power, representing one of the largest computational environments ever constructed.

At SC97, Globus researchers ran a distributed parallel implementation of the DARPA-funded Modular Semi-Automated Forces (ModSAF) Distributed Simulation project. The project linked NPACI's 256-processor HP V2000 at Caltech with about 10 other large-scale systems in six time zones to create a real-time simulation of the movements of more than 50,000 individual vehicles on a battlefield. Globus coordinated 1,900 processors simultaneously in the simulations, which were an order of magnitude larger than any previous run.

In March, however, the Globus team raised the bar higher still with a record-breaking run that simulated 100,298 vehicles distributed across 13 computers at nine sites in seven time zones. Caltech's Synthetic Forces (SF) Express project--funded by DARPA and led by Paul Messina of Caltech, NPACI's chief architect--conducted this largest-ever distributed battlefield simulation during the Technology Area Review and Assessment briefings held at the SPAWAR facility in Point Loma, California. The simulation, demonstrated by Robert Lucas, deputy director of the Information Technology Office at DARPA, also represented the largest metasystem run ever completed and was supported by the Globus metasystem environment.

"Connecting resources from various communities to perform a very large-scale joint computation is of great value not only to the distributed simulation arena, but to state-of-the-art metacomputing experiments," Messina said. "The operational aspects of coordinating 13 separate computers were made much more manageable with Globus' fault tolerant initialization and control mechanisms."

In another application, Cactus--developed at the Max Planck Institute in Potsdam, Germany; within the Alliance; and at Washington University in St. Louis--solutions of Einstein's gravitational wave equations were generated on a supercomputer in Garching, Germany, and visualized remotely in the United States. Astrophysicists use this application to study the structure of the universe.

Other recent accomplishments include a workshop for NPACI Globus consultants in July, at which participants ran programs across an SGI machine at ISI, the IBM SP at SDSC, the HP V2000 at Caltech, and a Sun system at SDSC. The most recent Globus users group meeting, held in August, drew more than 80 attendees from around the world.


The Globus project team members were honored in April with the Global Information Infrastructure Next Generation award for their work in advancing the technology and application of high-performance distributed computing. Foster said that access to metacomputing capabilities will change the way people think about and use high-end computing. "Imagine if the average small investor had access to a $10 million supercomputer able to run a billion calculations per second," he said. "Fundamentally new applications such as tele-immersion, remote visualization, smart instruments and distributed supercomputing are only possible with the creation of new networking software."

The project participants have divided their effort into four main thrusts: research in areas such as resource management, security, and algorithms; tool building via prototype software that runs on a range of heterogeneous platforms; testbeds such as GUSTO that exercise the broadest capacities of the tools; and the development of large-scale applications that fully exploit these resources and demonstrate, through scientific results, what can be achieved with Globus.

Within NPACI, the Globus project is working to integrate the Meta-Chaos project at the University of Maryland (part of the Programming Tools and Environments thrust area) and the SDSC Storage Resource Broker (part of the Data-intensive Computing thrust area) to expand the services available to Globus users. To apply metacomputing power to new problems, Globus is working with the Earth Systems Science thrust project on surface water flow and transport simulations and a Neuroscience project on algorithms and codes for refining brain data (Figure 1). Globus will also be used to improve the services offered by the NPACI User HotPage.

"The development of the World Wide Web has changed the way that we think about information," Kesselman said. "We don't think twice about accessing Web pages that are spread across the world. The goal of the Globus project is to bring about a similar revolution with respect to computation." --MM