


A Strategic Application Collaboration for Molecular Dynamics

Peter Kollman, Professor, Department of Pharmaceutical Chemistry, Associate Dean, School of Pharmacy, UC San Francisco
David Case, Professor, Department of Molecular Biology, The Scripps Research Institute

Over the last two decades, an increasing number of chemists have turned to the computer to predict the results of experiments or to help interpret them. Skepticism on the part of laboratory chemists has gradually evaporated as computational results have made contact with, and even anticipated, experimental findings. When the 1998 Nobel Prize in Chemistry was awarded to Walter Kohn and John Pople, two scientists who originated some of the first successful methods in computational chemistry, the award was widely seen as an affirmation of the value of computational chemistry to the field as a whole.

"We've come a long way," said Peter Kollman of the Department of Pharmaceutical Chemistry at UC San Francisco (UCSF). "But while we've come a long way, we can see that we've still got a long way to go."

Kollman and colleagues David Case of the Molecular Biology Department of The Scripps Research Institute (TSRI), David Pearlman of Vertex Pharmaceuticals in Cambridge, Massachusetts, and Ken Merz of Pennsylvania State University lead a far-flung team of investigators in the development of AMBER, one of the major molecular mechanics and dynamics code packages used to study DNA and proteins by groups around the world. Now, as part of an NPACI Strategic Application Collaboration, AMBER's performance is being improved by 50 percent to 65 percent.

Figure 1: Plastocyanin with alpha-carbon trace
Figure 2: Ball-and-stick plastocyanin with solvent
Figure 3: Ball-and-stick plastocyanin without solvent

Figures 1–3 show results from a molecular dynamics simulation of the plastocyanin protein in water, part of the AMBER benchmark suite.


AMBER stands for Assisted Model Building with Energy Refinement. The code's successes include its use to study protein folding, to study the relative free energies of binding of two ligands to a given host (or two hosts to a given ligand), to investigate the sequence-dependent stability of proteins and nucleic acids, and to find the relative solvation free energies of different molecules in various liquids. Hundreds of contributions to the scientific literature reflect the use of AMBER.

Recently, Yong Duan and Lu Wang of the Kollman group used AMBER to carry out the longest molecular dynamics simulation yet performed of a protein, the 36-residue villin headpiece subdomain. The simulation--run over 100 days at the Pittsburgh Supercomputing Center on a CRAY T3D and then a CRAY T3E at SGI/Cray Research--followed the molecule for a full microsecond. This was possible because of Duan's improvements to the code's efficiency for parallel computing.

When the typical time step is a femtosecond (a quadrillionth of a second) and most "long" simulations cover only hundreds of picoseconds (trillionths of a second), a microsecond is an achievement: it represents one billion time steps and is two orders of magnitude longer than the longest previously published simulation of a protein in water.

AMBER's current release, version 5, consists of about 65 programs, with 930 source files containing nearly 200,000 lines of code, mostly Fortran. The code is sold for UC San Francisco by Oxford Molecular, Inc., under a licensing agreement, and the cost to the academic community is nominal.


"The goal is to carry out simulations that are long and accurate enough to simulate biochemical processes," said Bob Sinkovits of SDSC, who is working on an NPACI Strategic Applications Collaboration with Case and Kollman. "We want to simulate such processes from beginning to end," says SDSC chemist Jerry Greenberg, who is working with Sinkovits on AMBER optimization. "That could mean, in the case of protein folding, something like 20 milliseconds. In the case of the assembly and production of a protein from its template, it might mean several seconds." The same technology could be used to allow simulations on larger systems, or to allow the use of more complex force fields, or to allow repeated computational experiments to explore statistical convergence.

The work with the Kollman and Case groups is the first of three ongoing Strategic Applications Collaborations (SAC) within a program, headed by Jay Boisseau, of the Scientific Computing and Computational Science Research groups at SDSC. The other two collaborations, with Charles Peskin of NYU and Lars Hernquist of UC Santa Cruz and Harvard University, will be covered in future issues of enVision.

"The mission of the SAC program is to enhance the effectiveness of computational science and engineering research conducted by NPACI investigators," Boisseau said. "We want to develop a synergy between the research groups and NPACI staff that accelerates the research and enables new science to be produced over a period from a few months to a year." The goal for collaborations is to discover and develop general solutions that will benefit not only the selected researchers, but also the entire academic community and all users of high-performance computing.

Boisseau plans for the SAC program to reach out to communities new to advanced computational resources, but acknowledges that work with codes like AMBER typifies the program at present. "AMBER is a major computational community resource, and our work to make the main module more efficient will benefit many," he said.


AMBER has a long history. It was originated by Paul Weiner, who worked in the Kollman group in the late 1970s, and the second release, mainly the work of group member U.C. Singh, incorporated the first versions of the free-energy approaches (in the gibbs module). George Seibel put the code on a more general and maintainable basis, and more recent additions were made by Kollman group members Jim Caldwell and Bill Ross.

AMBER is currently developed as an active collaboration among Kollman and his group (which numbers from 15 to 20, including students, postdoctoral researchers, and visitors), Case and research programmer Mike Crowley at TSRI, Tom Cheatham at NIH, Ken Merz at Penn State, David Ferguson at the University of Minnesota, Tom Darden at the National Institute for Environmental Health Sciences (NIEHS), Carlos Simmerling at SUNY Stony Brook, and Dave Pearlman at Vertex Pharmaceuticals in Boston.

"Even though I am working in the pharmaceutical industry," said Pearlman, "I am still working with and on AMBER. The collaborative effort is a tribute to the openness of Peter Kollman and David Case and their groups." Various modifications made by users get incorporated, Pearlman notes, first as independent modules, and then, as usage warrants, in the main package for public distribution. For example, Tom Darden of NIEHS developed the Particle Mesh Ewald (PME) solvers for the SANDER module of the code, important for dealing with long-range interactions.

Pearlman worked on the calculation of free-energy derivatives and is now working on a module to analyze statistical error in ensemble calculations on the fly, which should help in the intercomparison of free energy calculations. "I'm also working on the idea of using a 'free-energy grid' around the target molecule to help drive the direction of modification of an ongoing calculation towards tighter binding or similar outcomes," Pearlman said. "AMBER is not just a code package; it's an environment in which to think and do chemistry."


"Most of our work since we began the Strategic Applications Collaboration in July has been focused on boosting the performance of the SANDER module of AMBER," Sinkovits said. SANDER stands for Simulated Annealing with NMR-Defined Energy Restraints, and the module carries out energy minimization, molecular dynamics, and NMR refinement calculations. It accounts for the majority of the processor time used in most studies (Figures 1–3).

"The calculations can be further divided into two main types--those that employ PME methods and those that do not," Sinkovits said. The team has made significant progress in optimizing the non-PME calculations, with speedups in the range of 1.3 to 2.4 on the SDSC CRAY T3E. More typical are the most recent results, run by Greenberg, which show speedups of 55 percent to 70 percent on 1 to 16 processors. The changes have been retrofitted into the current version of SANDER that is released with AMBER 5.0, as SANDER 5.0.1.

Sinkovits and Greenberg will also work with Case and Crowley at TSRI on optimizing the PME routines. "Optimization is directly tied to scientific progress," Case said. "This project is also tied to our Molecular Science thrust area project to optimize and share code among both AMBER and CHARMM, another major package. The Strategic Applications Collaboration is a very welcome development." --MM