
SDSC'S IRCache Team Announces New Web Cache "Bakeoff" Results

Major Net Cache Vendors Compete Head-to-Head in Performance Test

Published 03/02/2000

For more information, contact:
Duane Wessels, NLANR, 303-497-1822

Boulder, Colorado -- The results of the second Web Cache "Bakeoff" competition have been announced by the IRCache team of the San Diego Supercomputer Center and the National Laboratory for Applied Network Research (NLANR). IRCache, an independent research and support organization for high-performance networking sponsored by the National Science Foundation (NSF), has conducted an unbiased test of products from the industry's major vendors.

"This is our second head-to-head competition between systems that speed up information retrieval over the Internet -- Web cache servers available from commercial vendors and non-profit organizations," said Alex Rousskov, one of the organizers of the IRCache competition.

"We are extremely pleased with the turnout for the second bakeoff," IRCache researcher Duane Wessels said. "At last year's bakeoff, we had only six companies and nine products. Now we have almost triple that -- nearly all the major players in the industry. I think this clearly indicates the relevance of our tests, and the maturity of the caching community to come together like this."

Web caching is a way to reduce traffic and improve response time on the Internet. Instead of connecting directly to distant Web servers, browsers (clients) connect to an "HTTP proxy server" at the Internet Service Provider (ISP), which requests data files from their source servers or from other caches and then saves these files for use in response to future requests. Popular files collect in the caches and might be used many times without needing to be reloaded from remote sites.
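The proxy-cache flow described above can be sketched in a few lines. This is an illustrative in-memory model, not Squid's or any vendor's implementation; the `CachingProxy` class and its `origin_fetch` callable are hypothetical names chosen for the example.

```python
# Minimal sketch of the proxy-cache flow: the proxy checks its local
# store before contacting the origin server, and saves each fetched
# response for use in answering future requests.

class CachingProxy:
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch  # callable: url -> response body
        self.store = {}                   # url -> cached response body
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:             # cache hit: serve locally
            self.hits += 1
            return self.store[url]
        self.misses += 1                  # cache miss: ask the origin server
        body = self.origin_fetch(url)
        self.store[url] = body            # popular files collect in the cache
        return body

proxy = CachingProxy(lambda url: f"contents of {url}")
proxy.get("http://example.com/index.html")   # first request: a miss
proxy.get("http://example.com/index.html")   # repeat request: a hit
print(proxy.hits, proxy.misses)
```

The second request is served from the cache without reloading the file from the remote site, which is exactly the traffic reduction the paragraph describes.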

Several dozen Web cache products have entered the marketplace within the past few years. Unfortunately, competing performance claims have been difficult to verify, and haven't meant quite the same thing from vendor to vendor. IRCache bakeoffs address the data networking community's needs for high-quality, independent performance data on commercial products. Vendors who want to test the performance of their products have an opportunity to do so under impartial, evenly matched conditions.

IRCache investigators Rousskov, Wessels, and Glenn Chisholm developed a software package called Web Polygraph with funding from caching vendors and the NSF. Polygraph simulates Web clients and servers and is becoming a de facto benchmarking standard. It is available at no cost from IRCache.
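As a toy illustration of what such a benchmark measures (this is not Web Polygraph itself, and the service times below are invented constants), one can replay a simulated request stream against a cache and report the same headline figures the bakeoff uses: hit ratio and mean response time.

```python
# Toy cache benchmark: replay a synthetic request stream and compute
# hit ratio and mean response time. Timings are assumed constants,
# purely for illustration of the metrics, not measured values.

import random

HIT_TIME_S = 0.02    # assumed service time for a local cache hit
MISS_TIME_S = 0.50   # assumed round trip to the origin server

def run_benchmark(num_requests, num_urls, seed=42):
    rng = random.Random(seed)
    cache, hits, total_time = set(), 0, 0.0
    for _ in range(num_requests):
        url = f"/object/{rng.randint(1, num_urls)}"
        if url in cache:
            hits += 1
            total_time += HIT_TIME_S
        else:
            cache.add(url)               # unbounded cache, for simplicity
            total_time += MISS_TIME_S
    return {
        "hit_ratio": hits / num_requests,
        "mean_response_s": total_time / num_requests,
    }

print(run_benchmark(10_000, 3_000))
```

The real Polygraph additionally controls the offered request rate and models realistic object sizes and popularity distributions, which is why its results are comparable across vendors.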

The first bakeoff was held in early 1999. Preparations for the second bakeoff began last August with an organizational meeting in Boulder, Colorado. By November, 16 Web Cache vendors had registered 24 products for testing.

The participating vendors were Cisco Systems (two products), Compaq (two products), Dell (two products), IBM (three products), iMimic Networking (one product), InfoLibria (two products), Lucent Technologies (two products), Microbits (one product), Network Appliance (one product), OCD Network Systems (two products), Pionex Technologies (one product), Quantex Microsystems (one product), Squid/IRCache (one product), and Swell Technology (one product). Two vendors, Eolian and Cacheflow, withdrew before testing started on January 17, 2000.

"To ensure a valid comparison, every product was tested under identical conditions within a short period of time and at a single location," Wessels explained. The bakeoff was held in a 50,000 square-foot facility near Houston, Texas, generously provided by Compaq Computer Corporation. For two weeks the IRCache "Polyteam" tested 22 proxy caches. Half of the entries were tested in the first week and the other half in the second. Each vendor had five full days to set up and execute the tests. Vendors had access to the bakeoff facility from 9:00 a.m. until 8:00 p.m., and tests often were queued to run overnight. IRCache rented 120 PCs for use as Polygraph clients and servers.

Which contestant was the winner? "It depends on what you need to do, how much you can spend, and what you consider important," Wessels said. "Which is 'better' -- a Honda Civic, a Humvee, a Corvette, or a Mercedes? Our tests don't identify winners or losers, so people should think carefully about our benchmark results. It's tempting to leap to conclusions based on a single performance graph or a column in a summary table. We believe such conclusions will just about always be wrong."

The technical results of the competition can be summarized, but the Polyteam members strongly recommend using the full report rather than a summary of the competitors' statistics. A summary table of the results is available online, and detailed information about the bakeoff conditions, the competitors, and the test results, with graphs and analyses, is available in the formal report.

The peak throughput -- the highest tested request rate -- for each product ranged from 77 requests per second to 2400 requests per second. The mean response time -- how long it took the product to serve responses -- ranged from 1.35 seconds to 3.08 seconds. The hit ratio -- the percentage of requests that were satisfied as cache hits -- ranged from 32.3% to 55%, the theoretical maximum.

But the total price -- the sum of the list price of all caching hardware plus the cost of networking gear for each tested product -- ranged from $2,250 to $130,000. The price/performance ratios for two performance measurements, request rate and hit rate, are also illuminating -- $1,000 can buy 18 requests per second at one end of the range and 102 requests per second at the other; $1,000 can buy 8 hits per second at one extreme and 54 hits per second at the other. These and other factors varied among the contestants, and evaluations based on ratios of these factors cannot be summarized in a meaningful way.
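The price/performance arithmetic above can be made explicit. The figures below are hypothetical product configurations chosen to land near the quoted endpoints, not the actual bakeoff entries; the formal report pairs each product's real list price with its measured rates.

```python
# Price/performance: how much request (or hit) rate $1,000 of list
# price buys. The example systems are hypothetical, for illustration.

def per_thousand_dollars(rate_per_s, total_price_usd):
    """Rate (requests or hits per second) bought per $1,000 of price."""
    return rate_per_s / (total_price_usd / 1000.0)

# A hypothetical $130,000 system serving 2,400 requests/second:
print(round(per_thousand_dollars(2400, 130_000), 1))  # roughly 18.5 req/s per $1,000

# A hypothetical $2,250 system serving 230 requests/second:
print(round(per_thousand_dollars(230, 2250), 1))      # roughly 102.2 req/s per $1,000
```

Note how a far cheaper system can deliver several times more throughput per dollar, which is why the report warns against judging products on any single figure.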

"Our full report, available on the Web, contains a lot of performance numbers and configuration information, so take advantage of it," Rousskov said. "In particular, read the Polyteam and Vendor Comments sections, and compare several performance factors -- throughput, response time, hit ratio, and so on. Weight each factor by your own needs and preferences."

"The bakeoff tested many important criteria, but omitted others," Wessels said. "Our benchmark addresses only the performance aspects of Web cache products. For example, we think that manageability and reliability are very important attributes that should be considered in any buying decision."

Among the competitors was the Squid Internet Object Cache, developed by the IRCache group and available as "open source" software for all modern versions of Unix. Wessels and others wrote and maintain the code as a worldwide collaboration. Squid has been constantly evolving since 1996, and includes features not found in commercial caching products. An estimated 30,000 to 50,000 sites use Squid -- commercial ISPs, corporations, organizations, universities, and K-12 schools. (No sales figures are available, since Squid is distributed without charge.) The Polyteam testers took great care to ensure a lack of bias in the testing, which was verified by the other participants.

The bakeoff report includes comments by the participating vendors. All of the contestants viewed the results in a positive light, claiming that the results clearly justified the use of their particular products. Participants expressed satisfaction with the fairness of the test and the opportunity to have their products evaluated by unbiased, knowledgeable testers.

The third Web Cache Bakeoff is already being planned for Summer 2000.


IRCache performs research and development in the field of Web caching. Its members are employees of the University of California, San Diego through the San Diego Supercomputer Center and the National Laboratory for Applied Network Research. IRCache offices are located at the NCAR Mesa Laboratory in Boulder, Colorado. IRCache research is sponsored primarily by the National Science Foundation, but the organization also receives donations from many corporations, including Compaq, Cisco, and Alteon.

The National Laboratory for Applied Network Research (NLANR) has as its primary goal to provide technical, engineering, and traffic analysis support for NSF High-Performance Connections sites and high-performance network service providers such as Internet2, UCAID Abilene, the NSF/MCI vBNS, the Next Generation Internet, and STAR TAP. Founded by the National Science Foundation's Computer and Information Science and Engineering Directorate in 1995, NLANR is a "distributed national laboratory" with researchers and engineers at the San Diego Supercomputer Center, the National Center for Supercomputing Applications, the Pittsburgh Supercomputing Center, and the National Center for Atmospheric Research, among other sites.

The San Diego Supercomputer Center (SDSC) is a research unit of the University of California, San Diego, and the leading-edge site of the National Partnership for Advanced Computational Infrastructure (NPACI). SDSC is sponsored by the National Science Foundation through NPACI and by other federal agencies, the State and University of California, and private organizations. For additional information about SDSC, contact David Hart at SDSC, 619-534-8314.

Related Links

For more information from the vendors who participated in the bakeoff, see:

* Cisco Systems
* Compaq
* Dell
* IBM
* iMimic Networking
* InfoLibria
* Lucent Technologies
* Microbits
* Network Appliance
* OCD Network Systems
* Pionex Technologies
* Quantex Microsystems
* Squid/IRCache
* Swell Technology
