Press Archive

Major Net Cache Vendors Compete Head-to-Head in Performance Test

Results Released at Fourth International Web Caching Workshop

Published 04/05/1999

For more information, contact:
Duane Wessels, NLANR, 303-497-1822,

Web Cache Competition:
Web Caching Workshop:

UNIVERSITY OF CALIFORNIA, SAN DIEGO -- The results of the first Web Cache "Bakeoff" competition were released at the Fourth International Web Caching Workshop in San Diego on April 2. The competition was organized by the IRCACHE team of the National Laboratory for Applied Network Research (NLANR), an independent research and support organization for high-performance networking funded by the National Science Foundation.

Six major sources of Web Cache systems -- IBM, InfoLibria, Network Appliance, Novell (in an OEM agreement with Dell), the University of Wisconsin, and NLANR itself -- participated in the competition, which was open to any company or organization with a Web cache product. Several other vendors, while initially interested, declined to participate in a head-to-head comparison.

"A 'bakeoff' is a test of several similar products that perform similar functions," said NLANR's Duane Wessels, organizer of the competition. "In this case, we evaluated the performance of systems that speed up information retrieval over the Internet -- Web cache servers available from commercial vendors and non-profit organizations."

More than half of the traffic on the Internet backbone is related to the World Wide Web. In the basic client-server transaction model, each Web browser client connects directly to each server, so a single file would often be transmitted many times over the same network paths when requested by different clients. This mode of operation can cause severe congestion on many of the Internet's wide-area links.

Caching is a way to reduce client-server traffic and improve response time. Instead of connecting directly to servers, clients are configured to connect to an "HTTP proxy server" at the Internet Service Provider (ISP), which requests Web objects from their source servers (or from other caches) and then saves these files for use in response to future requests. Popular objects collect in the caches and might be used many times without reloading from remote sites.
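The hit-or-miss transaction described above can be sketched in a few lines of Python. This is purely an illustrative toy, not any participant's implementation: the `CachingProxy` class, the `origin_fetch` callable, and the in-memory dictionary are invented stand-ins for real HTTP machinery, and a real proxy must also honor HTTP expiration and validation rules.

```python
# Toy sketch of a caching proxy: serve a stored copy on a "hit",
# contact the origin server only on a "miss". Names are illustrative.

class CachingProxy:
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch  # callable: url -> response body
        self.store = {}                   # url -> cached response body
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:             # hit: answered locally,
            self.hits += 1                # no wide-area traffic
            return self.store[url]
        self.misses += 1                  # miss: go to the origin server
        body = self.origin_fetch(url)
        self.store[url] = body            # keep a copy for later clients
        return body

# Two clients requesting the same object cause only one origin fetch.
proxy = CachingProxy(lambda url: f"<contents of {url}>")
proxy.get("http://example.com/logo.gif")   # miss: fetched from origin
proxy.get("http://example.com/logo.gif")   # hit: served from the cache
```

The second request never leaves the proxy, which is exactly the traffic reduction and response-time benefit the paragraph above describes.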

NLANR operates the IRCACHE Web Caching project under funding from the National Science Foundation's Directorate for Computer and Information Sciences and Engineering. NLANR created and maintains nine high-level Web caches located throughout the United States. These caches are directly connected to approximately 400 other caches, and indirectly to 1100 worldwide. Collectively, the NLANR caches receive approximately 7,000,000 requests per day from the others. The operational system of intermeshed Web caches has been credited with significantly reducing Internet congestion. NLANR also developed and distributes the Squid open source software package, which is widely used in the ISP community.

The Bakeoff Competition

Several Web cache products have entered the marketplace within the past couple of years. Unfortunately, competing performance claims have been difficult to verify, and have not meant quite the same thing from vendor to vendor.

Wessels and his colleagues Alex Rousskov and Glenn Chisholm have developed a freely available software package called Web Polygraph. Polygraph simulates Web clients and servers and is becoming a de facto benchmarking standard for the Web caching industry. The package is designed to give Web *caches* a workout -- it generates about 1000 requests per second on a 100BaseT network between a client-server pair, and can specify such important workload parameters as hit ratio, response sizes, and server-side delays.
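To illustrate what such workload parameters mean, here is a toy generator in the spirit of (but far simpler than) Polygraph. The 55% hit ratio, the 11 KB mean response size, and the function name are made-up example values chosen for this sketch, not Polygraph's actual workload settings.

```python
# Toy synthetic-workload generator: each simulated request is marked as
# a potential cache hit with a target probability, and response sizes
# are drawn from a skewed distribution (many small replies, a few large
# ones). All parameter values below are illustrative, not Polygraph's.

import random

def make_workload(n_requests, hit_ratio=0.55, mean_size=11000, seed=42):
    rng = random.Random(seed)   # fixed seed -> reproducible benchmark runs
    requests = []
    for i in range(n_requests):
        requests.append({
            "id": i,
            "cachable_hit": rng.random() < hit_ratio,
            # exponential size distribution with the given mean, min 1 byte
            "size_bytes": int(rng.expovariate(1.0 / mean_size)) + 1,
        })
    return requests

workload = make_workload(10000)
observed = sum(r["cachable_hit"] for r in workload) / len(workload)
# over many requests, the observed hit ratio converges on the target
```

Controlling these knobs precisely is what lets a benchmark like Polygraph subject every tested cache to the same, reproducible load.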

"To ensure a valid comparison, every product was tested under identical conditions within a short period of time and at a single location," Wessels explained. The Web Cache Bakeoff competition was held March 15 through 17 in Redwood City, CA, in an industrial space donated for the occasion by Paul Vixie of Vixie Enterprises.

Seemingly minor variations in workload parameters or system configuration can markedly affect performance. "In our report, we specify as much detail as possible about our benchmarking environment so the results of our tests can be reliably reproduced by others," Wessels said.

The bakeoff took place over three days. The first day was used for testing the network and computer systems; the next two days were dedicated to running the benchmark. Theoretically, all benchmarking could have been finished by the end of day two, so the third day was a safety net. Participants also had the option of repeating some runs if necessary.

Each vendor was allowed to bring more than one product to the bakeoff; each tested product was considered an independent participant, with a separate benchmarking harness (bench) for every participant. More than 80 Compaq Pentium II computers were rented for use as Polygraph clients and servers in the bakeoff.

IBM, InfoLibria (two entries), Network Appliance (two entries), Novell (in an OEM agreement with Dell, two entries), the University of Wisconsin, and NLANR fielded a total of nine entries. The precise parameters of the test were arrived at by discussion and mutual agreement among the competitors and researchers.

"The bakeoff set a high standard both for design and execution, and for the cache robustness required for completion," said Abdelsalam A. Heddaya, InfoLibria's VP of Research and Architecture. "Because it was the first truly independent benchmark of network caches, we believe it will be of tremendous value to the industry."

CacheFlow, Cisco, Entera, and Inktomi had expressed strong interest in the bakeoff, but eventually decided not to participate. In addition, IBM and Network Appliance decided not to disclose the results of their trials after the bakeoff, a "bail-out" option previously agreed upon by the competitors and testers.

"Certainly we are disappointed by their choice," Wessels said. "We feel that benchmarking results are more useful when there are more results to compare. At the same time, we take it as a compliment that our benchmark was taken very seriously -- by those who competed, and by those who didn't."

The Competitors

  • IBM -- IBM brought a 34C00 Web Cache Manager system to the bakeoff.
  • InfoLibria (Large configuration) -- InfoLibria's "large" bakeoff entry was a cluster of four DynaCache IL-100-7 servers.
  • InfoLibria (Small configuration) -- InfoLibria's "small" entry was a single DynaCache IL-200X-14.
  • Network Appliance (Large configuration) -- Network Appliance entered a cluster of three C720S NetCache Appliances as their "large" solution.
  • Network Appliance (Small configuration) -- Network Appliance's "small" solution was a single C720S NetCache Appliance.
  • Novell/Dell (Large configuration) -- Novell/Dell's "large" entry was the Novell Internet Caching System (Beta version), running on a Dell PowerEdge 6350.
  • Novell/Dell (Small configuration) -- Novell/Dell's "small" entry was the Novell Internet Caching System (Beta version), running on a Dell PowerEdge 2300.
  • Peregrine v 1.0 -- The University of Wisconsin brought their Peregrine software package, running on a Dell PowerEdge 2300.
  • NLANR Squid -- NLANR brought their Squid software package, running on a generic Pentium-II system.

The Results

The results of the Web Cache Bakeoff were released on April 2 at the Fourth International Web Caching Workshop in San Diego. The conference was organized by NLANR and CAIDA (the Cooperative Association for Internet Data Analysis). Detailed information about the competition and the formal report of its results are available online.

There is no single absolute measure of performance for all situations -- some customers will place the highest value on throughput, while others emphasize bandwidth or response time savings. Maximizing cache hit ratio is essential at many Web cache installations. For some sites price is important; for others it's the price/performance ratio.

IBM and Network Appliance declined to make their results public.

Detailed results of each test, with graphs and analyses, are contained in the full report.

"We strongly caution against drawing hasty conclusions from these benchmarking results," Wessels said. "Since the tested caches differ a lot, it is tempting to draw conclusions about participants based on a single performance graph or pricing table. We believe such conclusions will virtually always be wrong."

"Our report contains a lot of performance numbers and configuration information; take advantage of it," he continued. "Compare several performance factors: throughput, response time, and hit ratio, and weigh each factor based on your preferences. Don't overlook pricing information and price/performance analysis. And always read the Polyteam and Participant Comments sections in the Appendices.

"Our benchmark addresses only the performance aspects of Web cache products. Any given cache will have numerous features that are not addressed here. For example, we think that manageability, reliability, and correctness are very important attributes that should be considered in any buying decisions."

Most customers also have to consider price in their decision making process. The report summarizes the pricing of the participating products and gives detailed product configurations. Note that these costs represent list prices of the equipment only. In reality, there are many additional costs of owning and operating a Web cache. These may include software/technical support, power and cooling requirements, rack/floor space, etc. A more thorough cost analysis might try to determine something like a two-year cost of ownership figure.
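The two-year cost-of-ownership figure suggested above amounts to simple arithmetic over the cost components the report mentions. All figures in this sketch are invented placeholders for illustration, not prices of any participating product.

```python
# Hedged sketch of a two-year total-cost-of-ownership estimate.
# Every dollar amount below is a hypothetical placeholder.

def two_year_cost(list_price, annual_support, monthly_power_cooling,
                  monthly_rack_space):
    """List price plus two years of recurring operating costs."""
    recurring = 24 * (monthly_power_cooling + monthly_rack_space)
    return list_price + 2 * annual_support + recurring

# Example: a hypothetical $20,000 cache with modest recurring costs.
total = two_year_cost(list_price=20000, annual_support=2500,
                      monthly_power_cooling=60, monthly_rack_space=150)
# total = 20000 + 5000 + 5040 = 30040
```

Even this crude model shows why list price alone understates the real cost of owning and operating a cache.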

The Competitors' Views

In the interest of fairness, all of the competitors were invited to comment on the results. Participant comments contain forward-looking statements concerning, among other things, future performance results. All such forward-looking statements are, by necessity, only estimates of future results and actual results achieved by the Participant may differ substantially from these statements.

We have received the following information from the participating vendors:

  • InfoLibria
    The testing showed InfoLibria's DynaCache to be an extremely robust and high-performance network caching solution. DynaCache achieved: (1) Sustained peak throughput that [...] is many times faster than any previously documented cache performance, and amply meets the most punishing demands of leading national ISPs. (2) Response time that is assured to remain low throughout DynaCache's entire performance range, via a patent-pending dynamic connection pass-through mechanism. (3) Expanded bandwidth delivered to Web clients by 22-24% (with a payback, based on bandwidth savings alone, of five to nine months).
    These results show DynaCache addressing the rigorous demands of the ISP marketplace. But a caching product must also exhibit a number of equally key characteristics that were not measured in the bakeoff for it to meet the intense demands of the Internet environment. An Internet-grade cache must maintain the integrity of content, be fail-safe and easily manageable, and not impede e-commerce transactions or the collection of visitor data. DynaCache is the only network cache available with all of these attributes.
  • Novell/Dell
    The Novell Internet Caching System (NICS) used in these tests will soon be available from Dell and other Novell OEMs. This new product is a headless appliance that blends high performance and scalability with web-based remote management and a rack-mount form factor. These results were produced with Dell's beta version of "Novell Internet Caching System Powered By Dell."
    "NLANR's Web Cache Bakeoff results substantiate Novell's leadership position in the Web caching market," commented Drew Major, vice president and chief scientist, Novell. "The Web Polygraph benchmark, which is representative of real-world Internet usage scenarios, was very challenging for Novell. Since the bakeoff we have already begun making improvements to the Novell Internet Caching System and our customers can expect even better performance when it ships through Compaq and Dell channels."
    "It is of great industry benefit to have the NLANR Web Polygraph tool as an unbiased standard measurement that customers can use to determine which cache solution provides the best performance," commented Ron Lee, manager, Advanced Development Performance Labs, Novell. "We're particularly proud of our mean response time results. While our competitors' systems showed significant response time degradation under increased loads, Novell's Internet Caching System response times remained almost unchanged from the lowest request rates to the heaviest peak loads. These results suggest that customers can rely on Novell caching solutions for predictable and reliable Web cache performance."
  • University of Wisconsin Peregrine version 1.0
    The Peregrine proxy is built as a part of the WisWeb research project funded by the National Science Foundation. The proxy is currently under evaluation by the University of Wisconsin-Madison for campus-wide Web caching. The system will be available by September 1999. Though the system used in the bakeoff employs an Alteon ACEdirector, the same performance can be achieved with a Netgear FS516 Fast Ethernet switch.
    Performance of Peregrine systems can be scaled linearly via clustering. For example, four Pentium-based Peregrine systems connected with a load-balancing switch such as Alteon's CacheDirector or Foundry Networks' FastIron can offer potential throughput up to four times the throughput reported here. The designers of Peregrine avoided the "healing mode" in order to maximize the bandwidth savings of the cache, but the mode can be turned on via a configuration parameter. During the bakeoff, the system was able to handle experiments at request rates of 630-650 req/sec, though at such rates, about 2% of client connections were refused due to overload. The version of Peregrine tested during the bakeoff does not preserve cache contents between proxy restarts; adding this essential feature may affect its performance.
    A note on Peregrine v 1.0: At this time, the developers of Peregrine have requested permission from the University of Wisconsin to distribute their software at no cost. There is a chance, however, that Peregrine will not be free, and price/performance evaluations should take this possibility into account.
  • NLANR Squid
    NLANR's Squid Internet Object Cache is freely available software (distributed under the GNU General Public License) that runs on all modern versions of Unix. Duane Wessels builds and integrates the code as a worldwide collaboration, and places it in the public trust. Squid has been evolving constantly since 1996, and includes many features not found in commercial caching products. An estimated 5,000 to 10,000 installations use Squid -- commercial ISPs, corporations, organizations, universities, and K-12 schools.
    "We were not surprised by Squid's performance at the bakeoff," Wessels said. "We have known for quite some time that the Unix filesystem is our major bottleneck. This is an example of the price that Squid pays for being highly portable. However, we are actively working on a filesystem API for Squid that will allow us to experiment with new types of storage for Squid (e.g. "SquidFS"), while still remaining compatible with the Unix filesystem."
  • Others
    Several of the competitors who declined to participate or disclose their results nevertheless were quite positive about the value of the competition, and indicated a willingness to participate in the next bakeoff, tentatively scheduled for this Fall.
    "IBM is delighted to begin working with the IRCACHE team in shaping an industry standard web caching benchmark. With the exponential growth of World Wide Web traffic, Web caching will be an important component of industrial strength Web infrastructures," said Nancy Ann Coon, worldwide marketing manager for the IBM Web Cache Manager. "We look forward to future participation in IRCACHE sponsored web caching benchmarks."
    CacheFlow has supplied this statement: "CacheFlow declined to participate in this first round of tests but will likely participate in future tests as the Polygraph tool evolves. In order to produce benchmark information for CacheFlow's products that is very meaningful to customers, CacheFlow chose to wait for a later version of Polygraph that more completely models the variable, realistic nature of the Internet.
    "Polygraph is a good start toward a benchmark that will accurately represent how an Internet accelerator performs in a live network environment. CacheFlow continues to actively participate in discussions with the NLANR team and other vendors to help evolve the tool further."


Conclusions

The Web caching industry's thirst for a benchmarking standard led to the creation of the Web Polygraph suite and the launch of a series of IRCACHE bakeoffs. We consider the first bakeoff to be a success. Despite the absence of several big players in the industry, the IRCACHE team collected a representative set of interesting performance data, and prepared the first industry document that provides a fair performance comparison of a variety of caching proxies. We hope the performance numbers and our analysis will be used by buyers and developers of caching products.

The IRCACHE team applauds the vendors who came to the bakeoff and disclosed their results. We regret that other cache vendors did not show their leadership. We certainly hope that more companies will participate in future benchmarking events and will have the courage to disclose their results.

We expect discussions of the bakeoff and its results to appear, possibly including attempts to denounce bakeoffs in general. We believe that, while not perfect, this first bakeoff's rules and workload give knowledgeable customers a lot of valid, useful, and unique performance data. Future bakeoffs will further improve the quality and variety of our tests. We do not know of a better substitute for a fair same-rules, same-time competition.

Finally, we expect some companies will try to mimic bake-off experiments in private labs, and we certainly welcome such activities. We trust the reader will be able to separate unsubstantiated speculations and semi-correct bake-off clones from true performance analysis. If unsure about the validity of vendor tests, consult this report and Polyteam members directly.

The second Web Cache Bakeoff has been tentatively scheduled for six months from now.

The National Laboratory for Applied Network Research (NLANR) has as its primary goal to provide technical, engineering, and traffic analysis support for NSF High-Performance Connections sites and high-performance network service providers such as Internet 2, Next Generation Internet, the NSF/MCI vBNS, and STAR TAP. Founded by the National Science Foundation's Computer and Information Science and Engineering Directorate in 1995, NLANR is a "distributed national laboratory" with researchers and engineers at the San Diego Supercomputer Center, the National Center for Supercomputing Applications, the Pittsburgh Supercomputing Center, and the National Center for Atmospheric Research, among other sites.

The Cooperative Association for Internet Data Analysis is a collaborative undertaking among government, industry, and the research community to promote greater cooperation in the engineering and maintenance of a robust, scalable global Internet infrastructure. It is based at the San Diego Supercomputer Center (SDSC) at the University of California, San Diego (UCSD) and includes participation by Internet providers and suppliers, as well as the NSF and the Defense Advanced Research Projects Agency (DARPA). CAIDA focuses on the engineering and traffic analysis requirements of the commercial Internet community. Current priorities include the development and deployment of traffic measurement, visualization, and analysis tools and the analysis of Internet traffic data. For more information, contact Tracie Monk, CAIDA, 619-822-0943.

The San Diego Supercomputer Center (SDSC) is a research unit of the University of California, San Diego, and the leading-edge site of the National Partnership for Advanced Computational Infrastructure (NPACI). SDSC is sponsored by the National Science Foundation through NPACI and by other federal agencies, the State and University of California, and private organizations. For additional information about SDSC, contact Ann Redelfs at SDSC, 619-534-5032.