News Archive

Trace Center Demonstrates Live Captioning for the Deaf at SC99

Published 11/11/1999

Contacts: David Hart, NPACI External Relations,
Karen Green, NCSA Public Information,

PORTLAND, OR -- Researchers from the Trace Center at the University of Wisconsin at Madison are demonstrating the feasibility of providing real-time captioning and sign language translation for the deaf by broadcasting captions for parts of SC99, the annual high-performance networking and computing conference that runs Nov. 13-19 at the Oregon Convention Center.

The captioning/signing demonstration by Trace, a member institution of the Education, Outreach and Training Partnership for Advanced Computational Infrastructure (EOT-PACI), complements efforts of other PACI partners and the SCinet99 infrastructure project. All are providing live audio/video webcasts of selected presentations of SC99 for people who cannot attend in person.

The webcasts are available through SCinet99, an integrated network environment constructed specifically for the conference. Also featured will be a robotic camera that allows viewers to pan the exhibit hall and zoom in on exhibit floor booths from their home or office computers.

The long-term goal of the Trace Center project is to make using a Grid (an array of services and resources linked by high-speed networks) more user-friendly for people who are deaf or hard of hearing. Eventually, users will be able to launch an application and have windows appear on their computer screen, showing either sign language interpretation or closed-caption text of any conversation taking place within the application.

"Internet2 opens up new capabilities with better bandwidth and quality of service that can revolutionize communication and even daily life for people with sensory disabilities," said Gregg Vanderheiden, executive director of the Trace Center.

"We are currently assembling the components (hardware, software and services) to allow people with sensory disabilities to have high-speed computing and human collaborative services available at any time and any place," said Vanderheiden. "These components will provide real-time translation of text, speech, and environmental images into a form that they can perceive (vision for people who are deaf, speech and sound for people who are blind, and reduced language and conceptual level for those with cognitive disabilities). By using a tiered implementation, we can implement and test the concepts today in a human/machine collaboration and then work toward increasingly machine-based implementations, allowing for greater scalability and dropping costs."

"This can be used by anyone (deaf or not) to pop up a screen which will provide a text transcript (or sign language interpretation) of whatever sound is in the room," added Vanderheiden. "For example, you could use it for a drop-in visitor who is deaf. Or a person who was deaf could carry it around with them."

"Today's high-performance computing (HPC) technologies will make their way onto the public's desktops and into everyday life just a few years down the road," said Greg Johnson, a programmer/analyst at the San Diego Supercomputer Center and head of the SC99 webcast team, which is made up of investigators and technicians from nearly 30 SCinet team institutions.

"The SC99 webcasts will give a much broader audience the chance to hear and see the future through the eyes of the leaders in the HPC world," said Johnson.

A schedule and instructions for accessing the webcasts are available online. A Help Desk will be available shortly before, during, and after the events.