Researchers at the University of Washington (UW) have started testing new technology that lets deaf people hold mobile video conversations in American Sign Language (ASL), which relies on facial expressions as well as hand signs. In the past, deaf and hard-of-hearing people relied mainly on Short Message Service (SMS) to transmit brief text messages.
Most people assume video works on any modern smartphone in the US. However, the US ranks 16th in the world for broadband bandwidth and transfer rate. In Sweden and Japan there is enough bandwidth to transmit full video of individuals speaking sign language to one another. Sprint's new 4G WiMAX network, paired with its HTC EVO handset, is fast enough for video conferencing, but Sprint's 4G network is not available in most US locations.
The hands and face are the most important parts of the body when communicating using ASL. Eve Riskin, a UW professor of electrical engineering who is leading the project, said: "This is the first study of how deaf people in the United States use mobile video phones." The team first tried simply using low-resolution video that could be transmitted over the slow US networks. They quickly found that the video quality was nowhere near good enough to carry on a meaningful ASL conversation. The UW team then developed their own software to increase image quality around the face and hands. This brought the data rate down to 30 kbps (kilobits per second) while still delivering intelligible sign language.
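The idea of spending more bits on the face and hands is a form of region-of-interest encoding. The following is a minimal illustrative sketch, not the MobileASL software itself: given rectangles for the face and hands, it assigns each 16x16 macroblock a quantization parameter (QP), using a low QP (fine detail) inside a region of interest and a high QP (coarse, cheap) for the background. The block size, QP values, and function names are assumptions for illustration.

```python
def blocks_overlapping(roi, block_size=16):
    """Return the set of (row, col) macroblock indices that a
    region of interest touches. roi = (x, y, width, height) in pixels."""
    x, y, w, h = roi
    first_col, last_col = x // block_size, (x + w - 1) // block_size
    first_row, last_row = y // block_size, (y + h - 1) // block_size
    return {(r, c)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)}

def quantizer_map(frame_w, frame_h, rois, qp_roi=24, qp_bg=38, block_size=16):
    """Build a per-macroblock QP grid: low QP (better quality, more bits)
    for blocks overlapping any ROI, high QP elsewhere."""
    roi_blocks = set()
    for roi in rois:
        roi_blocks |= blocks_overlapping(roi, block_size)
    rows = (frame_h + block_size - 1) // block_size
    cols = (frame_w + block_size - 1) // block_size
    return [[qp_roi if (r, c) in roi_blocks else qp_bg
             for c in range(cols)]
            for r in range(rows)]

# Example: a QCIF frame (176x144) with one face rectangle.
qmap = quantizer_map(176, 144, [(60, 20, 56, 56)])
```

A real encoder would feed such a map into its rate control so that, at a fixed 30 kbps budget, most bits land where signers actually look.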
Their result is the MobileASL project, which allows people to communicate using ASL over most 2G cell phones. MobileASL also uses motion detection to identify whether a person is signing or not in order to extend the phones’ battery life during video use. Jessica DeWitt, a deaf UW undergraduate in psychology and a collaborator on the MobileASL project, said: "Video is much better than text-messaging because it’s faster and it’s better at conveying emotion."
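The battery-saving trick described above, detecting whether the user is signing and throttling the encoder when they are not, can be sketched with simple frame differencing. This is an assumed illustration, not MobileASL's actual detector; the threshold and frame rates are made-up values.

```python
def is_signing(prev_frame, frame, threshold=12.0):
    """Crude activity detector: mean absolute difference between two
    consecutive grayscale frames (flat lists of pixel values).
    A large difference suggests the user is moving, i.e. signing."""
    n = len(frame)
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame)) / n
    return diff > threshold

def choose_frame_rate(signing, active_fps=10, idle_fps=1):
    """Encode at full rate while signing; drop to a trickle rate when
    idle, so the CPU and radio do far less work and the battery lasts."""
    return active_fps if signing else idle_fps
```

When the detector reports no signing, the phone can skip most frames entirely, which is where the battery savings come from.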
The project is being tested by 20 students this summer. The MobileASL application is not yet ready for public release.