Mobile phones are used by almost everybody in the world, but for one group of people, deaf users, cell phones are useless when it comes to talking; they can only send text messages and emails. The natural alternative for them is sign language, but that requires video telephony (video calling), which in turn requires high-speed 3G network connections that not everyone can afford. So the only option is a video compression technology that works on standard GPRS networks without a significant drop in video quality, so that the sign language is still understandable to the person on the other side and not just random pixels moving on the screen.
That is exactly what a team of engineers led by Eve Riskin, professor of electrical engineering at the University of Washington, is trying to develop. Named MobileASL, it is the first system to transmit American Sign Language over U.S. cellular networks. The whole idea behind the project is to transmit sign language as efficiently as possible: it increases affordability, improves reliability on slower networks, and extends battery life, even on devices that might have the capacity to deliver higher-quality video.
The team has already achieved data rates as low as 30 kilobytes per second by increasing image quality only around the face and hands rather than across the whole frame, and of course no audio is transmitted, which is bandwidth that normal video calling has to spend. MobileASL also uses motion detection to identify whether a person is signing or not, in order to extend the phone's battery life during video use. And if you are thinking about iPhone FaceTime: it consumes roughly ten times the bandwidth that MobileASL does.
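To make the battery-saving idea concrete, here is a minimal sketch (in Python with NumPy, not MobileASL's actual detector) of how simple frame differencing can classify frames as signing or idle, so an encoder could drop its frame rate when nothing is happening. The threshold values are illustrative assumptions, not figures from the project.

# Sketch: classify a frame pair as "signing" or "idle" via frame differencing.
# Assumes 8-bit grayscale frames as NumPy arrays; thresholds are assumptions.
import numpy as np

def is_signing(prev_frame: np.ndarray, cur_frame: np.ndarray,
               pixel_thresh: int = 15, active_fraction: float = 0.02) -> bool:
    """Return True if enough pixels changed noticeably between frames."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_thresh)
    return changed / diff.size > active_fraction

# An encoder loop could then switch frame rates to save power, e.g.:
# target_fps = 12 if is_signing(prev, cur) else 1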
How it works
On low-bandwidth connections, even today's best video encoders likely cannot produce the video quality needed for intelligible American Sign Language (ASL). MobileASL uses a new real-time video compression scheme designed to transmit within the existing wireless network while maintaining video quality that allows users to understand the semantics of ASL with ease. The ASL encoder is built on x264, the popular open-source implementation of the H.264/AVC compression standard.
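As an illustration of the region-of-interest idea, the sketch below builds a per-macroblock quantizer-offset map that spends more bits on the 16x16 blocks covering the face and hands. The specific offset values are assumptions; in libx264 such a map can be supplied to the encoder through x264_picture_t.prop.quant_offsets, though the article does not say exactly how MobileASL drives the encoder.

# Sketch: per-macroblock quantizer offsets favoring a region of interest.
# Negative offsets lower the quantizer (higher quality) on ROI blocks.
import numpy as np

MB = 16  # H.264 macroblock size in pixels

def qp_offset_map(roi_mask: np.ndarray, roi_offset: float = -6.0,
                  bg_offset: float = 4.0) -> np.ndarray:
    """roi_mask: boolean per-pixel mask (True = face/hands).
    Returns a float32 map with one quantizer offset per macroblock."""
    h, w = roi_mask.shape
    mb_h, mb_w = h // MB, w // MB
    # A macroblock counts as ROI if any of its pixels are flagged.
    blocks = roi_mask[:mb_h * MB, :mb_w * MB].reshape(mb_h, MB, mb_w, MB)
    roi_blocks = blocks.any(axis=(1, 3))
    return np.where(roi_blocks, roi_offset, bg_offset).astype(np.float32)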
The encoder also uses skin-detection algorithms to find the important regions of the video, and motion vectors to vary encoding quality at the macroblock level. It thus achieves a high level of compression without compromising the quality that keeps the signing understandable. For now the system runs only on phones with the Windows Mobile operating system, but the team hopes to port it to other platforms such as Android.
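For the skin-detection step, a classic RGB heuristic (due to Peer et al., and not necessarily the algorithm MobileASL uses) gives a feel for how a per-pixel skin mask, which could feed the macroblock map above, might be computed:

# Sketch: simple RGB skin classifier (a well-known heuristic, assumed here).
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3) uint8 image. Returns a boolean per-pixel skin mask."""
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    return ((r > 95) & (g > 40) & (b > 20) &
            (mx - mn > 15) & (np.abs(r - g) > 15) &
            (r > g) & (r > b))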
If service providers across the world take up this technology, it would be great news for the deaf community worldwide, providing them with a new dimension of communication.
Source: Wired