At a time when wheelchairs are being reinvented around the needs of the user, Project Saksham, focused on wheelchair usage, has restored ability to the physically challenged.
There is a gap between the time a person discards his high-school physics books and the time he discovers the beauty of Richard Feynman's Lectures on Physics. It is during this period that students stumble upon the bestsellers and end up on the futile quest of understanding "A Brief History of Time" by Prof. Stephen Hawking, or for that matter any of his publications, in their entirety. Thankfully, I had the much-needed mental detour, and my train of thought shifted its course from "What does he do?" to "How does he do it?". His communication system has always fascinated me. It is astonishing how a person can communicate with the movement of a single muscle group on his cheek. Soon enough, I was more interested in developing an alternative system, and that gave birth to the project "SAKSHAM"; the word Saksham roughly translates to "ability" in English. I carried out in-depth research on almost all the existing systems and discovered the sorry state of technological development in this field in India. Due to the lack of lab facilities and funding at my high school, I had to stick to reading more literature and refining my concept. After joining PSG College of Technology, I defined the target patient group and consulted doctors, patients and industry experts to zero in on the final requirements. The list was then divided into two parts, "must have" and "nice to have", to streamline my priorities. This was followed by experimentation to prove my concept, which is discussed briefly in the next section.
SAKSHAM – An Alternative and Augmentative Communication System
The project consists of two modules. The first module aims at creating an alternative control system for powered wheelchairs. Powered wheelchairs are the class of wheelchairs that carry their own power source and usually come with a joystick control. While the joystick is efficient, many people have special needs and cannot use it; people with below-the-neck paralysis, for instance, have very few control options, especially in India. Popular alternatives include head tracking, eye tracking and extremely innovative options like tongue tracking. Unfortunately, these are extremely costly, and Indian companies are not manufacturing such systems yet. Hence, in order to develop an indigenous system, various parameters of the human body were monitored, and breath sensing emerged as the most efficient and natural way to take signals. This approach has been extensively worked on in the UK, where a sip-and-puff system was developed, though it had limitations of its own.
For SAKSHAM, the focus was on the pattern of breathing rather than the magnitude of the pressure being produced. Pairing this approach with a pressure-sensing matrix led to an extremely light, non-invasive system with high efficiency. The system also comprises utilities like emergency Skype calls, Morse code integration, colloquial conversation terms and extra commands that can be defined by the user. A similar approach was used to design a portable communication aid. Still a work in progress, this module is built on a unique algorithm that lets people who have lost their ability to speak, due to weak vocal cords or other factors, communicate normally. Human speech is essentially air modulated at a certain frequency and pitch to create sound. The output sounds make sense when our brain matches them to one of the languages we know. Every language has its own building blocks: just as letters make up the written language, phonemes are the basic blocks of the spoken one. Though the number of phonemes varies with accent, the English language can be described with nearly 40 basic sounds, enough to produce the dialect of an average UK speaker.
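The wheelchair module's pattern-based breath control can be pictured as follows: segment the pressure trace into short and long puffs, read the result as a Morse-like symbol, and map it to a chair command. This is only a minimal sketch of the idea; the thresholds, command names and the symbol-to-command table are illustrative assumptions, not the actual SAKSHAM parameters.

```python
# Sketch: turning a stream of pressure readings into Morse-style
# symbols and then into wheelchair commands.  All numbers and the
# command table below are hypothetical, for illustration only.

PRESSURE_THRESHOLD = 0.3   # normalised pressure above which a puff counts as "on"
DOT_MAX_SAMPLES = 5        # puffs up to this many samples count as a dot

def extract_symbols(samples, threshold=PRESSURE_THRESHOLD):
    """Segment a pressure trace into '.' (short puff) and '-' (long puff)."""
    symbols = []
    run = 0
    for s in samples:
        if s > threshold:
            run += 1               # inside a puff
        elif run:                  # a puff just ended
            symbols.append('.' if run <= DOT_MAX_SAMPLES else '-')
            run = 0
    if run:                        # trace ended mid-puff
        symbols.append('.' if run <= DOT_MAX_SAMPLES else '-')
    return ''.join(symbols)

# Illustrative mapping from breath patterns to chair commands.
COMMANDS = {'.': 'forward', '-': 'reverse', '..': 'left', '--': 'right', '.-': 'stop'}

def decode(samples):
    """Translate one pressure trace into a command, or 'no-op' if unrecognised."""
    return COMMANDS.get(extract_symbols(samples), 'no-op')
```

Because only the pattern matters, not the absolute pressure, such a scheme stays usable even for users who can produce only very weak puffs.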
When a person with a speech disorder tries to speak, he ends up creating a sound we are not familiar with. This is similar to hearing a new language and trying to decode it. The astonishing part is that the speaker knows the language; it is his inability to produce the right frequency or pitch that causes the trouble. For every phoneme he wishes to produce, he ends up creating something different. When the person is asked to read a paragraph aloud, we can monitor the pattern the air follows as he speaks. He may not even make sounds for certain words, but the sensors still pick up variations such as heat or pressure changes.
When neural networks are introduced to such a system, they can help the computer learn a match for each of the 40 phonemes. This amounts to creating a personal language for the patient and then providing a real-time translation into the user's preferred language. In a hypothetical case where the user pronounces the sound "a" as "d", the computer outputs "a" every time the sensors detect the sound "d". Each phoneme thus gets a match among signals that, as raw sounds, do not make sense. The system is still evolving and the coding is no easy task, but neural networks have shown promise, and initial tests have verified the ability of the sensor interface to detect the variations. A patent for the system was applied for in July 2013, with an extra module including gesture-sensing capabilities and a haptic feedback mechanism.
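The per-user matching described above can be sketched with a simple nearest-neighbour matcher over sensor feature vectors: during calibration the user reads a known passage, so each recorded reading is labelled with the phoneme it was meant to be; at runtime a new reading is matched to the closest template and the intended phoneme is emitted. SAKSHAM uses neural networks for this step; nearest-neighbour is used here only to keep the sketch small, and all feature values and class names are hypothetical.

```python
import math

# Sketch of the per-user phoneme matching idea.  A neural network
# would replace the nearest-neighbour lookup in the real system.

def distance(a, b):
    """Euclidean distance between two sensor feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class PhonemeMatcher:
    def __init__(self):
        self.templates = []        # list of (feature_vector, intended_phoneme)

    def calibrate(self, features, intended_phoneme):
        """Record what the user actually produced for a known phoneme."""
        self.templates.append((features, intended_phoneme))

    def translate(self, features):
        """Map a new sensor reading back to the phoneme the user intended."""
        _, phoneme = min(self.templates, key=lambda t: distance(t[0], features))
        return phoneme

# Hypothetical example: the user's attempt at "a" comes out distorted,
# but the sensor pattern is consistent, so the intended phoneme is
# recovered from whatever the user reliably produces.
matcher = PhonemeMatcher()
matcher.calibrate([1.0, 0.2], 'a')   # user's (distorted) rendition of "a"
matcher.calibrate([0.1, 0.9], 'd')   # user's rendition of "d"
print(matcher.translate([0.95, 0.25]))   # prints 'a'
```

The key point mirrors the text: the raw signals need not sound like anything meaningful; they only need to be consistent per phoneme so that a learned mapping can translate them.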
P.S. – Though out of context, I still request you to read about Dr. Ashoke Sen, another source of inspiration who has not been recognised enough.
Acknowledgements
My parents and relatives for their active involvement and support of my projects.
Dr. P V Mohanram and Mr. Suresh Kumar for their valuable guidance.
Dr. K Ramadoss, Dr. Satish Ghanta and Dr. Gutta Pranathi Reddy for their consultancy help.
Mr. Jameesha, Mr. Manas Ranjan Biswal, Mr. A. Sabareeswaran, and Mr. CSK Pranav for their technical help.