While voice recognition software lets you interact with your computer, phone and, of late, smart speaker by talking to it with a trigger command, researchers at MIT have been working on a device that bypasses the need to talk out loud at all.
**MIT** researchers have developed a device, code-named AlterEgo, that can listen to the words you say silently in your mind. Connected to a computer system, the device transcribes the words that the user verbalises internally. Internal verbalisation triggers certain neuromuscular signals which, although invisible to the human eye, can be detected by sensors.

*MIT Silent Speech device demonstrated by MIT Media Lab student Arnav Kapur.*

The device, which looks like a modified headphone, spans from the top of the ear to the chin. It houses electrodes that pick up the neuromuscular signals in the jaw and face that are activated by internal verbalisation, i.e. when you say words in your head. The signals collected by the electrodes are sent to a machine learning system, which performs the computation for the user by associating particular words with particular signal patterns.

Arnav Kapur, a graduate student at the MIT Media Lab who led the development of the new system, demonstrated the device in a video showing him controlling a smart TV without any physical interaction. He scrolled through the TV programme guide by internally verbalising the commands; the machine learning system translated those signals into input the TV could understand and react to.
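The idea of associating words with signal patterns can be illustrated with a minimal sketch. This is not MIT's actual pipeline (their system uses neural networks trained on real electrode data); it is a toy nearest-centroid classifier over made-up three-channel feature vectors, just to show the calibrate-then-classify structure the article describes.

```python
import math

def centroid(vectors):
    """Average a list of equal-length feature vectors component-wise."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def calibrate(samples_per_word):
    """samples_per_word maps each word to feature vectors recorded while
    the user silently 'spoke' it. Returns one centroid per word."""
    return {word: centroid(vs) for word, vs in samples_per_word.items()}

def classify(model, signal):
    """Return the word whose centroid is closest (Euclidean) to signal."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda w: dist(model[w], signal))

# Hypothetical calibration data: two "words", 3-channel electrode features.
model = calibrate({
    "add": [[0.9, 0.1, 0.2], [1.1, 0.0, 0.3]],
    "two": [[0.1, 0.8, 0.9], [0.2, 1.0, 1.1]],
})
print(classify(model, [1.0, 0.1, 0.25]))  # → add
```

The short per-user calibration session mirrors the 15 minutes of customisation mentioned below: the same silent word produces slightly different signals on different faces, so the word-to-signal mapping must be fitted per user.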
Kapur said, “The motivation for this was to build an IA device — an intelligence-augmentation device. Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

The research team consists of Kapur, the lead author on the paper; Pattie Maes, the senior author; and Shreyas Kapur, an undergraduate major in electrical engineering and computer science. The prototype wearable had an average transcription accuracy of 92 percent across 10 subjects, each of whom spent 15 minutes customising the arithmetic application to their own neurophysiology and then 90 minutes using it to execute computations.

The device could be used where conversation is important but difficult: on the flight deck of an aircraft carrier, or at a power plant or printing press where the noise is too loud for normal conversation. It could also help firefighters, whose masks already make communication difficult, as well as people who are unable to speak because of disabilities.


