AlterEgo: researchers at MIT develop device that can ‘hear’ your internal voice

By Surya Suresh

A brain-computer interface (BCI) enables humans to control external devices with their thoughts. BCIs have been a constant in science fiction novels over the decades but have not made their presence felt in the real world. However, the wait could finally be over with the development of AlterEgo at the MIT Media Lab, a non-invasive, non-intrusive interface between humans and computers.

What is AlterEgo?

AlterEgo is a prototype headset that recognises non-verbal prompts, enabling silent, seamless communication with computing devices. The device picks up the subtle neuromuscular signals that arise from internal verbalisation, colloquially understood as ‘saying words in your head’. Researchers at MIT have used machine learning algorithms to train the system to match these signals against a set of specific prompts, allowing it to identify the intended words accurately.
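To make that pipeline concrete, the sketch below shows, in Python, one plausible shape such a system could take: short windows of multi-channel neuromuscular signals are reduced to feature vectors and fed to a classifier trained on a small prompt vocabulary. This is a minimal illustration only, not the researchers’ actual system; the signal dimensions, the hand-crafted features, the scikit-learn model and the synthetic stand-in data are all assumptions made for the example.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

N_SENSORS, WINDOW = 7, 250                     # assumed: 7 facial electrodes, 250-sample windows
PROMPTS = [f"prompt_{i}" for i in range(20)]   # a 20-prompt vocabulary, as in the trials

def featurize(window: np.ndarray) -> np.ndarray:
    """Collapse one (sensors x samples) signal window into a fixed-length
    vector of simple per-channel statistics (mean, std, peak amplitude)."""
    return np.concatenate([window.mean(axis=1),
                           window.std(axis=1),
                           np.abs(window).max(axis=1)])

# Synthetic stand-in data: random signals given a small per-class offset
# so the demo classifier has something to learn.
rng = np.random.default_rng(0)
X, y = [], []
for label in range(len(PROMPTS)):
    for _ in range(50):
        signal = rng.normal(loc=label * 0.3, scale=1.0, size=(N_SENSORS, WINDOW))
        X.append(featurize(signal))
        y.append(label)
X, y = np.array(X), np.array(y)

# Train on featurized windows, then report held-out classification accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), SVC())
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.0%}")

A real system would face far noisier, subtler signals than this toy data, which is why training the recogniser on each wearer’s own prompts matters.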

This ambitious project aims to integrate humans and computers, augmenting human cognition and abilities, and to revolutionise communication and the ways in which we access information. In an interaction with NDTV, Pattie Maes, a professor at the MIT Media Lab, noted that cell phones and digital devices disrupt the attention of the user by shifting their focus from the external environment to the device itself. She stated that the goal of the project was to limit this disruption and enable people to leverage the knowledge provided by digital devices while remaining focused on the present.

Prototype testing

While the development of AlterEgo represents a breakthrough in the field of intelligent augmentation, there is still a long way to go before this product becomes commercially viable.

To date, the researchers have tested the device on a limited set of simple tasks, such as multiplication, addition and moves in games of chess, with the system trained on a dataset of 20 prompts. The results have been promising, with the prototype demonstrating 92 percent accuracy, and the researchers are hopeful that they will be able to scale up the system over time.

Arnav Kapur, a graduate student who leads the project, said, “We’re in the middle of collecting data, and the results look nice. I think we’ll achieve full conversation someday.”

Looking forward

At present, the device uses seven sensors at different facial locations to pick up the neuromuscular signals generated during ‘inner speech’; researchers are working to cut that number down to four sensors along the wearer’s jawline. The successful testing of this prototype represents a significant step forward in AI-human integration. Further evolution of the technology could see it applied in high-noise environments, in assisting specially-abled individuals and in revolutionising the Internet of Things (IoT).
