This is how Facebook’s telepathy system will work

In the future, people will be able to create digital content with the power of thought alone. Facebook is one of the companies that wants to make this possible.

Last April, Regina Dugan, director of Facebook's Future Laboratory, presented a broad vision: more than 60 scientists at the social network are working on brain-computer interfaces to enable control by thought.

In the first stage, people should be able to type using brain signals. Such a brain keyboard should allow up to 100 words per minute; a skilled typist manages 60 to 70 words per minute on a conventional keyboard.

“It sounds impossible but it’s closer than you think,” said Ms Dugan, who joined Facebook from Google last year and previously led DARPA, the US government’s advanced defence research division.

On top of that, you could type without having to be near a screen. The smartphone could then stay in the pocket far more often, serving instead as a portable computer.

Especially with regard to the ever-expanding AI assistants such as Apple's Siri, Google Assistant or Amazon's Alexa, this would be a breakthrough, because the brain keyboard solves another fundamental problem: nobody likes to talk to a computer in public, even if it were the most efficient way to solve a problem.

With a brain keyboard, an AI could manage our everyday lives quietly and discreetly and augment reality audibly: all it would take is earphones, the brain-reading device and an internet connection.

How silent speech is supposed to work technically

Even if Facebook's vision sounds like far-fetched science fiction, according to Dugan it is only "a few years" away. Facebook's telepathy project lead, neuroscientist Mark Chevillet, has now explained at MIT in more detail how it could be technically implemented.

Most importantly, no sensors need to be implanted in the brain; that is the precondition for this technology having realistic market prospects.

Instead, Facebook is experimenting with an imaging technique that can detect brain signals from outside the skull. So-called "diffuse optical tomography" uses infrared light to visualize neural activity through tissue. So far, however, the method's resolution is insufficient to reach the deeper brain layers, and it is still too slow.

The aim of the brain scanning is not to read thoughts completely, but specifically to recognize words that the brain has classified as "to be spoken", catching a word just before it would come out of the mouth. As a user, you would conduct the conversation word by word without saying the individual words aloud.

An algorithm is intended to improve the brain reader's word recognition rate by examining possible contexts, picking the right word from a selection of potentially imagined words. This works much like Google's automatic completion of search queries, but for the brain.
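The idea of disambiguating candidate words by context can be illustrated with a toy sketch. This is a hypothetical illustration, not Facebook's actual system: it assumes the brain reader outputs a small set of candidate words, and uses a tiny bigram language model to pick the candidate that best fits the preceding word.

```python
from collections import Counter

# Toy corpus standing in for the large text corpus a real language model
# would be trained on.
corpus = "send the message now please send the mail later send the text".split()

# Count bigrams: pairs of consecutive words in the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))

def pick_word(previous_word, candidates):
    """Return the candidate word seen most often after `previous_word`."""
    return max(candidates, key=lambda w: bigrams[(previous_word, w)])

# The brain reader is unsure whether the user imagined "message" or
# "massage" after "the"; the context statistics resolve the ambiguity.
print(pick_word("the", ["massage", "message"]))  # → message
```

A real system would of course use far richer context than a single preceding word, but the principle of ranking imagined-word candidates by contextual likelihood is the same.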

2017-09-24T22:47:38+00:00