It sounds ambitious, but it is the future that Facebook is trying to bring to the world.
During the F8 conference, the largest social network laid plans to let you type with your thoughts and “hear with your skin”.
Facebook Chief Executive Officer Mark Zuckerberg says the company is working on brain-computer interface technology that will “one day let you communicate using only your mind.”
The futuristic project, first reported by Business Insider in January 2017, is part of Facebook’s consumer hardware lab known as Building 8.
At the same conference, Zuckerberg also issued a death warrant for the smartphone.
Building 8 chief Regina Dugan revealed that her team of 60 scientists is working on a non-invasive system capable of typing 100 words per minute using only brain waves.
An even more futuristic project intends to deliver spoken language through human skin.
Dugan added that the end goal is to eventually be able to think in Mandarin and feel in Spanish.
Dugan joined Facebook last year from Google’s advanced projects division.
The projects are still a ways off from becoming actual products, but the company believes they eventually will.
“Eventually, we want to turn it into a wearable technology that can be manufactured at scale,” Zuckerberg said.
Meanwhile, in her own blog post, Dugan described both projects as “silent speech interfaces”, which she said offer the convenience of speaking with your voice but the privacy of sending a text message.
Here’s Facebook’s full announcement about the projects:
“Recently there has been a lot of hype surrounding brain technology. We have taken a distinctly different, non-invasive and deeply scientific approach to building a brain-computer speech-to-text interface. A silent speech interface with the speed and flexibility of voice and privacy of text.”
“We have a goal of creating a system capable of typing 100 words per minute, straight from the speech center of your brain – this is 5x faster than you can type on your smartphone today.
This isn’t about decoding random thoughts. This is about decoding the words you’ve already decided to share by sending them to the speech center of your brain. Think of it like this: You take many photos and choose to share only some of them. Similarly, you have many thoughts and choose to share only some of them.
We will do this via non-invasive sensors that can be shipped at scale.
We’ll need new, non-invasive sensors that can measure brain activity hundreds of times per second, from locations precise to millimetres and without signal distortions. Today there is no non-invasive imaging method that can do this.
Optical imaging is the only non-invasive technique capable of providing both the spatial and temporal resolution we need. And thanks to improvements in performance, cost and miniaturization from the telecom industry, we have a big wave to ride.
Six months ago this project was just an idea.
Today, we have a team of over 60 scientists, engineers and system integrators from UC San Francisco, UC Berkeley, Johns Hopkins Medicine, Johns Hopkins University’s Applied Physics Laboratory and Washington University School of Medicine in St. Louis specialising in machine learning methods for decoding speech and language, in optical neuroimaging systems that push the limits of spatial resolution and in the most advanced neural prosthetics in the world.
Project: Hear with your skin
Our second project is directed at allowing you to hear with your skin. We are building the hardware and software necessary to deliver language through the skin. Your skin is a 2 m² network of nerves that transmits information to your brain.
Braille, invented in France in the 19th century, has proven that small bumps on a surface can be interpreted in the brain as words.
We know from the Tadoma method, developed in the early 20th century based on the experience of Helen Keller, that deaf and blind children could learn and communicate through slight pressure changes created by puffs of air and vibrations felt by their hands placed over a person’s throat and jaw.
From the 1950s to today, what all of these techniques have in common is our brain’s ability to reconstruct language from components.
The cochlea in your ear takes in sound and separates it into frequency components that are transmitted to the brain. We can do the same work of the cochlea, but transmit the resulting frequency information, instead, via your skin.
With this technology, the ability to learn language and vocabulary is just the beginning of what we can ‘hear’ with our skin,” the statement concludes.
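The cochlea analogy in the statement can be sketched in code: a filterbank splits a sound into frequency bands, and each band's energy could then drive a separate actuator on the skin. The band count, frequency edges and function name below are illustrative assumptions, not Facebook's actual design:

```python
import numpy as np

def cochlea_bands(signal, sample_rate, n_bands=8, fmin=100.0, fmax=8000.0):
    """Split a mono audio signal into logarithmically spaced frequency
    bands, roughly as the cochlea does, and return one normalised energy
    value per band (e.g. to set the intensity of n_bands skin actuators)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2            # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    edges = np.geomspace(fmin, fmax, n_bands + 1)          # log-spaced band edges
    energies = np.empty(n_bands)
    for i in range(n_bands):
        in_band = (freqs >= edges[i]) & (freqs < edges[i + 1])
        energies[i] = spectrum[in_band].sum()
    return energies / energies.sum()                       # normalise to sum to 1

# A pure 440 Hz tone should put nearly all its energy into the one band
# whose edges bracket 440 Hz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
energies = cochlea_bands(tone, sr)
```

In a real system each of the eight energies would modulate one vibration motor, letting the wearer learn to map spatial vibration patterns back to sounds, just as the brain learns to map cochlear frequency channels to language.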