What if your friend the robot could tell what you’re thinking, without you saying a word?
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University have created a system that lets humans guide robots with their brainwaves. This may sound like a scene out of a sci-fi novel, but seamless human-robot interaction is the next major frontier of robotics research.
For now, the MIT system can handle only simple binary tasks, such as correcting a robot as it sorts objects into two boxes. But CSAIL Director Daniela Rus sees a future where we could control robots in more natural ways, rather than having to program them for specific tasks, like letting a supervisor on a factory floor direct a robot without ever pushing a button.
"Imagine you look at the robots, and at some point one robot is not doing the job correctly," Rus explained. "You will think that, you will have that thought, and through this detection you would in fact communicate remotely with the robot to say ‘stop.’ "
Rus admits the MIT development is a baby step, but she says it’s an important step toward improving the way humans and robots interact.
Currently, most communication with robots requires thinking in a particular way that computers can recognize or vocalizing a command, which can be exhausting.
"We would like to change the paradigm," Rus said. "We would like to get the robot to adapt to the human language."
The MIT paper shows it’s possible to have a robot read your mind, at least when the task is very simple. And Andres Salazar-Gomez, a Boston University Ph.D. candidate working with the CSAIL research team, says this system could one day help people who can’t communicate verbally.
For this study, MIT researchers used a robot named Baxter from Rethink Robotics.
Baxter had a simple task: put a can of spray paint into the box marked "paint" and a spool of wire into the box labeled "wire." A volunteer wearing an EEG cap, which reads electrical activity in the brain, sat across from Baxter and watched him do his job. When the volunteer noticed a mistake, their brain naturally produced a signal known as an "error-related potential."
"You can use [that signal] to tell a robot to stop or you can use that to alter the action of the robot," Rus explained.
The system then relays that brain signal to Baxter, so he understands he’s wrong: his cheeks blush to show he’s embarrassed, and he corrects his behavior.
The MIT system correctly identified the volunteer’s brain signal and then corrected the robot’s behavior 70 percent of the time.
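The closed loop described above can be sketched in a few lines of code. This is purely an illustration, not the MIT team's actual software: the function names are invented, and the simple threshold detector stands in for the trained EEG classifier a real system would use. It shows the core idea: the robot commits to a bin, and if an error-related potential is detected in the observer's brain signal, the robot switches to the other bin.

```python
# Hypothetical sketch of the ErrP feedback loop (not MIT's actual code).
# A real system would train a classifier on labeled EEG epochs; here a
# crude amplitude threshold stands in for that classifier.

def detect_errp(eeg_window, threshold=2.0):
    """Flag an error-related potential if the mean amplitude of the
    EEG window exceeds a threshold (stand-in for a trained classifier)."""
    mean_amp = sum(eeg_window) / len(eeg_window)
    return mean_amp > threshold

def sort_object(initial_guess, eeg_window):
    """Robot commits to a bin; if the observer's EEG shows an ErrP,
    switch to the other bin (the task is binary, as in the study)."""
    bins = {"paint", "wire"}
    if detect_errp(eeg_window):
        # The human brain signaled "that's wrong" -- take the other bin.
        (corrected,) = bins - {initial_guess}
        return corrected
    return initial_guess

# Example: the robot guesses "wire" for the spray-paint can, and a strong
# deflection in the (simulated) EEG triggers a correction.
calm_eeg = [0.1, -0.2, 0.3, 0.0]   # no error signal
errp_eeg = [2.5, 3.1, 2.8, 3.4]    # pronounced deflection

print(sort_object("wire", errp_eeg))   # robot corrects itself: paint
print(sort_object("wire", calm_eeg))   # no ErrP, guess stands: wire
```

Because the task has only two outcomes, a single bit of feedback ("that's wrong") is enough to fully correct the robot, which is part of what makes this binary setup a tractable first step.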
Making robots effective "collaborators"
"I think this is exciting work," said Bin He, a biomedical engineer at the University of Minnesota, who published a paper in December that showed people can control a robotic arm with their minds.
He was not affiliated with the MIT research, but he sees this as a "clever" application in a growing yet nascent field.
Researchers say there’s an increasing desire to find ways to make robots effective "collaborators," not just obedient servants.
"One key aspect of collaboration is being able … to know when you’re making a mistake," said Siddhartha Srinivasa, a professor at Carnegie Mellon University who was not affiliated with the MIT study. "What this paper shows is how you can use human intuition to bootstrap a robot’s learning of what its world looks like and how it can know right from wrong."
Srinivasa says this research could potentially have key implications for prosthetics, but cautions it’s an "excellent first step toward solving a harder, much more complicated problem."
"There’s a long gray line between not making a mistake and making a mistake," Srinivasa said. "Being able to decode more of the neuronal activity… is really critical."
And Srinivasa says that’s a topic that more scientists need to explore.
Potential real-world applications
MIT’s Rus imagines a future where anybody can communicate with a robot without any training — a world where this technology could help steer a self-driving car or clean up your home.
"Imagine … you have your robot pick up all the toys and socks from the floor, and you want the robot to put the socks in the sock bin and put the toys in the toy bin," she said.
She says that would save her a lot of time, but for now the mechanical house cleaner that can read your mind is still a dream.
Asma Khalid leads WBUR’s BostonomiX team, which covers the people, startups and companies driving the innovation economy. You can follow them @BostonomiX.