The Adorable Robot That’s Helping Deaf Children Communicate

WIRED features a video and print story about the Robot AVatar thermal-Enhanced (RAVE) learning tool prototype developed by Dr. Laura-Ann Petitto and a team at Gallaudet University in collaboration with Yale, the University of Southern California, and Italy’s Università Gabriele D’Annunzio.

The Wide-Eyed Robot Teaching Deaf Children To Communicate

THIS KID DOESN’T know it, but he’s kind of a big deal. Sitting in his mother’s lap, he looks at a mohawked robotic head, which periodically turns left to look at a computer screen with its big blue eyes. And the infant takes the cue, glancing at the screen, where a human avatar signs a nursery rhyme.

This boy is doing something remarkable on two levels. For one, he’s practicing a pivotal skill for his development—language—with a clever new platform that blends robotics, fancy algorithms, and brain science. And he’s doing what few humans have done before: communicating with a robot using facial cues alone.

In an ideal world, every child would get enough face-to-face communication during early development to build solid language skills, whether through sign language or the spoken word. The reality is that not all parents have the time to sit down and read to their kids. And for deaf children, the parents themselves may first need to learn to sign.

What researchers at Gallaudet University (in collaboration with Yale, the University of Southern California, and Italy’s Università Gabriele D’Annunzio) have developed isn’t a substitute for interpersonal communication between parents and infants, but an experimental supplement. It’s meant to simulate the natural interaction between a baby and its mother or father.

What’s interesting about the developing infant mind is that natural language, whether spoken or signed, stimulates the same areas of the brain. "The same neural sensitivities, they are processed in the identical swatches of brain tissue," says Gallaudet neuroscientist Laura-Ann Petitto. "The brain tissue that we used to think was only responsible for sound is not the unique bastion of sound processing. It's the unique bastion of human language processing."

With this knowledge in hand, the team can strap little brain-scanning hats to deaf infants and watch for these areas to light up. Now they know when the child truly engages in natural language. (In the world outside Look Who’s Talking, a baby can’t tell you it’s interested in what you’re saying or signing.)

But the team’s robot-avatar system uses a more subtle method to read the infant. A thermal camera trained on the baby’s face watches for tiny changes in temperature, which are associated with heightened awareness. Combined with face-tracking software, this can determine not just when the robot is able to direct the kid’s gaze to the avatar, but when the kid is actually engaged. And infants seem to love it—even hearing children will try to sign back to the avatar.
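WIRED doesn’t detail the team’s software, but the core idea is simple enough to sketch. The toy Python below assumes a thermal camera already registered to a face tracker, so each frame yields a patch of facial temperatures; the class name, the window size, and the 0.3 °C threshold are invented placeholders, not the RAVE team’s actual parameters.

```python
import numpy as np
from collections import deque

class EngagementMonitor:
    """Toy engagement detector: compare the current facial temperature
    against a rolling baseline and flag sustained deviations.
    All names and numbers here are illustrative guesses, not RAVE's."""

    def __init__(self, window_frames=150, threshold_c=0.3):
        self.history = deque(maxlen=window_frames)  # ~5 s at 30 fps (assumed)
        self.threshold_c = threshold_c              # assumed shift, deg C

    def update(self, face_roi_temps: np.ndarray) -> bool:
        """face_roi_temps: temperatures (deg C) over the face region
        that the face tracker located in one thermal frame."""
        current = float(np.mean(face_roi_temps))
        baseline_ready = len(self.history) == self.history.maxlen
        engaged = (baseline_ready and
                   abs(current - np.mean(self.history)) > self.threshold_c)
        self.history.append(current)
        return engaged

# Example: a stable warm-up period, then a small facial temperature rise
monitor = EngagementMonitor()
for frame in range(200):
    temps = np.full((20, 20), 34.0 + (0.5 if frame > 160 else 0.0))
    if monitor.update(temps):
        print(f"frame {frame}: possible engagement spike")
```

In the real system, a signal like this is fused with gaze direction, which is how the setup can tell not just where the infant is looking but whether the looking is attentive.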

But why go through all the trouble of face tracking, algorithms, and motion capture to build the avatar? Because interaction, even with a robot masquerading as a human, is essential to language development.

Sure, you could plop a kid in front of Sesame Street, which tries its best to engage with children, but the medium inevitably comes up short. "It's not the tablet itself, it's not the computer itself or the TV itself, it's the way it's used," says Diane Paul, director of clinical issues in speech-language pathology at the American Speech-Language-Hearing Association. "We actually want families, caregivers to be reading to their children, speaking to their children, signing, singing. We want that social interaction because it's within that context that you learn speech and language or signing skills."

Without enough of that kind of interaction, a child’s brain doesn’t develop as it should. A robot can’t replace a mother or father, nor is it meant to. But one day it might work as a stand-in, grabbing a baby’s attention when the parents are otherwise occupied, giving the child that extra bit of language practice.

Beyond the implications for child development, the system is fascinating from a robotics perspective. Robots are notoriously bad at reading our emotions and at expressing their own. The subtlety of human facial expressions is hard for machines to parse, and robots (short of showing an animated face on a screen) struggle even to smile or frown. Yet this robot, using only its eyes, can grab the gaze of an infant and direct it to the avatar. Robot and human are communicating in a simple yet compelling way.
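To make that gaze exchange concrete, here is a minimal sketch of the cueing loop as described in the story: catch the infant’s attention, look at the screen, and start the avatar if the infant follows. Every function here (infant_gaze, robot_look_at, play_nursery_rhyme) is a hypothetical stand-in; the article doesn’t describe the team’s actual control stack.

```python
import random
import time

# Stand-ins for the real perception and actuation layers; everything
# below is illustrative, not the RAVE team's actual API.

def infant_gaze() -> str:
    """Pretend gaze classifier: report where the infant is looking."""
    return random.choice(["robot", "screen", "away"])

def robot_look_at(target: str) -> None:
    """Pretend actuator: turn the robot's eyes toward a target."""
    print(f"[robot] turning eyes toward the {target}")

def play_nursery_rhyme() -> None:
    """Pretend trigger: start the signing avatar on the screen."""
    print("[avatar] signing a nursery rhyme")

def gaze_cueing_loop(rounds: int = 5) -> None:
    for _ in range(rounds):
        if infant_gaze() == "robot":
            # The robot has the infant's attention: cue toward the screen
            robot_look_at("screen")
            time.sleep(0.5)                # give the infant time to follow
            if infant_gaze() == "screen":  # did the cue work?
                play_nursery_rhyme()
        else:
            robot_look_at("infant")        # re-engage with eye contact
        time.sleep(1.0)

if __name__ == "__main__":
    gaze_cueing_loop()
```

The design point the sketch captures is that the robot never signs anything itself; its whole job is attention management, handing the infant off to the avatar at the right moment.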

So sure, one day children may have sophisticated robot babysitters with sophisticated emotions and interactions. But for now, a little mohawked robot is catching the eye of a kid or two.