Virtual infant BabyX prompts question: how do we feel about AI that looks so much like us?

BabyX is a hyper-realistic screen-based simulation of an infant with rosy cheeks and wide, sparkling eyes. Its lifelike appearance is a result of both art and engineering.

Humanizing artificial intelligence (AI): that's the premise behind BabyX, the lifelike virtual infant from the New Zealand-based research group Soul Machines. The group's work is in many ways unprecedented: it is developing robots that emulate not only human gestures but also actual human functioning.
But it also raises a question: Is human likeness something that we want from our machine counterparts? Or, conversely, does it make humans slightly nervous when artificial beings look too much like us?
Soul Machines' founder, Mark Sagar, is an award-winning special effects artist who has worked in digital character creation for blockbuster films like Avatar and King Kong. He has developed a unique appreciation for the minutiae of human expression. In that way, the computer-generated "people" he creates are in a league of their own, with appearances and movements that are remarkably close to those of humans.
And that is no small feat. After all, it's one thing to look human, but it's a whole other thing to move realistically, explains Michael Walters, a senior lecturer in the School of Computer Science at the University of Hertfordshire.
"It's very difficult to get a robot to not just look right but move right, as well," says Walters, who is also a researcher with the university's multidisciplinary Adaptive Systems Research Group.
"We've seen various humanoid robots, but we aren't fooled by them for very long. They're close but not quite right."
Sagar's team at Soul Machines is working to make virtual beings that are persuasively lifelike, not just in how they look but in how they move and react to stimuli. That's due in large part to the way they're approaching this 21st-century challenge: they're endeavouring to build a simulated brain.
Finding the human connection
An interdisciplinary team that includes neuroscientists and physiologists "is now building biologically inspired models of the human brain," using the concepts of neural networks and machine learning to build a virtual nervous system, says Greg Cross, Soul Machines' chief business officer.
Their goal, he says, is to understand how humans work, and "figure out how we learn to interact with others, and how we learn to create." When their autonomous virtual infant smiles, it's not because of a line of code directing it to do so following certain prompts or inputs — it's in reaction to virtual dopamine and endorphins, the release of which is triggered by real-world stimuli and interactions. In other words, the same things that make humans smile.
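Soul Machines has not published how that virtual nervous system works, but the general idea, an expression driven by simulated neurochemical levels rather than by an explicit input-to-output rule, can be sketched in a few lines of illustrative Python. Everything below, from the class names to the stimuli, release amounts and threshold, is invented for the example.

```python
from dataclasses import dataclass


@dataclass
class VirtualNeurochemistry:
    """Toy model of simulated neuromodulator levels that decay over time."""
    dopamine: float = 0.0
    endorphins: float = 0.0
    decay: float = 0.9  # fraction of each level retained per time step

    def release(self, dopamine: float = 0.0, endorphins: float = 0.0) -> None:
        """A stimulus triggers a 'release' that raises the simulated levels."""
        self.dopamine = min(1.0, self.dopamine + dopamine)
        self.endorphins = min(1.0, self.endorphins + endorphins)

    def tick(self) -> None:
        """Levels fade gradually when no new stimulus arrives."""
        self.dopamine *= self.decay
        self.endorphins *= self.decay


class VirtualInfant:
    """The face reads out the internal state rather than following a scripted rule."""

    SMILE_THRESHOLD = 0.5  # arbitrary cut-off chosen for this sketch

    def __init__(self) -> None:
        self.chemistry = VirtualNeurochemistry()

    def perceive(self, stimulus: str) -> None:
        # Hypothetical mapping from real-world stimuli to neuromodulator release.
        if stimulus == "caregiver_smiles":
            self.chemistry.release(dopamine=0.4, endorphins=0.2)
        elif stimulus == "peekaboo":
            self.chemistry.release(dopamine=0.6, endorphins=0.3)
        # Unrecognized stimuli release nothing, so the face stays neutral.

    def expression(self) -> str:
        # No "if stimulus == X then smile" rule here: the smile emerges
        # whenever the combined simulated levels cross the threshold.
        level = self.chemistry.dopamine + self.chemistry.endorphins
        return "smile" if level > self.SMILE_THRESHOLD else "neutral"


if __name__ == "__main__":
    baby = VirtualInfant()
    for stimulus in ["silence", "caregiver_smiles", "peekaboo"]:
        baby.perceive(stimulus)
        print(f"{stimulus:>16} -> {baby.expression()}")
        baby.chemistry.tick()
```

The point of the toy is the design choice Cross describes: the stimulus changes an internal state, and the facial expression is a consequence of that state rather than a hard-coded response to the input itself.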
"By putting a face on machines they become more human-like," says Cross. "The most powerful instrument we have to show our emotions is the human face."
Soul Machines' team of developers is striving to reach the benchmarks of "emotional intelligence, understanding and responding to emotion," he adds.
In this way, the research group is differentiating its creations from the current wave of consumer robots on the market. Cross sees its AI as the inevitable evolution of faceless virtual assistants like Siri and Alexa, which are now in millions of homes and businesses around the world.
"Humanoid robots are to virtual assistants what television was to radio," he says.
'Uncanny valley'
The assumption is that consumers actually want robots as their digital doppelgangers. But do we?
Despite our fascination with lifelike robots and AI, the answer to that question has so far been mixed. The "uncanny valley," a term coined by Japanese roboticist Masahiro Mori, describes the discomfort we feel around man-made creations that look almost, but not quite, human. Because these virtual characters attempt, yet fail, to fully mimic human behaviours, they can elicit a sense of familiarity while also triggering uneasiness, or even revulsion, in observers.
Walters suspects the uncanny-valley effect will disappear over time as people grow more accustomed to interacting with humanoid robots and simulations. He also sees where voice-based virtual assistants hold an advantage: "talking is relatively easy to do, and the latest speech synthesizers sound very realistic," he says.
The other advantage of the current generation of faceless AI, says Walters, is that "people have a desire for robots to behave with consistency." In other words, the more a machine looks like us, the more we expect it to be capable of. Because Siri and Alexa are just disembodied voices, we are more willing to cut them some slack, a necessity at this early stage of consumer adoption, when these assistants are still buggy and flawed.
When it comes to our adoption of humanoid robots, Walters says, we are "chasing a moving target": the eeriness of the uncanny valley will likely ease as lifelike machines become a more familiar presence in our midst.
And so, as BabyX grows up over the coming decade, consumers might also grow more open to the likes of Soul Machines' robots.
When it comes to the widespread adoption of these lifelike AI, it's unclear which will evolve more quickly: our acceptance of this next generation of humanoid simulations, or the technology needed to render them realistically.