Caring Cyborgs: Robots may be good at following instructions, but could they also be our companions and caregivers?
Feng Zengkun writes: Head to Singapore’s Nanyang Technological University (NTU) and you might be able to spot Nadine, a humanoid robot with soft skin, brunette hair, and a ready smile. Touted as the world’s most lifelike robot, she can not only remember your name and past conversations, but also strike up a chat and even empathize with your troubles.
Over the past few years, scientists across Asia have unveiled a series of humanoid robots that could one day help to look after our children, keep grandma and grandpa company, and give directions to people in public spaces such as museums and malls. Aside from Nadine, there is also JiaJia, a female robot created by Chinese researchers, and Geminoid, a male robot developed by renowned Japanese roboticist Hiroshi Ishiguro (see What Humanoids Can Teach Us About Being Human).
These social robots, so called because they are meant to interact with people, could help to alleviate the shortage of caregivers in aging societies such as Singapore, Japan, China, and South Korea. Their limitless patience could also make them useful in therapy for children with learning difficulties and for elderly people who suffer from dementia.
Although the need is undoubtedly great, are robots currently up to the task? And even if they were, should machines replace human companions and caregivers?
The complexity of a chat
“Robots are highly integrated systems with many sophisticated technologies, but it is still really difficult for them to understand human beings,” said Dr. Takayuki Kanda, a senior research scientist at the ATR Intelligent Robotics and Communication Laboratories in Kyoto, Japan.
Not unlike people, robots rely on sensors, such as cameras for eyes and microphones for ears, to recognize images and sounds, and then process the information to react accordingly. However, while they can be programmed to learn to identify people, objects and words, they remain ill-equipped to detect nuances in human interactions, which range from sarcasm to passive-aggressiveness and subtext.
Most people instinctively pick up on a multitude of subtle cues, such as body language, facial expressions and tone of voice, to determine other people’s moods and feelings. Robots are currently unable to replicate such seamless collection and integration of cues, which means that they are usually unable to tell if a person is being sarcastic or sincere.
There are other technical challenges. Like people, robots’ microphone-ears can be distracted by surrounding noise, and their camera-eyes can be derailed by poor lighting, occlusion, and environmental obstructions such as rain. And while people are good at recognizing other people in unusual positions, such as elderly people who are hunched over, robots might be flummoxed by such variations …