For many years, the intersection of artificial intelligence and advanced robotics has fostered remarkable innovation. By combining these two vastly complex fields, scientists have created robotic systems that can, within certain limitations, think, act, and learn on their own.
Yet the robotic systems seen today in airports, hospitals, and even restocking grocery-store shelves are quite obviously machines: they neither resemble humans nor understand human behavior. Rather, they are advanced systems that churn out responses and actions based on data and models.
One growing, yet controversial, perspective holds that the next leap in robotic artificial intelligence is bridging the gap between humans and robots even further; that is, making robots more human-like.
A lab at Columbia University is tackling exactly this challenge, and last week it announced some of its latest work in the animatronics and robotics space. As the lab’s main page explains: “Ability to generate intelligent and generalizable facial expressions is essential for building human-like social robots. At present, progress in this field is hindered by the fact that each facial expression needs to be programmed by humans. In order to adapt robot behavior in real time to different situations that arise when interacting with human subjects, robots need to be able to train themselves without requiring human labels, as well as make fast action decisions and generalize the acquired knowledge to diverse and new contexts.” The lab describes how it addressed this conundrum: “by designing a physical animatronic robotic face with soft skin and by developing a vision-based self-supervised learning framework for facial mimicry.”
The lab released a video last week detailing its progress. It discusses the creation of “Eva,” an animatronic robotic face with “soft skin, a visual perception system, and a learning framework without any human labels.” The full video can be viewed below:
Overall, the technology is quite jarring. As evident in the video, Eva is able to recognize and mimic the person it looks at, indicating a relatively tailored and responsive behavior. The video also walks through the process of facial mimicry: Eva analyzes the facial landmarks of the human in front of it, learns from them, and then generates a corresponding expression on its own face to match the person it sees.
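To make the idea of label-free facial mimicry concrete, here is a minimal toy sketch of the general approach the lab describes: the robot "babbles" random motor commands, watches the resulting landmarks on its own face through a camera, learns the motor-to-landmark mapping without any human labels, and then inverts that mapping to reproduce a human's expression. This is purely illustrative; the simulated linear face, the landmark counts, and the function names are all hypothetical stand-ins, not the lab's actual system (which uses soft skin and deep networks).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "face": an unknown linear map from motor commands to
# observed landmark positions stands in for the robot's real dynamics.
N_MOTORS, N_LANDMARKS = 6, 20
true_map = rng.normal(size=(N_LANDMARKS, N_MOTORS))

def observe_landmarks(motor_cmd):
    """Simulated camera: landmarks the robot sees on its own face."""
    return true_map @ motor_cmd + rng.normal(scale=0.01, size=N_LANDMARKS)

# Self-supervised data collection: random "motor babbling", no human labels.
cmds = rng.normal(size=(500, N_MOTORS))
obs = np.array([observe_landmarks(c) for c in cmds])

# Fit the forward model (commands -> landmarks) by least squares.
solution, *_ = np.linalg.lstsq(cmds, obs, rcond=None)
learned_map = solution.T  # shape: landmarks x motors

def mimic(target_landmarks):
    """Find the motor command that best reproduces observed landmarks."""
    cmd, *_ = np.linalg.lstsq(learned_map, target_landmarks, rcond=None)
    return cmd

# A human expression, encoded as landmark positions the robot observes:
hidden_cmd = np.array([0.5, -0.2, 0.1, 0.0, 0.3, -0.1])
human_landmarks = true_map @ hidden_cmd
cmd = mimic(human_landmarks)
```

The key property this sketch illustrates is that the training signal comes entirely from the robot watching itself, which is what lets the approach scale without a person hand-programming each expression.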
This technology is potentially a massive step for artificial intelligence and robotics in healthcare. If this learning algorithm can be perfected to not only mimic human emotions but also respond to them appropriately, it could become a novel, yet controversial, addition to the realm of healthcare innovation.
Take, for example, the use of robots for social needs. During the height of the Covid-19 pandemic, much emphasis was placed on the detrimental effects of social isolation and loneliness. Although the idea of robotic companionship is not novel, many experts contend that socially and behaviorally trained robots could provide some respite from loneliness.
“Robo Therapy” is a related phenomenon. As the American Psychological Association (APA) describes, “Socially assistive robots could provide companionship to lonely seniors, teach coping skills to adolescents with depression or even help someone quit smoking or lose weight […] Robot therapy isn’t as out there as it sounds […] The goal isn’t merely to provide kids with a robotic playmate. Rather, researchers hope that robotic systems can help the children learn valuable social skills such as imitation, taking turns and maintaining joint attention with another person.” Importantly, the APA adds: “For now, though, robotics researchers think of their creations as tools, not substitutes.” That is, they contend that the intention is not to replace trained professionals in these social or behavioral therapeutic roles, but rather to augment their practice.
And that remains an important consideration. While developing robotic technology and artificial intelligence systems to recognize and respond to human emotions is a scientific breakthrough, the ever-perplexing question of “Why?” comes to mind. There may be real benefits to gain from this technology, if it is perfected. The challenge, however, will be to keep asking “Why,” so that the humanistic aspects of healthcare and medicine are preserved. Patients don’t see physicians for purely diagnostic purposes; rather, the patient-physician relationship is a revered bond that surpasses diagnostics alone, carrying social and cultural norms in addition to purely medical needs. That value is only as strong as a physician’s ability to respond to and take into consideration the humanistic aspects of patients’ lives. This may arguably become harder if robotic technology aims to replace, rather than augment, healthcare expertise.
Nonetheless, venturing into the space of human facial response and animatronics is daring, groundbreaking work. Time will tell how best to harness this technology in a way that preserves the nuanced, humanistic aspects of medicine while continuing to push healthcare innovation forward.