Can robots learn to read social cues? Is it possible to derive empathy from an algorithm? Ray questions the necessity of socially intelligent robots for jobs that humans already do well. She is also skeptical that numbers and computation can produce something as complex as social intelligence. Josh, however, argues that it is useful for robots to learn how to read social cues because they must navigate different environments and spaces.
The philosophers are joined by Elaine Short, Professor of Computer Science at Tufts University. Elaine's work focuses on robots that help and learn from people, as well as what happens when robots exit the lab and enter the world. Ray asks if robots are truly learning social intelligence or if they're simply simulating humans, but Elaine considers the distinction to be unimportant in her field. Josh asks about the success of companionship robots, which leads Elaine to describe the success of animal and zoomorphic robots. She believes that humanoid companionship robots will still take time to develop, especially since a large problem in social robotics lies in managing human expectations.
In the last segment of the show, Josh, Ray, and Elaine consider the tension between popular science and sci-fi representations of robots versus how they actually operate. They look at various weaknesses of socially assistive robots, such as their potential to make mistakes, the accidental emotional harms they can cause, limited accessibility, and high costs. Elaine emphasizes the importance of increasing diversity in robotics and computing, and she explains how assistive robots can support disability rights and empower people with disabilities.
- Roving Philosophical Report (Seek to 4:27) → Holly J. McDede discusses how a robotic bee and a robot designed to help kids with autism spectrum disorder are impacting the social lives of their respective communities.
- Sixty-Second Philosopher (Seek to 49:01) → Ian Shoales examines the diversity and tropes of robots in pop culture.