Socially Intelligent Robots

12 November 2021

Would you like a robot to assist you with tasks around the home? What kinds of jobs would you be comfortable leaving a robot to do? Would you trust one to take care of your child or an elderly parent? 

This week, it’s the first episode in our new series, The Human and the Machine, generously sponsored by the Stanford Institute for Human-Centered Artificial Intelligence (HAI). We’re kicking off the series with this episode called “The Social Lives of Robots.”

With that title, you might wonder if we’re talking about robots hanging out with other robots, going to robot dinner parties, having robot pals, and so on. But that’s not what this episode is about. We’re thinking about how to develop robots that are socially intelligent. 

As robots interact more and more with humans and learn how best to assist us, they need to learn how to do things like read social cues, such as figuring out what we’re looking at and what our facial expressions mean. And they need to learn how to behave so that we feel comfortable and not creeped out interacting with them. That means that they need to develop social intelligence.

Computers and AI have incredible data processing abilities. They can do all sorts of things, like calculate pi to 1,000 digits in a fraction of a second, and let you communicate with people all across the globe with a mere click of a button. We use them to do things fast and efficiently that would be slow and cumbersome if we had to do all the work ourselves. So why do we also need them to be social?

But we're not talking about computers and AI in general; we're talking about robots in particular. And robots have a kind of body: they move around in space. So they need to be able to perceive and navigate different kinds of environments, and they need to figure out how to behave in those environments. Since we are fundamentally social creatures, a robot navigating human environments will need some social skills.

Here's the problem, though. Robots are unlike your laptop in the sense that they're designed to move around and act autonomously, but they're just like your laptop in the sense that their intelligence is the same kind: computational. Their behavior is just as much a result of ones and zeros as your laptop's. And it's unclear how to get social intelligence out of computational intelligence.

Our social intelligence doesn’t come from manipulating numbers and computing data. We have natural resonance systems that allow us to have empathy, to perceive meaning in facial expressions, to follow each other’s eye gaze, etc. A newborn baby, with no ability to crunch numbers, still has more social intelligence than a robot. So how do we get one kind of intelligence out of a fundamentally different kind?

Clearly this is a problem for machine learning. Engineers and computer scientists have to develop algorithms that allow robots to model and mimic human behavior, and learn how to keep improving and adjusting their behavior to human needs. This seems like a very challenging problem. And even if socially intelligent robots are developed, they’re going to be very different from us. Whatever “intelligence” they have will be, after all, artificial. 
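To make "adjusting their behavior to human needs" slightly more concrete, here is a toy sketch in Python. Everything in it is hypothetical: the single speed parameter, the feedback labels, and the update rule are made up for illustration, and real learning systems are far richer than this.

```python
# A toy, hypothetical sketch of a robot adjusting one behavior parameter
# (how fast it moves) from simple human feedback. Illustrative only.
def adjust_speed(current_speed: float, feedback: str, step: float = 0.05) -> float:
    """Nudge the robot's speed up or down based on a person's reaction."""
    if feedback == "too fast":
        return max(0.1, current_speed - step)
    if feedback == "too slow":
        return min(1.0, current_speed + step)
    return current_speed  # "fine": keep doing what seems to work


speed = 0.5
for reaction in ["too fast", "too fast", "fine", "too slow"]:
    speed = adjust_speed(speed, reaction)
print(f"learned speed: {speed:.2f}")  # settles near what this person is comfortable with
```

The real research problem, of course, is that people rarely hand the robot clean labels like "too fast"; the feedback has to be inferred from things like body language and facial expression.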

You might wonder if we should really bother with this. We already have robots doing various tasks, like helping in the operating room or the assembly line. That’s what they’re really good at. Would we really want robots to be nurses and teachers too? Surely, those jobs should be left to real people!

I understand the perennial worry about robots taking away human jobs, but I think robots could be assistants to human teachers and nurses, not replacements. Consider the work of a nurse, which can often be physically demanding. Nurses have to do things like help people sit up or get in and out of bed. Now imagine nurses having robot assistants to help with all the heavy lifting, while they do all the things where the human touch is important.

Of course, if all the robot is doing is heavy lifting, you might wonder why it needs social intelligence at all. Why not just work on designing robots that are really good at lifting people in and out of bed and forget about trying to develop their social skills?

Helping physically incapacitated people is not like building cars on an assembly line. One car is just like the last, and once the robot knows how to assemble one, it just does the same thing over and over. But people are individuals with different needs and desires. Even if all the robot is doing is helping the patient get in and out of bed, it will still need skills that the robot working in a car factory doesn't need.

When a human nurse is helping a patient sit up and the patient winces, the nurse understands what that means: the patient is in pain or uncomfortable. If a robot is going to take over this kind of work, it also needs to be able to tell when the patient is uncomfortable, just by looking at the patient's facial expression. It should be able to adjust immediately to the patient's needs, just as a real-life human nurse would.

In other words, if robots are going to be interacting with us, assisting us with various tasks, they need to be able to anticipate our needs based on things like body language and facial expression, and that’s why they need at least some social intelligence. 
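Here is a minimal sketch, in Python, of what that perceive-and-adjust loop might look like. Everything in it is hypothetical: the expression features, the discomfort score, and the robot interface are stand-ins for whatever a real facial-expression model and lift controller would provide.

```python
# A minimal, hypothetical sketch of the "notice a wince, ease off" loop.
# The feature names, discomfort score, and robot interface are illustrative
# stand-ins, not any real robot's or library's API.
import time
from dataclasses import dataclass


@dataclass
class FaceObservation:
    brow_furrow: float    # 0.0-1.0, as estimated by some facial-expression model
    mouth_tension: float  # 0.0-1.0


def discomfort_score(obs: FaceObservation) -> float:
    """Collapse expression features into a single discomfort estimate."""
    return 0.6 * obs.brow_furrow + 0.4 * obs.mouth_tension


def assist_sit_up(get_observation, set_lift_speed,
                  max_speed=1.0, threshold=0.5, steps=100):
    """Raise the patient, pausing whenever the face signals discomfort."""
    speed = 0.0
    for _ in range(steps):                        # bounded loop for the sketch
        obs = get_observation()                   # one camera frame -> features
        if discomfort_score(obs) > threshold:
            speed = 0.0                           # stop and let the discomfort pass
        else:
            speed = min(max_speed, speed + 0.1)   # ease back up gradually
        set_lift_speed(speed)
        time.sleep(0.1)                           # simple 10 Hz control loop
```

The point of the sketch is just that "noticing a wince" has to be reduced to numbers a program can act on; turning raw camera frames into those numbers reliably is exactly where the hard machine-learning work lives.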

But robots are also being designed to do a lot more than this. The primary function of socially assistive robots is social interaction with humans, and they are being used in all sorts of interesting ways, like helping people with autism learn social behavior skills.

To talk about both the challenges and potential benefits of social robotics, our guest this week is Elaine Short, a computer scientist from Tufts who actually works on designing socially assistive robots. Hopefully Elaine will explain exactly how to get something like social intelligence from a computer algorithm!

Photo by Andy Kelly on Unsplash

Comments (2)


Tim Smith

Friday, November 12, 2021 -- 10:47 PM

Ray asked a profound question at the 17:35 mark. Would a robot be social in the way a human is, or would the robot be simulating social behavior? Is that a distinction?

That is a distinction that generalizes to AI at large. Suppose we implement the Fregean dream machine and bring consciousness to AI: will that mean machines will approach human biological consciousness? Do we have to solve the hard problem first?

The answer to this is no. A robot will never have to reassemble a memory, parse asynchronous images from left and right inputs, or likely have cause to ululate or feel human-like emotion.

Robots will be able to move about the world and even learn from their interactions. However, I doubt they will feed forward sensation or create their sense of presence as humans do today. Instead, they will have more and different qualia. Some will enable them to intuit human social cues, check that intuition, and correct it in real-time. They will do it their way, however. Robots will be better judges of social signals than humans in the end. That will make all the difference, and it will be a distinction worth making when it confuses humans who can’t distinguish those cues nearly as well. That is a world worth thinking about and avoiding.


Harold G. Neuman

Tuesday, January 18, 2022 -- 7:09 AM

See my remarks on the November 2021 post, The Social Life of Robots. I do not KNOW that robots cannot be equipped to have social lives. I just can't see it, given what we have and know. Would the robotics pros create another vehicle for sentience, or is that even a pipe dream? Isn't there a point beyond which we dare not venture? This sorta goes back to what has been said of what we would, could, and should do, doesn't it? If this is primitive thinking, well, alright then.
