Puzzle 3: Kant on Lying to Robots

22 May 2020

Many of you know by now that I’ve committed to presenting philosophical puzzles for the duration of the Corona crisis. The idea is to distract you from the woes of the world. My first two puzzles were on whether beliefs are under voluntary control and on how to define the concept of an identity. In each blog I explain the puzzle and don’t say anything by way of solution until the next. (Accordingly, I’ll offer some thoughts on identity at the end of this one.) 

This month's puzzle is somewhat sci-fi in nature, but it’s not totally farfetched, as we’ll see. 

 

The motivating question is this:

 

What should Immanuel Kant say about lying to robots?

 

Kant famously takes a hard line on the morality of lying: it is always wrong. The most direct derivation comes from the Formula of Humanity version of his Categorical Imperative, which in one standard translation reads: “So act that you use humanity, whether in your own person or in the person of any other, always at the same time as an end and never merely as a means.” (There are other formulations of the Categorical Imperative, but I won’t get into those here.)

 

Central to all of Kant’s morality, then, is the imperative not to use people merely as a means to getting what you want. That doesn’t mean you can’t receive help (it’s fine to use a barber as a means to a haircut). But when you do use others as a means to an end, you have to do so in a way that respects their humanity (the barber agrees to do it). 

 

Importantly, the Formula of Humanity rules out lying completely. Humanity, for Kant, includes being able to rationally reflect on one’s principles for action and to do so on the basis of knowledge. When you lie to someone, you give them information they take for knowledge. And then they’ll decide how to act on that basis. So in lying you are using capacities in someone that make them human (reasoning, making choices on the basis of information), but you’re using those capacities merely as a means to get what you want. For Kant, you must never do that. 

 

The most forceful objection to date against Kant’s strict no-lying view is the famous “Nazi at the door” case. Suppose it’s 1943. You’re in Nazi Germany, and you’re hiding a Jewish family in your attic. A Nazi officer comes around doing routine checks and asks whether anyone lives with you aside from those registered. The officer doesn’t seem suspicious, so you could lie without getting caught. 

 

Commonsense morality says you should lie. But according to Kant, you should not, since lying is always wrong! The entailment of Kant’s theory is clear: lying would be using the Nazi officer’s humanity merely as a means. This has been enough to make many people give up on Kant’s moral theory altogether. (Kant, as it happens, died more than a century before World War II. But since philosophical theories are supposed to be timeless, that doesn’t get him off the hook.)

 

The puzzle I have, however, is different. The case I offer here doesn’t show that Kant’s theory is right or wrong. It’s a puzzle about what Kant’s theory would even say. 

 

Suppose we’re in a situation analogous to Nazi Germany, where millions of people of a certain ethnicity are being rounded up and sent to death camps. You are hiding a family in your attic. But instead of a human officer coming around, it’s a robot with speech technology—like so many customer service lines now use. The robot, on its routine check, asks if there are any unregistered people living in your house. And here’s how the robot works. If you say there are not, it moves along and doesn’t report anything to the humans who control it. If you say there are, it reports you, and real humans come to do a raid. 
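
For concreteness, here is a toy sketch of the robot’s decision rule exactly as stipulated (a few lines of Python with invented names; it adds nothing beyond the description above): a “no” answer never reaches a human, while a “yes” answer triggers a report and a raid.

# Toy sketch of the hypothetical door-checking robot; names are invented for illustration.
def robot_door_check(says_unregistered_people_live_here: bool) -> str:
    if says_unregistered_people_live_here:
        # A truthful 'yes': the robot reports you, and human officers come to raid.
        return "report to human controllers -> raid"
    # A 'no' (the lie, in our scenario): the robot moves along and tells no human anything.
    return "move along; nothing reported to any human"

print(robot_door_check(False))  # the lie: no human ever hears anything
print(robot_door_check(True))   # the truth: report, then raid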

 

Commonsense morality, again, says you should lie to the robot, just as you should have lied to the Nazi officer in the first case. But what should Kant say, assuming he wants to stick to his theory? His theory seems to say you should never lie. Yet here you would be lying not to a person but to a robot. And I designed the case so that if you did lie to the robot, nothing would be reported to an actual human: in your communications with actual humans, you wouldn’t have lied; you would merely have caused the robot not to report anything. On the other hand, perhaps the human controllers could later infer from the robot’s rounds what was said at various houses, in which case you would, indirectly, be deceiving them after all. So it’s not clear whether Kant’s rationale for never lying applies here or not. 

 

I genuinely have no idea what Kant should say. There might be something somewhere in one of Kant’s voluminous writings that bears on this, but I don’t know what. My game plan, once this blog is posted, is to send it to several Kant scholars to see what they say. I am pretty sure some of them will tell me that I butchered Kant’s theory. But setting that aside, I am curious to see if there will be a consensus on the case itself.

 

For now just note that, as an ethical problem, the issue of lying to robots or other AI systems will become more pressing—even though most cases won’t be as dramatic as the one sketched. Voice recognition software and AI decision systems are getting better and better. So an ethical theory should indeed have something to say. Stay tuned to find out in my next blog whether Kant can deliver!

 

* * *

 

And now a few words on identity. First, a book recommendation. Check out Kwame Anthony Appiah’s recent The Lies That Bind, which is a great introduction to the philosophy of group identities. You’ll see that some of the moves Appiah makes help break the circularity I identified in my last blog. 

 

Second, I think the crucial thing about identities is this: they have what philosophers would call a dual direction of fit. Mental states like beliefs have a mind-to-world direction of fit: beliefs are supposed to track what the world is like. Mental states like desires have the opposite, world-to-mind direction of fit: they get you to act so as to make the world conform to what is desired. One kind of mental state conforms to the world; the other makes the world conform to it. 

 

But identities are curious self-categorizations in that they go both ways: they both purport to report on what you, as part of the world, are like (“I am a Democrat!”), and in this they are like beliefs; but they also structure behavior in such a way that you will continue to make their contents true (you seek to behave in Democratic ways), and in this they are more like desires. 

 
