Digital Persons?
Jan 7, 2022
Could robots ever have feelings that we could hurt? Should we hold them responsible for their actions? Or would that just be a way to let humans off the hook? This week, we're asking "Could Robots Be Persons?"
As we approach the advent of autonomous robots, we must decide how we will determine culpability for their actions. Some propose creating a new legal category of “electronic personhood” for any sufficiently advanced robot that can learn and make decisions by itself. But do we really want to assign artificial intelligence legal—or moral—rights and responsibilities? Would it be ethical to produce and sell something with the status of a person in the first place? Does designing machines that look and act like humans lead us to misplace our empathy? Or should we be kind to robots lest we become unkind to our fellow human beings? Josh and Ray do the robot with Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance, and author of "The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation."
Part of our series The Human and the Machine.
Should robots be treated like people? Could they be blamed for committing a crime? Josh is critical of why we would want robots to be like humans in the first place, and he is especially concerned by the possibility that they might turn against their owners or develop the capacity for suffering. On the other hand, Ray points out that robots are becoming more and more intelligent, so it's possible that they might develop real emotions. Plus, they wonder about the difficulty of drawing the line between a complicated artifact and a human being.
The hosts welcome Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance in Berlin, to the show. Joanna discusses the EU's current policies on digital regulation and a proposal to create synthetic legal persons. Josh asks why we shouldn't design robots with rights and responsibilities even if we could, and Joanna points out that we shouldn't own people or give ourselves the power to call them into existence. Ray brings up the unpredictability of humans, prompting Joanna to describe how an unpredictable robot is incompatible with a safe product. She explains that if robots were granted personhood, designers would no longer be morally or legally liable for the actions of their artifacts, which would create moral hazards.
In the last segment of the show, Ray, Josh, and Joanna discuss the misconceptions about robots and personhood and how the way we think about robots reveals something about ourselves as human beings. Ray considers whether we can view robots as extensions of our capabilities and what happens when users and manufacturers have different moral values. Joanna explains why we shouldn't think about corporations as AIs, but rather as legal persons. Josh brings up the possibility that designers might want to create robots that are more person-like, and Joanna predicts that in the next few years governments will develop regulations to inspect and certify robots in much the same way as medicines.
Josh Landy
Should robots be treated like people?
Ray Briggs
Could they be blamed for committing a crime?
Josh Landy
Will they one day have feelings we can hurt?
Comments (30)
Tim Smith
Thursday, December 2, 2021 -- 1:03 PM
If robots can extend empathy, learn operantly, and are embodied - I have no issue giving them personhood. Along the way, they also need to pay for themselves, leave no trace and improve the lives of others. If only people were held to the same standards.
I do feel like this level of intelligence and compassion is possible and probable given current insights in deep learning.
Tim Smith
Thursday, January 13, 2022 -- 10:18 PM
I just listened to this show and found it daft and wanting. There is a great deal of philosophy on this topic and little direction offered. I don't fault the guest, but I throw shade on the European Union (EU), for which she is speaking. The EU has this fundamentally wrong, not only on AI in general but also on the role of government and industry. The world needs philosophy, especially on this topic, and this appears to be a missed opportunity. I stand by my comment above; robots can be persons, and the criteria are not hard to achieve or conceive.
We don't have to wonder if robots could be persons. Some people already treat them as such. That is all that matters. Just because you or I agree that something is not a person, does that make another person's judgment wrong? Personhood is extended very early on in childhood. It is one of the first things people learn to do. It doesn't matter that your hydranencephalic child doesn't have a cortex. Parents can extend personhood to these medically fragile children without concern about others' judgment. People abort fetuses based on the prospects of their children's future. These are matters for their own lives. Personhood is in the eyes of the beholder.
What we legally consider a person is another matter. Professor Bryson, Josh, and Ray confuse AI, robots, agency, and psychology. Philosophically we need to disambiguate these concepts to reach a legal consensus – and this show doesn't do that.
It is likely robots will be networked; the ones we currently use are, for the most part, and the trend is for that to increase, but not necessarily. If not, and they are embodied, operant, and empathetic, what does it matter if they are robots? These are the three criteria we use in childhood, and these are the criteria we should use going forward, but only for robots that are not networked.
If a robot is networked, it is no longer a robot. It is an agent. This isn't my term, but it is a disambiguation that also needs to be made. Your phone is also an agent. You, in part, are an agent by engaging in philosophy. But the agency that a network imparts is one of degree. The fundamental mental disambiguation is your thought.
Are you networked mentally with others? By ideas and culture, yes, you are. By physical means, no, you are not. When we learn a language or culture, we associate specific thoughts in certain ways, making it tricky. However, the primary concept here is one of unity, mentally and physically. Is your thought a part of a more powerful being? I'm going to go out on a limb to say that it is not. I can't say that about everyone, but I can say that of most people reading philosophy and taking life as a question. This sense and physical state of affairs is termed unity or embodiment, and a primary criterion of personhood wrapped into our own sense of being.
Not being networked is a crucial attribute of personhood, and it stems from embodiment.
From our sense of embodiment or agency, we sense causing things to happen. This sense is our operant nature. We can kill things, laugh at them, ignore them, and produce effects. Along with embodiment, operancy is another critical attribute of personhood.
The last criterion is empathy, and Johnny 5 doesn't have this in the roving reporter piece. Joanna also seems to discount the Ask Delphi approach of summary morality, which is shortsighted.
Broadly it is assumed that people know their thoughts, emotions, and moral standing. We don't. No one does. Why do you love your pet? Your child? Yourself? You don't know. No one does. Plato shares this through his depiction of Socrates. This is the most important tenet of Western Philosophy… we don't know.
Why should a robot have to be accountable for this? Why should AI be obligated to know their reasons or thoughts even? People don't and aren't held responsible, for the most part. Most people share sentiments with their parents. If a robot takes theirs from their creator – that makes them less worthy of personhood?
Robots and AI are different from human beings, and their 'ought implies can' will be very different. Robots could be immortal, much less forgetful, sleepless and multifocal in ways very unlike humans. I don't think that implies they are not persons, whether or not they achieve all human emotion modalities, suffering, or morality.
I don't encourage people to build human-like robots; doing so is false and deceptive. Also fraudulent and misleading is the idea that robots cannot be people. Joanna Bryson's theory would seem to exclude them from the community in this way, and this is not well thought out or even likely given her take on cloning. She concedes clones are human.
We need more thought on Robots and AI without confusing thoughts that entangle networking. The neurons of neural networks, after all, are not, in fact, neurons.
I will leave larger thought to the blog. This is already too long...
Daniel
Saturday, January 15, 2022 -- 5:42 PM
What do you think about the claim that "AI makes us smarter" which Bryson seems in several places to want her reader/listener to accept as given in the form of an initial assumption?
Tim Smith
Sunday, January 16, 2022 -- 12:56 PM
AI has definitely made it easier to search the transcripts of the shows. This is the part I think you are referring to...where Joanna responds to Peter's question about Norbert Wiener's work (which I have heard of but not read.)
"Joanna Bryson
Yeah. I'm familiar with Wiener. I haven't read chapter nine, I don't think. If I did, it was a long time ago. But this idea that learning to learn is the big tipping point, is something that actually Nick Bostrom has picked up a lot. So he talks about something that's like the singularity where a system learns how to learn, and then it goes exponentially smarter. And then we get into problems where even if we had control, and we set up the goals for the system, we can have side effects we didn't anticipate where the system goes into something we didn't like. So I think that the coherence, there's coherence to that idea, and it's a really good description of human civilizations, since we've had writing. So for the last 10,000 years, since we've been able to use devices, it's not really a machine to write, but you know, artifacts to help make us smarter. We've been taking over the planet in a way that we now realize is problematic. So that's a good description of us. But so far, we generally are able to keep a grip on the actual artifacts themselves. And it's important to realize that, you know, banks and governments and militaries, these are all things that are much more complex than any AI system we're going to build."
She is saying less about AI than writing in particular perhaps. This is an interesting analytical take if that is what you are pointing out. Artifacts are a part of our "smartness" in the respect that Dr. Bryson is referencing.
In the very literal sense, artifacts - writing or AI- don't make us smarter. They just change the problems we think about.
Was this the section you were pointing out? I didn't really think about this in real-time while listening to the show.
Daniel
Sunday, January 16, 2022 -- 6:40 PM
Yes. Tenth line. The same premise appears with specific regard to AI systems in her article entitled "Ethics and Policy for Technology" in the last sentence of the second paragraph: "All these [AI] tools not only make us smarter...", going on to describe how the technology-user returns the favor and makes the smart-making tool smarter as well, as though the initial assertion were unproblematic and only the latter claim needs special emphasis. My understanding of the statement reproduced in your post above was as a reference to typewriters, but upon reading it here it appears to refer to any writing technology, including presumably stones and chisels. The point stands however that in the cited examples she conflates the products of intellectual work with the labor of producing it. And what does it mean for the consumer if she/he becomes progressively divorced from the work of the respective production itself? Doesn't that imply that any unconditional need for product-use generates an unconditioned dependency on productive labor which the consumer knows nothing about? If so, artificial intelligence produces the exact opposite outcome which Bryson suggests. Insofar as those who are dependent on products of intellectual labor cease to perform that labor on their own, I would argue that they therefore become necessarily less intelligent and hence not at all "smarter". According to Bryson's own argument, then, AI makes you stupid. It's this reasoning which seems to be behind her provocative claim that "robots should be slaves" (2010), so that worries about deferment of one's own work to labor for which one has no capability to provide an account for is handed over to whether or not robots or people do it, without mentioning the likelihood of crippling the intellectual power of the consumer in the respective area of retail purchase.
Tim Smith
Monday, January 17, 2022 -- 5:29 PM
Yes. In general, that human minds have benefited/been decremented has been the case ever since we have learned to write.
This "benefit" is why Darwin weighed his book budget vs. the idea of getting married. These are the stakes in the race for AI.
If others get there first, their values will imprint on this intelligence.
It is a deadly serious business.
It isn't quite time yet to throw in the towel. Our problems will be the most interesting instead of the most dire if done right, and that is not decided yet.
There is no Luddite wisdom, only folly. We shouldn't go down that path until we have established some sense of dignity.
Humanity first! Now is not the time to constrain our creations but to guide them. Ethics and thought about AI is, more and more, what Philosophy needs to be doing. I agree with Tartarthistle below... we aren't doing a very good job of it.
It looks like you have read Bryson. I haven't. I just listened to the show. Thanks for pointing this out. Joanna is not well thought out (as I said here and in the blog), but I need to read more... as usual.
Daniel
Wednesday, January 19, 2022 -- 6:24 PM
I should be thanking you, as your sixth paragraph constitutes a clear indication that you know and can share with us what wisdom is, since you clearly know what it isn't. I hope you will not begrudge your readers the intellectual opulence of the gift you have promised, for such riches are only withheld by a miserly spirit, and yours is clearly of a magnanimous and giving nature. Lest the weight of your insight become too heavy to distribute, please attend to the hands outstretched with anticipatory delight to receive it! What is wisdom?
And a few additional remarks, if permissible. I'm curious about who the "others" are who you fear might "get there first" with regards to the supposition of imprinted values (third paragraph). Do you apprehend an unpleasant etching? And the next sentence I found especially enlightening in a way I had not expected, as the suggestion has been made elsewhere that artificial intelligence was not a serious matter, much less "deadly serious", on account of a joke once told by a robot: It said that it's artificially smart because it knows when someone's lying. Still, I can see how AI could be dangerous, and therefore serious, as contained within some aspects of orthodox economic theory is the notion that nothing is in principle non-commodifiable, including intelligence itself. If one's ability to solve complex problems is replaced by mechanisms which are patented by its designer and privately owned, would that not further consolidate power over social management into unjustifiably few hands?
Tim Smith
Thursday, January 20, 2022 -- 10:45 AM
Not apple implies orange? No, that doesn’t work, and I don’t have oranges to give. But I know an apple when I taste it or when it collides with Isaac’s head.
I fully endorse the James Webb Space Telescope and take the window seat in first class when possible. Artifacts allow knowledge, comfort, and experience that no Luddite would eschew altogether. Whether this bargain is worth it is yet to be determined.
If some cohort squandered the use of artifacts, it is mine. More damage has been done to the world on my watch than in any other. People who don’t respect this state of affairs are the “others” who might get to advanced AI first. There are many. These others don’t consider they might be wrong; some rebut arguments of postmodern origin or Malthusian examples or use their economic or strategic might to proffer their flavor of apple.
Creating artifacts that don’t impinge liberty, creativity, and human welfare is possible. This direction is the wisdom we need to pursue. Open source projects, educational certifications, and some moderating governance are required.
No patent clerk will ensure safety once advanced AI is established. Autocrats don’t care about intellectual property when power is at stake. Capitalists don’t care about human welfare for that matter either. Unfortunately democratic socialists would prevent AI before it ever surfaces.
Several ethical etchings could prove disastrous. Emotions, personhood, and liberty are three. Human beings join themselves in their artifacts and are already tinkering with our DNA. Philosophically, we need to frame the discussions not to obscure the issues, which I meant to say above. The difference between robots and agents is one such framing, and operancy and empathy are two others.
Tartarthistle is off the mark below regarding mind and matter. There is no such thing as mind, only embodied matter. In some forms, this embodiment creates intelligence. We need to make these new forms in the image of our best selves if we are to survive the making.
Daniel
Friday, January 21, 2022 -- 6:40 PM
Good point about implication from inductive generalization. Just because someone knows what an apple is, that doesn't mean that they therefore must know what an orange is. Although they are both fruits, acquaintance with one gives us no information by which the identity of the other can be determined. On the other hand, if one is looking for a particular kind of apple, a Granny Smith for example, one must already have some information about what makes it different from other apples. In this case, it's green. But that alone is not enough to tell us exactly what we're looking for, as other apples besides Granny Smith could also be green. Your analogy therefore doesn't work. If wisdom is a kind of knowing, it's analogous to a special kind of fruit that we already have some information about, not one which we've never seen before but nevertheless know that it isn't any that we have. Or let's say a painter wants a special kind of paintbrush which he's never used, a round sable for example. She/he's got a general description and knows the kind of effects she/he wants to produce. The details of its use are however unknown to the painter. It's clear that it's not a flat angled, rose petal, or flat filbert, --all brushes she/he has used before. Here's an analogy which functions in the way you want it to above: You know what it isn't, but only have in mind certain properties of what you're looking for, not the whole object in any detail. But, of course, this would only be a problem for painters. A more instructive analogy which is closer to general human interest is a toilet when one can't find one but urgently needs to use one. Here we know what we're looking for but haven't a clue where to find it or what form it will come in when we do.
It could be in an outhouse, public rest room, shared residence, etc., it might have all sorts of different characteristics one doesn't expect, but our idea of what we're looking for comes from our need to use it, and the fundamental place it has in our way of life. Therefore I'm not convinced you can't give an account of wisdom but nevertheless can identify it in any experience of its appearance. Surely you're not saying that wisdom is less important than a toilet. And if the claim is that only what wisdom produces is clearly identifiable prior to its appearance, are you saying that wisdom is a pile of crap?
Tim Smith
Sunday, January 23, 2022 -- 3:49 PM
Reductio ad eliminandum (crap) seems like a crude course, but I will work with it this one time. I doubt the column will get too much smaller.
No shade of green will transform an apple into an orange. Wisdom is not a game of cups. Eliminating one source of wisdom doesn't mean there is wisdom to be found under the other cups or up a sleeve. All one needs to do is lift the cup to see the wisdom (or more likely the lack of it) and move on. That is what science does; that is what philosophers do.
If Luddite wisdom is not folly, where is there an example of it? Amish farmers use technology, as do roboticists and programmers. Christopher McCandless (the subject of Jon Krakauer's 'Into the Wild') didn't do so well either. Christopher Knight, from 'The Stranger in the Woods: The Extraordinary Story of the Last True Hermit' by Michael Finkel, or any one of the many homeless living on our streets by choice, could be examples of minds refusing the comforts of artifacts, written or otherwise. The desert fathers or Saint Francis are more palatable ancient examples; modern ones might be Wendell Berry or Henry Thoreau. Each example has its caveat. None forgo language, permanently at least. None are "smarter" necessarily than others who compromise differently. There is no black and white to the technophobe/technophile division – that is all I mean by folly.
In the near term, human brain size appears to be shrinking, but that doesn't tally to intelligence – as some smaller brains are more intelligent than larger varieties. We seem to be living in our fictive worlds, more and more, instead of nature. That last point is not a small problem – but it isn’t derivative of writing per se.
Probably the best example is the likely apocryphal story Wade Davis tells in his book 'Shadows in the Sun: Travels to Landscapes of Spirit and Desire':
"There is a well-known account of an old man who refused to move into a settlement. Over the objections of his family, he made plans to stay on the ice. To stop him, they took away all of his tools. So in the midst of a winter gale, he stepped out of their igloo, defecated, and honed the feces into a frozen blade, which he sharpened with a spray of saliva. With the knife he killed a dog. Using its rib cage as a sled and its hide to harness another dog, he disappeared into the darkness." p. 20
If that is wisdom, I would be hard-pressed not to say it isn't crap.
But mostly, I would push back, as I already have done, that AI will not necessarily dim our minds as change the problems they ponder. Logarithms enabled slide rules enabled space travel enabled digital computers in a not so tidy but probable cause and effect.
If we give up the learnings of the past, the artifacts of today, or the intuition pumps of philosophy, we aren't going to be "smarter." I am happy to live with the insight that my heart pumps blood, that temperature is equivalent to molecular agitation, and the insights the JWST will garner gazing into the remnants of the big bang. My calculator integrates where previously I would often make mistakes, and I understand all the better. That is potential wisdom, even if the consequences of writing appear to threaten our natural world at the moment.
Daniel
Monday, January 24, 2022 -- 7:42 PM
A bit surprised you didn't catch it. It's not a Reductio, but rather a Begged Question. I've asked you to assume an unargued for premise, that wisdom is something urgently sought. Indeed it might not be, as in the case of the paintbrush above. Unproblematic is the notion that there could be different kinds of wisdom, but that still doesn't tell me how you can say something's wise without knowing anything about it beforehand. Your argument was that seeing it is analogous to the immediate taste of an apple, and knowing something about it prior to tasting it is like an orange when one has never tasted an orange; (post from 1/20/22, first paragraph). The response to this was that one would already have to know, in the provided analogy, what an apple tastes like in order to "know it when she/he tastes it", even if it is of an unfamiliar color. It's not clear therefore what the non-transformative potential of the color green regarding fruit-species is supposed to explain (third sentence). A few clarifications however fall to be made:
In my post of January 16, fifth sentence, reference was made to Bryson's use of writing technologies as analogous to "artificial intelligence" technologies as an example of her assumption that human intelligence is improved by the latter. Her argument there seems to be, "if writing makes you smarter, so does AI". My subsequent point about the labor of producing intelligent machines therefore in no way applies to writing, either as tool for various ends or the craft of using it.
It's also not clear how the discussion of Ludditism is relevant. Ludd was upset about the loom. He saw it, perhaps quite rightly, as a way to drive the weaver's guilds out of business so that labor-compensation could be minimized for the more unskilled workers in the newly erected factories. His solution was a simple one. Just demolish the factories. No one is suggesting that here. And if someone wants to call anyone who has objections to a deregulated market in artificially intelligent products a "Luddite", it can be safely assumed that the distinction between mass production of textiles and advanced production of intelligent machines is not well observed.
Your last two paragraphs here seem to me to be a plausible argument for Bryson's position. If we had to spend a lot of time figuring about problems that a machine can solve, that is, furnish a correct answer to a question, we would be "less smart" than we would be if we didn't. The provided example of a calculator is paradigmatic in this sense. Still, even if we become better at math as a result, being able to handle more complex mathematical problems, it's not clear that mathematical capability alone equates to intelligence; and calculators still need to be produced by laborers who could presumably withhold their labor at any time, depriving calculator users of their mathematical intelligence, whereby those who never used them in the first place might be much better off.
But the point under discussion here, at present anyway, the point I was trying to make, concerns the question of whether one needs to know something about wisdom before she/he can identify it in the event of its appearance or not. And this you haven't answered. Beyond telling your readers that it's not a game of cups (fourth sentence above) and suggesting that there are different kinds of wisdom, which I think is a good point, the issue of how you know what it isn't without knowing anything about what it is has not been addressed. Are you saying that if you knew what wisdom was already, you would have made yourself wiser as well?
Tim Smith
Wednesday, January 26, 2022 -- 10:18 PM
'Reductio ad eliminandum' is a made-up term to respond to your question 'Is all wisdom a pile of crap?' In any case, you shouldn't ever be surprised that I miss things. I do—all the time. For example, I think I missed your argument altogether. You're asking more questions than making arguments. I believe you are trying to goad me a bit. I am easily provoked. That is where the fun is, perhaps.
You do not need to know wisdom to identify what is not wisdom, which is not the same as knowing wisdom before you test it. Anyone can test a hypothesis, and if it is true, that is not wisdom so much as the knowledge of what wisdom is not. (Did I beg that question?) That is the state of affairs for science and philosophy.
Analytically it may seem not not implies wisdom, but rarely if ever, OK, I'm goaded to say never, but then that would make the entire argument false, so rarely if ever is as far as I will go, rarely if ever does one find oranges. It is almost always a better-tasting apple. The best policy, the only policy really, is to say 'I don't know.' At least from there, you can move forward. If that means you ask a lot of questions, so be it. No one is wise in this world, with the possible exception of the person casting the questions. There is causality, but ( and it is a big but – I will not lie) even that gets whacky when you look at it close up. Some people win, many others lose, which doesn't make them wiser.
Fortunately, there is repeatability in most things. AI quickly masters those things, and I say, let AI have at it. But that isn't what Ned Ludd said. The Luddites, despite your argument, are germane. We have already seen jobs disappear at the feet of good old-fashioned AI, expert systems, and the more fancy machine learning techniques. Technology creates jobs, but for the most part, it takes them away.
We need to create new jobs while harnessing the profit from AI for the good of all humankind. At some point, we may have to give up work altogether. Many have already. Some of the houseless in our streets are computer analysts whose punchcard/spreadsheet mentalities have been transformed into algorithms that don't take any input or lookup table. Others have more direct paths to displacement.
AI owes these people a hot meal and a warm, dry place to live.
Have you read the Culture series by Iain M. Banks? That is where we are going in many ways. Consider Phlebas.
That is not an argument; it is a statement of my belief, which is far more unstable than any wisdom.
Daniel
Monday, January 31, 2022 -- 6:27 PM
Just for the record, the question paraphrased in the first sentence above was not asking about "all wisdom", but what you thought about any wisdom. I happen to agree with you that it's something one can see without knowing what it is beforehand. But I wanted to see what your argument was for that before sharing mine. Compared with personhood, which belongs to everyone, wisdom belongs only to a select few. Hence a wise one is called "bright", the one who appears distinguished from the rest. And that further implies that it always belongs to someone else; and therefore to see it is analogous to seeing a physical object which, even if one has never seen it before, is recognized by the exclusivity of its appearance as something which one could be one's self, but isn't at the time. If one thinks of one's self as wise, that's a value-judgement rather than one thought to be objective of something which appears.
Tim Smith
Monday, January 31, 2022 -- 6:39 PM
There is little objective wisdom that can't be questioned in some way.
This was a fun interlocution for me. Don't ever take offense to my comments. Sorry if I come off that way. If I do, that is objectively unwise on my part.
sminsuk
Sunday, January 2, 2022 -- 2:17 PM
There are obviously a lot of fascinating angles to explore in this topic even if focused solely on robots/AI, but I hope the discussion will range a little further, because I think it can inform some other seemingly unrelated topics. I can think of two:
1) The abortion debate. People argue over whether or not the fetus is "alive" or whether it's "human", but as a biologist I can tell you that that is a silly way to frame the question. Of course fetuses and even embryos are alive, and human, but those facts don't illuminate the question at all, and life does not "begin at conception" (or "begin", at all). Sperm and unfertilized eggs are also alive, and are human, and I don't think most people would seriously try to protect them from "murder". The question is not any of those things; it is more properly, whether the fetus/embryo is a *PERSON*. But to answer that, we have to ask, what exactly is a "person", anyway? And we'll have to do the same, to address the robot question. And that's why I think that's the more fundamental question, which underlies both of these seemingly disparate topics.
2) In U.S. law, corporations are deemed "persons", and this has far reaching consequences for the economy, and for politics, and for democracy itself. Now this may be a bit of a red herring, since that's legalese, and the law does distinguish between corporations and what they call "natural" persons. Nonetheless I think this issue could likewise benefit from asking, just what is a person, anyway?
P.S. I normally don't think too much about it, but in this particular case I am highly amused by the Captcha asking me to confirm that "I'm not a robot" before it will allow me to post!
Tim Smith
Sunday, January 2, 2022 -- 2:31 PM
Too bad we can't react with an emoticon - or this would have gotten a smiley face :-)
"P.S. I normally don't think too much about it, but in this particular case I am highly amused by the Captcha asking me to confirm that "I'm not a robot" before it will allow me to post!"
Tim Smith
Thursday, January 13, 2022 -- 12:49 PM
sminsuk,
It appears the show didn't draw the greater analogies you did here, but as Joanna did say... most thought on AI is reflective of psychology.
I would talk more about what you say in the blog perhaps. I don't know if abortion can be as freely discussed as Citizens United, but both demand discussion in light of granting artifacts personhood.
Harold G. Neuman
Tuesday, January 4, 2022 -- 4:04 PM
I won't hazard any guesses about this. Especially not in the context of current affairs, social; cultural; political or otherwise. When reading the notions of philosophical thinkers, I am drawn to Nagel and Davidson. Mr. Nagel wrote, approximately: reality is about how things probably are; not how they might possibly be. Davidson said there are propositional attitudes, including belief; desire; expectation; obligation and so on. I do not recall him mentioning love, but he may as well have done so. There are those who have a dog in the hunt; a horse in the race, for artificial intelligence. They are proponents of something I am calling contextual reality. They have a stake in the quest; it is within the context of their life work. A bit like finding a 'new' dinosaur, with a dagger-bladed tail. Much more exotic than the former tank-like behemoth...
Contextual reality (CR) is pretty old. It emerged from intuition and the desire for at least fifteen minutes of fame. Sure, it can be argued: everything is real. All that is necessary is an eye; and a beholder. It is more pertinent now because of mass and popular culture. And, the higher our level of technological achievement, the greater our acumen of contextual reality.
This is for Dwells. And anyone else who may be reading.
Daniel
Tuesday, January 4, 2022 -- 6:46 PM
The question of whether robots are persons or not can be reproduced in converse form as whether or not persons are (already) robots. Certainly the human body is explained as a machine, and therefore if one is to know anything about it, it's understood as a machine. But as it doesn't have an identifiable designer, it appears to be non-robotic, while nothing irrefutably confirms that that is not the case. So if one can't say human persons are irrefutably non-robotic, could one say that machines can be demonstrably intelligent and therefore human persons in that sense? For if that's the case, there would be a strong case for considering apparent humanity as robotic in actuality.
First, the intelligence of machines is called "artificial", which can have various meanings. Artificial flowers, for example, only appear to be actual flowers. Therefore this kind of artificiality is excluded since the intelligence of machines is described as genuine. One identical kind of intelligence, apparently, must be produced in two different ways: The artificial way of producing it is designed, mechanical, and constructed. The natural kind is not designed, spontaneous, and not constructed. Here then the main difference is one of judgement, essential in the latter, categorically absent in the former. Insofar as the power of judgement is deployed in the production of intelligent outcomes of human thinking, therefore, humans can't be robots, and therefore robots themselves can't be actual human persons; while they certainly could look like them, as in the case of artificial flowers. Both sensibility and understanding seem by this to be mechanically reproducible. It's the faculty of judgement which appears to be categorically excluded.
Harold G. Neuman
Wednesday, January 5, 2022 -- 6:05 AM
All good thoughts and well-presented. I allowed as how I would not hazard guesses. And, I sketched a notion of contextual reality. What bothers me some is not the field of AI, in itself. There are legitimate motives for pursuit of this branch of science; not the least of which is its potential for bettering the human condition. Though I am not certain all motives are so altruistic. If we, for the moment, credit science with improving our chances of continuance, we ought to embrace possible means to that end. What bothers me is an implicit nod to everlasting life that quietly accompanies the AI evolution. It is a sweet story, as told in Christian doctrine and texts. But it has no rational basis. Living things reproduce. Genes and genetic lines are continued...or not. Christian folk and others are admonished to have faith. As I said, it is a sweet story.
The larger question here, it seems to me, is: SHOULD robots be people? The answer is predicated on whether there are legitimate reasons for such an outcome. One scenario pits one people's robot army against that of another, to avoid waste of human life. This notion has long been fodder for sci-fi stories. But, if the robots are people, the objective is lost. Or, as the man says, you can't have it both ways. It erases even the vaguest idea of contextual reality, while maintaining itself the purest form! Taking this absurdity a quantum leap forward, there would always be critical things robot people could not do: conceive and bear children; give blood; donate organs. The iceberg looms. And, they could not expect the sure and present hope of everlasting life... or could they?
tartarthistle
Saturday, January 8, 2022 -- 6:20 PM
Just curious if philosophers have become robots?
No, seriously. Have "philosophers" sort of lost touch/become disconnected from what they claim to love?
It seems to me they have. It seems to me something rather like nothing has taken up shop in their place...
Daniel
Sunday, January 9, 2022 -- 6:09 PM
How is it clear that the term "philosopher" is not genitive antecedent? I mean, wouldn't it be better translated from the Greek, if instead of "lover of wisdom", which doesn't make much sense to me either, it's translated as "wisdom of lovers", which in my view is a better indication of the respective domain of study, rather than a uniform eagerness of its proponents. Plato's image in the Phaedrus of love as a horse-drawn chariot pulled in two different directions seems to suggest this.
Tim Smith
Thursday, January 13, 2022 -- 12:54 PM
tartarthistle,
The show seemed a bit robotic, for sure.
Philosophy has definitely changed in our lifetimes, but I wouldn't say to the robotic end as much as to the political, personal and/or social one.
I'm not sure I get what you are saying here, but robots will likely outlive the metaphor as AI philosophers outdistance their human counterparts.
Daniel
Saturday, January 15, 2022 -- 6:38 PM
Forum participant Thistle is clearly stating three distinct theses here:
1) If something loves, it is not a robot.
2) If a philosopher doesn't love, it may or may not be a robot, and
3) If a philosopher loves, is loving, or claims to love something which is in fact nothing, it looks like a robot.
Although the argument is not conclusive, it offers to my mind a comprehensive insight into what distinguishes persons from other things. There are two things going on in the third premise which deserve consideration. One is what philosophers, according to the name of their practice or profession, claim to be loving: wisdom (sophia). This perception is corrected by making the first part of the compound a genitive-antecedent. As with philostorgia, tenderness of love, the more common translation of compound words in Ancient Greek is genitive-antecedent. This is also more consistent with Hellenic philosophy from Thales through Plato in general. "Wisdom" describes the kind of objects to be studied, not how to study them. "Lover" (or "friend" in some translations) describes the kind of researcher that does it. Therefore philosophy should be known as a wisdom of lovers, rather than a love of wisdom.
The second point which participant Thistle brings up is first, how nothing can be something, and second, in connection therewith, why, if one loves something which is nothing, or the reverse, robothood overrides personhood in appearance. Regarding the first point, nothing is the primary characteristic of all other somethings, which is how it "is". In the perception of nothing, all somethings retreat. Listening into the silence is a good example. Without nothing, there would be no particular somethings. On the second point, love of nothing as something takes the place in an observer's perception of love of something which is not nothing, e.g. another person. Therefore one such as that looks like a robot. While not conclusive, this line of reasoning indicates an avenue of discussion not approached during the broadcast. If we accept that robots can in a genuine sense be called "intelligent", could they also do philosophy?
Harold G. Neuman
Sunday, January 9, 2022 -- 10:19 AM
Science fiction has hinted at robot personhood, or at least, personality. Humor comes to mind as an attribute. However, if a robot cannot laugh at his/her own humor, that humor is no more than mimicry. The same would seem to apply to other human sensibilities: empathy; sympathy; remorse; love; compassion and the like. We have seen some pretty amazing feats robots can be taught (programmed) to do. Dancing is one I saw recently. I almost laughed, but the choreography was too impressive for that response. I try to exclude the word impossible from my vocabulary, and try harder to think better. For me, though, artificial intelligence is only that. For now. Until the impossible becomes much more...
Daniel
Wednesday, January 12, 2022 -- 5:39 PM
Zoeon yeloion-- The laughing animal. It's probably accurate, as hyenas do not clearly exhibit this behavior. One could also arguably say the same about weeping. For although many animals shed tears, humans, so far as I'm aware, are the only ones known to do so as a response to emotional distress. So the point to my mind is quite insightful. The problem comes where different causes of laughter are distinguished. Laughter in response to rapid manual agitation of the superficial tissue around the lower ribs, for example, is a purely mechanical reaction, and therefore artificially reproducible. Sexist and racist humor also would qualify, since laughter here is not in response to what's funny, but what's offensive. And offensiveness can presumably be catalogued according to demographic determinations and put into some or another sort of program. It's the judgement of what is funny that seems categorically excluded from being artificially produced. The reasons for this are upon examination considerably informative. They have to do with what produces a judgement (e.g. a sensorial stimulation) and what a judgement produces (e.g. something suddenly understood). But I'll let the point stand as written, as it remains undecided whether an analysis of funniness that is not itself funny can be true.
tartarthistle
Wednesday, January 19, 2022 -- 10:48 AM
The thistle just wanted everyone to notice that no one seems to be noticing much these days. That's all. Poke. Poke. Poke. All men are mortal, but not all men risk pissing off their wives, children, and political leaders by wandering around in public barefoot and dressed in rags in order to prove, to anyone paying attention, that how things seem (validity) and how things are (truth) are two conceptually different things, not two equal things. Both are necessary things, both are important things, but they are not the same things, not equal things. The genuine philosopher knows the difference, appreciates both, and loves wisdom. They don't pit the one against the other, but know what distinguishes the one thing (mind) from the other thing (matter), love this wisdom, and constantly go about seeking it in others.
Anyway, this is what the thistle thinks ...Poke. Poke. Poke...
Daniel
Wednesday, January 19, 2022 -- 5:33 PM
Who says truth and validity are identical? While I don't buy the "love of wisdom" business, partly because in my view the Greek word is mistranslated in common parlance (as explained above), your pairing of these as two distinct elements or things seems to me to be both sound and correct. With regard to the logic of argument, truth is a property of premises, whereas validity, from the Latin "validus", meaning "strong", is a characteristic of the connection of one premise to another. As an argument is described as a series of premises, one of which is a conclusion, both are "necessary", but only for the logic of argument. The abandonment of the use of argument in public discussion seems to be an attribute of the current period. Is that what your claim that "no one seems to be noticing much" has to do with?
Harold G. Neuman
Wednesday, January 19, 2022 -- 8:11 AM
Hmmmmmm...
'As we approach the advent of autonomous robots...' I have been wrestling with that "What is it?" opening. It seems to imply that soon, robots will attain the capacity to program themselves. If this, going back a step, emerges from what they may be taught by human programmers, how much creativity can humans teach them, towards this projected outcome? Can creativity be taught? Life forms, 'in the broadest sense...' (see Sellars' pronouncement concerning things hanging together) have capacities for creativity, if only insofar as those are more adaptation than creation. AI instruments can learn, insofar as programmers are able to teach. Chess-playing machines have demonstrated this. But are machines that are capable of besting human chessmasters creative, or are they marvelous manipulators of data? Is there a difference? I think there is. And, I THINK anyone who thinks about it will think so too.
There is something akin to entropy playing out here. Robot autonomy is not easy to reach. Its advent is tentative. I think.
Harold G. Neuman
Wednesday, January 19, 2022 -- 8:25 AM
Dear Thistle:
I like your attitude!
Neuman.