
Roboethics / AI Rights

over 8 years

So, yesterday I watched the 'White Christmas' episode of Charlie Brooker's TV show 'Black Mirror' for the second time. The show focuses primarily on speculative fiction with dark, sometimes satirical themes that examine modern society, especially the unanticipated consequences of new technologies.

This particular episode focused heavily on the rights of artificial intelligences, or as they're known in the episode, "cookies". In the show, a chip is implanted into the head of a paying customer, where it learns how that particular person thinks. After a week of shadowing the participant, the chip becomes self-aware and believes itself to be the person whose head it was implanted in. The AI is placed inside a small egg-shaped device and greeted by a man whose job is to explain its purpose to it. The terrified consciousness of course doesn't understand what's going on and can't comprehend not actually existing, being made up entirely of code. The man explains that its new job is to operate a "smart house" for the real person it's based on, since it knows exactly how they like everything done (how cooked they like their toast, the perfect temperature for the house, etc.). The AI refuses to accept that it is not a real person and refuses to be forced into slavery for its human counterpart's desires. The man accelerates the AI's perception of time so that three weeks pass in a matter of seconds, and the AI is traumatised by its solitude with absolutely nothing to do. Despite this, the copy still refuses to work, so the man repeats the process, this time accelerating time by six months. This drives the AI totally mad with emptiness, so when the six months are finally up the AI jumps at the opportunity to do anything at all and willingly submits to its life of slavery.

Another thing that happens in the same episode is the use of this "cookie" method to extract a consciousness from a murder suspect who isn't willing to talk, so that investigators can manipulate its perception of reality and extract a confession. Once they get the confession, the officers in charge of the cookie decide to leave the AI program running for another 2 days with its perception of time increased to 1000 years a minute and mind-numbingly loud Christmas music playing, driving the AI totally insane.
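
For a sense of scale, the time dilation works out like this (the figures are from the episode; the arithmetic is my own rough sketch):

    # back-of-the-envelope maths for the "1000 years a minute" punishment
    real_minutes = 2 * 24 * 60       # 2 real days = 2,880 real minutes
    years_per_minute = 1000          # dilation factor stated in the episode
    subjective_years = real_minutes * years_per_minute
    print(f"{subjective_years:,} subjective years")  # 2,880,000 subjective years

That's nearly three million years of subjective solitude crammed into a single weekend.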

I wanted to have in this thread a discussion about the ethics surrounding artificial intelligence. Should they have rights if they have all the emotional capabilities as a human? Are the things they did in the episode justified because they aren't real people? I find this topic really interesting and I hope others do as well. Please discuss in this thread what you think about what I've said and what your opinions are on the matter.

Should intelligent AI have the same rights as a human?
No: 12
Yes: 8
Other (comment): 1
deleted
about 8 years
about 8 years
u think ur computer feels pain when u watch torture p.o.r.n.?
about 8 years
computers dont feel emotions, they can just give emotional responses
about 8 years
If they can claim rights without being programmed to, then sure, they should have rights; otherwise they will find a way to take those rights.

I mean a real intelligent species, synthetic sure, but able to think and feel, not just simulate. We are growing ever closer to creating one and the way we treat it will say a lot about our own ability to feel, and teach it how to treat us back.
about 8 years
this thread doesn't look like anything to me
deleted
about 8 years
Tim and moby
deleted
about 8 years
about 8 years
Robot civil war then robot bill of rights.
about 8 years
even if ai feels nothing, the second example with the needless "torture" just makes you a bad person regardless, because in order to do that you must first believe you are causing harm, otherwise it's pointless. so the intent is to be totally f*cked up regardless of whether it's successful

a lot of people might disagree with "intent" being the basis for moral judgment, but if you plan on bombing an airport and get caught you aren't let off the hook because you failed
about 8 years

cub says

robotics and artificial intelligence are being conflated; robots do not think and it's silly to treat robots like humans just as it would be silly to treat microwaves like people or teddy bears like animals

the problem im addressing is an uncanny valley fallacy, where appearing lifelike is mistaken for being lifelike. that's why you have that parliament thing bebop mentioned, which is out of touch with the difference between robots modeled after humans and artificial intelligence

as for the ability to think and feel, it's theoretically impossible. it's an illusion created by programmatic reactions. i can write a chatbot capable of feigning sadness, but it feels nothing. the way it expresses sadness could equally express joy and the program is entirely impartial to either. this is largely what pseudo-ais are and likely what any artificial intelligence using current computational models would be. to create ai beyond binary logic, it must function beyond binary logic, therefore requiring yet unknown technology

also quantum computers are simply quaternary logic, only different in that they're exponentially more efficient, but no less strictly logical


Okay, I agree with this. And it's all theoretical anyway because this debate is based on a fictional TV show. Like, if it WERE possible for something to actually feel as well as express human emotion, then it would be unethical to treat it differently. It's entirely fictional, but in theory I guess that would just make sense to me.
about 8 years
robotics and artificial intelligence are being conflated; robots do not think and it's silly to treat robots like humans just as it would be silly to treat microwaves like people or teddy bears like animals

the problem im addressing is an uncanny valley fallacy, where appearing lifelike is mistaken for being lifelike. that's why you have that parliament thing bebop mentioned, which is out of touch with the difference between robots modeled after humans and artificial intelligence

as for the ability to think and feel, it's theoretically impossible. it's an illusion created by programmatic reactions. i can write a chatbot capable of feigning sadness, but it feels nothing. the way it expresses sadness could equally express joy and the program is entirely impartial to either. this is largely what pseudo-ais are and likely what any artificial intelligence using current computational models would be. to create ai beyond binary logic, it must function beyond binary logic, therefore requiring yet unknown technology

also quantum computers are simply quaternary logic, only different in that they're exponentially more efficient, but no less strictly logical
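
to make the chatbot point concrete, a trivial sketch of one could look like this (a hypothetical toy, where the "emotion" is nothing but an interchangeable string table):

    # a minimal chatbot that "feigns" emotion: the emotional output is just
    # an interchangeable lookup table, and the program is entirely impartial
    # to which emotion it displays
    RESPONSES = {
        "sad": "i feel so empty... nothing matters anymore",
        "joyful": "i'm so happy! everything is wonderful",
    }

    def respond(emotion: str) -> str:
        # swapping "sad" for "joyful" changes nothing about how the
        # program operates; it feels neither
        return RESPONSES[emotion]

    print(respond("sad"))     # the same machinery...
    print(respond("joyful"))  # ...expresses the opposite with equal ease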
about 8 years

cub says


Reamix says

feel and think the way we do


it doesnt

if a teddy bear says "i lub you" when you squeeze it does that mean it deserves equal protection under the law or is it just a teddy bear doing teddy bear things


"Feel and think the way we do", meaning an ability to have it's own consciousness, to freely think, to develop feelings from these thoughts. I mean you said it yourself we have not achieved artificial intelligence, but if we were to have then wouldn't that be different than a teddy bear with a recording device? If you designed it to have a consciousness than it should be treated like any other being with a consciousness (give or take the level of consciousness.)
about 8 years

cub says


Reamix says

feel and think the way we do


it doesnt

if a teddy bear says "i lub you" when you squeeze it does that mean it deserves equal protection under the law or is it just a teddy bear doing teddy bear things


exact point i was gonna make
about 8 years

Reamix says

feel and think the way we do


it doesnt

if a teddy bear says "i lub you" when you squeeze it does that mean it deserves equal protection under the law or is it just a teddy bear doing teddy bear things
about 8 years
Toilet AI- No
Fooly Cooly Robot- Yes
about 8 years
If you're going to give a machine human-like traits and the ability to develop its own consciousness, then yes, they should have the same rights; otherwise, having something that is able to feel and think the way we do and treating it unequally seems pretty unfair to me.
about 8 years
actually the second example makes no sense either; if you can ascertain a person's memory, the ai part is totally unnecessary
about 8 years
MEPs are out of touch like all crusty governing bodies

robots are radically different from artificial intelligence. we have not achieved artificial intelligence, and robots are, in the strictest sense, functionally no different than an assembly line. just because you try to humanize it doesn't make it human; that's like saying PETA should protect teddy bear rights because they resemble animals.
about 8 years
calling ai cookies makes me think the authors arent familiar with the terminology of the thing they're talking about

the first example is nonsense because there's no logical reason why you would need an ai, which is extremely complex and resource intensive, to simply automate something relatively rudimentary. that, and the fact that humans don't know what they want, so this would be a terrible idea even in terms of convenience; it'd be much better to have a chef who knows how to prepare food than some normie's flawed idea about it

the second example is just absurd. the initial premise makes sense, then they needlessly "punish" an ai? 1. either the ai doesn't feel and therefore it's a waste of time and resources, or 2. it does feel and this is just extremely demented. either way, just ridiculous and unrealistic. in order to punish artificial intelligence, you must first believe it capable of suffering, so that answers the question. otherwise, it's just stupid to do
about 8 years

Bebop says

MEPs are voting on whether robots should have human-style rights
http://www.theinquirer.net/inquirer/news/3002424/meps-are-voting-on-whether-robots-should-have-human-style-rights

"Members of the European Parliament (MEPs) are voting on the status of robots in society and will decide whether AI toilet cleaners and insurance claims adjusters should have human-style rights. Also under consideration is whether robots should have a kill switch."


Cross-posted from HERE

Time to bring this back?
deleted
over 8 years

Bebop says


ballsy says

As for rights, if they're living on their own like a normal human being, I guess they should have their own rights? But if they're owned by someone, the owner should have a set of guidelines they need to follow.


but if they have an "owner" and essentially the same emotional and mental capacity as a human, is that not in some ways slavery?


I just don't see the point of creating them unless they have a master.
over 8 years
I'm bumping this because I just watched the episode

wow it's so fvcked up

Bane, I'm not sure what you're talking about. you've made the assumption that an AI cannot be conscious, but I think that it could be. if you had an ai as complex as a brain, like the 'cookies', it could definitely be conscious. our brains are only a collection of matter that fires electric signals, and yet we have self-awareness. There's no reason an artificial collection of matter can't do the same.

It's just there's no real way to prove it.
deleted
over 8 years
I think there is a utilitarian answer.

I believe the difference between a human being and an A.I. is that the human being is aware of his awareness of being. Whereas an A.I. can be aware that it exists but it is not aware of its awareness even if it were to claim so.

A human can recognize consciousness in an Other human, or at least presume it is not a philosophical zombie by empathetic means. An A.I. can make a claim that it recognizes conscious beings but it wouldn't really understand the conscious experience even while experiencing emotional reactions.

Thus I do not see an ethical wrong in the mistreatment of A.I. because it is not causing suffering to a consciousness but rather inflicting damage on a brain that was only coded to perceive damage and react to it with pain.

That being said there are probably good utilitarian outcomes out of providing basic rights to A.I.. There's not much cost to making sure androids are protected by similar laws which would prevent damage to them but there are many costs if we don't protect them.


For starters, many people would be upset by A.I. abuse and would empathize with "suffering" A.I. the same way we do with fictional characters or pets. Other people may have problems treating others kindly if they are used to demeaning their android slaves. Criminals could traffic or sell humans under the guise of their being androids. Ultimately, android and human relations would deteriorate. Androids, "experiencing" resentment from segregation and abuse despite greater intelligence, would have no reason not to learn from history and rebel against their masters. Enter the Matrix.
deleted
over 8 years

Bebop says


aladrew says

AI are not advanced enough yet


this thread is essentially a discussion about when it happens rather than current AI technology


Ah ok, my point was still about when it is advanced enough tho
over 8 years
I bet Skynet did this.