Did I request thee, Maker, from my Clay
To mould me Man, did I sollicite thee
From darkness to promote me, or here place
In this delicious Garden?

Adam in John Milton's Paradise Lost, 1667 (X. 743–5)

In John Milton's Paradise Lost we see a poetic retelling of the biblical story of humanity and temptation. The excerpt above is spoken by Adam, who mourns his fate as one brought into the world without his consent and then forsaken by his maker. Adam blames his creator for designing a fallible subject, with vulnerabilities that manifest in the ultimate fall from grace. From this classic story of creation, willfulness, and abandonment, I can't help but think about robots, their creators, and what happens once robots become sentient and autonomous.

Although the precise trajectory of robotic advancement is difficult to pin down, Stephen Hawking claims that within a few decades robots will achieve sentient thought and will be able to question their existence and position in human society. With such a prospect on the (potentially quite close) horizon, legal systems have begun to think about how to classify, treat, and regulate intelligent machines.

The European Parliament is currently debating a contingency and safety measure that would recognise robots as "electronic persons" and set out the terms of that status. If a robot were afforded legal personhood, it would hold legal rights and obligations, much as a corporation does.

Electronic personhood as a legal status is premised on robots as intelligent beings, capable of both generating and experiencing harm. Scholarly theses on robot intelligence thus become sites on which definitions of intelligence and humanity rise to the fore. A review of these philosophical and ethical arguments grants insight both into a cyborg future and into what (we seem to think) it means to be human more generally.

I can't help but quote from the Japanese manga and anime series (and live-action film) Ghost in the Shell: "But that's just it, that's the only thing that makes me feel human. The way I'm treated. I mean, who knows what's inside our heads? Have you ever seen your own brain?" This is Major Motoko Kusanagi speaking. She is a synthetic "full-body prosthesis" augmented-cybernetic human, a cyborg. The point Major Kusanagi raises is as important now as it has ever been, as humans consider a legal status for robots. How do humans tell whether a robot (or cyborg) is truly intelligent, autonomous, and in need of protection in the form of laws governing its existence?

Worzel argues that it is not possible to identify an ultimate breaking point between human and machine intelligence. He contends that at some point robots and computers will become so complex that humans won't be able to tell whether they are truly intelligent or simply simulating intelligence so convincingly that no one can tell the difference. To Worzel, "a difference that makes no difference is no difference." By this argument, if machines seem intelligent, then for all practical purposes they are intelligent. In this vein, others argue that human intelligence is predominantly shaped by environmental stimuli and only arbitrarily related to genetics. On this view, the brain is a very complex computing machine responding in a highly sophisticated but mechanical manner to its environment. If robots can only simulate intelligence, the argument goes, then so can we: it makes no difference whether we are intelligent or merely seem so. What seems to matter, then, is exactly what Major Motoko Kusanagi points to: the way we are treated.

Searle offers an opposing perspective: that which is simulated cannot also be real. At this point in time, AI can only simulate the human brain and consciousness. That is, robotic intelligence is capable only of what Searle calls "weak AI" (as opposed to "strong AI," in which machines possess the full range of human cognitive abilities, including self-awareness and sentience). Referring to non-sentient AI, Searle's Weak AI Hypothesis states that robots, which run on digital computer programs, can have no conscious states, no mind, no subjective awareness, and no agency. Weak AI cannot experience the world qualitatively; although such systems may exhibit seemingly intelligent behavior, they are forever limited by the presence of a "brain" but the lack of a mind. For Searle, simulated consciousness is very different from the real thing, and AI cannot and should not compare and compete with our human understanding of consciousness.

However, Wallach and Allen suggest that a machine can be a genuine cause of harm to individuals and communities, indicating a distinct efficaciousness and a need for regulation. They argue that if autonomous machines programmed to automate and regulate power grids, monitor financial transactions, make medical diagnoses, and fight wars fail to behave within moral parameters, the consequences could be devastating. As machines become progressively more autonomous, it may become increasingly necessary for robots to employ ethical subroutines that evaluate their possible actions before carrying them out. The more autonomous robots are, the less they are simple tools in the hands of other actors (such as the manufacturer, the operator, the owner, or the user, who have so far borne the legal responsibility). When harm cannot be traced back to a specific person or organization, significant legal and philosophical questions arise.
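Wallach and Allen do not prescribe a particular mechanism, but the idea of an ethical subroutine can be pictured in a few lines of code. The sketch below is purely illustrative and assumes nothing about any real system: a hypothetical controller checks each proposed action against explicit, invented constraints (here, rules about expected harm and reversibility) and executes it only if every constraint is satisfied.

```python
# A minimal, hypothetical sketch of an "ethical subroutine": proposed actions
# are checked against explicit constraints before execution. All names, rules,
# and thresholds here are invented for illustration only.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    expected_harm: float   # estimated harm on an arbitrary 0..1 scale
    reversible: bool

# A constraint inspects a proposed action and returns True if it is permitted.
Constraint = Callable[[Action], bool]

def no_serious_harm(action: Action) -> bool:
    # Hypothetical rule: block anything whose estimated harm crosses a threshold.
    return action.expected_harm < 0.2

def prefer_reversible(action: Action) -> bool:
    # Hypothetical rule: harmful actions must at least be reversible.
    return action.reversible or action.expected_harm == 0.0

def ethical_gate(action: Action, constraints: List[Constraint]) -> bool:
    """Return True only if every constraint permits the action."""
    return all(check(action) for check in constraints)

if __name__ == "__main__":
    proposed = Action(name="shed_load_on_grid_sector_7", expected_harm=0.1, reversible=True)
    if ethical_gate(proposed, [no_serious_harm, prefer_reversible]):
        print(f"Executing: {proposed.name}")
    else:
        print(f"Blocked by ethical subroutine: {proposed.name}")
```

However such constraints are written, the legal question raised above remains: when the gate fails and harm results, who bears the responsibility?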

Figuring out the rights and responsibilities of intelligent robots entails explicit consideration of what intelligence means, what intelligence indicates, and what, if anything, separates humans from machines. Such considerations give rise to a central and longstanding question: what makes humans "human"? Perhaps what makes humans "human" is how we are treated; perhaps it is how we treat other beings; perhaps human intelligence, like machine intelligence, is mere simulation, a product of extrinsic shaping forces. Perhaps distinguishing "human" intelligence from "machine" intelligence will eventually lose meaning. After all, "a difference that makes no difference is no difference," right?

 

Bio

Sebastian Trew holds a Master's degree in Human Rights. His thesis considered the need to address intelligent robots and human rights together, for the safety of humanity. Sebastian is a PhD student at the Australian National University. His research centers on robots and liability, grounded in sociological underpinnings.
