
Recently a man in Bellevue, Washington, finding that his car would not go through some six inches of snow, became enraged and attacked the automobile. He broke out the car’s windows with a tire iron and emptied a revolver into its side. “He killed it,” said police. “It’s a case of autocide.” Such wanton acts of violence are not limited to Coke machines, photocopiers, public telephones, and other gizmos that steal our dimes and quarters. In 1979, a sheriff in California shot a large mainframe computer for uncontrollably spewing out arrest records. As if to even the score, that same year a one-ton Litton Industries mobile robot stalked and killed a human warehouse worker who trespassed on the machine’s turf during business hours. The worker’s family sued Litton and was awarded a $10-million judgment, but the surly robot got off with a slap on the sensor.

Under present law, robots are just inanimate property without rights or duties. Computers aren’t legal persons and have no standing in the judicial system. As such, computers and robots may not be the perpetrators of a felony; a man who dies at the hands of a robot has not been murdered. (An entertaining episode of the old Outer Limits TV series, entitled “I, Robot,” involved a court trial of a humanoid robot accused of murdering its creator.)

But blacks, children, women, foreigners, corporations, prisoners, and Jews have all been regarded as legal nonpersons at some time in history. Certainly any self-aware robot that speaks English and is able to recognize moral alternatives, and thus make moral choices, should be considered a worthy “robot person” in our society. If that is so, shouldn’t they also possess the rights and duties of all citizens? It may be an idea ahead of its time. People have been jailed for kidnapping or wrecking computers, but it’s the rights of humans, not machines, that are being protected. Trashing your own computer maliciously or another’s accidentally is no crime.
When a computer forges checks using bogus data supplied by human accomplices, the people, not the machine, are charged with the crime. But how long can the law lag behind technology? Knowledgeable observers predict consumer robotics will be a multibillion-dollar growth industry by 2000. Clever personal robots capable of climbing stairs, washing dishes, and accepting spoken commands in plain English should be widely available by 2005. By the turn of the century the robot population may number in the millions.

By 2010, most new homes will offer a low-cost domestic robot option. This “homebot” will be a remote-controlled peripheral of a computer brain buried somewhere in the house. Homebot software will include: (1) applications programs to make your robot behave as a butler, maid, cook, teacher, sexual companion, or whatever; and (2) acquired data such as family names, vital statistics and preferences, a floor map of the house, food and beverage recipes, past family events, and desired robot personality traits. If a family moves, it would take its software with it to load into the domestic system at the new house. The new homebot’s previous mind would be erased and overwritten with the personality of the family’s old machine.

If homebots became members of households, could they be called as witnesses? In the past, courts have heard testimony from “nonhumans” during witch trials in New England, animal trials in Great Britain, and other cases in which animals, even insects, were defendants. But homebots add a new twist. Since the robot’s mind is portable, Homebot Joe might witness a crime but Homebot Robbie might actually testify in court, if Joe’s mind has, in the interim, been transferred to Robbie. (Is this hearsay?) Further, a computer memory that can be altered is hardly a reliable witness. Only if homebots have tamperproof “black box recorders,” as in commercial jetliners, might such testimony be acceptable to a court.
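The two-part homebot software described above, and the mind transfer that raises the hearsay puzzle, can be sketched in a few lines of code. Everything here is hypothetical illustration: the class names, fields, and JSON format are inventions for the sketch, not any real product.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class HomebotMind:
    """A portable 'mind': (1) applications programs, (2) acquired family data."""
    applications: list = field(default_factory=list)   # e.g. "butler", "cook"
    acquired_data: dict = field(default_factory=dict)  # names, floor map, recipes...

    def export(self) -> str:
        # Serialize the mind so the family can carry it to a new house.
        return json.dumps(asdict(self))

class Homebot:
    def __init__(self, name: str):
        self.name = name
        self.mind = HomebotMind()

    def load_mind(self, packed: str) -> None:
        # The new homebot's previous mind is erased and overwritten.
        self.mind = HomebotMind(**json.loads(packed))

# Homebot Joe "witnesses" something; his mind later moves into Robbie.
joe = Homebot("Joe")
joe.mind.applications.append("butler")
joe.mind.acquired_data["witnessed"] = "crime on Tuesday"

robbie = Homebot("Robbie")
robbie.load_mind(joe.mind.export())
```

After the transfer, Robbie holds Joe's memories verbatim, which is exactly why the text asks whether Robbie's testimony would be hearsay, and why an alterable memory needs a tamperproof recorder to be trustworthy.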
Some futurists have already begun to devise elaborate codes of ethics for robots. The most famous are science-fiction writer Isaac Asimov’s classic Three Laws of Robotics. First: A robot may not injure a human being, or, through inaction, allow a human being to come to harm. Second: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

But even with Asimov’s three laws in place there’s lots of room for mischief. While corrupting a robot’s “laws” may someday be deemed a serious felony, a human could order a robot to steal or to destroy property, other robots, or even itself, since the machine must dutifully obey under Asimov’s Second Law. With those kinds of abuses possible, questions of “machine rights” and “robot liberation” will surely arise in the future.

Twenty years ago Hilary Putnam at Massachusetts Institute of Technology was the first to address the issue of the civil rights of robots. “It seems preferable to me,” Putnam concluded after lengthy philosophical analysis, “that discrimination based on the softness or hardness of the body parts of a synthetic organism seems as silly as discriminatory treatment of humans on the basis of skin color.” More recently Richard Laing, formerly a computer scientist at the University of Michigan, has contemplated the day when human-level intelligent machines exhibit complex behaviors including altruism, kinship, language, and even self-reproduction. “If our machines attain this level of behavioral sophistication,” Laing reasons, “it may finally not be amiss to ask whether they have not become so like us that we have no further right to command them for our own purposes, and so should quietly emancipate them.”

The case law on robots’ rights is pretty thin but not, as one might expect, totally nonexistent.
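The Three Laws are really a priority-ordered rule system, and the Second-Law loophole described above falls straight out of that ordering. A minimal sketch, assuming a hypothetical `action` dictionary of predicted consequences (the field names are inventions for illustration, not anything from Asimov):

```python
def permitted(action: dict) -> bool:
    """Evaluate a proposed action against Asimov's Three Laws, in priority order."""
    # First Law: a robot may not injure a human, nor through inaction allow harm.
    if action.get("harms_human") or action.get("inaction_allows_harm"):
        return False
    # Second Law: obey human orders unless they conflict with the First Law.
    if action.get("ordered_by_human"):
        return True  # obedience is mandatory, even for self-destructive orders
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.get("harms_self")

# The loophole: a human orders the robot to destroy itself. The Second Law
# outranks the Third, so the robot must comply.
print(permitted({"ordered_by_human": True, "harms_self": True}))   # True
# But an order to harm a human is blocked by the First Law.
print(permitted({"ordered_by_human": True, "harms_human": True}))  # False
```

Because the checks run strictly in order, any order that clears the First Law is binding before self-preservation is ever consulted, which is precisely the abuse the text worries about.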
One criminal case, widely reported in the popular press under the sensational banner “Computer Raped by Telephone,” involved a professional programmer who invaded the computer of a competitor and, using a telephone link and several secret passwords, stole a copy of a valuable proprietary program. During the course of the investigation, the question arose whether a search warrant could be issued ordering the computer to retrieve evidence of the programmer’s invasion. The first such warrant was issued in Ward v. Superior Court of California (3 C.L.S.R. 206 [1972]). This was the first time a computer had ever been hauled in for questioning.