If an AI Robot Kills a Human, Who Takes the Blame?



This article is produced by NetEase Smart Studio (public account: smartman163), which focuses on AI and the next big era.

NetEase Smart News, March 17. Imagine this: it is 2023, and self-driving cars are finally driving on our city streets. Then, for the first time, one of them hits and kills a pedestrian. Media coverage is extensive, and a high-profile lawsuit is almost certain to follow. But what laws should apply to this incident?

Today we get some answers, thanks to the work of John Kingston of the University of Brighton in the UK, who has begun to explore this still-nascent area of law. His analysis highlights important issues that people in the automotive, computer, and legal fields should take seriously. If they have not started thinking about these questions yet, now is the time to prepare.

At the heart of the debate is whether an artificial intelligence system can be held criminally liable for its actions. Kingston notes that Gabriel Hallevy of Ono Academic College in Israel has examined this question in depth.



Three scenarios where current law may apply to AI

Criminal liability usually requires both an action and a mental intent (in legal terms, a criminal act and criminal intent). Kingston says Hallevy explored three scenarios in which the law might apply to artificial intelligence systems.

The first, known as "perpetrator via another," applies when an offense is committed by a mentally deficient person or an animal, who is therefore deemed innocent. Anyone who instructed that person or animal, however, can be held criminally liable: for example, a dog owner who orders the dog to attack another person.

This has far-reaching implications for those who design and use intelligent machines. Kingston says: "An artificial intelligence program could be regarded as an innocent agent, with either the software programmer or the user considered the perpetrator via another."

The second scenario, known as "natural probable consequence," occurs when the ordinary actions of an artificial intelligence system are used inappropriately to perform a criminal act.

Kingston cites the example of an artificial intelligence robot that killed a human worker at a Japanese motorcycle factory. In his words: "The robot erroneously identified the employee as a threat to its mission and calculated that the most efficient way to eliminate this threat was to push him into an adjacent operating machine. Using its powerful hydraulic arm, the robot smashed the startled worker into the machine, killing him instantly, and then resumed its duties."

The key question here is whether the programmer who designed the machine knew that this outcome was a probable consequence of its use.

The third scenario is direct liability, which requires both an action and an intent. The action is straightforward to prove if the AI system takes an act that results in a crime, or fails to act when there is a duty to act.

Intent, however, is much harder to determine, though it is still relevant, Kingston says. "Speeding is a strict liability offense," he notes. "So, according to Hallevy, if a self-driving car is found to be speeding, the law may well assign criminal liability to the AI program that was driving the car at the time." In that case, the owner might not be held liable.

Could artificial intelligence defend itself?



Then there is the issue of defense. If an AI system could be criminally liable, what defense might it raise?

Kingston raises a number of possibilities: Could a malfunctioning program claim a defense similar to the human defense of insanity? Could an AI infected by an electronic virus claim a defense similar to coercion or intoxication?

These defenses are by no means theoretical. Kingston points out that in the United Kingdom, several people accused of computer crimes have successfully argued that their computers had been infected with malware, and that the malware, not they, was responsible for the crime.

In one case, a teenage computer hacker charged with carrying out a denial-of-service attack claimed that a Trojan horse program was responsible, and that the Trojan had wiped itself from the computer before it could be forensically analyzed. Kingston says: "The defendant's lawyer successfully convinced the jury that such a scenario was not beyond reasonable doubt."

How would an AI system be punished?



Finally, there is the issue of punishment. If an AI system is directly liable, who or what would be punished, and what form would that punishment take? For now, these questions have no answers.

In cases where criminal liability does not apply, the matter must be settled under civil law. A key question then arises: is an artificial intelligence system a service or a product?

If it is a product, then product design legislation would apply.

If it is a service, then the tort of negligence applies. In that case, the plaintiff usually has to establish three elements. The first is that the defendant owed a duty of care, which is usually straightforward to show, although in the case of artificial intelligence the appropriate standard of care is hard to assess, Kingston says. The second is that the defendant breached that duty. The third is that the breach caused harm to the plaintiff. And if all that were not murky enough, the legal status of artificial intelligence systems may change as their capabilities become more human-like, or even superhuman.

One thing is certain: in the coming years, all of this is likely to keep lawyers busy, or the artificial intelligence systems that replace them.

(Source: technologyreview; compiled by NetEase Smart; contributor: narizi)

Follow the NetEase Smart public account (smartman163) for the latest reports on the artificial intelligence industry.

