Artificial Intelligence

1 The Ethics of AI

Learn It

  • Isaac Asimov was a prolific writer of science fiction stories.

[Image: I, Robot ("Runaround")]

  • One of the themes that frequently occurred in his novels and short stories was the interaction between humans and robots.
  • A central plot device in many of his stories was The Three Laws of Robotics, which are:
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Research It

  • Use this Wikipedia article to learn about an ethical dilemma called The Trolley Problem.
  • MIT designed a Moral Machine platform to gather human perspectives on moral decisions made by machines, such as self-driving cars. Have a go and also see how other people respond to the same situations.

Badge It - Silver - Learning Strand - Information technology

  • Google, Tesla and many other car manufacturers are currently working on self-driving cars.
  • These cars will be controlled by AI programs.
  • Now imagine this situation.

Your car is driving at a moderate speed down a road. It turns a corner and there are five pedestrians standing in the road. Even if the car were to apply its brakes, it would still hit them, so the only way to avoid killing the pedestrians is to mount the pavement. Unfortunately, there is a single pedestrian standing on the pavement who will be killed if the car chooses to swerve.

  • Should the car continue in a straight line and kill five people, or should it make a decision to intentionally kill a single individual?
  • Write down what you think the car should do, and why.
  • If it was later discovered that the car should have been driving more slowly, who is to blame? Is it the fault of the owner of the car, the manufacturer of the car, the programmer of the car, or the program?
  • Should computers be programmed to always serve humanity and therefore choose options that lead to the greatest good? (A simple sketch of such a rule is shown after this list.)
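
To make that last question concrete, here is a purely hypothetical Python sketch of a naive "greatest good" rule: pick whichever action is predicted to harm the fewest people. The action names and casualty numbers are invented for this scenario, and real self-driving cars are not programmed this simply.

    # Hypothetical sketch only: a naive "greatest good" (utilitarian) rule
    # for the scenario above. The options and numbers are invented for
    # illustration.

    def choose_action(options):
        """Pick the option predicted to harm the fewest people."""
        return min(options, key=lambda option: option["predicted_casualties"])

    options = [
        {"action": "brake and stay in lane", "predicted_casualties": 5},
        {"action": "swerve onto the pavement", "predicted_casualties": 1},
    ]

    print(choose_action(options)["action"])  # prints "swerve onto the pavement"

Notice that this rule would deliberately swerve and kill the single pedestrian; whether a machine should ever be allowed to make that choice is exactly the question above.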

Learn It

  • Consider the following facts:
    • The first recorded death attributed to a robot occurred in 1979, although the robot in question was not artificially intelligent.

[Image: SGR-A1 sentry gun]

  • Sentry guns like this one (deployed along the border of the Korean demilitarised zone) are capable of autonomously killing humans.

[Image: Predator drone firing a missile]

  • Drones like this one are remotely controlled by human pilots, and are currently in heavy use in Iraq, Pakistan and Yemen. They could easily be controlled by an AI.

Test It

  • Discuss in pairs what you think about the ethics of allowing computers to take a human life.
  • Should we be deploying more autonomous killer robots in the battlefield so that our soldiers are not put in harm's way?
  • Should the option to take a human life only ever be decided by another human?

Badge It - Gold - Learning Strand - Information technology

  • In thirty years' time, military robots capable of autonomously killing humans could be common.
  • Imagine you are the Prime Minister of the day.
  • Would you order the army to purchase such robots in the interests of defending the country, or do you think that taking a human life is a decision that only another human should make?
  • Justify your opinion as well as you can.

Learn It

  • Prior to the nineteenth century, there was no law in the United Kingdom that prevented cruelty to animals.
  • In the 1500s you would have been laughed at if you had suggested that animals had rights and should not be treated cruelly.
  • Sports such as cock fighting, bear-baiting and fox tossing were common throughout European history, and it is only relatively recently that we have decided that an animal's welfare needs to be protected by law.

Learn It

  • Ethicists today are thinking about robot rights in much the same way as people once thought about animal rights.
  • As AIs become more and more sophisticated, and able to imitate humans with greater and greater accuracy, do we have to start thinking about robot rights?

Badge It - Platinum - Learning Strand - Information technology

  • Does a truly intelligent AI (as demonstrated in the works of fiction above) have rights?
  • The European Convention on Human Rights lays down several articles detailing the rights of all people.
  • For each of the articles listed below, state whether you think that, in the future, these rights should be extended to intelligent AIs.
  • Articles - 2, 3, 4, 5, 6, 9, 14

Validate