
Who should be responsible for policing the development of artificial intelligence?

Posted by: Robert Stokes
25/10/2016

 

We’ve all seen them: the films that predict mankind perishing from its own technological advancements. Terminator, I, Robot, A.I., Blade Runner and 2001: A Space Odyssey, to name a few, all depict the demise of the human race at the hands of artificial intelligence, with an uprising of droids, cyborgs and robots.

So, how far away are we really from this scenario? Consider that much of the technology that has appeared in science fiction TV and film over the years has already been successfully developed - from flat-screen TVs, video calls, mobile phones and iPads to 3D printers and virtual reality glasses - with more constantly arriving, and science fiction is quickly turning into science fact. Tesla’s Elon Musk has declared that AI is the biggest threat to the survival of the human race. He’s predicting an army of HAL 9000s rather than of Johnny 5s.

Right now, however, most people’s concern is focused on robots replacing humans in many job roles. The World Economic Forum is predicting that AI will wipe out over 5.1 million jobs in the next five years, specifically lower-skilled and non-creative roles.

So the question is: who should be responsible for policing these developments, and how far should they be allowed to go? MPs on the Science and Technology Committee are calling for careful scrutiny of the possible ethical, legal and societal impact of this developing technology. Its chairwoman, Dr Tania Mathias, has declared that the government has so far failed to show any leadership on the issue and should recognise its responsibilities. It has been suggested that the government needs to set up a “Commission on AI” in order to identify the potential challenges and rewards of AI. We don’t know how quickly or how far this technology will develop, or what the consequences could be, but we need to be prepared. Our education and training systems need to be flexible enough to deal with the changes and demands. Technology develops fast, and this cannot be a repeat of the introduction of coding into the curriculum, which took the government decades and has left us with a digital skills gap. Further, as a public body, the government should consider the impact in an objective light, rather than leaving it up to giant corporations who may have a hidden agenda for this technology.

Google, unsurprisingly, are at the forefront of the AI revolution, and although they are offloading Boston Dynamics, the robotic hardware maker they purchased only in 2013, Google are still investing vast amounts of money and resources into AI. DeepMind, a Google initiative, has over 250 researchers developing AI at its London headquarters. Together with Facebook, Amazon, IBM and Microsoft, DeepMind has created the Partnership on AI, a group aiming to address concerns about where the technology is heading.

The ethical impact needs to be addressed by a range of institutions in order to get an objective view of the development of AI and to ensure that it will be used for the “common good”. The ethics of AI were first considered back in 1942 by sci-fi author Isaac Asimov. The Three Laws of Robotics, or Asimov’s Laws, appeared in his short story Runaround and are presented as being quoted from the “Handbook of Robotics, 56th Edition, 2058 A.D.” The laws are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov embedded these laws into his robots’ AI as a safety feature which does not have an override. His stories are based on robots behaving in counter-intuitive ways as an unintended consequence of how they applied the three laws. Later on, Asimov added a fourth, or “zeroth”, law to precede all the others: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Other authors have adopted and often referenced the laws in their own books, as well as in film and TV. Google has also developed its own set of concerns, based on the worry that AI won’t so much take over the world as possibly, and accidentally, damage a human. They are as follows:
1. Avoiding negative side effects – What if, while performing its task, the robot harms or damages something in its path?
2. Avoiding reward hacking – If the robot is programmed to enjoy its task, will it behave counter-productively in order to keep doing that task? (A toy sketch of these first two concerns follows the list.)
3. Scalable oversight – How much decision-making should a robot have?
4. Safe exploration – How do you ensure that there are limits to its curiosity and that the robot doesn’t begin testing things outside of its duty?
5. Robustness to distributional shift – How do robots respect the space they’re in? How can they tell that they are doing the job in a different environment?
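
To make the first two concerns concrete, here is a minimal sketch in Python, purely illustrative and not drawn from Google’s research; the reward function, its arguments and the penalty value are all invented for the example. It shows how a robot scored only on task progress has no incentive to avoid breaking things along the way, and why a score that can be driven up without doing the real job invites reward hacking.

# Hypothetical illustration only - not Google's actual work.
# A toy reward function showing why "negative side effects" and "reward hacking" matter.
def reward(task_progress: float, damage_caused: float, side_effect_penalty: float = 0.0) -> float:
    """Score one step of the robot's work.

    With side_effect_penalty = 0, the robot is judged purely on progress, so it has
    no reason to avoid knocking things over on the way (a negative side effect).
    A robot that finds a way to rack up "progress" without actually doing the job
    is reward hacking: the score rises while the real task does not get done.
    """
    return task_progress - side_effect_penalty * damage_caused

# Two robots complete the same amount of the task, but one smashes a vase doing it.
print(reward(task_progress=1.0, damage_caused=0.0))                            # 1.0
print(reward(task_progress=1.0, damage_caused=1.0))                            # 1.0 - both look equally "good"
print(reward(task_progress=1.0, damage_caused=1.0, side_effect_penalty=10.0))  # -9.0 - penalising damage changes the incentive

The toy example simply shows that what a robot is rewarded for and what we actually want it to do can quietly come apart, which is the gap Google’s list of concerns is trying to close.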

However, these are more questions than laws, because AI is not yet developed enough to be able to work within laws. Currently, AI has fairly basic, narrow and specific roles, such as voice recognition or playing the board game Go. But DeepMind is currently working on developing an artificial hippocampus, the part of the brain responsible for memory and creativity. This artificial hippocampus could be the foundation for a software brain which could eventually animate a physical robot. So we must ensure that these artificial brains are being developed to function correctly and within ethical boundaries.

For the immediate future, however, the focus needs to be on two main aspects of this developing technology: firstly, ensuring that the robots we currently have, and those we are currently developing, are smart enough not to accidentally kill or maim humans; and secondly, being proactive in creating flexible and adaptable regulations before the technology is here, constructed by a balanced and objective board with the common interest at its heart.



