On Wednesday 9th March, I went to a very interesting meeting of BCS Sussex at Sussex University. I was surprised to find that the meeting was free of charge, open to non-members like myself, and also that we were welcomed with an excellent buffet spread of food and drinks (including wine).
BCS Sussex is a regional branch of BCS, the Chartered Institute for IT. Formerly known as the British Computer Society, BCS is the world’s leading industry body for IT professionals with a global membership of over 75,000 people.
The lecture, entitled “Caring robots – more dangerous than killer robots?”, was given by Dr Blay Whitby, a Visiting Research Fellow at the University of Sussex. He is also a Visiting Lecturer at Imperial College London and a Visiting Professor at the Technical University of Vienna.
Blay Whitby is a technology ethicist, philosopher and lecturer, specialising in computer science, artificial intelligence and robotics. He is concerned with the importance of widening engagement in science and increasing levels of debate in ethical issues, and is a member of the BCS Strategic Ethics Forum.
His book Artificial Intelligence: A Beginner’s Guide is available from Amazon.
The published description of the talk, which I read beforehand on the BCS website, sums it up very accurately. Blay presented his ideas and views with such enthusiasm and passion that it was a most enjoyable evening. Here is the original synopsis:
It might seem, at first glance, that military robotics raises many more ethical worries than does the use of robots in caring roles. However, this superficial impression deserves revision for a number of reasons. Firstly, there is overwhelming evidence that robots are a very effective tool with which to manipulate human emotional responses. It might theoretically be possible to do this only in ethical ways of benefit to individuals and society. Unfortunately there has been little or no discussion of exactly what these ways might be. For the caring robots now being developed by the private sector there is no guidance whatsoever on these issues. We can therefore expect at best, the manipulation of emotions in order to maximise profits. At the worst we can expect dangerous mistakes and disreputable deceit.
There has also been very little discussion outside the specialist field of robot ethics of just which caring roles are suitable for robots and which roles we might wish, on good reasoned grounds, to reserve for humans. This is surely a matter that deserves widespread public debate.
Finally, there is now a large number of international conventions, legislation, and rules of engagement which directly impact on the development and deployment of military robots. In complete contrast, the field of social, domestic, and caring robots is without any significant legislation or ethical oversight. Caring, not killing, is now the wild lawless frontier of robotics.
When using the term “caring robots”, Blay covered the following four themes, all of which need proper controls; surprisingly, at present they have none. I have done a little research and found some interesting videos and information online which I think illustrate these areas of concern rather well:
(1) Smart Homes (especially when these are intended for elderly inhabitants)
Here is a video which shows the way things are going, although it doesn’t really tempt me to implement much of this technology in my own home:
The smart home of our dreams is almost here:
(2) Cyber Therapy
This is a link to the UCL website for “Student Psychological Services”. The computer program CALM (Computer Aided Lifestyle Management) is “an online multimedia programme available to all students at University College London. It uses interactive self-help tools to identify, motivate and educate you around issues such as anxiety, depression, insomnia, stress and substance misuse. Once you have identified any issues, CALM can help you to deal with your thoughts and feelings associated with them.” The programme can also be accessed by Sussex University students.
(3) Robot Nannies
Apparently, in Japan, it is already legal to leave your child in the care of a robot. Thankfully this is not yet the case in Europe. The following is an interesting video from 2011 about the prospects of future robot child care:
The heading “Robot Nannies” can also include care of the elderly. Japan is leading the world with research into ways of caring for its growing elderly population. One robot mentioned by Blay was “Robear”. Robear is an assistant at present, helping with the heavy lifting side of elder and patient care, but it is easy to see how, with a little more work, much more can be achieved, and much could go wrong.
(4) Affective Game Engines
Here is an example, from December 2012, of how physiological signals can be made to affect computer game content:
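To make the idea of an affective game engine a little more concrete, here is a minimal sketch (the function names and the biofeedback rule are my own illustrative assumptions, not taken from the talk or the video) of how a heart-rate reading might be mapped to game difficulty:

```python
# Minimal sketch of an "affective" difficulty controller.
# Assumption: a calm player can take more challenge, while a
# stressed player should have the pressure eased off. The resting
# and maximum heart rates here are illustrative placeholders.

def difficulty_from_heart_rate(bpm, resting=65, max_hr=180):
    """Map a heart-rate reading (beats per minute) to a difficulty
    level in [0.0, 1.0], where 1.0 is the hardest setting."""
    # Normalise the reading to an arousal score in [0, 1].
    arousal = (bpm - resting) / (max_hr - resting)
    arousal = max(0.0, min(1.0, arousal))  # clamp out-of-range readings
    # Invert: calmer player -> ramp the challenge up,
    # stressed player -> back the challenge off.
    return 1.0 - arousal

# Example: scale the number of on-screen enemies by difficulty.
for bpm in (70, 110, 150):
    enemies = round(10 * difficulty_from_heart_rate(bpm))
    print(bpm, "bpm ->", enemies, "enemies")
```

A real affective engine would of course smooth the sensor signal over time and calibrate per player, but even this toy version shows how directly a physiological measurement can steer what the game does, which is exactly why Blay argues the area needs ethical oversight.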
IMPROVEMENTS MUST BE MADE
I’m sure you will agree that we are at the very early stages of development with these four topics, and also that improvements must be made. Blay stated that there has been a tradition of amateurism in IT, and that the BCS is actively trying to change this.
READ ALAN TURING
I will finish with two links to important further reading which I found as a result of notes taken during the evening. First of all, Blay asked us if we had read the original Alan Turing paper from 1950 entitled Computing Machinery and Intelligence. He recommended it as a very readable paper which, of course, links in with the recent successful films Ex_Machina and The Imitation Game. If you would like to download a PDF file of Alan Turing’s paper, click here: http://www.csee.umbc.edu/courses/471/papers/turing.pdf
FIVE ETHICAL RULES INSTEAD OF ASIMOV’S THREE LAWS
In September 2010, Blay Whitby was one of a group of experts drawn from the worlds of technology, industry, the arts, law and social sciences, who met at a Robotics Retreat to discuss robotics, its applications in the real world and the huge amount of promise it offers to benefit society. They considered Isaac Asimov’s famous three laws of robotics and concluded that they were not written to be used in real life, mainly because they simply would not work in practice. They came up with five ethical rules for robotics and have invited comments and discussion points to be sent to them. You can read more at the following link, together with their suggested five rules, and send feedback to the experts if you wish.