Towards Responsible Robotics in the Digital Economy

Imagine that your elderly relative lives at home with her assistive care robot, which is tasked with helping her with day-to-day activities. Then one day you receive a call to say that your relative has been found unconscious on the floor, with the robot bumping aimlessly to and fro.

Happily, your relative is found to be fine – but what happened, and how can you find out?

This is one possible scenario for the work that I was delighted to present to the Bavarian Research Institute for Digital Transformation in September. Powerful technologies are being developed that have the potential to transform society, and investigators in all fields are under growing pressure to consider and reflect on the motivations, purposes and possible consequences associated with their research. This pressure comes from the general public, civil society and government institutions, and of course from the media. Hardly a day goes by when we do not hear about the negative effects of a technological innovation on society. It is becoming impossible (and also undesirable) for developers and designers to ignore the societal consequences of some innovations.

‘Responsible’ innovation initiatives across policy, academia and legislation emerged nearly two decades ago. These initiatives respond to the fact that many of the problems we face are a legacy of our previous failures to consider the potential negative impacts of innovations. And, increasingly, the public and media are expressing concerns over the negative consequences of innovation – whilst, all the time, technologies are becoming more potent.

Responsible innovation (RI) is more than a nice idea: it is a practical method that focuses on anticipatory governance, inclusion, reflection and responsiveness. Its core aim is to involve all relevant stakeholders, including the public, and to encourage those stakeholders to anticipate and reflect on the consequences of scientific and technological innovations. This is the nub of its definition as ‘doing science and innovation with society and for society’. It includes society ‘very upstream’ in the processes of research and innovation – that is, at the point where the innovation is first conceived – to align the outcomes with the values of society.

Crucially, this is not a once-and-for-all tickbox exercise but an ongoing, iterative process, looking at how new technologies meld with old, how people adapt, and how we as researchers and innovators adapt to emerging knowledge of how the technology is used in the world. In this way, RI is a space for creativity, for confidence, and for serendipity. It does not predefine what are or are not the “right” impacts from research, but it provides us with a framework that can help us decide what those impacts might be and how we might realise them.

Of course, many challenges remain over how to embed responsibility into processes of technological design and development. Furthermore, as the pace of innovation continues to accelerate, the tension between profit and responsibility grows stronger.

The scenario I described at the opening is part of my RoboTIPS project, which picks up some of these challenges. RoboTIPS is a collaboration between Oxford and the Bristol Robotics Lab. In our investigations we are focussing on social robots – broadly speaking, robots that interact with humans on a daily basis (for example driverless cars and other autonomous vehicles, companion robots, toy robots and so on).

We have defined Responsible Robotics as:

The application of Responsible Innovation in the design, manufacture, operation, repair and end-of-life recycling of robots, which seeks the most benefit to society and the least harm to the environment.

The research draws on the expertise of the Bristol Robotics Lab to examine how to develop systems that are accountable and explainable to a range of stakeholders – particularly when the technology appears to go wrong – with the ultimate aim of building trustworthy systems.

To examine this idea of trustworthy systems, we looked to another industry that (until recently perhaps) enjoyed a significant level of public trust: the airline industry. The reason commercial aircraft are so safe is not just good design; it is also the tough safety certification processes and, when things do go wrong, robust social processes of air accident investigation. We suggest that trust in and acceptance of air travel, despite its catastrophes, is in part bound up with aviation governance, which has cultural and symbolic importance as well as practical outcomes. A crucial aspect of that cultural and symbolic role is rendering the tragedy of disaster comprehensible through the process of investigation and reconstruction.

Returning to our original example of a malfunctioning care robot: although this is a fictional scenario, it could happen today. If it did, a user would currently be reliant on the goodwill of the robot manufacturer to discover what went wrong. It is also entirely possible that neither the robot nor the company is equipped with the tools and processes to facilitate an investigation. It is startling that, although these social robots are already interacting with humans in unplanned-for contexts, there are no established processes for robot accident investigation.

Hence, in our 2017 paper, Professor Alan Winfield and I argued the case for an Ethical Black Box (EBB). Our proposition is very simple: that all robots (and some AIs) should be equipped with a standard device that continuously records a time-stamped log of the internal state of the system, key decisions, and sampled input or sensor data. In effect, this is the robot equivalent of an aircraft flight data recorder. Without such a device, finding out what the robot was doing, and why, in the moments leading up to an accident is more or less impossible. In RoboTIPS we are developing and testing a model EBB for social robots.
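To make the idea concrete, the sketch below shows one way such a log might be structured. It is a minimal, hypothetical Python example of my own devising, not the actual EBB format being developed in RoboTIPS; all class, field and file names here are illustrative assumptions.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Any, Dict
import json

# Hypothetical sketch of a single Ethical Black Box (EBB) record.
# Field names are illustrative only, not the RoboTIPS specification.
@dataclass
class EBBRecord:
    timestamp: str                  # time-stamped entry (UTC, ISO 8601)
    internal_state: Dict[str, Any]  # snapshot of the robot's internal state
    decision: str                   # key decision taken at this step
    sensor_sample: Dict[str, Any]   # sampled input / sensor data

class EthicalBlackBox:
    """Appends time-stamped records to an append-only log file."""

    def __init__(self, path: str = "ebb_log.jsonl"):
        self.path = path

    def record(self, internal_state, decision, sensor_sample):
        entry = EBBRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            internal_state=internal_state,
            decision=decision,
            sensor_sample=sensor_sample,
        )
        # One JSON object per line, so an investigator can later replay
        # the moments leading up to an incident in order.
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(entry)) + "\n")

# Example: a care robot logging one control cycle.
ebb = EthicalBlackBox()
ebb.record(
    internal_state={"mode": "assist", "battery": 0.72},
    decision="navigate_to_user",
    sensor_sample={"lidar_min_range_m": 0.4, "speech_detected": False},
)
```

An append-only, record-per-line format of this kind is one plausible design choice, because it preserves a complete, ordered history that an investigator can step through after an incident – much as a flight data recorder is read after an air accident.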

However, air accident investigations do not rely solely on the evidence from the aircraft’s flight recorder. They are social processes of reconstruction that need to be perceived as impartial and robust, and which may serve as a form of closure so that aviation does not acquire an enduring taint in the public’s consciousness. We anticipate very similar roles for investigations into robot accidents.

Crucially, it is not the black box on its own that forms the safety mechanism; it is its inclusion within a social process of accident/incident investigation. An investigation into a robot accident will draw on EBB information and also information from human witnesses and experts to determine the reason for an accident – and lessons to be learnt from it.

Thus, we aim to develop and demonstrate both technologies and social processes (and ultimately policy recommendations) for robot accident investigation. And the whole project will be conducted within the framework of Responsible Research and Innovation; it will, in effect, be a case study in Responsible Robotics.

The team will work with business designers and partners to co-develop the requirements for the EBB, while at the same time seeking to understand how these designers conceive of responsibility in their practices.

The potential impact of this work is extensive. The EBB can change how we develop products in social robotics and potentially beyond. The work could lead to new opportunities for companies to design and manufacture standard ‘black boxes’ for each class of social robot.

It is a fundamental contention of this work that if we increase the transparency of how such technologies make decisions, and take users’ obligations and lived experiences seriously in the design of these tools – and are seen to be doing so – then we will increase trust in the technologies. The reverse, though, is also the case: if a company does something societally unacceptable, it could have an adverse effect not only on the company but on the whole area of development.

It is in the end, when things go wrong, that the responsibilities throughout the chain of creating, commissioning and deploying social robots will take centre stage, albeit retrospectively. The proposed case studies are a vehicle for understanding what these chains of responsibility will look like when a harmful incident takes place, and they provide an unparalleled opportunity to simulate ‘disaster’ in a safe way so as to understand how to manage its consequences.

This article was first published on the website of the Bavarian Research Institute for Digital Transformation.