By Margherita Nulli, ORBIT Project Officer

25 June 2018

Building a better IT future

Information and Communication Technologies (ICTs) have raised ethical concerns for almost as long as they have existed. Professional bodies like the BCS, ACM or IEEE have therefore developed and implemented ways of addressing these concerns, such as codes of ethics or professional responsibilities. Despite these activities, the ethical issues arising from ICTs are now more prominent than ever before. If you open a newspaper you will almost invariably find articles about ethical issues directly linked to technology. Fake news, election meddling, and concerns about robots and AI are among the high-profile issues. This is caused by the continuing advance of ICT in most aspects of our social, organisational and personal lives. Human beings rely on these technologies for numerous aspects of daily life, such as education, communication, entertainment, travelling or storing personal data. Their pervasiveness in everyday life continues to increase, and at the same time the pace of technological development continues to accelerate. While you may find it exciting to think that one day technologies will be able to replace humans in many tasks considered mundane, this also poses the question of what would happen if robots were able to do everything better than humans.

It is also worth pointing out that, once a technological product leaves the lab, it is often used in ways that were not originally predicted by its designers. Take the example of affective devices: robots that can interpret signals such as a person’s heartbeat, posture or tone of voice to infer their mood and respond accordingly. Such devices are mainly developed with beneficial purposes in mind, such as helping people, particularly older people, to combat loneliness. Even though the original idea is good, concerns arise about how such devices will be used: what happens to all the data about its owner that the robot stores? Who has access to it, and how can it be used? And what would happen if people started treating the robot as a real human being?

If we want to build a better IT future we need to ask ourselves a key question: how can we maximise the benefits of a technology while minimising its risks? There are several answers to this question, and one of the most prominent is Responsible Research and Innovation.

Responsible Research and Innovation: What is it?

Responsible Research and Innovation (RRI) is a process that aims to ensure that the outcomes and processes of research are socially desirable, acceptable and sustainable. It highlights important questions: what are the benefits of the research, what are its risks, what are the alternatives, and who is responsible for its outcomes? RRI holds that the role of researchers is to take a long-term perspective and consider what will happen to their products once they leave the laboratory. RRI can help researchers obtain better outputs and minimise the side effects of their research by helping them anticipate the effects of their products, reflect on the purpose of their research, engage with stakeholders, and act to influence its direction. These aspects form the AREA 4P framework, which is the core of RRI in the UK.

The AREA 4P framework

The AREA 4P framework defines a structure and a process for embedding RRI into research and innovation projects. The framework consists of a set of questions that help researchers consider all aspects of their research and evaluate whether it is being done responsibly (see the ‘Further reading’ box for references).

The AREA framework helps scientists harness their creativity by asking them to anticipate the impact of their work, reflect on purpose and motivation, engage with stakeholders and act accordingly.

In order to consider these aspects comprehensively, you need to take into account the process of doing research and development, the products and outcomes (including unintended ones), the purpose, and the people involved.

The AREA 4P framework takes the form of a matrix that allows researchers to work through key questions to determine whether their work is being done responsibly.

It has to be emphasised that RRI is not a panacea and cannot perform miracles. RRI cannot predict the future or prevent all side effects arising from a technology, nor does it remove the need to deal with problems of research ethics. It can, however, help researchers reflect on the strengths and weaknesses of a project (or a tool) and stimulate a better-informed conversation about possible remedies.

Let’s put RRI into practice: the ORBIT project

RRI is a concept embraced by a range of research funders, including the EU and the EPSRC, which now requires researchers to demonstrate that they have implemented RRI in their projects. ORBIT, the Observatory for Responsible Research and Innovation in ICT, was created with the support of the UK’s biggest funder of ICT research, the Engineering and Physical Sciences Research Council (EPSRC). ORBIT aims to help you put RRI into practice. To do so, it offers a series of services that aim to integrate all aspects of RRI into research practice.

To return to the example of an affective device, there are many things that could be done to ensure it is developed responsibly. ORBIT could help you, for example, find resources, cases, technology descriptions and other documents to learn from good practice. A self-assessment tool can help you identify the strengths and weaknesses of your work in many respects, from foresight and engagement to gender balance and research ethics. All of these can be relevant when developing an affective device. For example, how do you decide whether the artefact could or should appear to be female or male? How can you envisage the consequences of a broad adoption of affective robots in home care settings? ORBIT can help researchers find ways to answer such questions and gain competences through training and proposal development. A key aim is to develop a community, supported by an online open-access journal where you can see other researchers’ views and publish your own.

Ethical and social issues of ICT are now so obvious and pressing that we need to find ways of addressing them. RRI offers a way to take a broad look at these issues, and it encompasses an array of ways of dealing with them. As an ICT professional, it is your responsibility to make sure that the technology you develop benefits society. RRI can help you achieve this.

Further reading:

Jirotka, M., Grimpe, B., Stahl, B. C., Hartswood, M., Eden, G. (2017). Responsible Research and Innovation in the Digital Age. Communications of the ACM, 60(5), 62–68. https://doi.org/10.1145/3064940

Owen, R., & Goldberg, N. (2010). Responsible Innovation: A Pilot Study with the U.K. Engineering and Physical Sciences Research Council. Risk Analysis, 30(11), 1699–1707. https://doi.org/10.1111/j.1539-6924.2010.01517.x

Stahl, B. C., Obach, M., Yaghmaei, E., Ikonen, V., Chatfield, K., & Brem, A. (2017). The Responsible Research and Innovation (RRI) Maturity Model: Linking Theory and Practice. Sustainability, 9(6), 1036. https://doi.org/10.3390/su9061036

Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580. https://doi.org/10.1016/j.respol.2013.05.008

 

This post was previously published in ITNOW.
