APPG on Artificial Intelligence calls for ethics training as part of research & innovation

At its meeting on Monday 12 March to gather evidence for its roadmap on the future of AI, the All Party Parliamentary Group on Artificial Intelligence made a strong demand for ethical considerations to be included in the development of machine learning and artificial intelligence.

In the evidence-gathering meeting around its Accountability theme, the APPG sought to answer three main questions:

  • How do we make ethics part of business decision-making processes?
  • How do we assign responsibility around algorithms?
  • What auditing bodies can monitor the ecosystem?

It was clear from the many providers of evidence at the session, including Amnesty International, universities and industry, that ethical concerns need to be foregrounded in AI development. The evidence given echoed the recent recommendations from the Future of Humanity Institute that a much expanded range of stakeholders be involved in discussing and addressing these challenges, and that developers, researchers and engineers take seriously the possibility that their work might be put to malicious purposes.

Professors Marina Jirotka and Bernd Stahl, co-founders of ORBIT, commented, “We are glad to see issues such as these being brought to the fore by the APPG – the transparency of algorithms and the inability of AI to make ethical choices unless these are built-in are essential challenges to tackle if we are to see safe, sustainable, accountable artificial intelligence. We particularly welcome the points made around the requirement to train researchers and developers on the ethical issues and encourage them to think responsibly about their work. ORBIT exists to address exactly these issues – our vision is to provide a robust framework of responsible research and innovation that can ensure development happens in the right way for society.”


Notes for editors

ORBIT is the Observatory for Responsible Research and Innovation in ICT. Led by Oxford and De Montfort Universities and funded in the launch phase by the Engineering and Physical Sciences Research Council (EPSRC), ORBIT provides policy advice, training on responsible research and innovation (RRI), consultancy, project assessment, support services, and an online community for ICT researchers.
Professor Bernd Stahl is Professor of Critical Research in Technology and Director of the Centre for Computing and Social Responsibility at De Montfort University, Leicester, UK. His interests cover philosophical issues arising from the intersections of business, technology, and information. This includes ethical questions of current and emerging ICTs, critical approaches to information systems, and issues related to responsible research and innovation.
Professor Marina Jirotka is Professor of Human Centred Computing at Oxford University and an ORBIT investigator. Her work focuses on deepening societal understanding of the impacts of technology and mitigating negative effects by anticipating outcomes. She leads the Human Centred Computing group, an interdisciplinary research group that aims to understand the ways in which technology affects communication, collaboration and knowledge exchange within scientific, work and home settings.