At its meeting on Monday 12 March, held to gather evidence for its roadmap on the future of AI, the All-Party Parliamentary Group on Artificial Intelligence (APPG AI) called strongly for ethical considerations to be built into the development of machine learning and artificial intelligence.
In this evidence-gathering session on its Accountability theme, the APPG sought to answer three main questions:
- How do we make ethics part of business decision-making processes?
- How do we assign responsibility around algorithms?
- What auditing bodies can monitor the ecosystem?
It was clear from the many witnesses at the session, including Amnesty International, universities and industry, that ethical concerns need to be foregrounded in AI development. The evidence echoed the recent recommendations from the Future of Humanity Institute that a much wider range of stakeholders be involved in discussing and addressing these challenges, and that developers, researchers and engineers take seriously the possibility that their work might be put to malicious use.
Professors Marina Jirotka and Bernd Stahl, co-founders of ORBIT, commented, “We are glad to see issues such as these being brought to the fore by the APPG – the transparency of algorithms and the inability of AI to make ethical choices unless these are built in are essential challenges to tackle if we are to see safe, sustainable, accountable artificial intelligence. We particularly welcome the points made around the requirement to train researchers and developers on the ethical issues and encourage them to think responsibly about their work. ORBIT exists to address exactly these issues – our vision is to provide a robust framework of responsible research and innovation that can ensure development happens in the right way for society.”