The Future of Humanity Institute (FHI) has released its report and recommendations on machine learning and artificial intelligence. The report analyses the landscape of current developments in these fields and advises policymakers, researchers and innovators working in machine learning and AI on ways to mitigate potential future harms, which might include the expansion of existing threats, the introduction of new threats, and changes to the character of threats. The report summarises a workshop on digital, physical and political security held at the University of Oxford in February 2017, together with subsequent research stemming from that workshop.
The FHI makes several recommendations in its report, including expanding the range of stakeholders involved in discussing and addressing these challenges, and urging developers, researchers and engineers to take seriously the possibility that their work might be put to malicious use.
Professor Marina Jirotka, head of Human-Centred Computing at Oxford University and co-founder of ORBIT, commented, “The FHI’s recommendation on expanding the range of stakeholders aligns with the principles of responsible research and innovation, which use public and stakeholder engagement as one of the tools with which to address the results of ICT research.”
Martin de Heaver, managing director of ORBIT, added, “We particularly welcome the advice that researchers should anticipate possible negative outcomes of their work and seek to mitigate potential harm arising from it. The RRI principles promulgated by the UK Research Councils and ORBIT specifically address the issue of unforeseen consequences to ICT research and we believe that by working to consider additional possible outcomes, negative effects can be reduced.”