2. Solutions to address AI's anticipated negative impacts

A number of participants in this canvassing offered solutions to the worrisome potential future spawned by AI. Among them: 1) Improving collaboration across borders and stakeholder groups. 2) Developing policies to ensure that AI development is directed at augmenting humans and serving the common good. 3) Shifting the priorities of economic, political and education systems to empower individuals to stay ahead in the "race with the robots".

Many respondents sketched out overall aspirations:

Andrew Wycoff, the director of OECD's directorate for science, technology and innovation, and Karine Perset, an economist in OECD's digital economy policy division, commented, "Twelve years from now, we will benefit from radically improved accuracy and efficiency of decisions and predictions across all sectors. Machine learning systems will actively support humans throughout their work and play. This support will be unseen but pervasive – like electricity. As machines' ability to sense, learn, interact naturally and act autonomously increases, they will blur the distinction between the physical and the digital world. AI systems will interconnect and work together to predict and adapt to our human needs and emotions. The growing consensus that AI should benefit society at large leads to calls to facilitate the adoption of AI systems to promote innovation and growth, help address global challenges, and boost jobs and skills development, while at the same time establishing appropriate safeguards to ensure these systems are transparent and explainable, and respect human rights, democracy, culture, nondiscrimination, privacy and control, safety, and security. Given the inherently global nature of our networks and the applications that run across them, we need to improve collaboration across countries and stakeholder groups to move toward common understanding and coherent approaches to key opportunities and issues presented by AI. This is not too different from the post-war discussion on nuclear power. We should also tread carefully toward Artificial General Intelligence and avoid current assumptions on the upper limits of future AI capabilities".

Wendy Hall, professor of computer science at the University of Southampton and executive director of the Web Science Institute, said, "By 2030 I believe that human-machine/AI collaboration will be empowering for human beings overall. Many jobs will have gone, but many new jobs will have been created and machines/AI should be helping us do things more effectively and efficiently both at home and at work. It is a leap of faith to think that by 2030 we will have learnt to build AI in a responsible way and we will have learnt how to regulate the AI and robotics industries in a way that is good for humanity. We may not have all the answers by 2030 but we need to be on the right track by then".

Ian O'Byrne, an assistant professor focusing on literacy and technology at the College of Charleston, said, "I believe in human-machine/AI collaboration, but the challenge is whether humans can adapt our practices to these new opportunities".

Arthur Bushkin, an IT pioneer who worked with the precursors to the Advanced Research Projects Agency Network (ARPANET) and Verizon, wrote, "The principal issue will be society's collective ability to understand, manage and respond to the implications and consequences of the technology".

Daniel Obam, information and communications technology policy advisor, responded, "As we develop AI, the issue of ethical behaviour is paramount. AI will allow authorities to analyse and allocate resources where there is the greatest need. AI will also change the way we work and travel. … Digital assistants that mine and analyse data will help professionals make concise decisions in health care, manufacturing and agriculture, among others. Smart devices and virtual reality will enable humans to interact with and learn from historical or scientific issues in a clearer manner. Using AI, authorities will be able to prevent crime before it happens. Cybersecurity needs to be at the forefront to prevent unscrupulous individuals from using AI to perpetrate harm or evil on the human race".

Ryan Sweeney, director of analytics at Ignite Social Media, commented, "Our technology continues to evolve at a growing rate, but our society, culture and economy are not as quick to adapt. We'll have to be careful that the benefits of AI for some do not further divide those who might not be able to afford the technology. What will that mean for our culture as more jobs are automated? We will need to consider the impact on the current class divide".

Susan Mernit, executive director of The Crucible and co-founder and board member of Hack the Hood, responded, "If AI is in the hands of people who do not care about equity and inclusion, it will be yet another tool to maximize profit for a few".

The next three sections of this report focus on solutions most often mentioned by respondents to this canvassing.