2. Solutions to address AI's anticipated negative impacts

Improve human collaboration across borders and stakeholder groups

A number of these experts said ways must be found for the people of the world to come to a common understanding of the evolving concerns over AI and digital life, and to reach agreement on cohesive approaches to tackling AI's challenges.

Danil Mikhailov, head of data and innovation for Wellcome Trust, responded, "I see a positive future of human/AI interaction in 2030. In my area, health, there is tremendous potential in the confluence of advances in big data analysis and genomics to create personalised medicine and improve diagnosis, treatment and research. Although I am optimistic about human capacity for adaptation, learning, and evolution, technological innovation will not always proceed smoothly. In this we can learn from previous technological revolutions. For example, [Bank of England chief economist] Andy Haldane rightly pointed out that the original 'luddites' in the 19th century had a justified grievance. They suffered severe job losses, and it took the span of a generation for enough jobs to be created to overtake the ones lost. It is a reminder that the introduction of new technologies benefits people asymmetrically, with some suffering while others benefit. To realise the opportunities of the future we need to acknowledge this and prepare sufficient safety nets, such as well-funded adult education initiatives, to name one example. It's also important to have an honest dialogue between the experts, the media and the public about the use of our personal data for social-good projects, like health care, taking in both the risks of acting – such as effects on privacy – and the opportunity costs of not acting. It is a fact that lives are lost currently in health systems across the world that could be saved even with today's technology, let alone that of 2030".

Edson Prestes, a professor and director of robotics at the Federal University of Rio Grande do Sul, responded, "We must understand that all domains (technological or not) have two sides: a good and a bad one. To avoid the bad one we need to create and promote a culture of AI/Robotics for good. We need to stimulate people to empathize with others. We need to think about potential issues, even if they have a small probability of happening. We need to be futurists, foreseeing potential negative events and how to circumvent them before they happen. We need to create regulations/laws (at national and international levels) to handle globally harmful situations for humans, other living beings and the environment. Applying empathy, we should seriously think about ourselves and others – if the technology will be useful for us and others and if it will not cause any harm. We cannot develop solutions without considering people and the ecosystem as the central component of development. If so, the pervasiveness of AI/robotics in the future will diminish any negative impact and create a huge synergy among people and the environment, improving people's daily lives in all domains while achieving environmental sustainability".

Adam Nelson, a software developer for one of the "big five" global technology companies, said, "Human-machine/AI collaboration will be extremely powerful, but humans will still control intent. If human governance isn't improved, AI will merely make the world more efficient. But the goals won't be human welfare. They'll be wealth aggregation for those in power".

Wendy Seltzer, strategy lead and counsel at the World Wide Web Consortium, commented, "I'm mildly optimistic that we will have devised better techno-social governance mechanisms, such that if AI is not improving the lives of humans, we will restrict its uses".

Jen Myronuk, a respondent who provided no identifying details, said, "The optimist's view includes establishing and implementing a new type of ISO standard – 'encoded human rights' – as a functional data set alongside exponential and advancing technologies. Global human rights and human-machine/AI technology can and must scale together. If applied as an extension of the human experience, human-machine/AI collaboration will revolutionize our understanding of the world around us".

Fiona Kerr, industry professor of neural and systems complexity at the University of Adelaide, commented, "The answer depends very much on what we decide to do regarding the large questions around ensuring equality of improved global health; by agreeing on what productivity and worth now look like, partly supported by the global wage; through fair redistribution of technology profits to invest in both international and national social capital; through robust discussion on the role of policy in rewarding technologists and businesses to build quality partnerships between humans and AI; through the growth of understanding of the neurophysiological outcomes of human-human and human-technological interaction, which allows us to best decide what not to technologise, when a human is more effective, and how to ensure we maximise the wonders of technology as an enabler of a human-centric future".

Benjamin Kuipers, a professor of computer science at the University of Michigan, wrote, "We face several critical choices between positive and negative futures. … Advancing technology will provide vastly more resources; the key decision is whether those resources will be applied for the good of humanity as a whole or if they will be increasingly held by a small elite. Advancing technology will vastly increase opportunities for communication and surveillance. The question is whether we will find ways to increase trust and the possibilities for productive cooperation among people or whether individuals striving for power will try to dominate by decreasing trust and cooperation. In the medium term, increasing technology will provide more powerful tools for human, corporate or even robot actors in society. The actual problems will be about how members of a society interact with each other. In a positive scenario, we will interact with conversational AIs for many different purposes, and even when the AI belongs to a corporation, we will be able to trust that it takes what in economics is called a 'fiduciary' stance toward each of us. That is, the information we provide must be used primarily for our individual benefit. Although we know, and are explicitly told, that our aggregated information is valuable to the corporation, we can trust that it will not be used for our manipulation or our disadvantage".

Denise Garcia, an associate professor of political science and international affairs at Northeastern University, said, "Humanity will come together to cooperate".

Charles Geiger, head of the executive secretariat for the UN's World Summit on the Information Society, commented, "As long as we have a democratic system and a free press, we may counterbalance the possible threats of AI".

Warren Yoder, longtime director of the Public Policy Center of Mississippi, now an instructor at Mississippi College, optimistically responded, "Human/AI collaborations will … augment our human abilities and increase the material well-being of humanity. At the same time the concomitant increase in the levels of education and health will allow us to develop new social philosophies and rework our polities to transform human well-being. AI increases the disruption of the old social order, making the new transformation both necessary and more likely, though not guaranteed".

Wangari Kabiru, author of the MitandaoAfrika blog, based in Nairobi, Kenya, commented, "In 2030, advancing AI and tech will not leave most people better off than they are today, because our global digital mission is not strong enough and not principled enough to assure that 'no, not one is left behind' – perhaps intentionally. The immense positive-impact potential for enabling people to achieve more in nearly every area of life – the full benefits of human-machine/AI collaboration can only be experienced when academia, civil society and other institutions are vibrant, enterprise is human-values-based, and governments and national constitutions and global agreements place humanity first. … Engineering should serve humanity and never should humanity be made to serve the exploits of engineering. More people MUST be creators of the future of LIFE – the future of how they live, future of how they work, future of how their relationships interact and overall how they experience life. Beyond the coexistence of human-machine, this creates synergy".

A professor and expert in AI connected to a major global technology company's projects in AI development wrote, "Precision democracy will emerge from precision education, to incrementally support the best decisions we can make for our planet and our species. The future is about sustaining our planet. As with the current development of precision health as the path from data to wellness, so too will artificial intelligence improve the impact of human collaboration and decision-making in sustaining our planet".

Some respondents argued that individuals must take a more active role in understanding and exercising the decision-making options available to them in these complex, code-dependent systems.

Kristin Jenkins, executive director of BioQUEST Curriculum Consortium, said, "Like all tools, the benefits and pitfalls of AI will depend on how we use it. A growing concern is the collection and potential uses of data about people's day-to-day lives. 'Something' always knows where we are, the layout of the house, what's in the fridge and how much we slept. The convenience provided by these tools will override caution about data collection, so strong privacy protection must be legislated and culturally nurtured. We need to learn to be responsible for our personal data and aware of when and how it is collected and used".

Peng Hwa Ang, professor of communications at Nanyang Technological University and author of "Ordering Chaos: Regulating the Internet," commented, "AI is still in its infancy. A lot of it is rule-based and not demanding of true intelligence or learning. But even so, I find it useful. My car has lane-assistance. I find that it makes me a better driver. When AI is more full-fledged, it will make driving safer and faster. I am using AI for some work I am doing on sentiment analysis. I find that I am able to be more creative in asking questions to be investigated. I expect AI will compel greater creativity. Right now, the biggest fear of AI is that it is a black-box operation – yes, the factors chosen are good and accurate and useful, but no one knows why those criteria are chosen. We know the percentages of the factors, but we do not know the whys. Hopefully, by 2030, the box will be more transparent. That's on the AI side. On the human side, I hope human beings understand that true AI will make mistakes. If not, it is not real AI. This means that people have got to be ready to catch the mistakes that AI will make. It will be very good. But it will (still) not be foolproof".
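
Ang's "black box" worry can be made concrete. Below is a minimal, hypothetical sketch in Python using scikit-learn (neither appears in his response): even a deliberately transparent linear sentiment classifier reveals which words carry what weight – "the percentages of the factors" – while saying nothing about why those factors matter; with the far more complex models Ang describes, the opacity only deepens.

```python
# Illustrative sketch only: a linear sentiment classifier whose per-word
# weights are fully visible, yet whose "whys" remain unexplained.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny toy corpus; real sentiment analysis would use a large labeled dataset.
texts = [
    "great service, very happy",
    "terrible delay, very unhappy",
    "happy with the great results",
    "unhappy about the terrible support",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)          # bag-of-words features
model = LogisticRegression().fit(X, labels)  # transparent linear model

# We can print exactly how much each word pushes a prediction...
for word, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{word:>10s}  {weight:+.3f}")
# ...but nothing here explains *why* the model settled on these weights,
# which is the transparency gap Ang hopes will narrow by 2030.
```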

Bert Huang, an assistant professor in the department of computer science at Virginia Tech focused on machine learning, wrote, "AI will cause harm (and it has already caused harm), but its benefits will outweigh the harm it causes. That said, the [historical] pattern of technology being net positive depends on people seeking positive things to do with the technology, so efforts to guide research toward societal benefits will be important to ensure the best future".

An anonymous respondent said, "We should ensure that values (local or global) and basic philosophical theories on ethics inform the development and implementation of AI systems".