3. How humans and AI might evolve together in the next decade

AI will be integrated into most aspects of life, producing new efficiencies and enhancing human capacities

Many of the leading experts extolled the benefits they expect to keep expanding as AI tools evolve to do more things for more people.

Martijn van Otterlo, author of "Gatekeeping Algorithms with Human Ethical Bias" and assistant professor of artificial intelligence at Tilburg University in the Netherlands, wrote, "Even though I see many ethical issues, potential problems and especially power imbalance/misuse issues with AI (not even starting about singularity issues and out-of-control AI), I do think AI will change most lives for the better, especially looking at the short horizon of 2030 even more-so, because even bad effects of AI can be considered predominantly 'good' by the majority of people. For example, the Cambridge Analytica case has shown us the huge privacy issues of modern social networks in a market economy, but, overall, people value the extraordinary services Facebook offers to improve communication opportunities, sharing capabilities and so on".

Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google, said, "I see AI and machine learning as augmenting human cognition a la Douglas Engelbart. There will be abuses and bugs, some harmful, so we need to be thoughtful about how these technologies are implemented and used, but, on the whole, I see these as constructive".

Mícheál Ó Foghlú, engineering director and DevOps Code Pillar at Google's Munich office, said, "The trend is that AI/ML models in specific domains can out-perform human experts (e.g., certain cancer diagnoses based on image-recognition in retina scans). I think it would be fairly much the consensus that this trend would continue, and many more such systems could aid human experts to be more accurate".

Craig Mathias, principal at Farpoint Group, an advisory firm specializing in wireless networking and mobile computing, commented, "Many if not most of the large-scale technologies that we all depend upon – such as the internet itself, the power grid, and roads and highways – will simply be unable to function in the future without AI, as both solution complexity and demand continue to increase".

Matt Mason, a roboticist and the former director of the Robotics Institute at Carnegie Mellon University, wrote, "AI will present new opportunities and capabilities to improve the human experience. While it is possible for a society to behave irrationally and choose to use it to their detriment, I see no reason to think that is the more likely outcome".

Mike Osswald, vice president of experience innovation at Hanson Inc., commented, "I'm thinking of a world in which people's devices continuously assess the world around them to keep a population safer and healthier. Thinking of those living in large urban areas, with devices forming a network of AI input through sound analysis, air quality, natural events, etc., that can provide collective notifications and insight to everyone in a certain area about the concerns of environmental factors, physical health, even helping provide no quarter for bad actors through community policing".

Barry Hughes, senior scientist at the Center for International Futures at the University of Denver, commented, "I was one of the original test users of the ARPANET and now can hardly imagine living without the internet. Although AI will be disruptive through 2030 and beyond, meaning that there will be losers in the workplace and growing reasons for concern about privacy and AI/cyber-related crime, on the whole I expect that individuals and societies will make choices on use and restriction of use that benefit us. Examples include likely self-driving vehicles at that time, when my wife's deteriorating vision and that of an increased elderly population will make it increasingly liberating. I would expect rapid growth in use for informal/non-traditional education as well as some more ambivalent growth in the formal-education sector. Big-data applications in health-related research should be increasingly productive, and health care delivery should benefit. Transparency with respect to its character and use, including its developers and their personal benefits, is especially important in limiting the inevitable abuse".

Dana Klisanin, psychologist, futurist and game designer, predicted, "People will increasingly realize the importance of interacting with each other and the natural world and they will program AI to support such goals, which will in turn support the ongoing emergence of the 'slow movement'. For example, grocery shopping and mundane chores will be allocated to AI (smart appliances), freeing up time for preparation of meals in keeping with the slow food movement. Concern for the environment will likewise encourage the growth of the slow goods/slow fashion movement. The ability to recycle, reduce, reuse will be enhanced by the use of in-home 3D printers, giving rise to a new type of 'craft' that is supported by AI. AI will support the 'cradle-to-grave' movement by making it easier for people to trace the manufacturing process from inception to final product".

Liz Rykert, president at Meta Strategies, a consultancy that works with technology and complex organizational change, responded, "The key for networked AI will be the ability to diffuse equitable responses to basic care and data collection. If bias remains in the programming it will be a big problem. I believe we will be able to develop systems that will learn from and reflect a much broader and more diverse population than the systems we have now".

Michael R. Nelson, a technology policy expert for a leading network services provider who worked as a technology policy aide in the Clinton administration, commented, "Most media reports focus on how machine learning will directly affect people (medical diagnosis, self-driving cars, etc.) but we will see big improvements in infrastructure (traffic, sewage treatment, supply chain, etc.)".

Gary Arlen, president of Arlen Communications, wrote, "After the initial frenzy recedes about specific AI applications (such as autonomous vehicles, workplace robotics, transaction processing, health diagnoses and entertainment selections), specific applications will develop – probably in areas barely being considered today. As with many new technologies, the benefits will not apply equally, potentially expanding the haves-and-have-nots dichotomy. In addition, as AI delves into new fields – including creative work such as design, music/art composition – we may see new legal challenges about illegal appropriation of intellectual property (via machine learning). However, the new legal tasks from such litigation may not need a conventional lawyer – but could be handled by AI itself. Professional health care AI poses another type of dichotomy. For patients, AI could be a bonanza, identifying ailments, often in early stages (based on early symptoms), and recommending treatments. At the same time, such automated tasks could impact employment for medical professionals. And again, there are legal challenges to be determined, such as liability in the case of a wrong action by the AI. Overall, there is no such thing as 'most people,' but many individuals and groups – especially in professional situations – WILL live better lives thanks to AI, albeit with some severe adjustment pains".

Tim Morgan, a respondent who provided no identifying details, said, "Algorithmic machine learning will be our intelligence amplifier, exhaustively exploring data and designs in ways humans alone cannot. The world was shocked when IBM's Deep Blue computer beat Garry Kasparov in 1997. What emerged later was the realization that human and AI 'centaurs' could combine to beat anyone, human or AI. The synthesis is more than the sum of the parts".

Marshall Kirkpatrick, product director of influencer marketing, responded, "If the network can be both decentralized and imbued with empathy, rather than characterized by violent exploitation, then we're safe. I expect it will land in between, hopefully leaning toward the positive. For example, I expect our understanding of self and freedom will be greatly impacted by an instrumentation of a large part of memory, through personal logs and our data exhaust being recognized as valuable just like when we shed the term 'junk DNA'. Networked AI will bring us new insights into our own lives that might seem as far-fetched today as it would have been 30 years ago to say, 'I'll tell you what music your friends are discovering right now'. AI is most likely to augment humanity for the better, but it will take longer and not be done as well as it could be. Hopefully we'll build it in a way that will help us be comparably understanding to others".

Daniel A. Menasce, professor of computer science at George Mason University, commented, "AI and related technologies coupled with significant advances in computer power and decreasing costs will allow specialists in a variety of disciplines to perform more efficiently and will allow non-specialists to use computer systems to augment their skills. Some examples include health delivery, smart cities and smart buildings. For these applications to become reality, easy-to-use user interfaces, or better yet transparent user interfaces will have to be developed".

David Wells, chief financial officer at Netflix at the time he responded to this study, responded, "Technology progression and advancement has always been met with fear and anxiety, giving way to tremendous gains for humankind as we learn to enhance the best of the changes and adapt and alter the worst. Continued networked AI will be no different but the pace of technological change has increased, which is different and requires us to more quickly adapt. This pace is different and presents challenges for some human groups and societies that we will need to acknowledge and work through to avoid marginalization and political conflict. But the gains from better education, medical care and crime reduction will be well worth the challenges".

Rik Farrow, editor of ;login: for the USENIX association, wrote, "Humans do poorly when it comes to making decisions based on facts, rather than emotional issues. Humans get distracted easily. There are certainly things that AI can do better than humans, like driving cars, handling finances, even diagnosing illnesses. Expecting human doctors to know everything about the varieties of disease and humans is silly. Let computers do what they are good at".

Steve Crocker, CEO and co-founder of Shinkuro Inc. and Internet Hall of Fame member, responded, "AI and human-machine interaction has been under vigorous development for the past 50 years. The advances have been enormous. The results are marbled through all of our products and systems. Graphics, speech [and] language understanding are now taken for granted. Encyclopedic knowledge is available at our fingertips. Instant communication with anyone, anywhere exists for about half the world at minimal cost. The effects on productivity, lifestyle and reduction of risks, both natural and man-made, have been extraordinary and will continue. As with any technology, there are opportunities for abuse, but the challenges for the next decade or so are not significantly different from the challenges mankind has faced in the past. Perhaps the largest existential threat has been the potential for nuclear holocaust. In comparison, the concerns about AI are significantly less".

James Kadtke, expert on converging technologies at the Institute for National Strategic Studies at the U.S. National Defense University, wrote, "Barring the deployment of a few different radically new technologies, such as general AI or commercial quantum computers, the internet and AI [between now and 2030] will proceed on an evolutionary trajectory. Expect internet access and sophistication to be considerably greater, but not radically different, and also expect that malicious actors using the internet will have greater sophistication and power. Whether we can control both these trends for positive outcomes is a public policy issue more than a technological one".

Tim Morgan, a respondent who provided no identifying details, said, "Human/AI collaboration over the next 12 years will improve the overall quality of life by finding new approaches to persistent problems. We will use these adaptive algorithmic tools to explore whole new domains in every industry and field of study: materials science, biotech, medicine, agriculture, engineering, energy, transportation and more. … This goes beyond computability into human relationships. AIs are beginning to understand and speak the human language of emotion. The potential of affective computing ranges from productivity-increasing adaptive interfaces, to 'pre-crime' security monitoring of airports and other gathering places, to companion 'pets' which monitor their aging owners and interact with them in ways that improve their health and disposition. Will there be unseen dangers or consequences? Definitely. That is our pattern with our tools. We invent them, use them to improve our lives and then refine them when we find problems. AI is no different".

Ashok Goel, director of the human-centered computing Ph.D. program at Georgia Tech, wrote, "Human-AI interaction will be multimodal: We will directly converse with AIs, for example. However, much of the impact of AI will come in enhancing human-human interaction across both space (we will be networked with others) and time (we will have access to all our previously acquired knowledge). This will aid, augment and amplify individual and collective human intelligence in unprecedented and powerful ways".

David Cake, a leader with Electronic Frontiers Australia and vice-chair of the ICANN GNSO Council, wrote, "In general, machine learning and related technologies have the capacity to greatly reduce human error in many areas where it is currently very problematic and make available good, appropriately tailored advice to people to whom it is currently unavailable, in literally almost every field of human endeavour".

Fred Baker, an independent networking technologies consultant, longtime leader in the Internet Engineering Task Force and engineering fellow with Cisco, commented, "In my opinion, developments have not been 'out of control,' in the sense that the creation of 'Terminator's' Skynet or the HAL 9000 computer might depict them. Rather, we have learned to automate processes in which neural networks have been able to follow data to its conclusion (which we call 'big data') unaided and uncontaminated by human intuition, and sometimes the results have surprised us. These remain, and in my opinion will remain, to be interpreted by human beings and used for our purposes".

Bob Frankston, software innovation pioneer and technologist based in North America, wrote, "It could go either way. AI could be a bureaucratic straitjacket and tool of surveillance. I'm betting that machine learning will be like the X-ray in giving us the ability to see new wholes and gain insights".

Perry Hewitt, a marketing, content and technology executive, wrote, "Today, voice-activated technologies are an untamed beast in our homes. Some 16% of Americans have a smart speaker, and yet they are relatively dumb devices: They misinterpret questions, offer generic answers and, to the consternation of some, are turning our kids into a**holes. I am bullish on human-machine interactions developing a better understanding of and improving our daily routines. I think in particular of the working parent, often although certainly not exclusively a woman, who carries so much information in their head. What if a human-machine collaboration could stock the house with essentials, schedule the pre-camp pediatrician appointments and prompt drivers for the alternate-side parking/street cleaning rules? The ability for narrow AI to assimilate new information (the bus is supposed to come at 7:10 but a month into the school year is known to actually come at 7:16) could keep a family connected and informed with the right data, and reduce the mental load of household management".
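Hewitt's bus example is, at bottom, a small estimation problem: learn the gap between a published schedule and observed reality. The sketch below is purely illustrative; the observed times, the running-average approach and every name in it are assumptions introduced here, not anything Hewitt or the report specifies.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical sketch: estimate when the school bus *actually* comes
# from observed arrivals, rather than trusting the 7:10 schedule.
SCHEDULED = datetime.strptime("07:10", "%H:%M")

# Assumed observations from the first weeks of the school year.
observed = ["07:14", "07:17", "07:15", "07:16", "07:18"]

def predicted_arrival(times):
    """Average the observed delays and apply them to the scheduled time."""
    delays = [(datetime.strptime(t, "%H:%M") - SCHEDULED).total_seconds()
              for t in times]
    return SCHEDULED + timedelta(seconds=mean(delays))

print(predicted_arrival(observed).strftime("%H:%M"))  # -> 07:16
```

A real assistant would also fold in context such as school calendars or traffic, but even this toy version captures the "assimilate new information" behavior Hewitt describes.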

John McNutt, a professor in the school of public policy and administration at the University of Delaware, responded, "Throwing out technology because there is a potential downside is not how human progress takes place. In public service, a turbulent environment has created a situation where knowledge overload can seriously degrade our ability to do the things that are essential to implement policies and serve the public good. AI can be the difference between a public service that works well and one that creates more problems than it solves".

Randy Marchany, chief information security officer at Virginia Tech and director of Virginia Tech's IT Security Laboratory, said, "AI-human interaction in 2030 will be in its 'infancy' stage. AI will need to go to 'school' in a manner similar to humans. They will amass large amounts of data collected by various sources but need 'ethics' training to make good decisions. Just as kids are taught a wide variety of info and some sort of ethics (religion, social manners, etc.), AI will need similar training. Will AI get the proper training? Who decides the training content?"

Robert Stratton, cybersecurity expert, said, "While there is widespread acknowledgement in a variety of disciplines of the potential benefits of machine learning and artificial intelligence technologies, progress has been tempered by their misapplication. Part of data science is knowing the right tool for a particular job. As more-rigorous practitioners begin to gain comfort and apply these tools to other corpora it's reasonable to expect some significant gains in efficiency, insight or profitability in many fields. This may not be visible to consumers except through increased product choice, but it may include everything from drug discovery to driving".

A data analyst for an organization developing marketing solutions said, "Assuming that policies are in place to prevent the abuse of AI and programs are in place to find new jobs for those who would be career-displaced, there is a lot of potential in AI integration. By 2030, most AI will be used for marketing purposes and be more annoying to people than anything else as they are bombarded with personalized ads and recommendations. The rest of AI usage will be its integration into more tedious and repetitive tasks across career fields. Implementing AI in this fashion will open up more time for humans to focus on long-term and in-depth tasks that will allow further and greater societal progression. For example, AI can be trained to identify and codify qualitative information from surveys, reviews, articles, etc., far faster and in greater quantities than even a team of humans can. By having AI perform these tasks, analysts can spend more time parsing the data for trends and information that can then be used to make more-informed decisions faster and allow for speedier turn-around times. Minor product faults can be addressed before they become widespread, scientists can generate semiannual reports on environmental changes rather than annual or biannual".
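The qualitative-coding task this analyst describes maps onto routine text classification. A minimal sketch, assuming Python with scikit-learn and entirely made-up survey responses and category labels for illustration:

```python
# Hypothetical sketch: auto-coding free-text survey responses so analysts
# can spend their time on trends rather than manual tagging.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: responses already coded by human analysts.
responses = [
    "The checkout kept crashing on my phone",
    "Delivery arrived two days late",
    "Love the new packaging, very easy to recycle",
    "App crashes every time I open my cart",
    "Courier left the parcel at the wrong address",
    "Great redesign, much easier to use",
]
codes = ["product_fault", "delivery", "praise",
         "product_fault", "delivery", "praise"]

# TF-IDF features plus logistic regression is a common, simple baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(responses, codes)

# New, uncoded responses get labels in bulk; predictions will be rough
# with so little data, but the workflow is the point.
print(model.predict(["The app crashed during checkout",
                     "Package showed up a week late"]))
```

With a realistic volume of human-coded examples, the same pipeline extends to the survey, review and article streams the respondent mentions.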

Helena Draganik, a professor at the University of Gdansk in Poland, responded, "AI will not change humans. It will change the relations between them because it can serve as an interpreter of communication. It will change our habits (as an intermediation technology). AI will be a great commodity. It will help in cases of health problems (diseases). It will also generate a great 'data industry' (big data) market and a lack of anonymity and privacy. Humanity will more and more depend on energy/electricity. These factors will create new social, cultural, security and political problems".

There are those who think there won't be much change by 2030.

Christine Boese, digital strategies professional, commented, "I believe it is as William Gibson postulated, 'The future is already here, it's just not very evenly distributed'. What I know from my work in user-experience design and in exposure to many different Fortune 500 IT departments working in big data and analytics is that the promise and potential of AI and machine learning is VASTLY overstated. There has been so little investment in basic infrastructure that entire chunks of our systems won't even be interoperable. The AI and machine learning code will be there, in a pocket here, a pocket there, but system-wide, it is unlikely to be operating reliably as part of the background radiation against which many of us play and work online".

An anonymous respondent wrote, "While various deployments of new data science and computation will help firms cut costs, reduce fraud and support decision-making that involves access to more information than an individual can manage, organisations, professions, markets and regulators (public and private) usually take many more than 12 years to adapt effectively to a constantly changing set of technologies and practices. This generally causes a decline in service quality, insecurity over jobs and investments, new monopoly businesses distorting markets and social values, etc. For example, many organisations will be under pressure to buy and implement new services, but unable to access reliable market information on how to do this, leading to bad investments, distractions from core business, and labour and customer disputes".

Mario Morino, chairman of the Morino Institute and co-founder of Venture Philanthropy Partners, commented, "While I believe AI/ML will bring enormous benefits, it may take us several decades to navigate through the disruption and transition they will introduce on multiple levels".

Daniel Berninger, an internet pioneer who led the first VoIP deployments at Verizon, HP and NASA, currently founder at Voice Communication Exchange Committee (VCXC), said, "The luminaries claiming artificial intelligence will surpass human intelligence and promoting robot reverence imagine exponentially improving computation pushes machine self-actualization from science fiction into reality. The immense valuations awarded Google, Facebook, Amazon, Tesla, et al., rely on this machine-dominance hype to sell infinite scaling. As with all hype, pretending reality does not exist does not make reality go away. Moore's Law does not concede the future to machines, because human domination of the planet does not owe to computation. Any road map granting machines self-determination includes 'miracle' as one of the steps. You cannot turn a piece of wood into a real boy. AI merely 'models' human activity. No amount of improvement in the development of these models turns the 'model' into the 'thing'. Robot reverence attempts plausibility by collapsing the breadth of human potential and capacities. It operates via 'denialism' with advocates disavowing the importance of anything they cannot model. In particular, super AI requires pretending human will and consciousness do not exist. Human beings remain the source of all intent and the judge of all outcomes. Machines provide mere facilitation and mere efficiency in the journey from intent to outcome. The dehumanizing nature of automation and the diseconomy of scale of human intelligence is already causing headaches that reveal another AI Winter arriving well before 2030".

Paul Kainen, futurist and director of the Lab for Visual Mathematics at Georgetown University, commented, "Quantum cat here: I expect complex superposition of strong positive, negative and null as typical impact for AI. For the grandkids' sake, we must be positive!"

The following one-liners from anonymous respondents also tie into AI in 2030:

  • An Internet Hall of Fame member wrote, "You'll talk to your digital assistant in a normal voice and it will just be there – it will often anticipate your needs, so you may only need to talk to it to correct or update it".
  • The director of a cognitive research group at one of the world's top AI and large-scale computing companies predicted that by 2030, "Smartphone-equivalent devices will support true natural-language dialog with episodic memory of past interactions. Apps will become low-cost digital workers with basic commonsense reasoning".
  • Another Internet Hall of Fame member said, "The equivalent of the 'Star Trek' universal translator will become practical, enabling travelers to better interact with people in countries they visit, facilitate online discussions across language barriers, etc".
  • An Internet of Things researcher commented, "We need to balance between human emotions and machine intelligence – can machines be emotional? – that's the frontier we have to conquer".
  • An anonymous respondent wrote, "2030 is still quite possibly before the advent of human-level AI. During this phase AI is still mostly augmenting human efforts – increasingly ubiquitous, optimizing the systems that surround us and being replaced when their optimization criteria are not quite perfect – rather than pursuing those goals programmed into them, whether we find the realization of those goals desirable or not".
  • A research scientist who works for Google said, "Things will be better, although many people are deeply worried about the effects of AI".
  • An ARPANET and internet pioneer wrote, "The kind of AI we are currently able to build is good for data analysis but far, far away from 'human' levels of performance; the next 20 years won't change this, but we will have valuable tools to help analyze and control our world".
  • An artificial intelligence researcher working for one of the world's most powerful technology companies wrote, "AI will enhance our vision and hearing capabilities, remove language barriers, reduce time to find information we care about and help in automating mundane activities".
  • A manager with a major digital innovation company said, "Couple the information storage with the ever-increasing ability to rapidly search and analyze that data, and the benefits to augmenting human intelligence with this processed data will open up new avenues of technology and research throughout society".

Other anonymous respondents commented:

  • "AI will help people to manage the increasingly complex world we are forced to navigate. It will empower individuals to not be overwhelmed".
  • "AI will reduce human error in many contexts: driving, workplace, medicine and more".
  • "In teaching it will enhance knowledge about student progress and how to meet individual needs; it will offer guidance options based on the unique preferences of students that can guide learning and career goals".
  • "2030 is only 12 years from now, so I expect that systems like Alexa and Siri will be more helpful but still of only medium utility".
  • "AI will be a useful tool; I am quite a ways away from fearing SkyNet and the rise of the machines".
  • "AI will produce major benefits in the next 10 years, but ultimately the question is one of politics: Will the world somehow manage to listen to the economists, even when their findings are uncomfortable?"
  • "I strongly believe that an increasing use of numerical control will improve the lives of people in general".
  • "AI will help us navigate choices, find safer routes and avenues for work and play, and help make our choices and work more consistent".
  • "Many factors will be at work to increase or decrease human welfare, and it will be difficult to separate them".