2. Solutions to address AI's anticipated negative impacts

Develop policies to assure that AI development is directed at augmenting humans and serving the common good

Many experts who shared their insights in this study suggested there has to be an overall change in the development, regulation and certification of autonomous systems. They generally said the goal should be values-based, inclusive, decentralized networks "imbued with empathy" that help individuals assure that technology meets social and ethical responsibilities for the common good.

Susan Etlinger, an industry analyst for Altimeter Group and expert in data, analytics and digital strategy, commented, "In order for AI technologies to be truly transformative in a positive way, we need a set of ethical norms, standards and practical methodologies to ensure that we use AI responsibly and to the benefit of humanity. AI technologies have the potential to do so much good in the world: identify disease in people and populations, discover new medications and treatments, make daily tasks like driving simpler and safer, monitor and distribute energy more efficiently, and so many other things we haven't yet imagined or been able to realize. And – like any tectonic shift – AI creates its own type of disruption. We've seen this with every major invention, from the Gutenberg press to the semiconductor. But AI is different. Replication of some human capabilities using data and algorithms has ethical consequences. Algorithms aren't neutral; they replicate and reinforce bias and misinformation. They can be opaque. And the technology and means to use them rest in the hands of a select few organizations, at least today".

Bryan Johnson, founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, "We could start with owning our own digital data and the data from our bodies, minds and behavior, and then follow by correcting our major tech companies' incentives away from innovation for everyday convenience and toward radical human improvement. As an example of what tech could look like when aligned with radical human improvement, cognitive prosthetics will one day give warnings about cognitive biases – like how cars today have sensors letting you know when you drift off to sleep or make a lane change without a signal – correct them, and steer an individual away from potentially biased decisions. This could lead to better behaviors in school, home and work, and encourage people to make better health decisions".

Marc Rotenberg, executive director of Electronic Privacy Information Center (EPIC), commented, "The challenge we face with the rise of AI is the growing opacity of processes and decision-making. The favorable outcomes we will ignore. The problematic outcomes we will not comprehend. That is why the greatest challenge ahead for AI accountability is AI transparency. We must ensure that we understand and can replicate the outcomes produced by machines. The alternative outcome is not sustainable".

John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, "While today people provide 'consent' for their data usage, most people don't understand the depth and breadth of how their information is utilized by businesses and governments at large. Until every individual is provided with a sovereign identity attached to a personal data cloud they control, information won't truly be shared – just tracked. By utilizing blockchain or similar technologies and adopting progressive ideals toward citizens and their data, as demonstrated by countries like Estonia, we can usher in genuine digital democracy in the age of the algorithm. The other, rarely discussed issue underlying the 'human-AI augmentation' narrative is the economic underpinnings driving all technology manufacturing. Where exponential-growth shareholder models are prioritized, human and environmental well-being diminishes. Multiple reports from people like Joseph Stiglitz point out that while AI will greatly increase GDP in the coming decades, the benefits of these increases will favor the few versus the many. It's only by adopting 'Beyond GDP' or triple-bottom-line metrics that 'people, planet and profit' will shape a holistic future between humans and AI".
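Havens's call for a sovereign identity attached to a user-controlled personal data cloud can be made concrete with a small sketch. The Python toy model below is our own illustration under stated assumptions, not an IEEE standard or the Estonian system: a hash-chained consent log stands in for the blockchain layer, and all names (PersonalDataCloud, grant_consent, is_authorized) are hypothetical.

```python
import hashlib
import json
import time

class PersonalDataCloud:
    """Toy model of a user-controlled data store. A hash-chained consent
    log stands in for a blockchain ledger (illustrative assumption)."""

    def __init__(self, owner_id):
        self.owner_id = owner_id
        self.consent_log = []  # each entry links to the previous entry's hash

    def _entry_hash(self, entry):
        # Content-address the entry so later tampering is detectable.
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    def grant_consent(self, requester, data_fields, purpose, ttl_seconds):
        # Only the owner appends grants; each grant is scoped and expiring.
        prev_hash = self.consent_log[-1]["hash"] if self.consent_log else "genesis"
        entry = {
            "owner": self.owner_id,
            "requester": requester,
            "fields": data_fields,
            "purpose": purpose,
            "expires": time.time() + ttl_seconds,
            "prev": prev_hash,
        }
        entry["hash"] = self._entry_hash(entry)
        self.consent_log.append(entry)
        return entry["hash"]

    def is_authorized(self, requester, field):
        # Access is legitimate only while an unexpired grant covers the field.
        now = time.time()
        return any(
            e["requester"] == requester and field in e["fields"] and e["expires"] > now
            for e in self.consent_log
        )

# The user, not the platform, decides who sees what and for how long.
cloud = PersonalDataCloud("citizen-42")
cloud.grant_consent("clinic.example", ["heart_rate"], "diagnosis", ttl_seconds=3600)
print(cloud.is_authorized("clinic.example", "heart_rate"))  # True
print(cloud.is_authorized("adtech.example", "heart_rate"))  # False
```

The design point matches the quote: when authorization flows from the individual's own auditable log, information is shared on the owner's terms rather than merely tracked.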

Greg Lloyd, president and co-founder at Traction Software, presented a future scenario: "By 2030 AIs will augment access and use of all personal and networked resources as highly skilled and trusted agents for almost every person – human or corporate. These agents will be bound to act in accordance with new laws and regulations that are fundamental elements of their construction much like Isaac Asimov's 'Three Laws of Robotics' but with finer-grain 'certifications' for classes of activities that bind their behavior and responsibility for practices much like codes for medical, legal, accounting and engineering practice. Certified agents will be granted access to personal or corporate resources, and within those bounds will be able to converse, take direction, give advice and act like trusted servants, advisers or attorneys. Although these agents will 'feel' like intelligent and helpful beings, they will not have any true independent will or consciousness, and must not pretend to be human beings or act contrary to the laws and regulations that bind their behavior. Think Ariel and Prospero".

Tracey P. Lauriault, assistant professor of critical media and big data at Carleton University's School of Journalism and Communication, commented, "[What about] regulatory and policy interventions to protect citizens from potentially harmful outcomes, AI auditing, oversight, transparency and accountability? Without some sort of principles or a systems-based framework to ensure that AI remains ethical and in the public interest, in a stable fashion, I must assume that AI will impede agency and could lead to decision-making that can be harmful, biased, inaccurate and not able to dynamically change with changing values. There needs to be some sort of accountability".

Joël Colloc, professor at Université du Havre Normandy University and author of "Ethics of Autonomous Information Systems," commented, "When AI supports human decisions as a decision-support system, it can help enhance life, health and well-being and supply improvements for humanity. See Marcus Flavius Quintilianus's principles: Who is doing what, with what, why, how, when, where? Autonomous AI is power that can be used by powerful persons to control the people and put them in slavery. Applying the Quintilian principles to the role of AI … we should propose a code of ethics of AI to evaluate whether each type of application is oriented toward the well-being of the user: 1) do not harm the user, 2) benefits go to the user, 3) do not misuse her/his freedom, identity and personal data, and 4) decree as unfair any clauses alienating the user's independence or weakening his/her rights of control over privacy in use of the application. The sovereignty of the user of the system must remain total".

Joseph Turow, professor of communication at the University of Pennsylvania, wrote, "Whether or not AI will improve society or harm it by 2030 will depend on the structures governing societies of the era. Broadly democratic societies with an emphasis on human rights might encourage regulations that push AI in directions that help all sectors of the nation. Authoritarian societies will, by contrast, set agendas for AI that further divide the elite from the rest and use technology to cultivate and reinforce the divisions. We see both tendencies today; the dystopian one has the upper hand especially in places with the largest populations. It is critical that people who care about future generations speak out when authoritarian tendencies of AI appear".

Henry E. Brady, dean of the Goldman School of Public Policy at the University of California, Berkeley, wrote, "I believe that policy responses can be developed that will reduce biases and find a way to accommodate AI and robotics with human lives".

Jennifer King, director of privacy at Stanford Law School's Center for Internet and Society, said, "Unless we see a real effort to capture the power of AI for the public good, I do not see an overarching public benefit by 2030. The shift of AI research to the private sector means that AI will be developed to further consumption, rather than extend knowledge and public benefit".

Gary Kreps, distinguished professor of communication and director of the Center for Health and Risk Communication at George Mason University, wrote, "The tremendous potential for AI to be used to engage and adapt information content and computer services to individual users can make computing increasingly helpful, engaging and relevant. However, to achieve these outcomes, AI needs to be programmed with the user in mind. For example, AI services should be user-driven, adaptive to individual users, easy to use, easy to understand and easy for users to control. These AI systems need to be programmed to adapt to individual user requests, learning about user needs and preferences".
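Kreps's design principles lend themselves to a brief illustration. The sketch below is a hypothetical example, not a description of any deployed system, of what "user-driven, adaptive and easy for users to control" can mean in practice; the class and method names are our assumptions.

```python
from collections import Counter

class UserDrivenRecommender:
    """Illustrative sketch of a user-driven, adaptive service: it learns
    only from explicit feedback and keeps its state inspectable and erasable."""

    def __init__(self):
        self.preferences = Counter()  # topic -> learned weight

    def record_feedback(self, topic, liked):
        # Adaptive: update only on explicit user signals, not covert tracking.
        self.preferences[topic] += 1 if liked else -1

    def explain(self):
        # Easy to understand: the entire model is a readable table.
        return dict(self.preferences)

    def forget(self, topic=None):
        # Easy to control: the user can erase any or all learned state.
        if topic is None:
            self.preferences.clear()
        else:
            self.preferences.pop(topic, None)

    def rank(self, items):
        # User-driven: ordering reflects the user's own stated preferences.
        return sorted(items, key=lambda t: self.preferences[t], reverse=True)

rec = UserDrivenRecommender()
rec.record_feedback("health", liked=True)
rec.record_feedback("gossip", liked=False)
print(rec.rank(["gossip", "health"]))  # ['health', 'gossip']
print(rec.explain())                   # {'health': 1, 'gossip': -1}
rec.forget()                           # the user wipes all learned preferences
```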

Thomas Streeter, a professor of sociology at the University of Vermont, said, "The technology will not determine whether things are better or worse in 2030; social and political choices will".

Paul Werbos, a former program director at the National Science Foundation who first described the process of training artificial neural networks through backpropagation of errors in 1974, said, "We are at a moment of choice. The outcome will depend a lot on the decisions of very powerful people who do not begin to know the consequences of the alternatives they face, or even what the substantive alternatives are".

Divina Frau-Meigs, professor of media sociology at the University of Paris III: Sorbonne Nouvelle and UNESCO chair for sustainable digital development, responded, "The sooner the ethics of AI are aligned with human rights tenets the better".

Juan Ortiz Freuler, a policy fellow at the World Wide Web Foundation, wrote, "We believe technology can and should empower people. If 'the people' are to continue to have a substantive say in how society is run, then the state needs to increase its technical capabilities to ensure proper oversight of these companies. Tech in general and AI in particular will promote the advancement of humanity in every area by allowing processes to scale efficiently, reducing costs and making more services available to more people (including quality health care, mobility, education, etc.). The open question is how these changes will affect power dynamics. To operate effectively, AI requires a broad set of infrastructure components, which are not equally distributed. These include data centers, computing power and big data. What is more concerning is that there are reasons to expect further concentration. On the one hand, data scales well: The upfront (fixed) costs of setting up a data center are large compared to the cost of keeping it running. Therefore, the cost of hosting each extra datum is marginally lower than the previous one. Data is the fuel of AI, and therefore whoever gets access to more data can develop more effective AI. On the other hand, AI creates efficiency gains by allowing companies to automate more processes, meaning whoever gets ahead can undercut competitors. This cycle fuels concentration. As more of our lives are managed by technology there is a risk that whoever controls these technologies gets too much power. The benefits in terms of quality of life and the risks to people's autonomy and control over politics are qualitatively different, and they cannot (and should not) be traded off against each other".
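Freuler's data-scaling argument is, at bottom, fixed-cost economics, and a short worked formula makes the mechanism explicit. In the sketch below, F is the upfront cost of building a data center, c the marginal cost of hosting one more datum and n the number of data hosted; the symbols are our own, not the respondent's:

```latex
% Average cost per hosted datum with fixed cost F and marginal cost c:
\[
  AC(n) \;=\; \frac{F + c\,n}{n} \;=\; \frac{F}{n} + c,
  \qquad \lim_{n \to \infty} AC(n) \;=\; c .
\]
```

Because AC(n) falls toward c as n grows, an incumbent already hosting billions of data points serves the next datum more cheaply than a challenger starting from zero, which is exactly the concentration dynamic the quote describes.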

Meryl Alper, an assistant professor of communication at Northeastern University and a faculty associate at Harvard University's Berkman Klein Center for Internet and Society, wrote, "My fear is that AI tools will be used by a powerful few to further centralize resources and marginalize people. These tools, much like the internet itself, will allow people to do this ever more cheaply, quickly and in a far-reaching and easily replicable manner, with exponentially negative impacts on the environment. Preventing this in its worst manifestations will require global industry regulation by government officials with hands-on experience in working with AI tools at the federal, state and local levels, and transparent audits of government AI tools by grassroots groups of diverse (in every sense of the term) stakeholders".

David Wilkins, instructor in computer science at the University of Oregon, responded, "AI must be able to explain the basis for its decisions".

A top research director and technical fellow at a major global technology company said, "There is a huge opportunity to enhance folks' lives via AI technologies. The positive uses of AI will dominate as they will be selected for their value to people. I trust the work by industry, academia and civil society to continue to play an important role in moderating the technology, such as pursuing understandings of the potential costly personal, social and societal influences of AI. I particularly trust the guidance coming from the long-term, ongoing One Hundred Year Study on AI and the efforts of the Partnership on AI".

Peter Stone, professor of computer science at the University of Texas at Austin and chair of the first study panel of the One Hundred Year Study on Artificial Intelligence (AI100), responded, "As chronicled in detail in the AI100 report, I believe that there are both significant opportunities and significant challenges/risks when it comes to incorporating AI technologies into various aspects of everyday life. With carefully crafted industry-specific policies and responsible use, I believe that the potential benefits outweigh the risks. But the risks are not to be taken lightly".

Anita Salem, systems research and design principal at SalemSystems, warned of a possible dystopian outcome: "Human-machine interaction will result in increasing precision and decreasing human relevance unless specific efforts are made to design in 'humanness'. For instance, AI in the medical field will aid more precise diagnosis, will increase surgical precision and will increase evidence-based analytics. If designed correctly, these systems will allow humans to do what they do best – provide empathy, use experience-based intuition and utilize touch and connection as a source of healing. If human needs are left out of the design process, we'll see a world where humans are increasingly irrelevant and more easily manipulated. We could see increasing under-employment leading to larger wage gaps, greater poverty and homelessness, and increasing political alienation. We'll see fewer opportunities for meaningful work, which will result in increasing drug and mental health problems and the further erosion of the family support system. Without explicit efforts to humanize AI design, we'll see a population that is needed for purchasing, but not creating. This population will need to be controlled and AI will provide the means for this control: law enforcement by drones, opinion manipulation by bots, cultural homogeneity through synchronized messaging, election systems optimized from big data and a geopolitical system dominated by corporations that have benefited from increasing efficiency and lower operating costs".

Chris Newman, principal engineer at Oracle, commented, "As it becomes more difficult for humans to understand how AI/tech works, it will become harder to resolve inevitable problems. A better outcome is possible with a hard push by engineers and consumers toward elegance and simplicity (e.g., Steve-Jobs-era Apple)".

A research scientist based in North America wrote, "The wheels of legislation, which is a primary mechanism to ensure benefits are distributed throughout society, move slowly. While the benefits of AI/automation will accrue very quickly for the 1%, it will take longer for the rest of the populace to feel any benefits, and that's ONLY if our representative leaders DELIBERATELY enact STRONG social and fiscal policy. For example, AI will save billions in labor costs – and also cut the bargaining power of labor in negotiations with capital. Any company using AI technologies should be heavily taxed, with that money going into strong social welfare programs like job retraining and federal jobs programs. For another example, any publicly funded AI research should be prevented from being privatized. The public ought to see the reward from its own investments. Don't let AI follow the pattern of Big Pharma's exploitation of the public-permitted Bayh-Dole Act".

Ken Birman, a professor in the department of computer science at Cornell University, responded, "By 2030, I believe that our homes and offices will have evolved to support app-like functionality, much like the iPhone in my pocket. People will customize their living and working spaces, and different app suites will support different lifestyles or special needs. For example, think of a young couple with children, a group of students sharing a home or an elderly person who is somewhat frail. Each would need different forms of support. This 'applications' perspective is broad and very flexible. But we also need to ensure that privacy and security are strongly protected by the future environment. I do want my devices and apps linked on my behalf, but I don't ever want to be continuously spied-upon. I do think this is feasible, and as it occurs we will benefit in myriad ways".

Martin Geddes, a consultant specializing in telecommunications strategies, said, "The unexpected impact of AI will be to automate many of our interactions with systems where we give consent and to enable a wider range of outcomes to be negotiated without our involvement. This requires a new presentation layer for the augmented reality metaverse, with a new 'browser' – the Guardian Avatar – that helps to protect our identity and our interests".

Lindsey Andersen, an activist at the intersection of human rights and technology for Freedom House and Internews, now doing graduate research at Princeton University, commented, "Already, there is an overreliance on AI to make consequential decisions that affect people's lives. We have rushed to use AI to decide everything, from what content we see on social media to what credit scores we are assigned to how long a sentence a defendant should serve. While often well-intentioned, these uses of AI are rife with ethical and human rights issues, from perpetuating racial bias to violating our rights to privacy and free expression. If we have not dealt with these problems through smart regulation, consumer/buyer education and establishment of norms across the AI industry, we could be looking at a vastly more unfair, polarized and surveilled world in 2030".

Yeseul Kim, a designer for a major South Korean search firm, wrote, "The prosperity generated by AI and its benefits will improve the quality of life for most people only when its ethical implications and social impacts are widely discussed and shared across society, and only when pertinent regulations and legislation are set up to mitigate the misconduct that AI advancement can bring about. If these conditions are met, computers and machines can process data at unprecedented speed and at an unrivaled precision level, and this will improve the quality of life, especially in the medical and health care sectors. It has already been demonstrated and widely shared among medical expert groups that doctors perform better in detecting diseases when they work with AI. Robotics for surgery is also progressing; this will benefit patients as well, since robots can assist human surgeons who inevitably face physical limits when they conduct surgeries".

Mark Maben, a general manager at Seton Hall University, wrote, "The AI revolution is, sadly, likely to be dystopian. At present, governmental, educational, civic, religious and corporate institutions are ill-prepared to handle the massive economic and social disruption that will be caused by AI. I have no doubt that advances in AI will enhance human capacities and empower some individuals, but this will be more than offset by the fact that artificial intelligence and associated technological advances will mean far fewer jobs in the future. Sooner than most individuals and societies realize, AI and automation will eliminate the need for retail workers, truck drivers, lawyers, surgeons, factory workers and other professions. In order to ensure that the human spirit thrives in a world run and ruled by AI, we will need to change the current concept of work. That is an enormous task for a global economic system in which most social and economic benefits come from holding a traditional job. We are already seeing a decline in democratic institutions and a rise in authoritarianism due to economic inequality and the changing nature of work. If we do not start planning now for the day when AI results in complete disruption of employment, the strain is likely to result in political instability, violence and despair. This can be avoided by policies that provide for basic human needs and encourage a new definition of work, but the behavior to date by politicians, governments, corporations and economic elites gives me little confidence in their ability to lead us through this transition".

Eduardo Vendrell, a computer science professor at the Polytechnic University of Valencia in Spain, responded, "These advances will have a noticeable impact on our privacy, since the basis for this application is focused on the information we generate with the use of different technologies. … It will be necessary to regulate in a decisive way the access to the information and its use".

Yoram Kalman, an associate professor at the Open University of Israel and member of The Center for Internet Research at the University of Haifa, wrote, "The main risk is when communication and analysis technologies are used to control others, to manipulate them, or to take advantage of them. These risks are ever-present and can be mitigated through societal awareness and education, and through regulation that identifies entities that become very powerful thanks to a specific technology or technologies, and which use that power to further strengthen themselves. Such entities – be they commercial, political, national, military, religious or any other – have in the past tried and succeeded in leveraging technologies against the general societal good, and that is an ever-present risk of any powerful innovation. This risk should make us vigilant but should not keep us from realizing one of the most basic human urges: the drive to constantly improve the human condition".

Sam Gregory, director of WITNESS and digital human rights activist, responded, "We should assume all AI systems for surveillance and population control and manipulation will be disproportionately used and inadequately controlled by authoritarian and non-democratic governments. These governments and democratic governments will continue to pressure platforms to use AI to monitor for content, and this monitoring, in and of itself, will contribute to the data set for personalization and for surveillance and manipulation. To fight back against this dark future we need to get the right combination of attention to legislation and platform self-governance right now, and we need to think about media literacy to understand AI-generated synthetic media and targeting. We should also be cautious about how much we encourage the use of AI as a solution to managing content online and as a solution to, for example, managing hate speech".

Jonathan Kolber, futurist, wrote, "My fear is that, by generating AIs that can learn new tasks faster and more reliably than people can, the future economy will have only evanescent opportunities for most people. My hope is that we will begin implementing a sustainable and viable universal basic income, and in particular Michael Haines' MUBI proposal. (To my knowledge, the only such proposal that is sustainable and can be implemented in any country at any time.) I have offered a critique of alternatives. Given that people may no longer need to depend on their competitive earning power in 2030, AI will empower a far better world. If, however, we fail to implement a market-oriented universal basic income or something equally effective, vast multitudes will become unemployed and unemployable without means to support themselves. That is a recipe for societal disaster".

Walid Al-Saqaf, senior lecturer at Södertörn University, member of the board of trustees of the Internet Society (ISOC) and vice president of the ISOC Blockchain Special Interest Group, commented, "The challenge is to ensure that the data used for AI procedures is reliable. This entails the need for strong cybersecurity and data integrity. The latter, I believe, can be tremendously enhanced by distributed ledger technologies such as blockchain. I foresee mostly positive results from AI so long as there are enough safeguards to protect against the automated execution of tasks in areas that may have ethical considerations, such as decisions with life-or-death implications. AI has a lot of potential. It should be used to add to, and not replace, human intellect and judgment".

Danny O'Brien, international director for a nonprofit digital rights group, commented, "I'm generally optimistic about the ability of humans to direct technology for the benefit of themselves and others. I anticipate human-machine collaboration to take place at an individual level, with tools and abilities that enhance our own judgment and actions, rather than this being a power restricted to a few actors. So, for instance, if we use facial-recognition or predictive tools, it will be under the control of an end-user, transparent and limited to personal use. This may require regulation, internal coding restraints or a balance being struck between user capabilities. But I'm hopeful we can get there".

Fernando Barrio, director of the law program at the Universidad Nacional de Río Negro in Argentina, commented, "The interaction between humans and networked AI could lead to a better future for a big percentage of the population. In order to do so efforts need to be directed not only at increasing AI development and capabilities but also at positive policies to increase the availability and inclusiveness of those technologies. The challenge is not technical; it is sociopolitical".

Paul Jones, professor of information science at the University of North Carolina at Chapel Hill, responded, "AI as we know it in 2018 is just beginning to understand itself. Like HAL, it will have matured by 2030 into an understanding of its post-adolescent self and of its relationship to humans and to the world. But, also, humans will have matured in our relationship to AI. Like all adolescent relationships there will have been risk taking and regrets and hopefully reconciliation. Language was our first link to other intelligences, then books, then the internet – each a more intimate conversation than the one before. AI will become our link, adviser and to some extent our wise loving companion".

Jean-Claude Heudin, a professor with expertise in AI and software engineering at the De Vinci Research Center at Pole Universitaire Leonard de Vinci in France, wrote, "Natural intelligence and artificial intelligence are complementary. We need all the intelligence possible for solving the problems yet to come. More intelligence is always better".

Bryan Alexander, futurist and president of Bryan Alexander Consulting, responded, "I hope we will structure AI to enhance our creativity, to boost our learning, to expand our relationships worldwide, to make us physically safer and to remove some drudgery".

But some respondents expressed concern that the setting of policy could itself do damage.

Scott Burleigh, software engineer and interplanetary internet pioneer, wrote, "Advances in technology itself, including AI, always increase our ability to change the circumstances of reality in ways that improve our lives. They also always introduce possible side effects that can make us worse off than we were before. Those effects are realized when the policies we devise for using the new technologies are unwise. I don't worry about technology; I worry about stupid policy. I worry about it a lot, but I am guardedly optimistic; in most cases I think we eventually end up with tolerable policies".

Jeff Jarvis, director of the Tow-Knight Center at City University of New York's Craig Newmark School of Journalism, commented, "What worries me most is worry itself: An emerging moral panic that will cut off the benefits of this technology for fear of what could be done with it. What I fear most is an effort to control not just technology and data but knowledge itself, prescribing what information can be used for before we know what those uses could be. I could substitute 'book' for 'AI' and the year 1485 (or maybe 1550) for 2030 in your question and it'd hold fairly true. Some thought it would be good, some bad; both end up right. We will figure this out. We always have. Sure, after the book there were wars and other profound disturbances. But in the end, humans figure out how to exploit technologies to their advantage and control them for their safety. I'd call that a law of society. The same will be true of AI. Some will misuse it, of course, and that is the time to identify limits to place on its use – not speculatively before. Many more will use it to find economic, societal, educational and cultural benefit and we need to give them the freedom to do so".

Some respondents said that no matter how society comes together to troubleshoot AI concerns, there will still be problems.

Dave Guston, professor of political science and co-director of the Consortium for Science, Policy & Outcomes at Arizona State University, said, "The question asked about 'most people'. Most people in the world live a life that is not well regarded by technology, technology developers and AI. I don't see that changing much in the next dozen years".

A longtime Silicon Valley communications professional who has worked at several of the top tech companies over the past few decades responded, "AI will continue to improve *if* quality human input is behind it. If so, better AI will support service industries at the top of the funnel, leaving humans to handle interpretation, decisions and applied knowledge. Medical data-gathering for earlier diagnostics comes to mind. Smarter job-search processes, environmental data collection for climate-change actions – these applications all come to mind".

Hari Shanker Sharma, an expert in nanotechnology and neurobiology at Uppsala University in Sweden, said, "AI has not yet peaked, hence growth will continue, but evil also uses such developments. That will bring bigger dangers to mankind. The need will be to balance growth with safety; e.g., social media is good and bad. The ways to protect against evil mongers are not sufficient. What is needed is the ability to trace an attacker or evil monger in the global village in order to control and punish. AI will give birth to an artificial human being who could be an angel or a devil. Plan for countering evil at every development stage".

A changemaker working for digital accessibility wrote, "There is no reason to assume some undefined force will be able to correct for or ameliorate the damage of human nature amplified with power-centralizing technologies. There is no indication that governments will be able to counterbalance power-centralization trends, as governments, too, take advantage of such market failures. The outward dressing of such interactions is probably the least important aspect of it".

An information-science futurist commented, "I fear that powerful business interests will continue to put profits above all else, closing their eyes to the second- and third-order effects of their decisions. I fear that we do not have the political will to protect and promote the common interests of citizens and democracy. I fear that our technological tools are advancing more quickly than our ability to manage them wisely. I have, however, recently spotted new job openings with titles like 'Director of Research, Policy and Ethics in AI' and 'Architect, AI Ethical Practice' at major software companies. There are reasons for hope".

The following one-liners from anonymous respondents also tie into this theme:

  • An open-source technologist in the automotive industry wrote, "We'll have to have independent AI systems with carefully controlled data access, clear governance and individuals' right to be forgotten".
  • A research professor of international affairs at a major university in Washington, D.C., responded, "We have to find a balance between regulations designed to encourage ethical nondiscriminatory use, transparency and innovation".
  • The director for a major regional internet registry said, "The ability of government to properly regulate advanced technologies is not keeping up with the evolution of those technologies. This allows many developments to proceed without sufficient notice, analysis, vetting or regulation to protect the interests of citizens (Facebook being a prime example)".
  • A professor at a major Silicon-Valley-area university said, "If technological advances are not integrated into a vision of holistic, ecologically sustainable, politically equitable social visions, they will simply serve gated and locked communities".
  • A member of the editorial board of the Association for Computing Machinery journal on autonomous and adaptive systems commented, "By developing an ethical AI, we can provide smarter services in daily life, such as collaborating objects providing on-demand, highly adaptable services in any environment supporting daily life activities".

Other anonymous respondents commented:

  • "It is essential that policymakers focus on impending inequalities. The central question is for whom will life be better, and for whom will it be worse? Some people will benefit from AI, but many will not. For example, folks on the middle and lower end of the income scale will see their jobs disappear as human-machine/AI collaborations become lower-cost and more efficient. Though such changes could generate societal benefits, they should not be born on the backs of middle- and low-income people".
  • "Results will be determined by the capacity of political, criminal justice and military institutions to adapt to rapidly evolving technologies".
  • "To assure the best future, we need to ramp up efforts in the areas of decentralizing data ownership, education and policy around transparency".
  • "Most high-end AI knowhow is and will be controlled by a few giant corporations unless government or a better version of the United Nations step in to control and oversee them".
  • "Political change will determine whether AI technologies will benefit most people or not. I am not optimistic due to the current growth of authoritarian regimes and the growing segment of the super-rich elite who derive disproportionate power over the direction of society from their economic dominance".
  • "Mechanisms must be put in place to ensure that the benefits of AI do not accrue only to big companies and their shareholders. If current neo-liberal governance trends continue, the value-added of AI will be controlled by a few dominant players, so the benefits will not accrue to most people. There is a need to balance efficiency with equity, which we have not been doing lately".