1. Concerns about human agency, evolution and survival

Surveillance and data systems designed primarily for efficiency, profit and control are inherently dangerous

Who decides what about people's code-defined lives, when, where, why and how? Many of these respondents cited concerns that the future of AI will be shaped by those driven by profit motives and a thirst for power. They note that many AI tools rely on individuals' sharing of information, preferences, search strategies and data. Human values and ethics are not necessarily baked into the systems making people's decisions for them. These experts worry that data-based decision-making can be prone to errors, biases, and false logic or mistaken assumptions, and they argue that machine-based decisions often favor "efficiencies" in the name of profit or power that are extremely unfavorable to individuals and the betterment of the human condition.

Michael Kleeman, a senior fellow at the University of California, San Diego and board member at the Institute for the Future, wrote, "The utilization of AI will be disproportionate and biased toward those with more resources. In general, it will reduce autonomy, and, coupled with big data, it will reduce privacy and increase social control. There will be some areas where IA [intelligence augmentation] helps make things easier and safer, but by and large it will be a global net negative".

A professor at a major U.S. university and expert in artificial intelligence as applied to social computing said, "As AI systems take in more data and make bigger decisions, people will be increasingly subject to their unaccountable decisions and non-auditable surveillance practices. The trends around democratic governance of AI are not encouraging. The big players are U.S.-based, and the U.S. is in an anti-regulation stance that seems fairly durable. Therefore, I expect AI technologies to evolve in ways that benefit corporate interests, with little possibility of meaningful public response".

Justin Reich, executive director of MIT Teaching Systems Lab and research scientist in the MIT Office of Digital Learning, responded, "Systems for human-AI collaborations will be built by powerful, affluent people to solve the problems of powerful, affluent people. In the hands of autocratic leaders, AI will become a powerful tool of surveillance and control. In capitalist economies, human-AI collaboration will be deployed to find new, powerful ways of surveilling and controlling workers for the benefit of more-affluent consumers".

Seth Finkelstein, consulting programmer at Finkelstein Consulting and EFF Pioneer Award winner, commented, "AI depends on algorithms and data. Who gets to code the algorithms and to challenge the results? Is the data owned as private property, and who can change it? As a very simple example, let's take the topic of algorithmic recommendations for articles to read. Do they get tuned to produce suggestions which lead to more informative material – which, granted, is a relatively difficult task, and fraught with delicate determinations? Or are they optimized for ATTENTION! CLICKS! *OUTRAGE*!? To be sure, the latter is cheap and easy – and though it has its own share of political problems, they're often more amenable to corporate management (i.e., what's accurate vs. what's unacceptable). There's a whole structure of incentives that will push toward one outcome or the other".
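
Finkelstein's point about incentive structures can be made concrete with a toy example. The sketch below is hypothetical (the articles, predicted scores and weights are all invented): the same ranking code yields very different reading lists depending solely on whether the objective rewards predicted informativeness or predicted engagement.

```python
# Toy illustration of Finkelstein's incentive-structure point.
# All articles, scores and weights here are invented for this sketch.

articles = [
    # (title, predicted_informativeness, predicted_click_probability)
    ("Long-form explainer on zoning policy", 0.90, 0.10),
    ("Celebrity feud erupts on social media", 0.05, 0.80),
    ("Outrage: you won't BELIEVE this ruling", 0.10, 0.95),
    ("Data-rich analysis of the local budget", 0.85, 0.15),
]

def rank(items, w_informative, w_engagement):
    """Order articles by a weighted sum of the two predictions."""
    return sorted(
        items,
        key=lambda a: w_informative * a[1] + w_engagement * a[2],
        reverse=True,
    )

print("Tuned for informative material:")
for title, *_ in rank(articles, 1.0, 0.0):
    print("  ", title)

print("Tuned for clicks and outrage:")
for title, *_ in rank(articles, 0.0, 1.0):
    print("  ", title)
```

The ranking function itself is neutral; the outcome is set entirely by which objective its operator chooses to optimize, which is the structure of incentives Finkelstein describes.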

Douglas Rushkoff, professor of media at City University of New York, responded, "The main reason I believe AI's impact will be mostly negative is that we will be applying it mostly toward the needs of the market, rather than the needs of human beings. So while AI might get increasingly good at extracting value from people, or manipulating people's behavior toward more consumption and compliance, much less attention will likely be given to how AI can actually create value for people. Even the most beneficial AI is still being measured in terms of its ability to provide utility, value or increase in efficiency – fine values, sure, but not the only ones that matter to quality of life".

Annalie Killian, futurist and vice president for strategic partnerships at Sparks & Honey, wrote, "More technology does not make us more human; we have evidence for that now within 10 years of combining the smartphone device with persuasive and addictive designs that shape and hijack behavior. Technologists who are using emotional analytics, image-modification technologies and other hacks of our senses are destroying the fragile fabric of trust and truth that is holding our society together at a rate much faster than we are adapting and compensating – let alone comprehending what is happening. The sophisticated tech is affordable and investible in the hands of very few people who are enriching themselves and growing their power exponentially, and these actors are NOT acting in the best interest of all people".

Collin Baker, senior AI researcher at the International Computer Science Institute at the University of California, Berkeley, commented, "I fear that advances in AI will be turned largely to the service of nation states and mega-corporations, rather than be used for truly constructive purposes. The positive potential, particularly in education and health care, is enormous, but people will have to fight to make it come about. … I hope that AI will get much better at understanding Gricean maxims for cooperative discourse and at understanding people's beliefs, intentions and plans".

Brian Harvey, lecturer on the social implications of computer technology at the University of California, Berkeley, said, "The question makes incorrect presuppositions, encapsulated in the word 'we'. There is no we; there are the owners and the workers. The owners (the 0.1%) will be better off because of AI. The workers (bottom 95%) will be worse off, as long as there are owners to own the AI, same as for any piece of technology".

One of the world's foremost social scientists studying human-technology interactions said, "My chief fear is face-recognition used for social control. Even Microsoft has begged for government regulation! Surveillance of all kinds is the future for AI. It is not benign if not controlled!"

Devin Fidler, futurist and founder of Rethinkery Labs, commented, "If earlier industrialization is any guide, we may be moving into a period of intensified creative destruction as AI technologies become powerful enough to overturn the established institutions and the ordering systems of modern societies. If the holes punched in macro-scale organizational systems are not explicitly addressed and repaired, there will be increased pressures on everyday people as they face not only the problems of navigating an unfamiliar new technology landscape themselves, but also the systemic failure of institutions they rely on that have failed to adapt".

An anonymous respondent said, "My fear is that technology will further separate us from what makes us human and sensitive to others. My hope is that technology would be used to improve the quality of living, not supplant it. Much of the AI innovation is simply clogging our senses, stealing our time, increasing the channels and invasion of adverts. This has destroyed our phones, filled our mailboxes and crowded our email. No product is worth that level of incursion".

Paola Perez, vice president of the Internet Society's Venezuela chapter and chair of the LACNIC Public Policy Forum, responded, "Humans will be better with AI. Many problems will be solved, but many jobs are going to disappear, and there may be more poor people as a result. Will we see life extension? Maybe, and maybe not, because our dependence on technology may also be destructive to our health".

Eliot Lear, principal engineer at Cisco Systems, predicted, "AI and tech will not leave most people better off than they are today. As always, technology outpaces our ability to understand its ramifications so as to properly govern its use. I have no reason to believe that we will have caught up by 2030".

Olivia Coombe, a respondent who provided no identifying details, wrote, "Children learn from their parents. As AI systems become more complex and are given increasingly important roles in the functioning of day-to-day life, we should ask ourselves: What are we teaching our artificial digital children? If we conceive and raise them in a world of individual self-interest, will they just strengthen these existing, and often oppressive, systems of capitalist competition? Or could they go their own way, aspiring to a life of entrepreneurship and collaboration? Worse yet, will they see the reverence we hold for empires and seek to build their own through conquest?"

Peter Asaro, a professor at The New School and philosopher of science, technology and media who examines artificial intelligence and robotics, commented, "AI will produce many advantages for many people, but it will also exacerbate many forms of inequality in society. It is likely to greatly benefit a small group who design and control the technology, benefit a fairly large group of the already well-off in many ways (while also potentially harming them in others), and for the vast majority of people in the world it will offer few visible benefits and be perceived primarily as a tool of the wealthy and powerful to enhance their wealth and power".

Mark Deuze, a professor of media studies at the University of Amsterdam, wrote, "With the advances in AI and tech, the public debate grows over their impact. It is this debate that will contribute to the ethical and moral dimensions of AI, hopefully inspiring a society-wide discussion on what we want from tech and how we will take responsibility for that desire".

Rob Frieden, professor and Pioneers Chair in Telecommunications and Law at Penn State University, said, "Any intelligent system depends on the code written to support it. If the code is flawed, the end product reflects those flaws. An old-school acronym spells this out: GIGO, Garbage In, Garbage Out. I have little confidence that AI can incorporate any and every real-world scenario, even with likely developments in machine learning. As AI expands in scope and reach, defects will have ever-increasing impacts, largely on the negative side of the ledger".

Anthony Judge, author, futurist, editor of the Encyclopedia of World Problems and Human Potential, and former head of the Union of International Associations, said, "AI will offer greater possibilities. My sense is that it will empower many (most probably 1% to 30%) and will disempower many (if not 99%). Especially problematic will be the level of complexity created for the less competent (notably the elderly) as is evident with taxation and banking systems – issues to which sysadmins are indifferent. For some it will be a boon – proactive companions (whether for quality dialogue or sex). Sysadmins will build in unfortunate biases. Missing will be the enabling of interdisciplinarity – as has long been possible but carefully designed out for the most dubious divide-and-rule reasons. Blinkered approaches and blind spots will set the scene for unexpected disasters – currently deniably incomprehensible (Black Swan effect). Advantages for governance will be questionable. Better oversight will be dubiously enabled".

Stephanie Perrin, president of Digital Discretion, a data-privacy consulting firm, wrote, "There is a likelihood that, given the human tendency to identify risk when looking at the unknown future, AI will be used to attempt to predict risk. In other words, more and deeper surveillance will be used to determine who is a good citizen (purchaser, employee, student, etc.) and who [is] bad. This will find its way into public-space surveillance systems, employee-vetting systems (note the current court case in which LinkedIn is suing data scrapers who offer to predict 'flight risk' employees) and all kinds of home-management systems and intelligent cars. While this might introduce a measure of safety in some applications, the fear that comes with unconscious awareness of surveillance will have a severe impact on creativity and innovation. We need that creativity as we address massive problems in climate change and reversing environmental impacts, so I tend to be pessimistic about outcomes".

Alistair Knott, an associate professor specializing in cognitive science and AI at the University of Otago in Dunedin, New Zealand, wrote, "AI has the potential for both positive and negative impacts on society. [Negative impacts are rooted in] the current dominance of transnational companies (and tech companies in particular) in global politics. These companies are likely to appropriate the majority of advances in AI technology – and they are unlikely to spread the benefit of these advances throughout society. We are currently witnessing an extraordinary concentration of wealth in the hands of a tiny proportion of the world's population. This is largely due to the mainstreaming of neoliberalism in the world's dominant economies – but it is intensified by the massive success of tech companies, which achieve huge profits with relatively small workforces. The advance of AI technologies is just going to continue this trend, unless quite draconian political changes are effected that bring transnational companies under proper democratic control".

Richard Forno, of the Center for Cybersecurity at the University of Maryland-Baltimore County, wrote, "AI is only as 'smart' and efficient as its human creators can make it. If AI in things like Facebook algorithms is causing this much trouble now, what does the future hold? The problem is less AI's evolution and more about how humankind develops and uses it – that is where the real crisis in AI will play out".

Sam Punnett, research and strategy officer at TableRock Media, wrote, "The preponderance of AI-controlled systems is designed to take collected data and turn it into a control advantage. Most of the organizations with the resources to develop these systems do so to gain advantages in commercial/financial transactions, manufacturing efficiency and surveillance. Self-regulation by industry has already been shown to fail (e.g., social media platforms and Wall Street). Government agencies lag in both the will and the understanding of the technology's implications needed to effectively implement guidelines curtailing the impacts of unforeseen circumstances. As such, government participation will be reactive to the changes that the technology will bring. My greatest fear is a reliance on faulty algorithms that absolve responsibility while failing to account for exceptions".

Luis Pereira, associate professor of electronics and nanotechnologies, Universidade NOVA de Lisboa, Portugal, responded, "I fear that more control and influence will be exerted on people, such as has started in China. There will be a greater wealth gap, benefits will not spread to all and a caste system will develop, unless a new social compact is put in place, which is unlikely. Widespread revolt is plausible".

Stavros Tripakis, an associate professor of computer science at Aalto University in Finland and adjunct professor at the University of California, Berkeley, wrote, "'1984,' George Orwell, police state".

A principal architect for a top-five technology company commented, "AI will enable vicious regimes to track citizens at all times. Mistaken identifications will put innocent people in jail and may even lead to their execution, with no hope of appeal. In general, AI will only have a positive contribution in truly democratic states, which are dwindling in number".

John Sniadowski, a director for a technology company, wrote, "As technology is currently instantiated it simply concentrates power into a smaller number of international corporations. That needs fixing for everyone to gain the best from AI".

David Brake, senior lecturer in communications at the University of Bedfordshire, UK, said, "Like many colleagues, I fear that AI will be framed as 'neutral' and 'objective' and thereby used as cover to make decisions that would be considered unfair if made by a human. If we do not act to properly regulate the use of AI, we will not be able to interrogate the ways AI decision-making systems are constructed or audit them to ensure their decisions are indeed fair. Decisions may also be made (even more than today) based on a vast array of collected data, and if we are not careful we will be unable to control the flows of information about us used to make those decisions or to correct misunderstandings or errors that can follow us around indefinitely. Imagine being subject to repeated document checks as you travel around the country because you know a number of people who are undocumented immigrants and your movements therefore fit the profile of an undocumented immigrant. And you are not sure whether to protest because you don't know whether such protests could encourage an algorithm to put you into a 'suspicious' category, which could get you harassed even more often".

A longtime veteran of a pioneering internet company commented, "Profit motive and AI at scale nearly guarantee suffering for most people. It should be spiffy for the special people with wealth and power, though. Watching how machines are created to ensure addiction (to deliver ads) is a reminder that profit-driven exploitation always comes first. The push for driverless cars, too, is a push for increased profits".

Joshua Loftus, assistant professor of information, operations and management sciences at New York University and co-author of "Counterfactual Fairness in Machine Learning," commented, "How have new technologies shaped our lives in the past? It depends on the law, market structure and who wields political power. In the present era of extreme inequality and climate catastrophe, I expect technologies to be used by employers to make individual workers more isolated and contingent, by apps to make users more addicted on a second-by-second basis, and by governments for surveillance and increasingly strict border control".

Eugene H. Spafford, internet pioneer and founder and executive director emeritus of the Center for Education and Research in Information Assurance and Security, commented, "Without active controls and limits, the primary adopters of AI systems will be governments and large corporations. Their use of it will be to dominate/control people, and this will not make our lives better".

Michael Muller, a researcher in the AI interactions group for a global technology solutions provider, said it will leave some people better off and others not, writing, "For the wealthy and empowered, AI will help them with their daily lives – and it will probably help them to increase their wealth and power. For the rest of us, I anticipate that AI will help the wealthy and empowered people to surveil us, to manipulate us, and (in some cases) to control us or even imprison us. For those of us who do not have the skills to jump to the AI-related jobs, I think we will find employment scarce and without protections. In my view, AI will be a mixed and intersectional blessing at best".

Estee Beck, assistant professor at the University of Texas at Arlington and author of "A Theory of Persuasive Computer Algorithms for Rhetorical Code Studies," responded, "Tech design and policy affect our privacy in the United States so much so that most people do not think about the tracking of movements, behaviors and attitudes by smartphones, social media, search engines, ISPs [internet service providers] and even Internet of Things-enabled devices. Until tech designers and engineers build privacy into each design and policy decision for consumers, any advances with human-machine/AI collaboration will leave consumers with less security and privacy".

Michael H. Goldhaber, an author, consultant and theoretical physicist who wrote early explorations on the digital attention economy, said, "For those without internet connection now, its expansion will probably be positive overall. For the rest we will see an increasing arms race between uses of control, destructive anarchism, racism, etc., and ad hoc, from-below efforts at promoting social and environmental good. Organizations and states will seek more control to block internal or external attacks of many sorts. The combined struggles will take up an increasing proportion of the world's attention, efforts and so forth. I doubt that any very viable and democratic, egalitarian order will emerge over the next dozen years, and – even in a larger time frame – good outcomes are far from guaranteed".

Dave Burstein, editor and publisher at Fast Net News, said, "There's far too much second-rate AI that is making bad decisions based on inadequate statistical understanding. For example, a parole or sentencing AI probably would find a correlation between growing up in a single parent household and likelihood of committing another crime. Confounding variables, like the poverty of so many single mothers, need to be understood and dealt with. I believe it's wrong for someone to be sent to jail longer because their father left. That kind of problem, confounding variables and the inadequacy of 'preponderant' data, is nearly ubiquitous in AI in practice".
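
Burstein's worry about confounding variables can be shown in a few lines. The sketch below is a purely synthetic simulation (every variable name and effect size is invented): the outcome is generated from poverty alone, yet a naive model that sees only the single-parent flag attributes the effect to that flag; once the confounder is included, the spurious coefficient collapses toward zero.

```python
# Synthetic demonstration of a confounding variable, per Burstein's example.
# Recidivism here is generated from poverty ALONE; all effect sizes are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

poverty = rng.binomial(1, 0.3, n)                      # the true causal factor
single_parent = rng.binomial(1, 0.2 + 0.5 * poverty)   # correlated, not causal
recidivism = rng.binomial(1, 0.1 + 0.4 * poverty)      # depends only on poverty

naive = LogisticRegression().fit(single_parent.reshape(-1, 1), recidivism)
adjusted = LogisticRegression().fit(
    np.column_stack([single_parent, poverty]), recidivism
)

print("naive single-parent coefficient:   ", naive.coef_[0][0])     # large, spurious
print("adjusted single-parent coefficient:", adjusted.coef_[0][0])  # near zero
print("adjusted poverty coefficient:      ", adjusted.coef_[0][1])  # large, real
```

A sentencing tool trained like the naive model would penalize defendants for their parents' marital status; the adjusted fit shows the signal was poverty all along.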

Ian Peter, pioneer internet activist and internet rights advocate, said, "Personal data accumulation is reaching a point where privacy and freedom from unwarranted surveillance are disappearing. In addition, the algorithms that control usage of such data are becoming more and more complex, leading to inevitable distortions. Henry Kissinger may not have been far off the mark when he described artificial intelligence as leading to 'The End of the Age of Enlightenment'".

Michael Zimmer, associate professor and privacy and information ethics scholar at the University of Wisconsin, Milwaukee, commented, "I am increasingly concerned that AI-driven decision-making will perpetuate existing societal biases and injustices, while obscuring these harms under the false belief that such systems are 'neutral'".

Martin Shelton, a professional technologist, commented, "There are many kinds of artificial intelligence – some kinds reliant on preset rules to appear 'smart,' and some which respond to changing conditions in the world. But because AI can be used anywhere we can recognize patterns, the potential uses for artificial intelligence are pretty huge. The question is, how will it be used? … While these tools will become cheaper and more widespread, we can expect that – like smartphones or web connectivity – their uses will be primarily driven by commercial interests. We're beginning to see the early signs of AI failing to make smart predictions in larger institutional contexts. If Amazon fails to correctly suggest the right product in the future, everything is fine. You bought a backpack once, and now Amazon thinks you want more backpacks, forever. It'll be okay. But sometimes these decisions have enormous stakes. ProPublica documented how automated 'risk-assessment' software used in U.S. courtroom sentencing procedures is only slightly more accurate at predicting recidivism than the flip of a coin. Likewise, hospitals using IBM Watson to make predictions about cancer treatments find the software often gives advice that humans would not. To mitigate harm in high-stakes situations, we must critically interrogate how our assumptions about our data and the rules that we use to create our AI promote harm".
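
Shelton's ProPublica example suggests a simple habit: before trusting a high-stakes model, score it against trivial baselines. The sketch below uses synthetic stand-in data (not the actual system ProPublica audited); a weak "risk score" barely beats a coin flip and scarcely differs from always guessing the majority class.

```python
# Sanity-check a risk model against trivial baselines.
# Purely synthetic stand-in data, not the system ProPublica audited.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
y_true = rng.binomial(1, 0.45, n)          # actual outcomes

# A weak model: agrees with the truth only 60% of the time.
y_model = np.where(rng.random(n) < 0.6, y_true, 1 - y_true)

coin_flip = rng.binomial(1, 0.5, n)        # baseline 1: random guess
majority = np.zeros(n, dtype=int)          # baseline 2: always predict the majority class

for name, pred in [("model", y_model), ("coin flip", coin_flip), ("majority", majority)]:
    print(f"{name:10s} accuracy: {(pred == y_true).mean():.3f}")
```

If the gap between the model and the coin flip is this thin, the "enormous stakes" Shelton describes argue against deploying it at all.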

Nigel Hickson, an expert on technology policy development for ICANN based in Brussels, responded, "I am optimistic that AI will evolve in a way that benefits society by improving processes and giving people more control over what they do. This will only happen, though, if the technologies are deployed in a way that benefits all. My fear is that in non-democratic countries, AI will lessen freedom, choice and hope".

Vian Bakir, a professor of political communication and journalism at Bangor University, responded, "I am pessimistic about the future in this scenario because of what has happened to date with AI and data surveillance. For instance, consider the recent furor over fake news/disinformation and the use of complex data analytics in the U.K.'s 2016 Brexit referendum and in the U.S. 2016 presidential election. To understand, influence and micro-target people in order to try to get them to vote a certain way is deeply undemocratic. It shows that current political actors will exploit technology for personal/political gains, irrespective of wider social norms and electoral rules. There is no evidence that current bad practices would not be replicated in the future, especially as each new wave of technological progress outstrips regulators' ability to keep up, and people's ability to comprehend what is happening to them and their data. Furthermore, and related, the capabilities of mass dataveillance in private and public spaces are ever-expanding, and their uptake in states with weak civil society organs and minimal privacy regulation is troubling. In short, dominant global technology platforms show no signs of sacrificing their business models that depend on hoovering up ever-greater quantities of data on people's lives and then hyper-targeting them with commercial messages; and across the world, political actors and state security and intelligence agencies then also make use of such data acquisitions, frequently circumventing privacy safeguards or legal constraints".

Tom Slee, senior product manager at SAP SE and author of "What's Yours is Mine: Against the Sharing Economy," wrote, "Many aspects of life will be made easier and more efficient by AI. But moving a decision, such as one about health care or workplace performance, to AI turns it into a data-driven decision, driven by optimization of some function, which in turn demands more data. Adopting AI-driven insurance ratings, for example, demands more and more lifestyle data from the insured if it is to produce accurate overall ratings. Optimized data-driven decisions about our lives unavoidably require surveillance, and once our lifestyle choices become input for such decisions we lose individual autonomy. In some cases we can ignore this data collection, but we are in the early days of AI-driven decisions: By 2030 I fear the loss will be much greater. I do hope I am wrong".

Timothy Graham, a postdoctoral research fellow in sociology and computer science at Australian National University, commented, "There is already an explosion of research into 'fairness and representation' in ML (and conferences such as Fairness, Accountability and Transparency in Machine Learning), as it is difficult to engineer systems that do not simply reproduce existing social inequality, disadvantage and prejudice. Deploying such systems uncritically will only result in an aggregately worse situation for many individuals, whilst a comparatively small number benefit".
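
One concrete form the fairness-and-representation work Graham cites can take is a per-group audit of a model's decisions. The sketch below is synthetic (group labels and approval rates are invented) and computes two of the simplest measures from that literature: the demographic-parity gap and the "four-fifths" disparate-impact ratio.

```python
# Minimal group-fairness audit: demographic parity and disparate impact.
# Group labels and approval rates are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
group = rng.binomial(1, 0.4, n)  # 0 = group A, 1 = group B

# A model whose approval rate quietly differs by group.
approved = rng.binomial(1, np.where(group == 0, 0.30, 0.18))

rate_a = approved[group == 0].mean()
rate_b = approved[group == 1].mean()

print(f"approval rate, group A: {rate_a:.3f}")
print(f"approval rate, group B: {rate_b:.3f}")
print(f"demographic parity gap: {abs(rate_a - rate_b):.3f}")
print(f"disparate impact ratio: {min(rate_a, rate_b) / max(rate_a, rate_b):.3f}")
# A ratio below ~0.8 fails the common 'four-fifths' rule of thumb.
```

Audits like this only detect aggregate disparities; Graham's point is that without them, deployed systems simply reproduce the inequality already present in their training data.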

A senior researcher and programmer for a major global think tank commented, "I expect AI to be embedded in systems, tools, etc., to make them more useful. However, I am concerned that AI's role in decision-making will lead to more-brittle processes where exceptions are more difficult to handle than they are today – this is not a good thing".

Jenni Mechem, a respondent who provided no identifying details, said, "My two primary reasons for saying that advances in AI will not benefit most people by 2030 are, first, there will continue to be tremendous inequities in who benefits from these advances, and second, if the development of AI is controlled by for-profit entities there will be tremendous hidden costs and people will yield control over vast areas of their lives without realizing it. … The examples of Facebook as a faux community commons bent on extracting data from its users and of pervasive internet censoring in China should teach us that neither for-profit corporations nor government can be trusted to guide technology in a manner that truly benefits everyone. Democratic governments that enforce intelligent regulations as the European Union has done on privacy may offer the best hope".

Suso Baleato, a fellow at Harvard University's Institute of Quantitative Social Science and liaison for the Organization for Economic Cooperation and Development (OECD)'s Committee on Digital Economy Policy, commented, "The intellectual property framework impedes the necessary accountability of the underlying algorithms, and the lack of efficient redistributive economic policies will continue amplifying the bias of the datasets".

Sasha Costanza-Chock, associate professor of civic media at MIT, said, "Unfortunately it is most likely that AI will be deployed in ways that deepen existing structural inequality along lines of race, class, gender, ability and so on. A small portion of humanity will benefit greatly from AI, while the vast majority will experience AI through constraints on life chances. Although it's possible for us to design AI systems to advance social justice, our current trajectory will reinforce historic and structural inequality".

Dalsie Green Baniala, CEO and regulator of the Telecommunications and Radiocommunications Regulator of Vanuatu, wrote, "Often, machine decisions do not produce an accurate result; they do not meet expectations or specific needs. For example, applications are usually invented to target the developed-world market. They may not work appropriately for countries like ours – small islands separated by big waters".

Michiel Leenaars, director of strategy at NLnet Foundation and director of the Internet Society's Netherlands chapter, responded, "Achieving trust is not the real issue; achieving trustworthiness and real empowerment of the individual is. As the technology that to a large extent determines the informational self disappears – or in practical terms is placed out of local control, going 'underground' under the perfect pretext of needing networked AI – the balance between societal well-being and human potential on the one hand and corporate ethics and opportunistic business decisions on the other stands to be disrupted. Following the typical winner-takes-all scenario the internet is known to produce, I expect that different realms of the internet will become even less transparent and more manipulative. For the vast majority of people (especially in non-democracies) there already is little real choice but to move and push along with the masses".

Mike O'Connor, a retired technologist who worked at ICANN and on national broadband issues, commented, "I'm feeling 'internet-pioneer regret' about the Internet of S*** that is emerging from the work we've done over the last few decades. I actively work to reduce my dependence on internet-connected devices and the amount of data that is collected about me and my family. I will most certainly work equally hard to avoid human/AI devices/connections. I earnestly hope that I'm resoundingly proven wrong in this view when 2030 arrives".

Luke Stark, a fellow in the department of sociology at Dartmouth College and at Harvard University's Berkman Klein Center for Internet & Society, wrote, "AI technologies run the risk of providing a comprehensive infrastructure for corporate and state surveillance more granular and all-encompassing than any previous such regime in human history".

Zoetanya Sujon, a senior lecturer specializing in digital culture at the University of the Arts London, commented, "As the history of so many technologies shows us, AI will not be the magic solution to the world's problems or to symbolic and economic inequalities. Instead, AI is most benefiting those with the most power".

Larry Lannom, internet pioneer and vice president at the Corporation for National Research Initiatives (CNRI), said, "I am hopeful that networked human-machine interaction will improve the general quality of life. … My fear: Will all of the benefits of more-powerful artificial intelligence benefit the human race as a whole or simply the thin layer at the top of the social hierarchy that owns the new advanced technologies?"

A professor and researcher in AI based in Europe noted, "Using technological AI-based capabilities will give people the impression that they have more power and autonomy. However, those capabilities will be available in contexts already framed by powerful companies and states. No real freedom. For the good and for the bad".

An anonymous respondent said, "In the area of health care alone there will be tremendous benefits for those who can afford medicine employing AI. But at the same time, there is enormous potential for widening inequality and for abuse. We can see the tip of this iceberg now, with health insurance companies scooping up readily available, poorly protected third-party data that will be used to discriminate".

A senior data analyst and systems specialist with expertise in complex networks responded, "Artificial intelligence software will implement the priorities of the entities that funded development of the software. In some cases, this will [be] a generic service sold to the general public (much as we now have route-planning software in GPS units), and this will provide a definite benefit to consumers. In other cases, software will operate to the benefit of a large company but to the detriment of consumers (for example, calculating a price for a product that will be the highest that a given customer is prepared to pay). In yet a third category, software will provide effective decision-making in areas ranging from medicine to engineering, but will do so at the cost of putting human beings out of work".

A distinguished engineer at one of the world's largest computing hardware companies commented, "Tech will continue to be integrated into our lives in a seamless way. My biggest concern is responsible gathering of information and its use. Information can be abused in many ways as we are seeing today".

A digital rights activist commented, "AI is already (through facial recognition, in particular) technologically laundering longstanding and pervasive bias in the context of police surveillance. Without algorithmic transparency and transparency into training data, AIs can be bent to any purpose".

The following one-liners from anonymous respondents also tie into this theme:

  • A longtime economist for a top global technology company predicted, "The decline of privacy and increase in surveillance".
  • A journalist and leading internet activist wrote, "Computer AI will only be beneficial to its users if it is owned by humans, and not by 'economic AI' (that is, corporations)".
  • A strategy consultant wrote, "The problem is one of access. AI will be used to consolidate power and benefits for those who are already wealthy and further surveil, disenfranchise and outright rob the remaining 99% of the world".
  • A policy analyst for a major internet services provider said, "We just need to be careful about what data is being used and how".
  • A professor of information science wrote, "Systems will be developed that do not protect people's privacy and security".
  • The founder of a technology research firm wrote, "Neoliberal systems function to privilege corporations over individual rights, thus AI will be used in ways to restrict, limit, categorize – and, yes, it will also have positive benefits".
  • A professor of electrical and computer engineering based in Europe commented, "The problem lies in human nature. The most powerful will try to use AI and technology to increase their power and not to the benefit of society".

Other anonymous respondents commented:

  • "The panopticon and invasion of all personal aspects of our lives is already complete".
  • "AI will allow greater control by the organized forces of tyranny, greater exploitation by the organized forces of greed and open a Pandora's box of a future that we as a species are not mature enough to deal with".
  • "The combination of widespread device connectivity and various forms of AI will provide a more pleasant everyday experience but at the expense of an even further loss of privacy".
  • "I have two fears 1) loss of privacy and 2) building a 'brittle' system that fails catastrophically".
  • "AI strategic decisions with the most clout are made by corporations and they do not aim for human well-being in opposition to corporate profitability".
  • "Data is too controlled by corporations and not individuals, and privacy is eroding as surveillance and stalking options have grown unchecked".
  • "The capabilities are not shared equally, so the tendency will be toward surveillance by those with power to access the tools; verbal and visual are coming together with capacities to sort and focus the masses of data".
  • "Knowing humanity, I assume particularly wealthy, white males will be better off, while the rest of humanity will suffer from it".