Read this section to explore the responsible use of data, the role of artificial intelligence, and the effects of data on people.
Better Data for Doing Good: Responsible Use of Big Data and Artificial Intelligence
From Design to Responsible Use: Ethical Challenges with Using Big Data and AI
Although we are only scratching the surface of what is possible in the new age of big data and AI, and how they can be leveraged for social good, we also need to grapple with both the unintended risks and the malicious use of the same technology. These benefits and looming risks were aptly articulated by the UN Secretary-General at the 2017 "AI for Good Global Summit":
We face a new frontier, with advances moving at warp speed. Artificial intelligence can help analyze enormous volumes of data, which in turn can improve predictions, prevent crimes and help governments better serve people. But there are also serious challenges, and ethical issues at stake. There are real concerns about cyber security, human rights and privacy. . . The implications for development are enormous. Developing countries can gain from the benefits of AI, but they also face the highest risk of being left behind.
Technologies and algorithms by themselves have no intrinsic morality; the same technology can be used for good or ill depending on how it is employed. For existing technologies, ethical considerations must address questions such as how self-driving cars should make life-and-death decisions. Although privacy norms have long been established to protect personal data from misuse and to safeguard individual privacy in the digital world, ethics has become an additional tool for AI applications: it helps protect fundamental human rights and guides decisions in areas where the law has no clear-cut answers. The UN Special Rapporteur on the right to privacy recommends that formal consultation mechanisms be instituted, "including ethics committees, with professional, community and other organizations and citizens to protect against the erosion of rights and identify sound practices" (Cannataci 2017). A recent example in which the ethics and moral obligations of data handling were included in an official UN document is the "Guidance Note on Big Data for the achievement of the 2030 Agenda" adopted by the UN Development Group (UNDG 2017). The note, the first official UN document on big data and privacy, stresses the importance of including data ethics in standard operating procedures for data governance (box 3.10).
Data ethics should be treated holistically using a consistent and inclusive framework that considers a diverse set of outcomes instead of an ad hoc approach that only accounts for limited applications. Such mechanisms include codified data ethics principles or codes of conduct, ethical impact assessments, ethical training for researchers, and ethical review boards.
Privacy impact assessments, in general, allow developers and organizations to effectively assess the risks posed to privacy by big data and AI, thereby ensuring compliance with privacy requirements, identifying mitigation measures, and effectively classifying the impacts of data and algorithm use. Including issues of ethics and human rights in any impact assessment, including a privacy impact assessment, could prove more effective than developing a separate analysis or ethical review framework.
For example, UN Global Pulse builds ethical considerations into its data practices by conducting a "risks, harms, and benefits assessment," which may help identify anticipated or actual ethical and human rights issues that may arise during a data innovation project. The assessment weighs the proportionality of potential benefits against the risks of harm from data use, as well as the risk of harm from the data not being used. If the risks outweigh the benefits, the project does not proceed. In its "Guide to Personal Data Protection and Privacy," the World Food Programme also builds ethics into its procedures through the application of humanitarian principles and risk assessments. Although ethics may not have clear-cut rules, when assessing the risk of harm along with the benefits, "any potential risks and harms should not be excessive in relation to the [likely] positive impacts of data use".
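The proportionality logic described above can be sketched as a simple decision rule. This is only an illustrative sketch: the numeric scales, the `Assessment` structure, and the comparison rule are assumptions for the sake of the example, not part of the UN Global Pulse or World Food Programme methodologies, which rely on qualitative expert judgment.

```python
# Illustrative sketch of a risks-harms-benefits proportionality check.
# All scales and the decision rule are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class Assessment:
    benefit: float         # expected benefit of using the data (0-10)
    harm_if_used: float    # risk of harm from using the data (0-10)
    harm_if_unused: float  # risk of harm from NOT using the data (0-10)


def should_proceed(a: Assessment) -> bool:
    """Proceed only if the expected benefits, including the harm avoided
    by acting, outweigh the risk of harm from data use."""
    return a.benefit + a.harm_if_unused > a.harm_if_used


# A project whose risks outweigh its benefits does not proceed.
print(should_proceed(Assessment(benefit=2, harm_if_used=8, harm_if_unused=1)))  # False
print(should_proceed(Assessment(benefit=7, harm_if_used=3, harm_if_unused=4)))  # True
```

In practice such an assessment is a structured qualitative review rather than a numeric formula; the sketch only captures the section's core rule that a project stops when risks of harm are disproportionate to likely benefits.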
Incorporating privacy by design is also crucial for innovation applications that operate with limited human supervision. The rapidly evolving nature of AI algorithms can give rise to algorithmic bias and unverified results. Similar to privacy by design is the concept of AI ethics by design, which suggests seven principles, including recommendations to proactively identify security risks by using tools such as the privacy impact assessment to minimize potential harm. In addition, ensuring oversight of the entire data innovation process, from design to use, is vital to securing the true incorporation of ethics into AI systems.
Moreover, accountability and transparency are critical ethical principles that must accompany any AI innovation project. "[T]ransparency builds trust in the system, by providing a simple way for the user to understand what the system is doing and why". To maintain transparency, the Institute of Electrical and Electronics Engineers recommends developing new standards that describe measurable, testable levels of transparency, so that systems can be objectively assessed and their level of compliance determined. Although keeping algorithms transparent is increasingly difficult because of the heavily interlinked and layered processes of algorithmic programming, the AI ethics by design approach holds that ensuring the transparency and accountability of algorithms is essential to determining intended outputs and preventing algorithmic bias.
The overall data ethics program may also include recurring data ethics reviews at every critical juncture, such as by review boards. A similar approach already exists in research institutions, where it usually takes the form of institutional review boards. For example, in its published procedures for ethical standards in data collection, the United Nations Children's Fund (UNICEF) relies on review mechanisms such as internal and external review boards, as well as basic ethics training for researchers. Any UNICEF project involving surveys, focus groups, case studies, physical procedures, games, or diet and nutritional studies is subject to ethical review.
A stakeholder-inclusive approach that features "the proactive inclusion of users" is also desirable: "Their interaction will increase trust and overall reliability of these systems". "[T]he context of data use" should also always be considered, which requires human intervention and, at times, context-specific expertise – such as the presence of a humanitarian expert during a humanitarian response, or of a transportation planning expert in a project on transportation policy.
Finally, ethical approaches to AI should be human-rights-centric, incorporating substantive, procedural, and remedial rights. Just as misuse of AI may lead to harm, nonuse of AI may allow preventable harm to occur. Decisions to use or not to use applications of AI can infringe on fundamental rights. As the UN Special Rapporteur on the right to privacy suggested in his recent report to the UN General Assembly, "commitment to one right should not detract from the importance and protection of another right. Taking rights in conjunction wherever possible is healthier than taking rights in opposition to each other". Undoubtedly, incorporating ethics into every stage of the design and implementation of AI projects can mitigate harm and maximize the positive impact of rapidly developing new technologies, ensuring they are used for social benefit.