"As global stakeholders from governments, international organizations, the private sector, academia, and civil society, we convene in Hamburg to shape a human-centric, human-rights-based, inclusive, open, sustainable, and responsible AI future. We commit to advancing AI for the SDGs, aligning with ongoing international efforts. A responsible AI future must be built on equal and meaningful participation, with actions to ensure that all stakeholders, especially those from emerging markets, developing economies, and vulnerable groups, have fair and equitable access to, as well as ownership of, computing, data, investment, and resources for capacity and talent development. AI’s benefits must not remain concentrated among a privileged few. We are committed to bridging digital divides and empowering all nations and communities to co-create and leverage AI solutions and evaluations that serve people and the planet. To achieve this, we call for leveraging AI responsibly, inclusively, and sustainably, aligned with the five pillars of the 2030 Agenda for Sustainable Development: people, planet, prosperity, peace, and partnerships." (A Joint Vision)
"If 2023 was the year of generative artificial intelligence, 2024 was marked by the rapid expansion and adoption of AI, driving waves of innovation across nearly every conceivable domain. The United Nations system has kept an encouraging pace, redoubling efforts to harness the power and potential of AI responsibly. That progress is visible not only in the soaring number of UN AI initiatives — now totalling 729 projects, from 406 last year — but also in the depth of engagement across the system, with 53 UN entities contributing to this year’s Activities Report, six of them for the first time. This gathering momentum reflects our shared conviction that AI is no longer a distant aspiration; it is a present-day imperative within and beyond the United Nations. The projects highlighted here tackle urgent challenges, from accelerating climate action to improving health outcomes to expanding educational access, bolstering governance, and creating decent work. Nearly half of these efforts are built on partnerships with governments, academia, industry, and civil-society organizations, underscoring the multistakeholder spirit at the heart of the UN’s inclusive digital transformation agenda. Across our work, AI is already driving efficiencies, revealing deeper insights, and informing faster responses — from chatbots that streamline public-service delivery to data tools that strengthen emergency responses to applications that keep meetings and consultations running smoothly. These innovations are extending the reach and impact of our work in practical, promising ways. Yet the UN remains keenly aware of the risks and responsibilities that accompany AI. Ethics, human rights, and inclusion anchor every initiative, guiding us toward a shared digital future where AI helps us advance opportunity and prosperity for all. This report — prepared by the International Telecommunication Union (ITU) in partnership with 52 other UN entities — embodies the system’s resolve to lead by example. Read on to discover how AI is already driving progress, delivering results, and reshaping the way the UN serves people and planet in the digital age." (Foreword)
"The survey found that 13% of respondents’ organizations are very familiar with Gen AI, while an additional 46% possess a basic awareness. Among those who stated they had basic to good familiarity with Gen AI, 38% reported using it either daily or very frequently. The most common use of Gen AI was for research and idea generation, followed by content creation & copy editing, and translation. In terms of the platform, ChatGPT was by far the most widely used, followed by Bing AI, Midjourney, and Dall-E. The majority of respondents (94%) stated that they found Gen AI to be either very useful or somewhat useful. Furthermore, 67% of respondents view the future of Gen AI as either very promising (22%) or promising (45%)." (Key findings, page 1)
"In the debate on how to improve efficiencies in the humanitarian sector and better meet people’s needs, the argument for the use of artificial intelligence (AI) and automated decision-making (ADM) systems has gained significant traction and ignited controversy for its ethical and human rights-related implications. Setting aside the implications of introducing unmanned and automated systems in warfare, we focus instead on the impact of the adoption of AI-based ADMs in humanitarian response. In order to maintain the status and protection conferred by the humanitarian mandate, aid organizations are called to abide by a broad set of rules condensed in the humanitarian principles, notably the principles of humanity, neutrality, impartiality, and independence. But how do these principles operate when decision-making is automated? This article opens with an overview of AI and ADMs in the humanitarian sector, with special attention to the concept of algorithmic opacity. It then explores the transformative potential of these systems on the complex power dynamics between humanitarians, principled assistance, and affected communities during acute crises. Our research confirms that the existing flaws in accountability and epistemic processes can also be found in the mathematical and statistical formulas and in the algorithms used for automation, artificial intelligence, predictive analytics, and other efficiency-gaining-related processes. In doing so, our analysis highlights the potential harm to people resulting from algorithmic opacity, either through removal or obfuscation of the causal connection between triggering events and humanitarian services through the so-called black box effect (algorithms are often described as black boxes, as their complexity and technical opacity hide and obfuscate their inner workings (Diakopoulos, Tow Center for Digital Journ, 2017)). Recognizing the need for a humanitarian ethics dimension in the analysis of automation, AI, and ADMs used in humanitarian action, we endorse the concept of “explicability” as developed within the ethical framework of machine learning and human-computer interaction, together with a set of proxy metrics.
Finally, we stress the need for developing auditable standards, as well as transparent guidelines and frameworks, to rein in the risks of what has been defined as humanitarian experimentation (Sandvik, Jacobsen, and McDonald, Int. Rev. Red Cross 99(904), 319–344, 2017). This article concludes that accountability mechanisms for AI-based systems and ADMs used to respond to the needs of populations in situations of vulnerability should be an essential feature by default, in order to preserve respect for the do no harm principle even in the digital dimension of aid. In conclusion, while we confirm existing concerns related to the adoption of AI-based systems and ADMs in humanitarian action, we also advocate for a roadmap towards humanitarian AI for the sector and introduce a tentative ethics framework as a basis for future research." (Abstract)