"Our research reveals that the organisational use of AI and algorithmic systems often happens through a process of unstructured integration rather than formal adoption. More precisely, cloud-based, algorithmic-enhanced functionalities and processes are making their way into humanitarian entitiesʼ tech stack through updates and add-ons to preexisting products, slowly but inexorably seeping into most internal systems. A price-based procurement process designed around the static idea of digital asset purchase is inadequate to ensure protection from hyperdynamic tech product development. This is especially evident looking at the growing power imbalances between private actors and nonprofits, and among nonprofits themselves. Digital technology is often heralded as key to democratise and equalise the playing field between humanitarian actors, by finally unlocking local actorsʼ access to resources traditionally reserved for international entities. But emerging trends seem to disprove such claims. The lack of adequate legal frameworks and safeguards, lagging investments in humanitarian funding for information and communication technology (ICT) resources and connectivity, and inadequate dedicated funding for digital programmes are thwarting these opportunities. More broadly, virtuous and responsible procurement practices alone are proving inadequate for protecting human rights, as tech companiesʼ commitments to human rights are replaced by defence-oriented pitches and the size of their human rights teams keeps shrinking.
Preexisting digital divides are enhanced by several deep and subtle changes in traditional tech business models, resulting from the gradual cloudification of most common digital systems and the fast-paced, distributed nature of modern digital development. Even more worryingly, we are seeing new forms of digital divide well beyond the traditional gaps in digital capacity between local and small non-governmental organisations (NGOs) and international non-governmental organisations (INGOs) or UN entities. Cloud-based companiesʼ takeover of generic coordination platforms creates a constantly broadening chasm between the few organisations that are digitally mature, those who are barely retrofitting their processes to fit the new reality, and the broad majority forced to access the promised land of digital humanitarianism through the predatory gates of algorithmic and cloud servicesʼ free tech licensing.
Our findings highlight how organisation-led digital development, building on open-source systems (despite some “openwashing” practices) and trusted external providers, remains the best option for risk reduction when it comes to algorithmic systems. However, in the short to medium term, this might not be financially and technically feasible for the majority of humanitarian actors, especially considering the catastrophic funding environment. To fill this gap, we propose a toolkit in an appendix to this report. This is a foundational governance model for transforming digital procurement from a transactional into a strategic business function, integrated across key organisational departments. By doing so, protection-mandated aid groups could move from running purchase-centred processes, to adopting a dynamic, proactive, and rights-based approach to tech management. This report also contains a set of recommendations for states and donors, the humanitarian community, tech companies, local aid actors and communities, research centres, and cyber experts." (Executive summary, pages 2-3)
"In the debate on how to improve efficiencies in the humanitarian sector and better meet people’s needs, the argument for the use of artificial intelligence (AI) and automated decision-making (ADM) systems has gained significant traction and ignited controversy for its ethical and human rights-related implications. Setting aside the implications of introducing unmanned and automated systems in warfare, we focus instead on the impact of the adoption of AI-based ADMs in humanitarian response. In order to maintain the status and protection conferred by the humanitarian mandate, aid organizations are called to abide by a broad set of rules condensed in the humanitarian principles and notably the principles of humanity, neutrality, impartiality, and independence. But how do these principles operate when decision-making is automated? This article opens with an overview of AI and ADMs in the humanitarian sector, with special attention to the concept of algorithmic opacity. It then explores the transformative potential of these systems on the complex power dynamics between humanitarians, principled assistance, and affected communities during acute crises. Our research confirms that the existing flaws in accountability and epistemic processes can also be found in the mathematical and statistical formulas and in the algorithms used for automation, artificial intelligence, predictive analytics, and other efficiency-gaining-related processes. In doing so, our analysis highlights the potential harm to people resulting from algorithmic opacity, either through removal or obfuscation of the causal connection between triggering events and humanitarian services through the so-called black box effect (algorithms are often described as black boxes, as their complexity and technical opacity hide and obfuscate their inner workings (Diakopoulos, Tow Center for Digital Journ, 2017)). Recognizing the need for a humanitarian ethics dimension in the analysis of automation, AI, and ADMs used in humanitarian action, we endorse the concept of “explicability” as developed within the ethical framework of machine learning and human-computer interaction, together with a set of proxy metrics.
Finally, we stress the need for developing auditable standards, as well as transparent guidelines and frameworks, to rein in the risks of what has been defined as humanitarian experimentation (Sandvik, Jacobsen, and McDonald, Int. Rev. Red Cross 99(904), 319–344, 2017). This article concludes that accountability mechanisms for AI-based systems and ADMs used to respond to the needs of populations in situations of vulnerability should be an essential feature by default, in order to preserve respect for the do no harm principle even in the digital dimension of aid. In conclusion, while we confirm existing concerns related to the adoption of AI-based systems and ADMs in humanitarian action, we also advocate for a roadmap towards humanitarian AI for the sector and introduce a tentative ethics framework as a basis for future research." (Abstract)