"Diversity is often announced as a solution to ethical problems in artificial intelligence (AI), but what exactly is meant by diversity and how it can solve those problems is seldom spelled out. This lack of clarity is one hurdle to motivating diversity in AI. Another hurdle is that while the most c
...
ommon perceptions about what diversity is are too weak to do the work set out for them, stronger notions of diversity are often defended on normative grounds that fail to connect to the values that are important to decision-makers in AI. However, there is a long history of research in feminist philosophy of science and a recent body of work in social epistemology that taken together provide the foundation for a notion of diversity that is both strong enough to do the work demanded of it, and can be defended on epistemic grounds that connect with the values that are important to decision-makers in AI. We clarify and defend that notion here by introducing emergent expertise as a network phenomenon wherein groups of workers with expertise of different types can gain knowledge not available to any individual alone, as long as they have ways of communicating across types of expertise. We illustrate the connected epistemic and ethical benefits of designing technology with diverse groups of workers using the examples of an infamous racist soap dispenser, and the millimeter wave scanners used in US airport security." (Abstract)
"Content moderation algorithms influence how users understand and engage with social media platforms. However, when identifying hate speech, these automated systems often contain biases that can silence or further harm marginalized users. Recently, scholars have offered both restorative and transfor
...
mative justice frameworks as alternative approaches to platform governance to mitigate harms caused to marginalized users. As a complement to these recent calls, in this essay, I take up the concept of reparation as one substantive approach social media platforms can use alongside and within these justice frameworks to take actionable steps toward addressing, undoing and proactively preventing the harm caused by algorithmic content moderation. Specifically, I draw on established legal and legislative reparations frameworks to suggest how social media platforms can reconceptualize algorithmic content moderation in ways that decrease harm to marginalized users when identifying hate speech. I argue that the concept of reparations can reorient how researchers and corporate social media platforms approach content moderation, away from capitalist impulses and efficiency and toward a framework that prioritizes creating an environment where individuals from marginalized communities feel safe, protected and empowered." (Abstract)
"The emerging ecosystem of artificial intelligence (AI) ethics and governance auditing has grown rapidly in recent years in anticipation of impending regulatory efforts that encourage both internal and external auditing. Yet, there is limited understanding of this evolving landscape. We conduct an i
...
nterview-based study of 34 individuals in the AI ethics auditing ecosystem across seven countries to examine the motivations, key auditing activities, and challenges associated with AI ethics auditing in the private sector. We find that AI ethics audits follow financial auditing stages, but tend to lack robust stakeholder involvement, measurement of success, and external reporting. Audits are hyper-focused on technically oriented AI ethics principles of bias, privacy, and explainability, to the exclusion of other principles and socio-technical approaches, reflecting a regulatory emphasis on technical risk management. Auditors face challenges, including competing demands across interdisciplinary functions, firm resource and staffing constraints, lack of technical and data infrastructure to enable auditing, and significant ambiguity in interpreting regulations and standards given limited (or absent) best practices and tractable regulatory guidance. Despite these roadblocks, AI ethics and governance auditors are playing a critical role in the early ecosystem: building auditing frameworks, interpreting regulations, curating practices, and sharing learnings with auditees, regulators, and other stakeholders." (Abstract)
"We present a framework for viewing artificial intelligence (AI) as planetary assemblages of coloniality that reproduce dependencies in how it co-constitutes and structures a tiered global data economy. We use assemblage thinking to map the coloniality of power to demonstrate how AI stratifies acros
...
s knowledge, geographies, and bodies to influence development and economic trajectories, impact workers, reframe domestic industrial policies, and reconfigure the international political economy. Our post-colonial framework unpacks AI through its (1) global, (2) meso, and (3) local layers, and further dissects how these layers are vertically integrated, each with its horizontal dependencies. At (1) the global layer of international political economy maps a new digital bipolarity expressing Sino and American global digital corporations’ strategic and dominant positions in shaping a tiered global data economy. Then, at (2) the meso layer, we have a mosaic of domestic industrial policies that fund, frame markets, and develop AI talent across industries, sectors, and organizations to competitively integrate into AI value chains. Finally, incorporating into these are (3) the localized labor processes and tasks, where workers and users enact various AI-mediated tasks and practices driving further value extraction. We traced how AI is an interlaced system of power that reshapes knowledge, geographies, and bodies into dependencies that reinforce stratifications in developing underdevelopment. This commentary maps the current digital realities by laying out an uneven techno-geoeconomic power architecture driving a tiered global data economy and opening new research avenues to examine AI as planetary assemblages of coloniality." (Abstract)
"Large language models (LLMs) and dialogue agents represent a significant shift in artificial intelligence (AI) research, particularly with the recent release of the GPT family of models. ChatGPT’s generative capabilities and versatility across technical and creative domains led to its widespread
...
adoption, marking a departure from more limited deployments of previous AI systems. While society grapples with the emerging cultural impacts of this new societal-scale technology, critiques of ChatGPT’s impact within machine learning research communities have coalesced around its performance or other conventional safety evaluations relating to bias, toxicity, and “hallucination.” We argue that these critiques draw heavily on a particular conceptualization of the “human-centered” framework, which tends to cast atomized individuals as the key recipients of technology’s benefits and detriments. In this article, we direct attention to another dimension of LLMs and dialogue agents’ impact: their effects on social groups, institutions, and accompanying norms and practices. By analyzing ChatGPT’s social impact through a social-centered framework, we challenge individualistic approaches in AI development and contribute to ongoing debates around the ethical and responsible deployment of AI systems. We hope this effort will call attention to more comprehensive and longitudinal evaluation tools (e.g., including more ethnographic analyses and participatory approaches) and compel technologists to complement human-centered thinking with social-centered approaches." (Abstract)
"As artificial intelligence (AI) becomes more seamlessly integrated into our social life, the unfair outcomes and ethical issues associated with AI and its subtechnologies have been widely discussed in scholarly work across disciplines in recent years. This study provides an overview of the conceptu
...
alization, empirical scholarship, and ethical concerns related to algorithmic bias across diverse disciplines. In doing so, the study relies on the framework of AI-mediated communication and human-AI communication, as well as topic modeling and semantic network analysis to examine the conceptualization and major thematic areas of AI bias literature. The study reveals the complexity of the concept of algorithmic bias, which extends beyond the algorithm itself. Empirical scholarship on AI and algorithmic bias revolves around conceptualizations, human perceptions, algorithm optimization, practical applications, and ethics and policy implications. Understanding and addressing the ethical challenges require a multilevel examination from the perspectives of different stakeholders. Theoretical and practical implications are further discussed in the context of AI and algorithmic justice." (Abstract)
"This engaging Handbook critically examines the moral opportunities and challenges surrounding artificial intelligence. It provides a comprehensive overview of the most pressing problems concerning this technology by drawing on a wide range of analytical methods, traditions and approaches. Advocatin
...
g for a diversification of the study of ethics and AI, this Handbook covers the foundations of the field before delving into the challenges of responsibility, justice and authority in an AI-centred landscape. Chapter authors champion typically underrepresented or marginal traditions, including continental philosophy, indigenous cosmologies, queer studies, post-colonial theories, African philosophies, disability studies, and feminist ethics. Balancing legal and moral philosophies, the Handbook surveys the transformative present of AI, while also reckoning with the ethics of an increasingly inscrutable future." (Publisher description)
"Längst ist Künstliche Intelligenz nicht mehr auf kognitive Zwecke begrenzt: Sie wertet, etwa zur Kriminalitätsprävention oder gezielten Kundenansprache, menschliche Mimik aus und suggeriert in Spielen oder Lernprogrammen Emotionen und Stimmungen. Erscheinen bereits solche auf Empfindungen ziele
...
nde Anwendungen durchaus kritikwürdig, so wirft es für die Philosophin Eva Guskar-Weber erst recht Fragen auf, wenn Chatbots oder Avatare zwischenmenschliche Beziehungen ersetzen sollen. Sind rote Linien überschritten, wenn emotionale KI nicht mehr selbstbestimmt in einer gemeinsamen Realität mit anderen Menschen genutzt wird, mit der Folge, dass die Grenzen zu rein fiktionalen Beziehungen nicht mehr erkannt werden? Könnten mit Blick auf das noch lange nicht ausgereizte Potenzial einer KI, die auch ethisch verwerfliche oder inhumane Bedürfnisse befriedigen könne, unabdingbare soziale Fähigkeiten wie Toleranz, Respekt und Rücksichtnahme verlernt werden? Sollen Roboter durch ihre äußere Anmutung in der Lage sein, bei ihrem menschlichen Gegenüber – insbesondere Alten oder Pflegebedürftigen – Gefühle zu simulieren und emotionale Erwartungen auszulösen? Guskar-Weber beleuchtet das Potenzial und die Risiken emotionaler KI und plädiert für ethisch-moralische Standards bei ihrer Entwicklung und Nutzung." (Verlagsbeschreibung)
"Was bedeutet die zunehmende Nutzung Künstlicher Intelligenz für die Demokratie, für Offenheit und Vielfalt in der Gesellschaft? Wie gerecht entscheiden Algorithmen? Vielfach zeigt sich, dass KI-Anwendungen strukturelle Ungleichheit und bestehende Diskriminierungsmuster reproduzieren können. Den
...
noch berge Künstliche Intelligenz sowohl demokratiegefährdendes Potenzial als auch Chancen etwa für die Bekämpfung von Hassrede im Internet oder für die politische Bildung. Der Band versammelt Perspektiven aus so unterschiedlichen Feldern wie der Antisemitismusforschung, der Psychologie, der Informatik, dem Journalismus, der Techniksoziologie und der bildenden Kunst. Aufgrund von Verzerrungen im Datenmaterial kann Künstliche Intelligenz etwa bei der Bewertung von Bewerbungen, Kreditwürdigkeit oder Prognosen zur erneuten Straffälligkeit zu diskriminierenden Entscheidungen kommen. Daher fordern die Autorinnen und Autoren gezielte Investitionen in politische Bildung und gerechtere soziale Strukturen. Diese seien nötig, um zu verhindern, dass Rassismus, Antisemitismus und andere Formen gruppenbezogener Menschenfeindlichkeit in digitalen Räumen reproduziert werden, denn: Technische Systeme, so ihr Fazit, werden kein Problem lösen können, das im Kern sozial oder politisch begründet sei." (Verlagsbeschreibung)
"The goal of this policy brief is to demonstrate how stakeholders can apply the lens of Media and Information Literacy for all to Artificial Intelligence/Generative Artificial Intelligence (AI/GAI) to address its risks and opportunities in the information and digital ecologies. A special focus is on
...
the empowerment of youth and highlighting issues of gender and cultural diversity. It also aims to provide recommendations based on Media and Information Literacy (MIL) principles and competences that can lead to concrete policy actions. This policy brief is also useful for civil society organizations and a variety of stakeholders, including media, digital platforms, and governments." (Introduction)
"A recent innovation in the field of machine learning has been the creation of very large pre-trained models, also referred to as ‘foundation models’, that draw on much larger and broader sets of data than typical deep learning systems and can be applied to a wide variety of tasks. Underpinning
...
text-based systems such as OpenAI’s ChatGPT and image generators such as Midjourney, these models have received extraordinary amounts of public attention, in part due to their reliance on prompting as the main technique to direct and apply them. This paper thus uses prompting as an entry point into the critical study of foundation models and their implications. The paper proceeds as follows: In the first section, we introduce foundation models in more detail, outline some of the main critiques, and present our general approach. We then discuss prompting as an algorithmic technique, show how it makes foundation models programmable, and explain how it enables different audiences to use these models as (computational) platforms. In the third section, we link the material properties of the technologies under scrutiny to questions of political economy, discussing, in turn, deep user interactions, reordered cost structures, and centralization and lock-in. We conclude by arguing that foundation models and prompting further strengthen Big Tech’s dominance over the field of computing and, through their broad applicability, many other economic sectors, challenging our capacities for critical appraisal and regulatory response." (Abstract)