"Wie so häufig in der langen Geschichte des technischen Fortschritts sind auch die Entwicklungen auf dem Feld der KI von großen Hoffnungen und mindestens ebenso großen Befürchtungen begleitet. Was tatsächlich neu ist: Erstmals fragen sich auch akademisch ausgebildete „Wissensarbeiter“, ob e
...
s ihre Jobs noch lange geben wird. Waren es früher körperlich anstrengende Arbeiten, die Menschen an Maschinen übertrugen, sind es heute Übersetzungen, Recherchen oder komplexe Analysen. Was viele dabei gleichermaßen fasziniert und gruselt, ist die Fähigkeit der KI, zu lernen und das erworbene Wissen direkt anzuwenden. In einer populären Vorstellung ist der Schritt zum „Erwachen“ der KI, zur Herausbildung eines eigenen Willens, dann nur noch ein kleiner. Dies mag eine dystopische Übertreibung sein. Doch auch ohne autonomes Bewusstsein ist KI ein mächtiges Instrument. Mit ihrer Hilfe lassen sich Bilder und Stimmen täuschend echt fälschen, was die Gefahr von Deep Fakes erhöht und damit dem demokratischen Diskurs empfindlich schaden kann. Auch können bestehende Vorurteile algorithmisch fortgeschrieben werden, was mit Blick auf predictive policing oder soziale Scoring-Verfahren höchst problematisch ist. Neben der politischen Regulierung ist es daher vor allem wichtig, den mündigen Umgang mit KI zu fördern. Ob Künstliche Intelligenz am Ende „Gutes“ oder „Schlechtes“ bewirkt, liegt in unseren eigenen Händen. Denn es ist die Maschine, die von uns lernt, nicht andersherum." (Editorial)
"The survey found that 13% of respondents’ organizations are very familiar with Gen AI, while an additional 46% possess a basic awareness. Among those who stated they had basic to good familiarity with Gen AI, 38% reported using it either daily or very frequently. The most common use of Gen AI was
...
for research and idea generation, followed by content creation & copy editing, and translation. In terms of the platform, ChatGPT was by far the most widely used, followed by Bing AI, Midjourney, and Dall-E. The majority of respondents (94%) stated that they found Gen AI to be either very useful or somewhat useful. Furthermore 67% of respondents view the future of Gen AI as either very promising (22%) or promising (45%)." (Key findings, page 1)
"Wer KI als Lösung präsentiert, muss auch Beweise dafür liefern. Wir zeigen in der zweiten Ausgabe unseres SustAIn-Magazins, dass KI-Systeme in der Energieversorgung durchaus den Einsatz von erneuerbaren Energien verbessern können. Aber ihr Potenzial kann nur ausgeschöpft werden, wenn eine dafÃ
...
¼r geeignete Infrastruktur existiert. Auch ist immer wieder zu hören, dass autonom fahrende Kleinbusse gut fürs Klima sind. Meistens wird dann aber nicht erwähnt, dass für den Betrieb dieser Kleinbusse viele Ressourcen benötigt werden. KI-Hoffnungsfantasien bringen uns nicht weiter. Wir müssen uns den ganzen Lebenszyklus von KI-Systemen anschauen, wenn wir ihre Nachhaltigkeit bewerten. Wir müssen aufhören, nur nach den CO2-Emissionen zu fragen, wenn wir die Auswirkungen der Systeme auf die Umwelt untersuchen. Und wir müssen KI-Systeme genau, umfassend und unvoreingenommen analysieren, wenn wir ernsthaft versuchen wollen, KI nachhaltiger zu gestalten." (Editorial)
"[...] Die Einbeziehung von vulnerablen Gruppen in die Technologieentwicklung ermöglicht es, ihre spezifischen Herausforderungen, Bedenken und Bedürfnisse besser zu verstehen und zu berücksichtigen. Dies sollte bereits bei der Formulierung des zu lösenden Problems geschehen, bevor überhaupt ein
...
e technische Lösung erdacht wird. Denn: Gemeinwohlorientierung beginnt vor dem ersten Buchstaben Code. Nutzer:innenzentrierte Technologieentwicklung ist möglich und bildet eine wichtige Stellschraube, um den Begriff der „gemeinwohlorientierten KI“ mit Leben zu füllen. Diese Publikation soll als eine Hilfestellung sowohl für Wohlfahrtsverbände als auch für zivilgesellschaftliche Organisationen aller Art verstanden werden, um diese nutzer:innenzentrierte und gemeinwohlorientierte Technologieentwicklung in die Tat umzusetzen. Dabei kann KI im Fokus stehen, es sind aber auch andere digitale Produkte denkbar. Wir haben die Publikation so gestaltet, dass Sie sie immer wieder heranziehen können, wenn Herausforderungen bei der Implementierung und Steuerung digitaler Vorhaben entstehen. Die Ordner stehen dabei für sich und müssen nicht in einem Schwung durchgelesen werden." (Vorwort, Seite 4)
"This study has identified myriad potential benefits of AI and automation solutions in agri-food systems, with the largest being significant enhancements in agricultural productivity and outputs. Data collected from sensors, satellites, or drones can help SSPs prepare and use their available land op
...
timally. Data-generated insights can identify which farm areas are most suited to which crops, and automated input provision like automated irrigation systems can optimize resource use. AI and automation solutions can also improve extension advisory services, resulting in better, more contextualized and real-time advice for farmers, improving yields. Other solutions can predict, identify and mitigate against pests and diseases to reduce spoilage. Enhancing the improved productivity of SSPs is critical for global food security, and the economic and social empowerment of SSPs and the communities in which they live." (Executive summary, page 8)
"Wie kann Theologie zu Technik und KI beitragen? Der Technikdiskurs ist aufgeladen mit religiösen Motiven, und Technologien wie Roboter fordern die Theologie, z. B. das Menschenbild, die Ethik und die religiöse Praxis, neu heraus. Der Sammelband erforscht aus theologischer Perspektive die drängen
...
den Themen unserer Zeit. Dazu begibt sich die Theologie in Dialog mit den Technikwissenschaften. Untersucht werden die Veränderungen des Menschenbildes durch Roboter, Religiöse Roboter, Optimierung des Körpers, medizinische Technologien, Autoregulative Waffensysteme und wie die Theologie durch die Technologisierung transformiert wird." (Verlagsbeschreibung)
"Hoy más que nunca advertimos la necesidad de una Inteligencia Artificial (IA) que respete los derechos humanos y las perspectivas de los pueblos indÃgenas, destacando su papel en la preservación de identidades y patrimonios culturales indÃgenas. Es fundamental democratizar la IA, incluyendo una
...
diversidad de voces en su desarrollo y aplicación y este es uno de los grandes objetivos de este reporte. En definitiva, este reporte es una invitación para entender la necesidad de una IA participativa que respete y se enriquezca con la diversidad cultural, convirtiéndose en una herramienta para el desarrollo sostenible y la promoción de libertades fundamentales. Se presenta como un llamado a la acción para incorporar activa y respetuosamente las perspectivas de los pueblos indÃgenas en la IA, apuntando hacia una futura sabidurÃa colectiva que honre la riqueza de las experiencias y conocimientos humanos. Todo esto, sobre la base de la Recomendación sobre la Ética en la Inteligencia Artificial de la UNESCO, que funciona como una brújula de orientación ética y un cimiento normativo mundial, que permitirá desarrollar IA al servicio de la sociedad, con especial énfasis a los grupos poblacionales más vulnerables, y respetuosa de los derechos humanos." (Prólogo, página 7)
"This report presents the results of deep dive experiments into the risks associated to the design, deployment and use of generative AI to facilitate gender-based violence. It assesses the possible impact posed by generative AI that enables the creation of more realistic ‘synthetic’ media, ‘ha
...
llucinations’ or biases in the outputs, automated harassment campaigns, and the ability to build ‘synthetic histories’ and compositional deepfakes. Lessons learned from the prompt-injection experiments conducted on how gender-based cyber-harassment templates are and can be generated are presented. It concludes with measures to be put in place by generative AI companies and the technology companies that platform them, by regulators and policy makers, by civil society organisations and independent researchers, as well as users." (Back cover)
"The release of ChatGPT at the end of 2022 met with fears and optimism. One particularly important avenue of research that is emerging revolves around ChatGPT's ability to provide accurate and unbiased information on a variety of topics. Given the interest that Google and Microsoft have shown in sim
...
ilar technologies, it is likely that Large Language Models such as ChatGPT could become new gateways to information, and if this is the case, what kind of information this technology provides needs to be investigated. The current study examines the usefulness of ChatGPT as a source of information in a South African context by first investigating ChatGPT's responses to ten South African conspiracy theories in terms of truthfulness, before employing bias classification as well as sentiment analysis to evaluate whether ChatGPT exhibits bias when presenting eight South African political topics. We found that, overall, ChatGPT did not spread conspiracy theories. However, the tool generated falsehoods around one conspiracy theory and generally presented a left bias, albeit not to the extreme. Sentiment analysis showed that ChatGPT's responses were mostly neutral and, when more emotive, were more often positive than negative. The implications of the findings for academics and students are discussed, as are a number of recommendations for future research." (Abstract)
"1. Global internet freedom declined for the 13th consecutive year. Digital repression intensified in Iran, home to this year’s worst decline, as authorities shut down internet service, blocked WhatsApp and Instagram, and increased surveillance in a bid to quell antigovernment protests. Myanmar ca
...
me close to dislodging China as the world’s worst environment for internet freedom, a title the latter country retained for the ninth consecutive year [...] 2. Attacks on free expression grew more common around the world. In a record 55 of the 70 countries covered by Freedom on the Net, people faced legal repercussions for expressing themselves online, while people were physically assaulted or killed for their online commentary in 41 countries [...] 3. Generative artificial intelligence (AI) threatens to supercharge online disinformation campaigns. At least 47 governments deployed commentators to manipulate online discussions in their favor during the coverage period, double the number from a decade ago. Meanwhile, AI-based tools that can generate text, audio, and imagery have quickly grown more sophisticated, accessible, and easy to use, spurring a concerning escalation of these disinformation tactics. Over the past year, the new technology was utilized in at least 16 countries to sow doubt, smear opponents, or influence public debate. 4. AI has allowed governments to enhance and refine their online censorship. The world’s most technically advanced authoritarian governments have responded to innovations in AI chatbot technology, attempting to ensure that the applications comply with or strengthen their censorship systems." (Key findings)
"Since the mid-2010s, chatbots have grown in usage and popularity across the humanitarian sector. While this usage has gained traction, there is scarce information on the collective successes, risks, and trade-offs of this automation. This research addresses this gap, documenting chatbot deployments
...
across the humanitarian sector and exploring the existing uses, benefits, trade-offs and challenges of using chatbots in humanitarian contexts. The research was conducted by The Engine Room, with support from the American Red Cross, the IFRC and valuable input from our advisory committee. This report is not created as a comprehensive overview of chatbots use, instead, the goal of the report is twofold: to offer guiding questions and considerations for humanitarian organisations deciding if a chatbot is an appropriate tool to address programme and community needs. It also contains use cases highlighting the experiences of practitioners working in diverse geographic contexts and issue areas." (Executive summary)
"In particular we recommend to strengthen collaboration (platforms should adopt a collaborative approach involving various stakeholders, including governments, civil society organisations, and fact-checkers, to counter the spread and impact of disinformation. This can include sharing information, be
...
st practices, and resources to develop effective strategies); enhance transparency (platforms should prioritise transparency by providing clear and comprehensive information on their policies, algorithms, and content moderation processes. Users should have a better understanding of how their data is used, and how algorithms work to prevent the amplification of false and misleading narratives); implement effective content moderation (platforms need to allocate sufficient resources to effectively monitor and moderate harmful content. This includes investing in advanced AI systems and human moderation teams to detect and remove disinformation in a timely manner. Transparent and consistent guidelines should be in place to ensure fairness and accountability in content moderation decisions); promote fact-based information (platforms should prioritise the promotion of fact-based information from reliable sources. This can be done by partnering with credible news organizations and fact-checkers to provide accurate information and combat false narratives. Advertising promoting climate change denial or other forms of misinformation should be prevented); improve the access to data for researchers (platforms should make efforts to provide access to data for independent researchers to evaluate the effectiveness of their policies and initiatives in countering disinformation. This will enable better analysis and understanding of the impact of disinformation and the effectiveness of countermeasures); comply with regulatory frameworks (platforms should fully comply with regulatory frameworks, such as the Digital Services Act (DSA) or other relevant International, EU and National laws and regulations, that provide for obligations on addressing disinformation and mitigating associated risks, the Code of Practice on Disinformation that aims to commit signatories to a range of actions to counter disinformation. These actions include providing transparency reports on political advertising, restricting advertising placements on disinformation websites, disrupting advertising revenue for purveyors of disinformation, and enabling user feedback and fact-checking mechanisms. In this framework, compliance should not be limited to large platforms but extended, with adjustments, to smaller platforms to ensure a comprehensive approach)." (Recommendations, page 6)