In the rapidly evolving landscape of artificial intelligence (AI), newsrooms around the world are grappling with the challenge of inherent biases in technology that can distort the quality and trustworthiness of news. As AI increasingly influences content creation and distribution, the accuracy of data and its implications for marginalized communities have become a focal point for industry professionals.

Lyndsey Jones, an AI author and transformation coach, along with Madhumita Murgia, the Financial Times’ first artificial intelligence editor, have highlighted the critical issues at the intersection of AI and journalism. Murgia’s book, “Code Dependent: Living in the Shadow of AI,” discusses how women, migrants, and racial minorities are disproportionately affected by AI’s technical limitations, such as generating stereotypes or failing to recognize diverse voices and accents.

The Digital ABCs, a training program initiated by WAN-IFRA Women In News, aims to address these challenges by equipping media professionals with the necessary skills to navigate and shape the digital future. The program includes modules specifically designed to tackle the nuances of AI, such as generating culturally accurate avatars and recognizing local dialects, which have been issues in regions like Africa and Southeast Asia.

A notable incident in which an AI tool mistakenly created an image of an African witch doctor when asked to generate a picture of doctors in Africa underscores the importance of culturally aware AI training. Likewise, the misrepresentation of voices, as when a Zimbabwean avatar spoke with an American accent, illustrates the gaps in AI’s understanding of global diversity.

The issue of bias in AI is not new. Timnit Gebru, a former Google scientist, brought to light the ingrained biases within AI systems, a stance that led to her controversial dismissal. Google CEO Sundar Pichai’s acknowledgment of these biases, following inappropriate AI-generated images of WWII soldiers, signals the ongoing struggle within tech companies to address these issues effectively.

To combat these inherent biases, WAN-IFRA’s program encourages diversity in AI experimentation. Participants have shared a range of strategies, from using AI-generated synthetic avatars to protect journalists’ identities to re-promoting archived content to increase engagement, demonstrating that diverse inputs can lead to innovative solutions.

However, the industry recognizes that more systemic changes are needed. Proposals such as implementing quotas for women in AI research and forming cross-border partnerships to develop shared AI tools suggest a proactive approach to diversity and inclusion in AI development.

For journalists and newsrooms, the path forward involves a critical assessment of how stories are sourced, told, and visualized. Questions about source diversity, protagonist representation, and visual storytelling are crucial in ensuring that AI tools enhance journalistic integrity rather than undermine it.

As AI becomes an integral part of newsrooms, the media industry must remain vigilant in monitoring and reviewing the data that feeds these technologies, ensuring that AI serves to enhance the quality of journalism and represent all voices fairly.