Based on some recent conversations with our Slownews.kr team, I had the opportunity to pen a rough guideline on using AI for journalism in an ethical way. Instead of going with a technical manual like I often do, I opted to make it more of a manifesto: what journalism’s social mission is (i.e. the human role within it), how to cope with these tools, and what humans should pursue once the tools are in use. Anyway, that’s my take on this topic, based on the technology we have today. Originally written in Korean; the draft was translated into English using DeepL for speed and human-edited by me for accuracy.
Slownews.kr Guideline for
Ethical Use of AI in Journalism
ver 250804.
The core mission of journalism is to discover, organize, and interpret facts about social issues, along with opinions based on them, to help citizens make democratic decisions in a well-informed manner. If journalism is to promote delicate social advancement amid the pluralistic interests of the diverse identities that characterize modern advanced democratic societies, then the sheer quantity and quality of the content that needs to be delivered, and the effectiveness of its communication, pose an ever-growing challenge.
From print to the internet, journalism has historically sought to play its social role by adapting, sometimes quickly and sometimes slowly, to the cutting edge of the media tools of its time. Generative AI should be viewed within this framework. As a tool, AI can facilitate the gathering of a wide range of facts, help us recognize which agendas stand out among them, help us frame explanations, help us choose among possible interpretations, and suggest effective ways to package them for different audiences.
The social function of journalism itself, on the other hand, rests entirely in the hands of the people who practice it. No tool can replace that human part, however capable it may appear. Which voices our society needs more of right now; how to bring the right mix of expert voices to a given conflict; how to sustain interest in issues that require long-term engagement beyond the attention span of individual citizens; how to strike the most socially beneficial balance between simple impartiality and activism: these are all questions that we, as humans who can only survive within the social environment we cultivate ourselves, must explore case by case.
Products produced by tools, without humans carefully considering their social function, are not journalism. They are just chunks of language, made to fill word counts and harvest views.
As such, we propose a few guidelines for the use of AI in journalism today.
1. Humans carry the responsibility.
Humans, not tools, are responsible for journalism. Any part that requires value judgment, such as reasoning, evaluating, or drawing implications, must therefore be verified by humans. Even simple factual statements, such as AI-generated summaries, should be fact-checked by humans to prevent unintended inconsistencies.
2. Recognize AI’s limitations.
AI is subject to bias from a number of factors, including the value systems of the developers behind the algorithm, the nature of the training data, and the abuse of its learning capabilities. Furthermore, it lacks the internal consistency that can be expected of humans, so the same prompt can produce completely different results if the algorithm changes. Therefore, AI should be used to collect and analyze data, to say nothing of writing, only when human journalists have a detailed understanding of the reliability and representativeness of the source material and of the methodology of the analysis. This should be the key focus of all editing work.
3. Disclose the use of AI.
As AI outputs are generated within a specific context, it is important to be transparent about whether, under what conditions, and why AI was used, to ensure credibility and verifiability. Furthermore, journalists should take care to prevent people from mistaking AI-generated content for human-generated content.
3.1. When personal data is used for features such as personalized content recommendations and summaries, this should be disclosed, and users should be given the option to opt out.
3.2. The use of AI to generate visual images should be limited to depictions of non-existent things and should be disclosed. Using AI to generate lifelike images depicting real people or events is inherently manipulative. If an AI-generated image is newsworthy and should be included in a story, it should be clearly identified as an AI product.
3.3. If you use AI to rework your own content, you should likewise make clear whether and under what conditions you used it. This includes summarizing, adapting content into audio or video, bundling articles, translating into other languages, and so on.
4. Recognize AI misuse and abuse.
Journalists aren’t the only ones using AI to create and disseminate information. AI can be a useful tool for intentionally spreading misinformation, in large quantities and under a veneer of plausibility. Journalists should scrutinize the material they gather to determine whether it was generated by AI, tampered with, or distributed in an unusual way using AI. They should also be prepared to issue corrections if their stories are altered by AI in ways that deviate from their original intent.
5. Use the convenience of tools as a means to fundamental goals.
AI can be used effectively to reduce the labor and time spent gathering, writing, and processing information, but it would be a waste to use it simply to churn out exponentially more content. Instead, we should use the leeway created by the convenience of these tools to pursue deeper journalism. Examples include:
5.1. If AI can easily map out the landscape of social issues, it means that humans can weigh the importance of each agenda through better lenses. We can walk a more effective tightrope between what grabs popular attention and what deserves public attention.
5.2. If AI can easily communicate complex social truths or scientific knowledge in plain language, humans are free to link to and introduce valuable academic research that would otherwise have little contact with the public because of its difficult details. This is an opportunity for journalism to contribute to an environment where public opinion is formed through knowledge, not preference.
5.3. If AI can make it easier to foster trust between citizens and the news, humans can build on that trust to ask more fundamental, system-level questions. They can point to the underlying patterns of a problem, uncover the ingredients needed to solve it, and invite broader participation by drawing on the experience and wisdom of those already working on solutions.