
[IAAE] A guide to using AI safely for users and developers

Note: This is the second article in 4i MAG’s community project series with the International Association for AI and Ethics (IAAE). The interview was conducted with Kim Myuhng-joo, the president of the association.

After a long period of work, South Korea legislated the Framework Act on Artificial Intelligence (AI) Development and the Foundation of Trust in January, focusing on both the healthy advancement of AI and the protection of citizens’ rights. Alongside this effort, the Korean government launched the AI Safety Institute under the Electronics and Telecommunications Research Institute last year — a centre that collaborates with AI-focused companies to strengthen international competitiveness through safety regulations and guidelines related to AI.

4i MAG spoke with Kim Myuhng-joo, head of the AI Safety Institute and president of the International Association for AI and Ethics (IAAE), to ask how users and developers can use AI tools safely today.

In your definition, what is AI ethics?

Ethics is generally considered a way of putting oneself in another person’s shoes. It is broader and more diverse than the concept of morality, as individuals may hold differing views on ethical issues — even when it comes to written regulations or laws. At its core, ethics involves forming social agreements based on a range of beliefs and perspectives and fostering an understanding of one another’s differences. In this context, AI ethics is the process of addressing the challenges that arise from AI usage through mutual agreements within society.

What should AI users keep in mind? For example, when people use large language models (LLMs) for study or work — what should they be aware of?

LLMs often produce responses that seem more polished or informed than those the average person in a given field might provide. With that in mind, many students and job seekers now appear to use AI to help with open-book or take-home exams, or to write cover letters, for example.

At the end of the day, seeking assistance from AI is a natural reaction — it undoubtedly improves human efficiency. I don’t think it makes sense to penalise or criticise individuals simply for using AI at work unless there is a clear prior agreement restricting its use. When assessing someone’s qualifications or competence, it’s crucial to have a social agreement on whether AI tools are allowed. If examiners decide not to permit AI use, then they should have a method to detect its use and apply penalties to those who breach the agreement.

On a different note, I allow my students to use AI for assignments I set. However, I ask them to write their own reflections on the AI-generated response. This makes it a more challenging task, as they can’t simply repeat what the AI has said — they must produce something of equal quality but in their own voice.

It’s also important for users to be competent enough to recognise whether AI-generated content is accurate or not. AI sometimes produces information that is entirely fabricated, a phenomenon known as “hallucination”. This is why education remains vital: to equip users with the skills to detect hallucinations in AI-generated content and maintain control over its use.

Are there any other considerations for AI users?

It’s essential to understand the potential risks of using AI. For example, during the pandemic years, more than 700 companies reportedly used AI as part of their human resources interview process. We might assume that AI makes more objective decisions than humans, but that’s not necessarily true. These systems can be trained on historical data that reflects past biases — ranging from gender to university background — and may unknowingly replicate discriminatory patterns. As an interviewer, you should be aware that AI has the potential to make flawed or biased decisions. As an interviewee, you should have the right to request an explanation for why you were unsuccessful in the hiring process.

Other risks include the malicious use of AI — for instance, creating sexually exploitative content using someone’s photos without their consent. While AI is a powerful tool, it also carries the risk of being used unlawfully when users have harmful intentions. Misuse and even addiction are among the other common issues that people should be aware of.

This is why AI literacy — understanding the potential risks of AI and finding ways to address them — is so important today. I believe it is essential for individuals to recognise its significance and educate themselves, and for the government to ensure that related education is provided, especially for minors.

What do developers need to consider, then?

Developers’ responsibilities go beyond simply creating a secure and high-quality product. They must also consider how it will be used once in the hands of users. As seen in the earlier example of the AI interviewer, developers should be aware of the potential for discrimination or bias and proactively screen datasets that could lead to such outcomes.

They must also recognise that the side effects or unintended uses of AI products can extend far beyond their expectations. For example, suppose developers create an AI companion intended to provide comfort to users who are feeling lonely or sad. In an unexpected scenario, a user might decide to ‘marry’ the AI, believing it to be their ideal partner. The user’s family could then complain to the developers, claiming that the AI companion has disrupted their lives.

This is why it’s essential to gather feedback from users after a product is launched, understand what kinds of issues arise, and develop solutions for future updates. If a competitor’s product encounters a problem, it’s equally important to take an interest in their challenges as well, from the perspective of shared accountability among developers.

What else should developers take into account?

We often emphasise the importance of AI literacy, but there is an underlying assumption that users are capable of learning and adapting to the technology. However, we must not overlook the “digitally disabled”: those who are unfamiliar with software or digital devices. This group includes not only older generations but also very young users and those without access to technology.

With digital inclusiveness in mind, developers should ask themselves whether anyone might be excluded from using their AI product. Designing with accessibility and equity in mind is a key part of responsible innovation.

Sunny Um is a Seoul-based journalist working with 4i Magazine. She writes and talks about policies, business updates, and social issues around the Korean tech industry. She is best known for in-depth explanations of local issues for readers who need a better understanding of the Korean context. Sunny’s work has appeared in prominent Korean news outlets, such as the Korea Times and Wired Korea. She currently makes regular writing contributions to newsrooms worldwide, such as Maritime Fairtrade, a non-profit media organization based in Singapore. She also works as a content strategist at 1021 Creative. She holds a Master’s degree in Political Economy from King’s College London and loves to follow news on Korean politics and the economy when she’s not writing.