Note: This article marks the first in 4i MAG’s community project series with the International Association for AI and Ethics (IAAE). The interview was conducted with Jeon Chang-bae, Chairman of the association.
Since the update to ChatGPT-4o, which expanded its image generation tools to support a "vast variety of image styles", according to OpenAI, social media around the world has been flooded with AI-generated artwork imitating well-known pop-art styles.
One of the most viral cases involved images in the style of the Japanese animation studio Studio Ghibli, which saw ChatGPT record over 1.2 million daily users and its servers struggle to keep up with the soaring demand. The trend also sparked concerns and debate over ethical challenges such as copyright: are AI-powered tools allowed to reproduce copyrighted works without the artists' permission?
With interest in AI ethics on the rise, 4i MAG met Jeon Chang-bae, chairman of the International Association for AI and Ethics — one of the biggest AI ethics organisations with more than 1,200 members — to hear his insights on the concept.
Could you briefly introduce yourself?
I currently lead the non-governmental organisation IAAE in South Korea. I first came across the concept of AI ethics in 2019 while writing an editorial column, and I discovered the significance and impact it could have on our future. Having worked in the tech industry for over ten years and studied ethics at university, I was intrigued to learn more. As a parent of two children, I also want to protect their generation from the potential problems related to AI, ensuring that it is used for good — for all of us.
What are IAAE’s main projects?
First, we aim to raise awareness about AI ethics. We run campaigns that provide information on AI and related ethical considerations. Our second focus is establishing an identification and verification process for AI-powered products. Much like a quality assurance (QA) process, we develop and apply a verification system for AI-driven products that could have a significant impact on people. Lastly, we host an annual award ceremony to recognise companies and individuals who demonstrate outstanding commitment to respecting AI ethics.
Alongside these projects, we also conduct educational sessions on AI ethics. To date, we’ve hosted more than 300 sessions with AI experts registered under our association.
Governance is also a key part of our work. The association has been involved in developing regulations concerning AI ethics, and more recently, we collaborated with the government and the National Assembly to create AI ethics guidelines. In addition, the association has hosted an annual seminar on AI ethics at the National Assembly for the past three years, fostering communication among lawmakers, tech experts, and citizens.
Tell us more about AI ethics. What is it, and why should we care?
Various side effects and unintended consequences of AI-powered products fall under the scope of AI ethics. Some of the most common challenges include algorithmic bias, safety concerns caused by errors, misuse and abuse of the technology, copyright infringement, autonomous weapons (“killer robots”), and threats to job security.
Naturally, these issues arise across a wide range of devices. For example, automated robots can pose a risk to people if they malfunction — especially since many are heavy and potentially dangerous. Smartphones are another example; as many have pointed out, over-dependence on them may negatively affect our health.
What would be some of the key AI ethics principles that developers and consumers should refer to?
For developers and corporations, a stronger sense of ethical responsibility is essential, especially considering the far-reaching impact of AI compared to other technologies. They need to be aware of the potential harm and risks their technologies might cause, starting from the earliest stages of development.
When it comes to consumers, there is currently no clear standard or regulation to restrict AI usage. However, that doesn’t mean people are free to use AI tools however they like, without limitations. I would encourage users to ask themselves a simple question whenever they use AI: “Does this use of AI trouble my conscience?” If the answer is “yes”, then it may be best to stop using AI for that purpose.
Of course, this kind of self-reflection requires a shared sense of morality underpinned by ethics education. For instance, children who haven’t received proper ethics education might not recognise the harm in using AI for malicious purposes — such as creating a deepfake image of a classmate without their consent. They might simply say, “What’s the big deal? I did it for fun.”
This is why ethics education is so crucial in today’s world. Structured and systematic lessons on the topic should be introduced in schools to help build a foundation of ethical awareness from a young age.
It seems like more people are talking about AI ethics since the latest update of ChatGPT. What are your thoughts on the Ghibli-style AI image trend?
This issue should be viewed from two different perspectives: legal and ethical.
From a legal standpoint, there may not be a problem. An art style is considered an idea, not a specific artwork, and therefore is not protected by copyright law. So, generating images in a particular style typically wouldn’t pose legal issues — unless the output closely resembles an existing, specific artwork. In that case, it could be considered plagiarism and a violation of copyright law.
From an ethical perspective, however, the trend can be problematic, as it may go against the beliefs and values of the original creator. The artist behind the Ghibli style, Mr Hayao Miyazaki, is widely known as a pacifist. If his style is used to promote military conflict, it becomes deeply inappropriate.
For example, Israel's official account created and posted a Ghibli-style image of its soldiers on social media, something that Mr Miyazaki would likely never have supported. Ethically, this raises serious concerns.
What should be our stance on AI in the era of coexisting with the technology?
AI should be controlled by humans — not the other way around. When we look at the rapid pace of AI development, it often feels as though humans are being pulled along, gradually losing control. The problem could become far more serious if AI systems gain autonomy and operate independently of human oversight.
That’s why we need to begin discussing whether it is ethically right to grant autonomy to AI. And if we do choose to give AI a certain level of autonomy, we must also find reliable ways to maintain human control. Allowing AI greater independence without safeguards or restrictions could pose a real threat to humanity.