Note: This article marks the third in 4i MAG’s community project series with the International Association for AI and Ethics (IAAE). The following is based on an interview with Jay Cho, a director of the IAAE and of Kakao Entertainment.
Compared to other sectors, the impact of AI advancements appears especially pronounced in the entertainment and content production industries.
YouTubers and TikTokers, for instance, have been creating AI-generated covers using the voices of their favourite artists — Taylor Swift among them — and producing films with generative AI tools like Midjourney and ChatGPT. Smaller beauty brands have already begun replacing human models with AI-generated fictional ones, and AI-made adverts for games are now commonplace on YouTube. With Google’s recent introduction of Veo 3, widely reviewed as producing the most realistic clips to date, the influence of AI in this space is expected to grow even further.
To reflect on how AI has shaped and will continue to shape content production, 4i MAG spoke with Jay Cho, Director of Kakao Entertainment, who has witnessed these changes first-hand from the frontlines of the K-pop and wider entertainment industry.
What are your thoughts on some of the industry’s more prominent developments, such as virtual humans like Davis, who debuted last year as part of K-pop group Aespa’s narrative universe?
When it comes to virtual humans, I believe the emphasis shouldn’t be on how technically sophisticated they are but rather on how naturally they can be presented to the public through storytelling. Many of these virtual characters are built using similar engines, which makes it difficult for them to stand out purely on a technical level.
During the pandemic, it was largely visual effects companies and developers driving this space. But now, storytelling-focused companies are taking the lead. A strong example of this shift is PLAVE, which received an enthusiastic response thanks to the strength of its narrative.
Let’s talk more about AI’s influence on content production. Has it significantly changed the production environment?
Content production involves collaboration between experts from a wide range of fields to create something for the public. Recently, we’ve seen each sector within the production process begin adopting AI technologies in its own way, even from the initial brainstorming stage.
Of course, there are still areas where stability and refinement are needed in the long term, but we’re closely monitoring how AI is being integrated across the industry.
What are your thoughts on unauthorised AI-generated content that may risk infringing copyright, such as AI voice covers or deepfakes?
We’ve been monitoring cases where AI-generated content has been created without the artist’s permission. It’s a complex issue — on the one hand, it can be seen as a form of entertainment for the public to engage with, but on the other, it can pose serious concerns about copyright infringement and the protection of artists’ rights.
There are broader benefits to AI content, such as giving fans the opportunity to be creative or allowing underrepresented artists to gain visibility. However, if it’s used recklessly or with malicious intent, companies must be prepared to take firm action.
Ultimately, we’re keeping a close watch on these developments. Rather than jumping straight to regulation, I believe it’s crucial to foster a shared sense of moral and ethical responsibility among the public.
What are your thoughts on AI replacing jobs across the industry, including roles like actors and voice actors?
AI is indeed displacing many human roles. In the past, content production was primarily driven by broadcasters or large video production companies equipped with advanced technologies. But now, the public has easy access to similar tools, including AI-generated virtual characters and voice synthesis, which empowers them creatively. While this is undoubtedly a strength for users, it also poses significant risks for professionals who have traditionally held these roles.
When it comes to these developments, I believe we’ll need to observe how the production environment continues to evolve. Rather than immediately turning to regulation, we may need to allow the market to adjust and stabilise itself over time.
Are there any AI-related ethical principles that content producers should bear in mind?
I think the most important thing is not to exploit the hard work of others. Content producers can certainly create new works by combining their imagination with existing materials, but if this process ends up undermining the efforts of original creators, it could discourage them from producing anything new, and that would ultimately harm the entire ecosystem. It’s perfectly fine to utilise AI, but producers must remain aware of and respectful toward the creators behind the original content they draw from.
As I mentioned earlier, using someone else’s IP for commercial purposes without permission or producing harmful content, such as material designed to spread misinformation, also needs to be taken seriously from an ethical standpoint.
What’s your outlook on the role of AI in content production?
AI has become deeply integrated into the industry, not only in what we see in the final content but also throughout the production process itself. The entire system behind content creation is evolving, and we can expect even more changes ahead.
At this point, AI is already mainstream. We’ve moved beyond the stage of questioning which AI technologies will have an impact; they’re already shaping the industry. As these tools continue to develop, those who know how to make the most of AI will have a greater chance of producing high-quality content.
As a result, while the content landscape once centred around individual talents and well-established creators, the focus is shifting. The new environment will favour those who can effectively harness AI to make compelling work.