By Robert D. Atkinson
Everyone, it seems, is all in a lather about AI, especially after the release of the most recent ChatGPT large language model. Elon Musk says AI will kill us all if we don't act now. More restrained voices predict just the end of work and the end of truth. Now the best way to get attention is to "cry AI." Indeed, we are rapidly ascending to peak AI panic.
Although the vast majority of these breathless claims are nonsense, policymakers around the world are panicking, rushing to be first out of the gate in regulating this supposedly menacing technology. With its longstanding embrace of the precautionary principle, Europe is leading the charge. And for some bizarre reason, countries are competing not to be the best in AI, but the best ― or worst, depending on how you view it ― in AI regulation.
Korea is certainly trying with its proposed new Law on Nurturing the AI Industry and Establishing a Trust Basis. To be sure, the "nurturing" part is positive. And to his credit, President Yoon Suk Yeol has committed to supporting AI research and entrepreneurship, including spurring cooperation on AI education and research.
But the "trust" component appears more problematic. Indeed, the legislation mirrors the EU's in the fact that both are based on the faulty premise that regulation fosters trust which in turn fosters AI use. But that is only the case if regulation does not harm AI innovators or users, either by restricting needed capabilities (such as limiting data use) or by imposing significant compliance costs.
In the case of the proposed Korean law, the regulatory requirements on "high-risk" systems are comparable to those in the EU's AI Act, and like it, the law may encompass many more AI applications and services than necessary. It makes sense to regulate AI in automobiles, but not necessarily in areas such as biometrics and employment. And even for high-risk systems like automobiles, there is no need for AI-specific regulation. Governments don't regulate light bulbs in general, but national motor vehicle regulators do regulate car light bulbs, because they are a core component of cars, which those regulators already oversee for safety. Similarly, there should be little need for AI rules in employment, given that it is already illegal to discriminate against workers.
One sign that Korean policymakers should take a deep breath and slow down is that the legislative language cites several past cases of problematic AI applications. But all of them were less about the use of AI than about problematic, and potentially illegal, business practices. Companies can and do discriminate against workers with or without AI. The internet will have hate speech and misinformation with or without AI. There is little or no need to single out AI for regulation when the problems in question can exist with or without it. Doing so will only slow AI development and adoption.
As such, Korean policymakers might do well to consult ITIF's 10 principles for regulation that does not harm AI innovation. These include not holding AI to higher standards than humans face; regulating sectors, not AI itself; defining AI precisely; and treating all firms (large and small, foreign and domestic) equally.
Finally, large language models and other AI systems rely on data for their effectiveness. But the Korean Personal Information Protection Commission (PIPC) doesn't seem to understand this, or if it does, it does not care. Its chairman, Ko Hak-soo, recently stated that "we are checking whether Korea's data has been used for ChatGPT's AI model building and how the data has blended in." Ko also bragged about fining U.S. companies Google and Meta a combined 100 billion won for "collecting personal information without users' consent and using it for personalized online advertising and other purposes."
But what difference does it make if Koreans' data is used to power an AI model or serve up a more targeted ad, if only the software system uses the data and no personally identifiable information is shared with humans? No one has less privacy because of it. Ko noted that over 2.2 million Koreans have used OpenAI's chatbot, but clearly he wants them to be free riders, using ChatGPT without contributing data. That is a recipe for weak and inaccurate AI.
So, let's hope Korean policymakers do not let the siren song of Brussels seduce them onto the rocks of innovation-crushing regulation. Instead, they should resist the call and steer toward the open waters of AI progress so that Korean firms can prosper globally.
Robert D. Atkinson (@RobAtkinsonITIF) is the president of the Information Technology and Innovation Foundation (ITIF), an independent, nonpartisan research and educational institute focusing on the intersection of technological innovation and public policy. The views expressed in the above article are those of the author and do not reflect the editorial direction of The Korea Times.