Artificial intelligence (AI) is transforming industries across Asia, driving innovation, economic growth, and societal advancements. However, AI’s profound impact also brings significant governance challenges. As with any transformative technology, robust regulatory frameworks are essential to mitigate risks, ensure ethical use, and protect public interests.
Reflecting on the evolution of cybersecurity regulation may provide insight into how AI regulation might develop. This blog explores the current AI regulatory landscape in key Asian markets, highlighting how these countries are shaping their AI governance frameworks and what lessons can be drawn from their previous approaches to cybersecurity.
AI and cybersecurity regulation in Asia
The regulation of AI and cybersecurity in Asia has evolved as these technologies have become integral to economic and social structures. Cybersecurity regulation laid the foundation for managing technological risks and offers a template for AI governance. Across the region, countries are adopting varying approaches to AI regulation, influenced by their own experiences regulating cybersecurity. Understanding these parallels can help predict how AI regulations will develop and the challenges that lie ahead.
Let’s take a closer look:
Singapore: A leader in proactive and adaptive regulation
Singapore has consistently positioned itself as a leader in both cybersecurity and AI regulation. The country’s Cybersecurity Act 2018 is a comprehensive framework that mandates stringent cybersecurity practices across critical information infrastructure sectors, underscoring Singapore’s commitment to proactive governance and international collaboration.
In the AI domain, Singapore has adopted an equally forward-looking approach. Tools like AI Verify, developed by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), enable organisations to assess the transparency and accountability of their AI systems, akin to how cybersecurity frameworks evaluate the resilience of digital defences.
Singapore also promotes innovation within regulatory boundaries through sandbox testing environments, allowing companies to trial AI technologies in a controlled setting. As reflected in the Model AI Governance Framework, this adaptive approach demonstrates how lessons from cybersecurity—such as the importance of rigorous testing and compliance—can inform AI regulation.
Japan: From voluntary guidelines to stricter oversight
Japan’s regulatory approach in both cybersecurity and AI has historically emphasised voluntary guidelines and industry self-regulation. The Cybersecurity Management Guidelines issued by the Ministry of Economy, Trade and Industry (METI) initially focused on voluntary compliance. However, as cyber threats have intensified, Japan has implemented stricter measures, particularly in sectors critical to national security.
Similarly, Japan’s AI regulation is transitioning from a voluntary model towards more formal oversight. AI Utilisation Guidelines are evolving, with the government moving towards stricter regulations for high-impact AI applications in sectors like healthcare and finance. This shift parallels Japan’s approach to cybersecurity, where mandatory requirements have increasingly reinforced voluntary practices as the risks associated with these technologies have become more apparent.
South Korea: Building trust through clear and transparent regulations
South Korea’s approach to cybersecurity has been characterised by comprehensive and transparent regulatory frameworks, such as the Act on Promotion of Information and Communications Network Utilization and Information Protection. These frameworks are designed to protect critical infrastructure and build public trust—a principle carried over into South Korea’s AI governance strategy.
The National AI Strategy reflects South Korea’s commitment to fostering public trust in AI technologies. By establishing clear guidelines and ethical standards, South Korea aims to create a regulatory environment where innovation can thrive without compromising public safety or trust. This strategy mirrors the country’s cybersecurity efforts, emphasising transparency, accountability, and the protection of sensitive data.
China: A prescriptive and controlled regulatory environment
China’s regulatory environment for cybersecurity and AI is highly prescriptive, reflecting the government’s focus on control and oversight. The Cybersecurity Law and the Personal Information Protection Law (PIPL) are central to China’s efforts to regulate digital technologies, imposing strict requirements on organisations handling sensitive data.
China has adopted a similarly stringent approach to AI. Regulations such as the Provisions on the Management of Deep Synthesis Technology, alongside guidance documents like the Artificial Intelligence Standardisation White Paper, outline comprehensive governance frameworks for AI, particularly in areas such as algorithm development and content moderation. This prescriptive approach aims to align AI development with state objectives, ensuring that AI technologies support social stability and national security—just as cybersecurity regulations are designed to safeguard the digital landscape.
Taiwan: Moving towards AI regulation
Taiwan is advancing its AI regulatory framework. The National Science and Technology Council (NSTC) has drafted an AI law focusing on the use, reliability, and risk mitigation of AI technologies. This law is expected to be sent to the Cabinet for approval in October 2024, marking a significant step in Taiwan’s commitment to developing a robust AI governance framework. This mirrors Taiwan’s earlier efforts in cybersecurity, where the government introduced strict guidelines to protect against cyber threats while promoting technological innovation.
Taiwan is actively engaging with industry stakeholders and experts to ensure that the AI law is both comprehensive and adaptable to the rapidly evolving technological landscape. The government aims to strike a balance between fostering innovation and ensuring that AI technologies are implemented safely and ethically. By building on its experience in cybersecurity regulation, Taiwan is positioning itself as a key player in the global AI regulatory environment, demonstrating a strong commitment to both technological advancement and public safety.
Australia: Transitioning from voluntary guidelines to targeted regulation
Australia’s cybersecurity regulation traditionally combined voluntary practices with mandatory requirements for critical infrastructure, as exemplified by the guidelines provided by the Australian Cyber Security Centre (ACSC). Over time, Australia has moved towards more stringent oversight, reflecting the growing importance of cybersecurity to national security and economic resilience.
Similarly, Australia’s approach to AI regulation is evolving from voluntary guidelines to more targeted regulation, particularly for high-risk areas like privacy and data protection. The AI Ethics Framework is the foundation for this transition, focusing on transparency, accountability, and human-centred design principles. As with cybersecurity, Australia’s AI regulation will likely become more prescriptive as the risks associated with AI technologies become clearer.