Introduction
The European Union (EU) has taken a groundbreaking step in regulating artificial intelligence (AI) with the EU AI Act 2023. This landmark legislation, provisionally agreed upon on 9 December 2023, aims to ensure the safe and ethical development and deployment of AI. European Parliament and Council negotiators reached the provisional agreement on the EU AI Act after three days of intensive talks. The legislation creates a world-class regulatory framework to guarantee the safety, legality, reliability, and respect for fundamental rights of AI systems, making it the first comprehensive act of its kind devoted specifically to AI.
Significance of the AI Act
The EU AI Act is a significant step forward in regulating AI and setting global standards for responsible development. It aims to ensure that AI benefits society while mitigating potential risks and protecting fundamental rights. The Act would prohibit uses that pose an unacceptable risk and mitigate harm in public services, healthcare, education, border monitoring, and other sectors where deploying AI poses the greatest risk to basic rights. Beyond governance and enforcement, the Act seeks to promote AI innovation within the EU. In keeping with the EU's Coordinated Plan on Artificial Intelligence, which contains rules to promote a unified market for AI applications, it also aims to stimulate investment in AI throughout Europe.
The legislation is widely regarded as a potential global standard for governments seeking to harness AI's advantages while mitigating the threats associated with it. Amid a highly fragmented landscape of rules and regulations worldwide, the EU's act is considered the most comprehensive attempt to date to regulate AI.
A Look at Major Countries' AI Regulation Efforts
In October 2023, the President of the United States, Joe Biden, issued an executive order titled the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The order addresses the potential advantages and disadvantages of artificial intelligence, with a specific focus on two areas: discrimination and national security. It also seeks to ensure that all relevant agencies work together to leverage AI effectively for national security.
China has been rapidly establishing AI regulations, among them rules governing recommendation algorithms, the most widely used type of artificial intelligence on the internet. These measures also aim to ensure that AI development remains consistent with China's political and social values. China's three most significant and practical pieces of legislation on algorithms and artificial intelligence are the 2021 regulation on recommendation algorithms, the 2022 regulations on deep synthesis of synthetically generated content, and the 2023 draft regulations on generative AI. The AI governance framework China is building will help the country gain deeper insight into how the technology is developed and used, both domestically and internationally.
France has taken a broad approach to AI policy, emphasizing innovation, ethics, and regulation, and has announced a major AI investment: in July 2023 the French government committed €500 million to support the creation of AI champions. The project symbolizes France's ambition to play a significant role in the global AI landscape. The French government also passed a new artificial intelligence law in 2021 that lays out the guidelines for the country's AI regulations. Before that, the government unveiled a national AI strategy in 2018, which aims to make France a world leader in AI by 2030.
Regulations and frameworks focusing on particular aspects of AI, such as data privacy and consumer protection, have also been adopted by countries such as the UK and Japan. In 2021, the United Kingdom unveiled its National AI Strategy, which outlined objectives for the development of ethical and responsible AI. Japan, meanwhile, supports AI development as a driver of economic progress; the country actively participates in international discussions on AI regulation and aims to harmonize with global standards.
Conclusion
Unlike the fragmented approaches adopted by many other countries, the EU's AI Act offers a comprehensive framework addressing various aspects of AI development and deployment. It covers issues such as transparency, fairness, accountability, and risk mitigation, with AI systems categorized into four risk levels ranging from minimal or no risk to unacceptable risk. Other countries, including developing countries, should consider AI regulation and can learn from the EU's Act. Countries can create strong and efficient AI governance frameworks, tailored to their own needs and requirements, that support ethical AI development. This is not just a technological race; it is a race for a better world, and every country has a stake in it. Artificial intelligence has the potential to become a powerful engine of innovation, driving advancements across diverse sectors. By embracing AI responsibly, investing in its development, and addressing ethical concerns collaboratively, countries can unlock its potential to create a more prosperous, sustainable, and equitable world for all.
Research Associate, Pakistan House