
A Global Step Forward: the EU AI Act, Canada’s AIDA, and the US AI Bill of Rights

Tech Law

Introduction

Artificial Intelligence (AI) has become a powerful tool with vast potential for societal transformation. However, its rapid advancement has raised significant concerns regarding privacy, security, and ethical implications. To address these challenges, governments worldwide are actively working on AI regulations. In this blog, we will provide an analysis of three critical AI regulatory initiatives: the EU AI Act, Canada’s AIDA, and the US AI Bill of Rights. While the EU has emerged as a leader in AI regulation, Canada and the US have faced criticisms for their comparatively slower progress.

The EU AI Act: A Comprehensive Framework

The EU Artificial Intelligence Act, proposed by the European Commission in April 2021, is a pioneering legislative initiative that aims to ensure responsible AI development and use. It establishes a comprehensive regulatory framework that classifies AI systems into four categories based on their risk level:

  1. Unacceptable Risk: AI systems considered a significant threat to fundamental rights are strictly banned within the EU.
  2. High Risk: AI systems used in critical areas, such as healthcare, transport, and infrastructure, undergo rigorous risk assessments and must meet strict requirements before being placed on the market.
  3. Limited Risk: AI systems that do not fall into the high-risk category but still require specific transparency measures and human oversight.
  4. Minimal Risk: AI systems with minimal potential to harm individuals or society, which are subject to lighter regulation.

The EU AI Act also sets obligations for AI providers, including transparency, human oversight, and data protection. By imposing stringent requirements on high-risk AI systems and promoting transparency across all AI applications, the EU aims to ensure that AI respects fundamental rights, safety, and ethical considerations.

The AI Act in a nutshell:

  • Every AI initiative would be wise to perform a risk analysis.
  • The Act defines AI broadly, covering wider statistical approaches and optimization algorithms as well.
  • Human rights are at the core of the AI Act, so risks are assessed in terms of potential harm to people.
  • Based on risk level, an estimated 10% of AI applications are expected to require special governance.
  • That special governance includes public transparency and documentation, auditability, bias countermeasures, and human oversight.
  • Some applications will be banned outright, such as mass facial recognition in public spaces and predictive policing.
  • For generative AI, transparency obligations include disclosing which copyrighted sources were used.
  • To illustrate the stakes: fines can run to a percentage of a company’s global annual turnover, so if OpenAI were to violate this rule, its backer Microsoft could face a fine on the order of 10 billion dollars.

Canada’s AIDA: Striving for Trust, Transparency, and Accountability

Canada’s approach to AI regulation comes in the form of the Artificial Intelligence and Data Act (AIDA). Introduced as part of the Digital Charter Implementation Act (DCIA), AIDA focuses on building trust and accountability in AI systems.

AIDA places a strong emphasis on transparency, requiring organizations to provide users with comprehensive information about how AI systems function, use data, and make decisions. It also mandates algorithmic impact assessments to identify potential biases and discriminatory outcomes before AI deployment.

AIDA also grants individuals the right to access and control their data within AI systems, aiming to balance AI innovation with respect for human rights. Critics argue, however, that Canada’s progress on AI regulation has been slow, leaving risks unaddressed, and they urge faster implementation of AIDA along with stronger regulations.

Bill C-27, titled “An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts,” received its first reading on June 16, 2022, introduced by the Minister of Innovation, Science, and Industry. At the time of writing, the bill remains in the early stages of the legislative process; further developments will determine its eventual impact on AI regulation and data protection in Canada.


US AI Bill of Rights: Empowering Citizens in the AI Era

In the United States, the approach to AI regulation takes the form of the Blueprint for an AI Bill of Rights, published by the White House Office of Science and Technology Policy in October 2022. It emphasizes empowering citizens and protecting them from harmful automated systems, and is built around five key principles:

  1. Safe and Effective Systems: Individuals should be protected from unsafe or ineffective automated systems.
  2. Algorithmic Discrimination Protections: AI systems should be designed and used equitably and should not contribute to unjustified discrimination.
  3. Data Privacy: Individuals should be protected from abusive data practices and have agency over how data about them is collected and used.
  4. Notice and Explanation: People should know when an automated system is being used and understand how and why it affects them.
  5. Human Alternatives, Consideration, and Fallback: Where appropriate, individuals should be able to opt out of automated systems in favor of a human alternative.

The blueprint aligns with other voluntary efforts by various entities to establish transparency and ethics rules for AI. However, the US AI Bill of Rights faces criticism for being voluntary and non-binding, which may limit its impact. Critics also fault its generality and lack of specific enforcement measures, prompting calls for more concrete and comprehensive AI regulation in the US.


Conclusion

AI regulations have become a global priority to address the potential risks and ethical implications associated with AI technologies. The EU AI Act, Canada’s AIDA, and the US AI Bill of Rights represent significant steps in this direction, each tailored to their respective regions’ values and priorities.

The EU leads with a proactive approach, while Canada and the US face criticism for the slower progress of their AI regulation. Both must accelerate their efforts, engage experts and stakeholders, and prioritize human rights, privacy, and ethics. A future in which AI benefits society while respecting dignity and rights is attainable through strong regulatory frameworks.

AI’s rapid evolution continues to outpace governments’ ability to provide comprehensive regulation: private-sector innovation moves faster than policy-making, and the complexity of AI makes it difficult for regulators to keep up. This disconnect raises concerns about unaddressed risks, bias, and privacy violations. Bridging the gap with effective regulation is vital to harnessing AI’s benefits responsibly and protecting human rights and societal well-being.