
An Overview of AIDA


The Artificial Intelligence and Data Act (AIDA) is proposed legislation aimed at regulating the development and use of artificial intelligence (AI) in Canada. As part of the Digital Charter Implementation Act, 2022 (Bill C-27), AIDA seeks to establish a framework for the responsible development and use of AI that aligns with international norms.

Since its introduction, AIDA has undergone significant changes, with the government releasing multiple amendments to the bill. However, despite these efforts, many stakeholders have raised concerns about the bill’s current state. As the bill continues to evolve, it is essential to carefully consider these concerns and ensure that any changes made align with the needs and values of all stakeholders.

In this newsletter, we will explore the current state of AIDA and examine the proposed amendments, as well as the key recommendations put forth by various organizations and experts in the field.

News

In a move towards greater transparency and accountability in AI development, the European Union's (EU) AI Act recently entered into force. The regulation aims to establish clear guidelines for the development, deployment, and use of AI systems that respect fundamental rights and values. While not directly related to AIDA, this development emphasizes the need for a comprehensive framework that builds ethical considerations into AI development.

The EU's approach serves as a model for other jurisdictions, including Canada, to develop similar regulations. As outlined in the AIDA companion document, one of the key goals of this legislation is to ensure alignment with regulations around the world. As we will discuss below, AIDA should include mechanisms for ongoing review and public consultation to ensure that AI systems align with Canadian values and respect individual rights.

Speaking of rights, it has been a few years since news coverage first highlighted concerns over the use of AI-powered facial recognition technology in Canada. The Toronto Police Service had been testing the technology, raising questions about privacy and potential bias. The practice has since become prevalent enough that AI-powered surveillance was deployed at the Paris 2024 Olympics.

Our recommendation is that AIDA should include robust mechanisms for ongoing review and public consultation to ensure that AI systems respect individual rights and are transparent in their use. The Toronto Police Service’s pilot project serves as a reminder of the importance of careful consideration and regulation around AI-powered facial recognition technology.

Lastly, the Canadian government has made it clear, through budget policy and other directives, that AI is a key part of our economic future. This strategy serves as a reminder of the importance of balancing economic growth with social responsibility and ethical considerations in AI development. We want AI to be a boon for the economy, but only if the prosperity generated from the use of the technology is shared appropriately with society.

AIDA Overview

Canada has had an eye on AI for some time, as outlined in the Pan-Canadian AI Strategy. AIDA is an attempt to reconcile the needs and views of various stakeholders, creating regulation that, ultimately, balances the promise of AI against its risks and sets Canada up to be a major player in this space. One of the best ways to get a handle on the intent and context of AIDA is to read the companion document. An important disclaimer: the companion document does not form any actual part of the law, so one of the critical exercises when evaluating AIDA is to determine how well the bill's text fits the companion document's intent.

From the public stakeholder perspective, a series of public consultations was conducted in 2020 to better understand the public's perception of AI, both in terms of literacy and in terms of opportunities and risks. Some of the areas probed were quite interesting, as shown in the following excerpt.

This gap in confidence with assessing questions of ethics is further reinforced when comparing the results to the prompt, "Computers can be programmed to make ethical decisions". When asked to assess their level of agreement with a series of statements on the capabilities of AI, 42% of respondents agreed, with 38% responding negatively and 19% unsure.

I think there is a lot to unpack in the above statement, most of which is outside the scope of this overview newsletter. Digital ethics, the examination of technology's influence on society, is a broad subject under active research and might be an interesting topic for an upcoming issue. To pique your interest a bit: you could describe ethics as taking a nuanced view of situations from an ever-changing (and evolving) human context, where the goal is to balance overall value alignment across a broad range of viewpoints, situations, and decision categories. Imagine working that into a set of strict rules and binary logic.
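To make that concrete, here is a deliberately naive sketch of what "ethics as strict rules and binary logic" might look like. The rules and scenarios are entirely hypothetical, invented for illustration, and not drawn from any real ethics framework:

```python
# A toy rule-based "ethics check" (hypothetical rules, for illustration only).
# Each rule is defensible in isolation, yet binary logic strips out the human
# context that actually drives ethical judgment.

def is_ethical(action: dict) -> bool:
    if action.get("causes_harm"):         # Rule 1: never cause harm...
        return False
    if action.get("involves_deception"):  # Rule 2: never deceive...
        return False
    return True

# Both return False, though a human would judge the situations very differently.
print(is_ethical({"causes_harm": True, "context": "emergency surgery"}))
print(is_ethical({"involves_deception": True, "context": "surprise party"}))
```

The context field is right there in the data, but the rules have no way to weigh it; that gap is roughly the problem.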

From a private stakeholder perspective, the bill recognizes that Canada is a leader in AI and that AI use is becoming ubiquitous across all manner of industries. While the bill recognizes areas where AI has had a positive impact, for example new smart products or improved search, it also recognizes that the technology can have negative unintended consequences.

From a society stakeholder perspective, the bill recognizes risks from the use of AI that can't necessarily be predicted. This includes the risk of bias and the risk of AI being used as a tool for disinformation campaigns. These areas have been a key concern for the Canadian government. If you want a bit of a primer on this subject, have a look at The Dais institute's report on it.

To summarize, here are three key reasons why AIDA is needed and what the bill tries to address:

  1. To address existential or catastrophic risk from the use of AI
  2. To regulate high-impact AI systems
  3. To ensure transparency, explainability, and robustness of deployed AI systems

Key Terms

As with much regulation, it is hard to get a full appreciation of the extent of the law without examining the key terms used within it. As we will discuss later, issues with these key terms underpin much of the current opinion, for or against, on the bill.

Artificial Intelligence System

From the bill:

means a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.

From the definition, it is clear that the text of this bill is focused on technological systems. I think there are a couple of important questions to ask here. First, what is actually included in a technological system? The philosophy of technology would suggest that technology is concerned with describing "how the world ought to be". When we consider these systems, should we also be considering the people and processes around them? Is there a minimum "amount of technology" that makes a system "technological"?

The second question is also about scope. If this bill is limited to technological systems, do we have current legislation covering "non-technological" systems? If so, what would that legislation be, and is there anything in this legislation that could not be broadly applicable to all types of systems? Would it be better to take a position on all "high-impact" systems, technological or not?
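As a thought experiment on that "minimum amount of technology" question, here is a minimal sketch of a program that arguably meets every element of the definition above. The data and threshold are made up for illustration:

```python
# How little code might satisfy the bill's definition? This toy "system" uses
# machine learning -- a one-feature least-squares fit -- to make a prediction
# about a human activity. Entirely hypothetical data.

# Hours-studied vs. exam-passed: "data related to human activities"
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 0.0, 1.0, 1.0, 1.0]

# Fit y = a*x + b by ordinary least squares (this is the "machine learning")
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# "Autonomously ... make predictions": predict whether a new student passes
hours = 3.5
print("predicted pass" if a * hours + b >= 0.5 else "predicted fail")
```

A one-feature least-squares fit is unambiguously "machine learning", it processes "data related to human activities", and it makes a prediction autonomously, yet few would intuitively call this a system worth regulating.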

Person

From the bill:

includes a trust, a joint venture, a partnership, an unincorporated association and any other legal entity.

I think the obvious question is, am I, as an actual person, a person according to this bill?

Biased Output

From the bill:

means content that is generated, or a decision, recommendation or prediction that is made, by an artificial intelligence system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in section 3 of the Canadian Human Rights Act, or on a combination of such prohibited grounds. It does not include content, or a decision, recommendation or prediction, the purpose and effect of which are to prevent disadvantages that are likely to be suffered by, or to eliminate or reduce disadvantages that are suffered by, any group of individuals when those disadvantages would be based on or related to the prohibited grounds.

It might take a read or two to parse the above. Effectively, as I understand it, a biased output is something (or some outcome) that adversely differentiates on prohibited grounds, except when the purpose of the system is to reduce or remove that disadvantage. It makes sense that we don't want AI to perpetuate the bias that already exists in our society, and, where the technology allows, we should use it to reduce that bias. I also like that this definition focuses on outcomes rather than just input data, a distinction that isn't generally made.
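Because the definition is outcome-focused, a compliance check would look at what the system decides, not just what data went into it. Here is a minimal sketch of that idea; the groups, decisions, and the 0.2 threshold are all hypothetical and not from the bill or any regulation:

```python
# A minimal, hypothetical sketch of outcome-based bias measurement: compare
# positive-outcome rates across groups in the system's actual decisions.

from collections import defaultdict

def selection_rates(decisions):
    """Positive-outcome rate per group, from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

# Made-up decisions from some AI system: (group, approved?)
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
print(rates)  # A: ~0.67, B: ~0.33 -- an outcome gap that needs justification

# A crude flag in the spirit of "adversely differentiates ... without
# justification"; the 0.2 threshold is arbitrary, not from the bill.
gap = max(rates.values()) - min(rates.values())
print("review needed" if gap > 0.2 else "ok")
```

Note the definition's carve-out: the same gap would be acceptable if the system's purpose and effect were to reduce an existing disadvantage.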

High-Impact System

From the bill:

means an artificial intelligence system that meets the criteria for a high-impact system that are established in regulations.

This is one of the major areas of contention within the bill. Effectively, the government declines to put an actual definition in the text of the bill itself, instead deferring to a regulator/commissioner to define it at a later point. The main argument is that this keeps the bill flexible as the societal context changes (what we value, for example) or as regulations in other countries emerge (allowing us to match them more fluidly). It makes sense as a policy instrument but leaves a lot of wiggle room after the fact.

Major Issues

It is actually a bit difficult to summarize the key issues with this bill, since there is so much interest and content to review. There are issues not only with the content of the bill, but also with its structure and with the policy mechanisms in play to ensure a balance between compliance and flexibility. There is also the unique Canadian context to consider, which includes recognizing that Canada is (generally) a follower nation when it comes to this type of policy, and weighing the strength of our institutions (for example, the CRTC) to actually regulate and govern the industry.

There is a prevailing sentiment that we must get the bill "perfect" before passing it. If you read the companion document and the bill itself, you get the sense that they are trying to do "just enough" in legislation while leaving the bulk of the details to a regulatory body to figure out later. Many concerns have been raised about this approach, recognizing that Canada isn't particularly good at revisiting laws already on the books, and that there are no processes for strengthening or changing this law over time to accommodate the speed at which the technology is changing.

On the protections angle of the bill, you can likely imagine the current divide between AI producers, AI consumers, AI consumers with questionable ethics and business practices, and citizens' rights groups. The Partnership on AI's guidelines on shared prosperity take an interesting approach, trying to assess signals of opportunity and risk as they relate to shared prosperity. Looking at the risk category, it would be hard not to count several well-known brands whose sole business mission is to operate within it. Next, we will take a deeper look at a couple of briefs to understand the prevailing arguments.

What did Meta say?

In my opinion, most of Meta's brief (February 2024) could be described as being written by someone with a toddler's understanding of topics such as computer security, risk management, AI systems, public policy, societal values, influence, complex adaptive systems, and so much more. Let's review their priority amendments.

The first revolves around supporting the idea that the "high-impact" term needs to be further defined into categories of systems. However, their support amounts to a circular argument for why AI systems categorized as content prioritization do not fit the definition of "high-impact": the proposed categories are not exactly equal to each other, they argue, and content prioritization should not be on par with the other areas identified. From the brief:

The use of AI systems in these areas does not categorically rise to the same level of risk as the other areas contemplated in the high-impact list, such as employment, health care services, administrative decisions about an individual, and assistance of peace officers. Evidently, these are all scenarios in which the use of AI systems can give rise to effects of legal or similar significance. The same is not true for content prioritization.

It is pretty easy to see what they miss here: the influence of content prioritization on the other high-impact categories mentioned, specifically its role in shaping world views, creating bias where none existed before, and popularizing misinformation.

The second priority amendment relates to the technical controls defined within the bill. Their main objection is that the jury is still out on exactly how to technically control and validate AI systems, and that we should therefore defer these decisions to a later date. They reference the NIST AI Risk Management Framework, which was never going to address anything in this area, as a potential document that could be used to shape Canadian regulation. From the brief:

The regime proposed in the amendments for the regulation of general-purpose AI systems is vague and overreaching. We are specifically concerned about several provisions that set forth detailed, prescriptive obligations that are infeasible, not appropriately tailored, and misaligned in their objectives. Consensus is still developing related to the best way to approach technical requirements for GPAI, which could be significant to the creation of a GPAI framework in Canada that is interoperable with other jurisdictions. For this reason, regulation and codes of conduct developed in collaboration with industry and other stakeholders are a more desirable approach to defining the technical specifications of these requirements.

Vague and overreaching is a fair assessment of the Meta brief in general, and also of the technical controls listed within AIDA. My opinion is that regulation should not concern itself with what is currently technically feasible, but with what society needs in order to safely run AI systems; let researchers and industry fill the technology gaps as they are put to task. Later in the section, Meta argues that controls should be proportionate to the risks being addressed. Proportionality has been a cornerstone of the cybersecurity industry, and one that has led to the current insecure state of our society: risk is subjective, incalculable, and limited to the imagination of whoever performs the risk analysis. Taking action on those guesses will just cause more issues down the line.

The last of the priority amendments is about limiting the AI and Data Commissioner's audit and remote search authorities. I don't think I need to dive into this one too much: effectively, Meta is happy with a status quo in which corporations can lie to government authorities with little to no negative consequence. I will just leave this story here; note that the fine pales in comparison to the worth of Facebook. And if you think this was a one-time violation of the law, have a look at DeepFace.

What did The Dais say?

The Dais took a somewhat different approach with their brief submission (November 2023), as they had already tried to address concerns around AI in a previous report. For the brief, they organized a one-day joint multi-stakeholder roundtable discussion on the subject and identified five areas of concern. The concerns are so on point that I will just post them here for you to read directly.

Concern 1: The AIDA does not define "high-impact systems"
Proposed Amendment: Set out the factors to be used in deciding which systems are in scope, as well as deeming a minimum set of high-impact systems, with the ability to add others by regulation.

Concern 2: The AIDA aims to regulate only "high-impact" systems, leaving out broader harms associated with all AI systems.
Proposed Amendment: Broaden how AI systems are categorized beyond only "high-impact", and establish minimum transparency and accountability requirements for systems that pose "lower" levels of impact, and prohibitions for "unacceptable impact" AI systems.

Concern 3: The AIDA does not apply to public institutions.
Proposed Amendment: Public sector use of AI requires legislation.

Concern 4: The scope of harms in the AIDA is limited to individuals, excluding harms towards population groups or communities.
Proposed Amendment: Broaden the scope of harms to include impact from harms toward population groups or communities.

Concern 5: The AIDA's requirement that the ISED Minister appoint an Artificial Intelligence and Data Commissioner creates issues of regulatory independence and lack of oversight
Proposed Amendment: Establish the AI and Data Commissioner as independent from the minister, ideally through a parliamentary appointment, with sufficient resources and processes to support their function

Really not much more to add here.

Conclusion

The Artificial Intelligence and Data Act (AIDA) has generated significant debate and scrutiny since its introduction. While some see it as a step toward regulating AI in Canada, others are critical of its lack of clarity, scope, and protections for personal information.

This likely won't be our last newsletter on this topic as the bill continues to make its way through the legislative process.