Big Data Law in Canada

Chapter 10: Artificial Intelligence and Machine Learning Systems

Chetan Phull · December 12, 2019

Chapter 10 is provided below. See also our service offering related to this chapter: "Artificial Intelligence: Privacy and Automated-Decision-Making Regulations".

Special thanks to Idan Levy for his valuable legal research and editorial work in the preparation of this book.


 
 


I. Introduction: The Importance of Lawyers to AI Strategy

Artificial intelligence (“AI”) has become standard household technology, and it is proliferating across the economy wherever large datasets can be compiled. The European Parliament has even declared its outlook toward creating “a specific legal status for robots in the long run,” a development that followed news of a robot being granted citizenship in Saudi Arabia.

[See OPC, “Privacy Law Reform - A Pathway to Respecting Rights and Restoring Trust in Government and the Digital Economy” (Dec 10, 2019) at 3: “It is not an exaggeration to say that the digitization of so much of our lives is reshaping humanity”; European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics, (2015/2103(INL)); Arab News, “Saudi Arabia becomes first country to grant citizenship to a robot” (October 26, 2017); Emily Reynolds, “The agony of Sophia, the world's first robot citizen condemned to a lifeless career in marketing” (Wired Magazine, Jun 1, 2018).]

This new wave of technology has created new legal exposures, making legal planning an essential component of AI strategy. Consider Canada’s recent Directive on Automated Decision-Making alongside European privacy laws. While the former facilitates automated administrative decisions without informed consent, the latter bars automated decisions that produce legal effects on a data subject (subject to narrow exceptions, including explicit consent). AI system deployment and operational strategy need to account for such issues, as they affect the business model and apply across borders. (A simplified sketch of the GDPR rule follows the citation below.)

[See Government of Canada, Directive on Automated Decision Making, s.6.3.3 (Feb 5, 2019) with the discussion in subsection II of Chapter 2 re informed consent not being required under the Privacy Act; GDPR, Art. 22(1).]
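To make the cross-border tension above concrete for technical readers, the following is a minimal Python sketch of the Art. 22 test as just described: the prohibition is engaged only where a decision is based solely on automated processing and produces legal (or similarly significant) effects, and it yields to the narrow exceptions in Art. 22(2). The function name and boolean parameters are invented shorthand for this example, and the sketch is no substitute for legal analysis.

```python
# A simplified, illustrative model of the GDPR Art. 22(1)/(2) test described
# above. It is a sketch for orientation only, not legal advice; the function
# name and parameters are invented shorthand for this example.

def gdpr_art22_bars_decision(solely_automated: bool,
                             legal_or_similarly_significant_effect: bool,
                             necessary_for_contract: bool = False,
                             authorized_by_law: bool = False,
                             explicit_consent: bool = False) -> bool:
    """Return True if, on this simplified model, Art. 22 bars the decision."""
    # Art. 22(1) is engaged only by decisions based solely on automated
    # processing that produce legal (or similarly significant) effects.
    if not (solely_automated and legal_or_similarly_significant_effect):
        return False
    # Art. 22(2) carves out narrow exceptions, including explicit consent.
    exception_applies = (necessary_for_contract
                         or authorized_by_law
                         or explicit_consent)
    return not exception_applies

# A fully automated decision with legal effects and no applicable exception
# is barred; the same decision with explicit consent is not.
print(gdpr_art22_bars_decision(True, True))                         # True
print(gdpr_art22_bars_decision(True, True, explicit_consent=True))  # False
```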

Moreover, at the end of 2018, a Parliamentary committee recommended the enactment of a “transparency requirement”, which would provide an administrative body with the authority to audit algorithms. The proposed measures also involve fines in excess of the mere cost of doing business. In the face of an algorithm audit, there is obvious prudence in retaining a lawyer familiar with algorithmic mechanisms.

[See ETHI Committee Report, “Democracy Under Threat: Risks and Solutions in the Era of Disinformation and Data Monopoly” (Dec 2018) at 41, Recommendation 9.]

Because large datasets are required to train AI algorithms, privacy issues may also arise. The manner of training-data procurement, the training methods, and data licensing each require legal treatment. For example, under PIPEDA, it is essential to explain how and why data is used to train an AI, and the parameters within which the AI is expected to learn. Such measures help to identify the purpose for data collection, and to obtain meaningful consent for data collection and use.

[See Toby Walsh et al., Closer to the Machine: Technical, social, and legal aspects of AI, Cliff Bertram et al., eds. (Melbourne: OVIC, 2019) at 128; PIPEDA, Sch. 1, Principles 4.2-4.5.]

As observed in subsection I of Chapter 2, the practice of anonymizing data post-collection is not a panacea for consent and collection issues. Parliament is presently considering a recommendation to enact “rules and guidelines regarding data ownership and data sovereignty[,] with the objective of putting a stop to the non-consented collection and use of citizens’ personal information.” Algorithmic explainability is therefore expected to remain a key focal point for handling training data.

[See ETHI Committee Report, “Democracy Under Threat: Risks and Solutions in the Era of Disinformation and Data Monopoly” (Dec 2018) at 73, Recommendation 21.]

If the training dataset is open-source and not subject to privacy laws per se, open-source licensing issues will need to be considered. Courts in the U.S. have observed, “there is harm which flows from a party’s failure to comply with open source licensing,” including economic harm. Attempting to hide the source of training data is not advisable, since there is Canadian administrative precedent for ordering the disclosure of raw data.

[See Artifex Software, Inc. v. Hancom, Inc. (Apr 25, 2017), Case No. 16-cv-06982-JSC (N.D. Cal.); Kahn v. Upper Grand District School Board, 2019 HRTO 863.]

Moreover, AI development and operation present risks of algorithmic bias and discriminatory profiling, contrary to human rights laws and PIPEDA. Inadequate legal controls may spawn litigation, and may trigger rules of attribution that expose corporate directors, developers, trainers, contractors, testers, and system operators to liability.

[Canadian Human Rights Act, RSC 1985, c H-6, ss. 3-14, in addition to the various provincial human rights statutes; second PIPEDA “no-go zone” mentioned in subsection I of Chapter 2.]

The operation of AI systems may also raise legal issues in consumer protection, product liability, contract, tort, negligence, and trusts. On this basis, it is helpful to consider the courts’ view of AI in past Canadian and foreign cases. While Canadian courts have addressed AI in the context of legal services, the recent Australian Pintarich case is particularly helpful for its broader analysis of automated decision making.

[See Cass v. 1410088 Ontario Inc., 2018 ONSC 6959 at para 34; Drummond v. The Cadillac Fairview Corp. Ltd., 2018 ONSC 5350 at para. 10; Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79 (Fed. Ct. Aus.) at paras. 141-43, 151.]

II. Emerging International AI Law

A discussion of AI regulation begins with the relevant international instruments and principles to date. Policy frameworks for AI are currently in development by various groups of countries and international bodies, with significant advancements made in 2018 and 2019.

Arguably, the DEDPAI (Declaration on Ethics and Data Protection in Artificial Intelligence), ITechLaw, and OECD AI Principles contain the most actionable guidance, which may be consolidated and paraphrased as follows:

  1. Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet.
  2. AI should be designed, developed, and used with respect for fundamental human rights, in accordance with the fairness principle, responsibly, and with risk mitigation strategies.
  3. Continued attention, vigilance, and accountability for AI should be ensured, by means including the appointment of an AI Ethics Officer.
  4. AI systems should be transparent and intelligible, have traceable systems including with respect to datasets, and be designed to make decisions within a reasonable range of responses.
  5. AI should be designed in accordance with compliance by design, privacy by design, privacy by default, and ethics by design.
  6. Empowerment of every individual should be promoted.
  7. Possible biases and discrimination related to the use of AI should be reduced and mitigated.
  8. Rules regarding safety, reliability, and proprietary control over datasets in the marketplace should be governed under an international legal framework, so as to limit conflicts between domestic standards.

While the international community continues to forge common principles for AI regulation, Canada and various other countries are concurrently developing their own national AI frameworks. Canada’s developing framework will be discussed next.

[See OECD, Artificial Intelligence in Society (OECD publishing, 2019) at 125-136; “A Proposed Model Artificial Intelligence Governance Framework”, (Singapore: Personal Data Protection Commission, 2019); “Artificial Intelligence: Australia's Ethics Framework”, (Canberra: Strategy, Digital Economy and Business Simplification, 2019).]

III. Emerging Canadian AI Law

(a) Directive on Automated Decision-Making (“DADM”)

Canada has started to develop a national AI framework with its new Directive on Automated Decision-Making (“DADM”). The DADM took effect in April 2019 and requires compliance by April 2020 for “any Automated Decision System developed or procured after April 1, 2020.” The DADM is directed at the public sector, and applies to automated administrative decisions involving external government services. It emphasizes “core administrative law principles such as transparency, accountability, legality, and procedural fairness,” and is anticipated to evolve in order to stay relevant.

[See Government of Canada, Directive on Automated Decision-Making (Feb 5, 2019), Preamble, s.1.]

This new directive is noteworthy for its requirement of an Algorithmic Impact Assessment (“AIA”). An AIA is a questionnaire that helps to assess and mitigate the impact of an AI decision system. The AIA result assigns one of several impact levels, each carrying its own requirements. Depending on the impact level, required action items may include peer review, human oversight, testing for unintended biases, and training. A high impact level may also require operational approval by a government official. (A simplified sketch of this impact-level mechanism follows the citation below.)

[See Government of Canada, Directive on Automated Decision-Making (Feb 5, 2019), s.6.1, Appendix A “Algorithmic Impact Assessment”; Government of Canada, Algorithmic Impact Assessment (online portal, last updated Mar 29, 2019).]
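To make the AIA’s mechanics concrete for technical readers, the following minimal Python sketch models the flow described above: questionnaire answers are scored, the score maps to an impact level, and the level fixes the required action items. Every question, weight, threshold, and requirement list below is hypothetical and invented for illustration; the actual questions and levels are those of the Government of Canada’s AIA cited above.

```python
# Hypothetical illustration only: the real AIA is the Government of Canada's
# online questionnaire cited above. The questions, weights, thresholds, and
# requirement lists below are invented for demonstration.

from dataclasses import dataclass

@dataclass
class AIAResponse:
    decision_is_irreversible: bool    # hypothetical question
    affects_rights_or_freedoms: bool  # hypothetical question
    uses_personal_information: bool   # hypothetical question
    decisions_per_year: int           # hypothetical question

def score(resp: AIAResponse) -> int:
    """Sum hypothetical risk points from the questionnaire answers."""
    points = 0
    points += 3 if resp.decision_is_irreversible else 0
    points += 3 if resp.affects_rights_or_freedoms else 0
    points += 2 if resp.uses_personal_information else 0
    points += 2 if resp.decisions_per_year > 10_000 else 0
    return points

def impact_level(points: int) -> int:
    """Map a raw score to an impact level from 1 (low) to 4 (very high)."""
    for minimum, level in ((8, 4), (6, 3), (3, 2)):
        if points >= minimum:
            return level
    return 1

# Action items keyed by impact level, paraphrasing the kinds of requirements
# described in the text (peer review, human oversight, bias testing, etc.).
REQUIREMENTS = {
    1: ["basic testing and monitoring"],
    2: ["peer review", "employee training"],
    3: ["peer review", "human oversight", "testing for unintended biases"],
    4: ["peer review", "human oversight", "testing for unintended biases",
        "operational approval by a designated government official"],
}

resp = AIAResponse(True, True, True, 50_000)
level = impact_level(score(resp))
print(f"Impact level {level}: {REQUIREMENTS[level]}")
```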

Consistent with the international demand for transparency in AI development, the DADM also stipulates:

  • the federal government’s right to authorize external audits;
  • open source standards by default; and
  • release of custom source code by default.

[See Government of Canada, Directive on Automated Decision-Making (Feb 5, 2019), ss.6.2.4, 6.2.5.2, 6.2.5.3, 6.2.6, 6.2.7; Government of Canada, Directive on Management of Information Technology (Aug 2, 2019), s.C.2.3.8.]

The DADM’s quality assurance requirements include:

  • testing and monitoring outcomes;
  • data quality validation;
  • peer review;
  • employee training;
  • contingency systems;
  • security precautions;
  • legal consultation;
  • ensuring human intervention;
  • recourse options for affected individuals; and
  • public reporting regarding effectiveness and efficiency of the AI system.

[See Government of Canada, Directive on Automated Decision-Making (Feb 5, 2019), ss.6.3-6.5.]

(b) CIO Standard

More recently, in October 2019, the CIO Strategy Council and the Standards Council of Canada published the first Canadian national standard for automated decision making, titled Ethical design and use of automated decision systems (the “CIO Standard”). Unlike the DADM, the CIO Standard applies to AI decision systems in the private sector as well as the public sector. The CIO Standard covers:

  • a requirement to implement an AI risk management framework, drafted with the benefit of international guidance and standards;
  • a compliance program for AI risk management;
  • a person accountable for ethical risks arising from the design and use of the AI system;
  • documented acceptance of risks by senior officials before production and testing;
  • ethical impact assessments related to unintended and unforeseen outcomes;
  • identification of legal challenges as the AI regulatory framework continues to evolve;
  • implementation of appropriate controls for initial training, tuning, deployment-side training, and continuous adaptation;
  • consideration of specific points of human intervention in the AI system’s design;
  • description of initial training methods;
  • analysis of unintended biases;
  • incorporation of AI principles into an ethics policy; and
  • an appeals and escalation process for affected persons to challenge AI-made decisions.

[See Ethical design and use of automated decision systems (CAN/CIOSC 101:2019), ss.1, 4; Government of Canada, Directive on Automated Decision-Making (Feb 5, 2019), s.5.]

The CIO Strategy Council has described the CIO Standard as a measurable and testable framework for AI and machine learning, with a scope beyond mere aspirational principles. This would suggest an ambition for the CIO Standard to shape future policies covering development and operation of AI systems.

[See CIO Strategy Council, “CIO Strategy Council publishes National Standard of Canada for Automated Decision Systems” (press release, Oct 2, 2019).]

However, the CIO Standard clearly requires external interpretive aids to meet its objective as a due diligence tool. The CIO Standard’s confused outlook is apparent from its introductory paragraph and stipulated scope. On the one hand, reference is made to “minimum requirements” and to use of the CIO Standard “for conformity assessment”. On the other hand, the text suggests that the CIO Standard’s applicability is itself a matter of subjective discretion.

[See Ethical design and use of automated decision systems (CAN/CIOSC 101:2019), under “Introduction” and s.1.]

Whichever outlook was intended, the CIO Standard quite obviously reads like a set of aspirational principles rather than a set of true minimum standards. Consider the use of the terms “should” and “is expected” within the CIO Standard. At least one of these terms appears in every standard within the CIO Standard, and yet neither term has any intrinsic prescriptive force. Terms like “must”, “required”, and “shall” would have been preferable, in order to clarify what it means to be compliant with the CIO Standard.

[See Ethical design and use of automated decision systems (CAN/CIOSC 101:2019), s.4.]

Moreover, the CIO Standard’s definition of “should / is expected” is: “[an] expectation commensurate with each organization’s nature, size, business and complexity as well as its structure, economic significance and risk profile.” This definition is far too vague for most companies across the spectrum of corporate sophistication. On this basis, it is effectively uncertain in most cases whether any given standard within the CIO Standard applies.

[See Ethical design and use of automated decision systems (CAN/CIOSC 101:2019), s.3 “should / is expected”.]

Furthermore, because “should” or “is expected” appears in every provision of the CIO Standard, the separate requirement to adopt “reasonable and responsible measures” is also unclear. Presumably, what is reasonable and responsible depends in large part on minimum standards. However, as discussed, whether any given minimum standard applies is uncertain for most companies. On this basis, how can a company know whether its measures are “reasonable and responsible” if it does not know the minimum standards it must meet?

[See Ethical design and use of automated decision systems (CAN/CIOSC 101:2019), s.1.]

Even if the applicability of each standard were clear, the definition of “reasonable and responsible measures” is itself unclear. Typically, terms are defined to clarify their meaning. However, “reasonable and responsible measures” is defined only in reference to the AI system achieving the intended outcome. The definition does not actually provide a measure for what is “reasonable and responsible”.

[See Ethical design and use of automated decision systems (CAN/CIOSC 101:2019), s.3 “reasonable and responsible measure(s)” with footnote, “outcome”.]

At present, the CIO Standard’s application is far too uncertain to offer much assistance to lawmakers. It would have been preferable for the CIO Standard to prescribe bare minimum requirements, with clear application, for all AI decision systems. Purely aspirational principles are “inherently difficult to incorporate in legislation,” as the courts have recognized in respect of PIPEDA.

[See Englander v. Telus Communications Inc., 2004 FCA 387 at paras. 43-46.]

Notwithstanding the foregoing, the CIO Standard is partly successful, in that it expands the core set of issues to be contemplated by developers and operators of AI decision systems. The CIO Standard may have greater impact after it is reviewed by the CIO Strategy Council’s technical committee, sometime before October 2, 2020.

[See Ethical design and use of automated decision systems (CAN/CIOSC 101:2019), “Introduction”; CIO Strategy Council, “CIO Strategy Council publishes National Standard of Canada for Automated Decision Systems” (press release, Oct 2, 2019).]

Hopefully, the technical committee’s review will result in a new edition of the CIO Standard, which will clarify its application and express minimum requirements more prescriptively. One suggestion is to establish minimum standards that apply in accordance with objective points of reference, like the “impact levels” provided in the DADM. This would not only establish a more certain baseline of best practices for industry, but would also assist lawmakers in drafting private-sector legislation aimed at complementing the DADM.

[See Government of Canada, Directive on Automated Decision-Making, “Appendix B - Impact Assessment Levels”.]

IV. Conclusion

More AI-targeted legislation is expected in Canada and around the world in the coming years. We are in the midst of a legal revolution toward principled and focused rules governing autonomous decision making. While such rules continue to develop, AI-focused lawyers must anticipate coming laws in order to formulate ethically viable strategies for AI development and operation. Any plan to exploit current grey areas should be coupled with an understanding of things to come, i.e. an understanding that short-term thinking may result in “compliance debt” and operational downtime. Meanwhile, various other obligations arising from AI have immediate and ongoing effect, pursuant to the laws of privacy, consumer protection, negligence, contract, intellectual property, surveillance, human rights, etc.


The copyright and disclaimer, as contained in the publication page of Big Data Law in Canada, apply to the content of this webpage.