- Gen AI Rules of Engagement for Canadian Lawyers
Last updated November 2024
Since ChatGPT entered the legal lexicon a little more than a year ago, eleven Canadian courts, five law societies, two professional liability insurers, one provincial government and the Canadian Judicial Council have issued guidance documents regarding the use of Generative AI, also known as Gen AI. They uniformly emphasize that lawyers are responsible for the truth and accuracy of their work and that Gen AI must be used carefully. None have attempted to ban its use.
For an introduction to using Gen AI technology and the different terminology used in this resource, see the Law Society of Alberta’s Generative AI Playbook.
Lawyers need to understand the rules of engagement in every jurisdiction where they practise. Read more details in the sections below.
The Law Society of Manitoba was the first Canadian law society to pronounce on the topic of Gen AI. In its May 2023 Communiqué, it posed the questions, “Generative AI: What is it? And What are the Ethical Considerations Relating to its Use?”
You have probably heard about ChatGPT that was released by OpenAI last November. If you haven’t heard of it, take note now. It is important that lawyers understand current technology and implications regarding its use in the delivery of legal services…
The Law Society noted how Gen AI was already being used in a variety of ways by professionals and the public despite its recent arrival on the scene. It reminded lawyers to verify any work product produced through artificial intelligence.
Powerful tools still need human verification (see chatbot hallucination). Just as you wouldn’t rely on a summary of a decision to include in a factum or brief, you wouldn’t rely on a memo developed through an artificial intelligence tool without reviewing the source materials. Artificial intelligence is here to stay. Be aware of the new tools being developed, but be wary as well.
In April 2024, the Law Society followed up with a comprehensive resource, Generative Artificial Intelligence: Guidelines for Use in the Practice of Law, designed to assist the profession in evaluating the benefits and risks of Gen AI. It includes definitions of the terms used in AI discussions and examples of how its use connects to the Code of Professional Conduct. For example, it recommends that:
Fee arrangements should not generate an inappropriate windfall for a lawyer arising from the efficiencies created by using an AI tool to perform a certain task. Where Generative AI tools are used, time spent in various tasks might be reduced, creating efficiencies for counsel. It would not be appropriate to charge hourly fees reflecting the time it would have taken to generate the work product without the use of generative AI. However, it is appropriate to charge for the time spent in crafting and refining AI inputs and prompts and in reviewing, confirming, analyzing and editing generative AI tool output.
Significantly, it provides a window to the future when it notes that expectations about the use of Gen AI are rapidly evolving:
Generative AI can be an effective tool in the delivery of legal services that lawyers may soon be expected to take advantage of for the benefit of their clients. Similar to the adaptation by lawyers of electronic rather than manual case research, the use of generative AI may become commonplace.
Still in Manitoba, the Court of King’s Bench of Manitoba issued a one-paragraph Practice Direction on the Use of Artificial Intelligence in Court Submissions on June 23, 2023, requiring parties to reveal “how artificial intelligence was used” in the preparation of any materials they file with the court.
With the still novel but rapid development of artificial intelligence, it is apparent that artificial intelligence might be used in court submissions. While it is impossible at this time to completely and accurately predict how artificial intelligence may develop or how to exactly define the responsible use of artificial intelligence in court cases, there are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence. To address these concerns, when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.
Within days of Manitoba’s announcement, the Supreme Court of the Yukon issued a two-paragraph Practice Direction dated June 26, 2023 on the Use of Artificial Intelligence Tools. It too required any lawyer or party who “relies on artificial intelligence (such as ChatGPT or any other artificial intelligence platform) for their legal research or submissions in any matter and in any form before the Court, [to] advise the Court of the tool used and for what purpose.”
On Oct. 6, 2023, the Alberta Courts issued a Notice to the Profession & Public – Ensuring the Integrity of Court Submissions When Using Large Language Models. The Courts acknowledged that emerging technologies like Gen AI bring both opportunities and challenges, and that the legal community must adapt accordingly.
Unlike Yukon and Manitoba, Alberta did not require lawyers or litigants to disclose what tools they use to prepare court filings.
The Courts urged litigants to exercise caution when citing legal authorities or analysis derived by Gen AI platforms and simply noted that “it is essential that parties rely exclusively on authoritative sources such as official court websites, commonly referenced commercial publishers, or well-established public services such as CanLII” for any references to case law, statutes or commentary in representations to the courts.
The Courts called for “human[s] in the loop” and stipulated that all AI-generated submissions must be verified with “meaningful human control” that cross-references reliable legal databases to ensure that citations and their content hold up to scrutiny.
In January 2024 the Law Society of Alberta released the Generative AI Playbook, outlining the duties, risks, opportunities and recommendations associated with Gen AI.
A week after the Alberta Courts’ Notice, the Supreme Court of Newfoundland and Labrador issued a Notice entitled Ensuring the Integrity of Court Submissions When Using Large Language Models dated Oct. 12, 2023.
The Court echoed the language used in Alberta and recognized that emerging technologies “often bring both opportunities and challenges, and the legal community must adapt accordingly.”
It urged practitioners and litigants “to exercise caution” when referencing legal authorities or analysis derived from Large Language Models in their submissions. It noted that it is essential that parties rely exclusively on authoritative sources and well-established public services such as CanLII, and it required a “human in the loop” to verify any AI-generated submissions.
Like Alberta, but unlike Yukon and Manitoba, the Supreme Court of Newfoundland and Labrador did not require lawyers or litigants to disclose what tools they use to prepare court filings.
The theme of not requiring disclosure of the tools used in preparation of court materials was continued in an E-Brief issued by the Law Society of British Columbia in July 2023. The Law Society stopped short of mandating disclosure and simply said that it would be “prudent to advise the court accordingly” if materials are generated using technologies such as ChatGPT.
On Oct. 12, 2023, the Law Society issued a lengthier Guidance on Professional Responsibility and Generative AI focusing on various ethical issues raised by Gen AI: technological competence, competence generally, confidentiality, honesty and candour, responsibility, information security, reasonable fees and disbursements, plagiarism, fraud, deep fakes and bias.
The Law Society’s guidance repeated the caution that some courts require lawyers to disclose when they use generative AI to prepare submissions. It noted that “some courts even require not just disclosure that generative AI was used, but how it was used.” The Law Society did not take a similar position and simply reminded anyone contemplating the use of generative AI to “check with the court, tribunal, or other relevant decision-maker to verify whether you are required to attribute, and to what degree, your use of generative AI.”
The Law Society cautioned lawyers “not [to] lose sight of your responsibility to review the content carefully and ensure its accuracy.”
Still in British Columbia, on Nov. 23, 2023, the Lawyers Indemnity Fund (LIF) issued Generative AI: What Lawyers Need to Know. It succinctly observed that “technology has certainly seen its share of fads that come and go, but generative AI is here to stay.” LIF noted that “generative AI breakthroughs have brought tremendous opportunities for efficiency and effectiveness in all professions,” and reminded lawyers to be aware of and consider the risks before adopting AI into practice.
LIF’s guidance identified potential issues regarding client confidentiality, inaccurate results, biased results and cybersecurity and fraud, and offered specific suggestions for steps lawyers can take in response to the risks presented.
Echoing the Law Society of British Columbia’s comment, LIF encouraged practitioners to “check with any courts, tribunals or authorities on their AI policies; some courts in Canada have started requiring disclosure of the use of generative AI tools in submissions.” For its own part, LIF did not require such disclosure.
On Feb. 20, 2024, the B.C. Supreme Court in Zhang v. Chen, 2024 BCSC 285 dealt with an actual case of a lawyer who had misused ChatGPT by inserting two non-existent cases into a notice of application. As the presiding Justice observed, “it is unfortunate that [the offending counsel] was not aware of the various notices from the Law Society regarding the risks of generative AI.” The Justice noted that “citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court. Unchecked, it can lead to a miscarriage of justice.”
Moreover, the Court noted, “the fact that [counsel] did not read or heed the express warning on the ChatGPT website that the output could be inaccurate and that using ChatGPT is not a substitute for professional advice is troubling.”
The Court ordered the lawyer to be personally responsible for costs. Moreover, it directed her to:
… review all of her files that are before this court. If any materials filed or handed up to the court contain case citations or summaries which were obtained from ChatGPT or other generative AI tools, she is to advise the opposing parties and the court immediately. Otherwise, a report confirming her review of her files is to be provided within 30 days of the date of these reasons for judgment.
Prospectively, consistent with the guidance of the Law Society referenced above, it would be prudent for [counsel] to advise the court and the opposing parties when any materials she submits to the court include content generated by AI tools such as ChatGPT.
On March 12, 2024, the B.C. Court of Appeal issued a Registrar’s Filing Directive dealing with the filing of documents. Regarding the use of litigation aids & artificial intelligence, it noted that:
Given the rapid development of artificial intelligence tools, the Court reminds all litigants that they are responsible for the authenticity and accuracy of all materials filed with the Court.
In its 2023 Annual Report issued March 28, 2024, the BC Supreme Court commented that:
The Court expects litigants, lawyers, and others who participate in court proceedings to inform themselves of the current issues and advice regarding artificial intelligence in relation to court proceedings, and to ensure that the materials they produce to the Court are authentic and accurate.
In February 2024, the Law Society of Saskatchewan issued Guidelines for the Use of Generative Artificial Intelligence in the Practice of Law.
Much like the Law Society of British Columbia, Saskatchewan focused on a series of ethical duties that Gen AI engages: competence and diligence, confidentiality, complying with the law, supervision and delegation, communication, charging for work, candour to the tribunal, the prohibition on discrimination and harassment, and guarding against bias.
The Law Society of Saskatchewan emphasized the need for lawyers to engage in continuous learning about AI and its implications for legal practice. It encouraged legal workplaces to establish policies and mechanisms to identify, report and address concerns about the use of AI.
Consistent with other jurisdictions, Saskatchewan cautioned that “the outputs of generative AI tools may not always be sufficiently reliable for use in the legal services context without independent vetting by a lawyer.”
The Law Society addressed the need for lawyers to understand Gen AI before using it.
[A] lawyer should ensure that they sufficiently understand how the technology works, its limitations, and the applicable terms of use and other policies governing the use of client data by the product. A lawyer must critically review, validate, and correct both the inputs and the outputs of generative AI tools to ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand. The duty of competence requires more than the detection and elimination of false AI-generated results. Competence requires the continuous application of legal reasoning and analysis regarding all potential options and impacts, including those that are included or omitted from or by AI tools.
On Oct. 18, 2023, the Supreme Court of Nova Scotia issued a one-page notice entitled Ensuring the Integrity of Court Submissions When Using Generative Artificial Intelligence (‘AI’). The court recognized that emerging technologies bring both opportunities and challenges, and noted that “the legal community must adapt accordingly.”
It urged all practitioners and litigants to exercise caution when citing legal authorities or analysis derived from generative AI in their submissions and repeated the often-seen reminder to “rely exclusively on authoritative sources such as official court websites, commonly referenced commercial publishers or well-established public services such as CanLII.” Any AI-generated submissions must be verified with meaningful human control.
On Oct. 4, 2024, the Court issued a Directive Regarding the Use of Artificial Intelligence (AI) in Proceedings before the Registrar in Bankruptcy, noting that “effective immediately,… the Registrar in Bankruptcy … requires individuals to include a declaration in the first paragraph of their materials stating that artificial intelligence was used, in whole or in part, to generate content in the document, indicate what content, and what AI tools were used.” The Directive added that “AI-generated data and reports… referenced in BIA proceedings… must be based on verifiable sources and accompanied by supporting evidence” and that “Counsel, Trustees and self-represented individuals are responsible for verifying the accuracy of such evidence, submissions, and authorities.”
On Oct. 27, 2023, the Nova Scotia Provincial Court issued a document entitled Use of Artificial Intelligence (AI) and Protecting the Integrity of Court Submissions in Provincial Court. It, too, encouraged litigants to “exercise caution when relying on reasoning that was ascertained from artificial intelligence applications” and expected them to use only “accredited and established databases.” Consistent with some of the earlier positions adopted by Canadian courts, it stipulated that “any party wishing to rely on materials that were generated with the use of artificial intelligence must articulate how the artificial intelligence was used.”
On Dec. 20, 2023, the Federal Court issued a three-page Notice to the Parties and the Profession: The Use of Artificial Intelligence in Court Proceedings in which it too “recognize[d] that emerging technologies often bring both opportunities and challenges.” The Court published an updated Notice to the Profession on May 7, 2024 in which it expanded on its original position requiring litigants to inform the court and other parties “if documents they submit to the Court, that have been prepared for the purposes of litigation, include content created or generated by artificial intelligence.”
The disclosure requirement can be triggered by the use of any AI, not just the major Gen AI platforms. “A Declaration is not required if AI was used to merely suggest changes, provide recommendations, or critique content already created by a human who could then consider and manually implement the changes.” However, “a Declaration is required when the role AI plays in the preparation of materials for the purpose of litigation resembles that of a co-author.”
The broad reach of the Court’s disclosure requirements was reiterated in a video entitled ‘Compliance with the Notice on the Use of Artificial Intelligence’ published by the Court on Oct. 28, 2024. It provides examples of work by a lawyer or litigant that would require disclosure, including:
- asking AI “what is leading case law in judicial review applications”;
- asking it to evaluate the strength of opposing counsel’s argument and selectively pasting portions of the response into submissions;
- asking it to translate a judgment from French and submitting the translation.
The Declaration must be done in writing in the first paragraph of your filing and must indicate whether AI was used to prepare either the entire document or specific paragraphs, which you must then identify.
The Notices stipulate that “the Declaration is only intended to notify the Court and parties so that they can govern themselves accordingly” and that “the primary purpose for the Declaration is simply to notify the other party or parties, as well as the Court, that AI has been used to generate content.” They continue by noting that “the inclusion of a Declaration, in and of itself, will not attract an adverse inference by the Court. Similarly, any use of AI by parties and interveners that does not generate content that falls within the scope of this Notice will not attract any adverse inference.”
The Court explained its reasoning by noting that Codes of Conduct apply only to lawyers and do not ensure reliability in documents submitted by self-represented litigants:
The Court recognizes that counsel have duties as Officers of the Court. However, these duties do not extend to individuals representing themselves. It would be unfair to place AI-related responsibilities only on these self-represented individuals, and allow counsel to rely on their duties. Therefore, the Court provides this Notice to ensure fair treatment of all represented and self-represented parties and interveners.
The Court “urge[d] caution when using legal references or analysis created or generated by AI, in documents submitted to the Court. When referring to jurisprudence, statutes, policies, or commentaries in documents … it is crucial to use only well-recognized and reliable sources. These include official court websites, commonly referenced commercial publishers, or trusted public services such as CanLII.”
The Court noted that it is “essential to check documents and material generated by AI. The Court urges verification of any AI-created content in these documents.” Humans must once again be “in the loop.”
To start the New Year, on Jan. 26, 2024, the Cour du Québec issued a one-page Notice to the legal community and the public – Maintain the integrity of submissions to the Court when using large language models.
The Court recognized that emerging technologies present both opportunities and challenges but encouraged practitioners and litigants to exercise caution when referring to legal authorities or analysis derived from large language models. It emphasized the importance of relying exclusively on authoritative sources such as the official websites of the courts, commonly referenced commercial publishers or well-established public services such as SOQUIJ and CanLII.
The Court called for AI generated material to undergo serious human review, and be cross-referenced with reliable legal databases.
The Quebec Court of Appeal maintained this position in an Aug. 8, 2024 Notice Respecting the Use of Artificial Intelligence Before the Court of Appeal which called on litigants to respect the principles of caution, reliability and human involvement when AI is used to research or prepare documents for the Court. The Court aligned itself with the current trend among Canadian courts to not require parties to disclose when AI is used. It did, however, remind them that “litigants are responsible and accountable for the accuracy of their written and oral submissions” and referred to the Chief Justice’s Directive entitled Rules Respecting the Preparation of the PDF File of Pleadings, Briefs, Memoranda, Books of Authorities or Any Other Document which requires that lists of authorities include hyperlinks.
On Feb. 28, 2024, the Canadian Lawyers Insurance Association (CLIA) issued a brief Generative AI Guidance.
Rather than offer cautions and recommendations of its own, CLIA drew attention to guidance documents issued by authorities in the four western provinces: British Columbia, Alberta, Saskatchewan and Manitoba.
On April 25, 2024, the Law Society of Ontario’s Futures Committee presented a White Paper for information to Convocation entitled Licensee’s use of generative artificial intelligence. The document included a quick start checklist, best practices tips and a discussion about professional obligations raised by Gen AI.
The goal of the White Paper was to encourage licensees to better understand Gen AI and use it in an informed, productive manner:
The increased use of generative AI products presents opportunities to provide more efficient services. All licensees are encouraged to experiment with these products and determine how they might be useful in their practice. At the same time there are some risks involved in using generative AI for legal work, and it is important that licensees understand those risks and how to use generative AI in a manner consistent with their professional obligations.
It noted that “if a licensee is billing by the hour, they can only charge for the time actually spent by the licensee on the file, even if a generative AI tool has made the task much more efficient.” It added that “whether a licensee can pass on the cost of using generative AI or other technology to a client as a disbursement depends on the specific circumstances.”
The White Paper did not insist that lawyers inform their clients every time they use Gen AI. In deciding whether to do so, considerations should include:
- Will the use of Gen AI necessarily be disclosed publicly (for example, if Gen AI is being used in the preparation of a court document before a court that requires such disclosure)?
- Does the client reasonably expect that the material being prepared by Gen AI would actually be prepared by a licensee?
- Are there reputational or other forms of risk to the client that could arise from the use of Gen AI?
- Does use of Gen AI require inputting of the client’s personal or proprietary information?
In these situations, lawyers should be prepared to explain to their clients how they use the technology in their matter, any associated risks, and what steps they take to mitigate them.
On Oct. 15, 2024, Ontario Regulation 384/24 was filed to amend the Ontario Rules of Civil Procedure. Borrowing from the U.S. Federal Rules of Civil Procedure, this regulation makes changes respecting certification of the authenticity of authorities cited in factums and other documents and records cited in expert reports.
It requires every lawyer (or self-represented litigant) who files a factum to include a statement certifying that they are satisfied as to the authenticity of every authority cited in the factum.
The certification must stipulate that authorities published on a government website or by a government printer, on CanLII, on a court’s website, or by a commercial publisher of court decisions are presumed authentic, in the absence of evidence to the contrary.
The regulation requires every expert report to include a statement by the expert certifying that they are satisfied, with limited exceptions, as to the authenticity of every authority or other document or record cited in their report.
The changes to the Rules of Civil Procedure are in force as of Dec. 1, 2024.
To complement the focus of individual courts and regulators on the use of Gen AI by lawyers and litigants, the Canadian Judicial Council released Guidelines for the Use of Artificial Intelligence in Canadian Courts on Oct. 24, 2024, “to provide Canadian judges with a principled framework for understanding the extent to which AI tools can be used appropriately to support or enhance the judicial role.”
The Guidelines are aspirational and advisory in nature. They recognize that some forms of AI are already embedded into everyday judicial practice for tasks such as translation, grammar checking and legal research but note that “it must be unequivocally understood that no judge is permitted to delegate decision-making authority, whether to a law clerk, administrative assistant, or computer program, regardless of their capabilities.”
The Guidelines address the need to protect judicial independence, the use of AI in a manner consistent with the courts’ core values and ethics, the legal aspects of AI use, the need for AI tools to be subject to “stringent information security standards (and output safeguards)”, and the desire that any AI tool used in court applications “be able to provide understandable explanations for their decision-making output.”
They note that “Courts should be particularly attentive to the nature of the source material used to train proposed AI systems, aiming to strike an optimal balance between safety and accuracy.”
Generative AI is not mystical; it derives insights from pre-existing content, often sensitive data entrusted to the courts by litigants for safekeeping, not for commercial purposes… Moreover, the content used in training data for widely used generative AI models may have been procured under potentially unlawful circumstances, a concern that courts cannot overlook.
To aid in the evaluation of AI deployment, the Council endorsed the use of innovation sandboxes:
Best practices advocate commencing with a pilot project or establishing a controlled testing environment, known as a sandbox, to allow users to assess AI’s capabilities without incurring the risks of a full-scale deployment.
What’s Next
As courts and regulators across the country become more familiar with Gen AI technology, they will continue to update and clarify the guidance they have issued about how these tools can and should be used. Check the Key Resources section of the Law Society of Alberta website for the latest from Alberta.