Applicants Must Verify AI-Generated Information Before Legal Use or Risk Serious Consequences

Applicants should exercise caution when using AI programs such as ChatGPT to conduct legal research. Such programs can provide incorrect or misleading information, and the consequences for applicants can be serious.

AI tools have been found to cite or provide summaries of decisions that do not exist (referred to as “hallucinated” decisions) and should not be considered a reliable source of legal information. Before including any AI-generated material in submissions, you must verify its accuracy using trusted sources, such as a recognized legal database like CanLII. Failure to do so could amount to citing false case law. As the Human Rights Tribunal of Ontario (HRTO) stated in a recent decision, citing a fake case is the same as making a false statement in court and could be an abuse of the HRTO’s process. This could result in the HRTO refusing to accept a party’s submissions or even dismissing an application.

The HRTO recently issued a new Practice Direction addressing the use of AI in HRTO proceedings. The Practice Direction states:

  • Parties are responsible for the accuracy of any case law or legal submissions that they provide during an HRTO proceeding.
  • If parties use AI to conduct research or draft documents, they must verify the information provided by the AI program using a trusted source, such as a recognized legal database like CanLII, before relying on it in an HRTO proceeding.

It is strongly recommended that you conduct your own research instead of relying on AI-generated material. All HRTO decisions can be accessed free of charge from the HRTO database on the Canadian Legal Information Institute (CanLII) website. For guidance on how to conduct searches on CanLII’s website, consult CanLII’s Search Help Guide. For more information on conducting legal research, consult the HRLSC’s How-To Guide: Finding Human Rights Decisions.