    Can Lawyers Trust AI?

    May 17, 2024   |   Written by Micayla Frost

    Whether lawyers can trust artificial intelligence (AI) and generative AI depends on several factors. AI can serve as a valuable tool for tasks like legal research, document analysis, and even drafting certain documents, but its trustworthiness hinges on the quality of the AI model, the accuracy of the data it is trained on, and the context in which it is used. To an extent, lawyers can trust AI to perform tasks where it has demonstrated reliability and accuracy; however, they must still exercise caution and critical judgment when relying on it.

    The discussion surrounding the use of AI in the legal field is a contentious one. Many attorneys have expressed a general distrust of AI, citing its lack of reliability, its propensity for bias, and the complications it raises for client privacy and data security. Courts, too, have shown a keen awareness of AI and have issued standing orders in response. At the same time, the legal profession stands to benefit from certain aspects of AI if attorneys choose to trust it for specific functions.

    What the Data is Saying

    In a 2023 report titled ‘Future of Professionals Report: How AI is the Catalyst for Transforming Every Aspect of Work,’ Thomson Reuters surveyed over 1,200 individuals working in the legal, tax and accounting, global trade, risk, and compliance fields. The purpose of the study was to understand how global macro-trends were impacting businesses, and how they could continue to do so over the next five years, with a specific focus on AI.

    Fortunately, this report provided comprehensive information on the legal field. Some of the key takeaways are:

    • 75% of legal professionals cited productivity as their top priority.
    • 50% of law firms highlighted internal efficiency as a top priority for their firm.
    • Many legal professionals see AI as an opportunity to:
      • Increase productivity by saving time on large-scale data analysis.
      • Perform non-billable work with increased accuracy and efficiency.
    • Some lawyers see AI as a way to achieve greater billings at reduced rates.
    • AI could have the potential to help firms recapture revenue lost through write-offs.


    These findings illustrate that the areas attorneys and law firms identify as their top priorities stand to gain the most if attorneys use AI for certain functions.

    Attorney Hesitations

    One major concern legal professionals have highlighted across the board is the potential inaccuracy of responses and proactive suggestions from existing AI chat tools. While AI has the potential to reduce human error, many attorneys have indicated they still do not fully trust the accuracy of its outputs. For instance, some attorneys have pointed out that if clients start using AI on their own, it could introduce new challenges, particularly if clients are unable to recognize when an answer is inaccurate or incomplete.

    The Thomson Reuters report noted that one lawyer respondent said:

    “There is a tendency to use AI large language models as the shortcut to the answer rather than as the tool to enhance the answer. If people are not strictly guided towards using AI towards the latter purpose, training, in general, will suffer as early career professionals will no longer work to understand the outputs but rather use the outputs as their legal advice.”

    This is an important discussion point because the American Bar Association (ABA) Model Rules of Professional Conduct require lawyers to maintain competence. While, in the past, Rule 1.1 on competence spoke strictly of ‘competent representation,’ an eighth comment was later added to address legal technology. Comment 8 to Model Rule 1.1 now provides: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education, and comply with all continuing legal education requirements to which the lawyer is subject.”

    This means lawyers must understand the capabilities and limitations of any AI tools they choose to use. It also means attorneys can never substitute AI’s output for their own legal competence. To some extent, this shows that attorneys cannot trust AI without human oversight and review. It does not, however, mean that attorneys cannot trust AI to perform time-saving tasks proficiently, provided they are able and willing to devote time to fact-checking its output.

    What the Courts Think

    As AI usage increases, judges nationwide are revising their standing orders to regulate its use in courtrooms. As these AI-related rules become more prevalent, attorneys and litigants must familiarize themselves with local regulations and relevant standing orders, regularly reviewing these rules to ensure compliance.

    Many existing standing orders focus on generative AI, while some address AI usage more broadly. The Eastern District of Pennsylvania and the Northern District of Illinois refer generally to “using AI.” By contrast, Northern District of California Judge Araceli Martínez-Olguín’s order explicitly addresses “AI-generated content.” Complete prohibitions on AI use are rare but do exist, such as an order from the Southern District of Ohio that bans AI usage except for information from legal search engines.

    Currently, the Eastern District of Texas is the only court to have addressed the use of generative AI in its local rules, which prohibit pro se litigants (litigants who choose to represent themselves without the assistance of an attorney) from using generative AI and require counsel to review and verify any AI-generated content.

    In January 2024, the United States Court of Appeals for the Fifth Circuit closed the comment period for a proposed rule that would require counsel either to certify the non-use of generative AI in drafting documents or to confirm that any AI-generated content has been reviewed by a human. If this rule is implemented, a “material misrepresentation” could result in the document being stricken or lead to sanctions.

    Orders differ on whether they apply to AI used for drafting, research, or both. Some judges require disclosure of any “AI tool” used for “research and/or drafting,” while others specify that their orders apply only to submissions containing text drafted with generative AI. Requirements for AI usage disclosures and certifications likewise vary.

    How Firms Can Mitigate the Risks of AI

    The reality is that it may prove difficult to completely eliminate the risks associated with using AI. Sanctions or outright bans on generative AI could be perceived as a restriction on free speech and as a burden on small practices that could benefit from such tools. However, most courts and practitioners agree that AI use must be limited and tightly controlled in order to manage the risks.

    To comply with court requirements, law firms could consider implementing internal policies to regulate AI usage, such as requiring lawyers to review AI-generated work for accuracy. Firms could also prohibit using ChatGPT for legal briefs and research, allowing it only for non-legal tasks. Because lawyers have a duty of candor, which requires accuracy in their representations to the court, courts are unlikely to be forgiving of misrepresentations caused by reliance on AI tools.

    Mitigating the risks of AI in law firms will involve a proactive and multifaceted approach. By ensuring data quality, implementing robust governance, maintaining human oversight, addressing bias, ensuring transparency, strengthening data security, fostering continuous education, and collaborating with experts, law firms can effectively manage AI’s risks while harnessing its full potential. As AI continues to evolve, staying vigilant and adaptive will be key to leveraging its benefits responsibly and ethically.

    Should Lawyers Trust AI?

    AI holds immense potential to transform legal practice, offering efficiencies and insights that were previously unimaginable. However, trust in AI requires a cautious and informed approach. By understanding the benefits, acknowledging the challenges, and adhering to ethical guidelines, lawyers can utilize AI while upholding their professional standards.

    Attorneys should not trust AI unconditionally. It is imperative that attorneys maintain a level of skepticism in order to maintain competence, understand the capabilities and limitations of AI tools, and ensure the accuracy of AI-generated outputs. Moreover, as courts increasingly regulate AI usage through standing orders, attorneys must stay abreast of evolving regulations and ensure compliance.
