As in nearly all other industries and professions, artificial intelligence has rapidly become a powerful tool for legal professionals. AI tools have been deployed across the legal profession for a wide variety of uses, including legal research, document creation, and document processing, among many other functions.
Touted for their speed and convenience, these tools have been quickly adopted by legal professionals seeking to improve productivity and reduce the time required for traditionally labor-intensive tasks. Clients, in turn, expect their legal professionals to adopt AI as a way to reduce legal costs and to deliver improved accuracy and thoroughness through state-of-the-art technology.
So what could possibly go wrong?
AI hallucinations occur when an AI model generates incorrect, nonsensical, or fabricated output and presents it as fact. These hallucinations occur because AI models predict the most plausible next word or pattern based on their training data, not because they understand truth. AI models may be designed to tell you what you want to hear, not what you need to hear.
The most infamous examples of AI hallucinations in the legal arena are “fake case citations,” which have generated press accounts of lawyers being heavily fined or disciplined for filing court documents containing AI-generated content with fictitious case citations. Lawyers cited cases that did not exist. In other situations, the cited case may exist, but it does not stand for the legal proposition asserted in the filing. In either event, the lawyer relying on these AI-generated documents is making misrepresentations to the court. The lawyer may not have intended to mislead, but by relying on AI without verification, that is exactly what happened.
An attorney’s failure to carefully check every case citation and legal principle set forth in these AI-generated documents presents significant risks. Stiff fines may be the least of the attorney’s concerns: ethics rules can also be implicated, which could result in the attorney’s license being suspended or other forms of discipline.
Many courts, concerned with lawyers’ over-reliance on AI, have enacted rules prohibiting attorneys from using AI-generated documents in court filings. The United States District Court for the Northern District of Ohio, for example, enacted a standing order that prohibits the use of AI “in the preparation of any filing submitted to the Court.” Violations can result in economic sanctions, contempt, and even dismissal of the lawsuit.
Attorneys looking to use AI for document preparation are best advised to carefully review the court’s local rules and standing orders to ensure that their use of AI tools does not violate them. Attorneys should also never accept AI-generated documents at face value. In every instance, thorough research is required to confirm that generated case citations actually exist and actually stand for the legal propositions asserted.
Another pitfall lawyers may unexpectedly encounter is disclosing a client’s confidential information to an open-source AI platform. When an attorney uploads materials or information regarding clients for purposes of providing legal services, many AI models will use that information for training. That shared information is then exposed outside of the lawyer-client relationship.
If the client has not consented to this disclosure, the attorney has violated ethical rules regarding client confidentiality. Not only can the attorney be subject to economic liability to that client, the attorney may also face license suspension or other discipline for the ethics breach, even if it was unintentional.
Use of closed-source AI tools may avoid some of these risks. However, attorneys must carefully evaluate any AI tool to make sure that a client’s confidential information is not disclosed beyond the attorney-client relationship. If there is any potential for disclosure, attorneys should consult their clients and obtain signed disclosures and waivers allowing such AI tool use. Clients who expect their lawyers to use efficient AI tools and platforms may be very willing to sign such consents.
Another AI tool popular with legal professionals is the document summarizer. By uploading extensive documents and using the tool to summarize them, attorneys can save hours of labor while obtaining a concise summary, key pieces of information, or critical documents that may be game-changers for the client’s legal matter.
As with the other AI tools, over-reliance on AI summarizers can lead the legal professional into troubled waters. Clients rightfully expect their attorneys to carefully review all documents and evidence to ensure that their rights are protected and that they get the best advice based upon all of the information available. AI summarizers may not identify certain documents or pieces of information that may be crucial to a client’s particular legal issue. Reliance upon the summarizer, without double-checking accuracy or completeness, is a malpractice trap.
The key to effective and safe use of AI tools is to treat them as a starting point for base-level research, information, or general document summaries. Obtain initial results from the AI tool, then carefully vet those results and build on them with additional research and review.
As AI technology continues to advance, it may begin to address some of these significant shortcomings, which continue to be exposed on a regular basis. The technology has not yet reached that point, however. Legal professionals who use AI tools as a crutch, without carefully reviewing the product AI generates, do so at their own risk. The consequences of using AI as a shortcut rather than a base-level tool can destroy an attorney’s reputation, create significant economic liability, and even lead to license suspension, depending upon the severity.
Legal professionals and law firms have turned to Plakas Mannos over the years for legal ethics advisory and risk management services. Contact us if you need a legal ethics attorney or assistance navigating professional conduct issues.
David Dingwell is a partner at Plakas Mannos where he provides ethics and professional conduct advisory services, as well as probate, estate, and trust administration and planning.