In recent years, the legal world has experienced a remarkable transformation through the integration of artificial intelligence (AI). With its continued growth in the legal profession and integration into an expanding number of legaltech tools, it’s important for attorneys to embrace AI and learn to use it to their advantage.
This powerful technology has proven its potential to streamline processes, enhance research capabilities, and improve decision-making. However, as with any rapidly advancing technology, AI also comes with challenges.
Generative vs. non-generative AI
Before diving in too deep, it’s important to differentiate between generative AI—tools such as OpenAI’s ChatGPT—and other AI approaches.
We actually asked ChatGPT to explain the difference, and here’s what it said:
Generative AI, also known as creative AI, stands apart from other forms of AI due to its ability to generate new content and create original output. Unlike other AI systems that are designed for specific tasks and rely on predefined rules, generative AI utilizes complex algorithms and machine learning techniques to produce novel and innovative outputs. It has the capacity to generate new text, images, music, and even videos, exhibiting a level of creativity and autonomy not found in other AI models.
One key distinction between generative AI and other AI approaches lies in their purpose and functionality. While traditional AI models are designed to follow predetermined rules and solve specific problems, generative AI focuses on creative exploration and generating new content. Instead of relying on extensive data inputs and rigid instructions, generative AI models learn from vast amounts of data to develop an understanding of patterns and create unique outputs. This distinctiveness enables generative AI to excel in fields such as art, storytelling, and content generation, where the ability to think outside the box and produce novel ideas is highly valued.
A trust-but-verify approach to AI
As ChatGPT noted, generative AI excels in storytelling. That becomes a problem when the story is pure fiction.
The legal community was recently buzzing with news of an attorney sanctioned after relying on AI to prepare a legal brief. In an unfortunate turn of events, ChatGPT cited cases that did not exist, leaving the judge, court clerks, and attorneys bewildered.
When AI invents facts out of whole cloth, it’s called a hallucination.
While it was not a mistake for the attorney to use ChatGPT to assist his legal practice, it was a mistake to leave the work unchecked.
Using AI to catch AI gone rogue
You know the saying, “it takes a thief to catch a thief”? The same can be said of AI.
For example, in the education sector (and much to the chagrin of students everywhere), one enterprising Princeton University student recently created an app called GPTZero that can detect when students submit essays written by ChatGPT.
In the legal sector, TypeLaw can catch when other AI tools’ hallucinations appear in legal briefs.
During the formatting process, our software automatically recognizes legal citations and populates them into the Table of Authorities. If our software does not recognize a legal source, it flags that source for review. Once our AI tool completes its initial check, a human expert reviews all citations to ensure that the sources exist and that they are cited properly.
This process gives clients the best of both worlds: a combination of artificial intelligence and human intelligence that drastically reduces the likelihood of errors.
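To make the flag-for-review idea concrete, here is a minimal sketch in Python of how an automated citation check might route unrecognized citations to a human reviewer. The regular expression, the check_citations function, and the VERIFIED_AUTHORITIES lookup table are all illustrative assumptions, not TypeLaw’s actual implementation.

```python
import re

# Hypothetical lookup table standing in for a database of verified authorities.
VERIFIED_AUTHORITIES = {
    "Miranda v. Arizona, 384 U.S. 436 (1966)",
    "Marbury v. Madison, 5 U.S. 137 (1803)",
}

# Rough pattern for "Name v. Name, volume Reporter page (year)" citations.
CITATION_PATTERN = re.compile(
    r"[A-Z][\w.'\-]*(?: [A-Z][\w.'\-]*)*"    # plaintiff (capitalized tokens)
    r" v\. "
    r"[A-Z][\w.'\-]*(?: [A-Z][\w.'\-]*)*"    # defendant (capitalized tokens)
    r", \d+ [A-Za-z0-9. ]+? \d+ \(\d{4}\)"   # volume, reporter, page, (year)
)

def check_citations(brief_text: str) -> dict:
    """Split detected citations into recognized sources and ones flagged for human review."""
    citations = CITATION_PATTERN.findall(brief_text)
    return {
        "recognized": [c for c in citations if c in VERIFIED_AUTHORITIES],
        "needs_human_review": [c for c in citations if c not in VERIFIED_AUTHORITIES],
    }

if __name__ == "__main__":
    brief = (
        "Plaintiff relies on Miranda v. Arizona, 384 U.S. 436 (1966) and "
        "Smith v. Imaginary Corp., 123 F.4th 456 (2099)."
    )
    result = check_citations(brief)
    print("Recognized:", result["recognized"])                 # the real Miranda citation
    print("Flag for review:", result["needs_human_review"])    # the fictional case
```

In practice, citation parsing and source verification are far messier than a single regular expression, which is exactly why a human expert reviews everything the automated pass flags.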
Now, if the attorney in the case above had opted to use TypeLaw to format and finalize his legal brief, we would have provided an additional check, flagging the hallucinated cases and preventing the mishap.
The future of AI-driven legaltech
AI in the legal profession is here to stay, so it’s important that we learn how best to use it, even as we continue to debate whether it’s coming for our jobs.
We encourage attorneys to explore how AI can improve their legal practice and automate rote tasks, but to carefully review any work produced by generative AI.
If you’re interested in using an AI-powered technology that expedites and improves your brief formatting—while still incorporating human expertise—consider giving TypeLaw a try.
Watch the webinar AI & Legal Briefs: Trust But Verify, where experts dive deeper into these issues and discuss how you can use AI to your advantage in your briefing process, while minimizing risk.