Prompt Engineering for Lawyers

Key Takeaways

  • Be Specific: Precision in your requests enhances the accuracy of AI responses.
  • Be Clear: Direct language helps the AI understand and respond more effectively.
  • Be Concise: Too much information can confuse the AI. Clarity is essential.
  • Fact-Check: Training data can be outdated and AI can make errors, so verify everything against current legal databases.
  • Edit and Review: Use AI as a tool to create drafts and refine those drafts for accuracy.
  • Give Context: Background information and/or specific circumstances can improve the output.
  • Avoid Overly Broad or Vague Questions
  • Don’t Use Legal Jargon Without Explanation
  • Avoid Leading or Biased Questions
  • Don’t Ignore the Importance of Context

How to Effectively Communicate with AI

AI is moving at a breakneck pace. AI capabilities keep improving, and integrating them into business practices gets easier by the week. This year will be key for law firms to learn these tools and integrate them into their practices. It will be a year of wider adoption as the tools become more stable and reliable.

Communication is a key consideration when working with AI. Most AI systems require some input to guide them toward what you are trying to achieve. Crafting that input has been termed prompt engineering. A prompt is an input message that guides an AI toward your target goal.

Prompts can be very simple, such as “Can you provide an outline of Illinois HB 2389?”, or much more complex, such as chain-of-thought prompting, where you walk the model through a series of intermediate reasoning steps to get better results on tasks that require reasoning before responding.

An example of chain-of-thought prompting is shown below:

[Figure: Chain-of-thought prompting diagram]

Image Source: Wei et al. (2022)
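
If you interact with a model through its API rather than a chat window, chain-of-thought prompting works the same way: you show the model a worked example that spells out its reasoning before asking your real question. Below is a minimal sketch using OpenAI’s Python library; the model name, dates, and prompt text are illustrative assumptions rather than a recommended setup.

```python
# Minimal chain-of-thought sketch using the OpenAI Python SDK (pip install openai).
# The model name and the example prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

cot_prompt = """Q: A response is due 30 days after service, and service occurred on March 3.
When is the response due?
A: Service occurred on March 3. March has 31 days, so 28 more days reach March 31,
and 2 more reach April 2. The response is due April 2.

Q: A lease requires 60 days' written notice before terminating on December 31.
What is the last day to give notice?
A: Let's work through the steps before giving the final date."""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": cot_prompt}],
)
print(response.choices[0].message.content)
```

The worked first example shows the model the kind of step-by-step counting you want; the second question is then answered in the same style.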

Understanding Prompt Engineering for Lawyers

Prompt engineering for lawyers is a new discipline; it really emerged with the release of ChatGPT in late November 2022. Before we dive into the dos and don’ts, let’s clarify some terminology and what we are actually dealing with when we reference AI today.

AI Today

We are not dealing with the AI we see in movies such as Terminator, I, Robot, and The Creator. Today, we are getting easy access to one piece of that puzzle. The AI in the movies is actually made up of many systems, such as:

  • Deep Learning
  • Generative AI
  • Natural Language Processing (NLP)
  • Computer Vision
  • Robotics
  • And more…

What we are seeing all the buzz around today is Generative AI. This is a subset of deep learning in which a system produces new, realistic content from what it has learned in a very large dataset. These systems aim to mimic human intelligence in tasks such as image recognition, natural language processing, and translation. The most common today are Large Language Models (LLMs), which focus on the language-based tasks. LLMs apply the patterns they learned from their training data to the problems you give them.

Large Language Models (LLMs)

You can think of an LLM as a huge library that stretches as far as the eye can see, filled with every book, article, and letter ever written. It’s unique because it can read and understand everything stored inside its vast collection. This allows you to ask it a question, and the virtual librarian runs through the library’s endless aisles to gather all the information needed to provide you with a response. Like any human, though, the librarian may occasionally misunderstand you and return mixed-up information, but it is always learning and aiming to provide better answers every time you ask.

Machine Learning

You can think of machine learning as a child in the library. This child has a never-ending desire to understand patterns and stories from the books and other materials in the library. Each time they are shown a picture or told a story, they try to figure out the underlying pattern or lesson behind it. They will get things wrong, but through every mistake and correction, they get a little better at predicting and understanding the next thing you show them.

Neural Network

This can be thought of as a group of children working together. Each child specializes in a different genre or subject in the library. When asked complex questions, they work together, passing notes among themselves and combining their specialized knowledge to come up with a comprehensive answer. Like a single child, the group can make mistakes, but it learns from them too, which allows the children to refine their cooperation and improve their collective understanding over time. This is how, piece by piece, question by question, they become better at helping you find exactly what you are looking for.

Now, when you communicate with the librarian, you need to be very specific about what you are looking for, almost as if you are talking to a child who is going to learn as they work with you. If you ask to be told a story, you could get anything, but if you ask for a “funny story about a dog that loves pant legs,” you are giving specific details to work with, and you’ll get a much more specific and interesting story.

The Dos of Prompt Engineering for Lawyers

Be Specific About What You Ask

Specify the type of information or insights you are looking for. This can influence the accuracy and relevance of the response. Going back to the story example, instead of asking, “Tell me a story,” ask, “Tell me a funny story about a dog that loves pant legs.”

Example Prompt: “Identify key differences in copyright infringement laws between the United States and the European Union post-2018.”

Be Clear About What You Ask

Avoid ambiguous language, and be direct in your request. Clarity helps the AI understand the query better and reduces the chances of getting irrelevant information.

Example Prompt: “Explain the standard procedure for filing a patent application in Japan, including necessary documents and timelines.”

Be Concise

While it is good to be detailed, using too many words can confuse the AI. Try to be clear and specific without adding extra information that is unnecessary. This helps keep your question on point.

Example Prompt: “List the steps for initiating a small claims case in Texas, including any required forms.”

Fact-Check the Output

Always, and let me repeat that, ALWAYS verify AI-generated information against current legal databases and resources. LLMs are not up to date. They are trained on data up to a certain point and can therefore produce outdated or incorrect information. Even worse, in some cases, they will just make something up.

Example Outcome of Not Fact-Checking: A lawyer in New York used ChatGPT to pull case law and did not fact-check it against the legal databases available. The judge was not familiar with the cited cases, went to look them up, and could not find them anywhere. When confronted, the lawyer admitted he had used ChatGPT and had not checked the case law it returned. It turned out ChatGPT had simply made up the cases.

Edit and Review the Output

Use the initial response as a draft product. Edit and refine it to your liking and understanding. This way, you can tailor it more closely to your needs and catch and correct any inaccuracies.

Practical Tip: After generating your draft document, review and adjust the language to ensure it meets legal standards and is appropriate for your intended audience.

Give Context

Providing background information or the specific circumstances surrounding your request can significantly improve the relevance and precision of the response. There is a fine line here, though, between giving context and keeping to the earlier tip about being concise. There is nothing wrong with testing longer requests; in some cases, the added detail is necessary.

Example Prompt: “Considering the recent amendments to the commercial leasing laws in New York, how might these changes impact tenant obligations?”

Note: In situations like this, it is often better to provide the recent amendments or other source material as an attached .docx file. A revised prompt when doing that would be: “Considering the recent amendments in the attached document to the commercial leasing laws in New York, how might these changes impact tenant obligations?”

Note: You can take this a step further by first uploading the document and asking the AI to “Please review the attached document and summarize what is in it so I know you understand it all.” Then follow up with the previous prompt. This way, you make sure it reads and analyzes the full document.
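
If you prefer to work through the API instead of the chat interface’s attachment feature, one way to reproduce this two-step workflow is to extract the document text yourself and keep both turns in the same conversation. This sketch assumes the python-docx and openai packages; the file name and model name are placeholders.

```python
# Two-step "summarize first, then ask" workflow over an attached document.
# Assumes: pip install python-docx openai, and OPENAI_API_KEY set in the environment.
from docx import Document
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

# Extract the text of the .docx so it can be placed directly into the prompt.
doc_text = "\n".join(p.text for p in Document("ny_leasing_amendments.docx").paragraphs)

# Step 1: ask the model to summarize the document so we can confirm it "read" everything.
messages = [{
    "role": "user",
    "content": "Please review the following document and summarize what is in it "
               "so I know you understand it all:\n\n" + doc_text,
}]
summary = client.chat.completions.create(model=MODEL, messages=messages)
print(summary.choices[0].message.content)

# Step 2: keep the summary in the conversation and ask the substantive question.
messages.append({"role": "assistant", "content": summary.choices[0].message.content})
messages.append({
    "role": "user",
    "content": "Considering the recent amendments in that document to the commercial "
               "leasing laws in New York, how might these changes impact tenant obligations?",
})
answer = client.chat.completions.create(model=MODEL, messages=messages)
print(answer.choices[0].message.content)
```

Keeping the summary in the message history is what lets the second question build on the first, mirroring the two-step approach described in the notes above.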

The Don’ts of Prompt Engineering for Lawyers

Avoid Overly Broad or Vague Questions

Prompts that are overly broad or vague can lead to answers that do not help. In law, specificity matters because even small details can change how laws are understood or applied. A broad question, for example, can produce an overwhelming amount of output, most of which will not apply to your issue or question.

Example of a Poor Prompt: “Tell me about property law.”

This is broad and does not specify the jurisdiction, type of property law (commercial, residential), or the issue you’re trying to resolve.

Example of a Good Prompt: “Explain the process of transferring residential property titles in Florida, focusing on any mandatory disclosure requirements.”

Don’t Use Legal Jargon Without Explanation

Using dense legal jargon or overly technical terms without context can confuse AI models, especially when a term has multiple meanings or is used differently in different jurisdictions.

The model may misinterpret the terminology, which leads to incorrect or irrelevant information.

Example of a Poor Prompt: “Discuss the application of the doctrine of laches in recovery actions.”

Without specifying jurisdiction or context, the AI may provide incorrect information.

Example of a Good Prompt: “Explain how the doctrine of laches is applied in California copyright infringement cases.”

Avoid Leading or Biased Questions

LLMs have no opinions of their own, but the information they are trained on can carry bias, and that bias can surface in responses to your requests. To avoid adding more bias, do not ask questions that imply a certain answer or are skewed toward a particular response. Otherwise, you may simply get your assumptions confirmed rather than an objective analysis.

This can result in a lack of comprehensive insight or in critical aspects of a case being overlooked, reducing the effectiveness of your legal research or argument preparation.

Example of a Poor Prompt: “Why is the opposing party’s argument in this case unfounded?”

This assumes that the argument has no foundation, which may not be an objective starting point for analysis.

Example of a Good Prompt: “What are the strengths and weaknesses of the opposing party’s argument in this case?”

Don’t Ignore the Importance of Context

If you do not provide enough context or background information, it can lead the AI to make assumptions or ignore critical factors relevant to the request.

This can have a drastic impact on the accuracy and usefulness of the response.

Example of a Poor Prompt: “How to draft a will”

Example of a Good Prompt: “What are the legal requirements for drafting a valid will in New York, considering a scenario involving overseas assets?”

Limitations and Ethical Considerations

These tools have limitations, and those limitations start with the LLMs and the data they are trained on.

  • LLMs are trained on a fixed snapshot of data, and the largest models are reported to have been trained on trillions of tokens (a widely repeated figure for GPT-4 is roughly 13 trillion). The term “token” refers to a chunk of text that the model reads or generates. A token is typically not a whole word; it is usually a character, a piece of a word, or a short word (see the short sketch after this list).
  • AI does not understand legal principles or ethics in the human sense; it processes patterns in data. It lacks the understanding and judgment that come naturally to experienced legal professionals. Think of it as a smart word sorter: it can put words together in ways it has learned, but it does not really understand laws or right and wrong the way a lawyer does.
  • Outdated information, misinterpretation of complex legal questions, and overlooked nuances in a case are all possible, which is why human oversight is still required. Use AI to augment your practice, not to replace an expert’s judgment and expertise. Always double-check the AI’s work as you would a junior lawyer’s draft.
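
If you are curious how text actually breaks into tokens, you can experiment with OpenAI’s tiktoken library. This is only an illustration of tokenization; the sentence and encoding choice below are arbitrary examples.

```python
# Quick look at how a sentence splits into tokens (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several recent OpenAI models

text = "The doctrine of laches bars claims brought after unreasonable delay."
tokens = enc.encode(text)

print(len(text.split()), "words ->", len(tokens), "tokens")
print([enc.decode([t]) for t in tokens])  # shows how words split into sub-word pieces
```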

Personally Identifiable Information (PII)

PII is another thing to watch out for. From a privacy standpoint, you should not put anything into a consumer tool like ChatGPT that can identify someone; if your settings are not configured correctly, the provider can use that information for training.

Public data is an exception here since it’s already available online, so business phone numbers and employee names are not as much of a concern, but Social Security numbers and individual cellphone numbers are.

Also, be careful about how you combine this information. While a name might be public, the unique case situation may not be, so putting them together could expose information that should remain private. You can always anonymize the information before submitting it.
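
One low-tech way to anonymize is a redaction pass on your own machine before anything is pasted into an AI tool. The sketch below is only a starting point under narrow assumptions: the patterns cover common US Social Security number and phone number formats, and real matters will usually need more thorough redaction (names, addresses, account numbers) or a dedicated tool.

```python
# Minimal redaction pass for a couple of common US PII formats.
import re

def redact(text: str) -> str:
    """Mask SSNs and US phone numbers before text is shared with an AI tool."""
    # Social Security numbers written as 123-45-6789
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN REDACTED]", text)
    # Phone numbers such as (312) 555-0199, 312-555-0199, or 312.555.0199
    text = re.sub(r"\(?\b\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE REDACTED]", text)
    return text

print(redact("Client SSN 123-45-6789, reachable at (312) 555-0199."))
# -> Client SSN [SSN REDACTED], reachable at [PHONE REDACTED].
```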

In ChatGPT, you can adjust this in the settings by turning off Chat History & Training in the Data Controls section. New chats will then not appear in your chat history and will not be used to train OpenAI’s models. Be aware that OpenAI still retains conversations for 30 days for safety monitoring before deleting them.

We highly recommend that you embrace this technology and begin using it, even if only to gain experience with the tools and a better understanding of how they work. Prompt engineering is how you leverage AI effectively and ethically, so get some hands-on practice. This technology is not going anywhere and will be part of business regardless of industry; we are already seeing heavy use of it in many fields. Now is the time for adoption.

If you want to stay ahead of the curve, sign up for more exclusive law firm industry updates here.