With the emergence of large language models, prompt engineering has become an essential skill. Put simply, a prompt is how a human communicates a requirement to a machine. Engineering the prompt means expressing that requirement effectively, so that the machine’s responses are contextual, relevant, and accurate.
The Framework
The prompt engineering framework shared in this article will significantly enhance your interactions with AI systems. Let’s learn to create powerful prompts by following the six-step framework: persona, context, task, show me how, expected output, and tone.
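Before diving into each element, here is a minimal sketch in Python of how the six elements could be stitched together into a single prompt string. The build_prompt helper and its parameter names are purely illustrative and not part of any library.

def build_prompt(persona, context, task, show_me_how, expected_output, tone=""):
    """Assemble the six framework elements into one prompt string.

    Any element left empty (tone is optional) is simply skipped.
    """
    parts = [persona, context, task, show_me_how, expected_output, tone]
    return " ".join(part.strip() for part in parts if part and part.strip())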
1. Persona
Consider a persona as the go-to person or domain expert you’d approach to solve a particular task. A persona works the same way here, except that the expert is now the model you are interacting with. Assigning a persona to the model is equivalent to giving it a role or identity, which helps set the appropriate level of expertise and perspective for the task at hand.
Example: “As an expert in sentiment analysis through customer care conversations…”
The model, trained on a huge corpus of data, is now instructed to tap into the knowledge and perspective of a data scientist performing sentiment analysis.
2. Context
Context provides the background information and the scope of the task that the model must be aware of. Such an understanding of the situation could include facts, filters, or constraints that define the environment in which the model needs to respond.
Example: “… analyzing call records to understand the customer pain points and their sentiments from the call details between a customer and agent”
This context highlights the specific case of call center data analysis. Providing context is itself an optimization problem: too much context can obscure the actual objective, while too little limits the model’s ability to respond appropriately.
3. Task
The task is the specific action the model must take; it is the core objective of your prompt. I call it 2C, clear and concise, meaning the model should be able to understand exactly what is expected of it.
Example: “… analyze the data and learn to compute the sentiment from any future conversation.”
4. Show Me How
Note that there is no free lunch. Large language models have been shown to hallucinate, meaning they tend to produce misleading or incorrect results. As Google Cloud explains, “These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.”
One way to limit such behavior is to ask the model to explain how it arrived at the response, rather than just share the final answer.
Example: “Provide a brief explanation highlighting the words and the reasoning behind the computed sentiment.”
5. Expected Output
Most of the time, we need the output in a specified format that is clear and easy to follow. Depending on how the user consumes the information, the output could be organized as a list, a table, or a paragraph.
Example: “Share the response for the given call summary in a two-point format, including Customer sentiment and Keywords that reflect the sentiment category…”
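To make the format concrete, a response following this instruction might look something like the illustration below (the call details and keywords are invented purely to show the shape of the output):

Customer sentiment: Negative
Keywords: delayed refund, long hold time, frustrated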
6. Tone
Although specifying the tone is often considered optional, doing so helps tailor the language to the intended audience. The model can adjust its response to various tones, such as casual, direct, or cheerful.
Example: “Use a professional yet accessible tone, avoiding overly technical jargon where possible.”
Putting It All Together
Great, so we have discussed all six elements of the prompting framework. Now, let’s combine them into a single prompt:
“As an expert in sentiment analysis through customer care conversations, you are analyzing call records to understand the customer pain points and their sentiments from the call details between a customer and agent. Analyze the data and learn to compute the sentiment from any future conversation. Provide a brief explanation highlighting the words and the reasoning behind the computed sentiment. Share the response for the given call summary in a two-point format, including Customer sentiment and Keywords that reflect the sentiment category. Use a professional yet accessible tone, avoiding overly technical jargon where possible.”
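If you want to try the combined prompt programmatically, here is a minimal sketch that reuses the build_prompt helper sketched earlier and sends the prompt to a chat model via the OpenAI Python client. The model name and the sample call summary are placeholders; substitute whichever model and data you actually work with, and make sure an API key is configured.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

prompt = build_prompt(
    persona="As an expert in sentiment analysis through customer care conversations,",
    context=("you are analyzing call records to understand the customer pain points "
             "and their sentiments from the call details between a customer and agent."),
    task="Analyze the data and learn to compute the sentiment from any future conversation.",
    show_me_how=("Provide a brief explanation highlighting the words and the reasoning "
                 "behind the computed sentiment."),
    expected_output=("Share the response for the given call summary in a two-point format, "
                     "including Customer sentiment and Keywords that reflect the sentiment category."),
    tone="Use a professional yet accessible tone, avoiding overly technical jargon where possible.",
)

# Placeholder call summary used purely for illustration.
call_summary = "Customer called about a delayed refund and grew frustrated after long hold times."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; use any chat model you have access to
    messages=[
        {"role": "system", "content": prompt},      # the framework prompt sets up the task
        {"role": "user", "content": call_summary},  # the data the model should analyze
    ],
)
print(response.choices[0].message.content)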
Benefits of Effective Prompting
Not only does this framework lay the groundwork for a clear ask, but it also adds the necessary context and describes the persona to tailor the response to the specific situation. Asking the model to show how it arrives at its results adds further depth.
Mastering the art of prompting comes with practice and is a continuous process. Practicing and refining the prompting skills allows us to extract more value from AI interactions.
It is similar to experiment design when building machine learning models. I hope this framework provides you with a solid structure; however, do not feel restricted by it. Use it as a baseline to experiment further, and keep adjusting based on your specific needs.
Vidhi Chugh is an AI strategist and a digital transformation leader working at the intersection of product, sciences, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, an author, and an international speaker. She is on a mission to democratize machine learning and break the jargon for everyone to be a part of this transformation.