Six Prompt Engineering Strategies
by Daisy Grant, Prossimo Global Partners
Welcome to the cutting-edge world of generative AI, where platforms like ChatGPT are redefining the boundaries of human-computer interaction. As these technologies become increasingly sophisticated, the ability to effectively communicate and extract desired outcomes from them becomes paramount.
Whether you're a seasoned user or new to the realm of artificial intelligence, understanding how to enhance your results with these platforms is essential.
This blog post delves into six practical strategies that will empower you to get the most out of ChatGPT or any generative AI system.
From crafting clear instructions to leveraging external tools, these methods will guide you in unlocking the full potential of AI in your daily tasks and queries. Let's embark on this journey of discovery and learn how to optimize our interactions with these remarkable tools.
“OpenAI released a reference guide to improve results from ChatGPT or any generative AI platform. These methods aren’t mutually exclusive. You can use many of them in combination for greater effect.
The point is, experiment!”
Below are six strategies you can use for better results.
1) Write clear instructions
These models can’t read your mind. If outputs are too long, ask for brief replies. If outputs are too simple, ask for expert-level writing. If you dislike the format, demonstrate the format you’d like to see. The less the model has to guess at what you want, the more likely you’ll get it.
Tactics:
Include details in your query to get more relevant answers
Ask the model to adopt a persona
Use delimiters to clearly indicate distinct parts of the input
Specify the steps required to complete a task
Provide examples
Specify the desired length of the output
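As a concrete illustration, here is a minimal sketch using the openai Python SDK (the model name, prompts, and wording are placeholders, not a prescribed recipe). It contrasts a vague request with one that spells out audience, length, and format:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A vague prompt leaves the model guessing about audience, depth, and format.
vague_prompt = "Tell me about prompt engineering."

# A clear prompt states the audience, the length, and the exact format wanted.
clear_prompt = (
    "Explain prompt engineering to a marketing team with no coding background. "
    "Write exactly three bullet points, each under 20 words, "
    "and end with one practical tip they can try today."
)

# Run both so you can compare how much the added detail changes the output.
for label, prompt in [("vague", vague_prompt), ("clear", clear_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```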
2) Provide reference text
Language models can confidently invent fake answers, especially when asked about esoteric topics or for citations and URLs. In the same way that a sheet of notes can help a student do better on a test, providing reference text to these models can help in answering with fewer fabrications.
Tactics:
Instruct the model to answer using a reference text
Instruct the model to answer with citations from a reference text
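One way to put this into practice is sketched below, assuming the openai Python SDK; the delimiters, refusal phrasing, and placeholder reference text are just one possible convention:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Reference material the answer must be grounded in (a stand-in snippet here).
reference_text = """Returns are accepted within 30 days of purchase.
Refunds are issued to the original payment method within 5 business days."""

question = "How long do I have to return an item?"

# Delimit the reference text and tell the model to refuse when the answer
# is not in the provided material, which cuts down on fabricated answers.
messages = [
    {
        "role": "system",
        "content": (
            "Answer using only the reference text delimited by triple quotes. "
            "If the answer is not in the text, reply: 'I could not find an answer.'"
        ),
    },
    {"role": "user", "content": f'"""{reference_text}"""\n\nQuestion: {question}'},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=messages,
)
print(response.choices[0].message.content)
```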
3) Split complex tasks into simpler subtasks
Just as it is good practice in software engineering to decompose a complex system into a set of modular components, the same is true of tasks submitted to a language model. Complex tasks tend to have higher error rates than simpler tasks. Furthermore, complex tasks can often be re-defined as a workflow of simpler tasks in which the outputs of earlier tasks are used to construct the inputs to later tasks.
Tactics:
Use intent classification to identify the most relevant instructions for a user query
Summarize long documents piecewise and construct a full summary recursively
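For example, the piecewise-summarization tactic above can be sketched roughly as follows (assuming the openai Python SDK; the chunk size and prompt wording are arbitrary choices for illustration):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize(text: str) -> str:
    """Ask the model for a short summary of one chunk of text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"Summarize in 3 sentences:\n\n{text}"}],
    )
    return response.choices[0].message.content

def summarize_document(document: str, chunk_size: int = 4000) -> str:
    """Split a long document into chunks, summarize each chunk,
    then summarize the combined chunk summaries into one overview."""
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    partial_summaries = [summarize(chunk) for chunk in chunks]
    return summarize("\n\n".join(partial_summaries))
```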
4) Give the model time to “think”
If asked to multiply 17 by 28, you might not know it instantly, but can still work it out with time. Similarly, models make more reasoning errors when trying to answer right away, rather than taking time to work out an answer. Asking for a “chain of thought” before an answer can help the model reason its way toward correct answers more reliably.
Tactics:
Instruct the model to work out its own solution before rushing to a conclusion
Use inner monologue or a sequence of queries to hide the model’s reasoning process
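To make the first tactic concrete, here is one possible shape for a "work it out first" prompt (a sketch, assuming the openai Python SDK; the grading scenario and wording are illustrative only):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

student_answer = "17 x 28 = 456"  # an incorrect answer to be checked (17 x 28 is 476)

# Ask the model to solve the problem itself *before* judging the student,
# instead of immediately declaring the answer right or wrong.
messages = [
    {
        "role": "system",
        "content": (
            "First work out your own solution to the problem step by step. "
            "Only after you have your own answer, compare it to the student's "
            "answer and say whether the student is correct."
        ),
    },
    {
        "role": "user",
        "content": f"Problem: What is 17 x 28?\nStudent's answer: {student_answer}",
    },
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=messages,
)
print(response.choices[0].message.content)
```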
5) Use external tools
Compensate for the weaknesses of the model by feeding it the outputs of other tools. For example, a text retrieval system (sometimes called RAG or retrieval augmented generation) can tell the model about relevant documents. A code execution engine like OpenAI’s Code Interpreter can help the model do math and run code. If a task can be done more reliably or efficiently by a tool rather than by a language model, offload it to get the best of both.
Tactics:
Use embeddings-based search to implement efficient knowledge retrieval
Use code execution to perform more accurate calculations or call external APIs
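As an illustration of the embeddings tactic, the sketch below ranks a handful of documents against a query by cosine similarity (assuming the openai Python SDK and numpy; the embedding model name and documents are placeholders):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

documents = [
    "Our refund policy allows returns within 30 days.",
    "Support hours are 9am to 5pm, Monday through Friday.",
    "Shipping is free on orders over $50.",
]
query = "When can I reach customer support?"

def embed(texts: list[str]) -> np.ndarray:
    """Return one embedding vector per input string."""
    result = client.embeddings.create(
        model="text-embedding-3-small",  # placeholder embedding model
        input=texts,
    )
    return np.array([item.embedding for item in result.data])

doc_vectors = embed(documents)
query_vector = embed([query])[0]

# Cosine similarity between the query and each document.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
print("Most relevant document:", documents[int(np.argmax(scores))])
# The retrieved text can then be passed to the model as reference material.
```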
6) Test changes systematically
Improving performance is easier if you can measure it. In some cases a modification to a prompt will achieve better performance on a few isolated examples but lead to worse overall performance on a more representative set of examples. Therefore, to be sure that a change is net positive to performance, it may be necessary to define a comprehensive test suite (also known as an “eval”).
Tactic:
Evaluate model outputs with reference to gold-standard answers
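A tiny eval harness might look like the sketch below (assuming the openai Python SDK; the two test cases and the exact-match scoring are deliberately simplistic stand-ins for a real gold-standard suite):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Gold-standard test cases: (question, expected answer). A real eval would
# use many more examples and a more robust scoring rule than substring match.
test_cases = [
    ("What is the capital of France?", "Paris"),
    ("What is 2 + 2?", "4"),
]

def run_eval(system_prompt: str) -> float:
    """Return the fraction of test cases the prompt answers correctly."""
    correct = 0
    for question, expected in test_cases:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
            ],
        )
        answer = response.choices[0].message.content.strip()
        correct += int(expected.lower() in answer.lower())
    return correct / len(test_cases)

# Compare two prompt variants on the same suite before adopting a change.
print("Baseline:", run_eval("Answer concisely."))
print("Variant: ", run_eval("Answer with a single word or number."))
```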
In conclusion, enhancing your experience with ChatGPT or any generative AI platform requires a blend of clear communication, strategic task breakdown, and the judicious use of external resources.
The six strategies outlined above—from providing detailed instructions and reference texts to leveraging external tools and systematic testing—are pivotal in fine-tuning the AI's performance to meet your specific needs. Remember, these models thrive on precision and context, so the more accurately you articulate your requirements, the better the results.
Embrace experimentation as a key part of the process; it's through trial and error that you'll discover the most effective ways to interact with these advanced technologies. By adopting these techniques, you're not just passively receiving outputs; you're actively shaping the AI's capabilities to deliver more relevant, accurate, and useful responses. Keep exploring, keep refining, and watch as your AI interactions become more fruitful and aligned with your objectives.
Daisy Grant specializes in AI, machine learning, and data science for leaders. She holds a professional certification from the University of Chicago, along with additional credentials in Data Storytelling and Data Literacy, and provides creative services and special consulting for Prossimo Global Partners.