

The ultimate guide to AI prompt engineering


16 min read

Jun 25, 2024

Surrealist image of a man writing on a desk attached to a tiny red house

Prompt engineering guide for beginners and advanced AI users. Explore techniques, tools, and best practices to improve your AI requests and responses.

Casimir Rajnerowicz


Product Content Writer

Prompt engineering means writing precise instructions for AI models. These instructions are different from coding because they use natural language.

And today, everybody does it—from software developers to artists and content creators. Prompt engineering can help you improve productivity and save time by automating repetitive tasks.

In this article, we’ll dive into the essence of prompt engineering and its importance across various industries. We’ll cover:

  • What is prompt engineering in AI? An explanation of different types and their applications.

  • ChatGPT prompt engineering guide for beginners: A step-by-step tutorial with practical examples.

  • An overview of techniques and tools used for designing the best prompts.

  • Best practices and tips: Strategies to refine your prompt engineering skills.

By the end of this guide, you’ll be equipped to communicate effectively with generative AI and large language models and get the outcomes you want, regardless of your field or experience level.

And, if you want even better control over your outputs for real-life applications of AI and ChatGPT in business, try V7 Go to use AI models at scale for free.

A Generative AI tool that automates knowledge work like reading financial reports that are pages long

Knowledge work automation

AI for knowledge work

Get started today


You can also use V7 Go to follow our guide below and test different prompt examples.

Are you ready? Let’s start with some core definitions.

What is prompt engineering in AI?

An AI prompt is a carefully crafted instruction given to an AI model to generate a specific output. These outputs can range from text and images to videos or even music.

Prompt engineering means writing precise instructions that guide AI models like ChatGPT to produce specific and useful responses. It involves designing inputs that an AI can easily understand and act upon, ensuring the output is relevant and accurate.

Prompt engineering is essential for improving the performance of AI in various tasks, such as answering customer inquiries, generating content, or analyzing data. Compare two ChatGPT examples below:

Examples of different responses to different prompts in ChatGPT

If we refine our request, we get a response that aligns more with what we need.

Rather than a rigid technical skill, prompt engineering is more about mastering the art of communicating with AI to achieve consistent, controllable, and repeatable results. Although the term "engineering" suggests a highly technical process, it's actually all about using strategic thinking and problem-solving to interact effectively with AI.

Here is another example of adjusting the results by tweaking the input prompt in Midjourney:

Prompt:

photography, person standing in front of the Eiffel Tower at sunrise, wearing a red coat and holding a cup of coffee --ar 3:2

If we change the first word, we get images that will emulate different styles.

Midjourney images generated for different techniques:

Examples of different MidJourney images showing the same topic but using varying styles

As you can see, some prompts contain additional parameters or commands that are slightly more technical. For example, the --ar element in Midjourney prompts lets us fix the image aspect ratio (--ar 1:1 produces a square).
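When generating prompts in bulk, such parameters can be appended programmatically. A minimal sketch (the helper function is ours; the --ar syntax is Midjourney's):

```python
def build_prompt(description: str, aspect_ratio: str = "") -> str:
    """Assemble a Midjourney-style prompt; parameters like --ar go at the end."""
    prompt = description.strip()
    if aspect_ratio:
        prompt += f" --ar {aspect_ratio}"  # e.g. "3:2" landscape, "1:1" square
    return prompt

print(build_prompt(
    "photography, person standing in front of the Eiffel Tower at sunrise,"
    " wearing a red coat and holding a cup of coffee",
    aspect_ratio="3:2",
))
```

The same pattern extends to other trailing parameters (such as --v or --no) if you need them.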

How to become a prompt engineer?

So, you want to be a prompt engineer? The reality is that it’s not really a standalone career path.

Let’s use an analogy—

If you think about it, people don’t become writers simply because they’ve just learned to read and write. “Writers” are professional storytellers, journalists, or academics—who just happen to write a lot to tell a story or share knowledge. Writing is a means to achieving specific goals.

Prompt engineering works similarly. It is more of a skill than a profession in its own right.

AI is commonly used by software developers, marketers, lawyers, and teachers. However, in a professional context, prompt engineering usually complements other skills.

To understand how prompt engineering can be applied, let's look at some common use cases.

An infographic showing the importance of prompt engineering for specific industries and use cases

While prompt engineering has numerous applications, it's also important to understand its limitations and challenges.

Limitations of prompt engineering

When it comes to prompt engineering, there's both good news and bad news.

The good news is that AI has reached a point where it can understand natural language (thanks to NLP technologies), allowing us to express our needs in plain, descriptive terms without needing to code.

The “bad” news, however, is that you still need to have a clear understanding of what you want, describe it in detail, and communicate your request effectively.

A satirical cartoon image showing a robot overwhelmed by too many requests

Moreover, AI can still struggle with nuances, context, and subtleties, meaning it might not always get every aspect of your request perfectly.

Even with very explicit instructions, you may still struggle to get repeatable results. This is especially true when deploying applications and business solutions powered by LLMs: even at high performance levels, occasional mistakes are inevitable when processing hundreds or thousands of requests.

An example of ChatGPT not following the prompt

That’s why it is important to use tools that help us control the outputs of our models, for instance by limiting responses to a set of fixed options. Here is an example of how V7 Go lets you define the options that ChatGPT can choose from.

 

Different types of prompt engineering explained

To get the most out of your prompts, you should understand different types of AI prompts. Here are some common types and techniques:

One-shot and few-shot prompts

These techniques involve providing the AI with examples of the desired task or output before asking it to complete a similar task. By showing the model what is expected through one or a few examples, the AI learns the context and format needed and applies it to new inputs. This is particularly useful for specialized or less common tasks. AI generates a more accurate response based on the example included in the prompt.


Notice that in the conversation above, AI also prepared the lyrics because it tried to mimic the format of the example.
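In API terms, a few-shot prompt can be assembled as a list of example exchanges placed before the real request. A minimal sketch (the helper and the example data are ours, but the role-based message format matches OpenAI-style chat APIs):

```python
def build_few_shot_messages(system: str, examples: list, query: str) -> list:
    """Build a chat-style message list: each (input, output) example pair
    is shown to the model before the real query."""
    messages = [{"role": "system", "content": system}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": query})
    return messages

examples = [("Title: Bohemian Rhapsody\nArtist: Queen", "Genre: rock")]
messages = build_few_shot_messages(
    "You classify songs by genre.",
    examples,
    "Title: Take Five\nArtist: Dave Brubeck",
)
```

The model sees the example exchange as if it had already answered that way, so it tends to mirror the same format for the new query.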

Zero-shot prompts

Unlike few-shot, zero-shot prompts require the AI to perform tasks without any prior examples—based solely on its pre-training. This is used to assess the AI’s ability to generalize from its training to new tasks.


Chain-of-thought prompts

These prompts guide the AI to follow a logical progression or reasoning pathway to reach a conclusion or solve a problem. The prompt encourages the AI to detail its thought process step-by-step, which is helpful for complex decision-making or problem-solving tasks where understanding the rationale is as important as the answer itself. In some cases, even adding a phrase like “explain your reasoning” will improve the quality of our final response.
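The technique can be as simple as appending a reasoning instruction to the question. A tiny illustration (the wrapper function is a hypothetical helper):

```python
def chain_of_thought(question: str) -> str:
    """Append a step-by-step instruction so the model spells out its reasoning."""
    return (
        f"{question}\n\n"
        "Think through the problem step by step and explain your reasoning "
        "before giving the final answer."
    )

print(chain_of_thought(
    "A train leaves at 9:40 and arrives at 11:05. How long is the trip?"
))
```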

Iterative refinement prompts

In this approach, the initial response from the AI is refined through subsequent prompts, each aimed at improving or specifying the response further. This can involve correcting errors, asking for more details, or re-directing the AI's approach.


Hybrid prompts

Combining multiple techniques, hybrid prompts might integrate direct instructions with creative challenges or conditional elements with exploratory questions to guide the AI more effectively according to complex needs.

How to learn AI prompt engineering: Start by using an AI prompting tool like V7 Go to compare responses from different models to the same request. Practice few-shot and zero-shot learning techniques. Try out various prompt engineering methods, like using the output of one model as input to another model.

Meta-prompts

These are higher-level prompts that ask the AI to consider its own capabilities or reflect on the type of reasoning it uses. This can be used to adjust its approach or to develop new strategies for answering questions or solving problems.

Each of the prompt engineering techniques mentioned above can be adapted and combined depending on the specific requirements of the task at hand and the capabilities of the AI model being used. Effective prompt engineering involves a deep understanding of these techniques and the ability to apply them creatively.

How to write better AI prompts: prompt engineering guide

To craft effective AI prompts, it's essential to understand the nuances and techniques that can enhance the accuracy and relevance of AI responses. Here are some key aspects to consider:

1. Know the difference between system prompts/main instructions and individual requests

Tools like ChatGPT or DALL-E pair user requests with “system prompts,” which provide broader rules. For example, if we request a caricature of Donald Trump or Elon Musk, it won’t work because the default system prompt includes a hidden policy, with point 6 stating:

Do not create images of politicians or other public figures. Recommend other ideas instead.


Still, sometimes AI can be tricked into providing results it is not supposed to share. These are known as jailbreak prompts. While fun, it's best to solve specific problems through normal prompt engineering without tricking AI models.

In some cases, we can adjust the core instructions instead. For example, transforming a “helpful assistant” into an “unhelpful assistant.”

 

When writing your requests, investigate the main prompt the model is using and ensure it doesn’t clash with your specific requests. Sometimes system prompts take precedence over user prompts, and the results may disappoint you. For example, we might ask the AI to write a 1,500-word essay while the system prompt tells it to keep answers shorter than 500 words.

2. Provide context and ask for the solution to your ultimate problem

This may sound counterintuitive, but it's very easy to overthink your problem. When AI doesn’t understand your final goal, you can spend hours framing the problem incorrectly. It's better to explain the final goal clearly rather than trying to solve intermediate steps without context.

Here is an example of a person trying to figure out the standard 4:3 resolution used in old-school video games. They remember that the height is above 700 pixels.

The answer is quite straightforward.


OK. So, the XGA resolution is 1024x768 pixels.

Now, what would have happened if instead the problem was presented as more of an abstract math puzzle?


We get weird calculations and suggestions focused on the mathematical criteria specified in the prompt, but no answer to our problem.

Let’s confront ChatGPT with the situation:


3. Get a good grasp of the medium or form you are trying to emulate

To write good prompts, we must understand terminology related to styles, forms, and formats. For example, for AI-generated music, it is crucial to have knowledge about different music genres.

Here is what the prompting interface of Udio looks like (notice the Lyrics Tip with hints for effective prompting):


And, here is an example of an AI-generated song. The main prompt uses genres (like Nu-metal) but also moods and other keywords.


Also, the prompt for lyrics contains additional information about the song structure. By adjusting where to put a verse, an instrumental solo, a bridge, a chorus, and so on, we can have better control over the final result.


4. Include one-shot or few-shot examples in the prompt

Providing examples within the prompt helps the AI understand the desired format and style. For instance, if you're asking for a product description for your website, include a few examples of existing descriptions to guide the AI.

MidJourney generation with a reference image uploaded

With image generators you can also add reference images to influence the results. For example, Midjourney allows you to add image links to your prompt and uses them for guidance.

Generative AI tool that turns a pitch deck into structured information from unstructured input

Data extraction powered by AI

Automate data extraction

Get started today


5. Attach files to your prompts or build a custom knowledge base for your model

OpenAI and Google models support images, PDFs, CSV files, audio, and many other file types. Sharing documents and files with the model can significantly enhance its understanding and the quality of the output. This approach is particularly beneficial for tasks involving document analysis, data extraction, and more complex operations that require context from specific files.

However, scaling these processes can be difficult with prompt engineering alone. Performing more complex tasks, like comparing documents for compliance automation, may require an external tool.

 

Various AI platforms and tools support document sharing for enhanced operations. For example, V7 Go offers features that allow users to upload documents and perform complex data operations seamlessly. By integrating these tools into your workflow, you can leverage AI’s capabilities to handle document-intensive tasks.

6. Learn how to use additional parameters and weight control

If we need an additional layer of control, we can adjust parameters like response temperature. To unlock this functionality, we need to access GPT models directly by using our OpenAI API key.


The temperature setting determines the randomness of the output. A lower temperature (closer to 0) makes the model more deterministic, producing more focused and predictable responses. Conversely, a higher temperature (closer to 1) increases the model's creativity and variability, allowing for more diverse and unexpected outputs.
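As a sketch, here is roughly what the request body for a chat completions call looks like with an explicit temperature. The model name is a placeholder and the helper function is ours; the field names follow the shape of OpenAI's chat API:

```python
import json

def chat_request(prompt: str, temperature: float = 0.2) -> dict:
    """Build the JSON body for a chat completions call. A temperature
    close to 0 keeps answers focused; closer to 1 makes them more varied."""
    if not 0.0 <= temperature <= 2.0:  # OpenAI accepts 0-2; many UIs cap at 1
        raise ValueError("temperature out of range")
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

payload = chat_request("Summarize this report in three bullet points.", temperature=0.0)
print(json.dumps(payload, indent=2))
```

For deterministic business workflows you would typically pin the temperature low and vary only the prompt.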

Each AI solution offers advanced parameters and unique systems for controlling model outputs. For example, many implementations of Stable Diffusion use a system of brackets for determining the importance of specific words within your prompt.

Take a look at the example below:


A prompt like (((blue))) robot dog results in an image that is more blue than the other examples (it even uses a blue background). If we shift the emphasis to the word robot and put it in triple brackets, we get a picture of a robot on all fours that looks more like a generic toy robot than a dog.
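The bracket syntax is easy to generate programmatically. A tiny hypothetical helper:

```python
def emphasize(word: str, level: int = 1) -> str:
    """Wrap a word in `level` pairs of parentheses, the weighting syntax
    used by many Stable Diffusion front ends."""
    return "(" * level + word + ")" * level

prompt = f"{emphasize('blue', 3)} robot dog"
print(prompt)  # (((blue))) robot dog
```

Note that the exact weighting rules (and whether brackets are supported at all) differ between Stable Diffusion interfaces, so check your tool's documentation.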

7. Request specific functionalities in your prompt

ChatGPT can access the web to find or verify information. However, the feature is not used by default. If we ask a generic question, it will first try to provide an answer based on its core training. As a result, we may get a completely different answer depending on whether we request a given functionality or not.

Compare these examples:

No web browsing functionality

ChatGPT not using web browsing functionality

Web browsing functionality requested

ChatGPT using web functionality to find the best cameras in 2024

Notice that the web browsing functionality is available in the chat interface versions of various GPT models but not as an API function. If you need to set up and deploy an LLM-based application that uses web scraping or webhooks, it is best to use a platform like V7 Go.

 

8. Use negative prompts and descriptions of what you don't want to see

When engineering prompts for AI models, specifying what you don't want in the output can be just as important as detailing what you do want.

An example of prompting with a negative prompt - a vacation resort with and without people

Negative prompts explicitly instruct the AI on what to exclude from its response. By defining these boundaries, you can prevent the AI from producing content that includes irrelevant, inappropriate, or unwanted elements. This technique is particularly useful for maintaining control over the tone, style, and subject matter of the AI's output.
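For Stable Diffusion setups that expose an HTTP API, the negative prompt is simply a separate field in the request. The field names below follow the common txt2img request shape used by tools like AUTOMATIC1111's web UI; treat them as an assumption for your particular setup:

```python
# Request body for a hypothetical Stable Diffusion txt2img endpoint.
payload = {
    "prompt": "a vacation resort at sunset, wide shot, photorealistic",
    "negative_prompt": "people, crowds, text, watermark, blurry",  # what to exclude
    "steps": 30,
    "width": 768,
    "height": 512,
}
print(payload["negative_prompt"])
```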

Image generation in V7 Go

An image generated with V7 Go using the DALL·E 3 model

9. Use chain-of-reasoning techniques and connect multiple AI models that use separate prompts

Connecting multiple AI models to handle separate tasks via unique prompts can enhance the overall capability of your AI-driven projects. By breaking down tasks into smaller, more manageable chunks, you can leverage the strengths of different models and achieve more accurate and reliable results. Here is an example of multiple instances of GPT-4 performing a task through a sequence of steps:

Step 1: An independent AI model provides feedback on a short story written by the first AI


Step 2: The third instance rewrites the story based on the original text and the feedback


You can use one GPT instance to extract specific information from legal documents and then another instance to verify that information against a set of guidelines or policies. Or, you can generate better responses by setting up a series of “good cop” and “bad cop” models chained together. These models will try to iteratively improve the response by techniques similar to RLAIF.
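The writer-critic-rewriter flow above can be sketched as a small pipeline. Each stage is any callable that takes a prompt string and returns text, so the stubs below can be swapped for real API calls later:

```python
def run_pipeline(task: str, writer, critic, rewriter) -> str:
    """Chain three model calls: draft -> feedback -> revision."""
    draft = writer(f"Write a short story about: {task}")
    feedback = critic(f"Give concise feedback on this story:\n{draft}")
    return rewriter(
        f"Rewrite the story using the feedback.\n\n"
        f"Story:\n{draft}\n\nFeedback:\n{feedback}"
    )

# Stub "models" so the flow can be demonstrated without API calls:
story = run_pipeline(
    "a lighthouse keeper",
    writer=lambda p: "DRAFT",
    critic=lambda p: "FEEDBACK",
    rewriter=lambda p: p,
)
```

Because each stage only depends on the previous stage's text, the same structure works whether the stages run on one model or several different ones.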

Prompt engineering software

Various platforms and tools are designed to help users create, manage, and refine prompts for different AI applications to make the process more scalable. Here’s a look at some of the most popular and effective prompt engineering tools available today:

V7 Go: AI & LLM Orchestration Toolkit

V7 Go is a powerful GenAI platform for orchestrating large language models (LLMs), managing AI workflows, and processing documents at scale. It offers advanced features for creating detailed prompts, refining outputs, and integrating various data types seamlessly.

Key features:

  • Multimodal AI. Work with text, documents, images, or audio files with multimodal GPTs.

  • LLM orchestration. Connect multiple AI models to solve complex challenges.

  • Prompt chaining. Create sequences of prompts that guide AI through multi-step processes.

  • Index knowledge. Turn documents into searchable databases for more accurate querying.

  • Human-in-the-loop. Integrate human feedback at critical decision points to enhance AI reliability.

You can sign up here to try the free version of this prompt engineering solution.

OpenAI's Playground


OpenAI’s Playground provides a user-friendly interface to experiment with prompts for models like GPT-3 and GPT-4. It allows users to tweak parameters, test different prompts, and see immediate results, making it an ideal environment for learning and refining prompt engineering skills.

Key features:

  • Interactive prompt testing. Input prompts and see responses in real-time.

  • Parameter adjustments. Control settings like temperature, max tokens, and more to fine-tune responses.

  • Example prompts & templates. Pre-built prompts for various tasks to help users get started.

You can create your free OpenAI account and test the playground here.

Lexica: Image Generation & Style Prompts


This website is a handy tool that can help you improve your image generation prompts. It has a huge repository of AI-generated resources complete with prompts that you can copy/paste and reuse in your projects. You can also reverse-engineer prompts by uploading your own images and finding similar matches that were generated with prompts.

Key features:

  • Prompt library. Access a wide range of example prompts tailored for different artistic styles and image requirements.

  • Style guides. Detailed guides on crafting prompts to achieve specific visual effects and styles.

  • Community contributions. Share and explore prompts created by other users, fostering a collaborative environment for prompt refinement.

You can browse Lexica here.

Best practices in prompt engineering

To excel in prompt engineering, you should continuously learn and adapt to new techniques, leverage advanced tools and platforms like V7 Go, and follow the best practices listed below:

  1. Clarity and precision. Always be clear and precise in your instructions. Ambiguities can lead to varied interpretations and outputs, which may not meet your needs.

  2. Iterative refinement. Start with a basic prompt and refine it based on the responses you get. This process helps in fine-tuning the AI's outputs to your specific requirements.

  3. Use of keywords. Incorporate relevant keywords and specific details that can guide the AI more effectively towards the desired output.

  4. Understanding the model's limitations. Be aware of what the AI can and cannot do. This understanding will help you craft prompts that are within the capabilities of the model, avoiding overly complex requests that lead to poor responses.

  5. Feedback loop. Utilize feedback to continuously improve the prompts. Feedback from users or the outputs themselves can provide valuable insights into how prompts can be adjusted for better results.

  6. Prompt length. Mind the length of your prompts. If the instructions are too lengthy, they can confuse the AI. Long prompts also lead to higher token consumption and higher costs if you are deploying an AI-powered solution for your users and customers.

  7. Ethical considerations. Ensure that the prompts do not encourage the AI to generate harmful or biased content. Being ethical in your prompt engineering is crucial for responsible AI use.
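On point 6, you can sanity-check prompt length before sending a request. The four-characters-per-token figure below is a rough rule of thumb for English text, not an exact count; use a real tokenizer such as tiktoken for billing-grade numbers:

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate: about four characters of English text per token."""
    return max(1, round(len(text) / 4))

prompt = "Summarize the attached quarterly report in three bullet points."
print(estimate_tokens(prompt))
```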

Prompt engineering empowers individuals to harness the capabilities of AI models effectively, enabling them to achieve their desired outcomes with precision and reliability. As AI continues to evolve and integrate into various aspects of our lives, mastering prompt engineering will become increasingly essential for unlocking its full potential in solving complex problems and driving innovation.


Next steps

Have a use case in mind?

Let's talk

You’ll hear back in less than 24 hours
