Speaking Nicely to Generative AI Seems to Help Get Better Answers

Published by Ray Poynter, 4 July 2024


In a paper published in April this year, researchers showed that speaking more collaboratively to LLMs seems to generate better responses. The paper describes how Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, and Xinyun Chen conducted a series of experiments to explore how different prompting strategies affect the quality of the answers.

Their experiments used problems with known solutions, so they could measure objectively how well different prompts performed.

There are two interesting conclusions from the paper:

  • Using LLMs to improve the quality of prompts delivers better results – something that several other people have reported.
  • Using friendly/collaborative language can improve the results.

Using Friendly/Collaborative Language

Examples of the friendly/collaborative language that scored best in their experiments were:

  • “Take a deep breath and work on this problem step-by-step.”
  • “Let’s work together to solve math word problems! First, we will read and discuss the problem together to make sure we understand it. Then, we will work together to find the solution. I will give you hints and help you work through the problem if you get stuck.”
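To make this concrete, here is a minimal sketch (not from the paper) of how a friendly/collaborative instruction can simply be prepended to a task prompt. It assumes the OpenAI Python client with an API key set in the environment; the model name and the word problem are illustrative placeholders.

```python
# Minimal sketch: prepend a friendly/collaborative instruction to a task prompt.
# Assumes the OpenAI Python SDK (pip install openai) and that OPENAI_API_KEY is set.
# The model name and the word problem are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

INSTRUCTION = "Take a deep breath and work on this problem step-by-step."
problem = (
    "A shop sells pencils in boxes of 12. Maya buys 5 boxes and gives away "
    "17 pencils. How many pencils does she have left?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
    messages=[{"role": "user", "content": f"{INSTRUCTION}\n\n{problem}"}],
)
print(response.choices[0].message.content)
```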

The report does not suggest why these friendly/collaborative prompts perform better. However, if we think about what LLMs do and how they are trained, we see that they analyse a prompt and produce the kind of answer a real human would probably give. A real human is perhaps more likely to give a good answer to a friendly question than to a terse one, and the LLM may have learned to mirror that pattern.

In addition to how LLMs interpret language, I wonder if the way we ask questions affects our own thinking processes. I can imagine that if I am being friendly, open, and collaborative, it might make me more flexible in my thinking and more ready to consider alternatives. I often type “please” in my prompts because that is how I think. I could train myself not to type “please,” but doing so would change my thinking process. So, while I do not want to anthropomorphise my relationship with an algorithm, neither do I want to dull my thinking by actively re-wording my prompts to make myself less human.

Using LLMs to Improve Your Prompts

The research paper uses a specific framework, which the authors call OPRO (Optimization by PROmpting). However, the general point about using an LLM to improve your prompts has been written about widely. Two simple approaches are:

  1. Ask, “How could I improve my previous question?”
  2. Use a prompt similar to this: “I am about to ask you [your question], but before you answer it, please ask me five questions that will help you give me a better answer.” (A rough code sketch of this approach is shown below.)
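
As an illustration of the second approach, here is a minimal sketch, assuming the OpenAI Python client, an example model name, and an invented example question. It asks the model for its clarifying questions first, then feeds your answers back before requesting the final response.

```python
# Minimal sketch of approach 2: ask the model for clarifying questions before it answers.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; the model name and the example
# question are assumptions made for illustration.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumed model name

question = "How should I structure a survey about customer satisfaction?"  # illustrative

# Step 1: ask the model what it needs to know before answering.
meta_prompt = (
    f"I am about to ask you the following question: {question}\n"
    "Before you answer it, please ask me five questions that will help you "
    "give me a better answer."
)
first = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": meta_prompt}],
)
clarifying_questions = first.choices[0].message.content
print(clarifying_questions)

# Step 2: answer the model's questions, then ask it to answer the original question.
my_answers = input("Type your answers to the five questions above: ")
second = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "user", "content": meta_prompt},
        {"role": "assistant", "content": clarifying_questions},
        {"role": "user", "content": f"My answers: {my_answers}\nNow please answer my original question."},
    ],
)
print(second.choices[0].message.content)
```

Keeping the first exchange in the message history means the model sees its own clarifying questions alongside your answers when it produces the final response.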