Intel or AMD? MacBook or iPad? McDonald's or Wendy's? These are all tough choices, and if we're feeling harried or overwhelmed, it might be tempting to simply ask an AI to pick for us. Or consider this: you actually do want to make the choice yourself, but your prompt ("Should I go with an Intel processor or AMD?") makes it sound like you want the AI to make the call, not you.

A recent thread in the r/PromptEngineering subreddit addresses exactly this: the difference between asking an AI to make a choice and asking it to analyze the pros and cons of the choices in front of you. "AI making the choice for you = might be wrong for your situation," Reddit user AdCold610 wrote. "AI explaining the tradeoffs = you make the informed choice."

That's a crucial distinction, and it's the difference between a prompt that yields an argument in favor of a choice and a prompt that delivers an analysis of your choices. That means, instead of this prompt:

"Should I get an Intel processor or AMD?"

You use this prompt:

"Intel versus AMD: give me the pros and cons"

And consider tacking this on, too:

"along with hidden downsides"

Of course, your mileage may vary depending on the LLM you're using and the subject you're asking about. But in general, I agree with the thrust of the r/PromptEngineering discussion: the first prompt may push the model toward making a choice first and then building an argument for it, while the second tends to yield a more balanced, detailed analysis that leaves the final decision up to you.

This type of prompting points to a larger discussion about AI as a "middle-to-middle" assistant rather than a bot that does everything for you. Put simply, we humans have key roles at each "end" of a project: we provide the problem for the AI to solve, and then we judge the outcome of the "middle" part the AI did. In this case, we begin with a topic (say, Intel versus AMD) and ask the AI for the analysis, which is the middle part. The final decision is ours, not the AI's.
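If you want to see the difference for yourself in a repeatable way, here's a minimal sketch that sends both phrasings to the same model so you can compare the answers side by side. It assumes the OpenAI Python SDK (v1+) and a model name of gpt-4o-mini; those are my choices for illustration, not anything from the Reddit thread, and any chat-capable model and client should behave similarly.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The two phrasings discussed above: a decision-style prompt
# versus an analysis-style prompt on the same topic.
prompts = {
    "decision-style": "Should I get an Intel processor or AMD?",
    "analysis-style": (
        "Intel versus AMD: give me the pros and cons, "
        "along with hidden downsides"
    ),
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Run it a few times and on a few topics: in my experience the decision-style prompt tends to open with a verdict, while the analysis-style prompt returns the tradeoff list that lets you make the call yourself.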