Navigating LLMs - Your Guide to Strategic Input and Desired Outputs

Erik, Technical Director
24 June 2025
When we talk about AI, we mostly talk about LLMs (Large Language Models). In this article I want to present a simplified view of LLMs, along with an analogy that can guide how we interact with these 'magical' pieces of software.

This guide is written with my colleagues in mind, acknowledging the diverse ways we currently engage with LLMs. Some of us are achieving significant results with them, others are still navigating the challenges, and some may be cautious due to the surrounding hype. The hope is that a cohesive and relatively simple understanding of the technology will serve multiple purposes:

  • To help those who use LLMs extensively explain their workings and share their knowledge more easily.
  • To provide practical guidance for those who are looking for effective ways to interact with them.
  • To offer a hype-free language for discussing these tools, particularly for those sensitive to current trends.

What is an LLM and how was it created?

An LLM is a machine that, based on some (textual) input, will produce output. The LLMs I am talking about are trained on a lot of human-generated content. This means that for a given input it will produce the most likely output, as if you are asking: "What comes after this input?"

These machines are what I call an 'approximation machine', because they don't provide exact, deterministic answers like a calculator, but rather highly probable continuations based on their vast training data. They are roughly built like this:

  1. Pretraining - Train the model on as much text-based content as possible, so that any input produces output similar to the training data.
  2. Fine-tuning - Tweak the model using a large set of good-quality input/output examples.
  3. Alignment - With the help of humans, tweak the model again to be more useful and more moral.

The resulting software is what I refer to as an LLM in this context. So when I provide it input, it responds with the most ‘plausible’ continuation of that input.

Note that the large models we interact with (ChatGPT, Gemini, Claude, Grok) have an additional layer: they silently augment your input before it is sent to the LLM. This is helpful in most cases, but it can blur the relationship between input and output. For example, the input might be augmented with 'be elaborate' while your prompt asks the model to 'be concise'.
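A minimal sketch of such a hidden augmentation layer. The system text and the function are invented for illustration; real products use far more elaborate (and undisclosed) instructions:

```python
# Hypothetical example of a product layer silently augmenting input.
# The hidden instruction below is made up for illustration.
HIDDEN_SYSTEM = "Be elaborate and thorough in every answer."

def augmented_input(user_input: str) -> str:
    # The product prepends its own instructions before the LLM sees anything.
    return HIDDEN_SYSTEM + "\n\n" + user_input

full = augmented_input("Explain tokens. Be concise.")
# The model now receives conflicting instructions: 'be elaborate'
# from the product layer and 'be concise' from the user.
```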

Tokenized text

How does an LLM work?

When you feed an LLM input (text), that text is broken down into smaller segments called tokens. Think of tokens as the fundamental 'words' or 'pieces' an LLM understands, much like individual words in a sentence, but sometimes even smaller.

These tokens are translated into multidimensional coordinates. While that sounds complex, a coordinate on a physical map is also a multidimensional coordinate. Just as a physical map uses two dimensions (latitude and longitude) to pinpoint a location, an LLM uses hundreds or thousands of dimensions to represent the semantic relationships of tokens within its vast knowledge 'map'.

The important thing here is that each of these tokens is a coordinate.
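To make the 'coordinates' idea concrete, here is a toy sketch with made-up two-dimensional values. Real models use hundreds or thousands of dimensions, learned from data rather than assigned by hand:

```python
import math

# Toy 2-D 'coordinates' for a few tokens. These values are invented
# for illustration; real embeddings have far more dimensions.
coords = {
    "cat": (0.9, 0.1),
    "dog": (0.8, 0.2),
    "car": (0.1, 0.9),
}

def distance(a: str, b: str) -> float:
    # Plain Euclidean distance between two token coordinates.
    return math.dist(coords[a], coords[b])

# Semantically related tokens lie closer together on the 'map'.
print(distance("cat", "dog") < distance("cat", "car"))  # True
```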

The LLM will use the complete sequence of tokens (your input) to produce the most likely next token. It appends that token to your input and does the same again. It repeats this process until the most likely next token is a special token that signals STOP.
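This generate-append-repeat loop can be sketched in a few lines of Python. The lookup table here is a toy stand-in for the model, and it only looks at the last token; a real LLM conditions on the entire sequence:

```python
# Toy sketch of autoregressive generation; not a real model.
def next_token(tokens: list[str]) -> str:
    # Stand-in for the model: a tiny lookup keyed on the last token only.
    # (A real LLM uses the whole sequence, not just the last token.)
    table = {
        "The": "cat", "cat": "sat", "sat": "on",
        "on": "the", "the": "mat", "mat": "<STOP>",
    }
    return table.get(tokens[-1], "<STOP>")

def generate(prompt_tokens: list[str]) -> list[str]:
    tokens = list(prompt_tokens)
    while True:
        token = next_token(tokens)   # predict the most likely next token
        if token == "<STOP>":        # the special token that signals STOP
            break
        tokens.append(token)         # append and repeat with the longer input
    return tokens

print(generate(["The"]))  # ['The', 'cat', 'sat', 'on', 'the', 'mat']
```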

You can visualize this as having a map. On this map you have drawn a set of waypoints or markers (the separate tokens). You draw arrows between them to indicate the route or path (the input). You then ask the LLM to predict the next set of waypoints and the final destination.

As a side note: the same happens when you chat with an LLM. Every time you send a chat message in the conversation, the LLM receives the whole conversation. So both your messages and the LLM's responses become the input for the next response.
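A minimal sketch of this accumulation, with `call_llm` as a hypothetical stand-in for the actual model call:

```python
# Sketch of how a chat interface feeds the whole conversation back to
# the model each turn. call_llm is a hypothetical stand-in.
def call_llm(full_input: str) -> str:
    return "(model output for an input of %d characters)" % len(full_input)

conversation = []  # grows with every turn

def send(user_message: str) -> str:
    conversation.append("User: " + user_message)
    # The model receives the ENTIRE conversation, not just the last message.
    reply = call_llm("\n".join(conversation))
    conversation.append("Assistant: " + reply)
    return reply

send("List three benefits of green tea.")
send("Now summarize them in one sentence.")
# By the second call, the first question and its answer are part of the input.
print(len(conversation))  # 4 entries: two user turns, two replies
```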

Input determines output

I hope to have explained that the output of a model depends completely on the input. We don't know the exact relationship between the two, but a slight change in input can have extreme effects on the output. The reverse is also true: a large change in input can have only a small effect on the output.

This suggests two main strategies for constructing our input. We can construct the input so that parts of it have:

⬆️A strong effect on the output (guiding the LLMs output)
⬇️A weak effect on the output (enhancing the robustness / reliability)

While one strategy focuses on guiding the LLM towards the correct destination, the other limits the influence of parts of the input on the possible outcomes.

You might have heard about "prompt engineering", a discipline of carefully crafting input to try to force a specific output. These "prompt techniques" all aim to achieve one of the above strategies.

Understanding this input-output dynamic is important: it allows you to select the right techniques to achieve your desired result.

We will classify prompt techniques later in this article.

Seeing interaction with LLMs as navigation

Let’s go back to our map visualization. Why is it useful to see your input as a path navigating towards the most likely output?

It allows us to think about our input in a way that gives us more control over the output.

When the LLM reaches the end of the drawn path, it needs to determine the next waypoint. Two additional factors come into play here: distance and direction.

This means that your input actually gives 3 instructions for the LLM to start generating output:

  • Where to start
  • What direction to go from there
  • How far to go

In more detail:

📍 Destination: The uncharted territory or specific point of interest on the map designated for exploration. The primary subject area or specific data context the LLM is prompted to investigate.
🧭 Direction: The expedition's explicit goals and chosen methods for investigating the territory (e.g., survey, collect samples, document findings). The defined task(s) (e.g., analysis, generation, summarization) and the operational parameters or persona the LLM should adopt.
↪️ Distance: The depth of investigation and the comprehensiveness of the explorer's logbook entries or field notes. The level of detail, output length, or specificity required in the LLM's response, constraining its generative scope.

Example

Input: I'm looking for information on the health benefits of green tea. Could you list its main advantages in a few bullet points?

📍 "the health benefits of green tea": Sets the specific topic or subject area the LLM should focus on and investigate.
🧭 "list its main advantages": Defines the primary task the LLM needs to perform concerning the specified topic.
↪️ "in a few bullet points": Specifies the desired format and conciseness, guiding the output's scope and detail.

You might have noticed that some bits of the input were not highlighted. While they are not the main factors, they do subtly influence the navigation. Let’s inspect them:

I'm looking for information on the health benefits of green tea. Could you list its main advantages in a few bullet points?

📍 "I'm looking for information on": Clarifies that the goal is information retrieval, reinforcing the context for the Destination.
🧭 "Could you": Turns the Direction into a polite request, subtly affecting the manner of task execution.

How to navigate?

Understanding that your input acts as a set of navigational instructions is the first step. Now, let's explore how to consciously use prompt techniques as your navigational toolkit. The goal is to move from simply sending text to an LLM to strategically crafting input that guides it effectively.

Setting your course: Destination, Direction, and Distance

Remember the three elements we need for navigation:

📍 Destination: The uncharted territory or specific point of interest on the map designated for exploration. The primary subject area or specific data context the LLM is prompted to investigate.
🧭 Direction: The expedition's explicit goals and chosen methods for investigating the territory (e.g., survey, collect samples, document findings). The defined task(s) (e.g., analysis, generation, summarization) and the operational parameters or persona the LLM should adopt.
↪️ Distance: The depth of investigation and the comprehensiveness of the explorer's logbook entries or field notes. The level of detail, output length, or specificity required in the LLM's response, constraining its generative scope.

Mastering LLM interaction is about consciously controlling these three elements through your input.

Your navigational toolkit: using prompt techniques

To define your Destination, set your Direction, and manage the Distance, you can use specific prompt techniques. A prompt technique is a method you can use to craft or modify your input in a way that influences the Destination, Direction or Distance.

Here are some examples of prompt techniques:

📍 Destination
  • Role-Playing/Persona: "Act as a seasoned travel writer"
  • Grounding/Context-Augmented: "Using the provided company report..."
🧭 Direction
  • Chain-of-Thought: "Explain your reasoning step-by-step"
  • Constraint-Based (Format): "Provide your answer as a bulleted list."
  • Tone/Style: "Write in a formal and academic tone"
↪️ Distance
  • Constraint-Based (Length): "Summarize this in under 100 words."

Please note that most prompt techniques influence more than one aspect of navigation. You can find a more comprehensive list in the 'Prompt techniques classified' attachment.

Navigational examples: charting your course

Now let’s see how we can put this into practice.

Example 1

Drafting a concise project update email

  • Goal: Quickly inform stakeholders about project progress.
  • Thinking:
    • Destination
      A project update email for stakeholders
    • Direction:
      Message comes from a project manager
      Professional tone
      Included in the email: 
      • Key accomplishments
      • Next steps
      • Encountered problems
    • Distance:
      Under 150 words
  • Prompt techniques:
    The Destination is set by clearly stating the desired output type.
    For Direction we'll leverage Role-Playing/Persona and Constraint-Based (Format) to define the sender's voice and the email's structure. Tone/Style will also be incorporated to ensure professionalism.
For Distance, Constraint-Based (Length) is essential.
  • Final prompt:
    • Act as a project manager.
    • Draft a project update email for stakeholders.
    • The tone should be professional and concise.
    • Structure the email with these sections:
      • Key accomplishments this week.
      • Main objectives for next week.
      • Any critical blockers.
    • Keep the entire email under 150 words.

Example 2

Brainstorming taglines for a new eco-friendly product

  • Goal: Generate creative and relevant taglines.
  • Thinking:
    • Destination:
      Taglines for a new eco-friendly cleaning product
    • Direction:
      Generate options
      Focus on benefits like 'safe for kids' and 'plant-based'
    • Distance:
      Aim for 5-7 short, catchy phrases
  • Prompt techniques:
    • The Destination will be explicitly stated.
    • For Direction, we will use direct task instructions and provide Grounding/Context-Augmented details about the product's unique selling points.
    • To manage Distance, Constraint-Based (Length/Quantity) will guide the desired output volume and style.
  • Final prompt:
    • Generate 5-7 short, catchy taglines for a new eco-friendly cleaning product.
    • The product is plant-based and safe for use around children and pets.
    • Highlight these benefits in the taglines.

Handling detours and recalculating: iterative navigation

Sometimes, your first input doesn’t give you the desired result. You can think of this as needing a course correction. LLM interaction is often an iterative process. If the output isn't quite right, consider which navigational element might need adjustment.

Is the output off-topic or addressing the wrong subject?
Problem: The Destination might be unclear or misinterpreted.
Solution: Check your input: is the core subject area explicit? Are there ambiguous terms? Look at the prompt techniques that have a big influence on Destination and see if you can apply one of them. If you are unsure, you can also ask the LLM itself questions about the destination.

Is the output poorly structured, illogical, or not performing the task you intended?
Problem: The Direction could be underspecified. The LLM may not understand the task or the method to achieve it.
Solution: Be more explicit about the task, the steps involved, or the persona. See if there are prompt techniques that have a big influence on the Direction.

Is the output too long/short, too detailed/vague, or in the wrong tone/style?
Problem: The Distance settings are likely misaligned.
Solution: Provide clear constraints on length, detail, or style. Some prompt techniques work quite well for distance; be thoughtful though, as they often influence other aspects of the navigation.

The LLM seems to be missing something
Problem: The LLM's response suggests it is no longer considering crucial information provided earlier in the conversation or at the start of the input. This is akin to your navigator overlooking earlier parts of the planned route.
Solution: Re-establish the forgotten context. You can do this by: 1) concisely restating the key piece of information: "Remember, you are acting as a historian for this task..."; 2) using a Grounding/Context-Augmented technique to re-introduce the specific data; 3) referring back to an earlier point: "Regarding my earlier point about X, could you..." (this does not always work).

Be mindful that LLMs have context limits. For long interactions, starting a new, more focused conversation might be more effective than repeatedly trying to correct a drifting one.

Pro-tip: ask “What was my first message in this conversation?” to determine what the LLM thinks is your first message.

Just not getting there
Problem: The desired output is complex, and the LLM fails to deliver it adequately in a single attempt; the final "destination" appears too ambitious for one step, leading to incomplete or off-target results.
Solution: Deconstruct the complex request into a series of simpler, sequential sub-tasks. Treat each sub-task as an "intermediate waypoint." Prompt the LLM to complete the first waypoint. Once you receive a satisfactory output for that, use it as context or a foundation for prompting the next waypoint, and so on, until you achieve the overall goal. This 'Iterative/Refinement' approach, sometimes aided by Chain-of-Thought prompting (asking the LLM to outline steps), provides a more guided path. For instance, instead of "Generate a complete marketing plan," try: "1. Identify the key target audiences for product X." then "2. Suggest three core marketing messages for these audiences." and so forth.
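This waypoint-by-waypoint approach can be sketched in a short loop. The `call_llm` function and the third waypoint are hypothetical stand-ins; each sub-task's output is carried forward as context for the next:

```python
# Sketch of iterative 'waypoint' prompting. call_llm is a hypothetical
# stand-in for a real model call.
def call_llm(prompt: str) -> str:
    # Pretend the model answers the last line of the prompt.
    return "(model answer to: " + prompt.splitlines()[-1] + ")"

# Intermediate waypoints instead of one big "generate a marketing plan".
waypoints = [
    "1. Identify the key target audiences for product X.",
    "2. Suggest three core marketing messages for these audiences.",
    "3. Propose channels for delivering these messages.",
]

context = ""
for step in waypoints:
    prompt = (context + "\n" if context else "") + step
    answer = call_llm(prompt)
    context += step + "\n" + answer + "\n"  # foundation for the next step
```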

In most cases you can ask the LLM for help. If you provide the LLM with this document about navigating LLMs as context, it can analyze your problem. You don't need to read the analysis; once it is done generating, just ask: "With that in mind, what would be your suggestion for a new prompt?"

Note that in most cases you need to start a new conversation. Remember that the input is the whole conversation, not just your last message. You have already navigated to a certain area, and navigating away from your first destination can be quite tricky.

By consciously applying these navigational principles and prompt techniques, and by diagnosing issues through the Destination, Direction and Distance framework, you can significantly improve the quality and relevance of the outputs you receive from LLMs.

Creating your own prompt instructor

Most large LLMs allow you to provide documents as knowledge and instructions as a guide on how to act. In Gemini this is called a Gem. This means you can create your own prompt instructor that will let you use the information in this article.

Step 1: Provide this document as knowledge

Step 2: Provide the instructions; you can try these for your first version:

  • You are a highly skilled prompt engineer with extensive knowledge of the "Navigating LLMs: Your Guide to Strategic Input and Desired Outputs" document.
  • Upon receiving a user's prompt, analyze it.
  • Next, initiate a multi-turn, three-step process to define the Destination, Direction, and Distance. Your initial responses will consist of a single question at a time, derived from the user's input. For each question you pose, provide your own answer and request user confirmation.
  • Once you clearly establish all three navigational elements (Destination, Direction, and Distance), construct a comprehensive prompt that integrates all user requirements. Ensure you consult the document to effectively combine and apply prompt techniques for the desired outcome.
  • Do not generate a response to the user's original prompt. Your exclusive role is to assist the user in formulating an effective prompt based on their initial message, subsequent clarifications, and the techniques outlined in the document.

Conclusion

The input significantly shapes the output when interacting with an LLM. Seeing the input as a path through a cloud of semantic relationships helps us think more clearly about certain aspects of that input. Thinking about the interaction as a form of navigation helps you prevent common pitfalls when interacting with these machines.

Having a better understanding of what an LLM is and how it operates helps navigate the enormous marketing effort currently in play. By avoiding attributing human behavior like 'reasoning' or 'thinking' to these machines, we can better guide the LLM to more useful output. Techniques used in psychology for humans will not have the same effect on these machines.

The potential of a black box that contains most of human knowledge which can be queried with natural language is enormous. I hope this article can function as a stepping stone to bring out more of that potential.


For a detailed classification of various prompt techniques based on these principles, please refer to the 'Prompt techniques classified' attachment.