Task 1.1

Agentic Loops

An agentic loop is the core execution pattern for AI agents. The model receives a task, decides which tool to call, observes the result, and repeats until the task is complete or a stop condition is met.

The Agentic Loop Pattern

An agentic loop follows a simple cycle: the model receives context (system prompt, user message, and prior tool results) and generates a response that may include tool calls; the application executes those tool calls and feeds the results back to the model. This cycle continues until the model produces a final response without tool calls, or a programmatic stop condition is triggered.

The key insight is that the model itself decides when to use tools and when to stop. The application code manages the loop mechanics — executing tools, enforcing limits, and handling errors — but the model drives the decision-making.
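The cycle above can be sketched in a few lines. This is a minimal illustration, not a real SDK integration: `callModel` is a hypothetical stand-in for an API call (here it asks for one tool, then answers from the result), and `runTool` fakes tool execution. The loop mechanics are the point: the code executes whatever the model asks for and feeds the result back.

```typescript
// Simplified model response: either a tool request or a final answer.
type ModelResponse =
  | { type: "tool_use"; tool: string; input: string }
  | { type: "text"; text: string };

// Hypothetical stand-in for the model: requests the weather tool once,
// then answers using the tool result it sees in the history.
function callModel(history: string[]): ModelResponse {
  const hasToolResult = history.some((m) => m.startsWith("tool_result:"));
  if (!hasToolResult) {
    return { type: "tool_use", tool: "get_weather", input: "Paris" };
  }
  return { type: "text", text: "It is sunny in Paris." };
}

// Fake tool executor standing in for real tool code.
function runTool(tool: string, input: string): string {
  return tool === "get_weather" ? `sunny in ${input}` : "unknown tool";
}

function agentLoop(task: string, maxIterations = 10): string {
  const history: string[] = [`user: ${task}`];
  for (let i = 0; i < maxIterations; i++) {
    const response = callModel(history);
    if (response.type === "text") {
      return response.text; // natural completion: no tool calls
    }
    // The code executes the tool the model chose, then feeds the result back.
    const result = runTool(response.tool, response.input);
    history.push(`tool_result: ${result}`);
  }
  return "stopped: max iterations reached";
}
```

Note that nothing in `agentLoop` hardcodes a tool sequence: the model's response alone determines whether the loop runs another iteration.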

Stop Conditions

Every agentic loop needs well-defined stop conditions to prevent runaway execution. Common stop conditions include: the model returns a response with no tool calls (natural completion), a maximum iteration count is reached, a timeout expires, a specific tool signals completion, or the token budget is exhausted.

In production, you should always implement at least two stop conditions: natural completion and a hard maximum iteration limit. Without a hard limit, a confused model could loop indefinitely, consuming tokens and compute.
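One way to keep these checks auditable is to centralize them in a single function that returns why the loop stopped. The names below (`LoopState`, `StopReason`, `LIMITS`) are illustrative, not from any SDK; the point is that natural completion and hard limits are checked together, every iteration.

```typescript
// Illustrative loop state tracked by the application code.
interface LoopState {
  iterations: number;
  tokensUsed: number;
  startedAt: number; // epoch milliseconds
  modelCalledTools: boolean; // did the last response contain tool calls?
}

type StopReason = "natural" | "max_iterations" | "token_budget" | "timeout" | null;

// Hard limits chosen for illustration; tune these for your workload.
const LIMITS = { maxIterations: 10, maxTokens: 50_000, maxMs: 60_000 };

function checkStop(state: LoopState): StopReason {
  if (!state.modelCalledTools) return "natural"; // no tool calls: done
  if (state.iterations >= LIMITS.maxIterations) return "max_iterations";
  if (state.tokensUsed >= LIMITS.maxTokens) return "token_budget";
  if (Date.now() - state.startedAt >= LIMITS.maxMs) return "timeout";
  return null; // keep looping
}
```

Returning a reason rather than a boolean also gives you something meaningful to log when a run terminates early.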

Token Efficiency in Loops

Each iteration of the loop sends the full conversation history to the model, which means token usage grows with each iteration. For long-running agents, this can become expensive. Strategies to manage token usage include summarizing intermediate results, pruning irrelevant tool results from the conversation, and setting explicit token budgets.
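Pruning can be as simple as replacing the payload of older tool results with a short stub, so the model still sees that a tool ran without paying for the full output on every subsequent iteration. The message shape below is simplified for illustration; a real implementation would operate on the API's content-block types.

```typescript
// Simplified message shape for illustration only.
interface Message {
  role: "user" | "assistant";
  kind: "text" | "tool_result";
  content: string;
}

// Keep only the most recent N tool results intact; stub out older ones.
function pruneToolResults(history: Message[], keepLast = 2): Message[] {
  const toolIndexes = history
    .map((m, i) => (m.kind === "tool_result" ? i : -1))
    .filter((i) => i >= 0);
  const keep = new Set(toolIndexes.slice(-keepLast));
  return history.map((m, i) =>
    m.kind === "tool_result" && !keep.has(i)
      ? { ...m, content: "[result pruned to save tokens]" }
      : m
  );
}
```

The trade-off: pruned results are gone from the model's view, so prune only results the remaining context has already summarized or superseded.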

The Claude API's prompt caching feature can significantly reduce costs in agentic loops, since the system prompt and early conversation turns remain unchanged across iterations.
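Caching is opted into by marking stable prefix content with a `cache_control` breakpoint, per the Claude API's documented request shape. The sketch below only constructs a request object to show where the marker goes (the model id and prompt text are placeholders); it does not send anything.

```typescript
// Request construction only; nothing is sent. The system prompt and tool
// instructions are the stable prefix that stays identical across iterations,
// so it is the natural place for a cache breakpoint.
const request = {
  model: "claude-example-model", // illustrative placeholder, not a real model id
  max_tokens: 1024,
  system: [
    {
      type: "text",
      text: "You are a research agent. [...long, unchanging tool instructions...]",
      cache_control: { type: "ephemeral" }, // cache everything up to this block
    },
  ],
  messages: [{ role: "user", content: "Research topic X." }],
};
```

On each loop iteration the cached prefix is read from cache instead of being reprocessed, so only the growing tail of the conversation is billed at the full input rate.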

Key Concept

The Model Decides, the Code Executes

In an agentic loop, the model is the decision-maker — it chooses which tools to call and when to stop. Your application code is the executor — it runs tools, enforces limits, and manages the conversation state. Mixing these responsibilities (e.g., hardcoding tool sequences in application code) defeats the purpose of agentic architecture.

Exam Traps

EXAM TRAP

Confusing agentic loops with simple tool use

A single tool call and response is not an agentic loop. The loop requires iteration — the model must be able to make multiple sequential decisions based on tool results.

EXAM TRAP

Thinking the application code decides which tools to call

In an agentic loop, the model selects tools. The application code defines which tools are available, but the model decides which to use and in what order.

EXAM TRAP

Forgetting stop conditions

Every agentic loop must have programmatic stop conditions (max iterations, timeouts). The exam may present scenarios where missing stop conditions lead to runaway loops.

Check Your Understanding

You are building an agent that researches a topic and writes a report. The agent has access to search, read_page, and write_file tools. What is the correct way to structure the agentic loop?

Build Exercise

Build a Research Agent Loop

Beginner · 30 minutes

What you'll learn

  • Implement the core agentic loop pattern
  • Define and enforce stop conditions
  • Handle tool results in conversation context
  • Observe token usage growth across iterations

  1. Create a new TypeScript file and set up a Claude API client. Define a simple tool (e.g., get_weather) with an input schema.

    WHY: You need the API client and at least one tool to create a loop.

    YOU SHOULD SEE: A working API client that can send messages with tool definitions.

  2. Implement a while loop that sends messages to Claude, checks if the response contains tool_use blocks, executes the tool, and feeds results back.

    WHY: This is the core agentic loop — the model decides, your code executes.

    YOU SHOULD SEE: The loop runs multiple iterations as the model calls tools and processes results.

  3. Add a maximum iteration counter (e.g., 10) and a token budget check. Log the iteration count and cumulative token usage.

    WHY: Stop conditions prevent runaway loops and control costs.

    YOU SHOULD SEE: The loop terminates either when the model stops calling tools or when limits are reached.

  4. Add a second tool (e.g., search_web) and give the model a multi-step task. Observe how it chooses between tools.

    WHY: With multiple tools, you can see the model's decision-making in action.

    YOU SHOULD SEE: The model calls different tools in different iterations based on what information it needs.
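Steps 2 and 3 can be sketched as follows, with the API call stubbed out so the control flow is visible offline. The content-block shapes (`stop_reason`, `tool_use`, `tool_result`) mirror the Messages API; `fakeCreate` is a hypothetical stand-in for `client.messages.create` that requests a tool on the first turn and answers on the second.

```typescript
// Simplified content blocks mirroring the Messages API shapes.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string; name: string; input: { city: string } };

interface Response {
  stop_reason: "tool_use" | "end_turn";
  content: ContentBlock[];
}

// Stand-in for client.messages.create: one tool call, then a final answer.
let apiCalls = 0;
function fakeCreate(messages: unknown[]): Response {
  if (apiCalls++ === 0) {
    return {
      stop_reason: "tool_use",
      content: [{ type: "tool_use", id: "tu_1", name: "get_weather", input: { city: "Paris" } }],
    };
  }
  return { stop_reason: "end_turn", content: [{ type: "text", text: "Sunny, 22C." }] };
}

const MAX_ITERATIONS = 10; // step 3: hard limit against runaway loops
const messages: unknown[] = [{ role: "user", content: "What's the weather in Paris?" }];
let finalAnswer = "";

for (let i = 0; i < MAX_ITERATIONS; i++) {
  const response = fakeCreate(messages);
  console.log(`iteration ${i + 1}: stop_reason=${response.stop_reason}`); // step 3: logging
  if (response.stop_reason !== "tool_use") {
    finalAnswer = (response.content[0] as { type: "text"; text: string }).text;
    break; // natural completion
  }
  for (const block of response.content) {
    if (block.type === "tool_use") {
      const result = `sunny in ${block.input.city}`; // execute the tool
      messages.push({ role: "assistant", content: response.content });
      messages.push({
        role: "user",
        content: [{ type: "tool_result", tool_use_id: block.id, content: result }],
      });
    }
  }
}
```

Swapping `fakeCreate` for a real client call (and `result` for real tool code) turns this skeleton into the exercise proper; step 4 then just adds a second tool definition.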
