Prompt Chaining
Prompt chaining breaks a complex task into a sequence of steps, where each step's output becomes the next step's input. Each step uses a focused prompt optimized for that subtask. For example, a document processing pipeline might: (1) extract key information, (2) classify the document type, (3) generate a summary.
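The document pipeline above can be sketched as a simple chain, where each step is its own focused model call and each output feeds the next prompt. Here `call_model` is a stub standing in for a real Claude API call (e.g. `client.messages.create` in the Anthropic SDK), and the prompts are illustrative:

```python
def call_model(prompt: str) -> str:
    """Stub standing in for a real Claude API call."""
    return f"[model output for: {prompt[:40]}...]"

def run_chain(document: str) -> str:
    # Step 1: extract key information
    facts = call_model(f"Extract the key facts from this document:\n{document}")
    # Step 2: classify the document type, using step 1's output
    doc_type = call_model(f"Classify this document (invoice/report/email):\n{facts}")
    # Step 3: generate a summary, using the outputs of both prior steps
    summary = call_model(
        f"Write a one-paragraph summary of a {doc_type} with these facts:\n{facts}"
    )
    return summary
```

Each call sees only the context it needs, which is what lets every step's prompt stay small and focused.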
Prompt chaining is ideal when you can decompose a task into clear sequential steps. Each step can have its own validation gate — if a step's output doesn't meet quality criteria, you can retry that step without rerunning the entire chain.
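A validation gate can be sketched as a retry wrapper around a single step. `call_model` again stands in for a real Claude call, and `is_valid` is whatever quality check that step needs; a JSON well-formedness check is used here as one illustrative example:

```python
import json

def call_model(prompt: str) -> str:
    """Stub standing in for a real Claude API call."""
    return '{"title": "Q3 Report", "author": "unknown"}'

def run_step_with_gate(prompt, is_valid, max_retries=2):
    """Run one chain step, retrying only this step if its output fails validation."""
    for attempt in range(max_retries + 1):
        output = call_model(prompt)
        if is_valid(output):
            return output
    raise ValueError(f"Step failed validation after {max_retries + 1} attempts")

def valid_json(text: str) -> bool:
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False
```

Because the gate wraps only one step, a failed validation reruns that step alone, not the whole chain.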
Parallelization and Routing
Parallelization runs multiple Claude calls simultaneously, either to process independent subtasks (sectioning) or to get multiple perspectives on the same input (voting). Routing uses a lightweight classifier to direct inputs to specialized handlers.
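Both variants can be sketched with a thread pool, since each Claude call is an independent network request. `call_model` is a stub for a real API call; its canned outputs are only there to make the sketch runnable:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    """Stub standing in for a real Claude API call."""
    return "acceptable" if "review" in prompt else f"summary of {prompt[:20]}"

def section(subtasks):
    """Sectioning: run independent subtasks in parallel."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(call_model, subtasks))

def vote(prompt, n=3):
    """Voting: ask the same question n times and take the majority answer."""
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(call_model, [prompt] * n))
    return Counter(answers).most_common(1)[0][0]
```

Sectioning reduces wall-clock latency; voting trades extra calls for higher confidence in a single answer.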
For example, a customer support system might route tickets to different specialized prompts based on category (billing, technical, account). Each specialized prompt can be optimized for its domain, yielding better results than a single general-purpose prompt.
Evaluator-Optimizer Pattern
The evaluator-optimizer pattern uses one Claude call to generate output and another to evaluate it. If the evaluation identifies issues, the output is sent back for revision with specific feedback. This creates an inner loop of generation and critique.
This pattern is particularly effective for tasks with clear quality criteria — code generation (does it pass tests?), writing (does it meet style guidelines?), or data extraction (are all required fields present?). The evaluator acts as an automated quality gate.
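The inner loop can be sketched as below. Both `generate` and `evaluate` are stubs standing in for real Claude calls; the toy criterion (checking for a correct addition) is purely illustrative, and a real evaluator would run tests or apply a rubric:

```python
def generate(task: str, feedback: str = "") -> str:
    """Stub generator call; a real version would prompt Claude with task + feedback."""
    return "def add(a, b): return a + b" if feedback else "def add(a, b): return a - b"

def evaluate(output: str) -> str:
    """Stub evaluator call; returns 'PASS' or specific, actionable feedback."""
    return "PASS" if "a + b" in output else "add() should return the sum, not the difference"

def evaluator_optimizer(task: str, max_rounds: int = 3) -> str:
    output = generate(task)
    for _ in range(max_rounds):
        verdict = evaluate(output)
        if verdict == "PASS":
            return output
        # Feed the evaluator's specific feedback back into the generator
        output = generate(task, feedback=verdict)
    return output
```

The `max_rounds` cap matters in practice: it bounds cost and prevents the loop from cycling when the evaluator's bar can't be met.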
Key Concept
Match Pattern to Problem Shape
The simplest orchestration that solves the problem is the best one. A single well-crafted prompt beats a complex chain if the task is straightforward. Prompt chaining beats an agentic loop if the steps are predictable. Use agentic loops only when the model needs to make dynamic decisions about what to do next. Over-engineering the orchestration adds latency, cost, and failure points.
Exam Traps
Defaulting to agentic loops for every problem
Agentic loops are powerful but expensive. Many tasks are better served by simpler patterns like prompt chaining or parallelization. The exam tests whether you can select the appropriate pattern.
Confusing parallelization with async processing
Parallelization in this context means making multiple Claude API calls simultaneously for subtasks. It is not about async/await syntax or background job processing.
Ignoring the routing pattern
Routing is often overlooked but is critical for production systems. A lightweight classifier that directs work to specialized prompts can dramatically improve quality and reduce costs.
Check Your Understanding
A customer support application receives messages about billing, technical issues, and account management. Each category requires different context and handling. Which orchestration pattern is most appropriate?
Build Exercise
Build a Routing Classifier
What you'll learn
- Implement a routing pattern with Claude
- Design specialized prompts for each route
- Compare routing vs. single-prompt approaches
- Measure quality differences between patterns
Create a classifier prompt that categorizes input text into one of three support categories (billing, technical, account). Return the classification as structured JSON.
WHY: The router is the first step in the routing pattern — it must be fast and accurate.
YOU SHOULD SEE: Consistent JSON classification of test inputs.
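One possible shape for this step, with `call_model` as a stub for a real Claude call (a real router should use a small, fast model) and an illustrative classifier prompt:

```python
import json

CLASSIFIER_PROMPT = """Classify the customer message into exactly one category:
billing, technical, or account.
Respond with JSON only: {{"category": "<category>"}}

Message: {message}"""

def call_model(prompt: str) -> str:
    """Stub standing in for a real Claude API call; returns canned classifications."""
    msg = prompt.rsplit("Message:", 1)[-1].lower()
    if "invoice" in msg or "charge" in msg:
        return '{"category": "billing"}'
    if "password" in msg or "login" in msg:
        return '{"category": "account"}'
    return '{"category": "technical"}'

def classify(message: str) -> str:
    raw = call_model(CLASSIFIER_PROMPT.format(message=message))
    return json.loads(raw)["category"]
```

Parsing the JSON (rather than matching free text) is what makes the classification consistent enough to dispatch on.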
Create three specialized handler prompts, one for each category. Each should have domain-specific instructions and context.
WHY: Specialized prompts outperform general-purpose ones because they can include relevant context and instructions.
YOU SHOULD SEE: Each handler produces higher-quality responses for its category than a general prompt would.
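The specialized prompts can live in a simple lookup table keyed by category. The prompt text below is illustrative, and `call_model` is again a stub for a real Claude call:

```python
HANDLER_PROMPTS = {
    "billing": (
        "You are a billing support specialist. You can explain invoices, "
        "refunds, and proration. Customer message: {message}"
    ),
    "technical": (
        "You are a technical support engineer. Ask for error messages and "
        "reproduction steps when needed. Customer message: {message}"
    ),
    "account": (
        "You are an account support agent. You handle logins, password "
        "resets, and profile changes. Customer message: {message}"
    ),
}

def call_model(prompt: str) -> str:
    """Stub standing in for a real Claude API call."""
    return f"[response using prompt: {prompt[:30]}...]"

def handle(category: str, message: str) -> str:
    return call_model(HANDLER_PROMPTS[category].format(message=message))
```

Keeping the prompts in a table makes adding a fourth category a data change, not a code change.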
Wire the classifier to the handlers: classify the input, then dispatch to the appropriate handler. Add error handling for unrecognized categories.
WHY: The routing logic connects classification to specialized handling.
YOU SHOULD SEE: End-to-end routing: input goes to classifier, then to the correct specialized handler.
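The dispatch logic, with stubbed classifier and handlers standing in for the real calls from the earlier steps, and a fallback route for unrecognized categories:

```python
def classify(message: str) -> str:
    """Stub for the classifier call built in step 1."""
    return "billing" if "charge" in message.lower() else "unknown-topic"

HANDLERS = {
    "billing": lambda m: f"[billing handler response to: {m}]",
    "technical": lambda m: f"[technical handler response to: {m}]",
    "account": lambda m: f"[account handler response to: {m}]",
}

DEFAULT_CATEGORY = "technical"  # fallback route for unrecognized categories

def route(message: str) -> str:
    category = classify(message)
    handler = HANDLERS.get(category)
    if handler is None:
        # Unrecognized category: fall back rather than failing the request
        handler = HANDLERS[DEFAULT_CATEGORY]
    return handler(message)
```

The fallback matters because classifiers occasionally emit labels outside the expected set; a production router should degrade to a general handler rather than raise.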
Compare the routing approach against a single general-purpose prompt. Test with 5 inputs from each category and evaluate quality.
WHY: Empirical comparison demonstrates when routing provides value over simpler approaches.
YOU SHOULD SEE: The routing approach produces more relevant, detailed responses for specialized categories.
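One way to structure the comparison is a small harness that scores both pipelines on the same inputs. Everything here is a stub: `routed` and `general` stand in for the two pipelines, and `judge` uses response length as a toy proxy where a real comparison might use an LLM judge or a rubric:

```python
def routed(message: str) -> str:
    """Stub for the full routing pipeline from step 3."""
    return f"[specialized response: {message}] with domain-specific detail"

def general(message: str) -> str:
    """Stub for a single general-purpose prompt."""
    return f"[generic response: {message}]"

def judge(response: str) -> int:
    """Toy quality score (length); a real harness would apply real criteria."""
    return len(response)

def compare(test_inputs):
    """Return the fraction of inputs where routing beats the general prompt."""
    wins = sum(
        1 for m in test_inputs if judge(routed(m)) > judge(general(m))
    )
    return wins / len(test_inputs)
```

With 5 real inputs per category, this win rate is the number the exercise asks you to report.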
Sources
- Building Effective Agents — Anthropic Documentation
- Prompt Engineering: Chain of Thought — Anthropic Documentation