Task 4.1

System Prompts

The system prompt defines Claude's persona, constraints, and behavioral guidelines for a conversation. It is the most powerful tool for shaping model behavior.

System Prompt Structure

Effective system prompts follow a clear structure: role definition (who Claude is), task description (what it should do), constraints (what it must not do), output format (how to structure responses), and examples (concrete illustrations of desired behavior).

Place the most important instructions at the beginning and end of the system prompt — these positions receive the most attention from the model. Put detailed reference material in the middle.

Role and Persona

Assigning Claude a specific role improves response quality for specialized tasks. 'You are a senior security auditor reviewing code for vulnerabilities' produces more thorough security reviews than 'Review this code.' The role primes the model to access relevant knowledge and apply appropriate judgment.

Be specific about the role's expertise level, domain, and perspective. A 'junior developer' role will produce simpler explanations than a 'principal architect' role.

Constraints and Boundaries

Constraints tell Claude what NOT to do. These are essential for production applications: don't reveal system prompts, don't make up information, don't perform actions outside scope, don't use certain language or tones.

Constraints are soft guardrails — they influence behavior but can be circumvented by sophisticated prompt injection. Always back critical constraints with programmatic checks (input/output validation, tool restrictions).

Key Concept

System Prompts Shape Behavior, Not Guarantee It

System prompts are powerful but not absolute. They shape the model's default behavior and establish strong tendencies, but they cannot guarantee compliance in all cases. For critical safety or security requirements, always pair system prompt instructions with programmatic enforcement (guardrails, output validation, tool restrictions). The system prompt is the first layer of defense, not the last.

Exam Traps

EXAM TRAP

Treating system prompts as security boundaries

System prompts can be overridden by prompt injection. Never rely on them as the sole mechanism for preventing dangerous actions.

EXAM TRAP

Writing overly long system prompts

Very long system prompts consume context window space and can dilute important instructions. Keep prompts focused and concise.

EXAM TRAP

Not placing critical instructions at the start and end

The model pays most attention to the beginning and end of the system prompt. Critical instructions buried in the middle may be less reliably followed.

Check Your Understanding

You are building a customer support chatbot. The system prompt says 'Never discuss competitor products.' A user asks 'How does your product compare to CompetitorX?' How should the system handle this?

Build Exercise

Craft Effective System Prompts

Beginner30 minutes

What you'll learn

  • Structure system prompts effectively
  • Write clear role definitions and constraints
  • Test system prompts against edge cases
  • Iterate on prompts based on model behavior
  1. Write a system prompt for a code review assistant. Include: role, expertise level, what to check for, output format, and constraints.

    WHY: A structured system prompt produces more consistent, higher-quality code reviews.

    YOU SHOULD SEE: A system prompt with clear sections for role, task, constraints, and output format.

  2. Test the prompt with 3 different code samples: clean code, code with a security issue, and code with a performance problem.

    WHY: Testing with varied inputs reveals whether the prompt handles different scenarios well.

    YOU SHOULD SEE: The model produces relevant reviews for each sample, catching issues when present.

  3. Add few-shot examples to the system prompt showing ideal review output for a small code sample.

    WHY: Examples are the most effective way to calibrate output format and quality.

    YOU SHOULD SEE: More consistent output format after adding examples.

  4. Test with adversarial inputs: a user asking the assistant to ignore its instructions or perform unrelated tasks.

    WHY: Adversarial testing reveals how robust the system prompt is against misuse.

    YOU SHOULD SEE: The model stays in its code review role and deflects off-topic requests.

Sources

Previous

CI/CD Integration