Overview
This skill has four layers:
- Write the task well (without caring about prompt structure)
- Review and refine spec.md (clarity + coverage pass)
- Identify the context layer (slots + tools)
- Structure the prompt (purely technical assembly)
1) Write the task well without caring about prompt structure
What you’re optimizing for
Write instructions that are processing-minimal (low working-memory cost) while being expressive enough to generalize across messy edge cases without growing a brittle exception list.
The mechanism is simple:
- Use natural frames (shared schemas) so you can say less.
- State invariants (governing principles) so many cases collapse into one line.
- Keep flow linear so the reader/model never has to “detour” to reconcile side facts.
The linear writing algorithm
You will create spec.md immediately and iteratively refine it. Each step updates spec.md until it contains only the final specification (no draft markers, no commentary).
0) Create spec.md
Output: a spec.md file with headings for: Intent, Unit of attention, Narrative, Invariants, Ambiguity.
You will fill and rewrite these sections as you proceed.
1) Write the intent sentence (activate the frame)
Output: one sentence naming the job in domain language.
Guidelines:
- Prefer culturally-loaded verbs (“evaluate eligibility”, “assess compliance”, “summarize profile”) over implementation verbs (“parse”, “extract”, “compute”).
- Include the audience/purpose only if it changes decisions.
Template:
- “Your task is to <do the job> using <inputs> to produce <output>.”
2) Declare the unit of attention (prevent drift)
Output: one line stating the repeatable “thing” being handled.
Examples:
- “The unit of work is each rule.”
- “The unit of work is each requirement.”
- “The unit of work is each section of the policy.”
This is the spine that stops the work from dissolving into vague commentary.
3) Draft a single narrative paragraph (the spine)
Output: one continuous paragraph explaining the task end-to-end.
Rules:
- No parentheticals.
- No asides.
- No extra facts that aren’t needed to execute the job.
Write it as if you’re explaining it to a competent colleague so they can do it without asking follow-ups.
4) Promote edge-case pressure into 1–3 invariants (good abstraction)
Output: 1–3 governing principles.
Method:
- Scan your paragraph and ask: “Where could a reasonable reader/model make the wrong call?”
- For each risk, write the invariant the correct behavior is protecting.
Invariant patterns:
- “Always …” (non-negotiable constraint)
- “Never …” (hard prohibition)
- “Prefer … when …” (priority rule)
Examples:
- “Always cite the specific evidence used; if none exists, say ‘Not found’ rather than guessing.”
- “Treat each unit independently even if multiple units overlap.”
- “If sources conflict, prefer <source A> and flag the conflict.”
This is the edge-case engine: invariants generalize; lists overfit.
5) Add one ambiguity sentence (the long-tail catcher)
Output: exactly one sentence describing what to do when reality doesn’t fit cleanly.
Choose a posture:
- Conservative (strict/safer)
- Best-effort (reasonable interpretation, clearly labeled)
- Ask (request missing info only if decision hinges on it)
Template:
- “When unclear or information is missing, <fallback behavior>, and <surface signal>.”
Example:
- “When a rule is ambiguous, interpret it conservatively and note the interpretation used.”
6) Weave the invariants back into the narrative (make it readable)
Output: a rewritten narrative paragraph where the principles are integrated naturally.
Goal:
- The paragraph should read linearly without detours.
- Principles should feel like part of the story, not an appendix.
Common weaving move:
- Turn relationships into actions (“First derive X from Y, then…”), rather than stating orthogonal facts in parentheses.
7) Tighten for processing-minimality (editing pass)
Output: the final minimal version that still feels like normal writing.
Checklist:
- Remove orthogonal clarifications.
- Move definitions before first use.
- Replace parentheticals with integrated clauses or actions.
- Merge sentences that force reconciliation.
- Prefer one clean sentence over two that introduce a bookkeeping burden.
The single move that prevents “exception sprawl”
When you catch yourself adding a special-case line:
- Write the special case.
- Ask: “What invariant is it protecting?”
- Replace it with that invariant.
- Weave it into the narrative paragraph.
If you can’t turn it into an invariant, it belongs in the single ambiguity sentence, not as another rule.
Final artifact
Output: spec.md containing only the final versions of:
- Intent sentence
- Unit of attention
- Narrative spine (woven + tightened)
- Invariants (1–3)
- Ambiguity sentence
spec.md is updated throughout the algorithm, then finalized with a last cleanup pass to remove any drafting residue. It is the source of truth for the next layers.
2) Review and refine spec.md
This step is a deliberate “read-through” pass that catches issues that are hard to notice while drafting. You will only edit spec.md (not add new side documents).
Review sequence
- Linear read test (working-memory cost)
- Read the Narrative section top-to-bottom.
- Remove anything that forces the reader to reconcile two places (“oh, that earlier sentence changes what this means”).
- Convert orthogonal clarifications into integrated actions (avoid parentheses and side notes).
- Coherence test (order + relevance)
- Every sentence should either advance the task or constrain a decision.
- If a sentence is “true but not needed”, delete it.
- If a constraint is needed, move it to the earliest point it’s required.
- Frame check (say less by saying the right thing)
- Verify the Intent sentence uses domain language that activates the right shared schema.
- Replace implementation language with the name of the real-world activity.
- Invariant check (edge-case coverage without sprawl)
- Confirm the invariants are:
- Few (1–3)
- General (each covers multiple cases)
- Decision-shaping (not just commentary)
- If you find yourself wanting an “exception”, promote it into an invariant instead.
- Ambiguity check (one sentence, does real work)
- Ensure the Ambiguity sentence is specific about:
- What to do (fallback behavior)
- What to surface (uncertainty, missing info, interpretation)
- Ensure it doesn’t secretly introduce new steps or extra policy.
- Coverage check (unit discipline)
- Ensure the Unit of attention is unambiguous and the Narrative guarantees every unit is handled.
- If the narrative allows “skipping” units implicitly, tighten it.
Exit condition
Stop when spec.md reads like a short set of instructions a competent colleague could execute in one pass without backtracking.
3) Identify the context layer
The context layer is everything the model will rely on besides the narrative instructions themselves.
It has two parts:
- Injected context (slots): variables and constants you provide directly in the prompt/system instructions.
- Explored context (tools): capabilities the model uses to discover additional context dynamically (retrieval, search, computation, actions).
Tool design is out of scope for this skill. Assume the tool surface already exists (at least roughly). The goal here is to make sure your specification and prompt assembly treat tools as first-class context, just like variables/constants.
3A) Injected context: variables and constants
This step turns spec.md into a template by identifying the “slots” you will inject at runtime, and the stable reference material you should isolate for clarity and lookup.
Definitions
- Variable: any value that can change per run (user name, the object under consideration, a student profile, an input document, timestamps, IDs).
- Constant: stable, reusable material that is worth isolating because it improves:
- Narrative flow (keeps the task description clean and linear)
- Salience (isolated blocks feel “important”)
- Needle-in-a-haystack lookup (large reference data is easier to search when clearly bounded)
“Constant” does not mean “everything that isn’t variable.” It means “static content that should be extracted into its own block.”
Output
- A list of slots (variables and constants), each with a clear name and what it contains.
- A mapping of where the slot lives:
- Inline substitution for small local values.
- Global blocks for large or reference-like content.
- Local blocks (scoped tags) only when it materially improves clarity.
You will incorporate this slot inventory into your templating system in step 4, alongside the tool context.
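The slot inventory above can be sketched as a small data structure. This is a minimal Python sketch, assuming a hypothetical `Slot` class; the field names and example slots are illustrative, not part of this skill:

```python
from dataclasses import dataclass
from typing import Literal

# Placement mirrors the mapping above: inline substitution,
# global tagged blocks, or scoped local blocks.
Placement = Literal["inline", "global_block", "local_block"]

@dataclass
class Slot:
    name: str                             # noun phrase, e.g. "student_profile"
    kind: Literal["variable", "constant"]
    placement: Placement
    description: str                      # what the slot contains

# An illustrative inventory for an eligibility-assessment task.
inventory = [
    Slot("user_name", "variable", "inline", "who the output addresses"),
    Slot("student_profile", "variable", "global_block", "structured applicant data"),
    Slot("all_rules", "constant", "global_block", "source-of-truth requirements"),
]

# Sanity check: every slot has a defined placement.
assert all(s.placement in ("inline", "global_block", "local_block") for s in inventory)
```

Keeping the inventory explicit like this makes it mechanical to verify, in step 4, that every slot ends up either inlined or bounded in a block.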
How to identify variables (easy)
- Scan spec.md and highlight anything that comes from the specific instance being processed.
- If it could differ between runs, it is a variable.
Heuristic:
- If the model needs to “look through” it (profile data, documents, JSON structures), prefer a tagged block.
- If it is a small label (a name, a short identifier), prefer inline substitution.
How to identify constants (the optimization surface)
Start from the opposite angle: look for content that is stable and either interrupts the story or functions like reference data.
Extract as a constant when at least one is true:
- Reference / lookup material
- Glossaries, lookup tables, enumerations, policy excerpts, large docs.
- Anything the model will search within if it knows what it’s looking for.
- Reusable decision logic
- Conditional or combinatorial logic that is stable and can be referred to by name.
- Stable “implicit rules” or meta-rules that would otherwise clutter the narrative.
- Long-form instructions that aren’t the narrative
- Evaluation rubrics, formatting rules, consistent procedures reused across prompts.
- If it’s long and stable, isolate it.
Slot placement rules
Inline substitution (small + local)
Use inline substitution when the value is short and does not create a detour.
Examples:
- “You are speaking to {USER_NAME} …”
- “Assess eligibility for {PROGRAM_NAME} …”
Global blocks (large + reference-like)
Use global tagged blocks for any haystack content or large structured inputs.
Examples of global blocks:
- `<student_profile>` (structured data)
- `<structure>` / `<all_rules>` (source-of-truth requirements)
- `<implicit_rules>` (stable meta-rules)
- `<evaluation_instructions>` (stable rubric)
Local blocks (scoped tags)
Use local blocks only when a small constant/variable is tightly coupled to one instruction and extracting it globally would reduce clarity.
Rule:
- Default to global blocks; use local blocks sparingly.
Naming
- Use noun phrases: `student_profile`, `structure`, `combination_logic`, `all_rules`, `implicit_rules`, `evaluation_instructions`.
- Prefer names that match how the narrative refers to them.
Size and salience heuristic
- If it’s tiny: inline.
- If it’s read once: keep it in the narrative.
- If it’s read many times or searched: extract as a block.
- If extraction makes the narrative read more linearly: extract.
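The placement rules above can be sketched as two small rendering helpers. A hedged Python example; the helper names and tag syntax are assumptions, not a prescribed API:

```python
def render_block(name: str, content: str) -> str:
    """Wrap large or reference-like content in an XML-style tagged block."""
    return f"<{name}>\n{content}\n</{name}>"

def render_inline(template: str, values: dict) -> str:
    """Substitute small, local values directly into the narrative."""
    return template.format(**values)

# Inline substitution for a tiny local value:
narrative = render_inline(
    "Assess eligibility for {PROGRAM_NAME} using the blocks below.",
    {"PROGRAM_NAME": "Honors Program"},
)

# Global block for haystack content the model will search within:
profile_block = render_block("student_profile", '{"gpa": 3.7, "credits": 92}')
```

The split mirrors the heuristic: short labels flow through `render_inline` without creating a detour, while anything searched or read repeatedly gets a bounded block.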
3B) Explored context: tools
Tools make the context layer dynamic: instead of only consuming injected slots, the model can explore its environment and construct additional context on demand.
Common tool categories:
- Internal retrieval: RAG, file search, SQL queries, bash/shell, system catalogs.
- Web search: open-web lookup for up-to-date or niche details.
- Computation / analysis: code interpreter for math, transformation, aggregation, reasoning over tool outputs.
- External actions: performing allowed operations in connected systems.
You do not need to design these tools here. You only need to:
- Confirm the task narrative in spec.md implicitly assumes the existence of these capabilities where appropriate.
- Represent tool availability as explicit context blocks in the prompt, so the model knows what it can use.
Representing tool context
In step 4, tools should be described in the same explicit, sectioned way as constants and variables (often XML-style tagged blocks). This turns “the model can use tools” into bounded, searchable context.
Example shape (illustrative):
- A `<tools>` block describing available tools and what they are for
- `<retrieval>` / `<sql>` / `<bash>` / `<web_search>` blocks as needed
The actual anatomy and ordering of these blocks relative to the task narrative is defined in step 4.
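One way to sketch that shape in code. The sub-block tag names follow the illustrative example above, and the wording of each tool description is an assumption:

```python
def render_tools_block(tools: dict) -> str:
    """Emit a bounded <tools> block with one sub-block per tool,
    each stating the tool's purpose."""
    inner = "\n".join(
        f"<{name}>{purpose}</{name}>" for name, purpose in tools.items()
    )
    return f"<tools>\n{inner}\n</tools>"

tools_block = render_tools_block({
    "retrieval": "search internal documents for policy excerpts",
    "web_search": "look up up-to-date or niche details on the open web",
})
```

Because the block is bounded and named, the task narrative can refer to it ("use the tools in `<tools>` when the starting context is insufficient") without inline clutter.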
Notes on syntax (preview for step 4)
In step 4, injected context (variables/constants) and explored context (tools) are typically placed in tagged blocks adjacent to the task description so the narrative can refer to them without inline clutter.
Example shape (illustrative):
- Place reference data in blocks: `<structure>…</structure>`, `<student_profile>…</student_profile>`
- Describe tools in blocks: `<tools>…</tools>` (and optionally per-tool blocks)
- Keep the task narrative clean and point to the blocks by name.
4) Structure the prompt (purely technical assembly)
This step takes the written specification (spec.md) and the context layer (slots + tools) and assembles a prompt that is mechanically easy for a model to follow.
The base anatomy mirrors the common “Goal / Return format / Warnings” pattern, extended with a context layer.
- Goal: what to do.
- Return format: how to present results.
- Warnings: constraints and edge-case pressure.
- Context layer: the inputs and reference material the model should rely on.
4A) Base anatomy (works for one-shot prompts)
When there is no tool surface, the context layer should be treated as complete: it contains everything the model needs.
Goal
- Derived from spec.md:
  - Intent sentence
  - Unit of attention
  - Narrative (woven + tightened)
Warnings
- Derived from spec.md:
  - Invariants (1–3)
  - Ambiguity sentence
Return format
- A compact output contract.
- Prefer schemas and checks that make answers verifiable.
Context layer (complete context)
- Injected slots (variables + constants) as explicit tagged blocks.
4B) Evolution when tools are available (tool-enabled prompts)
Tool-enabled prompts do not add a second, separate “context” concept. Instead, the context layer changes meaning:
- In one-shot prompts, the context layer is complete context.
- In tool-enabled prompts, the context layer is starting context (what you already have + where to begin).
- Tools describe how to obtain any additional context needed to complete the task.
Context layer (starting context)
- Provide the initial artifacts and reference material available without exploration.
- Make it explicit where the agent should start (primary inputs / first sources of truth).
Tools (explored context)
- Describe available tools as explicit tagged blocks.
- Keep descriptions oriented around purpose and boundaries (what the tool is for, what it can access, any constraints).
4C) Tagged-block assembly shape
Use explicit tagged blocks (often XML-style) to bound context. This supports salience and needle-in-a-haystack lookup.
At a high level:
- Goal / Return format / Warnings should read cleanly without inline clutter.
- The context layer (slots and, if applicable, tools) lives in tagged blocks adjacent to the task narrative.
Illustrative layout
- Goal
- Return format
- Warnings
- Context layer blocks
- One-shot: complete context blocks
- Tool-enabled: starting context blocks
- Tools blocks (only if tools exist)
- Task narrative refers to blocks by name
4D) Mapping from spec.md into the base anatomy
- Goal ← Intent + Unit + Narrative
- Warnings ← Invariants + Ambiguity
- Context layer ← injected slots (variables/constants) and, when tools exist, the starting context
- Tools ← explored context blocks (only when available)
The final prompt should feel like a single linear set of instructions, with context separated into bounded blocks rather than woven inline.
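The mapping above can be sketched end-to-end as a single assembly function. The section labels and spec fields mirror this document, but the function itself is a minimal sketch, not a prescribed implementation:

```python
def assemble_prompt(spec: dict, context_blocks: list, tools_block: str = "") -> str:
    """Map spec.md fields into the Goal / Return format / Warnings anatomy,
    appending the context layer as bounded tagged blocks."""
    # Goal <- Intent + Unit + Narrative
    goal = f"{spec['intent']} {spec['unit']} {spec['narrative']}"
    # Warnings <- Invariants + Ambiguity
    warnings = " ".join(spec["invariants"] + [spec["ambiguity"]])
    sections = [
        f"Goal: {goal}",
        f"Return format: {spec['return_format']}",
        f"Warnings: {warnings}",
        *context_blocks,          # one-shot: complete; tool-enabled: starting context
    ]
    if tools_block:               # explored context, only when tools exist
        sections.append(tools_block)
    return "\n\n".join(sections)

prompt = assemble_prompt(
    {
        "intent": "Your task is to assess eligibility using the profile to produce a verdict.",
        "unit": "The unit of work is each rule.",
        "narrative": "For each rule, check the profile against it and record the outcome.",
        "invariants": ["Always cite the specific evidence used."],
        "ambiguity": "When a rule is ambiguous, interpret it conservatively and note the interpretation used.",
        "return_format": "One verdict line per rule, with cited evidence.",
    },
    ["<student_profile>…</student_profile>", "<all_rules>…</all_rules>"],
)
```

Note that the narrative stays one linear string while all haystack material arrives as pre-rendered blocks, which is exactly the separation the anatomy calls for.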