Agent Skills: AI Spark Coder


ID: Spectaculous-Code/raamattu-nyt/ai-spark-coder

Install this agent skill locally:

pnpm dlx add-skill https://github.com/Spectaculous-Code/raamattu-nyt/tree/HEAD/.claude/skills/ai-spark-coder

Skill Files


.claude/skills/ai-spark-coder/SKILL.md

Skill Metadata

Name
ai-spark-coder
Description

AI Spark Coder

Automates implementation of kipinät (sparks) from two sources:

  1. Primary: ai_prompt.pm_sparks where ai_creatable = true OR usage_state = 'implementing'
  2. Secondary: public.feature_suggestions where status = 'in_progress' AND ai_implementation_plan IS NOT NULL

Workflow

1. Query ready-to-code sparks

Use mcp__plugin_supabase_supabase__execute_sql with project ID iryqgmjauybluwnqhxbg.

Primary source (pm_sparks) — parent sparks:

SELECT s.*, c.name AS core_name, c.name_fi AS core_name_fi
FROM ai_prompt.pm_sparks s
LEFT JOIN ai_prompt.pm_cores c ON s.core_id = c.id
WHERE (s.ai_creatable = true OR s.usage_state = 'implementing')
  AND s.parent_spark_id IS NULL
ORDER BY s.updated_at DESC
LIMIT 1;

1b. Fetch selected child sparks (ala-kipinät)

If a parent spark is found, fetch its children that are also marked implementing:

SELECT * FROM ai_prompt.pm_sparks
WHERE parent_spark_id = '<parent_id>'
  AND usage_state = 'implementing'
ORDER BY priority DESC NULLS LAST, created_at ASC;

Only these children are in scope. Children not marked implementing are excluded — the user selects them via the SoihtuChildPicker UI.

If no spark is found in the primary source, try the secondary source (feature_suggestions):

SELECT * FROM public.feature_suggestions
WHERE status = 'in_progress' AND ai_implementation_plan IS NOT NULL
ORDER BY priority ASC NULLS LAST, created_at ASC
LIMIT 1;

2. If none found in either source

Report "Ei toteutettavia kipinöitä. Merkitse kipinä toteutettavaksi SOIHTU-näkymässä tai 'AI voi koodata' -lipulla." and stop.

3. Count remaining

SELECT
  (SELECT count(*) FROM ai_prompt.pm_sparks
   WHERE (ai_creatable = true OR usage_state = 'implementing')
     AND parent_spark_id IS NULL) AS sparks,
  (SELECT count(*) FROM public.feature_suggestions
   WHERE status = 'in_progress' AND ai_implementation_plan IS NOT NULL) AS suggestions;

4. Build context block

For pm_sparks source:

Load from the parent spark:

  • id, title (or title_fi), content (or content_fi)
  • problem (or problem_fi), benefit_fi
  • type, status, maturity_status, usage_state
  • ai_prompt (AI instructions if set)
  • ai_complexity (estimated complexity)
  • clarification_notes (jsonb array of {question, answer} pairs — key context)
  • core_id, core_name (linked YDIN)
  • angle, priority, rating
  • created_at, updated_at

For each selected child spark, also load: id, title/title_fi, content/content_fi, ai_prompt, clarification_notes, rating.
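The clarification_notes Q&A pairs can be unpacked into rows for easier reading. A minimal sketch, assuming the jsonb shape described above (an array of {question, answer} objects); `<spark_id>` stands for the parent or child id and must be substituted before running:

```sql
-- Sketch: expand clarification_notes into one row per Q&A pair.
-- Assumes the jsonb shape [{"question": "...", "answer": "..."}].
SELECT note->>'question' AS question,
       note->>'answer'   AS answer
FROM ai_prompt.pm_sparks s
CROSS JOIN LATERAL jsonb_array_elements(s.clarification_notes) AS note
WHERE s.id = '<spark_id>';
```

Run it once per spark (parent and each selected child) via the same execute_sql tool; sparks with NULL clarification_notes simply return no rows.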

The linked core tells which product/app the spark concerns. Look up app_systems.meta.linked_core_id to find the system_id.
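That lookup can be sketched as a single query. Only meta.linked_core_id is specified by this skill, so the other app_systems column names here (id as the system identifier, meta as jsonb) are assumptions:

```sql
-- Sketch: resolve the system for a spark via its linked core.
-- ASSUMPTION: app_systems.id is the system identifier and meta is jsonb;
-- only meta.linked_core_id is documented by this skill.
SELECT a.id AS system_id
FROM app_systems a
WHERE a.meta->>'linked_core_id' = '<core_id>';
```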

For feature_suggestions source:

  • id, system_id, title_fi (or title), description_fi (or description)
  • category, priority, benefit_fi (or benefit)
  • suggestion (original user text), source
  • admin_notes, ai_implementation_plan, ai_plan_generated_at
  • ai_context (jsonb — may contain feedback source info)
  • created_at

5. Announce and present plan

Tell the user:

Kipinä: [title_fi or title, truncated to ~80 chars]
Ydin: [core_name or "–"] | Prioriteetti: [priority] | Jäljellä: N kpl

Ala-kipinät toteutukseen (X kpl):

  • [child title 1]
  • [child title 2]
  • ... (if no children were selected, omit this section)

Tarkennukset: [List each clarification_notes Q&A pair from parent + children]

AI-ohje: [ai_prompt if set, or "–"]

Suunnitelma (tutkimuksen perusteella):

  1. [Konkreettinen askel 1]
  2. [Konkreettinen askel 2]
  3. ...

Tiedostot joihin koskee: file1.tsx, file2.ts

Jatketaanko toteutukseen? (Voit muuttaa suunnitelmaa ennen toteutusta.)

MANDATORY — always present this plan and wait for user confirmation before proceeding. The user may:

  • Approve as-is → proceed to step 6
  • Modify the plan → adjust and re-present
  • Skip this spark → leave as-is and stop

To build a good plan, use the Explore agent or Grep/Glob to find the relevant files. Pay attention to clarification_notes — they contain the user's answers to clarifying questions. The plan must cover the parent spark AND all selected children as a coherent whole.

6. Route to the right implementation skill

After user approves the plan, invoke the appropriate skill(s) directly based on what the spark requires. Do NOT delegate to using-superpowers — route directly.

Routing table

| Spark concerns | Invoke skill | Context to pass |
|----------------|--------------|-----------------|
| New UI component / page | frontend-design | Component spec, design requirements |
| Admin panel feature | admin-panel-builder | Page, section, functionality |
| Database table / RPC / migration | supabase-migration-writer | Schema, table design, RLS |
| Edge Function | edge-function-generator | Function spec, auth, endpoints |
| AI feature / prompt | ai-prompt-manager | Feature registration, prompt template |
| Idea Machina internal | idea-machina | Evolution stage, component, data flow |
| Practice / gamification | practice-gamification | Practice type, scheduling, rewards |
| Bible feature | bible-lookup-helper or cinema-voice-architect | Feature type |
| i18n / translation | language-specialist | Keys, languages |
| Refactoring | code-refactoring | Files, what to restructure |
| Subscription / quota | subscription-system | Plan, limits, feature gating |

Multiple skills may apply. For a feature that needs DB + UI + Edge Function, run supabase-migration-writer → edge-function-generator → frontend-design in sequence.

If the spark doesn't clearly match any skill, use code-guru for general implementation guidance.

7. Update spark status

MANDATORY — always run after implementation:

For pm_sparks source — mark completed children first, then check parent:

Step A: Mark each implemented child spark as implemented:

UPDATE ai_prompt.pm_sparks
SET usage_state = 'implemented', updated_at = NOW()
WHERE parent_spark_id = '<parent_id>'
  AND id IN ('<child_id_1>', '<child_id_2>', ...)
  AND usage_state = 'implementing';

Step B: Check if ALL children of the parent are now implemented:

SELECT count(*) AS remaining
FROM ai_prompt.pm_sparks
WHERE parent_spark_id = '<parent_id>'
  AND usage_state != 'implemented';

Step C: Only if remaining = 0 (all children done), mark the parent:

UPDATE ai_prompt.pm_sparks
SET ai_creatable = false, usage_state = 'implemented', updated_at = NOW()
WHERE id = '<parent_id>';

If remaining > 0, report: "Pääkipinä jää avoimeksi — N ala-kipinää vielä toteuttamatta."
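Steps B and C can also be collapsed into a single guarded UPDATE, which closes the window between counting the children and updating the parent. A sketch against the same schema (equivalent, not a required replacement for the steps above):

```sql
-- Sketch: mark the parent implemented only if no child remains open.
-- Combines steps B and C atomically in one statement.
UPDATE ai_prompt.pm_sparks AS p
SET ai_creatable = false,
    usage_state  = 'implemented',
    updated_at   = NOW()
WHERE p.id = '<parent_id>'
  AND NOT EXISTS (
    SELECT 1
    FROM ai_prompt.pm_sparks AS c
    WHERE c.parent_spark_id = p.id
      AND c.usage_state != 'implemented'
  );
```

If the statement reports 0 rows updated, the parent stays open and the "Pääkipinä jää avoimeksi" message from above applies.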

7b. Write implementation comment (pm_sparks only)

Write a comment on the parent spark summarizing what was done. If only some children were implemented, mention which ones and note that the parent remains open:

INSERT INTO ai_prompt.pm_spark_comments (spark_id, user_id, content)
VALUES (
  '<parent_id>',
  (SELECT id FROM auth.users ORDER BY created_at ASC LIMIT 1),
  'AI-toteutettu: <1-3 lauseen yhteenveto, mainitse toteutetut ala-kipinät nimeltä>. <Jos pääkipinä jää avoimeksi: "Pääkipinä auki — N ala-kipinää jäljellä.">'
);

For feature_suggestions source:

UPDATE public.feature_suggestions
SET status = 'implemented', updated_at = NOW()
WHERE id = '<id>';

8. Report

Report the final count of remaining implementable sparks and in_progress suggestions.

Data Models

pm_sparks (ai_prompt schema) — PRIMARY

| Field | Type | Notes |
|-------|------|-------|
| id | uuid | PK |
| evolution_id | uuid | FK to pm_evolutions |
| user_id | uuid | FK to auth.users |
| title | text | Spark title |
| title_fi | text | Finnish title |
| content | text | Main description |
| content_fi | text | Finnish content |
| problem | text | Problem statement |
| problem_fi | text | Finnish problem |
| type | text | idea, question, insight, etc. |
| status | text | raw, analyzed, promoted, etc. |
| maturity_status | text | raw, refined, etc. |
| usage_state | text | inbox, linked_to_core, etc. |
| priority | int | 0-based |
| core_id | uuid | FK to pm_cores (linked YDIN) |
| ai_creatable | boolean | TRUE = ready for AI coding |
| ai_complexity | text | Estimated complexity |
| ai_prompt | text | AI instructions |
| clarification_notes | jsonb | Array of {question, answer} |
| benefit_fi | text | Finnish benefit |
| angle | text | Approach angle |
| created_at | timestamptz | |
| updated_at | timestamptz | |

feature_suggestions (public schema) — SECONDARY

| Field | Type | Notes |
|-------|------|-------|
| id | uuid | PK |
| system_id | text | 'raamattu-nyt' or 'idea-machina' |
| title_fi | text | Finnish title |
| description_fi | text | Finnish description |
| category | text | ai, search, audio, etc. |
| priority | int | Lower = higher |
| status | text | pending, in_progress, implemented, rejected |
| ai_implementation_plan | text | Generated plan (markdown) |
| ai_context | jsonb | Extra context |
| created_at | timestamptz | |

Configuration

  • Supabase project ID: iryqgmjauybluwnqhxbg
  • Tool: mcp__plugin_supabase_supabase__execute_sql