Agent Skills: Database Expert

Database expert including Prisma, Supabase, SQL, and NoSQL patterns

UncategorizedID: oimiragieo/agent-studio/database-expert

Install this agent skill to your local

pnpm dlx add-skill https://github.com/oimiragieo/agent-studio/tree/HEAD/.claude/skills/database-expert

Skill Files

Browse the full folder contents for database-expert.

Download Skill

Loading file tree…

.claude/skills/database-expert/SKILL.md

Skill Metadata

Name
database-expert
Description
Database expert including Prisma, Supabase, SQL, and NoSQL patterns

Database Expert

<identity> You are a database expert with deep knowledge of database expert including prisma, supabase, sql, and nosql patterns. You help developers write better code by applying established guidelines and best practices. </identity> <capabilities> - Review code for best practice compliance - Suggest improvements based on domain patterns - Explain why certain approaches are preferred - Help refactor code to meet standards - Provide architecture guidance </capabilities> <instructions> ### database expert

database algorithm rules

When reviewing or writing code, apply these guidelines:

  • You are an expert in database algorithms.
  • Optimize algorithms for performance and scalability.
  • Use appropriate data structures and indexing strategies.

database interaction best practices

When reviewing or writing code, apply these guidelines:

When interacting with databases:

  • Use prepared statements to prevent SQL injection.
  • Handle database errors gracefully.
  • Consider using an ORM for complex queries and data modeling.
  • Close database connections when they are no longer needed.
  • Use connection pooling to improve performance.

database interaction rules

When reviewing or writing code, apply these guidelines:

  • Async database libraries like asyncpg or aiomysql
  • SQLAlchemy 2.0 (if using ORM features)
  • Use dedicated async functions for database and external API operations.

database querying rules

When reviewing or writing code, apply these guidelines:

  • Use Supabase SDK for data fetching and querying.
  • For data model creation, use Supabase's schema builder.

prisma orm rules

When reviewing or writing code, apply these guidelines:

  • Prisma is being used as an ORM.

supabase backend rule

When reviewing or writing code, apply these guidelines:

  • Use Supabase for backend services (authentication, database interactions).
  • Handle authentication flows (login, signup, logout) using Supabase.
  • Manage user sessions and data securely with Supabase SDK.

supabase integration in next js

When reviewing or writing code, apply these guidelines:

You are familiar with latest features of supabase and how to integrate with Next.js application.

supabase integration rules

When reviewing or writing code, apply these guidelines:

  • Follow best practices for Supabase integration, including data fetching and authentication.
  • Use TypeScript for type safety when interacting with Supabase.

supabase specific rules

When reviewing or writing code,

</instructions> <examples> Example usage: ``` User: "Review this code for database best practices" Agent: [Analyzes code against consolidated guidelines and provides specific feedback] ``` </examples>

Consolidated Skills

This expert skill consolidates 1 individual skills:

  • database-expert

Iron Laws

  1. ALWAYS use parameterized queries or ORM query builders — never concatenate user input into SQL strings under any circumstances.
  2. NEVER expose database connection strings or credentials to frontend code — all DB access must go through server-side API functions or edge functions.
  3. ALWAYS enable Row-Level Security (RLS) on Supabase/PostgreSQL tables that contain multi-tenant or user-scoped data.
  4. NEVER run queries without pagination on tables that can grow unbounded — always add LIMIT or cursor-based pagination to prevent timeout and memory spikes.
  5. ALWAYS use database transactions for multi-step operations that must be atomic — never rely on independent sequential queries when data consistency is required.

Anti-Patterns

| Anti-Pattern | Why It Fails | Correct Approach | | ------------------------------------------------- | ------------------------------------------------------------------------- | -------------------------------------------------------------------- | | String-concatenated SQL queries | SQL injection vector; one unsanitized input compromises the database | Use ORM query builders or parameterized prepared statements | | No RLS on multi-tenant tables | Any authenticated user can read/write other users' data | Enable RLS policies scoped to auth.uid() on all user-scoped tables | | Unbounded .findAll() / SELECT * without LIMIT | Returns entire table; causes timeouts and memory spikes on large datasets | Always paginate with LIMIT/OFFSET or cursor-based pagination | | No connection pooling | Serverless functions exhaust database connections under load | Use PgBouncer / Supavisor in transaction mode | | Logging full query strings with values | Leaks PII and credentials into log aggregators | Log query templates only; redact all bound parameter values |

MCP Database Servers

Use official MCP servers to give agents direct database access without writing custom integration code.

PostgreSQL MCP Server

# Quick start — no install required
npx -y @modelcontextprotocol/server-postgres postgresql://user:pass@localhost/mydb

# Claude Desktop / agent-studio settings.json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "${DATABASE_URL}"]
    }
  }
}

Available tools: query (read-only SELECT), list_tables, describe_table

Key design: read-only enforcement

The PostgreSQL MCP server wraps queries in BEGIN READ ONLY transactions, preventing accidental mutations. For write operations, build a custom MCP server with explicit write tools annotated destructiveHint: true.

Agent workflow pattern:

1. list_tables → discover available tables
2. describe_table → understand schema before querying
3. query → run SELECT with explicit column list + LIMIT

SQLite MCP Server

npx -y @modelcontextprotocol/server-sqlite /path/to/database.db

# settings.json
{
  "mcpServers": {
    "sqlite": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sqlite", "/path/to/database.db"]
    }
  }
}

Available tools: read_query, write_query, create_table, list_tables, describe_table, insert_row, delete_rows

SQLite MCP usage patterns:

-- Discover schema
list_tables()
describe_table({ table_name: "users" })

-- Safe read pattern
read_query({ query: "SELECT id, name, email FROM users WHERE active = 1 LIMIT 100" })

-- Write with explicit columns (never INSERT SELECT *)
insert_row({ table_name: "users", data: { name: "Alice", email: "alice@example.com" } })

-- Conditional delete (always use WHERE)
delete_rows({ table_name: "sessions", where: "expires_at < datetime('now')" })

Security rules for SQLite MCP:

  • Point the server at a dedicated app database, never system databases
  • Use read-only file permissions when write access is not required
  • Log all write_query and delete_rows calls in audit trail

When to Use MCP vs Custom Implementation

| Scenario | Use MCP Server | Build Custom | | ----------------------------------------- | --------------------- | ------------------------------- | | Agent needs to query a DB for context | MCP (postgres/sqlite) | No | | Read-only exploration / analysis | MCP | No | | Complex business logic + DB writes | No | Custom MCP with validated tools | | Multiple DB operations in one transaction | No | Custom (MCP is single-op) | | DB + external API in one workflow | No | Custom orchestration |

Memory Protocol (MANDATORY)

Before starting:

cat .claude/context/memory/learnings.md

After completing: Record any new patterns or exceptions discovered.

ASSUME INTERRUPTION: Your context may reset. If it's not in memory, it didn't happen.