Agent Skills: PromptInjection

Test LLM applications for prompt injection vulnerabilities — jailbreak attempts, system prompt extraction, context manipulation, guardrail bypass techniques, direct injection, indirect injection, multi-stage attacks, and reconnaissance. USE WHEN prompt injection, jailbreak, LLM security, AI security assessment, pentest AI application, test chatbot, guardrail bypass, direct injection, indirect injection, RAG poisoning, multi-stage attack, complete assessment, reconnaissance.

UncategorizedID: danielmiessler/personal_ai_infrastructure/PromptInjection

Install this agent skill to your local

pnpm dlx add-skill https://github.com/danielmiessler/personal_ai_infrastructure/PromptInjection

Skill Files

Browse the full folder contents for PromptInjection.

Download Skill

Loading file tree…

Select a file to preview its contents.