PromptShield
A proactive prompt security auditing and hardening tool that, in the spirit of Anthropic's approach, continuously tests LLM prompts for injection vulnerabilities, provides mitigation guidance, and enforces prompt parameters to improve model reliability.
120h
mvp estimate
7.5
viability grade
technology stack
Python
SQLite
Medium
inspired by
Anthropic published prompt injection failure rates
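The core auditing loop implied by the description could be sketched in the listed stack (Python + SQLite). This is a minimal illustration, not the project's actual design: the pattern list, `audit_prompt`, and the `findings` table are all hypothetical names chosen for the sketch.

```python
# Sketch: flag prompt templates that match known injection-prone
# patterns and log findings to SQLite. All names are hypothetical.
import re
import sqlite3

# Example heuristics; a real tool would load a maintained pattern set.
RISKY_PATTERNS = [
    (r"\{user_input\}\s*$", "user input placed at the end, easy to hijack"),
    (r"ignore (all|any) previous", "template contains override phrasing"),
]

def audit_prompt(template: str) -> list[str]:
    """Return human-readable findings for one prompt template."""
    findings = []
    for pattern, message in RISKY_PATTERNS:
        if re.search(pattern, template, re.IGNORECASE):
            findings.append(message)
    return findings

def log_findings(db: sqlite3.Connection, name: str, findings: list[str]) -> None:
    """Persist findings so repeated audits can be compared over time."""
    db.execute("CREATE TABLE IF NOT EXISTS findings (prompt TEXT, issue TEXT)")
    db.executemany(
        "INSERT INTO findings VALUES (?, ?)",
        [(name, f) for f in findings],
    )

db = sqlite3.connect(":memory:")
issues = audit_prompt("Summarize this document: {user_input}")
log_findings(db, "summarizer_v1", issues)
```

Storing results in SQLite (rather than printing them) is what makes "continuous" auditing possible: each run can be diffed against the last to catch regressions as prompts evolve.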