AgentGuard
7.8
A security auditing tool leveraging AI to proactively identify and mitigate malicious prompts and outputs from LLM-powered agents, specializing in adherence to legal and ethical guidelines (e.g., preventing sexually explicit content generation as seen with Grok).
120h
mvp estimate
7.8
viability grade
9
views
technology stack
Python
Medium
PostgreSQL
inspired by
Grok targeted in UK law over sexually-explicit AI image generation