Guardian AI Safety Auditor
8.2
A SaaS platform that leverages AI (like Anthropic's Claude) to automatically scan and identify potential safety and ethical risks in AI models and applications before deployment, especially focusing on bias, malicious outputs, and misuse potential. Addresses concerns raised about AI's use in war and surveillance.
250h
mvp estimate
8.2
viability grade
0
views
technology stack
Python
Difficult
PostgreSQL
inspired by
US military feud with Anthropic highlighted AI safety concerns