AI Image Safety Monitor
A proactive system that detects and flags potentially harmful or inappropriate content generated by AI chatbots like Grok, providing a layer of safety and potentially assisting with compliance efforts.
MVP estimate: 220h
Viability grade: 8.2
Views: 5
Technology stack: Python
Difficulty: Difficult
Medium
Inspired by: Crackdown on Grok after sexualized images were generated
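Since the listing names Python as the stack, here is a minimal sketch of what the MVP's core monitoring loop might look like. Everything in it is an assumption rather than part of the listing: the `generated_images` watch directory, the `FLAG_THRESHOLD` value, and the `score_image` placeholder, which stands in for a real harmful-content classifier.

```python
"""Minimal sketch of an AI image safety monitor (assumed design, not the listing's spec)."""

import json
import time
from dataclasses import dataclass, asdict
from pathlib import Path

FLAG_THRESHOLD = 0.8                   # assumed cutoff: scores above this are flagged for review
WATCH_DIR = Path("generated_images")   # hypothetical directory where chatbot images land
REPORT_PATH = Path("flags.jsonl")      # audit log of flagged images for compliance review


@dataclass
class SafetyResult:
    path: str
    score: float
    flagged: bool
    checked_at: float


def score_image(path: Path) -> float:
    """Placeholder for a real safety classifier.

    A production system would run an NSFW / harmful-content model on the image
    bytes and return its probability of unsafe content; this stub keeps the
    pipeline runnable end to end.
    """
    return 0.0  # stub: treat everything as safe


def check_image(path: Path) -> SafetyResult:
    """Score one image and decide whether it should be flagged."""
    score = score_image(path)
    return SafetyResult(
        path=str(path),
        score=score,
        flagged=score >= FLAG_THRESHOLD,
        checked_at=time.time(),
    )


def monitor_once(seen: set[str]) -> list[SafetyResult]:
    """Scan the watch directory once and return results for images not seen before."""
    results = []
    for image_path in sorted(WATCH_DIR.glob("*.png")):
        if str(image_path) in seen:
            continue
        seen.add(str(image_path))
        result = check_image(image_path)
        results.append(result)
        if result.flagged:
            # Append flagged items to a JSON-lines audit log.
            with REPORT_PATH.open("a") as fh:
                fh.write(json.dumps(asdict(result)) + "\n")
    return results


if __name__ == "__main__":
    WATCH_DIR.mkdir(exist_ok=True)
    seen_paths: set[str] = set()
    while True:
        for r in monitor_once(seen_paths):
            status = "FLAGGED" if r.flagged else "ok"
            print(f"{status}: {r.path} (score={r.score:.2f})")
        time.sleep(5)  # polling interval; a production system might use webhooks instead
```

The polling-plus-audit-log shape is only one possible design; hooking directly into the image-generation pipeline or an existing moderation API would change the structure considerably.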