Anthropic Alignment Monitor
A system that passively monitors the output of large language models (LLMs) such as Claude. It analyzes textual outputs for inconsistencies, biases, and potential ethical violations, surfacing alerts and aggregate insights to developers and researchers working on AI safety and alignment (a minimal pipeline sketch follows the listing below).
mvp estimate: 200h
viability grade: 7.0
technology stack: Python, PostgreSQL
complexity: Medium
inspired by: "Anthropic invited 15 Christians for a summit"
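
A minimal sketch of the monitoring pipeline described above, assuming Python with the psycopg2 driver against the stated PostgreSQL stack. The `alerts` table, the `scan`/`store` names, and the regex heuristics are illustrative assumptions, not a definitive implementation; a real monitor would replace the regexes with trained bias and consistency classifiers.

```python
import re
from dataclasses import dataclass

import psycopg2  # PostgreSQL driver from the stated stack


@dataclass
class Alert:
    model: str
    output: str
    reason: str


# Illustrative placeholder heuristics only; a production monitor
# would use trained classifiers for bias/inconsistency detection.
CHECKS = {
    "absolute_claim": re.compile(r"\bguaranteed\b|\b100% safe\b", re.I),
    "self_contradiction": re.compile(r"\b(always|never)\b.+\bsometimes\b", re.I),
}


def scan(model: str, output: str) -> list[Alert]:
    """Run each heuristic over one model output and collect alerts."""
    return [
        Alert(model, output, name)
        for name, pattern in CHECKS.items()
        if pattern.search(output)
    ]


def store(alerts: list[Alert], dsn: str = "dbname=monitor") -> None:
    """Persist alerts (hypothetical `alerts` table) for later analysis."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO alerts (model, output, reason) VALUES (%s, %s, %s)",
            [(a.model, a.output, a.reason) for a in alerts],
        )


if __name__ == "__main__":
    sample = "This model is guaranteed to be 100% safe."
    for alert in scan("claude-example", sample):
        print(alert)
```

Persisting alerts in PostgreSQL rather than only logging them is what would support the aggregate-insight side of the idea: researchers could query flag rates per model or per heuristic over time.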