Moderation Framework Overview
We employ a multi-layered moderation approach that combines automated safeguards, application-level restrictions, continuous monitoring, and human review to prevent, detect, and remediate prohibited content.
1. AI Model Safety Controls
Our platform relies on third-party and proprietary AI models that include built-in safety and compliance mechanisms. These safeguards operate at the model level, before any content is generated.
2. Application-Level Input Restrictions
In addition to model-level protections, we implement application-side controls that restrict content before it is submitted to AI models. This layer serves as a preventative control that reduces misuse of the platform.
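As an illustration, an application-side control of this kind might check each prompt against a denylist before forwarding it to a model. The pattern names and contents below are hypothetical placeholders, not the platform's actual policy rules:

```python
import re

# Illustrative denylist; these patterns are hypothetical placeholders,
# not the platform's actual ruleset.
BLOCKED_PATTERNS = [
    re.compile(r"\bprohibited_term\b", re.IGNORECASE),
    re.compile(r"\bdisallowed_phrase\b", re.IGNORECASE),
]

def is_prompt_allowed(prompt: str) -> bool:
    """Return False when the prompt matches any blocked pattern,
    so it is rejected before being submitted to an AI model."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)
```

A production filter would typically combine such pattern rules with classifier-based checks rather than rely on a static list alone.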
3. Automated Post-Generation Content Scanning
All generated content remains subject to ongoing automated monitoring after creation, using AI-based scanning systems. If content is determined to violate policy, it is promptly removed, and associated user accounts may be restricted or suspended.
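A minimal sketch of such a post-generation sweep, under stated assumptions: `violates_policy` here is a hypothetical stub standing in for an AI-based scanning model, and the data shapes are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class GeneratedItem:
    item_id: str
    user_id: str
    text: str
    removed: bool = False

def violates_policy(item: GeneratedItem) -> bool:
    # Hypothetical stand-in for an AI-based scanning model.
    return "forbidden" in item.text.lower()

def scan_items(items: list[GeneratedItem]) -> set[str]:
    """Remove violating items and return the user IDs flagged for
    possible account restriction or suspension."""
    flagged_users = set()
    for item in items:
        if violates_policy(item):
            item.removed = True  # prompt removal of the content
            flagged_users.add(item.user_id)
    return flagged_users
```

In practice the scan would run continuously over newly created content and feed flagged accounts into the enforcement process described below.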
4. Human Moderation and Manual Review
We maintain a dedicated content moderation team responsible for manual review and enforcement.
Confirmed violations result in content removal and may lead to account suspension or termination, depending on severity and recurrence.
5. Enforcement and Remediation
When prohibited content is identified, we take corrective action appropriate to the violation, including content removal and account restriction, suspension, or termination.
We maintain internal logs of moderation actions to support compliance reviews and audits.
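One simple way to keep such an audit trail is to serialize each moderation action as a structured log line. The field names below are illustrative, not a documented log schema:

```python
import json
from datetime import datetime, timezone

def log_moderation_action(item_id: str, action: str, reason: str) -> str:
    """Serialize one moderation action as a JSON log line; field names
    are illustrative, not the platform's actual log schema."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "item_id": item_id,
        "action": action,  # e.g. "removed", "account_suspended"
        "reason": reason,
    }
    return json.dumps(entry)
```

Structured entries like this can later be filtered and aggregated during compliance reviews and audits.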
6. Ongoing Review and Improvement
Our content moderation processes are continuously evaluated and updated to address emerging risks and improve detection accuracy.