Classify text content for safety violations
The Moderations API helps detect potentially harmful content in text.
POST /moderations
Classify text to check whether it violates OpenAI's usage policies.
Response fields:

- id: Unique identifier for the moderation request
- model: The model used for moderation
- results: Array of result objects with the following properties:
  - flagged: Whether the content was flagged
  - categories: Object with boolean values for each category
  - category_scores: Object with confidence scores for each category
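A minimal sketch of calling the endpoint and reading the response fields above. The base URL and the ELECTRONHUB_API_KEY environment variable are assumptions here; substitute the values for your deployment, and note that the model name is only illustrative.

```python
import json
import os
import urllib.request

# Assumption: base URL for the API; adjust to your deployment.
BASE_URL = "https://api.electronhub.ai/v1"


def flagged_categories(result: dict) -> list[str]:
    """Return the names of categories whose boolean flag is True."""
    return [name for name, hit in result.get("categories", {}).items() if hit]


def moderate(text: str, model: str = "omni-moderation-latest") -> dict:
    """POST the text to /moderations and return the parsed JSON body."""
    req = urllib.request.Request(
        f"{BASE_URL}/moderations",
        data=json.dumps({"input": text, "model": model}).encode(),
        headers={
            "Content-Type": "application/json",
            # Assumption: API key is read from an environment variable.
            "Authorization": f"Bearer {os.environ['ELECTRONHUB_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Typical usage: inspect the first result object.
# body = moderate("some user-supplied text")
# result = body["results"][0]
# if result["flagged"]:
#     print("flagged categories:", flagged_categories(result))
#     print("confidence scores:", result["category_scores"])
```

Each entry in results carries the three fields shown above, so checking flagged first and then drilling into categories and category_scores is the usual pattern.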