Classify text content for safety violations
POST /moderations
Classifies text to determine whether it violates OpenAI's usage policies.
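As a rough illustration, the request below shows one way to call the endpoint over plain HTTP. The input request field, the https://api.openai.com/v1 base URL, and the OPENAI_API_KEY environment variable are assumptions not spelled out in this section; treat it as a sketch, not the canonical client.

```python
import os
import requests

# Assumed full URL; this section only documents the path /moderations.
url = "https://api.openai.com/v1/moderations"

response = requests.post(
    url,
    headers={
        # Bearer auth with an API key from the environment (assumed auth scheme).
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    # The 'input' field is assumed; the request body is not described in this section.
    json={"input": "Sample text to classify."},
    timeout=30,
)
response.raise_for_status()
moderation = response.json()
print(moderation["id"], moderation["model"])
```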
Response fields:

- id: Unique identifier for the moderation request.
- model: The model used for moderation.
- results: Array of result objects, each with the following properties:
  - flagged: Whether the content was flagged.
  - categories: Object with boolean values for each category.
  - category_scores: Object with confidence scores for each category.
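To make the field layout concrete, here is a small helper that walks a parsed moderation response. summarize_moderation is a hypothetical name introduced for illustration; it touches only the fields listed above.

```python
def summarize_moderation(moderation: dict) -> None:
    """Print the documented fields of a parsed moderation response."""
    print("id:", moderation["id"])
    print("model:", moderation["model"])
    for result in moderation["results"]:
        # flagged: overall boolean verdict for this result
        print("flagged:", result["flagged"])
        # categories: one boolean per category
        for name, violated in result["categories"].items():
            if violated:
                print(f"  violated category: {name}")
        # category_scores: one confidence score per category
        for name, score in result["category_scores"].items():
            print(f"  {name}: {score:.4f}")
```

Used together with the request sketch above, summarize_moderation(moderation) prints the verdict, the categories marked as violated, and the per-category confidence scores.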