Introduction:
Stability AI offers a suite of tools and models that generate images from user-provided prompts while adhering to legal and ethical standards. To maintain a safe and inclusive environment, we have implemented safeguards that prevent the generation of illegal or inappropriate content. In this article, we explain how our content filtering system works, when it may be triggered falsely, and how we address those situations.
Content Filtering System:
To uphold our commitment to safety, Stability AI employs advanced algorithms that analyze user prompts and generated images. If a prompt or image is flagged as potentially illegal or inappropriate, our content filter intervenes by blurring the image or preventing its generation altogether.
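For developers calling the API, a filtered result is typically identifiable in the response itself. The sketch below shows one way a client might detect a blurred, content-filtered image. The endpoint, engine ID, and the finishReason / CONTENT_FILTERED field names are assumptions based on the v1 REST text-to-image API and may differ in your API version, so please consult the API reference for the exact response shape.

```python
# Minimal sketch: detecting a content-filtered (blurred) result from the
# Stability AI REST API. The endpoint, engine ID, and "CONTENT_FILTERED"
# value are assumptions and may differ by API version.
import os
import base64
import requests

API_HOST = "https://api.stability.ai"          # assumed host
ENGINE_ID = "stable-diffusion-xl-1024-v1-0"    # assumed engine id

response = requests.post(
    f"{API_HOST}/v1/generation/{ENGINE_ID}/text-to-image",
    headers={
        # STABILITY_API_KEY is an assumed environment variable name.
        "Authorization": f"Bearer {os.environ['STABILITY_API_KEY']}",
        "Accept": "application/json",
        "Content-Type": "application/json",
    },
    json={"text_prompts": [{"text": "a watercolor painting of a lighthouse"}]},
    timeout=60,
)
response.raise_for_status()

for i, artifact in enumerate(response.json().get("artifacts", [])):
    if artifact.get("finishReason") == "CONTENT_FILTERED":
        # The safety filter intervened: the returned image is blurred.
        print(f"Artifact {i}: flagged by the content filter (blurred).")
    else:
        with open(f"image_{i}.png", "wb") as f:
            f.write(base64.b64decode(artifact["base64"]))
        print(f"Artifact {i}: generated successfully.")
```

Distinguishing a filtered result from a normal success or error lets an application show a clear message to the user instead of silently saving a blurred image.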
Instances of False Positives:
While our safeguards strive for accuracy, false positives can occasionally occur. This means an image may be flagged as inappropriate even though the user did not prompt for, or intend to trigger, the safeguards. In such cases, the end user is not charged for the blurred image.
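As an illustration of what this means on the client side, the sketch below separates successfully generated artifacts from content-filtered ones, so only the former are counted in local usage tracking and the latter can be set aside for a possible false-positive report. The split_by_finish_reason helper and the artifact/finishReason structure are assumptions for illustration, not part of an official SDK.

```python
# Minimal sketch of client-side bookkeeping that mirrors this policy:
# only successfully generated artifacts are counted toward usage, while
# content-filtered (blurred) results are tracked separately so they can
# be reported as possible false positives. The artifact structure and
# "finishReason" values are assumptions based on the v1 REST API.
from typing import Iterable, List, Tuple

def split_by_finish_reason(artifacts: Iterable[dict]) -> Tuple[List[dict], List[dict]]:
    """Separate billable results from content-filtered ones."""
    billable, filtered = [], []
    for artifact in artifacts:
        if artifact.get("finishReason") == "CONTENT_FILTERED":
            # Not charged; candidate for a false-positive report.
            filtered.append(artifact)
        else:
            billable.append(artifact)
    return billable, filtered

# Example usage with the response from the previous snippet:
# billable, filtered = split_by_finish_reason(response.json()["artifacts"])
# if filtered:
#     print(f"{len(filtered)} result(s) were blurred by the content filter.")
```

Any results set aside this way can be referenced when reporting a suspected false positive, as described in the next section.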
Addressing False Positives:
To address customer inquiries regarding false positives, we encourage users to provide specific details when reporting issues through our Support Portal -> Blurry Image Result/Content Moderation form.
Improving Safeguards:
We analyze the feedback submitted through this form to refine our algorithms, minimize false positives, and continually improve the accuracy of our content filtering system.
If you are interested in employing your own safeguards in lieu of Stability AI's defaults, you can find more information in the following article: Custom Safeguards / Request to Disable Default Safeguards