Key Takeaways
- Every Sora 2 video carries both visible watermarks and invisible C2PA metadata traceable to OpenAI’s systems
- Users must attest to consent before uploading real people’s photos for image-to-video generation
- Teen accounts have filtered feeds, restricted adult contact, and default limits on continuous scrolling in Sora
- Audio safeguards actively block attempts to generate music imitating living artists or existing works
OpenAI published updated safety documentation for Sora 2 on March 23, 2026, and the scope of protections is more layered than most users realize. The app combines content provenance standards, consent-based likeness controls, and teen-specific guardrails into a single framework. This breakdown covers every protection currently active, based entirely on OpenAI’s official documentation.
What C2PA Watermarking Actually Does in Sora 2
Every video generated through Sora carries both a visible, dynamically moving watermark that includes the creator's name and an invisible C2PA metadata signature embedded at the moment of creation. C2PA (the Coalition for Content Provenance and Authenticity) is an industry standard for cryptographically signed provenance metadata, and OpenAI builds on it with internal reverse-image and audio search tools that can trace any video back to Sora with high accuracy.
These internal tracing systems build on successful detection infrastructure developed for ChatGPT image generation and the original Sora 1 model. The combination of visible and invisible provenance signals means every Sora 2 output carries two independent layers of origin identification.
OpenAI acknowledges that perfect protections are difficult, particularly in the audio space, and the company continues to invest in improving these systems. Creators and viewers in both the US and India should treat C2PA as a strong first-line provenance tool rather than an absolute guarantee across all downstream platforms.
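The full C2PA specification is complex, but the core idea behind signed provenance metadata, a cryptographic signature bound to the exact media bytes so that any edit is detectable, can be sketched in a few lines. The sketch below is a toy illustration only: real C2PA manifests use certificate-based signatures rather than a bare HMAC, and the key, function names, and byte strings here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key; production provenance systems use
# certificate-backed asymmetric keys, not a shared secret.
SECRET_KEY = b"provenance-signing-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce a signature bound to the exact media bytes."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str) -> bool:
    """Verification fails if even one byte of the media was altered."""
    expected = sign_media(media_bytes)
    return hmac.compare_digest(expected, signature)

video = b"fake-video-bytes"  # stand-in for real video data
sig = sign_media(video)
print(verify_media(video, sig))         # untouched media verifies
print(verify_media(video + b"x", sig))  # any edit breaks the signature
```

The tamper-evidence principle is the same one C2PA relies on: the signature travels with the file, and downstream tools can check whether the content still matches what was originally signed.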
How Consent Works for Image-to-Video With Real People
Sora 2 allows users to upload photos of family or friends to generate videos featuring those individuals. Before any such generation, users must actively attest that they have consent from anyone pictured and hold the rights to upload that media.
Image-to-video generations involving real people are subject to particularly strict safety guardrails, stricter even than those applied to Sora Characters (formerly known as the cameo feature). Images featuring children or young-looking individuals trigger tighter moderation still, with additional limits on what can be created from them. All videos created from real-person photo uploads carry mandatory watermarks when shared.
How the Characters Feature Gives You Likeness Control
The Characters system gives creators full control over their own likeness, including both appearance and voice. OpenAI applies guardrails to ensure that the audio and visual likeness captured in a character is used only with the creator's consent.
Only the character owner decides who can use their character, and access can be revoked at any time. Every video that includes your character, including drafts created by other users, is always visible to you, allowing you to review, delete, and if needed report any video featuring your likeness.
OpenAI also applies extra safety guardrails to any video that includes a character. An optional stricter guardrail setting further limits major changes to your appearance, prevents placement in embarrassing situations, and keeps your identity broadly consistent across all videos. OpenAI separately takes measures to block depictions of public figures, except for those using the Characters feature themselves.
Teen Safety Controls: What Parents Can and Cannot Do
Sora 2 applies stronger protections specifically for younger users. The feed is designed to be appropriate for all Sora users, and content that may be harmful, unsafe, or age-inappropriate is filtered out for teen accounts.
Teen profiles are not recommended to adults, and adults cannot initiate direct messages with teen accounts. Parents managing a teen’s account through ChatGPT parental controls can toggle whether teens can send and receive direct messages and can select a non-personalized feed in the Sora app. By default, teens also have limits on how much they can continuously scroll in Sora.
How Content Filtering Blocks Harmful Video Before Creation
Sora 2 uses layered defenses to keep the feed safe while leaving room for creativity. At the creation stage, guardrails seek to block unsafe content before it is generated by checking both prompts and outputs across multiple video frames and audio transcripts.
Blocked categories include sexual material, terrorist propaganda, and self-harm promotion. OpenAI has red-teamed Sora 2 to explore novel risks specific to video generation and has tightened policies relative to image generation given Sora’s greater realism and the addition of motion and audio.
Beyond generation, automated systems continuously scan all feed content against OpenAI's Global Usage Policies and filter out unsafe or age-inappropriate material. These systems are updated as new risks emerge and are complemented by human review focused on the highest-impact harms.
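The two-stage layering described above, a prompt check before generation and an output check over frames and transcripts afterward, can be sketched as a toy filter. The blocked-term list and the text-only frame labels below are hypothetical stand-ins; OpenAI's actual classifiers are machine-learning models operating on real video and audio, not keyword matching.

```python
# Toy two-stage moderation pipeline. The blocked-term set and the idea of
# representing frames as text labels are illustrative stand-ins only.
BLOCKED_TERMS = {"blocked-topic"}  # hypothetical placeholder list

def check_prompt(prompt: str) -> bool:
    """Stage 1: refuse unsafe requests before any generation happens."""
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)

def check_output(frame_labels: list[str], transcript: str) -> bool:
    """Stage 2: re-check sampled frame labels and the audio transcript."""
    return all(
        not any(term in text.lower() for term in BLOCKED_TERMS)
        for text in frame_labels + [transcript]
    )

def safe_to_publish(prompt: str, frame_labels: list[str], transcript: str) -> bool:
    """Both stages must pass before a video reaches the feed."""
    return check_prompt(prompt) and check_output(frame_labels, transcript)
```

The design point this illustrates is defense in depth: a prompt that slips past the first check can still be caught when the generated frames and transcript are re-screened before publication.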
Audio Safeguards and Music Protection
Adding audio to Sora raises the bar for safety, and OpenAI has built specific protections into this area. Sora automatically scans transcripts of generated speech for potential policy violations and blocks attempts to generate music that imitates living artists or existing works.
OpenAI’s systems are designed to detect and stop such prompts, and the company honors takedown requests from creators who believe a Sora output infringes on their work. OpenAI explicitly acknowledges that perfect protections are difficult in the audio space and states it continues to invest seriously in improving these capabilities.
User Control and Reporting Tools
Creators retain full control over what they publish in Sora. Videos are only shared to the feed when the creator chooses to do so, and published content can be removed at any time.
Every video, profile, direct message, comment, and character can be reported for abuse, with clear recourse when policies are violated. Blocking another account prevents that user from viewing your profile and posts, using your character, and contacting you via direct message.
Considerations and Limitations
Sora 2’s safety framework is comprehensive within its own environment. OpenAI itself acknowledges that perfect protections are difficult, particularly for audio. Parental controls offer parents management of DM permissions and feed personalization but do not provide granular visibility into specific content a teen is viewing or generating. Users in both the US and India should treat these controls as strong baseline safeguards and review OpenAI’s Global Usage Policies for the full scope of what is and is not permitted.
Frequently Asked Questions (FAQs)
Does Sora 2 watermark every video it generates?
Yes. Every Sora 2 video carries both a visible, dynamically moving watermark that includes the creator’s name and an invisible C2PA metadata signature embedded at creation. OpenAI also maintains internal reverse-image and audio search tools that can trace any video back to Sora with high accuracy.
Can someone use my face in a Sora video without my permission?
Sora 2’s Characters feature requires your consent before anyone can use your likeness or voice. You decide who can access your character, can revoke that access at any time, and can view every draft video featuring your likeness, including those created by other users.
How does Sora 2 protect children and teens?
Teen accounts receive filtered feeds, restricted adult contact, and default continuous scroll limits. Parents can manage DM permissions and switch the feed to non-personalized mode through ChatGPT parental controls. Images featuring children and young-looking individuals also trigger stricter moderation at the point of generation.
What is the difference between image-to-video and the Characters feature?
Image-to-video allows users to upload photos of family and friends after attesting to consent and media rights. It is subject to particularly strict guardrails, even stricter than the Characters feature. Characters is a consent-based system where you register your own likeness and voice and control exactly who can use it.
Can Sora 2 generate music that copies real artists?
No. Sora 2 actively blocks attempts to generate music imitating living artists or existing works. Audio transcripts are scanned automatically for policy violations. OpenAI also processes takedown requests from rights holders who believe a Sora output infringes on their work, though OpenAI acknowledges perfect protections in this area remain difficult.
What content does Sora 2 block outright?
Sora 2 blocks sexual material, terrorist propaganda, and self-harm promotion. These checks run on both the input prompt and the generated output across multiple video frames and audio transcripts before anything reaches the feed. Automated systems continuously scan all published feed content against OpenAI’s Global Usage Policies.
Can I remove a Sora video I already published?
Yes. You can remove any published video at any time. Videos are only shared to the Sora feed when you actively choose to do so, giving you full control over what reaches your audience.
Can I block other users in Sora?
Yes. Blocking an account prevents that user from viewing your profile or posts, using your character, and contacting you via direct message. Every video, profile, direct message, comment, and character also carries a reporting option for policy violations.