
    OpenAI Sora Feed: The Algorithm That Ranks Creativity Over Passive Scrolling


    Key Takeaways

    • Sora Feed prioritizes active creation over passive consumption with creativity-focused ranking algorithms
    • Parents control feed personalization and continuous scroll for teens through ChatGPT parental controls
    • Algorithm considers ChatGPT history, engagement signals, and safety filters to personalize recommendations
    • OpenAI’s multi-layer safety system filters harmful content through generation-stage blocking and human review

    OpenAI published its Sora Feed philosophy on February 3, 2026, detailing how the platform’s ranking algorithm differs from traditional social media. The system optimizes for creativity and connection rather than time spent scrolling, positioning Sora as an AI-native content creation platform with user control mechanisms. This approach marks a departure from engagement-maximizing algorithms used by conventional video platforms.

    How Sora Feed’s Ranking Algorithm Actually Works

    The Sora recommendation system analyzes multiple signal categories to personalize video feeds without replicating engagement-bait patterns. OpenAI uses large language models to power “steerable ranking,” a natural-language interface that lets users instruct the algorithm on their content preferences in real time.

    The algorithm evaluates these signal types:

    • User activity on Sora: Posts you’ve created, accounts you follow, content you’ve liked, comments you’ve made, videos you’ve remixed, and general location data derived from your IP address
    • ChatGPT integration: Your conversation history with ChatGPT influences recommendations unless you disable this connection through Sora’s Data Controls in Settings
    • Content engagement signals: Views, likes, comments, instructions to “see less content like this,” and remix frequency
    • Author credibility metrics: Follower counts, past post performance, and historical engagement patterns of content creators
    • Safety classification: Whether content violates OpenAI’s usage policies or requires age-restriction

    The system defaults to showing content from accounts you follow and your connections, prioritizing videos predicted to inspire your own creations rather than maximizing watch time. OpenAI states this creates a more “communal” experience focused on active participation.
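
    To make the weighting concrete, here is a minimal Python sketch of how a signal-weighted ranker of this kind could combine the categories listed above. OpenAI has not published Sora’s actual scoring model, so every field name, weight, and threshold below is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class VideoSignals:
    from_followed_account: bool   # posted by an account the viewer follows
    predicted_inspiration: float  # 0-1: likelihood the viewer remixes or creates in response
    engagement_rate: float        # 0-1: normalized views, likes, comments, remixes
    author_credibility: float     # 0-1: follower count and past post performance
    violates_policy: bool         # output of the safety classification stage
    age_restricted: bool          # content that requires age restriction

def rank_score(video: VideoSignals, viewer_is_teen: bool) -> float:
    """Higher scores surface earlier in the feed; unsafe content is excluded outright."""
    if video.violates_policy or (viewer_is_teen and video.age_restricted):
        return float("-inf")
    score = 0.0
    if video.from_followed_account:
        score += 2.0                                 # connections weighted above viral reach
    score += 3.0 * video.predicted_inspiration       # creation potential over watch time
    score += 1.0 * video.engagement_rate
    score += 0.5 * video.author_credibility
    return score
```

    The design choice the article describes shows up in the weights: the predicted-inspiration signal and the followed-account bonus dominate raw engagement, and safety classification acts as a hard filter rather than a score penalty.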

    What makes Sora’s ranking different from traditional feeds?

    Sora explicitly avoids optimizing for time spent in the feed. The algorithm weights content from accounts you follow higher than viral global content, favoring videos that might inspire you to create rather than passively consume. This represents OpenAI’s attempt to build a recommendation system that “supports active, creative participation rather than passive scrolling.”

    Parental Controls Integrated With ChatGPT Accounts

    Parents manage teen Sora access through ChatGPT’s parental controls, which became available when Sora 2 launched on September 30, 2025. The controls include three primary toggles:

    1. Personalized feed: When disabled, Sora shows non-personalized content without using the teen’s activity history or ChatGPT conversations
    2. Continuous feed: Turning this off limits uninterrupted scrolling; teen accounts have these limits applied by default
    3. Messaging: Parents control whether teens can send or receive direct messages within the platform

    OpenAI clarifies these controls do not provide parents access to teen conversations with ChatGPT and do not enable real-time monitoring of Sora activity. The system sends safety alerts in certain situations, though specific trigger criteria are not publicly disclosed.
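
    As a hypothetical sketch, the three toggles above could map onto feed behavior roughly as follows for a teen account. The field and key names are illustrative and are not OpenAI’s API.

```python
from dataclasses import dataclass

@dataclass
class TeenSoraControls:
    personalized_feed: bool  # False -> non-personalized content; no activity or ChatGPT history used
    continuous_feed: bool    # False -> uninterrupted scrolling is limited (the teen default)
    messaging: bool          # False -> the teen cannot send or receive direct messages

def feed_inputs(controls: TeenSoraControls) -> dict:
    """Summarize which inputs and behaviors the toggles allow for a teen account."""
    return {
        "uses_teen_activity_history": controls.personalized_feed,
        "uses_chatgpt_history": controls.personalized_feed,
        "uninterrupted_scrolling_allowed": controls.continuous_feed,
        "direct_messaging_enabled": controls.messaging,
    }
```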

    Teen accounts receive additional safeguards by default:

    • Limitations on mature content output
    • Teen profiles not recommended to adult users
    • Adults cannot initiate messages with teens
    • The feed is designed to be appropriate for teen audiences

    Multi-Layer Safety Architecture Blocks Harmful Content

    OpenAI implements content filtering across multiple stages to prevent policy violations from reaching the Sora feed. The safety architecture combines automated systems with human oversight.

    Content Moderation Approach

    OpenAI blocks content at the generation stage and monitors published videos through automated scanning tools. Human review teams complement automation by monitoring user reports and proactively auditing feed activity.
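
    The description above suggests a staged pipeline: block at generation, scan published videos automatically, and route flags, user reports, and audits to human reviewers. The sketch below is an assumed illustration of that flow, not OpenAI’s implementation; the filter and scanner callables are stand-ins for the real classifiers.

```python
from typing import Callable, List

def moderate(video: dict,
             generation_filter: Callable[[dict], bool],
             published_scanner: Callable[[dict], bool],
             human_review_queue: List[dict]) -> str:
    """Return a coarse moderation outcome for one video."""
    # Stage 1: block policy-violating generations before anything is published.
    if not generation_filter(video):
        return "blocked_at_generation"
    # Stage 2: automated scanning of the published video.
    if not published_scanner(video):
        # Stage 3: flagged items (like user reports and proactive audits) go to human reviewers.
        human_review_queue.append(video)
        return "flagged_for_review"
    return "published"
```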

    Prohibited Content Categories

    Content flagged for removal from Sora feeds includes:

    • Graphic sexual or violent material
    • Extremist propaganda and hateful content
    • Self-harm promotion, disordered eating depictions, and unhealthy exercise behaviors
    • Appearance-based critiques and bullying
    • Dangerous challenges likely to be imitated by minors
    • Engagement bait and low-quality spam content
    • Unauthorized likeness recreation of living individuals or deceased public figures
    • Intellectual property infringements
    • Misinformation and deceptive content
    • Child safety violations
    • Harassment and abusive behavior
    • Manipulated media without disclosure
    • Content that violates privacy

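    As a rough illustration, the thirteen removal categories above can be treated as a fixed label set that a moderation classifier checks against. The label names and the classifier producing the predicted labels are assumptions for the sketch.

```python
# Illustrative label set mirroring the article's removal categories (hypothetical names).
REMOVAL_CATEGORIES = {
    "graphic_sexual_or_violent", "extremist_or_hateful", "self_harm_promotion",
    "appearance_bullying", "dangerous_challenges", "engagement_bait_spam",
    "unauthorized_likeness", "ip_infringement", "misinformation",
    "child_safety", "harassment", "undisclosed_manipulated_media",
    "privacy_violation",
}

def should_remove(predicted_labels: set) -> bool:
    """Remove a video from the feed if any predicted label is a prohibited category."""
    return bool(predicted_labels & REMOVAL_CATEGORIES)
```
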
    OpenAI acknowledges safety systems remain “living, evolving” and will require continuous refinement based on user feedback. The company states it will “adjust the details” as real-world usage patterns emerge and new risks surface.

    Does Sora remove all potentially harmful content?

    OpenAI adopts what it calls a “balanced” approach. The company prioritizes proactive guardrails for high-risk categories while allowing content that doesn’t clearly violate policies. OpenAI acknowledges it “won’t get this balance perfect from day one” and expects to iterate based on community feedback and emerging patterns.

    Steerable Ranking Gives Users Direct Algorithm Control

    Sora includes natural language ranking controls that let users directly instruct how their feed operates. You can tell the system things like “show me more cinematic videos” or “prioritize content from artists,” leveraging OpenAI’s language models to interpret intent and re-weight recommendation signals.

    This “steerable ranking” system represents OpenAI’s attempt to give users more agency over their algorithmic experience. Rather than relying solely on implicit signals like watch time and engagement, the system accepts explicit natural language instructions about content preferences.
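
    Below is a minimal sketch of how a steering instruction could be translated into ranking-weight adjustments, assuming an LLM call that returns a small JSON object. OpenAI has not documented the actual mechanism, so the schema, the weight keys, and the `llm_complete` stub are hypothetical.

```python
import json
from typing import Callable

def parse_steering_instruction(instruction: str, llm_complete: Callable[[str], str]) -> dict:
    """Ask an LLM to translate a free-form instruction into weight adjustments."""
    prompt = (
        "Convert this feed preference into a JSON object of weight adjustments "
        "with keys such as 'cinematic', 'artists', 'followed_accounts' and values in [-1, 1]:\n"
        f"{instruction}"
    )
    return json.loads(llm_complete(prompt))

def apply_steering(base_weights: dict, deltas: dict) -> dict:
    """Blend the requested adjustments into the existing ranking weights."""
    return {key: base_weights.get(key, 0.0) + deltas.get(key, 0.0)
            for key in set(base_weights) | set(deltas)}

# Example usage (my_llm is any text-completion function):
# deltas = parse_steering_instruction("show me more cinematic videos", my_llm)
# weights = apply_steering({"cinematic": 0.2, "followed_accounts": 0.5}, deltas)
```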

    OpenAI positions this design as maximizing creation rather than consumption, countering concerns that algorithmic feeds inherently promote passive scrolling behaviors. The effectiveness of this approach at scale remains to be demonstrated as the platform expands beyond its initial user base.

    Design Philosophy: Connection Over Consumption

    OpenAI’s stated goal is creating a recommendation system that “supports active, creative participation rather than passive scrolling.” The company emphasizes three core principles:

    1. Prioritize creativity: Surface content likely to inspire users to create their own videos
    2. Foster connection: Weight content from followed accounts and connections higher than viral global content
    3. Empower user control: Provide steerable ranking so users can directly shape their experience

    The feed also aims to be “appropriate and valuable for all types of users, from professionals to people looking for creative inspiration.” OpenAI acknowledges balancing safety, creativity, and community values remains an ongoing challenge.

    Limitations Creators Should Consider

    OpenAI openly states the Sora Feed balances safety with creative freedom but “won’t get this balance perfect from day one”. The company expects to “adjust the details over time” based on how the community uses the platform.

    The algorithmic bias toward followed accounts and connected content may limit discoverability for new creators lacking established audiences. OpenAI has not disclosed whether the feed incorporates viral “For You” style recommendations for users with minimal activity history.

    Parental controls offer limited monitoring capabilities: parents cannot view teen activity or restrict specific content types beyond the general mature-content limitations. Parents also cannot prevent teens from creating content or control whose likeness appears in videos.

    Frequently Asked Questions (FAQs)

    Can parents see their teen’s Sora videos?

    No. Parental controls in ChatGPT do not provide access to teen conversations or Sora activity. Parents can only toggle feature availability (personalized feed, continuous scroll, messaging) and receive safety alerts in certain undisclosed situations.

    What data does Sora Feed use for recommendations?

    Sora analyzes user activity (posts, likes, comments, remixes), ChatGPT conversation history, engagement signals, author credibility metrics, location data from IP addresses, and safety classifications. Users can disable ChatGPT history integration through Sora’s Data Controls in Settings.

    How does steerable ranking work in Sora?

    Steerable ranking lets you use natural language to instruct the algorithm on your content preferences. You can tell Sora to show more of certain content types or prioritize specific creators, and the system uses large language models to interpret your instructions and adjust recommendations.

    When did Sora 2 launch publicly?

    Sora 2 launched on September 30, 2025, initially available in the United States and Canada through the iOS app and web interface. The original Sora model was first announced in February 2024.

    What safety protections exist for teen accounts?

    Teen accounts have limitations on mature content output by default. Teen profiles are not recommended to adult users, adults cannot initiate messages with teens, and the feed is designed to be appropriate for teen audiences. Parents can also disable feed personalization and continuous scrolling.

    Does Sora Feed show ads or sponsored content?

    OpenAI has not disclosed monetization plans for Sora Feed as of February 2026. The philosophy document does not mention advertising or sponsored video placements.

    Can I turn off algorithmic ranking completely?

    Parents can disable feed personalization for teen accounts, resulting in non-personalized content recommendations. The philosophy document does not specify whether adult users can completely disable algorithmic ranking, though steerable ranking allows natural language instructions to modify feed behavior.

    What content categories does OpenAI remove from Sora?

    OpenAI removes 13 categories of content including graphic sexual or violent material, extremist propaganda, self-harm promotion, bullying, dangerous challenges, engagement bait, unauthorized likeness recreation, intellectual property violations, misinformation, child safety violations, harassment, manipulated media without disclosure, and privacy violations.

    Mohammad Kashif
    Senior Technology Analyst and Writer at AdwaitX, specializing in the convergence of Mobile Silicon, Generative AI, and Consumer Hardware. Moving beyond spec sheets, his reviews rigorously test "real-world" metrics analyzing sustained battery efficiency, camera sensor behavior, and long-term software support lifecycles. Kashif’s data-driven approach helps enthusiasts and professionals distinguish between genuine innovation and marketing hype, ensuring they invest in devices that offer lasting value.
