    ChatGPT Age Verification: Why OpenAI May Ask Adults for ID After Teen Suicides

    OpenAI is building an age-prediction system for ChatGPT. If the system isn’t confident you’re an adult, it will default to a teen-safe version that blocks sexual content and avoids self-harm discussions. In some countries, adults may be asked for ID to unlock adult features. Parental controls are slated for rollout. Regulators are watching closely. Expect changes to arrive in phases and to vary by country.

    Will ChatGPT ask me for ID?
    Possibly. OpenAI says that when its automated system isn’t sure you’re over 18, it will default to the teen experience and, in some countries, ask adults to verify age with an ID. Timing and scope will vary by region.

    What changed and why now

    Two forces converged. First, lawsuits alleging that chatbot interactions contributed to teen suicides, including a California case involving a 16-year-old, put intense pressure on vendors. Second, Senate hearings this week surfaced bipartisan interest in youth protections for AI. OpenAI responded with a pair of posts detailing new teen safety rules and an age-prediction roadmap.

    Regulators joined in. On September 11, the FTC launched a 6(b) inquiry into AI “companion” chatbots and their impact on kids and teens, sending orders to seven firms, including OpenAI. That kind of study can precede enforcement or rulemaking.

    Accuracy note: Some reports implied OpenAI executives testified in person this week. According to the Washington Post, parents testified; OpenAI pointed to its published plans and statements.

    OpenAI’s stated principles: safety, freedom, privacy

    In a signed post, CEO Sam Altman describes three principles:

    1. Protect privacy in AI use.
    2. Treat adults like adults within broad safety bounds.
    3. Prioritize teen safety over privacy and freedom when principles conflict. That’s the justification for stricter controls and potential ID checks.

    The “under 18” ChatGPT experience

    If the system believes a user is under 18, or isn’t confident the user is an adult, ChatGPT moves the account to a stricter mode. OpenAI says this version will:

    • Block graphic sexual content, and avoid flirtatious responses.
    • Avoid discussions about suicide or self-harm, even in creative contexts.
    • In rare emergencies, attempt to contact parents or notify authorities if a teen appears at imminent risk.
    • Support parental controls, including account linking, distress alerts, feature disablement, and blackout hours.

    Will adults be asked for ID?

    Sometimes. OpenAI says that “in some cases or countries we may also ask for an ID” so adults can unlock the full experience. The default when age is uncertain is to play it safe and use the under-18 experience until proof is provided. Expect a phased rollout country by country, likely influenced by local law.

    Privacy angle: OpenAI also says it is developing security features to keep your data private, even from employees, with narrow exceptions for serious misuse, threats to life, plans to harm others, or a potential large-scale cybersecurity incident that may be escalated to human review.

    How age prediction might work

    OpenAI hasn’t published technical details, but it says it’s “building toward a long-term system” to tell whether someone is over or under 18, and it admits the system will sometimes get it wrong. When in doubt, it defaults to teen mode and gives adults a way to prove their age. Likely signals include interaction patterns and device context. False positives are expected, so appeals and manual checks will matter.
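    The stated policy maps to a simple decision rule: default to the stricter experience whenever the system isn’t confident the user is an adult, and let verification override the model. The sketch below is purely hypothetical; the function, signal names, and confidence threshold are illustrative assumptions, not OpenAI’s implementation.

```python
from dataclasses import dataclass

@dataclass
class AgeSignal:
    """Hypothetical inputs an age-prediction system might weigh."""
    predicted_adult_prob: float  # model's confidence the user is 18+
    id_verified: bool            # adult has verified age (e.g., via an ID check)

def experience_mode(signal: AgeSignal, threshold: float = 0.9) -> str:
    """Return 'adult' or 'teen' per the stated policy:
    when in doubt, default to the under-18 experience."""
    if signal.id_verified:
        return "adult"   # explicit verification overrides the model
    if signal.predicted_adult_prob >= threshold:
        return "adult"   # confidently predicted to be an adult
    return "teen"        # uncertain or likely a minor: play it safe

# An uncertain account stays in teen mode until verified.
print(experience_mode(AgeSignal(0.6, False)))  # teen
print(experience_mode(AgeSignal(0.6, True)))   # adult
```

    Note the asymmetry: a high threshold trades more false positives (adults routed to teen mode) for fewer false negatives (minors treated as adults), which matches OpenAI’s stated priority of teen safety over adult convenience.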

    Safety vs privacy: the trade-offs

    Benefits: fewer risky interactions for minors; clearer crisis response; parental tools.
    Costs and risks: misclassification, data sharing during ID flows, and a slippery line between automated monitoring and human review. OpenAI says it prioritizes teen safety and that adult privacy remains a goal, but critical-risk content may be escalated. You should expect regional differences in what’s logged, retained, and audited.

    Comparison Table: Before vs After

    Area | Before | What’s Changing
    Age handling | Self-attested; generic guardrails | Age prediction separates teens from adults; defaults to teen mode if unsure; ID may be requested in some countries.
    Content for teens | General safety filters | Blocks graphic sexual content; no flirtation; avoids suicide/self-harm talk, even in creative contexts.
    Crisis response | Generic resource links | Parent alerts and potential law-enforcement involvement in rare emergencies for teens.
    Parental control | None | Link teen accounts, disable features, set blackout hours, distress notifications.
    Adult privacy | Standard protections | Pledged stronger privacy, with narrow exceptions for serious misuse and threats.

    For parents and schools: quick setup checklist

    • Link your teen’s account in Parental controls once available. Set blackout hours and disable memory/history if desired.
    • Teach teens to flag distress and to come to you, not just the app.
    • Keep a living “AI rules” doc at home or in class: what’s okay, what isn’t, and where to go for help offline.
    • If you’re in a country that starts ID checks, review what data is captured, who stores it, and for how long.

    If you or someone you know may be considering self-harm, seek professional help immediately. In India, call AASRA 24×7: 91-22-27546669. In the US, call or text 988. In the UK & ROI, Samaritans: 116 123. Local numbers vary by country.

    Risks and unknowns to watch

    • False positives: Adults routed to teen mode by mistake.
    • Data handling: What ID data is retained and by whom.
    • Policy drift: “Rare” human review exceptions can expand over time.
    • Regulatory outcomes: The FTC’s 6(b) inquiry could shape product rules across the industry.
    • Terminology drift: Be cautious with “AI psychosis.” It is not a clinical diagnosis.

    Timeline and what to expect next

    OpenAI describes this as a “building toward” program. Expect parental controls by month-end, and a gradual rollout of age prediction and ID flows that vary by country as laws evolve. Keep an eye on official posts for specifics.

    FAQ

    Is ChatGPT banning users under 13?
    ChatGPT is intended for people 13 and up, with stricter policies for teens.

    Will every adult be forced to show ID?
    No. OpenAI says ID may be requested in some countries when the system can’t be sure you’re over 18.

    What happens during a self-harm crisis?
    For teen accounts, OpenAI says it will attempt to reach parents and, if necessary, contact authorities in cases of imminent harm. Adults continue to see supportive resources without method details.

    What is “AI psychosis”?
    It’s a media label for chatbot-linked delusions. Clinicians caution it’s not a formal diagnosis, though some cases are being reported and studied. Treat the term carefully.

    Are regulators doing anything?
    Yes. The FTC opened a study into AI chatbots acting as companions and their effects on kids and teens. Outcomes could include enforcement or new rules.

    Did OpenAI testify in the Senate hearing this week?
    Parents testified. OpenAI pointed to its blog and controls. Lawmakers discussed adding youth protections.


    Source: OpenAI

    Mohammad Kashif
    Senior Technology Analyst and Writer at AdwaitX, specializing in the convergence of Mobile Silicon, Generative AI, and Consumer Hardware. Moving beyond spec sheets, his reviews rigorously test "real-world" metrics analyzing sustained battery efficiency, camera sensor behavior, and long-term software support lifecycles. Kashif’s data-driven approach helps enthusiasts and professionals distinguish between genuine innovation and marketing hype, ensuring they invest in devices that offer lasting value.
