
    Apple Siri overhaul in 2026: What ‘Veritas’ is testing and what you’ll get


    Apple is testing a ChatGPT-style internal app called Veritas to trial features for a fully rebuilt Siri due in early 2026, likely with iOS 26.4 around March 2026. The new Siri will run on large language models, use a planner → search → summarizer pipeline, and may use Google’s Gemini (and potentially Anthropic’s Claude) for some web tasks, while Apple’s own models handle your on-device data for privacy. Apple pushed the date after reliability issues; the goal is a more conversational Siri that can follow context and complete multi-step actions.

    What changed: Apple’s Siri overhaul now targets early 2026

    Apple has shifted the revamped Siri to 2026, after initially aiming sooner. In March, Apple acknowledged it would “take longer than we thought,” setting expectations for a 2026 rollout. At WWDC 2025, Craig Federighi and Greg Joswiak explained the first attempt didn’t meet quality bars.

    Why the delay happened

    According to executives, early versions worked but didn’t reliably hit Apple’s standards. Rather than ship a shaky assistant, the team went back to rebuild. That lines up with broader reporting about reliability problems and a deeper architectural reset.

    New internal ‘Veritas’ app: what it’s for

    Per fresh reporting, Apple built an internal ChatGPT-like iPhone app, code-named “Veritas,” to rapidly test Siri’s new behavior across long, multi-topic chats and saved threads. It’s not meant for public release; it’s a tool for Apple’s AI team to vet features quickly and gather feedback on the chat format.

    How the 2026 Siri is supposed to work

    Planner → Search → Summarizer (the 3-part flow)

    The new Siri pipelines your request through a planner (interprets your prompt), a search system (queries the web and/or your device), and a summarizer (packages the final answer). This is the model Apple’s testing for more natural, ongoing conversations and multi-step tasks.
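To make the reported three-part flow concrete, here is a minimal sketch of how a planner → search → summarizer pipeline hangs together. Every name and the routing heuristic here are illustrative assumptions, not Apple's actual implementation.

```python
# Hypothetical sketch of a planner -> search -> summarizer pipeline.
# All function names and the "my"-keyword heuristic are illustrative,
# not Apple APIs.

def planner(prompt: str) -> dict:
    """Interpret the request and decide which sources to consult."""
    plan = {"query": prompt, "sources": []}
    if "my" in prompt.lower():
        plan["sources"].append("device")  # personal context stays local
    plan["sources"].append("web")
    return plan

def search(plan: dict) -> list[str]:
    """Query each chosen source (stubbed out here)."""
    return [f"[{source}] results for: {plan['query']}"
            for source in plan["sources"]]

def summarizer(results: list[str]) -> str:
    """Package the retrieved results into a single answer."""
    return " | ".join(results)

def assistant(prompt: str) -> str:
    # Each stage feeds the next, matching the reported three-part flow.
    return summarizer(search(planner(prompt)))

print(assistant("Book a table near my hotel"))
```

A request that mentions personal context ("my hotel") would pull in both device and web sources; a generic query would hit the web path only.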

    Where third-party models (Gemini/Claude) may slot in

    Apple has a formal test agreement with Google to try a Gemini variant for summaries, and it has also discussed Anthropic’s Claude for planning. Apple appears pragmatic here: use external models for the web-facing pieces while keeping personal-data logic under Apple control.

    What Apple’s own models will handle on-device

    Reporting indicates Apple will rely on its Foundation Models for searching personal user data and executing app intents, a continuation of Apple Intelligence’s privacy stance. That keeps sensitive context processing tied to your device or Apple-controlled infrastructure.

    Launch window and versions

    Expected timing (iOS 26.4, March 2026 window)

    Outlets tracking the timeline say Apple is on track for early 2026, with a likely iOS 26.4 release around March 2026. Expect a staggered rollout rather than a single global flip.

    Regions, languages, and a likely phased rollout

    Historically, Apple rolls out major language models in English first, then expands. Given the complexity (app actions, summarization, and screen context), expect staged support by language, region, and app category.

    What this means for you (practical examples)

    • Trip planning: “Book a 7 pm table near my hotel and add it to my calendar” → Siri infers your hotel from mail, finds options, books via app, and sends confirmations.
    • Work follow-through: “Summarize yesterday’s Threads, draft a reply, and schedule it for 9 am” → Siri pulls context, drafts, and schedules.
    • Personal admin: “Find the PDF lease my landlord sent and remind me a week before it renews.”
      The assistant may lean on Gemini (or similar) for web summaries, but should use Apple’s own stack for on-device context.

    Privacy implications and limits

    Apple’s approach keeps personal context processing with Apple’s models and infrastructure. That doesn’t mean third-party models won’t be used, but the line appears to be: web content via partners; your data via Apple’s models. Expect granular consent prompts and audit controls at launch.
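The reported dividing line can be sketched as a simple router: requests touching personal context go to Apple's own models, everything else to a partner model. The keyword list, model names, and routing logic below are assumptions for illustration only.

```python
# Illustrative-only sketch of the reported data-routing rule:
# personal context -> Apple's own models; web content -> partner models.
# Keywords, model names, and logic are assumptions, not Apple's design.

PERSONAL_KEYWORDS = {"calendar", "mail", "photos", "messages", "lease", "hotel"}

def route(request: str) -> str:
    words = set(request.lower().split())
    if words & PERSONAL_KEYWORDS:
        return "apple-foundation-model"  # on-device / Apple-controlled
    return "partner-web-model"           # e.g. a Gemini variant for web summaries

print(route("Summarize today's tech news"))  # partner-web-model
print(route("Find the lease in my mail"))    # apple-foundation-model
```

A real system would classify intent with a model rather than keywords, but the privacy boundary (who sees what data) would sit at exactly this kind of fork.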

    New Siri vs Today’s Siri vs ChatGPT/Gemini (quick compare)

    Feature | Today’s Siri | New Siri (target 2026) | ChatGPT/Gemini apps
    Conversational memory | Limited | Ongoing, multi-topic threads (tested in Veritas) | Strong
    Multi-step app actions | Basic | Planner-driven, cross-app | Strong (via plugins/tools)
    Web summaries | Limited | Likely via Gemini/Claude | Core strength
    Personal data actions | Limited | Apple FMs + app intents | Requires permissions/integrations
    Rollout | Live | Early 2026 (iOS 26.4) | Live

    Realistic expectations: what may slip (or change)

    Even Apple says some features are taking longer than planned. Expect phased capabilities, language limits at launch, and ongoing reliability work. Apple leadership has been clear: shipping late beats shipping broken.

    Comparison Table / Pros & Cons

    New Siri (target 2026) – Pros: conversational memory, deeper app actions, smarter summaries. Cons: phased rollout, possible regional delays, reliance on partner models for some web tasks.

    Frequently Asked Questions

    What is the Apple Siri overhaul 2026?
    A rebuilt, LLM-powered Siri with planner → search → summarizer flow for natural conversations and multi-step actions.

    What is Veritas?
    An internal ChatGPT-like app to test Siri features; not for public release.

    What’s the release window?
    Early 2026, likely iOS 26.4 in March.

    Which AI partners are involved?
    Apple is testing Google’s Gemini and has discussed Anthropic’s Claude; Apple’s own models handle personal data search.

    Why did Apple delay Siri upgrades?
    To meet quality and reliability thresholds after early prototypes fell short.

    Will every feature launch everywhere?
    Unlikely. Expect phased language/region support and staged capabilities.

    Is this a standalone chatbot?
    No. Apple has said “building a chatbot” isn’t the goal; features will be integrated into your experience.

    Will privacy change?
    Apple aims to keep personal context with Apple’s models and use partners for web content tasks.

    Featured Answer Boxes

    When is Apple’s new Siri coming?

    Reporting points to early 2026, likely with iOS 26.4 around March 2026, with a phased rollout by region/language.

    What is Apple’s ‘Veritas’ app?

    Veritas is an internal, ChatGPT-like iPhone app Apple uses to test the rebuilt Siri’s conversational features. It isn’t planned for public release.

    Will Siri use Google Gemini?

    Apple is testing a Google-designed Gemini model for summaries and has discussed Anthropic’s Claude for planning, while Apple’s models handle your personal data.

    Why was Siri delayed?

    Apple says the first approach didn’t meet its reliability standards; it chose to overhaul and ship later rather than release something unstable.

    Mohammad Kashif
    Senior Technology Analyst and Writer at AdwaitX, specializing in the convergence of Mobile Silicon, Generative AI, and Consumer Hardware. Moving beyond spec sheets, his reviews rigorously test "real-world" metrics analyzing sustained battery efficiency, camera sensor behavior, and long-term software support lifecycles. Kashif’s data-driven approach helps enthusiasts and professionals distinguish between genuine innovation and marketing hype, ensuring they invest in devices that offer lasting value.
