Apple is testing a ChatGPT-style internal app called Veritas to trial features for a fully rebuilt Siri due in early 2026, likely with iOS 26.4 around March 2026. The new Siri will run on large language models, use a planner → search → summarizer pipeline, and may use Google’s Gemini (and potentially Anthropic’s Claude) for some web tasks, while Apple’s own models handle your on-device data for privacy. Apple pushed the date back after reliability issues; the goal is a more conversational Siri that can follow context and complete multi-step actions.
What changed: Apple’s Siri overhaul now targets early 2026
Apple has pushed the revamped Siri to 2026 after initially targeting an earlier release. In March, Apple acknowledged the work would “take longer than we thought,” setting expectations for a 2026 rollout. At WWDC 2025, Craig Federighi and Greg Joswiak explained that the first attempt didn’t meet Apple’s quality bar.
Why the delay happened
According to executives, early versions worked but didn’t reliably hit Apple’s standards. Rather than ship a shaky assistant, the team went back to rebuild. That lines up with broader reporting about reliability problems and a deeper architectural reset.
New internal ‘Veritas’ app: what it’s for
Per fresh reporting, Apple built an internal ChatGPT-like iPhone app, code-named “Veritas,” to rapidly test Siri’s new behavior across long, multi-topic chats and saved threads. It’s not meant for public release; it’s a tool for Apple’s AI team to vet features quickly and gather feedback on the chat format.
How the 2026 Siri is supposed to work
Planner → Search → Summarizer (the 3-part flow)
The new Siri pipelines your request through a planner (interprets your prompt), a search system (queries the web and/or your device), and a summarizer (packages the final answer). This is the model Apple’s testing for more natural, ongoing conversations and multi-step tasks.
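As a rough mental model, the three-stage flow can be sketched in a few lines. Every function name and step here is an illustrative assumption; Apple has not published this design.

```python
# Illustrative sketch of a planner -> search -> summarizer pipeline.
# All names and logic are hypothetical, not Apple's actual implementation.

def plan(prompt: str) -> dict:
    """Planner: interpret the request and produce a structured query."""
    return {"query": prompt.rstrip("?").lower()}

def search(step: dict) -> list[str]:
    """Search: fetch candidate results from the web and/or the device."""
    return [f"result {i} for '{step['query']}'" for i in (1, 2)]

def summarize(results: list[str]) -> str:
    """Summarizer: package the retrieved results into one answer."""
    return f"Found {len(results)} results: " + "; ".join(results)

def assistant(prompt: str) -> str:
    # The three stages chained end to end.
    return summarize(search(plan(prompt)))

print(assistant("Best dinner spots near my hotel?"))
```

The point of the staged design, per the reporting, is that each stage can be swapped or tuned independently: the planner and summarizer could be different models than the search layer.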
Where third-party models (Gemini/Claude) may slot in
Apple has a formal test agreement with Google to try a Gemini variant for summaries, and it has also discussed Anthropic’s Claude for planning. Apple appears pragmatic here: use external models for the web-facing pieces while keeping personal-data logic under Apple control.
What Apple’s own models will handle on-device
Reporting indicates Apple will rely on its Foundation Models for searching personal user data and executing app intents, a continuation of Apple Intelligence’s privacy stance. That keeps sensitive context processing tied to your device or Apple-controlled infrastructure.
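One way to picture that split is a routing layer that keeps personal-context requests with Apple’s own models and sends web-facing work to a partner model. The keywords, labels, and rule below are invented for illustration; Apple’s real routing logic is not public.

```python
# Hypothetical routing sketch: requests that touch personal context stay with
# an on-device model; web-facing tasks may go to a partner model.
# The signal list and labels are illustrative assumptions only.

PERSONAL_SIGNALS = {"calendar", "mail", "photos", "reminders", "messages"}

def route(request: str) -> str:
    """Decide which class of model would handle the request."""
    words = set(request.lower().split())
    if words & PERSONAL_SIGNALS:
        return "on-device model"   # personal context stays under Apple's control
    return "partner web model"     # e.g., a Gemini-style summarizer for web content

print(route("Summarize my mail from yesterday"))
print(route("Summarize today's tech headlines"))
```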
Launch window and versions
Expected timing (iOS 26.4, March 2026 window)
Outlets tracking the timeline say Apple is on track for early 2026, with a likely iOS 26.4 release around March 2026. Expect a staggered rollout rather than a single global flip.
Regions, languages, and a likely phased rollout
Historically, Apple rolls out major language models in English first, then expands. Given the complexity (app actions, summarization, and screen context), expect staged support by language, region, and app category.
What this means for you (practical examples)
- Trip planning: “Book a 7 pm table near my hotel and add it to my calendar” → Siri infers your hotel from mail, finds options, books via app, and sends confirmations.
- Work follow-through: “Summarize yesterday’s Threads, draft a reply, and schedule it for 9 am” → Siri pulls context, drafts, and schedules.
- Personal admin: “Find the PDF lease my landlord sent and remind me a week before it renews.”
The assistant may lean on Gemini (or similar) for web summaries, but should use Apple’s own stack for on-device context.
Privacy implications and limits
Apple’s approach keeps personal context processing with Apple’s models and infrastructure. That doesn’t mean third-party models won’t be used, but the line appears to be: web content via partners; your data via Apple’s models. Expect granular consent prompts and audit controls at launch.
New Siri vs Today’s Siri vs ChatGPT/Gemini (quick compare)
| Feature | Today’s Siri | New Siri (target 2026) | ChatGPT/Gemini apps |
|---|---|---|---|
| Conversational memory | Limited | Ongoing, multi-topic threads (tested in Veritas) | Strong |
| Multi-step app actions | Basic | Planner-driven, cross-app | Strong (via plugins/tools) |
| Web summaries | Limited | Likely via Gemini/Claude | Core strength |
| Personal data actions | Limited | Apple FMs + app intents | Requires permissions/integrations |
| Rollout | Live | Early 2026 (iOS 26.4) | Live |
Realistic expectations: what may slip (or change)
Even Apple says some features are taking longer than planned. Expect phased capabilities, language limits at launch, and ongoing reliability work. Apple leadership has been clear: shipping late beats shipping broken.
Comparison Table / Pros & Cons
New Siri (target 2026) – Pros: conversational memory, deeper app actions, smarter summaries. Cons: phased rollout, possible regional delays, reliance on partner models for some web tasks.
Frequently Asked Questions
What is the Apple Siri overhaul 2026?
A rebuilt, LLM-powered Siri with planner → search → summarizer flow for natural conversations and multi-step actions.
What is Veritas?
An internal ChatGPT-like app to test Siri features; not for public release.
What’s the release window?
Early 2026, likely iOS 26.4 in March.
Which AI partners are involved?
Apple is testing Google’s Gemini and has discussed Anthropic’s Claude; Apple’s own models handle personal data search.
Why did Apple delay Siri upgrades?
To meet quality and reliability thresholds after early prototypes fell short.
Will every feature launch everywhere?
Unlikely. Expect phased language/region support and staged capabilities.
Is this a standalone chatbot?
No. Apple has said “building a chatbot” isn’t the goal; features will be integrated into your experience.
Will privacy change?
Apple aims to keep personal context with Apple’s models and use partners for web content tasks.

