How OpenAI and Anthropic Just Triggered the Next Big Shift in Software

The last week has felt like a turning point in the AI landscape. Not because of a single product launch, but because two of the biggest players in the industry effectively declared that the next frontier isn’t chat, search, or reasoning. It’s agentic coding. And it’s the potential to reshape the SaaS landscape very quickly (especially for SMBs) that has caused the industry to wobble a bit.

“Agentic coding” (or “AI Codex”) is the term that has been buzzing across tech news and social media.

That said, the term has been around for a while, but last week it became shorthand for a new class of models designed not just to write code, but to understand systems, navigate repositories, fix issues, and build software with a level of autonomy that edges closer to genuine engineering assistance.

The second narrative – and the “concern” that has been rising alongside it – is the idea that AI isn’t just transforming software development; it’s going to (and is already beginning to) challenge the software industry itself.

Analysts and commentators have been exploring whether AI agents could eventually replace entire categories of SaaS tools, reshape how companies buy software, or even enable organisations to build their own alternatives on demand.

These two threads – the rise of agentic coding and the pressure on traditional software models – collided last week, and that’s why the conversation has become so loud.

What is an AI Codex?

AI Codex refers to a specialised family of models optimised for programming tasks. Where a general model like ChatGPT is trained to converse, explain, and reason across domains, Codex models are tuned to:

  • read and interpret codebases
  • generate new code with high precision 
  • refactor and optimise existing logic 
  • follow patterns, libraries, and frameworks accurately 
  • maintain context across multi‑file projects 
  • act as an agent that can plan, execute, and iterate on tasks

You can think of ChatGPT as a generalist and Codex as the engineer. Both are powerful, but they’re built for very different jobs.

Why It Exploded Into the News

Timing. Three things happened almost simultaneously last week, and it was the timing of these events, as much as the events themselves, that brought the story front and centre in the tech news.

1. OpenAI and Anthropic launched competing coding models within minutes

It started with OpenAI releasing their latest Codex model at almost the exact moment Anthropic unveiled its new Claude coding variant.

Whether coordinated or coincidental, it created a sense of a head‑to‑head sprint. The press framed it as an AI coding arms race, and the narrative stuck.

2. OpenAI and Anthropic both claimed major breakthroughs in agentic coding.

Both companies positioned these releases as more than incremental updates. Each claimed a genuine leap forward, saying their models can now:

  • plan multi‑step coding tasks
  • call tools 
  • run and evaluate code
  • fix their own mistakes
  • work across entire repositories 

This is essentially a shift from “autocomplete on steroids” to a “junior engineer that can take a ticket and run with it”.

3. The question: could AI replace software?

A wider industry conversation about AI replacing software intensified last week too, with commentators and analysts exploring whether these AI Codex models could eventually:

  • reduce the number of SaaS subscriptions companies need
  • automate workflows traditionally handled by enterprise software
  • enable organisations to build their own tools instead of buying them
  • reshape the economics of “seats” and licences

4. A viral comment poured fuel on the fire

Thank social media for this one: OpenAI CEO Sam Altman’s remark that using the new Codex made him feel “a little useless” hit a nerve and went viral.

Of course, developers amplified it, commentators debated it, and suddenly the story wasn’t just about new models. It was about what these models mean for the future of software development. It dominated tech feeds and news stories last week – so much that I wanted to know more.

ChatGPT vs GPT‑Codex

So, for clarity, the simplest way to explain the difference between these two models is:

  • ChatGPT is built for natural language. 
  • GPT‑Codex is built for code.

ChatGPT is built on the general-purpose GPT models most users are familiar with – the same family that powers Microsoft Copilot. It’s designed for breadth: brilliant for language, strategy, communication, architecture, and ideation.

Codex is built to read your repo, understand your patterns, and produce code that fits your environment. It’s not for the general user.

Why This Matters for Engineering Teams and MSPs

The shift to agentic coding of course sparks concerns over the future role of developers, but both companies are keen to assure the world that this isn’t about replacing developers – it’s about changing the shape of the work.

This means:

  • faster delivery of repeatable patterns 
  • more consistent infrastructure‑as‑code 
  • automated remediation and optimisation 
  • better documentation and handover 
  • reduced toil across migration and modernisation projects

It also means, of course, that governance becomes even more important. If these models can now act and not just suggest, then organisations need a clear governance process that sets the rules, guardrails, and patterns for how AI participates in engineering workflows.

At the same time, the broader industry conversation about AI replacing software should be seen as a signal, not a threat. The opportunity for technology partners is to help organisations navigate this shift, just as they are helping clients adopt Gen AI tools like Copilot. This is still about exploring where AI augments existing tools, where it replaces them, and where it enables entirely new workflows.


This is only the beginning.

The Real Story Behind the Headlines
The noise last week wasn’t just about OpenAI and Anthropic competing (publicly); it was about the potential for a new category of AI becoming mainstream.

Here, coding models are being treated not as assistants, but as participants in the software lifecycle – and that has implications far beyond engineering.

This is the beginning of a new phase in how software gets built, bought, and delivered.
That’s why the term “AI Codex” suddenly feels everywhere. It captures the moment we moved from conversational AI to operational AI. From chat to action. From suggestion to execution.


What do you think? Is this a fad, or are SaaS apps genuinely going to be impacted?

More Anthropic Models coming to Microsoft Copilot

Microsoft is making a major change to how AI models are integrated into Copilot experiences. From 7 January 2026, Anthropic’s models will be enabled by default for Microsoft 365 Copilot licensed users, moving away from the current opt-in setting to a standard feature under Microsoft’s own contractual terms rather than Anthropic’s.

What’s Changing?

  • Default Enablement: Anthropic models, which were previously optional, will now be switched on by default for most commercial cloud customers. For UK and EU/EFTA customers the setting will be OFF by default, requiring a manual opt-in; for everyone else it will be ON.
  • Microsoft Subprocessor Status: Anthropic is now a Microsoft subprocessor, meaning its operations fall under Microsoft’s Data Protection Addendum and Product Terms (previously, use of Anthropic models fell under Anthropic’s own Commercial Terms).
  • Admin Controls: A new toggle should now be active in the Microsoft 365 Admin Centre.

Why It Matters

This update extends Microsoft’s enterprise-grade data protection standards to Anthropic-powered Copilot features and positions Microsoft as a secure broker for AI models, reducing its dependence on OpenAI alone. Working with companies like Anthropic under this “AI subprocessor” approach ensures:

  • Contractual Safeguards: Anthropic operates under Microsoft’s direction and compliance frameworks.
  • Choice and Flexibility: Access to the best model for each task drives quality and refinement in Copilot.
  • Enterprise Data Protection: Your data remains covered by Microsoft’s commitments, including the DPA and Product Terms.

Why Microsoft Is Adding Anthropic Support

Microsoft’s goal is to give organisations more choice and flexibility in Copilot experiences. Anthropic’s Claude models are known for strong reasoning and safety alignment, which complements Microsoft’s own AI capabilities. By onboarding Anthropic as a subprocessor, Microsoft can:

  • Offer advanced generative AI features in Word, Excel, PowerPoint, and Copilot Studio.
  • Maintain consistent compliance and security standards across all integrated models.
  • Enable customers to select external models for specialised use cases without sacrificing enterprise-grade protections.

Regional and Cloud Exceptions

  • UK & EU/EFTA: Toggle remains OFF by default; admins need to opt in.
  • Government & Sovereign Clouds: Anthropic models are not yet available.

Controlling access to other AI providers like Anthropic

To do this, head to the Microsoft 365 Admin Centre, go to Copilot > Settings, and choose the Data Access tab.

From there, decide whether to enable or disable access.

Looking Ahead

This change signals Microsoft’s commitment to expanding AI capabilities responsibly by leveraging the best model for the job or task. Enabling Anthropic (and other models) unlocks richer functionality – especially in Word, Excel, PowerPoint, and Copilot Studio – while maintaining strong data protection standards and still giving organisations choice.

Beyond OpenAI: Microsoft Copilot adds Claude support

Microsoft has started to broaden its AI horizons by adding its first non-OpenAI model to Copilot.

Microsoft is integrating Anthropic’s Claude models into Microsoft 365 Copilot, which marks a significant pivot from its exclusively OpenAI-centric approach. Microsoft is also working on its own models, which we already see on Copilot+ PCs and which will at some point make their way to Copilot.

This move is more than just a new menu option or toggle; it is part of a strategic play to diversify AI capabilities and reduce dependency on a single vendor.

Claude Opus and Sonnet in Copilot.

Claude Opus 4.1 and Sonnet 4 are now available to commercial Frontier Copilot users (corporate early adopters), offering, for the first time, alternatives to OpenAI’s GPT models for agents in Copilot Studio and for the Researcher agent.

Copilot Studio Model Selector (preview)

It’s worth noting that enabling access does require admin approval – see below.

In the formal announcement, Microsoft said that Anthropic’s models will unlock “more powerful experiences” for users.

Using Claude in Microsoft Researcher Agent – Via Copilot Chat.

Claude is not new to the Microsoft ecosystem: it is already embedded (along with other AI models) in Visual Studio and Azure AI, alongside Google’s models and Elon Musk’s Grok. This is, however, the first time we have seen non-OpenAI models powering Microsoft 365 Copilot itself.

Why This Matters

Microsoft’s shift to leveraging different models reflects a broader trend. Microsoft’s message here is that Copilot is no longer about a single model or even a single vendor, but more about orchestration, choice, and adaptability.

Different models have different areas of excellence, and this lays the foundation for Microsoft to tailor and tune AI experiences to specific business needs, using the most appropriate model for each task.

It does, however, raise questions around governance, model performance, and cost. With multiple models in play, it’s not yet clear how pricing will work if multi-model is the future of Microsoft 365 Copilot.

Data Sovereignty and Multi-Model Concerns?

One question I’m already seeing is around Microsoft’s boundary of trust and responsibility, something Microsoft makes much of across its Microsoft 365 portfolio.

While the flexibility of multi-model AI is compelling, the question is whether it introduces new considerations around data residency and compliance when multiple models are in use.

To address that, Microsoft has confirmed that these Claude models run within its Azure AI infrastructure, but notes that the models themselves are not Microsoft-owned. This means that when users opt to use Claude, their prompts and responses may be processed by Anthropic’s models hosted within Microsoft’s environment.

It also means that when organisations choose to use Anthropic models, they are using them under Anthropic’s Commercial Terms of Service, not consumer terms.

For regulated industries, or organisations with strict data governance policies, this is likely to raise a few red flags, or at least questions that Microsoft will need to be able to answer:

  • Data Boundary Clarity: Is the data staying within Microsoft’s compliance boundary, or is it crossing into Anthropic’s operational domain? If so what does this mean for data compliance and security?
  • Model-Specific Logging: Are logs and telemetry handled differently across models? Can organisations audit usage per model? How is encrypted data handled?
  • Privacy and Consent: Are users aware when their data is being processed by a non-Microsoft model? Is consent granular enough? Will users understand even if Microsoft tell them?

Again, Microsoft has stated that Claude models are “fully integrated” into the Microsoft 365 compliance framework, but organisations will still want to (and should) validate this against their own risk posture – especially where sensitive or regulated data is involved.

Enabling Claude models in Copilot.

To enable the models, your Microsoft 365 Admin needs to head over to the Microsoft 365 Admin Centre and enable access to the other models. Instructions for this are shown in the link below.

https://learn.microsoft.com/en-us/copilot/microsoft-365/connect-to-ai-models?s=09

Microsoft Message Centre announcement: https://admin.cloud.microsoft/?#/MessageCenter/:/messages/MC1158765?MCLinkSource=DigestMail

Thoughts.

This is a smart move, I think. Microsoft is playing the long game – moving its eggs out of a single basket, choosing models that make the most economic and performance sense, and bringing more choice to agent builders.

For partners like us at Cisilion, advising clients on AI adoption, this reinforces the need to think modularly. When building agents, don’t just pick a model – pick a framework that allows you to evolve. Microsoft’s Copilot is becoming that framework, and that should be good for business.

I do expect this is just the start. We know Microsoft’s relationship with OpenAI is less exclusive than it once was. As such, I expect more models, more integrations, and more choice – and I do think we will see Microsoft’s own models making their way to Copilot soon.

But with choice comes complexity. We need to ensure that governance, transparency, and user education keep pace with innovation. Again partners will need to help customers navigate this.

What do you think? Is this a good move for Microsoft and its customers?