How OpenAI and Anthropic Just Triggered the Next Big Shift in Software

The last week has felt like a turning point in the AI landscape. Not because of a single product launch, but because two of the biggest players in the industry effectively declared that the next frontier isn’t chat, search, or reasoning. It’s agentic coding. And it’s the potential for that shift to reshape the landscape of SaaS applications very quickly (especially for SMBs) that has caused the wider world to wobble a bit.

Agentic coding (or “AI Codex”) is the term that has been buzzing across tech news and social media.

The term has been around for a while, but last week it became shorthand for a new class of models designed not just to write code, but to understand systems, navigate repositories, fix issues, and build software with a level of autonomy that edges closer to genuine engineering assistance.

The second narrative, and the “concern” rising alongside it, is the idea that AI isn’t just transforming software development – it’s going to (and is already beginning to) challenge the software industry itself.

Analysts and commentators have been exploring whether AI agents could eventually replace entire categories of SaaS tools, reshape how companies buy software, or even enable organisations to build their own alternatives on demand.

These two threads – the rise of agentic coding and the pressure on traditional software models – collided last week, and that’s why the conversation has become so loud.

What is an AI Codex?

AI Codex refers to a specialised family of models optimised for programming tasks. Where a general model like ChatGPT is trained to converse, explain, and reason across domains, Codex models are tuned to:

  • read and interpret codebases
  • generate new code with high precision 
  • refactor and optimise existing logic 
  • follow patterns, libraries, and frameworks accurately 
  • maintain context across multi‑file projects 
  • act as an agent that can plan, execute, and iterate on tasks

You can think of ChatGPT as a generalist and Codex as the engineer. Both are powerful, but they’re built for very different jobs.

Why It Exploded Into the News

Timing… A few things happened almost simultaneously last week, and it was the timing of these, as much as what actually happened, that pushed the story front and centre in the tech news.

1. OpenAI and Anthropic launched competing coding models within minutes of each other

It started with OpenAI releasing their latest Codex model at almost the exact moment Anthropic unveiled its new Claude coding variant.

Whether coordinated or coincidental, it created a sense of a head‑to‑head sprint. The press framed it as an AI coding arms race, and the narrative stuck.

2. OpenAI and Anthropic both claimed major breakthroughs in agentic coding.

Both Anthropic and OpenAI positioned these as more than incremental updates. Both described the releases as a leap forward, saying their models can now:

  • plan multi‑step coding tasks
  • call tools 
  • run and evaluate code
  • fix their own mistakes
  • work across entire repositories 

This is essentially a shift from “autocomplete on steroids” to something closer to a “junior engineer that can take a ticket and run with it”.
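
To make that shift concrete, below is a minimal, hypothetical sketch in Python of the kind of plan, act, run, evaluate loop described above. The `CodingModel` interface, the `apply_patch` callable, and the method names are illustrative placeholders only – not OpenAI’s or Anthropic’s actual APIs.

```python
import subprocess
from typing import Callable, Protocol


class CodingModel(Protocol):
    """Placeholder interface for a code-tuned model; not a real vendor API."""
    def plan(self, ticket: str) -> list[str]: ...
    def propose_patch(self, step: str) -> str: ...
    def observe_failure(self, logs: str) -> None: ...


def run_tests() -> tuple[bool, str]:
    """Run the project's test suite and capture its output (a 'tool call')."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr


def agentic_fix(
    ticket: str,
    model: CodingModel,
    apply_patch: Callable[[str], None],
    max_attempts: int = 5,
) -> bool:
    """Sketch of a plan -> edit -> run -> evaluate loop for a coding agent."""
    for step in model.plan(ticket):            # 1. break the ticket into steps
        for _ in range(max_attempts):
            patch = model.propose_patch(step)  # 2. generate a code change
            apply_patch(patch)                 # 3. act on the repository
            passed, logs = run_tests()         # 4. run and evaluate the result
            if passed:
                break                          # step done, move to the next
            model.observe_failure(logs)        # 5. feed errors back and retry
        else:
            return False                       # this step never converged
    return True                                # every step passed its tests
```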

3. The question: could AI replace software?

A wider industry conversation about AI replacing software also intensified last week, with commentators and analysts exploring whether these AI Codex models could eventually:

  • reduce the number of SaaS subscriptions companies need
  • automate workflows traditionally handled by enterprise software
  • enable organisations to build their own tools instead of buying them
  • reshape the economics of “seats” and licences

4. A viral comment poured fuel on the fire

Thank social media for this one. Sam Altman’s remark that using the new Codex made him feel “a little useless” hit a nerve and went viral.

Developers amplified it, commentators debated it, and suddenly the story wasn’t just about new models. It was about what these models mean for the future of software development. It dominated tech feeds and news stories last week, so much so that I wanted to know more…

ChatGPT vs GPT‑Codex

So, for clarity, the simplest way to explain the difference between these two models is:

  • ChatGPT is built for natural language. 
  • GPT‑Codex is built for code.

ChatGPT is the general-purpose model most users are familiar with; the same family of base LLMs underpins ChatGPT itself and Microsoft Copilot. It’s designed for breadth: brilliant for language, strategy, communication, architecture, and ideation.

Codex is built to read your repo, understand your patterns, and produce code that fits your environment. It’s not for the general user.

Why This Matters for Engineering Teams and MSPs

The shift to agentic coding naturally sparks concerns about the future role of developers, but both OpenAI and Anthropic are keen to assure the world that this isn’t about replacing developers; it’s about changing the shape of the work.

This means:

  • faster delivery of repeatable patterns 
  • more consistent infrastructure‑as‑code 
  • automated remediation and optimisation 
  • better documentation and handover 
  • reduced toil across migration and modernisation projects

It also means, of course, that governance becomes even more important. If these models can now act rather than just suggest, organisations need a clear governance process that sets out the rules, guardrails, and patterns for how AI participates in engineering workflows.
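
To ground what “rules, guardrails, and patterns” might look like in practice, here is a small, hypothetical sketch in Python. The policy fields and path patterns are illustrative assumptions only, not a standard or vendor-defined schema; a real governance process would also cover data access, secrets handling, and review workflows.

```python
from dataclasses import dataclass, field
from fnmatch import fnmatch


@dataclass
class AgentGuardrails:
    """Illustrative guardrail policy for an AI coding agent (example fields only)."""
    allowed_paths: list[str] = field(default_factory=lambda: ["src/**", "tests/**"])
    blocked_paths: list[str] = field(default_factory=lambda: ["infra/prod/**", "**/secrets*"])
    require_passing_tests: bool = True       # agent changes must keep the test suite green
    require_human_review: bool = True        # a person signs off before merge
    max_changed_files_per_pr: int = 20       # cap the blast radius of any one change

    def may_modify(self, path: str) -> bool:
        """Return True only if the path is allowed and not explicitly blocked."""
        if any(fnmatch(path, pattern) for pattern in self.blocked_paths):
            return False
        return any(fnmatch(path, pattern) for pattern in self.allowed_paths)


# Example: the policy permits application code but rejects production infrastructure.
policy = AgentGuardrails()
print(policy.may_modify("src/billing/invoice.py"))   # True
print(policy.may_modify("infra/prod/network.tf"))    # False
```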

At the same time, the broader industry conversation about AI replacing software should be seen as a signal, not a threat. The opportunity for technology partners is to help organisations navigate this shift, just as they are already helping them adopt Gen AI tools like Copilot. This is still about exploring where AI augments existing tools, where it replaces them, and where it enables entirely new workflows.


This is only the beginning.

The Real Story Behind the Headlines
The noise last week wasn’t just about OpenAI and Anthropic competing publicly; it was about the potential for a new category of AI to become mainstream.

Here, coding models are being treated not as assistants, but as participants in the software lifecycle – and that has implications far beyond engineering.

This is the beginning of a new phase in how software gets built, bought, and delivered.
That’s why the term “AI Codex” suddenly feels everywhere. It captures the moment we moved from conversational AI to operational AI. From chat to action. From suggestion to execution.


What do you think? Fad, or are SaaS apps genuinely going to be impacted?