As I finished my first week back to work of 2026, I was thinking about Microsoft’s Satya Nadella’s end‑of‑year reflection blog post “Looking Ahead to 2026“ which he wrote at the end of December 2025.
This for me landed at an important moment for the industry. After a year dominated by “AI slop,” concerns over whether the AI boom was ending and stock prices in key AI stocks becoming a bit rocky. In the Microsoft space, there has been some “backlash” about how much AI Microsoft are “pushing” in their products and the industry in general failing to really get big AI initiatives off the ground. We then of course have the upcoming chip shortages (which remind us back to COVID times) with AI cited as the cause of these shortages (memory especially) with AI Data Centres demand exceeding supply.
We have seen huge advances in AI models from Open AI, Google, Anthronic and Microsoft this year but the news has really been coined with the term AI Slop – with AI appearinf to be baked into everything whether people want it or not – who ever thought Notepad in Windows would get the Copilot treatments!!
Despite the negative press, AI is still very much the buzz word (heck, its driving all those component and chip shortages). At work, our clients are still talking about it, driving it forward and investing more into adoption, enablement and consultative engagements. Businesses remain “excited” about the opportunity AI can bring to businesses, consumers and the world!
In his blog post, Satya Nadella argues that we’re finally moving from discovery to diffusion. The message is clear: 2026 must be the year we separate spectacle from substance.
This rest of my blog looks at Nadella’s framing, the wider industry commentary, and what it means for organisations that need AI to deliver measurable value – not just noise with AI Slop or no slop!
From “bicycles for the mind” to “scaffolding for human potential”
In his blog, Satya Nadella revisits Steve Jobs’ famous metaphor – computers as “bicycles for the mind” – and argues that it no longer captures the scale or nature of modern AI. Instead, he positions AI as scaffolding: a structure that supports, amplifies and extends human capability rather than replacing it.
This is a subtle but important shift. It reframes AI not as a tool we operate, but as a system that surrounds us — shaping how we work, decide, and collaborate.
Coverage from the Economic Times reinforces this by highlighting the need to move beyond the novelty phase and focus on real‑world impact. In short, the industry must stop debating “slop vs sophistication” and start designing systems that genuinely improve human cognition and outcomes.
From what I see with my engagement with business leaders, it means moving away from app and model‑centric thinking (what is better ChatGPT or Copilot) and towards systems engineering within our line of business. We need to look at how our business works today, where AI fits and where automation counts. Businesses need to work with Microsoft, with their technology partners and most importantly from within, to plan and deliver measurable outcomes.
“Aiming to please” is not alignment
One of the most important – and least discussed issues in business led AI is the tension between:
- AI that aims to please (reinforcement learning from human feedback, training, safety tuning, helpfulness scoring) where AI is just that assistant, that Copilot!
- AI that is aligned (truthful, reliable, predictable, value‑consistent and therefore really useful) which is much more important as we look for it to help us make decisions, take action and do stuff!
These are not the same thing.
Research throughout 2025 showed that models tuned to be more “helpful” often become more sycophantic, more agreeable, and more likely to produce confident but incorrect answers. An interesting article I found from “PC Gamer” cites Microsoft‑linked research suggesting that over‑reliance on AI tools can actually reduce user capability over time due to the mind set of “AI can do anything” whilst at the same point “we don’t trust AI”. So, which is it?
This re-enforces Satya’s view that as we enter 2026, the priority must shift from “does it sound good?” to “does it stand up to scrutiny?”
The AI race is real – and so is the concentration of power
AI is now a currency and we are seeing this everywhere. Leaders cite it, big tech companies are embracing it and the models are doubling in capability in less than a 12 month cycle (some say 8 months). Compute, data, and model access are becoming strategic assets – we are seen this through component shortages! These global hyper-scalers and a handful of model labs will capture disproportionate value, and enterprises will increasingly depend on them for:
- model access
- agent frameworks
- safety and alignment layers
- orchestration platforms
- compute and optimisation pipelines
This shouldn’t be seen as negative, but it does require deliberate strategy. Vendor lock‑in, opaque safety layers, and proprietary agent ecosystems will shape enterprise risk profiles for years.
Then there is the vast number of other tech venders embracing the security and safety layers – in the dozens these are and will appear to sell you AI Safety and Security tools, while we wait for the leading AI models and companies to bake this is. Firms like Microsoft will likely do this well and make it part of their security and compliance approach across their wider platforms. others will or may lean to other vendors through partnerships.
The biggest gap perhaps is still user awareness, adoption and controlling shadow-IT. This becomes very more important as organising and the world begin the next pivot shift from AI assistants to autonomous AI.
Again, Satytas’ blog acknowledges this indirectly: the industry does not yet have “societal permission” for widespread autonomous AI. That permission will depend on trust, governance, and measurable outcomes – not just capability.
Two strategic paths for Corporate and Enterprise AI?
We look at two different paths for how organisation use and leverage AI. The oath for superior intelligence and the path that recognises AI’s purpose as cognitive scaffolding.
Path A: Pursue superior intelligence
- Focus on emergent capabilities and autonomy
- High potential upside
- High systemic risk
- Harder to govern, audit, and predict
Path B: AI as cognitive scaffolding
- Focus on augmentation, reliability, and workflow integration
- Measurable value
- Lower risk
- Requires strong systems engineering
Satya calls out that most organisations will need to operate somewhere between these paths, but that the balance must be intentional.
What IT leaders should prioritise in 2026
There have been lots of posts around this from many different players, analyst firms and now little ol’ me! For me, nothing much has changed since last year (though the models have got cleverer, faster, cheaper!)
Outcome‑based evaluation: Benchmarks and demos are not enough. Businesses need to look at real use cases, go deep with proof of value production pilots. they need to determine what good looks like. It will likely require solutions/use cases that deliver task‑level accuracy, has no (minimal) hallucination, and longitudinal user‑impact studies which they can be loud and proud of.
Human‑centred design: This partly means no AI for AI’s sake. That said user familiarisation and confidence (like Internet Skills once were) are key for trust and understanding Departments need to surface provenance, uncertainty, and verification steps in putting AI to work. Moving beyond assistive to authorative AI should not be done lightly – trust is everything and humans being in the loop is vital in order to avoid “auto‑execute” defaults for high‑risk actions.
Governance and entitlements: IT and compliance need to have (and enable in the case of Copilot for example) auditing to track which agents can access which data and enforce least‑privilege by default. Audit everything, Trust nothing, Training everyone and Pilot, Pilot, Pilot, test, test, test.
Red‑team testing and observability – This goes beyond security and governance – this is about trusting AI to act like a human would. Use cases need to ensure they test for sycophancy, bias, adversarial prompts, and silent failure modes – for AI to world in place of human roles, there needs to be trust, failback and no “time outs” in the middle of a conversation with an agent.
Skills protection – Training is key to ensure organisations pair automation with training to prevent deskilling of people and upskilling of how inference and AI models work. We need to document everything, know what expected outcomes are and re-evaluate each business process, AI model and task our AI is performing. We need to preserve the entity of what our business is about this is the human factor) and retain our rooted institutional knowledge.
Final reflection
Sayta Nadella’s call to move beyond “AI slop” is timely and what many leaders are thinking. It’s hard to beleive it is less that 2 years since Copilot make it’s general availabiloity to the masses and the industry has spent these two years chasing and demaning more capability, bigger models, flashier demos, and viral outputs.
As we sit here, closer to 2030 than we are to 2020, 2026 demands something different. AI needs to deliver on the promises. This is less about the AI and more about how organisations re-shape, re-invest and rebuild business modes, systems, and processes with AI in mind. Technology change is not quick and simple and AI is not (in most cases) not a quick fix answer to underlying issues. For IT pros and business leaders, the message is clear:
- Demand measurable value in pilot and projects
- Prioritise trust and governance and use a model/tool that fits into what you have
- Treat AI as scaffolding, not a substitute for good processes and people
- Build for augmentation, not automation (in the immediate term)
- Evaluate outcomes, not optics and do in phases.
The AI race will continue to get faster, we will hear more success stories, more doom and gloom, feel left behind because we are not moving fast enough and people will worry more about job losses, environmental impact and cost. The economics will only intensify, but I believe that the organisations that win will be those that operationalise AI responsibly, at the right pace, deliberately, and with a clear understanding of where the real risks and opportunities lie.
Thanks for reading or listening. Happy 2026!
References / Sources in this doc
- Economic Times — “Microsoft CEO Satya Nadella calls for a big AI reset in 2026…”
- PC Gamer — “Microsoft CEO Satya Nadella says it’s time to stop talking about AI slop…”

























