What are Copilot connectors?

A Copilot connector is the plumbing that brings your organisation’s external content into Microsoft 365 so Copilot can ground answers in real, company-specific data.

Practically speaking, a connector extracts or fetches items from a third‑party system (files, tickets, PRs, CRM records, etc.), indexes those items into the Microsoft Graph, and makes them available to Copilot and Microsoft Search under your tenant’s security and governance rules.

How it works in plain terms

A connector usually has three moving parts: the source adapter (code that knows how to read the external system), an agent or orchestration layer that schedules crawls and handles incremental updates, and the ingestion into Microsoft Graph where items are stored and security‑trimmed. Microsoft provides a Connectors SDK and a lightweight connector agent so you can build custom connectors or run Microsoft’s prebuilt ones; the agent handles full and incremental crawls, delete detection, ACL stamping, and ingestion into Graph.
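To make the ingestion step concrete, here is a minimal sketch of the item shape a connector pushes into Microsoft Graph. The `acl`/`properties`/`content` structure follows the Graph externalItem model, but the ticket fields, group id, and helper function below are illustrative placeholders, not a real connector implementation.

```python
# Illustrative sketch: mapping a source-system item to the Graph externalItem
# shape (acl + properties + content). Field names in the ticket dict and the
# group id are made-up placeholders for this example.

def build_external_item(ticket: dict, allowed_group_id: str) -> dict:
    """Map a source-system ticket to a Graph externalItem payload."""
    return {
        "acl": [  # ACL stamping: security trimming so only this group sees the item
            {"type": "group", "value": allowed_group_id, "accessType": "grant"}
        ],
        "properties": {  # searchable metadata, matching the connection's schema
            "title": ticket["title"],
            "url": ticket["url"],
            "lastModified": ticket["updated_at"],
        },
        "content": {  # full text that Copilot grounds answers in
            "type": "text",
            "value": ticket["description"],
        },
    }

item = build_external_item(
    {"title": "Checkout bug", "url": "https://example.test/t/42",
     "updated_at": "2026-01-10T09:00:00Z", "description": "Users see a 500 error..."},
    allowed_group_id="aad-group-guid",
)
# The agent would then PUT payloads like this to
# /external/connections/{connectionId}/items/{itemId} during each crawl.
```

The key point is that the ACL travels with the item at ingestion time, which is what makes tenant-wide security trimming possible later.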

Real connector examples

Copilot connectors already cover a wide range of enterprise systems. Here are practical examples you’ll see in the wild and the kinds of prompts they unlock:

  • Salesforce – pull opportunity summaries and recent activity to prep for a customer call.  
  • Jira / Azure DevOps – surface sprint status, open bugs, or backlog items when you ask for release readiness.  
  • GitHub / GitLab / Bitbucket – summarise open pull requests, list failing CI runs, or extract changelog notes.  
  • Box / Dropbox / Google Drive / SharePoint – compare versions of a spec or summarise the latest product doc.  
  • ServiceNow / Zendesk / Freshservice – analyse incident trends and recommend process changes.

Each connector turns source content into indexed, searchable items so a prompt like “Summarise the top five customer issues from Zendesk this month and suggest fixes” returns grounded, citeable results instead of generic advice.

Connector versus MCP server versus calling an API

Copilot connector (indexed or federated). 
A connector’s job is to make content available to Microsoft Graph and Copilot.

Many connectors follow a crawl‑and‑index model (content is ingested into the Microsoft Graph), while newer federated or user‑level connectors let Copilot fetch live data on demand using the user’s credentials. The indexed model is great for broad search and fast responses; federated connectors are useful when you need real‑time, user‑scoped access.

MCP server (Model Context Protocol). 

MCP is a protocol and server model that standardizes how AI assistants call external tools and fetch context in real time. An MCP server exposes a set of tools/endpoints that agents (like Copilot or Teams Channel Agent) can discover and invoke; it’s about runtime tooling and live interactions, not indexing. Think of an MCP server as a live automation or tool host — for example, a calendar MCP server that creates events or a custom MCP server that exposes internal business actions to an agent. MCP servers are registered so Microsoft 365 agents can discover and securely call them.
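To illustrate the difference from indexing, here is a hypothetical sketch of what an MCP server exposes: a named tool with a JSON-schema input contract that an agent can discover and invoke at runtime. A real server would use an MCP SDK to register this; the tool name, fields, and handler below are invented for illustration.

```python
# Hypothetical MCP-style tool description: the agent discovers the schema,
# then invokes the tool live — nothing here is indexed into Graph.
# Tool name, fields, and handler are invented for this example.

triage_tool = {
    "name": "triage_ticket",
    "description": "Open a triage workflow for a ticket and assign an owner.",
    "inputSchema": {  # agents read this schema to know how to call the tool
        "type": "object",
        "properties": {
            "ticket_id": {"type": "string"},
            "owner": {"type": "string"},
        },
        "required": ["ticket_id"],
    },
}

def handle_triage(args: dict) -> dict:
    """Handler the server runs when an agent invokes the tool (live, not indexed)."""
    # A real implementation would call the ticketing system's API here.
    return {
        "status": "triaged",
        "ticket_id": args["ticket_id"],
        "owner": args.get("owner", "unassigned"),
    }

result = handle_triage({"ticket_id": "INC-1001"})
```

This is the essential contrast with a connector: the tool runs an action at request time rather than making content searchable in advance.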

Direct API Calls

Calling a service’s API is the most basic integration pattern: your app authenticates and requests data or performs actions directly. That’s perfect for bespoke workflows or when you need full control over data flow, but it doesn’t automatically make that data searchable inside Microsoft Graph or available to Copilot across the tenant. If you want Copilot to use that data as part of its knowledge, you either build a connector that indexes it into Graph or expose it via an MCP/federated connector for live access.
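As a sketch of that direct pattern, here is what a bespoke app calling Jira’s search endpoint might look like. The endpoint shape follows Jira Cloud’s REST search API, but the host, credentials, and JQL are placeholders — and note that nothing returned here lands in Microsoft Graph or becomes visible to Copilot.

```python
import base64
import urllib.parse
import urllib.request

# Direct-API pattern sketch: your app authenticates and queries the service
# itself. Host, email, token, and JQL below are placeholders.

def jira_search_request(base_url: str, jql: str, email: str, token: str):
    """Build an authenticated request against Jira Cloud's issue search API."""
    query = urllib.parse.urlencode({"jql": jql, "maxResults": 20})
    url = f"{base_url}/rest/api/3/search?{query}"
    auth = base64.b64encode(f"{email}:{token}".encode()).decode()
    return urllib.request.Request(url, headers={
        "Authorization": f"Basic {auth}",  # Jira Cloud basic auth with an API token
        "Accept": "application/json",
    })

req = jira_search_request(
    "https://example.atlassian.net",
    "project = PROD AND status = Blocked",
    "me@example.com", "api-token",
)
# urllib.request.urlopen(req) would return the matching issues as JSON —
# visible only to your app, not to Copilot or Microsoft Search.
```

Full control over the query and the data flow, but the integration surface stays private to your app.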

How they differ…

  • If you index Jira with a connector, Copilot can quickly answer questions such as “What’s blocking Product release X?” using indexed issues.  
  • If you run an MCP server that exposes a ticket‑triage tool, Copilot (or an agent) can invoke it directly to open a triage workflow and assign owners in real time, without you needing to open the app.
  • If you call the Jira API from a custom app, you can build any workflow you want – but Copilot won’t automatically see that data unless you also surface it via a connector or MCP endpoint.

Security, governance, admin controls

Connectors are designed to respect tenant boundaries and Microsoft 365 governance. Indexed content is ingested under your tenant, encrypted, and security‑trimmed so users only see what they’re allowed to see. Admins manage connectors from the Microsoft 365 admin center, control which connectors are available, and can stage rollouts or disable federated connectors if they don’t meet policy. The connector agent and SDK also let you enforce crawl schedules, incremental updates, and ACL mapping so indexing behaves predictably at scale.

Connector, MCP server, or API?

When you need Copilot (or an agent) to work with an external service, you usually have three realistic integration patterns to choose from: indexing via a Copilot/Graph connector, exposing live tools through an MCP server, or calling the service’s API directly from your app. Each option solves different problems.

Below I will try to explain the tradeoffs in plain language, give concrete examples, and finish with a short decision checklist you can use immediately. NB: I’m still learning this myself so bear with me!

  • Use a Copilot connector when your goal is to make a lot of content searchable and citeable across the tenant so Copilot and Microsoft Search can answer questions quickly (think documents, tickets, PRs).
  • Use an MCP server when you need runtime tooling – live, discoverable actions an agent can call (for example: triage a ticket, kick off a workflow, or run a business action) rather than just read data. 
  • Call the API directly if you need to build a bespoke app or workflow that requires full control, custom logic, or real‑time two‑way operations – bearing in mind that Copilot won’t automatically see that data unless you also surface it via a connector or MCP.

These are not mutually exclusive, and sometimes you might need a combination. For example, you may index the data for search and also expose a small set of live actions via MCP or direct API calls.

Deciding which you need

Here’s a checklist I’ve found helpful, which some of my developer friends use…

  1. What do you need Copilot to do? If it’s answer/search/summarise → connector. If it’s act/execute/triage → MCP or API. 
  2. Do you need tenant‑wide, security‑trimmed search? If yes, prioritise a connector. 
  3. Do you need live user‑scoped actions or interactive workflows? If yes, design an MCP server or expose specific API endpoints as MCP tools. 
  4. How fast do you need it? Real‑time → MCP/API. Fast search across many items → connector. 
  5. What’s your governance posture? If you need Microsoft 365 controls and auditing, connectors and Power Platform connectors give built‑in alignment; MCP requires explicit registration and security design.
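The checklist above can be sketched as a toy decision helper – the labels and rules here are purely my own reading of the tradeoffs in this post, not official guidance.

```python
# Toy encoding of the decision checklist — illustrative only.

def pick_integration(goal: str, needs_tenant_search: bool,
                     needs_live_actions: bool) -> str:
    """Suggest an integration pattern from the checklist answers."""
    if goal in ("answer", "search", "summarise") and needs_tenant_search:
        return "Copilot connector (index into Graph)"
    if needs_live_actions:
        return "MCP server (live, discoverable tools)"
    return "Direct API (bespoke app, full control)"

# "What's blocking release X?" across thousands of items -> index it
choice = pick_integration("search", needs_tenant_search=True,
                          needs_live_actions=False)
```

In practice the real answer is often a combination, as noted earlier, but forcing yourself through the questions in order keeps the first build small.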

The other tips from my limited experience so far (I did say bear with me) are:

  1. Use prebuilt connectors first – there are loads. Check the Microsoft Copilot connectors gallery for existing connectors before building custom ones.  
  2. If you need to build (or get someone to build) a custom connector, use the Connectors SDK and agent to build and test it; the agent handles crawling and ingestion. 
  3. Pilot / test with real prompts and measure relevance, latency, and trust (are results correctly security‑trimmed and useful?).

Cisco Updates Contract Terms in Response to Market Volatility

Cisco’s recent update to partner contract terms, prompted by rising memory prices, has caught attention across the partner and customer community. Of course any change that vendors make that has potential to impact pricing protection or order certainty will get the channel talking.

But taken in context – and with Cisco’s own commentary in mind – this looks less like a shift in philosophy and more like a pragmatic response to a market that remains anything but stable.

Market-Wide Challenge, Not a Cisco-Specific One

The backdrop here matters.

As Cisco’s global partner sales SVP Tim Coogan outlined in a note to partners, “the industry is dealing with sustained supply constraints driven largely by AI-led demand for memory and storage”. Production capacity simply hasn’t kept pace, leading to longer lead times and rising component costs across the server and infrastructure ecosystem.

This isn’t unique to Cisco. Other major vendors have already taken similar – and in some cases more aggressive – steps (we have seen this, and expect to see more, in the devices and desktops space). Against this challenge, Cisco’s approach is arguably measured, and something they need to do given the volume of hardware product lines this is likely to affect.

What did Cisco announce around pricing changes?

As Cisco announced to partners last week – and as the channel has loudly commented on – the key headlines were:

  • Cisco are changing partner terms and now reserve the right to cancel compute orders up to 45 days before shipment.
  • Cisco also reserve the right to adjust pricing if component or manufacturing costs materially change.
  • Cisco are revising quote validity, but have said they will work with partners to operationalise the changes and will communicate them clearly.

For partners like us, none of this is ideal, as customers expect prices to be held subject to deal registration and partner pricing terms. This means partners will likely need to change their terms too.

Certainty and confidence are of course preferable, but there are some important nuances worth calling out in these announcements.

Why are there changes happening?

One of the more overlooked aspects of Cisco’s position here is why these changes are being made.

On Cisco’s Q2 earnings call, CEO Chuck Robbins was explicit that Cisco is leaning heavily into its supply chain scale and financial strength to secure memory supply – including advanced purchase commitments that have increased significantly year-on-year. They are also assuring partners that they will keep them up to date around pricing and supply chain challenges.

For partners, that matters a lot: it’s partners that customers talk to about what’s happening, so it’s important that partners are kept up to date.

Any technology vendor that can secure supply, even at higher cost, is better positioned to fulfil customer demand than a competing vendor that simply freezes or cancels orders at the last minute.

According to a CRN report, Cisco’s 45‑day cancellation window is actually more generous than policies being introduced by other vendors in the market. Cisco’s approach does mean that partners should get:

  • Earlier visibility of potential changes impacting deals and projects.
  • More time to manage customer expectations and look at alternatives or options.
  • Greater confidence that committed orders are backed by real supply planning.

Of course from a customer perspective, the alternative to this approach is often worse.

The worst thing for an organisation is poorly communicated (or silent) price increases, delayed shipments, or last-minute order cancellations. These can erode trust far more quickly than upfront conversations about market volatility.

Cisco’s CFO Mark Patterson said that “we are adjusting terms to control what we can control” while staying close to market dynamics.

This transparency will hopefully give partners the opportunity to have informed, proactive discussions with their customers, with statements backed up by Cisco rather than fobbing them off. Something many organisations learned the hard way during the pandemic-era supply crunch, with orders cancelled, prices increased and supply-chain promises broken.

Trust and Execution

Communication will of course be how Cisco are measured and trusted here (as will Cisco partners).

Partners need clarity and will demand it from Cisco if it’s not given proactively, particularly around revised quote price protection periods — and they’ll need it quickly.

Partners need to be able to meet their commitments too, so knowing whether a quote is valid for 45 days, 30 days or three days materially changes how customer conversations are framed by partners.

Final Thoughts

It’s not perfect, but it’s happening, and expect others to follow. In a market defined by uncertainty, pragmatism may be the most partner-friendly option available.

Will organisations change or shift vendors? Maybe. Will those competing vendors be as open and honest with partners and customers? Who knows. Will organisations look to do more in cloud to mitigate these risks? Perhaps.

2026 Role‑Based training for Microsoft 365 Copilot users

I’m a huge believer in role‑based learning because it gives people practical, relevant ways to bring AI into the work they already do. Wherever you are on your Copilot journey – curious beginner or confident user – these sessions can help you, your teammates or your friends understand how to get the best from Copilot in your line of work.

There’s FREE virtual training available for executive leadership, finance, HR, IT, marketing, operations, legal, and sales. Each session focuses on real workflows, real scenarios, and great examples.

Training sessions commence from Tuesday 10th Feb, so take a look and register below. https://msft.it/6047Q3DET

How OpenAI and Anthropic Just Triggered the Next Big Shift in Software

The last week has felt like a turning point in the AI landscape. Not because of a single product launch, but because two of the biggest players in the industry effectively declared that the next frontier isn’t chat, search, or reasoning. It’s agentic coding. It’s the potential to change the landscape of SaaS applications really quickly (especially for SMBs) which has caused the world to wobble a bit.

“Agentic coding” (or “AI Codex”) is the term that has been buzzing across tech news and social media.

That said, the term has been around for a while, but last week it became shorthand for a new class of models designed not just to write code, but to understand systems, navigate repositories, fix issues, and build software with a level of autonomy that edges closer to genuine engineering assistance.

The second narrative and the “concern” that has been rising alongside it is the idea that AI isn’t just transforming software development – it’s going to (and is already beginning to) challenge the software industry itself.

Analysts and commentators have been exploring whether AI agents could eventually replace entire categories of SaaS tools, reshape how companies buy software, or even enable organisations to build their own alternatives on demand.

These two threads – the rise of agentic coding and the pressure on traditional software models – collided last week, and that’s why the conversation has become so loud.

What is an AI Codex?

AI Codex refers to a specialised family of models optimised for programming tasks. Where a general model like ChatGPT is trained to converse, explain, and reason across domains, Codex models are tuned to:

  • read and interpret codebases
  • generate new code with high precision 
  • refactor and optimise existing logic 
  • follow patterns, libraries, and frameworks accurately 
  • maintain context across multi‑file projects 
  • act as an agent that can plan, execute, and iterate on tasks

You can think of ChatGPT as a generalist and Codex as the engineer. Both are powerful, but they’re built for very different jobs.

Why It Exploded Into the News

Timing… A few things happened almost simultaneously last week, and it was the timing of these, as much as what happened, that brought it front and centre in the tech news.

1. OpenAI and Anthropic launched competing coding models within minutes

It started with OpenAI releasing their latest Codex model at almost the exact moment Anthropic unveiled its new Claude coding variant.

Whether coordinated or coincidental, it created a sense of a head‑to‑head sprint. The press framed it as an AI coding arms race, and the narrative stuck.

2. OpenAI and Anthropic both claimed major breakthroughs in agentic coding.

This was the next thing: both Anthropic and OpenAI positioned these as more than just incremental updates. Both said these models are a leap forward and can now:

  • plan multi‑step coding tasks
  • call tools 
  • run and evaluate code
  • fix their own mistakes
  • work across entire repositories 

This is essentially a shift from “autocomplete on steroids” to more of a “junior engineer that can take a ticket and run with it”.

3. The question of could AI replace software?

A wider industry conversation about AI replacing software intensified last week too, with commentators and analysts exploring whether these AI Codex models could eventually:

  • reduce the number of SaaS subscriptions companies need
  • automate workflows traditionally handled by enterprise software
  • enable organisations to build their own tools instead of buying them
  • reshape the economics of “seats” and licences

4. A viral comment poured fuel on the fire

Thank social media for this one: OpenAI CEO Sam Altman’s remark that using the new Codex made him feel “a little useless” hit a nerve and went viral.

Of course, developers amplified it, commentators debated it, and suddenly the story wasn’t just about new models. It was about what these models mean for the future of software development. It dominated tech feeds and news stories last week – so much so that I wanted to know more…

ChatGPT vs GPT‑Codex

So again, for clarity, the simplest way to explain the difference between these two models is:

  • ChatGPT is built for natural language. 
  • GPT‑Codex is built for code.

ChatGPT is built on the base LLM most users are familiar with – the same family that underpins Microsoft Copilot. It’s designed for general use: brilliant for language, strategy, communication, architecture, and ideation.

Codex is built to read your repo, understand your patterns, and produce code that fits your environment. It’s not for the general user.

Why This Matters for Engineering Teams and MSPs

The shift to agentic coding of course sparks concerns over the future role of developers, but both companies are trying to assure the world that it’s not about replacing developers – it’s about changing the shape of work.

This means:

  • faster delivery of repeatable patterns 
  • more consistent infrastructure‑as‑code 
  • automated remediation and optimisation 
  • better documentation and handover 
  • reduced toil across migration and modernisation projects

It also means, of course, that governance becomes even more important. If these models can now act and not just suggest, organisations need a clear governance process to define the rules, guardrails, and patterns that determine how AI participates in engineering workflows.

At the same time, the broader industry conversation about AI replacing software should be seen as a signal, not a threat. The opportunity for technology partners is to help organisations navigate this shift (just as they are helping them adopt Gen AI tools like Copilot). This is still about exploring where AI augments existing tools, where it replaces them, and where it enables entirely new workflows.


This is only the beginning.

The Real Story Behind the Headlines
The noise last week wasn’t just about OpenAI and Anthropic competing (publicly); it was about the potential for a new category of AI becoming mainstream.

Here, coding models are being treated not as assistants, but as participants in the software lifecycle – and that has implications far beyond engineering.

This is the beginning of a new phase in how software gets built, bought, and delivered.
That’s why the term “AI Codex” suddenly feels everywhere. It captures the moment we moved from conversational AI to operational AI. From chat to action. From suggestion to execution.


What do you think? Fad, or are SaaS apps going to be impacted?

Create Agents in One-Click from your OneDrive 

Microsoft has now made it possible to create grounded knowledge “agents” directly from OneDrive. If you’ve not seen this yet, it allows you to select up to 20 OneDrive for Business files and create an agent you can use to ask natural‑language questions across all of those documents at once. You can even share it with others to use.


It works just like creating an agent from scratch that is grounded in specific OneDrive files or folders, but you can do it with a quick one‑click create. This is a simple way to create an agent for reasoning over a bunch of project, customer or research files.

It lives directly in the OneDrive folder, as well as being pinned as an agent in Copilot Chat. It makes agent building (if you can really call it that) very simple and quick.

Once created, it works like any other agent – meaning you can ask questions based on your files such as:

  • What decisions have we made so far? 
  • What risks keep coming up across these documents? 
  • Give me a summary of themes across all project files.

Microsoft describes the result as “complete, grounded responses”, and in the demos and tests I have done so far, that seems about right.

How to create an Agent from OneDrive

To Create an agent from OneDrive (Microsoft 365), head over to your OneDrive and choose a folder. Open it and simply click the “Copilot” button and then “Create an agent”.

One-Click to Create an Agent from your OneDrive files or library

Optionally, you can name the agent, change its logo, give it custom instructions, starter prompts and a role.

The agent creates a .agent file in the OneDrive folder.

When you open it, it installs the agent into the M365 Copilot pane which you then need to select to use it.

The agent responds and works like any other agent.

Do we need a OneDrive agent?

I think it depends. You could easily just create an agent and point it at these same OneDrive files. You could also use a Copilot Notebook and drag Word files into the Notebook as references.

What the OneDrive agent does, however, from an ease‑of‑use and productivity standpoint, is make it really simple (almost no thinking, just a click) to create an agent that works across the docs you need and want.

Being able to interrogate the entire project pack in one go is a meaningful step toward the “AI-powered workplace” Microsoft has been forecasting for the last few years.

For organisations already deep into Microsoft 365 and Copilot, this is a simple addition. It won’t change the world but does make Copilot a little more accessible in a really quick and simple way.

Satya Nadella’s Call for an “AI Reset” – What Business Leaders Must Prioritise in 2026

Cut-out image of Satya Nadella

As I finished my first week back at work in 2026, I was thinking about Microsoft CEO Satya Nadella’s end‑of‑year reflection blog post, “Looking Ahead to 2026”, which he wrote at the end of December 2025.

This, for me, landed at an important moment for the industry: after a year dominated by “AI slop”, concerns over whether the AI boom was ending, and key AI stocks becoming a bit rocky. In the Microsoft space, there has been some “backlash” about how much AI Microsoft are “pushing” in their products, and the industry in general has struggled to really get big AI initiatives off the ground. We then, of course, have the upcoming chip shortages (which take us back to COVID times), with AI cited as the cause – memory especially – as AI data centre demand exceeds supply.

We have seen huge advances in AI models from OpenAI, Google, Anthropic and Microsoft this year, but the news has really been coined with the term “AI slop” – with AI appearing to be baked into everything whether people want it or not. Whoever thought Notepad in Windows would get the Copilot treatment!

Despite the negative press, AI is still very much the buzzword (heck, it’s driving all those component and chip shortages). At work, our clients are still talking about it, driving it forward and investing more into adoption, enablement and consultative engagements. Businesses remain “excited” about the opportunity AI can bring to businesses, consumers and the world!

In his blog post, Satya Nadella argues that we’re finally moving from discovery to diffusion. The message is clear: 2026 must be the year we separate spectacle from substance.

The rest of this blog looks at Nadella’s framing, the wider industry commentary, and what it means for organisations that need AI to deliver measurable value – not just noise, AI slop or no slop!

From “bicycles for the mind” to “scaffolding for human potential”

In his blog, Satya Nadella revisits Steve Jobs’ famous metaphor – computers as “bicycles for the mind” – and argues that it no longer captures the scale or nature of modern AI. Instead, he positions AI as scaffolding: a structure that supports, amplifies and extends human capability rather than replacing it.

This is a subtle but important shift. It reframes AI not as a tool we operate, but as a system that surrounds us — shaping how we work, decide, and collaborate.

Coverage from the Economic Times reinforces this by highlighting the need to move beyond the novelty phase and focus on real‑world impact. In short, the industry must stop debating “slop vs sophistication” and start designing systems that genuinely improve human cognition and outcomes.

From what I see in my engagement with business leaders, it means moving away from app‑ and model‑centric thinking (which is better, ChatGPT or Copilot?) and towards systems engineering within our lines of business. We need to look at how our business works today, where AI fits and where automation counts. Businesses need to work with Microsoft, with their technology partners and, most importantly, from within, to plan and deliver measurable outcomes.

“Aiming to please” is not alignment

One of the most important – and least discussed – issues in business‑led AI is the tension between:

  • AI that aims to please (reinforcement learning from human feedback, training, safety tuning, helpfulness scoring) where AI is just that assistant, that Copilot!
  • AI that is aligned (truthful, reliable, predictable, value‑consistent and therefore really useful) which is much more important as we look for it to help us make decisions, take action and do stuff!

These are not the same thing.

Research throughout 2025 showed that models tuned to be more “helpful” often become more sycophantic, more agreeable, and more likely to produce confident but incorrect answers. An interesting article I found in PC Gamer cites Microsoft‑linked research suggesting that over‑reliance on AI tools can actually reduce user capability over time, driven by a mindset of “AI can do anything” while at the same time “we don’t trust AI”. So, which is it?

This reinforces Satya’s view that, as we enter 2026, the priority must shift from “does it sound good?” to “does it stand up to scrutiny?”

The AI race is real – and so is the concentration of power

AI is now a currency, and we are seeing this everywhere. Leaders cite it, big tech companies are embracing it, and the models are doubling in capability in less than a 12‑month cycle (some say 8 months). Compute, data, and model access are becoming strategic assets – we are seeing this through component shortages! The global hyperscalers and a handful of model labs will capture disproportionate value, and enterprises will increasingly depend on them for:

  • model access
  • agent frameworks
  • safety and alignment layers
  • orchestration platforms
  • compute and optimisation pipelines

This shouldn’t be seen as negative, but it does require deliberate strategy. Vendor lock‑in, opaque safety layers, and proprietary agent ecosystems will shape enterprise risk profiles for years.

Then there is the vast number of other tech vendors embracing the security and safety layers – dozens of these are appearing, and will appear, to sell you AI safety and security tools while we wait for the leading AI models and companies to bake this in. Firms like Microsoft will likely do this well and make it part of their security and compliance approach across their wider platforms. Others may lean on other vendors through partnerships.

The biggest gap, perhaps, is still user awareness, adoption and controlling shadow IT. This becomes even more important as organisations and the world begin the next pivot, shifting from AI assistants to autonomous AI.

Again, Satya’s blog acknowledges this indirectly: the industry does not yet have “societal permission” for widespread autonomous AI. That permission will depend on trust, governance, and measurable outcomes – not just capability.

Two strategic paths for Corporate and Enterprise AI?

We can look at two different paths for how organisations use and leverage AI: the path of superior intelligence, and the path that recognises AI’s purpose as cognitive scaffolding.

Path A: Pursue superior intelligence

  • Focus on emergent capabilities and autonomy
  • High potential upside
  • High systemic risk
  • Harder to govern, audit, and predict

Path B: AI as cognitive scaffolding

  • Focus on augmentation, reliability, and workflow integration
  • Measurable value
  • Lower risk
  • Requires strong systems engineering

Satya calls out that most organisations will need to operate somewhere between these paths, but that the balance must be intentional.

What IT leaders should prioritise in 2026

There have been lots of posts around this from many different players, analyst firms and now little ol’ me! For me, nothing much has changed since last year (though the models have got cleverer, faster, cheaper!)

Outcome‑based evaluation: Benchmarks and demos are not enough. Businesses need to look at real use cases and go deep with proof‑of‑value production pilots. They need to determine what good looks like. It will likely require solutions and use cases that deliver task‑level accuracy and minimal hallucination, backed by longitudinal user‑impact studies they can be loud and proud of.

Human‑centred design: This partly means no AI for AI’s sake. That said, user familiarisation and confidence (as internet skills once were) are key for trust and understanding. Departments need to surface provenance, uncertainty, and verification steps when putting AI to work. Moving beyond assistive to authoritative AI should not be done lightly – trust is everything, and humans being in the loop is vital in order to avoid “auto‑execute” defaults for high‑risk actions.

Governance and entitlements: IT and compliance need to have (and, in the case of Copilot for example, enable) auditing to track which agents can access which data, and enforce least privilege by default. Audit everything, trust nothing, train everyone, and pilot, pilot, pilot; test, test, test.

Red‑team testing and observability: This goes beyond security and governance – it is about trusting AI to act as a human would. Use cases need to test for sycophancy, bias, adversarial prompts, and silent failure modes. For AI to work in place of human roles there needs to be trust, fallbacks, and no “time outs” in the middle of a conversation with an agent.

Skills protection: Training is key – organisations must pair automation with training to prevent deskilling and to upskill people on how inference and AI models work. We need to document everything, know what expected outcomes are, and re-evaluate each business process, AI model and task our AI is performing. We need to preserve the identity of what our business is about (this is the human factor) and retain our rooted institutional knowledge.

Final reflection

Satya Nadella’s call to move beyond “AI slop” is timely, and it is what many leaders are thinking. It’s hard to believe it is less than two years since Copilot reached general availability for the masses, and the industry has spent those two years chasing and demanding more capability, bigger models, flashier demos, and viral outputs.

As we sit here, closer to 2030 than to 2020, 2026 demands something different. AI needs to deliver on its promises. This is less about the AI itself and more about how organisations reshape, reinvest, and rebuild business models, systems, and processes with AI in mind. Technology change is not quick and simple, and AI is (in most cases) not a quick‑fix answer to underlying issues. For IT pros and business leaders, the message is clear:

  • Demand measurable value in pilots and projects
  • Prioritise trust and governance, and use a model/tool that fits into what you have
  • Treat AI as scaffolding, not a substitute for good processes and people
  • Build for augmentation, not automation (in the immediate term)
  • Evaluate outcomes, not optics, and do it in phases.

The AI race will continue to get faster. We will hear more success stories and more doom and gloom, feel left behind because we are not moving fast enough, and worry more about job losses, environmental impact, and cost. The economics will only intensify, but I believe the organisations that win will be those that operationalise AI responsibly, at the right pace, deliberately, and with a clear understanding of where the real risks and opportunities lie.

Thanks for reading or listening. Happy 2026!

References / Sources in this doc

  • Economic Times — “Microsoft CEO Satya Nadella calls for a big AI reset in 2026…”
  • PC Gamer — “Microsoft CEO Satya Nadella says it’s time to stop talking about AI slop…”

What is Copilot Checkout? Microsoft and PayPal’s new AI-Powered Commerce Experience.

Microsoft and PayPal have officially joined forces to launch Copilot Checkout, a groundbreaking integration that redefines the online shopping experience. This collaboration merges Microsoft’s conversational AI capabilities with PayPal’s trusted payment infrastructure, creating a seamless journey from product discovery to purchase, all within the Copilot interface on Windows, in the app, and on the web.

It’s rolling out initially in the USA and works with partners including PayPal and, later, Shopify, Stripe, and Etsy.

Intelligent Discovery Meets Trusted Payments

Copilot Checkout allows users to browse, evaluate, and complete purchases (check out) directly within the Microsoft Copilot experience, eliminating the need to switch between different apps, devices, or websites. By embedding PayPal’s agentic commerce services, the experience becomes not only faster but also more secure and flexible.

Key Features of Copilot Checkout

  • Conversational Shopping: Users interact with Copilot to explore products, compare options, and make informed decisions.
  • Integrated Payments: PayPal will power the end‑to‑end checkout flow, offering branded and guest checkout, credit card support, and even PayPal wallet integration.
  • Merchant Enablement: Retailers can surface and promote inventory and reach high‑intent customers at the moment of decision, without losing them to the “buy later” crowd.

Benefits for PayPal sellers and consumers

For sellers:

  • Lets them reach customers during high‑intent moments
  • Reduces friction and cart abandonment, and avoids the “buy later” mindset
  • Expands visibility across Microsoft touchpoints such as Bing, Edge, and of course Copilot

For Consumers:

  • Enables fast, flexible, in‑the‑moment checkout options (an Amazon‑like experience)
  • Stay within the Copilot experience from start to finish, with no need to switch platforms, devices, or apps
  • Benefit from PayPal’s buyer protections and payment versatility.

Early Adoption and Ecosystem Impact

At this early stage of the partnership, adopters include major brands such as Etsy and Urban Outfitters.

What’s more, Shopify merchants are auto-enrolled, while PayPal and Stripe sellers can opt in to the experience (Microsoft will need to convince them and sell the value).

This rollout marks a significant step in the evolution of agentic e-commerce, where AI agents act on behalf of users to simplify and accelerate transactions.

PayPal and Microsoft Partnership

In the announcement, Michelle Gill, GM of Small Business and Financial Services at PayPal, said:

“By integrating PayPal’s agentic commerce services with Copilot’s intelligent shopping platform, we are enabling seamless, reliable transactions for both merchants and consumers.”

Kathleen Mitford, CVP of Global Industry at Microsoft, shared:

“Retailers can automate what slows them down and amplify what sets them apart.”

What and when?

Copilot Checkout will become more than just a hidden new feature. Following OpenAI (and expect Google to follow), Microsoft and PayPal are looking to reshape consumer behaviour and drive more footfall to Copilot and PayPal. This is just the start, and Microsoft plans to work with other e-commerce providers.

The joint goal is to redefine how brands connect with customers in real time, with intelligence and trust at the core from both PayPal and Microsoft.

For more insights, visit PayPal’s announcement or explore Microsoft’s commerce vision.

What is Copilot Real Talk Mode? And how to use it.

Back at the “Fall update in October”, Microsoft announced that a new talk mode called “Real Talk” was coming to Copilot. This has been available in the US for a while in preview but is now available in the UK and other regions.

This (currently) optional chat mode allows Copilot to better adapt to the user’s tone by adding more personality, wit, and even “playful” challenges to its responses. The new “Real Talk” mode offers a more balanced and engaging conversational style, designed to be honest like a true “friend” whilst still staying polite. One critique of many chatbots is that they “tell you what you want to hear”. Real Talk aims to address this by providing a more balanced, open, and honest conversation, but still without being rude!

Jacob Andreou, Microsoft’s CVP of product and growth for AI, talks about this mode as being able to understand and “mirror” the user’s attitude, offer its own perspective, and not simply agree with everything said by the human. “Real talk aims to create a more interactive, human-like dialogue that goes beyond providing straightforward information. It will provoke users to think by occasionally questioning or offering alternative views, making conversations with Copilot more stimulating”.

How to use Copilot Real-Talk mode

This new mode is accessible through a dropdown menu and is currently limited to text chats (so not yet available in Copilot’s voice chat interface).

Copilot Real-Talk Mode

It’s available in the Windows Copilot App, Web (at https://copilot.microsoft.com) and mobile.

You can also switch to Real Talk mode at any point, but this does start a new conversation. To do this, simply click the dropdown and change the chat mode:

Copilot Real-Talk in Action

In this example, I gave Copilot a simple statement: “Many people still think the earth is flat – what do you think about that?”. I ran this in Copilot’s default chat mode and then again in “Real Talk” mode.

I’ve also created a video on this where I go into more detail.

Copilot Standard Chat Response

The tone and length of the response were quite typical – a balanced view, not wanting to say the wrong thing, and looking at both angles whilst still stating the facts.

"It’s a fascinating example of how belief can diverge from evidence.

From a scientific standpoint, the Earth being spherical isn’t up for debate. It’s supported by centuries of observations and measurements, including satellite imagery, GPS systems, gravity measurements, time zones, airline routes, and even how shadows behave at different latitudes. These aren’t abstract theories; they’re practical realities that modern life depends on every day.

What is interesting is why flat‑Earth beliefs persist. They tend to be less about geography and more about psychology and trust:

- Distrust of institutions – Some people see flat‑Earth ideas as a way to reject governments, scientists, or perceived elites.
- Community and identity – Belonging to a group that “knows the truth” can be powerful.
- Misunderstanding of science – Scientific methods can feel opaque or intimidating, making alternative explanations seem appealing.
- Algorithmic reinforcement – Online platforms can amplify fringe ideas by repeatedly showing similar content.

I don’t see flat‑Earth belief as a failure of intelligence. It’s more a failure of critical thinking skills being nurtured and a breakdown in trust between experts and the public. When people feel excluded from knowledge, they’re more likely to reject it entirely.

The real challenge isn’t correcting the shape of the Earth; it’s rebuilding curiosity, scientific literacy, and trust in evidence-based reasoning. That’s a much harder problem—and a far more important one."

Copilot Real-Talk Mode Response

This was much more concise and to the point. What do you think, and which do you prefer?

It's wild that in 2025, with satellite photos and physics we use daily (like GPS literally requiring round-earth calculations), some folks still reject basic science. The psychology behind it fascinates me though—it's less about evidence and more about wanting to feel special, distrust of institutions, or finding community in shared beliefs.

Copilot gets free GPT‑5.2 upgrade with “Smart Plus” mode

Microsoft has begun rolling out GPT‑5.2 across Copilot on the web, Windows, and mobile as a free upgrade. It sits alongside GPT‑5.1 rather than replacing it, giving users a clear choice between speed and deeper reasoning.

GPT‑5.2 is described as OpenAI’s strongest model series so far, designed to accelerate real work: building spreadsheets and presentations, writing and reviewing code, analysing long documents, using tools, and working with images. Copilot only moved to GPT‑5.1 for Smart mode in November, and Microsoft is now introducing GPT‑5.2 as a new “Smart Plus” option.

GPT‑5.2 brings a higher‑reasoning model for complex tasks

The updated version of OpenAI’s GPT arriving in Copilot is the “Thinking” variant of GPT‑5.2 – the same model Microsoft highlights for complex, multi‑step tasks and strategic reasoning.

GPT‑5.2 benchmark performance data

OpenAI’s own release notes describe GPT‑5.2 Thinking as expert‑level for well‑specified office tasks and significantly more reliable than previous models. Industry benchmarks reinforce this:

  • On knowledge‑work tasks across 44 occupations (GDPval), GPT‑5.2 Thinking beats or ties industry professionals 70.9% of the time (vs 38.8% for GPT‑5). 
  • Coding performance is significantly higher, scoring 55.6% on SWE‑Bench Pro and 80% on SWE‑Bench Verified — both ahead of GPT‑5.1 Thinking. 
  • It posts strong reasoning scores: GPQA Diamond 92.4%, AIME 2025 100%, and CharXiv Reasoning with Python 88.7%.

Microsoft Makes Model Choice a core Copilot feature

Microsoft has also rolled out GPT‑5.2 into the commercial Microsoft 365 Copilot and Copilot Studio products, where it appears in the model selector alongside GPT‑5.1 and GPT‑5.2 Instant. This aligns with Microsoft’s broader strategy of giving users explicit model choice depending on the task.

In Microsoft 365 Copilot, GPT‑5.2 connects to Work IQ (aka Microsoft Graph) – Microsoft’s intelligence layer that reasons across meetings, emails, documents, and organisational data to unlock insights and strategic planning scenarios.

More Value for Copilot users

Microsoft frames this as part of its ongoing commitment to model choice across Copilot experiences, ensuring users can pick the right model for the job rather than relying on a single default. For consumers this is huge value: OpenAI still makes the 5.1 and 5.2 models exclusive to premium subscribers, whereas Microsoft gives the latest models away for free (in Windows 11, on the web, and in the mobile apps).

For the consumer (non‑commercial/enterprise) version, Smart Plus becomes the “high‑reasoning” lane inside Copilot. It’s designed for:

  • Multi‑step workflows 
  • Strategic analysis and planning 
  • Complex document understanding 
  • Code generation and review 
  • Tasks that benefit from slower, more deliberate reasoning 

GPT‑5.2 Instant remains the fast, everyday model, while GPT‑5.2 Thinking powers Smart Plus for deeper work.


Automatic Alt Text on Copilot+ PCs: A Small Feature with a Big Accessibility Impact

Microsoft has rolled out a useful update to Office suite apps like Word, Excel, and PowerPoint that adds automatic, “on‑device” alt text generation for images. This is a great way of helping content become more inclusive by providing automatic (but editable) descriptions of images, graphics, graphs, etc. It helps screen readers, and also AI tools, better understand documents.

Alt Text Auto Generation is for Copilot+ PCs only

This great feature leverages the on-chip NPU of Copilot+ PCs and is currently not available on older (non-AI) PCs. Sometimes the most valuable improvements are the ones that make good practice the default — especially for accessibility and inclusion.


This is a great way of ensuring content becomes meaningfully accessible by default.

You don’t need to do anything to enable this feature. You will need to be running the latest version of the Office apps on your device.

  • Word, Excel and PowerPoint now generate Alt Text instantly as you insert images.
  • The processing for this happens on‑device, so nothing is sent to the cloud.
  • It’s currently only for Copilot+ PCs, taking advantage of the NPU
  • Authors can still edit or override the generated text, keeping humans in control.

GPT-5.2 now available in Microsoft 365 Copilot

Microsoft has just (11th Dec) started rolling out OpenAI’s GPT‑5.2 across Microsoft 365 Copilot and Copilot Studio, marking another significant leap in AI-powered productivity.

Compared with GPT‑5, GPT‑5.2 provides “a significant upgrade in performance across various metrics”. For example, GPT‑5.2 has achieved higher scores on benchmarks like ARC‑AGI‑2 and GPQA Diamond, which indicate improved abstract reasoning and scientific knowledge. In coding tasks, GPT‑5.2 outperformed GPT‑5.0 and 5.1 in accuracy and speed. The model also excelled on the CharXiv Reasoning and AIME 2025 benchmarks, which measure LLM capabilities in advanced mathematics and problem‑solving.

The update delivers two models under one banner:

  • GPT‑5.2 Thinking, ideal for deep reasoning and complex problem-solving, and
  • GPT‑5.2 Instant, tailored for everyday tasks like writing, translation, and learning – all now integrated into Work IQ for actionable insights across emails, meetings, and documents.

Within Microsoft 365 Copilot (as this rolls out), users can manually select GPT‑5.2 from the model menu in both Copilot Chat and Copilot Studio, enabling smarter decision-making for tasks such as:

  • Preparing insights ahead of meetings.
  • Conducting comparisons and analysis, such as market research or reviewing reports.
  • Extracting strategic takeaways linked to objectives and milestones.

Model Selection for GPT‑5.2 in Copilot Chat

Microsoft has said that Copilot Studio agents will transition automatically from GPT‑5.1 to GPT‑5.2 in early-release environments, and that early in 2026 the default model in Copilot Chat will shift from GPT‑5 to GPT‑5.2.

These changes deliver improvements for tasks requiring complex reasoning and problem-solving skills. How they fare in day-to-day use will be down to you, as a user, to evaluate and determine. For now, we get the choice to dip in and out.

This rapid launch reinforces Microsoft’s commitment to offering model choice, allowing users to access the latest innovation tuned for enterprise-grade security, compliance, and performance.

More Anthropic Models coming to Microsoft Copilot

Microsoft is making a major change to how AI models are integrated into Copilot experiences. From 7 January 2026, Anthropic’s models will be enabled by default for Microsoft 365 Copilot licensed users, moving away from the current opt-in setting to a standard feature under Microsoft’s own contractual terms rather than Anthropic’s.

What’s Changing?

  • Default Enablement: Anthropic models, which were previously optional, will now be switched on by default for most commercial cloud customers. UK and EU/EFTA customers will find this OFF by default, requiring manual opt-in; for all others it will be ON.
  • Microsoft Subprocessor Status: Anthropic is now a Microsoft subprocessor, meaning its operations fall under Microsoft’s Data Protection Addendum and Product Terms (previously, Anthropic usage was governed by Anthropic’s own Commercial Terms).
  • Admin Controls: A new toggle should now be active in the Microsoft 365 Admin Centre.

Why It Matters

This update extends Microsoft’s enterprise-grade data protection standards to Anthropic-powered Copilot features and positions Microsoft as a secure broker around AI models, with less dependence on OpenAI alone. Working with companies like Anthropic under this “AI subprocessor” approach ensures:

  • Contractual Safeguards: Anthropic operates under Microsoft’s direction and compliance frameworks.
  • Choice and Flexibility: access to the specific models best suited to each task, driving quality and refinement in Copilot.
  • Enterprise Data Protection: Your data remains covered by Microsoft’s commitments, including the DPA and Product Terms.

Why Microsoft Is Adding Anthropic Support

Microsoft’s goal is to give organisations more choice and flexibility in Copilot experiences. Anthropic’s Claude models are known for strong reasoning and safety alignment, which complements Microsoft’s own AI capabilities. By onboarding Anthropic as a subprocessor, Microsoft can:

  • Offer advanced generative AI features in Word, Excel, PowerPoint, and Copilot Studio.
  • Maintain consistent compliance and security standards across all integrated models.
  • Enable customers to select external models for specialised use cases without sacrificing enterprise-grade protections.

Regional and Cloud Exceptions

  • UK & EU/EFTA: Toggle remains OFF by default; admins need to opt in.
  • Government & Sovereign Clouds: Anthropic models are not yet available.

Controlling access to other AI providers like Anthropic

To do this, head to the Microsoft 365 Admin Centre, go to Copilot > Settings, and choose the Data Access tab. From there, decide whether to enable (or disable) access.

Looking Ahead

This change signals Microsoft’s commitment to expanding AI capabilities responsibly by leveraging the best model for the job or task. Enabling Anthropic (and other models) unlocks richer functionality – especially in Word, Excel, PowerPoint, and Copilot Studio – while maintaining strong data protection standards and still giving organisations choice.

Microsoft Ignite 2025 – “The Agentic Shift”

Microsoft used Ignite 2025 to tell the world that “agents are now the primary interface for enterprise work”. The focus throughout Ignite was the evolution from chatbots to multi-discipline teams of agents across organisations, and the underlying architectural developments being made to make this work.


This was about spelling out the developments across the platform and the unifying of intelligence layers so agents can be treated as identity‑bound workers, allowing security and governance to be baked in the same way we manage human workers today. Below is a summary of the key aspects and ingredients of this approach.

For more on the latest developments and Microsoft Research on AI Frontier Firms, check out WorkLab


Fabric IQ

What it is: Fabric IQ layers semantic meaning and business ontologies on top of data stores so agents reason in terms of customers, orders, assets, and events rather than raw tables.

Why it matters: Agents that lack consistent business context make brittle or risky decisions. Fabric IQ gives agents a shared vocabulary and entity model, reducing ambiguity across analytics, automation, and agent workflows.

Example: A retail replenishment agent uses Fabric IQ to translate “low stock” signals into SKU hierarchies, supplier lead times, and regional demand forecasts. The agent then creates the correct purchase order, selects the right vendor SLA, and schedules delivery windows – cutting stockouts and manual reconciliation time.
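To make the replenishment example concrete, here is a minimal, purely illustrative Python sketch of what a semantic layer buys an agent. This is not Fabric IQ’s actual API – every name here (the `SkuContext` class, the `ONTOLOGY` table, the `plan_replenishment` helper) is a hypothetical stand-in – but it shows the core idea: a raw “low stock” signal is resolved against a business ontology (entity, supplier, lead time, reorder point) before any action is taken.

```python
from dataclasses import dataclass

# Hypothetical ontology entry: business context attached to a raw SKU signal.
@dataclass
class SkuContext:
    sku: str
    product_family: str
    preferred_supplier: str
    lead_time_days: int
    reorder_point: int

# Toy "semantic layer": maps raw identifiers to business entities.
ONTOLOGY = {
    "SKU-1042": SkuContext("SKU-1042", "Outdoor Furniture", "Acme Supply", 14, 50),
}

def plan_replenishment(sku: str, units_on_hand: int) -> str:
    """Resolve a raw 'low stock' signal into a business-level action."""
    ctx = ONTOLOGY.get(sku)
    if ctx is None:
        # Without business context the agent cannot act safely.
        return f"{sku}: unknown SKU - escalate to a human"
    if units_on_hand >= ctx.reorder_point:
        return f"{sku}: stock OK - no action"
    # With context, the agent can raise the right PO with the right vendor.
    return (f"{sku}: raise PO with {ctx.preferred_supplier}, "
            f"expect delivery in {ctx.lead_time_days} days")
```

The point is the lookup step: an agent reasoning over raw tables sees only a number, whereas one backed by a semantic layer sees a supplier, an SLA, and a reorder policy.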

Foundry Control Plane

What it is: Foundry will become the unified platform for building, routing, and operating agents with model selection, versioning, and governance hooks built in. It includes a built‑in control plane that brings security, observability, and cost signals directly into the developer experience. It surfaces alerts, policy violations, performance issues, and budget warnings where teams already build – and exposes Microsoft Entra, Defender, and Purview controls as simple toggles so identity, data protection, and threat detection can be enabled without re‑architecting toolchains.

Why it matters: Foundry reduces model selection risk, standardises deployment patterns, and makes governance a first‑class concern rather than an afterthought. It’s the place teams certify agents, attach policies, and monitor behaviour. It means developers can ship agents that are secure from day one, remain protected through updates, and automatically flow into Agent 365 governance at deployment. By meeting teams in their existing workflows, Microsoft removes much of the friction between building agents and securing them — a much needed “fix” for an area that has tripped up many enterprises and halted mass adoption.

Example: A financial services team certifies an underwriting agent in Foundry. The control plane enforces data access policies, routes sensitive scoring to on‑prem models, and produces audit trails for regulators – enabling faster production rollouts without compromising compliance.

Agent 365

What it is: Agent 365 treats agents like employees: Entra identities, scoped permissions, lifecycle management, and telemetry. It produces a security framework for how agents should be deployed and managed within an organisation. It gives IT and SecOps teams a single, consistent way to discover, manage, and secure agents wherever they’re built.

Agents are already changing how people work, and IDC predicts there will be 1.3 billion agents by 2028. Agent 365 is the control plane for agents, extending the infrastructure you trust to manage your people to agents.

Microsoft Ignite 2025

Agent 365 extends into the agent world the same Entra ID controls organisations already use to manage their people – Entra Agent ID for identity and access, Defender for threat detection and posture, and Purview for data protection and compliance. Rather than new tools, this provides a familiar control plane where agents can collaborate and interact with users, keeping behaviour visible inside the tools people already use.


Why it matters: When agents have identity and governance, organisations can scale fleets safely. You can onboard, revoke, and audit agents the same way you manage human users.

IT gets a central registry of every agent across the estate; developers can register third‑party or custom agents via SDKs; and security teams gain continuous monitoring, anomaly detection, and enforcement capabilities such as conditional access.

For organisations wrestling with shadow AI and governance gaps, Agent 365 is a practical answer – it treats agents as first‑class entities that must be onboarded, scoped, audited, and revoked just like human accounts.

Example: An HR department deploys multiple document‑generation agents. Agent 365 ensures each agent only accesses approved templates and employee records, preventing accidental PII exposure and making incident investigations straightforward.

Azure Copilot for Cloud Ops

What it is: Azure Copilot orchestrates specialised cloud agents for migrations, observability, remediation, and optimisation – this can turn runbooks into agent workflows.

Why it matters: Cloud operations shift from manual dashboards to intent‑driven orchestration. Agents can triage, remediate, and coordinate across services faster than human‑only teams.

Example: During a cross‑region outage, Azure Copilot coordinates agents to triage logs, roll back a faulty deployment, and provision temporary capacity – reducing mean time to recovery from hours to minutes.

Conclusion

Ignite 2025 reframed “AI” from an add‑on into an operational fabric for every organisation, as we start to transition into the next phase of AI maturity and adoption.

Microsoft’s combination of Fabric IQ, Foundry, and Agent 365 creates a practical path to agentic operations for business, but success still very much depends on the fuel for AI – disciplined data modelling, governance by design, and small, measurable pilots.

Organisations that treat agents as governed, identity‑bound teammates will be the ones that gain speed and resilience fastest – those that don’t will face sprawl, risk, and stalled or failed initiatives.

What is Work IQ?


Microsoft Ignite 2025 saw Microsoft fully committed to agentic AI as the next platform layer. Across all of the briefings, keynotes, technical sessions, and partner announcements, Microsoft repeatedly emphasised that AI is no longer an add-on – it is becoming the “operating system” for modern work.

Alongside this, the announcement of Work IQ was probably one of the highest-impact announcements of Ignite 2025. It was made during the day-one keynote, hosted by Judson Althoff (CEO of Commercial Business at Microsoft) and LinkedIn CEO Ryan Roslansky.

What is Work IQ?

Work IQ was positioned as a multi-level intelligence layer that delivers company-specific, job-specific, and user-specific data to inform Microsoft 365 Copilot and agents. This was not about smarter tools, models, or even a new first-party agent; instead, it was about Microsoft staying true to their initial vision of Copilot: a new layer of contextual intelligence that (unlike other AI tools) adapts to how your organisation actually works.

This layer actually includes Work IQ, Fabric IQ, and Foundry IQ – each designed to accelerate AI innovation and support organisations in becoming Frontier Firms. But Work IQ is the cornerstone of what this is all about.

Microsoft positioned Work IQ as being built on three pillars: 

  • Inference – ability to connect dots, predict next actions, and recommend the right agent. 
  • Data – the rich knowledge inside emails, files, meetings, chats, and Entra ID
  • Memory – your specific preferences, habits, and workflows. 

This framework enables Copilot to access data, retain memory, and understand how tasks and tools interact. Inference helps predict the most suitable action or agent for each job. 

Work IQ is deeply integrated into Microsoft 365 apps like Word and Excel, enabling Copilot to continuously learn through what Microsoft calls an AI-powered feedback loop. This loop far surpasses traditional connectors by retaining context and evolving with your business. 

Instead of reactive assistants waiting for prompts before acting, Work IQ interprets relationships, intent, and context. It connects documents, meetings, and workflows into a living map of organisational knowledge, powered by the Microsoft Graph – a competitive advantage no rival can match. Whilst other AIs such as ChatGPT can “plug in” to Microsoft 365 via APIs, this is nothing compared to the power of the Microsoft Graph (which is essentially what Work IQ is). Work IQ (the Microsoft Graph) is about value creation inside the governance boundary: it respects permissions, compliance, and sensitivity labels, making it a trusted foundation for enterprise AI.

Agents Powered by Work IQ

Work IQ is also the engine of the agent era. Agents can only act independently when they understand environment, history, dependencies, and intent. Without context, an agent is just a reactive assistant with a fancier name. With Work IQ, Microsoft enables true agentic behaviour – agents that can coordinate, reason, and act across the enterprise. 

This aligns with Microsoft’s broader vision of “Frontier Firms” – organisations that are human-led and agent-operated. Microsoft say that, already, more than 90% of the Fortune 500 use Microsoft 365 Copilot, and Work IQ is set to deepen that reliance by embedding intelligence into everyday workflows.  This got huge cheers at Ignite!

With Work IQ, Microsoft are raising the standard of the AI workplace – positioning itself as the core intelligence layer of the modern organisation and building the foundation of the next decade of digital work.   

What about other 3rd Party AIs?

Many of Microsoft’s competitors are able to “plug in” to its apps through plug-ins. ChatGPT, Gemini, Salesforce, ServiceNow, Atlassian, and others can all build smart vertical intelligence inside their own stacks or products, but in terms of wider enterprise context, they operate in silos. Even with APIs, they do not have the awareness, context, and breadth of reach that Microsoft has with the connected Microsoft ecosystem and the Graph. They lack a unified fabric, and without it, they cannot deliver the seamless agentic experience Work IQ enables.

Microsoft isn’t trying to win the AI assistant market as such; they are building the architecture for the agentic workplace. Think of it as the difference between building a smart car and building the entire road system it drives on.

Final Thoughts

Work IQ is the missing piece that makes Copilot more than a productivity tool. It’s the connective tissue that transforms AI from assistant to organisational intelligence.

The real question is how quickly enterprises will adapt to this new standard, and how competitors will respond when Microsoft have just raised the bar so high. 

With Security Copilot now part of Microsoft 365 E5 – what do you actually get?

At Ignite this week, Microsoft announced that Security Copilot will now be included in Microsoft 365 E5 (and E5 Security) at no additional cost. Security Copilot delivers “AI-powered, integrated, cost-effective, and extensible security capabilities” that elevate the efficiency and resilience of an organisation’s IT security operations (SOC).

So, what does it actually include and what are the catches?

1. Integrated AI-driven defense across the Microsoft stack

Security Copilot agents are natively embedded into Microsoft Defender, Entra, Intune, and Purview, which means that IT and security teams don’t need to juggle separate tools. This allows for a single, cohesive workflow where identity, endpoint, data, and threat protection are all reinforced by AI and can be reviewed, configured, and monitored with just a prompt!

2. Autonomous and proactive protection

As part of this announcement, Microsoft has also introduced a dozen new AI agents that enable “agentic defense” — adaptive, autonomous responses to threats. Instead of just alerting, Copilot can recommend or even automate actions, helping teams stay ahead of evolving attacks, flag reasons for concern, and plan for action.


3. Included at no additional cost with E5

For Microsoft E5 customers, Security Copilot will now be included as part of the core entitlement.

Here’s the important part: Organisations receive 400 Security Compute Units (SCUs) per month per 1,000 users, scaling up to 10,000 SCUs/month — enough to cover (Microsoft say) most typical enterprise scenarios without extra spend.
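Those numbers imply a simple entitlement calculation. A quick sketch (assuming, purely for illustration, that the grant applies per full block of 1,000 licensed users – Microsoft’s exact proration rules aren’t stated in the announcement):

```python
def monthly_scu_entitlement(licensed_users: int) -> int:
    """Estimate the monthly SCU grant described in the announcement:
    400 SCUs per 1,000 E5 users, capped at 10,000 SCUs/month.
    Assumes whole blocks of 1,000 users; actual proration may differ."""
    return min(10_000, (licensed_users // 1000) * 400)
```

So a 5,000-user tenant would receive 2,000 SCUs/month under this reading, and the cap is reached at 25,000 users; larger organisations should check actual SCU consumption against the cap.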

4. Faster incident response and investigation

Security Copilot is designed to accelerate triage, root-cause analysis, and remediation by summarising complex signals into actionable insights. This can significantly reduce mean time to detect (MTTD) and mean time to respond (MTTR), freeing analysts to focus on strategic threats rather than repetitive tasks.


5. Customisation and extensibility

Beyond the built-in agents, Microsoft also provides extensive developer tools and APIs so organisations can create custom agents or securely connect other systems, tailored specifically to their environment. This means you can configure Security Copilot for unique workflows, integrate it with third-party systems, and align it with your specific compliance or operational needs.

Surface Management Portal is also included 🙂

Enablement

Depending on your organisation, you might qualify for funded workshops for awareness and enablement of Security Copilot. Speak to your Microsoft Partner to find out more.

Read more at Microsoft Learn:

Sora-2 now in Microsoft 365 Copilot

Sora 2 - Copilot

At Ignite 2025 this month, amongst a long list of AI and Security updates, Microsoft announced that OpenAI’s Sora 2 text-to-video model is now integrated into Microsoft 365 Copilot via the Create agent, bringing AI video into enterprise productivity.

Sora 2 can make content much more realistic than the previous version of Sora and has earned both praise and criticism, since AI-generated videos are quite a debated and controversial topic. Sora 2 also supports a “cameos” feature that creates the likeness of a person that can then be placed in content – again met with mixed opinions.

Sora 2 is available today (in the US) and rolling out to other regions, for Microsoft 365 Commercial users who are part of Microsoft’s Frontier program.

What’s New with Sora 2

For those not familiar with Sora 2, the integration into Microsoft 365 Copilot (at no additional cost) brings:

  • Improved realism and physics: Videos now follow motion dynamics more closely, from gymnastics routines to buoyancy on water.
  • Longer, coherent clips: OpenAI’s Sora 2 can generate richer, more sustained video sequences than its predecessor.
  • Cameos feature: Users can insert likenesses (with consent) into videos, opening up new possibilities for training and storytelling.
  • Enterprise integration: Within Copilot’s Create experience, commercial users in the Frontier program can generate short clips, add voiceovers, music, and brand kit elements for consistency.

Whilst this may still feel like a novelty, it shows how far the technology has come and unleashes new levels of quality, allowing creators and marketers to embed video creation into the same environment where organisations already manage documents, presentations, and collaboration.

How to Access Sora-2 in Copilot

Users with a Microsoft 365 Copilot license can create video projects with Copilot (powered by Sora 2). Videos can include voiceovers, leverage your organisation’s brand kit, and then be edited in Clipchamp to add music and other visual elements.

Note: Today, your organisation must be enrolled in the Copilot Frontier (early adopter) programme.

Why It Matters for Microsoft 365 Customers

Microsoft positions Copilot as a multimodal hub, combining text, images, documents, audio, and now realistic video. For enterprises, this means:

  • Marketing teams can rapidly prototype campaign assets.
  • HR and L&D can produce onboarding explainers without outsourcing.
  • Anyone can create and enrich presentations with dynamic video narratives.

Since all this happens inside Microsoft 365, identity, compliance, and governance frameworks apply. That’s a major differentiator compared to consumer-first AI video tools and helps businesses enable this level of creativity without risking corporate data leakage.

Video also coming to Copilot Notebooks

Alongside this new feature, Microsoft is also bringing video into Copilot Notebooks. Along with the already available audio podcast feature, Copilot Notebooks can now create enhanced overview pages, proactive topic suggestions, and… wait for it, audio and video summaries and podcasts.

What’s Next?

Sora 2 in Copilot is more than a feature—it’s a signal of where enterprise communication is heading. Video will sit alongside slides, spreadsheets, and documents as a default medium. The organisations that thrive will be those that treat AI video not as a gimmick, but as a strategic lever for clarity, engagement, and impact.

Read Microsoft’s Official Post here:
Available today: OpenAI’s Sora 2 in Microsoft 365 Copilot | Microsoft Community Hub

AI Explained: 9 Key Concepts You Need to Know in 2025

Artificial intelligence, whilst now a phrase used in most of our daily lives, can feel huge, strange, unknown, scary, exciting and sometimes even intimidating. In this post I decided I would strip back the noise and waffle and share nine crisp, usable concepts. I’ve aimed to provide clarity over jargon and practical examples over theory.

Before I start, and to put things into familiar brands, here are a few AI tools and brands you will already know or at least have heard of:

1. Common AI Tools to know about

  • ChatGPT – What really started the world of “publicly accessible” generative AI chatbots. ChatGPT (version 5 is the current release) is a conversational AI that generates text, pictures, and even video. It can answer questions and help with creative writing. It’s a clear example of generative AI in action, showing how large language models can produce human‑like responses. Free and paid versions.
  • Copilot (Microsoft) – leverages many different AI models, including OpenAI’s, Microsoft’s own, and others. It can do much of what ChatGPT can do, but is also integrated across line-of-business apps and data like Word, Excel, PowerPoint, and Windows. Copilot acts as an AI agent that helps you create, draft, analyse, and even automate tasks. It’s a practical demonstration of how AI agents and retrieval techniques can boost productivity. Free tier (ChatGPT Pro equivalent) and Premium for Consumer/Family; Microsoft 365 Copilot for business use.
  • Google Gemini – Google’s AI assistant that blends search with generative capabilities, pulling in live information to give context‑aware answers. Free and Paid tiers.
  • GitHub Copilot – A developer‑focused AI that suggests code snippets and functions in real time. It shows how reasoning models and pattern recognition can accelerate software development.
  • MidJourney / DALL·E – Image generation tools that turn text prompts into visuals. These highlight the creative side of AI, where models learn patterns from vast datasets and apply them to new artistic outputs.
  • Perplexity – Great for research including financial data and educational content. Has free and paid versions.
  • Siri / Alexa – typically home style voice assistants that act as simpler AI agents, interpreting commands and connecting to external systems like calendars, music apps, or smart home devices. Great for simple tasks like “what is weather like today” and for linking to smart home devices – “Alexa, turn on the porch light“.

If you are just starting out, the easiest way to decide which AI tool to use is to match the tool to the problem you’re trying to solve. If you need help writing or brainstorming, generative text tools like ChatGPT or Copilot in Word are ideal. If you’re working with numbers or data, Copilot in Excel can analyse and visualise patterns for you. For deeply creative projects, image generators like MidJourney or DALL·E turn ideas into visuals, while GitHub Copilot accelerates coding tasks. The key is not to chase every shiny new AI release, but to ask: what am I trying to achieve, and which tool is designed for that job? Start small, experiment with one or two tools in your daily workflow, and build confidence before expanding into more advanced applications.

Which AI in 5: Pick the AI tool that fits your task – writing, data, images, or code – and grow from there.

2. What is Artificial Intelligence (AI)

Artificial Intelligence (AI) is not really a product, though buzzword bingo might have people say ChatGPT or Copilot (at work); it is far more than that! AI is a broad field of computer science focused on creating systems that can perform tasks which normally require human intelligence. These tasks include many things such as recognising speech, interpreting and understanding images and videos, making decisions, and even generating creative content such as music, videos and images. As of 2025, AI is already embedded in many aspects of our everyday lives – in work and in personal life – from recommendation engines on Netflix to fraud detection in banking, to summarising meetings at work.

At its core, AI combines data, algorithms, and computing power to simulate aspects of human cognition, but it does so at a scale and speed that humans could never achieve.

AI in 5: AI is machines learning, reasoning, and acting like humans.

3. AI Agents

Right, so an AI Agent is a system designed to act autonomously in pursuit of a goal. Unlike traditional software that follows rigid instructions, agents can perceive their environment, make decisions, and take actions with or without constant human input.

For example, a customer service chatbot is an agent that listens to queries, interprets intent, and responds appropriately. More advanced agents can coordinate multiple tasks, such as scheduling meetings, analysing reports, or even controlling robots in manufacturing.

The key is autonomy: agents don’t just follow orders—they adapt to changing conditions.

AI Agents in 5: AI agents are digital helpers that think and act for you.

4. Retrieval-Augmented Generation (RAG)

RAG is a technique that makes AI more reliable by combining generative models with external knowledge sources such as the web or data from corporate SharePoint sites, email, etc.

Instead of relying solely on what the AI model was trained on (which may be outdated or incomplete), RAG retrieves relevant documents or data in (near) real time and integrates them into its response.

This is especially powerful in business contexts, where accuracy and timeliness are critical – for example, pulling the latest compliance rules or product specifications from an application or data repository, before answering a query. RAG bridges the gap between static training data and dynamic, real-world knowledge.

RAG in 5: RAG = AI that looks things up from multiple sources before answering.
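To make the retrieve-then-generate loop concrete, here is a minimal sketch. The documents, the naive keyword "retrieval", and the answer format are all invented for illustration; a real RAG pipeline would use an embedding index and an LLM call in place of these stand-ins.

```python
# Toy RAG loop: retrieve matching snippets, then ground the answer in them.
DOCUMENTS = {
    "policy-2025.txt": "Annual leave requests must be submitted 14 days in advance.",
    "spec-v2.txt": "The v2 API returns JSON and requires an API key.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword match, standing in for a vector similarity search."""
    return [text for text in DOCUMENTS.values()
            if any(word in text.lower() for word in query.lower().split())]

def answer(query: str) -> str:
    """The 'generation' step, grounded in whatever was retrieved."""
    context = " ".join(retrieve(query)) or "No matching documents found."
    return f"Based on the retrieved sources: {context}"

print(answer("annual leave policy"))
```

The point is the shape of the loop: look up first, then answer from what was found, rather than from training data alone.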

5. Explainable AI (XAI)

One of the biggest challenges with AI is the “black box” problem. What I mean by that is that we often do not know how an AI arrived at its decision or answer.

Explainable AI addresses this by making the reasoning process transparent and understandable to humans. For instance, if a bank uses an AI model to determine whether a customer can get a loan and that model rejects the application, XAI will highlight and explain the factors, such as credit history or income, that influenced the decision.

In essence, this is about seeing its workings out. If you have used Microsoft’s Researcher or Analyst agents at work, you will have seen some of this as they do their work.

This transparency is vital in ensuring we can trust AI and is required in regulated industries like healthcare, finance, and law, where accountability and fairness are non-negotiable.

By opening this black box, XAI builds trust and ensures AI is used responsibly.

XAI in 5: XAI shows you why the AI answers the way it did, what information it used and how it made its choice.
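A tiny sketch of the loan example above: with a simple linear scoring model, the per-factor contributions are the explanation. The weights, applicant values, and threshold here are invented purely to illustrate the idea, not a real credit model.

```python
# Each factor's contribution (weight x value) explains the decision.
weights   = {"credit_history": 0.6, "income": 0.3, "existing_debt": -0.5}
applicant = {"credit_history": 0.2, "income": 0.9, "existing_debt": 0.8}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approve" if score >= 0.3 else "reject"

print(f"Decision: {decision} (score {score:+.2f})")
for factor, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {factor}: {c:+.2f}")  # most negative factor listed first
```

Here the negative existing_debt contribution is what tips the decision – exactly the kind of factor an XAI tool surfaces instead of leaving it hidden in the black box.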

6. Artificial Super Intelligence (ASI)

While today’s AI is powerful, it is still considered “narrow AI” – specialised in specific tasks despite the advances we see every week.

Artificial Superintelligence (ASI) is a (some say) theoretical future state where machines surpass human intelligence across every domain, from scientific discovery to emotional understanding.

Many might be thinking “The Terminator” here, but in reality it is more than conceivable, given the current pace of evolution, that ASI could one day design new technologies, solve global challenges, or even “create” beyond human imagination.

This naturally raises profound ethical and safety concerns: how do we ensure such intelligence aligns with human values and what happens if ASI becomes smarter than the humans that created it?

ASI remains speculative and there are many opinions and research on the matter, but today it is a concept that drives much of the debate around the long-term future of AI.

ASI in 5: ASI is the idea of AI being smarter than all humans in every way.

7. Reasoning Models

Traditional AI models excel at recognising patterns, but they often struggle with multi-step logic.

Reasoning models are designed to overcome this by simulating structured, logical thought processes. They can break down complex problems into smaller steps, evaluate different pathways, and arrive at conclusions in a way that mirrors human reasoning.

This makes them especially useful in domains like legal analysis, financial analysis, scientific research, or strategic planning, where answers are not just about recognising patterns and finding information but about weighing evidence and making defensible decisions, in a way similar to how we as humans might undertake such work.

Reasoning Models in 5: Reasoning models let AI think step by step like us.

8. Vector Databases

AI systems need efficient ways to store and retrieve information, and that’s where vector databases come in.

Unlike traditional databases that store data in rows and columns, vector databases store information as mathematical vectors – dense numerical representations that capture meaning and relationships.

This allows AI to perform semantic searches, finding results based on similarity of meaning rather than exact keywords. For example, if you search for “holiday by the sea,” a vector database could also return results for “beach vacation” because it understands the conceptual link.

Vector Databases in 5: Vector databases help AI find meaning, not just words.
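The "holiday by the sea" example can be sketched with cosine similarity. The three-dimensional vectors below are hand-made to illustrate "similar meaning = nearby vectors"; a real vector database stores hundreds of dimensions produced by an embedding model.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: closer to 1.0 means closer in 'meaning'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

vectors = {
    "beach vacation":       [0.90, 0.80, 0.10],
    "holiday by the sea":   [0.85, 0.75, 0.15],
    "quarterly tax filing": [0.10, 0.05, 0.95],
}

query = vectors["holiday by the sea"]
ranked = sorted(vectors, key=lambda k: cosine(query, vectors[k]), reverse=True)
print(ranked)  # "beach vacation" ranks far above "quarterly tax filing"
```

Neither phrase shares a keyword with the other, yet the vectors sit close together, which is why semantic search finds the match where keyword search would not.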

9. Model Context Protocol (MCP)

Finally, MCP is an open protocol that helps AI agents connect seamlessly with external systems, APIs, and data sources. Instead of being limited to their own training data, agents using MCP can pull in live information, interact with business tools, and execute workflows across platforms. For example, an MCP-enabled agent could retrieve customer records from a CRM, analyse them, and then trigger a follow-up email campaign – all without human intervention.

MCP makes AI more versatile and practical in enterprise environments.

MCP in 5 : MCP is the bridge that connects AI to other tools.
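Under the hood, MCP exchanges JSON-RPC 2.0 messages between the agent (the client) and a tool server. A sketch of what a tool invocation request looks like; the tool name `crm_lookup` and its arguments are hypothetical, invented for this CRM example.

```python
import json

# MCP clients invoke server-side tools via the "tools/call" JSON-RPC method.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "crm_lookup",                    # hypothetical tool name
        "arguments": {"customer_id": "C-1042"},  # hypothetical argument schema
    },
}
print(json.dumps(request, indent=2))
```

The server replies with the tool's result in a matching JSON-RPC response, which the agent then folds into its reasoning – the "bridge" in the summary below is literally this message exchange.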


What next and Getting Started

AI is not a single technology but a constellation of concepts – agents, RAG, XAI, ASI, reasoning models, vector databases, and MCP – that together define its capabilities and potential. Understanding these terms helps demystify AI and highlights both its current applications and future possibilities.

As AI continues to evolve, these building blocks will shape how businesses, governments, and individuals harness its power responsibly.

AI is a toolkit of ideas working together to change the world. When we look at what tool to use when, in reality no single one is better than another; it’s more about the context of use, the platform you use it on, what your workplace provides, what you get included with your other software (for example Copilot in Windows, Office apps etc.) and what task you are performing. Some AIs are better at images, some at research, and some at writing and analysis.

Cisco Partner Summit 2025: The Infrastructure Behind the Digital and AI Era

Last night I tuned into aspects of the global Cisco Partner Summit on demand (the live event taking place in San Diego this week).

Day one messaging to partners was firmly on monetising AI and driving the next wave of digital transformation with Cisco. Cisco were not just talking about AI as a buzzword – they were clearly (re)positioning themselves as the backbone of this revolution, with some of the biggest innovations and product evolutions in decades, and underlining the value and importance of their partners in enabling this for their customers.

AI doesn’t run on magic – it runs on silicon, bandwidth, and secure, scalable networks. From high-performance data centre fabrics to AI-ready networking and security, Cisco is building the digital highways that will power this era.

AI and Digital Transformation will fail without the right Infrastructure. This new age is as significant as the Internet and Cloud revolution and Cisco is there to power their customers and partners through it.

Think back to the 90s and early 2000s. The Internet was exploding, but none of it would have been possible without the underlying infrastructure – networks, servers, connectivity. Fast forward to today, and we’re seeing history repeat itself. AI is the new Internet, and infrastructure is once again the unsung hero, that invisible layer that defines how well stuff works, connects and secures.

There’s no question that AI is making a significant impact and that influence is only accelerating. We hear a consistent message from customers and partners: ‘AI is evolving faster than their infrastructure can keep up’.
Tim Coogan, SVP Global Partner Sales

That statement sums up the challenge perfectly. Data strategies that worked two years ago are now struggling under today’s workloads, and the skills gap is widening.

Cisco believe they are ready to help customers, through their global partners, address this with a new wave of innovation designed to help partners and customers scale AI without disruption, with connected infrastructure at the heart.

Cisco Unified Edge – for AI at Scale

One of the biggest announcements yesterday at Cisco Partner Summit was Cisco Unified Edge, a purpose-built platform for distributed AI workloads. It can integrate compute, networking, storage, and security at the edge, enabling low-latency, real-time inferencing for agentic and physical AI workloads. This includes:

  • Modular architecture combining compute, storage, and networking in a single chassis.
  • Zero-touch deployment and pre-validated blueprints for predictable AI rollouts.
  • Full-stack observability via Cisco Intersight, Splunk, and ThousandEyes.
  • Multi-layered zero-trust security with tamper-proof hardware and policy enforcement.

Things like network bandwidth, throughput and power consumption are all becoming massive issues as AI permeates data centres and workplaces.

Cisco Security: Mind the Trust Gap

AI adoption brings incredible opportunities but also new risks, which led nicely into Cisco’s re-energised security play.

“We’re also seeing what we call a trust deficit… securing all this infrastructure and model safety is critical.” – Jeff Schultz, Cisco.

Security is embedded across Cisco’s tech stack, with Cisco Secure Access extending zero trust to the cloud and Cisco Access Manager delivering identity-based access control natively through the Meraki Dashboard. This works seamlessly with leading cloud providers such as Microsoft 365 and Azure, AWS, and GCP, as well as leading enterprise SaaS providers.

Cisco reiterated their multi-layered approach to security in the AI era, including:

  • Zero Trust Everywhere – From edge to cloud, every device and workload is verified.
  • Tamper-Proof Hardware – Protecting against physical and firmware-level attacks.
  • Policy Enforcement at Scale – Automated compliance across distributed environments.
  • Model Safety – Ensuring AI models and data pipelines remain uncompromised with their new Cisco AI Defense suite.

In a world where AI decisions can impact millions, trust is the currency of AI adoption – and the level of trust needed for success, whatever business you are in.

Observability & Visibility

The last piece in the puzzle was the importance of observability and visibility. AI workloads are complex, distributed, and dynamic, and without visibility, “you’re flying blind”. Cisco are doubling down on full-stack observability (through their Splunk acquisition) to give partners and enterprises the clarity they need without gaps.

Key capabilities focussed on:

  • Cisco Intersight for infrastructure lifecycle management.
  • ThousandEyes for end-to-end network visibility across hybrid and multi-cloud.
  • Splunk integration for deep analytics and anomaly detection across all platforms.
  • Predictive Insights powered by AI to anticipate performance bottlenecks before they happen.

This was a strong message, since enterprise AI doesn’t just need connectivity, compute power, security, and governance. It also needs predictability and reliability. Cisco’s observability platform is all about ensuring that every infrastructure component, from GPU clusters to edge nodes to cloud, is optimised and secure.

Cisco are also taking observability further with their AI Canvas. Originally announced at Cisco Live earlier this year, AI Canvas is a new (coming in 2026) collaborative workspace that combines telemetry, AI insights, and automation. It enables teams to troubleshoot issues using natural language, unify data across domains, and accelerate resolution – all powered by Cisco’s Deep Network Model.

Monetising AI for all

Tim Coogan said that Cisco’s strategy for success – aimed at partners, Cisco, and of course their customers – focuses on three core pillars:

  • Responding faster with partners (sell together)
  • Continuous innovation cycles (leading the pack)
  • Scaling efficiently to maximise customer impact (across all sectors and segments).

Another key thing for me was the investment in multi-customer management for their managed service partners, focusing on how MSPs can scale and simplify operations for their customers. The introduction of multi-customer management capabilities within Cisco Security Cloud Control was a highlight for me.

The new multi-customer management capabilities in Cisco Security Cloud Control, coupled with our Hybrid Mesh Firewall, are designed to eliminate operational friction, empower our partners to accelerate revenue growth, and ultimately deliver superior security outcomes for their customers
Jeetu Patel, President and Chief Product Officer, Cisco

This isn’t just about monetising for Cisco and their partners either. Every organisation that is reinventing what they do, digitising and innovating with AI, will fail without the right infrastructure in place. The focus Cisco have on “selling together” – customer, partner, Cisco – is a key part of this success driver, and they are laser focussed on this approach, which is why their partner ecosystem is so important.

Day 1 Wrap Up

AI is only as good as the platform it runs on and the infrastructure that connects users, endpoints, enterprise data, AI models and agents together. The refresh opportunity is huge for Cisco and their partners, as infrastructure is one of the key things holding back enterprise AI and digital transformation at scale.

The opportunity to do this right is about working with organisations that are deploying or enabling AI applications and services to ensure they are also building and managing the infrastructure that makes AI possible. Cisco is betting big on this, and it’s good to see.

It was great to see a renewed focus on MSPs and multi customer management across their unified platform recognising the continuous importance for their customers and partners.

Copilot Researcher Agent gets “Computer Use”

Yes, the Copilot Researcher Agent, once focused purely on gathering and summarising information, can now take action on your device through a capability called Computer Use. This provides secure interaction with both public and gated web content through a virtual computer that allows both human and Copilot control. It requires a Microsoft 365 Copilot license and supports advanced content creation. It can be disabled by IT admins, but comes enabled by default.

This represents part of the next evolution of how we think about and use AI in the workplace and our personal lives.

What Is Computer Use?

This enhancement to Researcher should help users uncover deeper insights, take action, and generate richer reports grounded in both their work data and the web – even with gated content that requires the user to take over the browser to authenticate, complete CAPTCHAs, and more.

Computer Use allows Copilot to go beyond simply telling you what it finds. It can now: 

  • Open and read documents and web pages directly, and navigate websites and docs (including PDFs).
  • Navigate menus, click buttons, and move through dialogue boxes.
  • Execute tasks such as generating charts, drafting an email, or applying formatting.

It does this in the context of a Microsoft-hosted secure virtual PC and does not take over your personal device.

How This Differs from the Current Researcher Agent

The Researcher Agent as we’ve known it has been a powerful tool for finding and contextualising information. It queries internal and external sources, summarises results, and provides references. But until now, it stopped short of doing anything with that information, other than the relatively limited formatting capability it provides, ready for your cut-and-paste skills.

This is where the new Computer Use tools come in, adding the following differences:

  • Researcher Agent (today): Finds, summarises, and presents knowledge.
  • Researcher Agent with Computer Use: Acts on that knowledge by operating applications and completing tasks, which means it will be able to leverage the full power of desktop apps to make the output more polished. It can also access secured/gated resources by opening the website or resource in a secure virtual browser and passing control back to you to sign in with your credentials to access such content.

This allows (as you can see in my example below) Researcher to actually browse the web, reason over the pages it finds, and conduct research more efficiently than just using its LLMs and AI search mode. The result “should” be a much more fluid, accurate and representative piece of research.

Where Researcher was simply your analyst, Researcher with Computer Use becomes more of a full assistant – able to take the next step and apply the insights directly into your workflow.

Why This makes a difference

This (and as it evolves) has the potential to make a big difference in the way output is produced, as it can leverage its research inside the core apps you would typically use when bringing information together, such as Word or Excel.

As such the implications are significant: 

  • Reduced friction: Fewer clicks, fewer context switches, less manual effort. 
  • Consistency: Routine tasks (like formatting reports or applying compliance templates) are executed the same way every time. 
  • Accessibility: Natural language becomes the new interface, lowering barriers for all users. 
  • Extends the range of access Researcher has which means users can now generate rich artifacts such as presentations, spreadsheets, and applications using advanced code generation.

This is really about amplifying the output and cutting out more steps, with Copilot handling more of the mechanics once the output is created.

Guardrails and Trust

Many will have yet more concerns about what AI can do and how much control it has.

Microsoft has built Computer Use with transparency and control in mind – and bear in mind it is NOT taking over actual control of your device. As such:

  1. It only acts when explicitly instructed (you need to enable Computer Use).
  2. It shows you what it’s doing the whole time, with total transparency.
  3. It respects organisational policies and permissions and can’t do or access anything you cannot already access.
  4. It passes control back to you if it needs you to sign in to access secure or gated resources.
  5. You have control over what Researcher can access with regards to web data and work data (based on your access rights, naturally).

User credentials are never transferred to or from the sandbox environment, and all intermediate files are automatically deleted when sessions end. This ensures that this AI automation does not come at the cost of compliance or user trust. You are always in control.

The diagram below, depicts how it works.

Researcher Agent with Computer Mode – Secure data orchestration to Sandbox Environment. (c) Microsoft.

The future...

This is part of a broader trend: AI agents are evolving from passive Copilots to active collaborators. Workflows are becoming conversational, not procedural and the tools start to fade into the background, and outcomes come to the foreground. 

For leaders, strategists, and IT professionals, the challenge now is to rethink processes, training, and measurement in a world where AI doesn’t just inform work; it does it with and for us.

This latest addition of Computer Use to the Researcher Agent is a signal of what’s to come. The pace of change and evolution of these tools is rapid. The future of AI at work continues to push boundaries and evolve from just finding and researching towards doing things for us…

What do you think? Have you tried it yet?


Read more at Microsoft


Check out my first use video below:

Windows 11 bringing new “Ask Copilot” to the taskbar

Image Describing Windows 11 updates

Windows Insiders in the Dev and Beta channels can start testing a new Copilot search experience which is available through the Windows Search bar.

To get started go to Settings > Personalisation > Taskbar > Ask Copilot to enable the experience. You can also manage whether the Copilot app launches automatically at sign-in using the “Auto start on log in” toggle in the Copilot app settings.

This is an opt-in experience, but once enabled it gives you one-click access to Copilot Vision and Voice, so you can use whatever interaction style works best for you – text, voice, or guided support with Copilot Vision.

As you type, results appear and are updated instantly, making it easier than ever to find what you are looking for.

New Copilot experience in Windows Toolbar/Search