AI Explained: 9 Key Concepts You Need to Know in 2025

Artificial intelligence, whilst now a phrase we hear almost daily, can feel huge, strange, unknown, scary, exciting and sometimes even intimidating. In this post I have stripped back the noise and waffle to share nine crisp, usable concepts, aiming for clarity over jargon and practical examples over theory.

Before I start, and to put things into the context of familiar brands, here are a few AI tools you will already know, or at least have heard of:

1. Common AI Tools to know about

  • ChatGPT – the tool that really started the world of publicly accessible generative AI chatbots. ChatGPT (version 5 at the time of writing) is a conversational AI that generates text, pictures, and even video; it can answer questions and help with creative writing. It’s a clear example of generative AI in action, showing how large language models can produce human‑like responses. Free and paid versions.
  • Copilot (Microsoft) – leverages many different AI models, including OpenAI’s, Microsoft’s own and others. It can do much of what ChatGPT can do, but is also integrated across line-of-business apps and data such as Word, Excel, PowerPoint, and Windows. Copilot acts as an AI agent that helps you create, draft, analyse, and even automate tasks – a practical demonstration of how AI agents and retrieval techniques can boost productivity. Free tier (a ChatGPT Pro equivalent) and a premium tier for consumer/family use; Microsoft 365 Copilot for business use.
  • Google Gemini – Google’s AI assistant that blends search with generative capabilities, pulling in live information to give context‑aware answers. Free and Paid tiers.
  • GitHub Copilot – A developer‑focused AI that suggests code snippets and functions in real time. It shows how reasoning models and pattern recognition can accelerate software development.
  • MidJourney / DALL·E – Image generation tools that turn text prompts into visuals. These highlight the creative side of AI, where models learn patterns from vast datasets and apply them to new artistic outputs.
  • Perplexity – Great for research including financial data and educational content. Has free and paid versions.
  • Siri / Alexa – typically home-style voice assistants that act as simpler AI agents, interpreting commands and connecting to external systems like calendars, music apps, or smart home devices. Great for simple tasks like “what is the weather like today” and for linking to smart home devices – “Alexa, turn on the porch light”.

If you are just starting out, the easiest way to decide which AI tool to use is to match the tool to the problem you’re trying to solve. If you need help writing or brainstorming, generative text tools like ChatGPT or Copilot in Word are ideal. If you’re working with numbers or data, Copilot in Excel can analyse and visualise patterns for you. For deeply creative projects, image generators like MidJourney or DALL·E turn ideas into visuals, while GitHub Copilot accelerates coding tasks. The key is not to chase every shiny new AI release, but to ask: what am I trying to achieve, and which tool is designed for that job? Start small, experiment with one or two tools in your daily workflow, and build confidence before expanding into more advanced applications.

Which AI in 5: Pick the AI tool that fits your task – writing, data, images, or code – and grow from there.

2. What is Artificial Intelligence (AI)?

Artificial Intelligence (AI) is not really a product, though a game of word bingo might have people say ChatGPT or Copilot (at work) – it is far more than that! AI is a broad field of computer science focused on creating systems that can perform tasks which normally require human intelligence. These tasks include recognising speech, interpreting and understanding images and videos, making decisions, and even generating creative content such as music, videos and images. As of 2025, AI is already embedded in many aspects of our everyday lives – at work and at home – from recommendation engines on Netflix, to fraud detection in banking, to summarising meetings at work.

At its core, AI combines data, algorithms, and computing power to simulate aspects of human cognition, but it does so at a scale and speed that humans could never achieve.

AI in 5: AI is machines learning, reasoning, and acting like humans.

3. AI Agents

Right, so an AI Agent is a system designed to act autonomously in pursuit of a goal. Unlike traditional software that follows rigid instructions, agents can perceive their environment, make decisions, and take actions with or without constant human input.

For example, a customer service chatbot is an agent that listens to queries, interprets intent, and responds appropriately. More advanced agents can coordinate multiple tasks, such as scheduling meetings, analysing reports, or even controlling robots in manufacturing.

The key is autonomy: agents don’t just follow orders—they adapt to changing conditions.
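To make that concrete, below is a minimal sketch of the perceive-decide-act loop that most agent designs build on. Everything in it (the functions, the state, the goal) is a hypothetical illustration in Python, not any particular product’s API.

```python
# A minimal, illustrative agent loop: perceive -> decide -> act.
# All functions and data here are hypothetical placeholders for the
# sensors, models, and tools a real agent would use.

def perceive() -> dict:
    """Gather the current state of the environment (e.g. inbox, calendar)."""
    return {"unread_emails": 3, "next_meeting_in_minutes": 12}

def decide(state: dict, goal: str) -> str:
    """Pick the next action that moves the agent towards its goal."""
    if state["next_meeting_in_minutes"] < 15 and state["unread_emails"] > 0:
        return "summarise unread emails before the meeting"
    return "wait"

def act(action: str) -> None:
    """Carry out the chosen action via external tools or APIs."""
    print(f"Executing: {action}")

goal = "keep the user prepared for meetings"
for _ in range(3):  # a real agent would run continuously, reacting to change
    state = perceive()
    act(decide(state, goal))
```

The point is the loop itself: the agent keeps observing and re-deciding, rather than executing a fixed script.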

AI Agents in 5: AI agents are digital helpers that think and act for you.

4. Retrieval-Augmented Generation (RAG)

RAG is a technique that makes AI more reliable by combining generative models with external knowledge sources, such as the web, or data from corporate SharePoint sites, email and so on.

Instead of relying solely on what the AI model was trained on (which may be outdated or incomplete), RAG retrieves relevant documents or data in (near) real time and integrates them into the response.

This is especially powerful in business contexts, where accuracy and timeliness are critical – for example, pulling the latest compliance rules or product specifications from an application or data repository, before answering a query. RAG bridges the gap between static training data and dynamic, real-world knowledge.
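Here is a minimal sketch of the pattern, assuming a naive keyword retriever and a hypothetical generate() function standing in for the model call (production RAG systems use embeddings and vector search, covered later in this post):

```python
# Minimal RAG sketch: retrieve relevant snippets, then ground the
# model's answer in them. `generate` is a hypothetical stand-in for
# a real LLM call; the documents are invented examples.

documents = [
    "Expenses policy v3 (2025): claims must be filed within 30 days.",
    "Product spec: Model X supports up to 64 GB of RAM.",
    "Cafeteria notice: Tuesday is taco day.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive retrieval: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a generative model call."""
    return f"[answer grounded in: {prompt[:70]}...]"

query = "What is the deadline for filing expense claims?"
context = "\n".join(retrieve(query, documents))
print(generate(f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"))
```

Notice that the model is told to answer only from the retrieved context; that constraint is what makes RAG answers more current and verifiable.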

RAG in 5: RAG = AI that looks things up from multiple sources before answering.

5. Explainable AI (XAI)

One of the biggest challenges with AI is the “black box” problem. What I mean by that is that we often do not know how an AI arrived at its decision or answer.

Explainable AI addresses this by making the reasoning process transparent and understandable to humans. For instance, if a bank uses an AI model to decide whether a customer can get a loan, and that model rejects the application, XAI will highlight the factors – such as credit history or income – that influenced the decision.

In essence, this is about the AI showing its working out. If you have used Microsoft’s Researcher or Analyst agent at work, you will see some of this as it does its work.

This transparency is vital in ensuring we can trust AI and is required in regulated industries like healthcare, finance, and law, where accountability and fairness are non-negotiable.

By opening this black box, XAI builds trust and ensures AI is used responsibly.
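Real XAI toolkits are considerably more sophisticated, but a toy linear scoring model shows the principle: because every input has an explicit weight, the system can report exactly which factors drove a decision. All the numbers below are invented for illustration.

```python
# Toy loan-decision model with a built-in explanation. With a linear
# score, each feature's contribution (weight * value) can be reported
# directly. Weights and applicant values are invented for illustration.

weights = {"credit_score": 0.5, "income": 0.3, "debt_ratio": -0.8}
applicant = {"credit_score": 0.62, "income": 0.45, "debt_ratio": 0.70}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approved" if score > 0.2 else "rejected"

print(f"Decision: {decision} (score = {score:.2f})")
for feature, c in sorted(contributions.items(), key=lambda x: abs(x[1]), reverse=True):
    print(f"  {feature}: {c:+.2f}")  # e.g. debt_ratio: -0.56 drove the rejection
```

Run it and the output shows the rejection was dominated by the applicant’s debt ratio – exactly the kind of “working out” XAI surfaces.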

XAI in 5: XAI shows you why the AI answered the way it did, what information it used and how it made its choice.

6. Artificial Super Intelligence (ASI)

While today’s AI is powerful, it is still considered “narrow AI” – specialised in specific tasks despite the advances we see every week.

Artificial Superintelligence (ASI) is a (some say) theoretical future state where machines surpass human intelligence across every domain, from scientific discovery to emotional understanding.

Many might be thinking “The Terminator” here, but in reality it is more than conceivable, given the current pace of evolution, that ASI could design new technologies, solve global challenges, or even “create” beyond human imagination.

This naturally raises profound ethical and safety concerns: how do we ensure such intelligence aligns with human values and what happens if ASI becomes smarter than the humans that created it?

ASI remains speculative, and while there are many opinions and much research on the matter, today it is a concept that drives much of the debate around the long-term future of AI.

ASI in 5: ASI is the idea of AI being smarter than all humans in every way.

7. Reasoning Models

Traditional AI models excel at recognising patterns, but they often struggle with multi-step logic.

Reasoning models are designed to overcome this by simulating structured, logical thought processes. They can break down complex problems into smaller steps, evaluate different pathways, and arrive at conclusions in a way that mirrors human reasoning.

This makes them especially useful in domains like legal analysis, financial analysis, scientific research, or strategic planning, where answers are not just about recognising patterns and finding information but about weighing evidence and making defensible decisions, much as we as humans might approach such work.
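One common way to elicit this behaviour is simply to ask a model to decompose the problem before answering. A rough sketch, where llm() is a hypothetical stand-in for whatever chat-model API you use:

```python
# Illustrative step-by-step prompting. `llm` is a hypothetical
# stand-in for a real model call; the question is an invented example.

def llm(prompt: str) -> str:
    return "[model response]"

question = (
    "Our 3 warehouses ship 120, 95 and 210 orders a day. If volume grows "
    "20% next quarter, how many staff do we need at 50 orders per person?"
)

prompt = (
    "Solve the problem below. Work step by step:\n"
    "1. List the known quantities.\n"
    "2. Compute the total current volume.\n"
    "3. Apply the growth rate.\n"
    "4. Convert to headcount and state any assumptions.\n\n"
    f"Problem: {question}"
)
print(llm(prompt))
```

Dedicated reasoning models do this internally, generating and checking intermediate steps before committing to a final answer.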

Reasoning Models in 5: Reasoning models let AI think step by step like us.

8. Vector Databases

AI systems need efficient ways to store and retrieve information, and that’s where vector databases come in.

Unlike traditional databases that store data in rows and columns, vector databases store information as mathematical vectors – dense numerical representations that capture meaning and relationships.

This allows AI to perform semantic searches, finding results based on similarity of meaning rather than exact keywords. For example, if you search for “holiday by the sea,” a vector database could also return results for “beach vacation” because it understands the conceptual link.
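A toy version of the idea: texts become vectors, and “closeness” is measured with cosine similarity. The three-number vectors below are invented; real embeddings have hundreds or thousands of dimensions.

```python
import math

# Toy semantic search: rank stored texts by cosine similarity to a
# query vector. The tiny 3-dimensional "embeddings" are invented.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

index = {
    "beach vacation":   [0.90, 0.10, 0.00],
    "seaside holiday":  [0.80, 0.20, 0.10],
    "quarterly report": [0.00, 0.10, 0.90],
}

query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "holiday by the sea"
for text, vec in sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True):
    print(f"{cosine(query_vec, vec):.3f}  {text}")
```

The two holiday phrases score close to 1.0 despite sharing no keywords with the query, while the report scores near 0 – semantic search in miniature.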

Vector Databases in 5: Vector databases help AI find meaning, not just words.

9. Model Context Protocol (MCP)

Finally, MCP is an open protocol that helps AI agents connect seamlessly with external systems, APIs, and data sources. Instead of being limited to their own training data, agents using MCP can pull in live information, interact with business tools, and execute workflows across platforms. For example, an MCP-enabled agent could retrieve customer records from a CRM, analyse them, and then trigger a follow-up email campaign – all without human intervention.
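Under the hood, MCP is built on JSON-RPC: a client asks an MCP server to run a named tool with structured arguments. A simplified sketch of such a request is below; the tool name and arguments are invented for illustration.

```python
import json

# Simplified sketch of an MCP-style tool call (MCP uses JSON-RPC 2.0).
# The tool name and arguments are hypothetical examples of what a CRM
# server might expose; a real exchange also involves capability
# negotiation and a `tools/list` discovery step.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "crm_get_customer",              # hypothetical CRM tool
        "arguments": {"customer_id": "C-1042"},  # hypothetical argument
    },
}
print(json.dumps(request, indent=2))
# The server runs the tool and returns a JSON-RPC response containing
# the customer record, which the agent can then reason over and act on.
```

Because every server speaks the same protocol, the same agent can swap a CRM for a ticketing system without being rewritten for each vendor’s API.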

MCP makes AI more versatile and practical in enterprise environments.

MCP in 5: MCP is the bridge that connects AI to other tools.


What next and Getting Started

AI is not a single technology but a constellation of concepts – agents, RAG, XAI, ASI, reasoning models, vector databases, and MCP – that together define its capabilities and potential. Understanding these terms helps demystify AI and highlights both its current applications and future possibilities.

As AI continues to evolve, these building blocks will shape how businesses, governments, and individuals harness its power responsibly.

AI is a toolkit of ideas working together to change the world. When we look at which tool to use when, in reality there is not one better than another; it’s more about the context of use, the platform you use it on, what your workplace provides, what comes included with your other software (for example Copilot in Windows and the Office apps) and the task you are performing. Some AIs are better at images, some at research and some at writing and analysis.

Beyond OpenAI: Microsoft Copilot adds Claude support

Microsoft have started to broaden their AI horizons by adding their first non-OpenAI model to Copilot.

Microsoft are integrating Anthropic’s Claude models into Microsoft 365 Copilot, which marks a significant pivot from their exclusively OpenAI-centric approach. Microsoft are also working on their own models, which we already see on Copilot+ PCs and which will at some point make their way to Copilot.

This move is more than just a new menu option or toggle; it is part of a strategic play to diversify AI capabilities and reduce dependency on a single vendor.

Claude Opus and Sonnet in Copilot.

Claude Opus 4.1 and Sonnet 4 are now available to commercial Frontier Copilot users (corporate early adopters), offering, for the first time, alternatives to OpenAI’s GPT models for agents in Copilot Studio and for the Researcher agent.

Copilot Studio Model Selector (preview)

It’s worth noting that enabling access does require admin approval – see below.

In the formal announcement, Microsoft said that Anthropic’s models will unlock “more powerful experiences” for users.

Using Claude in Microsoft Researcher Agent – Via Copilot Chat.

Claude is not new to Microsoft, but it is new to Copilot. Claude models (along with other AI models) are already embedded in Visual Studio and Azure AI alongside Google’s AI and Elon Musk’s Grok. This is, however, the first time we have seen non-OpenAI models powering Copilot.

Why This Matters

Microsoft’s shift to leveraging different models reflects a broader trend. The message here is that Copilot is no longer about a single model or even a single vendor, but more about orchestration, choice, and adaptability.

Different models have different areas of excellence, and this sets the foundations for Microsoft to give organisations the flexibility to tailor and tune AI experiences to specific business needs, using the most appropriate model for the task.

It does, however, raise questions around governance, model performance, and cost. With multiple models in play, we don’t yet know how pricing will work if multi-model is the future for Microsoft 365 Copilot.

Data Sovereignty and Multi-Model Concerns?

One question I’m already seeing is around Microsoft’s boundary of trust and responsibility – something Microsoft boast about across their Microsoft 365 portfolio.

While the flexibility of multi-model AI is compelling, does it introduce new considerations around data residency and compliance when multiple models are being used?

To address that, Microsoft has confirmed that these Claude models run within its Azure AI infrastructure, but notes that the models are not Microsoft-owned. This means that when users opt to use Claude, their prompts and responses may be processed by Anthropic’s models hosted within Microsoft’s environment.

It also means that when organisations choose to use Anthropic models, they do so under Anthropic’s Commercial Terms of Service, not the consumer terms.

For regulated industries or organisations with strict data governance policies, this is likely to raise a few red flags, or at least questions that Microsoft will need to be able to answer:

  • Data Boundary Clarity: Is the data staying within Microsoft’s compliance boundary, or is it crossing into Anthropic’s operational domain? If so, what does this mean for data compliance and security?
  • Model-Specific Logging: Are logs and telemetry handled differently across models? Can organisations audit usage per model? How is encrypted data handled?
  • Privacy and Consent: Are users aware when their data is being processed by a non-Microsoft model? Is consent granular enough? Will users understand even if Microsoft tell them?

Again, Microsoft has stated that Claude models are “fully integrated” into the Microsoft 365 compliance framework, but organisations will still want to (and should) validate this against their own risk posture – especially where sensitive or regulated data is involved.

Enabling Claude models in Copilot.

To enable the models, your Microsoft 365 Admin needs to head over to the Microsoft 365 Admin Centre and enable access to the other models. Instructions for this are shown in the link below.

https://learn.microsoft.com/en-us/copilot/microsoft-365/connect-to-ai-models?s=09

Microsoft Message Centre announcement: https://admin.cloud.microsoft/?#/MessageCenter/:/messages/MC1158765?MCLinkSource=DigestMail

Thoughts.

This is a smart move, I think. Microsoft is playing the long game – moving its eggs out of one basket, looking at different models that make the most economic and performance sense, and bringing more choice to agent builders.

For partners like us at Cisilion, advising clients on AI adoption, this reinforces the need to think modularly. When building agents, don’t just pick a model – pick a framework that allows you to evolve. Microsoft’s Copilot is becoming that framework, and that should be good for business.

I do expect this is just the start. We know Microsoft’s relationship with OpenAI is less prosperous than it once was. As such, I expect more models, more integrations, and more choice, and I think we will see Microsoft’s own models making their way to Copilot soon.

But with choice comes complexity. We need to ensure that governance, transparency, and user education keep pace with innovation. Again partners will need to help customers navigate this.

What do you think? Is this a good move for Microsoft and their customers?

Will OpenAI’s “gpt-realtime” set a new benchmark for AI Voice?

OpenAI has introduced gpt-realtime, a new cutting-edge speech-to-speech model, alongside the general availability of its Realtime API. This release marks a significant step forward in the evolution of voice AI, particularly for enterprise applications such as customer support and conversational agents. They announced this in a video broadcast which you can watch below.

SIP Telephony Support: Lowering the Barrier to Entry

One of the most notable updates they announced was the addition of SIP telephony support, which aims to simplify the process of building voice-over-phone applications. Developers will be able to integrate phone numbers directly into OpenAI’s SIP interface, streamlining deployment and reducing the need for complex telephony infrastructure. As it develops, this could reshape the competitive landscape, especially for startups that previously relied on expensive and bespoke integrations to differentiate their offerings.

A Unified Model for Natural Interaction

What sets gpt-realtime apart is its end-to-end architecture, which differs from how such integrations work today. Unlike traditional systems that chain together speech recognition, language processing, and text-to-speech, OpenAI’s new model will handle everything in a single pass. This will result in much faster response times, more natural audio, and improved emotional nuance (one of the biggest limitations today), meaning it will be capable of interpreting laughter, stress, worry, pauses, and tone shifts.
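To see why a single pass matters, compare it with today’s chained approach. The sketch below uses hypothetical stand-in functions; the point is the number of hand-offs, each of which adds latency and strips away vocal nuance such as tone and pauses.

```python
# Hypothetical stand-ins contrasting the two architectures.

def transcribe(audio: bytes) -> str:  # STT: audio -> text (tone is lost here)
    return "text from speech"

def think(text: str) -> str:          # LLM: text -> text
    return "text reply"

def speak(text: str) -> bytes:        # TTS: text -> audio
    return b"synthesised reply audio"

def chained_pipeline(audio: bytes) -> bytes:
    # Three hops; laughter, stress and pauses never reach the LLM.
    return speak(think(transcribe(audio)))

def speech_to_speech(audio: bytes) -> bytes:
    # One model, one pass: audio in, audio out, nuance preserved.
    return b"reply audio generated directly from the input audio"

print(chained_pipeline(b"caller audio"))
print(speech_to_speech(b"caller audio"))
```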

OpenAI say it will also be highly configurable. Developers will be able to adjust pacing, tone, and persona, enabling more tailored and brand-consistent voice experiences.

Considerations for Enterprise Adoption

While the capabilities look super impressive, these models will be expensive, at least to start with. Pricing is expected to be $32 per million input tokens and $64 per million output tokens, which is significantly higher than traditional chained models. Additionally, the unified architecture offers less modularity and observability, which may limit flexibility for teams that require fine-grained control over model behaviour or voice switching.

In a blog post from CX Today, Alex Levin, CEO at Regal, is quoted as saying that the cost of the speech-to-speech model is still approximately four times higher than chaining a speech-to-text (STT), large language model (LLM), text-to-speech (TTS) pipeline for voice AI agents.
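As a rough back-of-envelope illustration of those rates (the call volume and per-call token counts below are pure assumptions, since real audio token usage varies widely):

```python
# Back-of-envelope cost at the announced rates. The call volume and
# per-call token counts are invented assumptions for illustration only.

INPUT_RATE = 32 / 1_000_000    # $ per input token
OUTPUT_RATE = 64 / 1_000_000   # $ per output token

calls_per_month = 10_000
input_tokens_per_call = 20_000    # assumption
output_tokens_per_call = 15_000   # assumption

cost_per_call = (input_tokens_per_call * INPUT_RATE
                 + output_tokens_per_call * OUTPUT_RATE)
print(f"Cost per call: ${cost_per_call:.2f}")                      # $1.60
print(f"Monthly cost:  ${calls_per_month * cost_per_call:,.0f}")   # $16,000
```

Numbers like these explain why cost comparisons with chained pipelines matter so much for high-volume contact centres.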

Strategic Implications

OpenAI’s latest release is a clear signal of intent: to make voice AI more accessible, performant, and enterprise-ready. Given Microsoft’s and other leading cloud giants’ close relationships with OpenAI, we can expect them to eventually add support for such models, meaning customers that leverage, for example, Microsoft 365 Copilot and Azure AI will likely gain support for this in the near future too, through tools like Microsoft Dynamics and Copilot Studio.

For organisations exploring and wanting to experiment more with conversation-based automation, gpt-realtime promises to offer a powerful new toolset whilst taking the technology closer to human voice.

As always, the key lies in aligning technology choices with business goals, recognising ROI and customer expectations, and keeping ahead of the curve as the landscape evolves and the pace of AI maturity and adoption continues to accelerate.


Sources: 
CX Today – OpenAI’s Latest Moves Put Many Voice AI Startups on Notice
OpenAI – YouTube video:
OpenAI blog

Microsoft makes OpenAI o1 model free for Copilot users.

OpenAI’s most advanced AI model, o1, known for its problem solving and deeper thinking, has until now sat behind a ChatGPT premium subscription: $20 a month for limited access, or $200 a month for unlimited access.

Copilot lets you use it for free.

Microsoft has a tight partnership with OpenAI and is also on a mission to put its AI (Copilot) across every Microsoft service it offers, with huge capability and features even on the “free” tiers.

Copilot Pro (consumer) users have had access to Think Deeper (which uses the o1 model) for the past 12 months, but Microsoft have now made this feature free to everyone, including those using the free version of Copilot.

To access it, simply head over to Copilot on the web (or via the mobile app) and ensure you are signed in with a Microsoft account (MSA). You then get completely free access to Think Deeper (which uses the o1 model).

How to get Microsoft Copilot

To get Copilot, head to the web (you will also find Copilot in the Edge browser) and go to https://copilot.microsoft.com, or head over to your phone’s app store, search for Copilot and install it.

You need to be signed in with your Microsoft account to use these features.

Using o1 features aka Think Deeper

Once in Copilot, use the AI chat as you would before (or as you did in ChatGPT) and you will see a “Think Deeper” button inside the text input box.

Using Copilot’s Think Deeper (ChatGPT model o1)

Selecting it activates the o1 reasoning model. As it processes your prompt, you also get a spinning symbol, since searches and responses using o1 are more thorough than with GPT-4 and typically take around 30 seconds.

Using Copilot’s Think Deeper.

This is Microsoft’s way of letting you know that you’re in for around a 20-30 second wait. If you don’t need deep search (so for normal use), toggle this back off to go back to the super-fast GPT-4o version.

So what can o1 do then?

The Think Deeper feature of Microsoft Copilot is much better for more complex tasks and research, thanks to the o1 model’s capacity for in-depth reasoning.

As such, it is simply better for solving complex problems in maths, logic or science, for analysing or creating long or richer documents and reports, and for code creation and debugging. The best way to test this is to run two Copilot windows side by side and try the same prompt with and without Think Deeper enabled.

Content created with o1 is also more “accurate”, with far fewer AI hallucinations (aka making things up).

Why do many GPTs hallucinate? In general, GPT models learn by mimicking patterns in their training data (huge amounts of data). The o1 model adds a different technique called reinforcement learning, whereby the language model works things out during its training by being rewarded for right answers and penalised for wrong ones. This takes longer, through an iterative testing process, but once done the model moves through queries in a step-by-step fashion, much like human problem solving.
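A heavily simplified sketch of that reward loop is below; real reinforcement learning of language models involves vastly more machinery, but this shows the feedback principle of reinforcing whatever earns reward.

```python
import random

# Toy reward loop: two answering "strategies" start equally preferred;
# the one that earns more reward (is right more often) gets reinforced.
# Purely illustrative of the feedback principle, not of real training.

preference = {"guess_quickly": 0.5, "work_step_by_step": 0.5}

def reward(strategy: str) -> float:
    # Pretend step-by-step answers are correct far more often.
    p_correct = 0.9 if strategy == "work_step_by_step" else 0.4
    return 1.0 if random.random() < p_correct else -1.0

random.seed(0)
for _ in range(500):
    # Mostly exploit the current best strategy, occasionally explore.
    if random.random() < 0.2:
        strategy = random.choice(list(preference))
    else:
        strategy = max(preference, key=preference.get)
    preference[strategy] += 0.01 * reward(strategy)

print(preference)  # step-by-step ends up strongly preferred
```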

o1 limitations?

It is worth noting that o1 isn’t quite on the same level as GPT-4o in some areas. It is less effective with factual knowledge, is currently less able to search the internet, and cannot process files and images.

What about DeepSeek?

The big story this week has of course been DeepSeek, a controversial Chinese AI firm that has announced and launched its own GPT-4 and o1 rivals, supposedly built at a fraction of the cost of OpenAI’s, Google’s and other US models – shaking share prices, disrupting the market and raising many questions.

What is more, DeepSeek’s models are claimed to be more advanced and faster than GPT-4o and smarter than o1.

The advent of DeepSeek has sent shockwaves through the tech industry. Global stock markets have reeled, sparking a cascade of investigations and looming threats of bans.

Yet the bot hasn’t been without its champions. Interestingly, Microsoft – OpenAI’s top financial investor and partner – has already embraced the DeepSeek R1 reasoning model, integrating it into Azure AI Foundry and GitHub.

These platforms, beloved by developers for fostering advanced AI projects, now stand as the new playground for DeepSeek’s innovative potential.

DeepSeek logo

OpenAI Strikes Back

In the wake of DeepSeek’s free mobile app’s viral triumph, OpenAI CEO Sam Altman swiftly revealed plans to accelerate the rollout of new releases to keep ahead of the new Chinese competitor.

OpenAI are not standing still either. At the end of December 2024, they began trialling twin AI models, o3 and o3-mini. Remarkably, the former has surpassed o1 in coding, mathematics, and scientific capabilities, marking a significant advancement in their AI prowess.

There is no doubt this is an area that doesn’t stand still. By the time I click publish, this post will likely already be out of date!


DeepSeek has certainly ignited an even greater sense of urgency within the already dynamic AI sector which moves and evolves on an almost daily basis.

Hands-on with Copilot Voice: An almost human conversation

Copilot Voice Cover


The recent Copilot update is a game-changer in AI voice technology. In recent announcements, Microsoft unveiled a new version of its Copilot app for iPhone and Android. The update brings a fresh look and new features, including an impressive voice mode that rivals OpenAI’s ChatGPT Advanced Voice – especially since Microsoft make this available for free. Yes, free.

I have tested both recently, and I can confidently say that the new Copilot is a significant upgrade. What’s more, it is totally free to use. This is best read while, or after, you watch my hands-on video below.

Hands on with Copilot Voice

User-Friendly Interface and Enhanced Voice Mode

The updated Copilot app boasts a more “consumer-friendly” interface, though I do wish they would bring some of the advanced customisations back. The standout feature in this update is most definitely the new voice mode, which on first look (it took a few app updates before it worked) I thought would be a bit of a fad – but it is absolutely brilliant.

Voice mode offers speech-to-speech functionality, allowing for more natural and engaging conversations. While it may not interrupt as fluidly as OpenAI’s offering (though it’s still in early stages), it feels more casual and less stilted, making interactions feel more like chatting with a friend.

A Conversation That Feels Real

During my testing, I actually started to forget that I was talking to an AI, as the conversation felt natural and real (there was the odd delay). In my hands-on example (see the video below), I took part in a discussion about whether and when AI could ever become self-aware, and what the implications might be. Unlike a text-based discussion, this level of engagement goes to show just how rapid the advancement of natural conversation is becoming.

Copilot appears to adapt its vocal tones and pace during conversations, emphasising certain words as we speak.

Perhaps the biggest (pleasant) surprise was how Copilot adapted to use slang terms the more I used them. If I swore or spoke more loudly, it also seemed to detect the change in my tone and adjust its output. I’ll be testing this more to see just how far it can go.

Spoiler: I did find the occasional limitation as the conversation continued, such as delays when I interrupted it and odd seconds of silence.

Customisation and Accessibility

Copilot offers four voice options: Grove, Canyon, Wave, and Meadow. Unlike ChatGPT, you can modify the speed and tone of these voices, making them sound more natural and suited to your preferences. This, combined with the app’s inclination to use slang and shorthand words, makes it easy to forget you’re interacting with a machine. I’m not a fan of all the voices though, and they are not currently very localised – most sound very American (which is fine for now).

Gemini Live (yes, all the chatbots are discovering their voice) currently gives users a choice of 10 voices, but Microsoft say more voice options will be coming “soon”.

What I also like is that you can customise the speed at which each of the voices speaks. Personally, I find the standard setting too slow; a speed of 1.1x sounds most natural to me. I also discovered that you can ask Copilot to speak differently by explaining how you want it to sound – for example, applying a slightly different accent, changing its tone of voice or being more empathetic. I’d like to think Copilot will eventually do this natively without me asking (after all, it’s unlikely you’d ask a human to speak in a different tone!).

Copilot Voice is free

One of the most significant advantages of Copilot is that it’s free to use. OpenAI’s ChatGPT Advanced Voice feature currently requires a $20 monthly subscription, whilst Microsoft makes this feature available to all Copilot users, regardless of their subscription status.

Conclusion

Copilot is now under the leadership of Mustafa Suleyman (Microsoft CEO of AI) and seems poised to make a significant impact in the AI voice technology market. Building on technology from its partner OpenAI, its user-friendly design, natural voice interactions, and accessibility make it a strong competitor against other AI voice models.

The best thing – this is totally free

Try this out and let me know how in-depth a conversation with Copilot can feel. How “close” do you think this is to becoming a natural, almost human conversation?

Microsoft confirm GPT-4o is now available on Azure AI

Just ahead of Microsoft Build, the Azure team have announced the availability of GPT-4o, OpenAI’s latest flagship model on Azure AI. This innovative multimodal model combines text, vision, and audio capabilities, establishing a new benchmark for generative and conversational AI experiences. GPT-4o is now available in the Azure OpenAI Service for preview, with support for text and image inputs.

This is a preview for testing now

What does GPT-4o Bring?

GPT-4o represents a paradigm shift in the interaction of AI models with multimodal inputs. It integrates text, images, and audio to deliver a more immersive and engaging user experience.

What does the “preview” include?

Currently in preview, Azure OpenAI Service customers will be able to test GPT-4o’s broad capabilities via a preview playground in Azure OpenAI Studio. This initial version emphasizes text and visual inputs, offering a preview of the model’s possibilities and setting the stage for additional functionalities, including audio and video.

The preview is free to try but has limitations around usage and location availability.

Designed for rapidity and efficiency, GPT-4o’s sophisticated processing of complex inquiries with fewer resources has the potential to offer both cost efficiency and enhanced performance.

Note: at the time of writing, this preview is available in two US regions only: West US3 and East US.

What about GPT-4o in Microsoft Copilot?

We don’t know yet, but we do know that there will be exciting updates across the rest of the Microsoft AI stack this week. Microsoft has an aggressive and innovation-fuelled roadmap for Microsoft 365 Copilot, so as Microsoft continues to update and integrate OpenAI’s latest models into Copilot, I’m looking forward to hearing more this week.

What else is coming?

This week is Microsoft Build 2024 in Seattle and online. I expect this to be (pretty much) all about Copilot and AI, so expect to hear more about GPT-4o and other Azure AI updates.


Further Reading

You can read more about GPT-4o at the official OpenAI blog, which is < here >.

Microsoft are adding a Copilot for Copilot (well sort of).

Yesterday (8th May 2024), Microsoft released their 2024 Work Trend Index Report, which covered the state of AI at work (you can see this here), as well as announcing some more improvements coming to Copilot for Microsoft 365 in the coming months.

The new features announced are all aimed at helping to optimise prompt writing, making it easier for people to get a prompt that does what they need first time (a Copilot for Copilot, essentially). These updates include:

  • Auto-complete for prompts
  • Prompt re-write
  • A new catch up feature
  • Copilot Labs upgrade.

Let’s dive into these quickly. All images (c) Microsoft.

Auto Complete for Prompts

Copilot’s new “autocomplete” feature is similar to what you get in a search engine: it will anticipate (using machine learning) what you are writing and help you complete your prompt as you start typing it out.

Image (c) Microsoft

The aim here is to suggest more detail, to ensure you get the intended outcome. It will also offer an expanded library of “next prompts”.

This means that if you start typing “summarise”, Copilot will display options to summarise the last 10 unread emails and chat messages, or other related tasks.

Prompt Rewrite

The “rewrite” feature is something that many image AI tools have had for a while. The aim is to take a person’s basic prompt and rewrite it to be more thorough, “turning everyone into a prompt engineer,” according to Microsoft.

Image (c) Microsoft

Also known as “elaborate your prompt”, Microsoft say this will be able to rewrite any prompt people create, making it much easier to do more complex tasks, especially when working with documents or “connected apps”.

Copilot Catch-up

Copilot Catch-up aims to start making Copilot more “proactive”. Here the chat interface will present people with “responsive recommendations” based on their recent activity. As an example, it will be able to notify you about upcoming meetings and suggest ways to help you prepare, bringing a summary of recent email and chat threads, meeting notes and documents right into the chat thread. This feature is also coming to Copilot in Outlook.

This feature brings Copilot more into the realms of good ol’ Clippy (OK, I’m kidding here), but it will enable Copilot to start proactively helping rather than waiting for its pilot to issue a command and bring the genie out of its lamp!

The aim is to further integrate Copilot into the user’s workflows. Imagine, for example, having a morning prompt that tells you about your day, tickets logged via ServiceNow, or a project that is overrunning (via Project or Planner) – or has completed early, perhaps!

Updates to Copilot Labs

Similar to the Microsoft app Prompt Buddy, Microsoft will also start to allow people to create, publish, and manage prompts in Copilot Lab.

Image (c) Microsoft

This will bring new features that can be tailored for individual teams within businesses, making it a lot easier to share useful prompts for employees, teams and departments to use.

Will these help adoption?

What do you think about the new updates? Will these help remove the dark art of prompting and make Copilot easier to use and faster at helping people get the desired results?

Let me know in the comments.

Microsoft to open new AI Hub in London

Microsoft has announced plans for a new artificial intelligence (AI) hub in London, which will be focused on leading-edge product development and research. This will be led by Microsoft AI lead Mustafa Suleyman (co-founder of DeepMind), whom Microsoft hired last month.

This announcement comes less than a month after Microsoft unveiled a new consumer AI division.

There is an enormous pool of AI talent and expertise in the UK, and Microsoft AI plans to make a significant, long-term investment in the region [London].

Mustafa Suleyman

This is great for the UK and for London, and will help both Microsoft and the UK become AI and technology superpowers, leveraging the hub of tech talent, access to leading, world-class universities and research centres, and the ability to attract the best talent for the next generation of AI development.

Microsoft’s AI Future in the UK

This announcement builds on Microsoft’s recent commitment to invest £2.5 billion into data centre infrastructure and improving AI skills across the UK.

Microsoft’s AI investment in the UK includes building a major new data centre in West London and installing 20,000 high-powered processors in the UK by 2026.

Microsoft’s new UK hub will be run by Jordan Hoffmann (another former DeepMind employee) and will collaborate closely with OpenAI, which powers Microsoft’s AI-driven Copilot framework.