Microsoft’s Copilot for Security available April 1st

No – it’s not an April Fools’ joke – Microsoft yesterday (13th March 2024) announced that their much-anticipated Copilot for Security will be available to buy and use from 1st April 2024.

What Does Copilot for Security Do?

Originally announced a year ago, and after extensive testing in private preview, Copilot for Security is aimed at IT Security and Sec Ops teams: it brings Microsoft’s Copilot technology, Microsoft’s threat intelligence services and machine learning together into a dedicated security service. Copilot for Security can process prompts and respond in eight languages, with over 25 languages supported at launch.

For organisations that already invest in and consume Microsoft security services such as Sentinel, Defender, Entra, Priva, Intune, and Purview, this is an exciting time!


Copilot for Security is informed by large-scale data and threat intelligence, including Microsoft’s daily processing of more than 78 trillion security signals – a giant increase from the 65 trillion signals stated just last year. Microsoft claim this is the largest threat intelligence database in the world, and they do not use any organisational data to train their LLMs.

One huge advantage of Copilot’s conversational abilities is its capacity to rapidly compose incident reports. It can also tailor these reports to be more or less technical depending on the intended audience, say Microsoft.

Copilot for Security offers a huge variety of capabilities, including:

  • Human-readable explanations of vulnerabilities, threats, and alerts across all of Microsoft’s security products and services, as well as (later) third-party tooling.
  • Answering questions about alerts, threats and incidents in real time and taking action.
  • Automatically summarising incident analysis and offering recommendations for next steps based on the tools the organisation has licensed and/or deployed.
  • Letting users edit prompts to correct or adjust responses, share findings with others, build extensive runbooks from prompts, and share prompts with other analysts in the team.

After nearly a year of various preview stages and rigorous testing by both Microsoft Security Experts and enterprise organisations, Microsoft say the feedback has been “overwhelmingly positive.” A recent AI economic study by Microsoft found that security professionals work 22% faster and are 7% more accurate when using Copilot for Security. An impressive 86% of participants reported that Copilot for Security enhanced the quality of their work, and more than 90% expressed a desire to use it for future tasks. The report further indicates that security novices with basic IT skills performed significantly better with Copilot for Security than members of a control group, and their superiors expressed greater confidence in their output.

Copilot for Security in Action

A year in readiness.

In the announcement, Microsoft cited Forrester VP Jeff Pollard, who said that “Experienced practitioners will reap the most rewards from the capabilities Microsoft offers, and while it’s unlikely to identify threats SOC [security operation center] teams would miss, it does make investigation and response faster”.

Just like Copilot for Microsoft 365 – Adoption and Training is Key

Just like any major technology change such as Copilot for Microsoft 365, adoption, training and practice are going to be vital to get maximum value and trust from Copilot for Security. Security teams will need a fair amount of change management and training to ensure they can take full advantage of it. Forrester cited in the report that “it takes around 40 hours of training to get security practitioners comfortable with using Copilot for Security. In addition, we heard that it takes four or more weeks — with many stops and starts — to get practitioners comfortable with the technology.”

With a global shortage of cyber security skills, exponential growth in attacks and attack surfaces, and the rise of AI at cyber criminals’ fingertips, Copilot for Security has been one of the most anticipated uses for Copilot. There is no doubt that it can lower the barrier to entry into the cybersecurity industry, though Forrester also cautioned that “Though large language models and generative AI may level the playing field and allow for accelerated security talent development, no amount of out-of-the-box prompt books and guided response steps replace fundamental security knowledge, skills, and experience.”

The Pros of Microsoft Copilot for Security

Microsoft’s early-access clients highlighted several things they loved about Copilot for Security, including the following:

  • Making script analysis easier by de-obfuscating and explaining contents.
  • Accelerating threat hunting by helping write queries based on adversary methods.
  • Speeding up and simplifying complex KQL queries or PowerShell script creation.
  • Analysing phishing submissions by verifying true positives and providing inbox details.
  • Improving analyst experience by reducing the need to swap between various tools.
  • Generating leadership / executive-ready incident report summaries efficiently.
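The first bullet, script de-obfuscation, is largely mechanical work that analysts otherwise do by hand. As a trivial illustration of the kind of step involved (my own sketch, not Copilot’s output or API), decoding a base64/UTF-16LE string of the sort passed to `powershell.exe -EncodedCommand` – a common obfuscation technique – looks like this in Python:

```python
import base64

def deobfuscate_powershell(encoded_command: str) -> str:
    """Decode a base64-encoded, UTF-16LE string of the kind passed to
    powershell.exe -EncodedCommand, a common obfuscation technique."""
    return base64.b64decode(encoded_command).decode("utf-16-le")

# A hypothetical obfuscated command an analyst might paste into Copilot:
encoded = base64.b64encode("Get-Process | Stop-Process".encode("utf-16-le")).decode()
print(deobfuscate_powershell(encoded))  # Get-Process | Stop-Process
```

Copilot goes further than this, of course – it explains what the decoded script actually does – but the mechanical decode-and-read loop is exactly the sort of task it removes.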

Things to be aware of at launch

There are several key areas which won’t be available at initial launch, but expect to see rapid release cycles and updates once the product is generally available. The following is not available today but will be added over time:

  • Single Data Repositories – Copilot currently requires multiple instances for users/organisations that want to silo data between different business units, group companies or geo-locations. These will eventually be rolled into a single instance/interface, but today this will cause challenges for large MSPs and global/complex organisations.
  • Third-Party Tools – At launch, Copilot for Security will not integrate with third-party tools, so organisations will need to be using Microsoft’s first-party security tools like Defender for Identity and Defender for Endpoint. This is on the roadmap.
  • Limited Integration and Automation – Much of the work Copilot for Security does on day one is around reporting, alerting across multiple signal sources, and behaviour analysis. Whilst it can execute runbooks, some services like auto-quarantine and network isolation will not be available at launch.

New Features at Launch

In the announcement, Vasu Jakkal, corporate VP of security, compliance, identity, and management at Microsoft, said that as part of the launch the following new features will be available in Copilot for Security:

  • Custom promptbooks: allowing security teams to create and save their own natural-language prompts for common security workstreams and tasks, similar to the notebook feature in Copilot for Microsoft 365.
  • Knowledge integrations: enabling Copilot for Security to connect to customers’ logic and workflows, and to perform activities based on company-defined step-by-step guides.
  • Integration with customers’ curated external attack surface from Microsoft Defender External Attack Surface Management to identify and analyse the most up-to-date information.
  • Summarisation in natural language of additional insights from Microsoft Entra audit logs and diagnostic logs for a security investigation or IT issue analysis related to a specific user or event.
  • New fully customisable usage dashboards to provide reporting on how teams interact with Copilot.

Which Organisations benefit most?

For organisations that already invest in and consume Microsoft security services such as Sentinel, Defender, Entra, Priva, Intune, and Purview – Copilot for Security will likely be an indispensable enhancement that will not only reduce workload and increase productivity, but significantly help security teams work better together and detect and respond faster than ever.

Organisations that are not fully invested in Microsoft’s extensive security portfolio and choose to use other vendors will still benefit, but until wider third-party support is available, running trials and evaluating a potential move to more Microsoft security technologies is the smarter move. Expect increased funding pots and incentives to entice organisations across to Microsoft Security.

Almost every security vendor is adding generative AI into their products and services, but today no other organisation has built what Microsoft have (though this will likely change).

Pricing from $4 per hour

Yes, ok I saved this for the end.

Pricing will be offered through a consumption-based model, allowing customers to pay according to their usage needs. Usage is measured in Security Compute Units (SCUs): customers are billed hourly for the number of SCUs provisioned, at a rate of $4 per hour with a minimum usage of one hour. Microsoft say this gives any organisation an opportunity to begin exploring Copilot for Security and expand usage as necessary.

This lowers the entry point to the solution without a big initial licence outlay and should simplify the pilot, onboarding and rollout process. The PAYG model is also something organisations are used to, making it more accessible and straightforward and avoiding the complexity of traditional stackable licensing schemes.
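Back-of-envelope maths on that model (my own illustration; 730 hours is an assumed average month, and real-world usage will vary):

```python
HOURLY_RATE_PER_SCU = 4.00  # USD, per the announced pricing

def estimated_monthly_cost(provisioned_scus: int, hours: float = 730) -> float:
    """Rough monthly estimate: SCUs are billed per hour while provisioned.
    730 hours approximates an average month (8,760 hours / 12)."""
    return provisioned_scus * HOURLY_RATE_PER_SCU * hours

# e.g. a pilot keeping one SCU provisioned around the clock:
print(f"${estimated_monthly_cost(1):,.2f}")  # $2,920.00
```

So a single always-on SCU lands at just under $3,000 a month, which is why monitoring provisioned capacity (rather than leaving it running flat out) matters.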

Microsoft CSP partners like Cisilion will be key in helping customers manage their spend, working with Sec Ops teams to tweak and fine-tune the solution to help map, manage and plan spend.

My 8 AI tech predictions for 2024


Our social media feeds will be full of predictions for the year ahead this week – after all, 2023 was an exciting and crazy year in tech, with arguably some of the biggest advances we have seen in more than a decade. You can read my 2023 tech review here.

With all the advancements in generative AI technology and chatbots in 2023, I have focussed my tech predictions specifically on the rise and development of generative AI. Every aspect of IT is going to be “AI infused” this year, I believe, as organisations enter the next level of adoption maturity – from “what is coming” and “what might be possible” to real business impacts and tangible examples.

#1 AI is going to keep getting better and more “intelligent”.

This is quite a no-brainer really: we already know that OpenAI has big plans for 2024, and with Google hot on their tail with Gemini, I would expect to see the release of ChatGPT 4.5 (or even 5) at some point in the first half of 2024. We could also see image technology like DALL-E shift into video creation for the masses, not just images. There will also be more competition to win the Gen AI race from Microsoft, Apple, Google and Amazon – this could be the new browser and search engine wars. Microsoft will adopt the latest ChatGPT and DALL-E 3 tools into their Copilot products.

#2 Businesses will invest more in AI and core technology training.

Outside of using Generative AI to help us write emails and documents, many organisations will be looking to AI to further enhance business automation and data processes to complement and enhance human capabilities.

With the output of most enterprise AI tools reliant on the data they use as a reference point or operate over, there will be a need to invest in skills around the fundamentals of AI and big data analytics. People will need to learn how to interface with AI, how to write good prompts that deliver the right outcome, and how to leverage these new tools to radically improve productivity and outcomes.

At a more basic level, there will also be a focus on driving good adoption of the base technologies used within organisations. From good data labelling and classification, to simply working with and storing files in the right places in Office 365, to using new tools such as Copilot in Edge and Microsoft 365 and Intelligent Recap in Teams, businesses will need to revisit the level of IT training given to employees, encouraging centres of excellence and building technology sponsors or mentors across different teams.

Training users on what tools to use, how to use them and when will be key and is something many organisations still do badly.

#3 We will see more Legal Claims against AI.

Whatever happens in terms of the tech advances of AI, there is no doubt that we will see a leap in the number of legal claims from authors, publishers and artists against companies building AI products – after all, we’ve already seen a few in 2023.

The reason for this is that at the heart of any generative AI product are large language models (LLMs). The leading AI companies such as Google, Microsoft and OpenAI have worked hard to ensure their models adhere to and respect copyright laws while “training” those models. In fact, Microsoft are so bold about this that they even put in place a copyright protection pledge for companies back in September last year.

Just last week (December 2023), the New York Times filed a huge lawsuit against OpenAI and Microsoft for copyright infringement, claiming that its journalism was used to train and develop ChatGPT without any form of payment.

OpenAI and Microsoft are also caught up in another lawsuit over the alleged unauthorised use of code in their AI tool GitHub Copilot. There have already been further lawsuits against developers of generative AI products, including Stability AI and Midjourney, in which artists have accused the developers of training text-to-image generators on copyrighted artwork.

The legal battles of 2023 highlight some of the complex and evolving issues surrounding intellectual property rights with the development and use of AI.

As 2024 gets underway, I suspect we will see more examples (especially if the New York Times case is successful).

#4 The rise of robust governance policies.

As we move from proofs of concept and ideation to real, proven examples of how these AI tools can be used in our daily lives, I think we will see an increase in companies of all sizes and regions putting in place robust governance policies, processes and tools, including the testing and validation of AI-generated content. This will require new tools to ensure there are appropriate guardrails and monitoring throughout.

Organisations will need to have clear AI policies in place that map out what AI products and tools they allow, guidance around content and image generation as well as what they view as ethical, responsible, and inclusive use of AI, outside of the policies that the AI companies have in place and the guidance they provide.

Education will also be key to ensure that employees can learn and put into practice the skills necessary to use AI tools in the workplace, and to ensure the above checks and policies are implemented. Creating centres of excellence and sharing good practice will also be key to ensuring employees and organisations get maximum benefit from using AI.

#5 Expect to see more deception, scams and deep fakes.

We will likely see more deception and trickery for financial gain this year as fake-person generators and deep-fake voice and video become more widespread tools for phishing and scams. We have already seen cases (and warnings) from banks where voice-cloning technologies can accurately replicate human voices and threaten the security of voice-print-based security systems. In 2024, we are likely to see this extend to many more areas of personal, corporate and political exploitation and deception.

Left unsupervised and unprotected, the rapid growth of digital deception poses a huge risk that security and protection organisations will need to respond to. I think we will see more guidance, more safeguards, specialised detection tools, increased awareness and increased use of multi-factor protection. New methods of digital fingerprinting to detect such fakes are going to be critical if people and organisations are to remain confident that these technologies can’t be beaten by deep fakes.

To protect the reliability of information in a fast-changing digital world, it will be essential to have the tools and skills to detect and counteract AI-generated fabrications.

#6 Proliferation of “new” wearable AI technology.

I expect to see a huge increase in products and services around AI wearables, or AI-powered wearables. This will further drive the already increasing trend away from traditional screen-focused devices towards more integrated, context- and environment-aware devices that provide continuous monitoring, fuelling data-driven insights and decisions in our personal and professional lives.

Applications: this could open up huge advances in, for example, continuous health-monitoring devices such as blood glucose monitors, anxiety detection, cancer scanning, gut health and even AI-controlled insulin pumps. In sports we could see a new level of performance monitoring and tracking, with huge sponsorship deals from leading health and fitness companies. This could also lead to more data for unique advertising revenues…

Apple have also recently said they are working with OpenAI and plan to leverage the computing edge (their devices) by enabling AI processing directly on device rather than relying solely on cloud-connected AI services.

Security compromises and wider privacy concerns are likely to be impacted by this shift, especially as these devices (in a similar way to health trackers today) will be able to record and process huge amounts of personal, health and other data. In the case of smart glasses, for example, this could also lead to new laws, legislation and restrictions to ensure privacy isn’t compromised by recording or capturing video without permission or consent.

#7 Cyber attacks and defence will become “more AI driven”.

With any new technology, security plays a vital role. I think we will see a massive change in the level of attacks, and therefore in the protection and detection needed from cyber security systems, this year. From an attacker’s perspective, the use of machine learning and AI is likely to continue amplifying the sophistication and effectiveness of cyber attacks. Expect more convincing, personalised tactics, including advanced deepfakes and intricate phishing schemes, with AI used to craft social engineering attacks that make it increasingly difficult to differentiate between legitimate and deceptive communications, both externally and from within the organisation. We will also see systems customise attacks based on industry, location and the known threat-protection landscape.

From a defence perspective, the fight against AI attacks will also be AI-centric, with new AI-based detection tools and applications that work in real time. Identity will be the primary defence and attack vector. For example, Microsoft’s Security Copilot, currently in preview, promises to be the first generative AI security product to help businesses protect and defend their digital estate at AI speed and scale. These tools, in partnership with people-powered response and remediation teams, should at least even the fight between AI-powered attackers and the defenders needed to keep our businesses, industry and services safe.

Without playing the War Games/Terminator scare game, bad actors, nation-state attackers, organised cyber-crime divisions and opportunist hackers now have a new set of tools available to help them. The battle between attackers and the biggest cyber security MSPs, cloud giants and businesses is going to heat up. We will see victims and we will see scares. The battle against cyber threats is becoming ever more complex and intertwined with AI.

Businesses will need a more nuanced and advanced approach to cybersecurity, which will mean simplification, standardisation and, most likely, reducing the number of disconnected security products they have, adopting a more defence-in-depth approach with AI-powered SIEM tools or fully outsourced managed security.

#8 Zero Trust will finally be taken seriously.

To wrap it up: with the growth of AI into every part of our personal and work lives, working across more devices, applications and services, the realm of control that IT traditionally had over the environment will continue to move outside of IT’s control.

With the rise of AI, and more importantly AI being used to drive more sophisticated attacks compromising the personal devices used to access corporate data, I think we will see more organisations adopting zero-trust security models whilst consolidating their point-product solutions into a more streamlined and unified approach.

Zero Trust is a security strategy – not a product or a service, but an approach in designing and implementing the following set of security principles regardless of what technology products or services an organisation uses:

  • Verify explicitly.
  • Use least privilege access.
  • Assume breach.

The core principle of Zero Trust is that nothing inside or outside the corporate firewall can be trusted. Instead of assuming safety, the Zero Trust model treats every request as if it came from an unsecured network and verifies it accordingly. The motto of Zero Trust is “never trust, always verify.”
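The principles above can be sketched as a simple policy gate: every request is evaluated on its own evidence, with no shortcut for “inside the network”. This is purely illustrative pseudologic with made-up field names, not any vendor’s implementation:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_mfa_verified: bool       # strong, explicit identity verification
    device_compliant: bool        # device posture check (assume breach)
    requested_role: str
    granted_roles: frozenset      # least-privilege roles assigned to the user

def evaluate(request: AccessRequest) -> bool:
    """Every request is verified explicitly, regardless of network origin."""
    if not request.user_mfa_verified:   # verify explicitly
        return False
    if not request.device_compliant:    # assume breach: never trust the endpoint
        return False
    # use least privilege: only roles explicitly granted are honoured
    return request.requested_role in request.granted_roles

req = AccessRequest(True, True, "reader", frozenset({"reader"}))
print(evaluate(req))  # True
```

Real implementations (conditional access policies, for example) evaluate far richer signals – location, sign-in risk, session context – but the shape is the same: deny by default, grant only on verified evidence.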

We also know many organisations have a huge amount of digital debt when it comes to security, with lots of point products, duplicate products and disjointed systems. I think we will see organisations focus more on:

  • Closing the gaps in the Zero Trust strategy – making sure they have adequate protection at each of the layers.
  • Focus on data protection to minimise data breach risk – things like Data Loss Prevention, encryption, conditional access, labelling and data classification etc.
  • Doing more with less – by removing redundant or duplicate products and aligning with tools that better integrate with one another and that can be managed holistically through a single pane of glass.
  • Doubling down on Identity and Access control – moving to passwordless authentication methods, tighter role based access control, time-based access for privileged roles and stricter conditional access policies.

I also think that generative AI has huge potential to strengthen both our awareness of data security and to add an additional layer of visibility and protection. I expect admin tools to become smarter at spotting information oversharing, pockets of risk and potential compromise, with the ability to take action (expect more premium SKUs) to close the gaps, inform information owners or alert Sec Ops teams. I think we will see organisations spend more time looking at risk management and insider risk too.


I could probably go on – there is so much happening, and the pace we saw in 2023 will only continue, if not increase.

Conclusion

This article has discussed some of the major trends and my predictions for AI in 2024, based on the developments, achievements, rumours and general trajectory we saw last year.

In short, my predictions include the improvement of, and competition between, generative AI models; the need for more AI and data-skills training; the legal and ethical challenges of AI-generated content; the rise of AI governance and security policies; the increase in deception and deepfakes; the proliferation of AI wearables; and the role of AI in cyberattacks and defence.

These trends highlight both the opportunities and risks of AI for personal, professional and societal domains, and the importance of being aware of and prepared for the impact of AI in the near future.

Legal Damages Covered by Microsoft for their AI Customers

Microsoft said yesterday in a blog post that they will “pay legal damages on behalf of customers using its artificial intelligence (AI) products if they are sued for copyright infringement for the output generated by such systems“.

In the post, Microsoft said that they will assume responsibility for the potential legal risks arising from any third-party copyright infringement claims, so long as their customers use “the guardrails and content filters” built into their AI-powered products, which include Bing Enterprise Chat and Microsoft 365 Copilot. Microsoft said this functionality is designed to reduce the likelihood that their AI-powered services will return content that infringes copyright.

Microsoft is announcing our new Copilot Copyright Commitment. As customers ask whether they can use Microsoft’s Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.

Microsoft

The Copilot Copyright Commitment will protect customers so long as they have “used the guardrails and content filters we have built into our products”, said Hossein Nowbar [CVP and Chief Legal Officer at Microsoft] in the blog post. Microsoft also pledged to pay related fines or settlements, and said it has taken steps to ensure its Copilots respect copyright.

Microsoft’s pledge comes as part of their ethical-use-of-AI commitments: “We believe in standing behind our customers when they use our products – we are charging our commercial customers for our Copilots, and if their use creates legal issues, we should make this our problem rather than our customers’ problem”.

Generative AI is now everywhere

Generative AI applications leverage existing content including news, images and artwork, and even programming code, and use it to generate new “AI-generated” content which may combine different data sources. Microsoft is embedding much of this technology, powered by their partnership with OpenAI, into core products like Windows 11 and Microsoft 365, which has the potential to put their customers in “legal jeopardy”.

With the proliferation and growing use of generative AI to generate text, images, sounds and other data, people have raised concerns over the technology’s ability to generate content without crediting its original authors. To address this, Microsoft said: “We are sensitive to the concerns of authors, and we believe that Microsoft rather than our customers should assume the responsibility to address them. Even where existing copyright law is clear, generative AI is raising new public policy issues and shining a light on multiple public goals. We believe the world needs AI to advance the spread of knowledge and help solve major societal challenges. Yet it is critical for authors to retain control of their rights under copyright law and earn a healthy return on their creations”.

Protecting and upholding Copyright Laws

Artists, writers and software developers are already filing lawsuits or raising objections about their creations being used without their consent, a trend which has accelerated since the availability of generative AI tools exploded with the release of ChatGPT back in November 2022.

AI breaching copyright is a genuine issue that affects many artists and creators who have had their original works used without permission or compensation.

There are already several lawsuits against AI firms testing issues of copyright. For example, three artists have filed a case against Stability AI (the company behind Stable Diffusion), Midjourney, and DeviantArt, an online art community with its own generator called DreamUp. They allege that the companies unlawfully copied and processed their artworks without permission or licence.

Microsoft say that their Copilot Copyright Commitment extends their existing intellectual property indemnification coverage to copyright claims relating to the use of their AI-powered assistants, the Copilots, and to their AI-powered Bing Chat Enterprise.

Microsoft state in their blog that “we have built important guardrails into our Copilots to help respect authors’ copyrights. We have incorporated filters and other technologies that are designed to reduce the likelihood that Copilots return infringing content. These build on and complement our work to protect digital safety, security, and privacy, based on a broad range of guardrails such as classifiers, meta prompts, content filtering, and operational monitoring and abuse detection, including that which potentially infringes third-party content”.

You can already see evidence of this safety net in tools such as Bing Enterprise Chat, where the tool will do what it can to avoid purposely breaching copyright.


Further Reading

Microsoft on-the-issues blog (source): https://blogs.microsoft.com/blog/2023/06/08/announcing-microsofts-ai-customer-commitments/


Five things you need to know about Microsoft 365 Copilot: https://robquickenden.blog/2023/08/microsoft365copilot-5keythings/

Cisco XDR uses Cohesity to help protect your org from ransomware

Cisco has added ransomware detection and recovery support to its recently unveiled Extended Detection and Response (XDR) system.

Ransomware is a type of malicious software that encrypts the end user’s device and data and demands a ransom for decryption. Ransomware attacks can cause considerable damage to businesses and organisations, disrupting their operations and compromising their data. To combat this threat, Cisco has introduced a new solution that integrates their Extended Detection and Response (XDR) platform with Cohesity’s DataProtect and DataHawk offerings.

Cisco’s XDR system is a cloud-based platform that combines multiple security products and telemetry sources to detect, analyse, and respond to threats across the network and endpoints. As Cisco announced the general availability of the XDR platform, they also announced that they have added ransomware detection and recovery support, enabling Security Operations Center (SOC) teams to automatically protect and restore business-critical data in the event of a ransomware attack.

This feature is made possible by integrating Cisco’s XDR system with Cohesity’s DataProtect and DataHawk offerings, which are well established and trusted, infrastructure and enterprise data backup and recovery solutions. These provide configurable recovery points and mass recovery for systems assigned to a protection plan and can preserve potentially infected virtual machines for forensic investigation and protect enterprise workloads from future attacks.

Cisco said that the exponential growth of ransomware and cyber extortion has made a platform approach crucial to effectively counter adversaries. It also noted that during the second quarter of 2023, the Cisco Talos Incident Response team responded to the highest number of ransomware engagements in more than a year.

The integration of Cisco’s XDR system and Cohesity’s solutions is designed to help Security Operations Centre (SOC) and IT teams automatically detect, snapshot, and restore business-critical data at the very first signs of a ransomware outbreak, often before it has had a chance to move laterally through the network to reach high-value assets.
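That detect-snapshot-restore flow can be sketched as a simple playbook loop: when detection signals for a host cross a threshold, take a recovery snapshot first (so a clean restore point exists), then isolate the host to stop lateral movement. All function and field names here are hypothetical illustrations, not Cisco or Cohesity APIs:

```python
def ransomware_playbook(detection_scores: dict, threshold: float = 0.9) -> list:
    """Sketch of an automated ransomware response playbook.
    detection_scores maps hostname -> detection confidence (0.0-1.0)."""
    actions = []
    for host, score in detection_scores.items():
        if score >= threshold:                 # first signs of encryption behaviour
            actions.append(("snapshot", host))  # preserve a clean recovery point first
            actions.append(("isolate", host))   # then stop lateral movement
    return actions

print(ransomware_playbook({"srv-01": 0.95, "srv-02": 0.2}))
# [('snapshot', 'srv-01'), ('isolate', 'srv-01')]
```

The ordering is the important design point: snapshot before isolation ensures a recovery point is captured even if the isolation step disrupts the host.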

In the announcement, Cisco and Cohesity said that they already have a long-standing partnership, with over 460 joint customers. Cisco said that the Cohesity Cloud Services package will also be available for sale through Cisco channel partners like Cisilion later in 2023. Cohesity Cloud Services include data security and management as well as threat defence, data isolation and backup/recovery. Cisco have also said that the software can be deployed and hosted on both Microsoft Azure and Amazon Web Services (AWS) via their marketplaces.

This brings more features to Cisco’s XDR service (a competitive landscape where they compete against the likes of Microsoft, SentinelOne and Palo Alto) and brings together a myriad of first-party Cisco and third-party security products to control network access, analyse incidents, remediate threats, and automate response, all from a single cloud-based interface. The offering gathers the six telemetry sources that SOC operators say are critical for an XDR solution: endpoint, network, firewall, email, identity, and DNS, Cisco stated in the announcement.

Part of Cisco’s growing Security Portfolio

The Cisco Security portfolio is a comprehensive set of solutions that work together to provide seamless interoperability with your security infrastructure, including third-party technologies. Their growing portfolio covers various aspects of security, such as network security, user and endpoint protection, cloud edge, advanced malware protection, email security, web security and workload security. The Cisco XDR system is part of this portfolio and integrates with other Cisco products and services to detect, analyse, and respond to threats across the network and endpoints.

The Cisco XDR system can leverage threat intelligence from Cisco Talos and the cloud-based Cisco SecureX platform, as well as the backup and recovery solutions from Cohesity, to provide a powerful and proactive defence against ransomware and other advanced threats. It also supports third-party integrations with other security vendors, including Microsoft, Splunk and many others.

Cisco have invested, and continue to invest, heavily in their end-to-end security portfolio, and their XDR solution (as of December 2022) is on the cusp of moving into the Leaders quadrant in the Gartner Magic Quadrant for Endpoint Protection.

Cisco’s XDR play competes against other industry-leading XDR vendors including SentinelOne, Microsoft Defender, CrowdStrike Falcon, Palo Alto Cortex XDR and Trend Micro Vision One.


Conclusion

Ransomware is a serious threat that requires a comprehensive and proactive solution. Cisco’s XDR system, integrated with Cohesity’s DataProtect and DataHawk offerings, provides a powerful way to detect, prevent, and recover from ransomware attacks.

For organisations with a fragmented security portfolio, and those heavily invested in Cisco infrastructure, Cisco’s XDR can be an excellent choice to increase visibility and reduce detection and remediation time through integration with the rest of the Cisco Security portfolio, enhancing the visibility, automation, and effectiveness of security operations.