451 Research Looks at the Impact of AI on the SOC

To gain perspective on the effects of AI in cyber defense, we have partnered with 451 Research by S&P Global Market Intelligence to publish a Business Impact Brief analyzing the state of the Security Operations Center (SOC) and the impact of AI on its evolution. 

The brief is based on the 451 Research Voice of the Enterprise: Information Security survey, which has tracked security professionals across industries since 2020. The survey found that, on average, security teams are unable to investigate 45% of the alerts they receive each day. At 18% of organizations, 75% of the alerts received go uninvestigated. 

The brief analyzes the challenges security teams are facing in the AI-driven threat landscape and assesses the potential business impact of AI SOC solutions across a range of factors, including threat detection, agent-driven remediation, and newly accessible use cases. It also includes predictions for how both attacks and responses will evolve in the near future and how AI will help to transform the role of SOC analysts. 

 

The 451 Research Voice of the Enterprise: Information Security survey found that SOC teams are unable to investigate 45% of the security analytics alerts they receive each day. 

 

Adversaries are using AI to accelerate and rapidly scale attacks, creating significant challenges for security operations teams. As cyber threats proliferate and take a multitude of forms, the volume of data has left many teams experiencing alert fatigue, which poses a major security risk. 

SOC analysts need the ability to quickly review and assess unstructured data from a variety of sources, without moving or reshaping it. Many security teams are seeking to establish a robust data foundation, or data fabric, which allows analysts to identify, triage, and prioritize the most high-risk threats before they inflict damage. 

According to 451 Research, deploying advanced AI-powered systems and data solutions in the SOC is essential to create a single, governed source of truth. Ensuring universal data access enables analysts to automate mundane, repetitive tasks and use their experience, expertise, and contextual awareness to keep the organization safe.
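The triage-and-prioritization idea described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the alert fields, source names, and scoring weights below are assumptions made for the example, not any vendor's actual data model.

```python
from dataclasses import dataclass

# Hypothetical sketch of risk-based alert triage over a unified schema.
# Fields, sources, and weights are illustrative assumptions.
@dataclass
class Alert:
    source: str             # e.g. "edr", "siem", "email-gateway"
    severity: int           # 1 (low) .. 5 (critical)
    asset_criticality: int  # 1 .. 5, importance of the affected asset
    corroborated: bool      # confirmed by more than one data source?

def risk_score(alert: Alert) -> float:
    """Combine severity, asset value, and corroboration into one score."""
    score = float(alert.severity * alert.asset_criticality)
    if alert.corroborated:
        score *= 1.5  # independent confirmation raises confidence
    return score

def triage(alerts: list[Alert], capacity: int) -> list[Alert]:
    """Return the highest-risk alerts that fit the team's daily capacity."""
    return sorted(alerts, key=risk_score, reverse=True)[:capacity]
```

With a normalized schema like this, a capacity-limited team spends its attention on the top of a ranked list rather than on whichever alerts happened to arrive first.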

 

What’s Next for AI-Powered Cybersecurity – Insights From Andesite Leaders and Advisors

As AI-powered cybersecurity redefines our field and geopolitical conflicts and world events reshape the broader landscape, the industry needs to revisit its strategies and rules of engagement. 

 

At Andesite, we are dedicated to arming cybersecurity teams with actionable insights that put humans at the helm, enabling them to make critical decisions and build a sustainable advantage based on prevention rather than reaction. To help you stay one step ahead, we gathered Andesite’s leaders and advisors to get their insights on where security technology for the enterprise market is going. 

 

“Investigation timelines for SOC teams that embrace AI SOC tech will accelerate dramatically, shifting the focus from investigation speed to investigation quality.”

— William MacMillan, Chief Product Officer, Andesite

 

To prepare for what’s next and empower your team to assess risk and make critical decisions, tap into strategic insights from seasoned security experts who’ve served global organizations including the CIA, Microsoft, JP Morgan Chase, CrowdStrike, and AWS. 

 

Expert insights from security leaders:

  • William MacMillan, Chief Product Officer, Andesite
  • Greg Rattray, Chief Strategy and Risk Officer, Andesite
  • Alex Thaman, Chief Technology Officer, Andesite
  • Merritt Baer, Andesite Advisor; Chief Strategy Officer, Enkrypt.AI
  • Kris Merritt, Andesite Advisor; Founder & President, Vector8, Inc.

 


CISO Perspective | The AI SOC: What CISO Buyers Want to Know—and What They Might Be Missing

By Merritt Baer, Chief Security Officer at Enkrypt AI

 

The rapid evolution of AI technology in the last couple of years has transformed the way we work, do business, and secure our critical data. This applies across all sectors and specialties, with particular emphasis on data security and privacy in highly regulated industries. With AI permeating virtually every type of software, app, and system being used in enterprise organizations, CISOs face a new challenge that’s complex and multifaceted: how to decide which vendor to trust in the emerging, and already crowded, AI SOC market.

As someone who regularly meets with other CISOs, I wanted to share some insights about how you can best approach AI SOC vendors, what to look for in an AI SOC solution, and what broader contextual understanding will help guide you toward the right decision for your organization. AI is changing the nature of security work altogether, which directly impacts what AI in the SOC looks like in this brave new world.

New Technology Calls for New Metrics

When looking to invest in new software systems, stakeholders, including the Board and the rest of the C-suite, often expect to see key metrics for proof of ROI. The SOC is no exception. With no inherent expertise in this area, they look to the bottom line—for example, asking how much you’re able to reduce head count by investing in an AI SOC tool.

But this is a bit reductive. What you should be asking instead is how the solution will help your existing team work better. With AI changing the nature of work, we need new metrics to demonstrate how implementing AI across your security organization improves processes and outcomes.

For example, I recently met with the CISO of a financial services organization that’s using AI to relieve loan processors of menial daily tasks so they can focus solely on processing loans. This shift in focus slightly alters their role in the company. While this is a clear example of AI producing a positive change, it’s a change that would not be reflected in the traditional head count metric. The same is true within the SOC. An AI SOC doesn’t necessarily reduce the need for people; it means the team you do have can take a more proactive, rather than reactive, stance.

The Changing Nature of Data and Security

One of the most important factors to consider when comparing AI SOC tools is that we’re not dealing with the same threat landscape that we were a year ago, or even a month ago. In a world where AI is everywhere, threats show up differently—and must be responded to differently, too. Security behaviors must continuously adapt if you want to stay ahead.

Constant change makes AI essential in the SOC. The question is, can you trust it to work completely autonomously? While I’m all-in on AI, I do believe that human oversight is essential. AI and machine learning can (and should) be trusted to handle volume-heavy tasks with greater speed and accuracy, but humans bring deep contextual knowledge to security work that machines simply can’t mimic. So, that’s the first factor to consider when comparing AI SOC vendors: is the solution fully autonomous, or does it keep humans at the helm?

Adapting to Your Specific Needs

One question I often hear from CISOs is: what are successful enterprises and SOCs doing right when it comes to AI? What applications, behaviors, and best practices are similar organizations using to deliver the best possible outcomes when deploying AI? While I’m always happy to talk shop with other security experts, it’s important to understand that each SOC is unique, and what works at one organization may not work at another, even if they’re in the same industry and share traits. 

The very nature of cybersecurity today demands tools that are fully customizable and adaptable to your unique needs. Change is constant. Even if an out-of-the-box solution does what you need it to now, it may not be able to meet your needs in the near future. Investing in a customizable AI platform enables you to incorporate it into your security infrastructure in a way that’s thoughtful, meaningful, and impactful, while also remaining fully adaptable as your SOC needs change. 

Data Processing: To Clean, or Not to Clean

Another important aspect of security operations in this new, AI-fueled world is that the very nature of data itself is changing. It’s proliferating rapidly, and coming from an ever-increasing array of sources—making much of the threat intelligence data your cybersecurity teams deal with unstructured.

Automation can help your SOC handle a higher volume of threat intelligence data. However, it needs to connect all available data sources and tools, and parse and analyze both structured and unstructured data where it lives. Having to extract or ingest data before analysis slows you down, and that won’t cut it in today’s fast-moving threat landscape. When assessing AI vendors, be sure to ask whether the proposed solution requires ETL (extract, transform, load). 

Once all that available data has been analyzed, you also need an AI SOC that surfaces timely, actionable insights. This enables your security operations team to respond at speed, preventing attacks before the damage is done. It’s not about adding another tool to the ecosystem; it’s about separating the signal from the noise so your team can make smarter, more informed decisions about which threats to respond to first.

The Security of AI Itself

Finally, CISOs must carefully assess the security of the AI used by any vendor they’re considering for the SOC. The risks of AI are well known, which is why we’re seeing increasing data security regulations around its use, from the EU AI Act to various state-level regulations in the US, as well as standards laid out by bodies such as the International Organization for Standardization (ISO) and the Financial Industry Regulatory Authority (FINRA).

This means we need repeatable, attestable, defensible, and auditable security as table stakes for any AI SOC solution, no matter the industry or regulatory regime. But more importantly, consider how the AI vendor approaches security and safety. Can you trust them to protect your own network, applications, and data? Can you trust the data they use to train the AI? Are you certain they’ll never use your data for this purpose?

With more and more apps in your environment having AI features built in, whether licensed apps or tools employees use in their daily work, the way we think about perimeters and content is changing, and these interconnected integrations can dramatically reduce an attacker’s time to successful lateral movement. 

 

About Merritt Baer

Merritt is a security executive based in Miami, FL. She serves as Chief Security Officer at Enkrypt AI and advises a small handful of young tech companies, including Andesite and AppOmni. Merritt served in the Office of the CISO at Amazon Web Services for over five years as a Deputy CISO, helping to secure AWS infrastructure at vast scale. She has worked in security in all three branches of the US Government and in the private sector. Her insights on business strategy and tech have been published in Forbes, The Wall Street Journal, VentureBeat, TechCrunch, SC Media, The Baltimore Sun, The Daily Beast, Lawfare, and Talking Points Memo. She is a graduate of Harvard Law School and Harvard College.

Why AI Won’t Replace Us: The Critical Role of Human Oversight in AI-Driven Workflows

The inevitable follow-up question I receive after telling someone I work with artificial intelligence (AI) is some version of, “So, will AI take my job?” This reaction isn’t surprising. Microsoft’s 2024 Workplace Learning Report shows nearly half of workers worry AI might replace them. But this framing misses a crucial nuance about our relationship with technology: the question isn’t whether AI will replace us, but how we can most effectively wield this powerful tool in our work.

By addressing the misconceptions about AI replacing jobs and emphasizing the criticality of human input and oversight in AI-driven workflows, we can shift the conversation from fear to a more productive vision of human-AI collaboration.

Misconception #1: AI Will Outperform Humans in All Tasks

AI systems excel at processing large amounts of data and can help humans perform specific tasks with remarkable speed and accuracy. However, the belief that AI outperforms humans in every domain overlooks its key limitations. While AI is highly effective at pattern recognition, it’s limited by the quality and scope of the data it’s been trained on. Like a fraud detection model that performs well when new cases closely align with the legitimate and fraudulent purchases it’s been trained on but struggles with cases it hasn’t encountered, AI’s capabilities are constrained by the quality, diversity, and completeness of its training data, which is curated by humans.

AI’s disconnect from reality reveals itself when it confronts situations outside its training data. Humans can adapt to new contexts with limited information, drawing on intuition, prior experience, and flexible reasoning. In contrast, AI systems often falter under uncertainty, constrained by statistical patterns rather than conceptual understanding. Many also suffer from temporal rigidity. Trained on fixed snapshots of knowledge, they require human updates to remain current. Take Google’s Bard; it once confidently claimed that the James Webb Space Telescope took the first images of an exoplanet when such images were captured years before the telescope’s launch. This error demonstrates that AI doesn’t know things the way humans do – it predicts them, sometimes incorrectly, based on outdated or misaligned information.

Even powerful tools, like LLMs, lack a true understanding of real-world concepts and relationships. While they can generate coherent text or summarize data, they can’t understand some of the concepts and relationships that humans intuitively grasp. For instance, in cybersecurity, AI can analyze attack patterns based on historical data, but when facing novel threats, it lacks the reflex and intuition that come from years of hands-on experience.

Misconception #2: AI Will Remove the Need for Humans in Decision-Making

AI systems lack any innate moral compass or judgment. Ideas like dignity, justice, and human rights aren’t embedded in their architecture – they’re the product of centuries of philosophical debate, social struggle, and lived experience. That absence makes human oversight non-negotiable. Humans ensure that AI-powered work reflects the values we choose to uphold, not just the patterns we’ve recorded.

Executive decision-making is another area where human judgment remains superior. Business leaders understand what measures make sense at certain junctures based on organizational context, stakeholder needs, and subtle factors like team readiness or financial runway. This requires understanding unwritten rules, past experiences with similar situations, and internal dynamics that AI cannot access. The most effective decisions often integrate quantitative data with qualitative judgment in ways that AI cannot replicate.

Humans also possess creative problem-solving abilities that AI can’t match. While AI primarily recombines patterns from existing data, humans routinely make conceptual leaps that challenge established conventions. Consider Edward Jenner’s development of the smallpox vaccine: his insight didn’t come from structured data but from observing that milkmaids exposed to cowpox didn’t contract smallpox. This lateral thinking – drawing a novel connection from lived, physical experience – sparked a medical revolution. AI might eventually infer such relationships from large datasets, but it lacks the embodied experience and intuitive spark that led Jenner to his discovery. 

Misconception #3: AI Systems Won’t Require Human Oversight

Humans inherently trust other humans more than they trust machines. This comes from our innate understanding of emotional contexts that AI cannot authentically replicate. Humans recognize nuance, respond to emotional cues, and can communicate with genuine empathy. Those capabilities foster trust in ways AI cannot match. 

Accountability is another critical factor. When AI systems make mistakes or cause harm, responsibility ultimately falls to humans. Organizations require clear accountability chains with designated oversight roles and channels for appeals or remediation. People expect that, for decisions impacting their lives, a qualified human will be reviewing the process, ensuring that context, empathy, and moral reasoning are considered. This “human in the loop” approach serves as a critical safeguard against errors and unwittingly unjust outcomes.

Communities also want their values represented in decision-making processes. Human oversight ensures AI systems respect diverse stakeholder perspectives and operate within accepted ethical frameworks. As AI adoption grows, maintaining human involvement enhances legitimacy and upholds public confidence in AI-assisted decisions. 

The Future of Human-AI Collaboration

While AI won’t replace us anytime soon, it will undoubtedly transform how we work. The most successful organizations will be those that leverage AI as a powerful tool for augmentation rather than replacement. This represents an opportunity for humans to focus on what we do best – creative thinking, relationship building, and meaningful work.

As AI handles more routine tasks, humans can dedicate their energy to higher-order thinking. This productivity multiplier effect is already emerging across industries. Radiologists use AI to pre-screen images and focus on difficult cases, cybersecurity teams deploy AI for data analysis and triage while concentrating on higher-impact activities like proactive prevention and remediation, and content creators use AI for research while applying their perspective and creativity to the final product.

For organizations implementing AI, recommended best practices include:

  • Design AI systems with humans at the center – both as end-users and oversight providers. Ensure clear accountability chains with designated human review roles and appeal processes for AI-generated decisions.
  • Implement robust ethical guardrails, including thorough data privacy protections, a transparent explanation of how AI is used, ongoing bias monitoring, and proportional deployment that matches the level of AI autonomy to the risk involved.
  • Focus on skill transformation rather than replacement. As AI adoption grows, new roles like AI ethics specialists and human-AI collaboration managers will emerge.

By embracing AI as a tool, we can build a future where technology advances human potential rather than diminishing it. The most powerful outcomes will come not from AI alone, but from the combination of humans and AI working in concert.

About Stephanie Klaskin

Stephanie Klaskin is a data scientist at Andesite, where she evaluates the AI behind the product and works with security teams to translate customer data into better detection and faster response. Before Andesite, she partnered with clients in healthcare, marketing, and finance to solve repeatable problems with practical data science. She holds an M.A. in Quantitative Methods from the University of Texas at Austin and a B.A. in Cognitive Science from Johns Hopkins University.

Our Secure by Design Pledge

By Dave Brown, Head of Security and Compliance at Andesite

Building software that is secure by design is at the heart of what we at Andesite are passionate about – it’s the core of our mission and what we pursue as a security vendor. That’s why we proudly signed the CISA Secure by Design Pledge. From our founding through general availability, and ever since, we have diligently worked through the Pledge goals to build security and compliance into our product. 

We have developed an internal auditing process with over 450 continuous monitoring controls that constantly validate our work against the Pledge, and we’re proud to openly share that in our Trust Center. That is one of many measures we take to ensure built-in security, compliance, and privacy controls for our customers’ and their customers’ data and networks.

Multi-factor Authentication (MFA)

We are fully committed to implementing multi-factor authentication (MFA). Our Shared Security Responsibilities Matrix specifies that all customers must use their identity provider (IdP) with MFA as part of our commitment to security by default. We integrate with all major identity providers and require that 100% of our customers link their platform instance to their IdP with MFA enforced. We also work with our customers to integrate their IdP and MFA during onboarding.
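As one concrete illustration of what enforcing "IdP with MFA" can mean in practice, an application can inspect the `amr` (Authentication Methods References) claim of an already-verified OIDC ID token, per RFC 8176, to confirm that the identity provider actually performed a second factor. This is a hedged sketch, not Andesite's implementation; accepted method values vary by provider, and the set below is an illustrative assumption.

```python
# Sketch: confirm MFA from the `amr` claim of a *verified* OIDC ID token.
# Token signature and issuer validation are assumed to happen elsewhere.
# RFC 8176 defines values such as "otp" (one-time password), "hwk"
# (hardware key), and "mfa" (generic multi-factor); providers differ,
# so treat this set as an example, not an exhaustive list.
SECOND_FACTOR_METHODS = {"mfa", "otp", "hwk", "swk", "sms"}

def mfa_satisfied(id_token_claims: dict) -> bool:
    """Return True if the amr claim shows a second factor was used."""
    amr = id_token_claims.get("amr", [])
    return any(method in SECOND_FACTOR_METHODS for method in amr)
```

For a token whose `amr` lists a password plus a one-time code (`["pwd", "otp"]`), the check passes; a password alone (`["pwd"]`) would fail, prompting the application to reject the session.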

Default Passwords

The customer is responsible for addressing default passwords, as we require them to use their identity provider and multi-factor authentication for administrator and user access to our platform. Our primary goal is to help reduce their risk by ensuring that they maintain user access through their chosen identity provider and meet multi-factor authentication requirements.

Reducing Entire Classes of Vulnerability

We have made tremendous progress by implementing tools to address vulnerabilities in our systems at three stages. This includes Software Composition Analysis (SCA), Static Application Security Testing (SAST), and Dynamic Application Security Testing (DAST). These tools enable us to identify vulnerabilities throughout our development, staging, and production phases.

Additionally, twice a year we undergo penetration testing and artificial intelligence assessments to ensure our AI systems’ security, compliance, and trustworthiness. We have also partnered with a security company specializing in attack resistance management, continuous assessment, and process enhancement for our Bug Bounty program.

Looking ahead, we are committed to developing a vulnerability notification program for our customers, which will include information on Common Vulnerabilities and Exposures (CVE) and Common Weakness Enumeration (CWE) as part of our comprehensive application security (AppSec) strategy.

Security Patches

As our Shared Security Responsibility Matrix outlines, we are responsible for security patching. We conduct quarterly Approved Scan Vendor (ASV) scans and assessments to prepare for the Payment Card Industry Data Security Standard (PCI DSS). Customers who self-manage our product are responsible for all security patches on those systems.

Evidence of Intrusions

Customer notifications are an essential part of our Incident Response Plan. For confirmed or suspected security incidents, we will collaborate with our customers in good faith to provide the necessary logging to support incident response efforts and meet any regulatory requirements to which the customer must adhere. Customers are fully responsible for evidence of intrusion, logging, or user access, and for providing their IdP with the credentials required for access to their Andesite single-tenant instance.