Andesite CEO Brian Carbaugh and CPO William MacMillan discussed SecOps on CISO Tradecraft

Our CEO Brian Carbaugh and CPO William MacMillan joined G. Mark Hardy for a great episode of CISO Tradecraft. They discussed the Human-AI SOC and how AI is transforming security operations.

They delved into the efficiency, accuracy, and proactive threat detection that AI systems bring to the SOC, and the critical role of contextual data in modern threat detection. The conversation covered the challenges of legacy SIEMs, how AI can help solve alert fatigue, and the sea change offered by a new SOC architecture.

Watch the full interview here.

Andesite’s Chief Product Officer William MacMillan talks with Politico

Our Chief Product Officer, William MacMillan, discussed with Politico’s Dana Nickel the importance of the CISA 2015 cybersecurity law and its treatment in the continuing resolution that ended the latest government shutdown. 

MacMillan discussed the importance of retroactive protections for companies and critical infrastructure operators that continued to share cyber threat data during the shutdown. You can learn more about the conversation and the topic on Politico’s cybersecurity newsletter.

 

Why AI Won’t Replace Us: The Critical Role of Human Oversight in AI-Driven Workflows

The inevitable follow-up question I receive after telling someone I work with artificial intelligence (AI) is some version of, “So, will AI take my job?” This reaction isn’t surprising. Microsoft’s 2024 Workplace Learning Report shows nearly half of workers worry AI might replace them. But this framing misses a crucial nuance about our relationship with technology: the question isn’t whether AI will replace us but how we can most effectively wield this powerful tool in our work.

By addressing the misconceptions about AI replacing jobs and emphasizing the criticality of human input and oversight in AI-driven workflows, we can shift the conversation from fear to a more productive vision of human-AI collaboration.

Misconception #1: AI Will Outperform Humans in All Tasks

AI systems excel at processing large amounts of data and can help humans perform specific tasks with remarkable speed and accuracy. However, the belief that AI outperforms humans in every domain overlooks its key limitations. While AI is highly effective at pattern recognition, it’s limited by the quality and scope of the data it’s been trained on. Like a fraud detection model that performs well when new cases closely align with the legitimate and fraudulent purchases it’s been trained on but struggles with cases it hasn’t encountered, AI’s capabilities are constrained by the quality, diversity, and completeness of its training data, which is curated by humans.
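The out-of-distribution limitation described above can be shown with a minimal sketch. The data, dollar amounts, and "model" here are entirely hypothetical and illustrative, not drawn from any real fraud system: a classifier fit only to the fraud patterns it has seen will confidently wave through a novel pattern that looks nothing like its training data.

```python
import random

random.seed(0)

# Hypothetical training data: legitimate purchases cluster near $40,
# known fraud clusters near $900. (Illustrative numbers only.)
legit = [(random.gauss(40, 10), 0) for _ in range(200)]
fraud = [(random.gauss(900, 50), 1) for _ in range(200)]
train = legit + fraud

# A minimal "model": classify by distance to each class mean.
mean = lambda xs: sum(xs) / len(xs)
legit_mean = mean([x for x, y in train if y == 0])
fraud_mean = mean([x for x, y in train if y == 1])

def predict(amount):
    """Return 0 (legitimate) or 1 (fraud) by nearest class mean."""
    return 0 if abs(amount - legit_mean) < abs(amount - fraud_mean) else 1

# In-distribution cases are handled correctly...
assert predict(35) == 0 and predict(950) == 1

# ...but a novel pattern -- a burst of $2 "card-testing" charges --
# resembles the legitimate training data far more than the known fraud,
# so every one of them is classified as legitimate.
flagged = sum(predict(2) for _ in range(20))
print(flagged)  # prints 0: no micro-charge is flagged
```

The model isn't broken; it is doing exactly what its training data taught it. Recognizing that a flood of tiny charges is itself suspicious is the kind of contextual judgment the surrounding text attributes to human analysts.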

AI’s disconnect from reality reveals itself when it confronts situations outside its training data. Humans can adapt to new contexts with limited information, drawing on intuition, prior experience, and flexible reasoning. In contrast, AI systems often falter under uncertainty, constrained by statistical patterns rather than conceptual understanding. Many also suffer from temporal rigidity. Trained on fixed snapshots of knowledge, they require human updates to remain current. Take Google’s Bard; it once confidently claimed that the James Webb Space Telescope took the first images of an exoplanet when such images were captured years before the telescope’s launch. This error demonstrates that AI doesn’t know things the way humans do – it predicts them, sometimes incorrectly, based on outdated or misaligned information.

Even powerful tools, like LLMs, lack a true understanding of real-world concepts and relationships. While they can generate coherent text or summarize data, they can’t understand some of the concepts and relationships that humans intuitively grasp. For instance, in cybersecurity, AI can analyze attack patterns based on historical data, but when facing novel threats, it lacks the reflex and intuition that come from years of hands-on experience.

Misconception #2: AI Will Remove the Need for Humans in Decision-Making

AI systems lack any innate moral compass or judgment. Ideas like dignity, justice, and human rights aren’t embedded in their architecture – they’re the product of centuries of philosophical debate, social struggle, and lived experience. That absence makes human oversight non-negotiable. Humans ensure that AI-powered work reflects the values we choose to uphold, not just the patterns we’ve recorded.

Executive decision-making is another area where human judgment remains superior. Business leaders understand what measures make sense at certain junctures based on organizational context, stakeholder needs, and subtle factors like team readiness or financial runway. This requires understanding unwritten rules, past experiences with similar situations, and internal dynamics that AI cannot access. The most effective decisions often integrate quantitative data with qualitative judgment in ways that AI cannot replicate.

Humans also possess creative problem-solving abilities that AI can’t match. While AI primarily recombines patterns from existing data, humans routinely make conceptual leaps that challenge established conventions. Consider Edward Jenner’s development of the smallpox vaccine: his insight didn’t come from structured data but from observing that milkmaids exposed to cowpox didn’t contract smallpox. This lateral thinking – drawing a novel connection from lived, physical experience – sparked a medical revolution. AI might eventually infer such relationships from large datasets, but it lacks the embodied experience and intuitive spark that led Jenner to his discovery. 

Misconception #3: AI Systems Won’t Require Human Oversight

Humans inherently trust other humans more than they trust machines. This comes from our innate understanding of emotional contexts that AI cannot authentically replicate. Humans recognize nuance, respond to emotional cues, and can communicate with genuine empathy. Those capabilities foster trust in ways AI cannot match. 

Accountability is another critical factor. When AI systems make mistakes or cause harm, responsibility ultimately falls to humans. Organizations require clear accountability chains with designated oversight roles and channels for appeals or remediation. People expect that, for decisions impacting their lives, a qualified human reviews the process, ensuring that context, empathy, and moral reasoning are considered. This “human in the loop” approach serves as a critical safeguard against errors and unintended, unjust outcomes.

Communities also want their values represented in decision-making processes. Human oversight ensures AI systems respect diverse stakeholder perspectives and operate within accepted ethical frameworks. As AI adoption grows, maintaining human involvement enhances legitimacy and upholds public confidence in AI-assisted decisions. 

The Future of Human-AI Collaboration

While AI won’t replace us anytime soon, it will undoubtedly transform how we work. The most successful organizations will be those that leverage AI as a powerful tool for augmentation rather than replacement. This represents an opportunity for humans to focus on what we do best – creative thinking, relationship building, and meaningful work.

As AI handles more routine tasks, humans can dedicate their energy to higher-order thinking. This productivity multiplier effect is already emerging across industries. Radiologists use AI to pre-screen images and focus on difficult cases, cybersecurity teams deploy AI for data analysis and triage while concentrating on higher-impact activities like proactive prevention and remediation, and content creators use AI for research while applying their perspective and creativity to the final product.

For organizations implementing AI, recommended best practices include:

  • Design AI systems with humans at the center – both as end-users and oversight providers. Ensure clear accountability chains with designated human review roles and appeal processes for AI-generated decisions.
  • Implement robust ethical guardrails, including thorough data privacy protections, a transparent explanation of how AI is used, ongoing bias monitoring, and proportional deployment that matches the level of AI autonomy to the risk involved.
  • Focus on skill transformation rather than replacement. As AI adoption grows, new roles like AI ethics specialists and human-AI collaboration managers will emerge.

By embracing AI as a tool, we can build a future where technology advances human potential rather than diminishing it. The most powerful outcomes will come not from AI alone, but from the combination of humans and AI working in concert.

About Stephanie Klaskin

Stephanie Klaskin is a data scientist at Andesite, where she evaluates the AI behind the product and works with security teams to translate customer data into better detection and faster response. Before Andesite, she partnered with clients in healthcare, marketing, and finance to solve repeatable problems with practical data science. She holds an M.A. in Quantitative Methods from the University of Texas at Austin and a B.A. in Cognitive Science from Johns Hopkins University.

Our CEO, Brian Carbaugh, talked with Channel 8 News NOW at Black Hat

Carbaugh, a former CIA operative who was also part of the first U.S. team deployed to Afghanistan following the 9/11 attacks, was interviewed during Black Hat to talk about his perspective on some of the industry’s biggest challenges: rising AI-driven threats and a shrinking pool of skilled defenders.

Reflecting on his background, he explained, “I spent so much of my time focused on counterterror direct kinetic physical threats to the United States…But you realize playing out in the background all along are those cyber threats, that are persistent, that are coming every minute of the day.”

“The community here in Las Vegas has felt the impact of these attacks across a broad array of targets,” Carbaugh said. “It does highlight the importance of the conference, bringing together people to solve challenges, we’re all feeling it, this pressure.”

Andesite CPO William MacMillan discusses the SOC burnout crisis at The Pair Program

Our Chief Product Officer, William MacMillan, and Lucas Moody, SVP & CISO at Alteryx, joined the crew at HatchPad’s The Pair Program to discuss a pressing issue: SOC analyst burnout.

The conversation focused on how to reverse the skyrocketing burnout in SOC teams, and how AI can support rather than replace analysts. They emphasized the role of curiosity and creativity in modern cybersecurity and why junior analysts are essential to ensure a sustainable future for cyber defense.

MacMillan shared insights about the shift towards an AI-driven decision-layer built to empower analysts and what is next for Human-AI collaboration in cybersecurity.

 

Human-AI Collaboration is key to secure government systems, Andesite CPO William MacMillan tells GovCast

GovCast interviewed Andesite Chief Product Officer William MacMillan to talk about the role of Human-AI collaboration in national security.

Artificial intelligence powers many cybersecurity applications, and government agencies are increasingly using AI to augment systems in national security and intelligence capacities. The complexities of AI implementation require careful architectural considerations and robust governance frameworks to ensure safe execution.

William MacMillan, former CISO at CIA and current chief product officer at Andesite AI, noted how AI holds tremendous potential to enhance efficiency and accuracy, particularly through “human in the loop” systems that manage vast amounts of data.

MacMillan also talks about the critical role of leadership in establishing international AI standards and the necessity of user training and human-AI collaboration for effective implementation.

 

Andesite signs the Cloud Security Alliance AI Trustworthy Pledge

At Andesite, we take AI security seriously. With our Safe AI Architecture™, we’ve built guardrails to protect customers’ networks and data. We use encryption at rest, in transit, and in storage, and do not train our AI with customer data.

That’s why it made so much sense for us to sign CSA’s AI Trustworthy Pledge, a public commitment to develop and manage AI responsibly.

The Pledge emphasizes our dedication to AI safety best practices, and our alignment with the four core principles of trusted AI: 

  • Safe and Compliant Systems: I will design, develop, deploy, operate, manage, and adopt AI systems that are safe for users and comply with applicable laws and regulations. 
  • Transparency: I will foster transparency about the AI systems I design, develop, deploy, operate, manage, and adopt. 
  • Ethical Accountability: I commit to ethical design, development, deployment, operation, and management of my AI systems and take responsibility for the outcomes, ensuring fairness and explainability. 
  • Privacy Practices: I will protect personal data with the highest standards of privacy. 

The Cloud Security Alliance (CSA) is the world’s leading organization dedicated to defining and raising awareness of best practices to help ensure a secure cloud computing environment.

CSA harnesses the subject matter expertise of industry practitioners, associations, governments, and its corporate and individual members to offer cloud security-specific research, education, certification, events and products.


CSA’s activities, knowledge and extensive network benefit the entire community impacted by cloud — from providers and customers, to governments, entrepreneurs and the assurance industry — and provide a forum through which diverse parties can work together to create and maintain a trusted cloud ecosystem.

 

Our Secure by Design Pledge

By Dave Brown, Head of Security and Compliance at Andesite

Building software that is secure by design is at the heart of what we at Andesite are passionate about – it’s the core of our mission and what we pursue as a security vendor. That’s why we proudly signed the CISA Secure by Design Pledge. From our founding through general availability, and ever since, we have diligently worked through the Pledge goals to build security and compliance into our product.

We have developed an internal auditing process with over 450 continuous monitoring controls that constantly validate our work against the Pledge, and we’re proud to openly share that in our Trust Center. That is one of many measures we take to ensure built-in security, compliance, and privacy controls for our customers’ and their customers’ data and networks.

Multi-factor Authentication (MFA)

We are fully committed to implementing multi-factor authentication (MFA). Our Shared Security Responsibilities Matrix outlines that all customers must use their identity provider (IdP) with MFA as part of our commitment to security by default. We integrate with all major identity providers and require that 100% of our customers link their platform instance to their IdP with MFA enforced. We also work with customers to integrate their IdP and MFA during the onboarding process.

Default Passwords

Customers are responsible for addressing default passwords, as we require them to use their identity provider and multi-factor authentication for administrator and user access to our platform. Our primary goal is to help reduce their risk by ensuring that they maintain user access through their chosen identity provider and meet multi-factor authentication requirements.

Reducing Entire Classes of Vulnerability

We have made significant progress by implementing tooling that addresses vulnerabilities at three stages: Software Composition Analysis (SCA), Static Application Security Testing (SAST), and Dynamic Application Security Testing (DAST). Together, these tools identify vulnerabilities across our development, staging, and production phases.

Additionally, twice a year we undergo penetration testing and AI assessments to verify the security, compliance, and trustworthiness of our AI systems. We have also partnered with a security company specializing in attack resistance management, continuous assessment, and process enhancement to run our Bug Bounty program.

Looking ahead, we are committed to developing a vulnerability notification program for our customers, which will include information on Common Vulnerabilities and Exposures (CVE) and Common Weakness Enumeration (CWE) as part of our comprehensive application security (AppSec) strategy.

Security Patches

As our Shared Security Responsibility Matrix outlines, we are responsible for security patching. We conduct quarterly Approved Scanning Vendor (ASV) scans and assessments to prepare for the Payment Card Industry Data Security Standard (PCI DSS). Customers who self-manage our product are responsible for all security patches on those systems.

Evidence of Intrusions

Customer notifications are an essential part of our Incident Response Plan. For confirmed or suspected security incidents, we will collaborate with our customers in good faith to provide the necessary logging to support incident response efforts and meet any regulatory requirements to which the customer must adhere. Customers are fully responsible for their own evidence-of-intrusion, logging, and user-access records, and for managing the IdP credentials required for access to their Andesite single-tenant instance.

Andesite Named Trusted Cloud Provider by Cloud Security Alliance

Andesite is proud to announce that it has earned the Trusted Cloud Provider trustmark from the Cloud Security Alliance (CSA).



 

Andesite Raises Additional $23 Million and Announces General Availability of the Bionic SOC

MCLEAN, Va., Feb. 11, 2025 (GLOBE NEWSWIRE) — Andesite AI (Andesite) today announced the General Availability of the bionic Security Operations Center (SOC), its human-AI collaboration product empowering cyber defense teams. Additionally, Andesite revealed that it secured an additional $23 million in capital as a second tranche of seed funding from General Catalyst and Red Cell Partners. The investment brings Andesite’s total funding to $38.25 million and is the result of the company’s ahead-of-schedule achievement of technology, customer acquisition, and revenue milestones.