Top

Securing the future: Mindgard’s vision for AI security testing

Artificial Intelligence is rapidly transforming industries, but with its expansion comes an often-overlooked challenge: security. Many organizations underestimate the unique cybersecurity risks AI systems introduce, leaving them vulnerable to attacks that traditional security measures cannot detect. While AI is ultimately software and data running on hardware, its complexity creates new security threats that require specialized solutions. Enter Mindgard, a pioneering force in AI security testing. With a mission to secure the world’s AI, Mindgard is at the forefront of identifying and mitigating AI-specific risks. The company’s expertise stems from deep academic research and industry-first innovations, including Dynamic Application Security Testing for AI (DAST-AI)—a revolutionary approach that uncovers vulnerabilities only detectable when AI systems are operational. In this interview, we speak with a leading expert from Mindgard about their ultimate goal in AI security, how they differentiate from competitors, and why DAST-AI is a game-changer in the field.

What is Mindgard’s ultimate goal in the field of AI security, and how do you envision shaping the future of AI security testing?

Our mission is to secure the world’s AI. However, many organisations underestimate or don’t know the unique cybersecurity risks introduced by AI systems. According to Gartner, 29% of organisations deploying AI have experienced security breaches, yet only 10% of internal auditors have visibility into AI risks. Similarly, Salesforce reports that only 11% of CIOs have fully implemented AI because of security and data concerns.

I spend a considerable amount of time demystifying AI security, even with seasoned technologists who are experts in infrastructure security and data protection. At the end of the day, AI is still essentially software and data running on hardware. But it introduces new risks, creating a complex security landscape that traditional tools cannot address.

How does Mindgard differentiate itself from competitors in the AI security testing market?

What really sets us apart is our foundation at the intersection of AI and security. We bring deep academic expertise into the mix. As a professor at Lancaster University, I oversee a considerable portion of the UK’s AI security PhD candidates – some of the brightest minds out there – and we actively tap into that talent to drive research and innovation.

This expertise gives us an undeniable edge, which has accelerated the development of the most comprehensive threat library in the industry. On top of that, we’re the first and only provider of Dynamic Application Security Testing for AI (DAST-AI), which identifies vulnerabilities that only manifest at runtime.

Can you explain how your Dynamic Application Security Testing for AI (DAST-AI) solution works and why it’s considered industry-first?

Dynamic Application Security Testing (DAST) is a methodology used to identify vulnerabilities in a running application by simulating real-world attack scenarios. It tests applications from the outside without requiring access to the source code. It’s a security team’s go-to tool for catching runtime vulnerabilities and ensuring applications are secure before release or during continuous deployment cycles.

Mindgard is the first AI security testing solution to apply the concepts of DAST to AI. Our solution acts akin to an external adversary probing for weak spots within the system. Mindgard’s DAST-AI tests a running AI system, which includes the LLM, RAG, and all the other components that make up the entire operational model. By testing an instantiated model, Mindgard can find vulnerabilities that only surface when a system is running. In other words, vulnerabilities can’t be detected by static code analysis and would be very expensive to find via manual testing.

What industries have benefited the most from your solutions, and can you share specific success stories or case studies?

Anyone that is building an AI application, purchasing AI for their organisation, or performing AI red teaming or pen testing benefits from Mindgard.

Mindgard’s solution can be used across multiple industries and by teams with different roles. Security teams use it to gain visibility into AI risks and respond quickly. They test and evaluate AI guardrails and web application firewall (WAF) solutions and assess risks between tailored AI models and baseline models. Penetration testers and security analysts use Mindgard to scale their AI red teaming efforts, while developers benefit from integrated continuous testing of their AI deployments.

How does Mindgard help organizations meet compliance requirements and ensure their AI systems are auditable?

Mindgard provides empirical evidence of AI risk to the business for reporting and compliance purposes. MITRE and OWASP-compatible security posture reporting map Mindgard’s test results into the context of our customers’ security workflows and threat models and make AI security actionable and auditable for compliance teams.

What are the biggest challenges in staying ahead of the ever-evolving AI threat landscape?

We operate within an emerging market segment where researchers and engineers are only scratching the surface of risks within AI systems. This has meant a lot of education to both investors and customers (although the conversations have shifted away from hypotheticals towards more grounded problems). We have overcome this by being consistent with our messaging to prospects and investors to demonstrate and demystify AI risk.

What are your predictions for the future of AI security, and where do you see Mindgard’s role in shaping that future?

Heading into 2025, we expect perspectives to shift as AI security becomes increasingly difficult to ignore. High-profile AI security incidents will drive home the message: AI systems can’t be deployed unless they’re secure, the same as any software. However, as our customers’ (developers and security practitioners) understanding of AI security deepens, so will their expectations. Our recent fundraising will accelerate our ability to demonstrate the problems of AI security and invest in R&D so that our customers can stay ahead of increasingly complex threats.

Dr Peter Garraghan is CEO & co-founder at Mindgard, the leader in Artificial Intelligence Security Testing. Founded at Lancaster University and backed by cutting-edge research, Mindgard enables organizations to secure their AI systems from new threats that traditional application security tools cannot address. As a Professor of Computer Science at Lancaster University, Peter is an internationally recognized expert in AI security. He has devoted his career to developing advanced technologies to combat the growing threats facing AI. With over €11.6 million in research funding and more than 60 published scientific papers, Peter’s contributions span both scientific innovation and practical solutions.

Andriani has been working in Publishing Industry since 2010. She has worked in major Publishing Houses in UK and Greece, such as Cambridge University Press and ProQuest. She gained experience in different departments in Publishing, including editing, sales, marketing, research and book launch (event planning). She started as Social Media Manager in 4i magazine, but very quickly became the Editor in Chief. At the moment, she lives in Greece, where she is mentoring women with job and education matters; and she is the mother of 3 boys.