Interview With Kristian Kamber - Co-Founder and CEO of SplxAI

Shauli Zacks
Shauli Zacks Content Editor
Updated on: January 13, 2025
Shauli Zacks Shauli Zacks
Updated on: January 13, 2025 Content Editor

Generative AI is revolutionizing industries, but its rapid adoption brings unprecedented security challenges. Kristian Kamber, Co-Founder and CEO of SplxAI, leverages over a decade of experience in software and IT sales, including roles at leading companies like Zscaler, to address these critical issues. With a deep understanding of enterprise needs and the evolving tech landscape, Kamber co-founded SplxAI in 2023 to focus on securing large language models (LLMs) and helping organizations safely harness AI’s potential.

In this SafetyDetectives interview, Kamber discusses SplxAI’s innovative approach to offensive AI security, the emerging threats in generative AI, and the proactive measures companies must take to safeguard their AI systems. He also sheds light on how SplxAI is shaping the future of secure AI deployment with cutting-edge technologies and continuous research.

Can you introduce yourself and talk about your current role at SplxAI?

I’ve been working in software and IT sales for more than a decade, with experience at some notable Silicon Valley companies like AppDynamics and, most recently, Zscaler. My career has been centered on serving large enterprise customers, and I was able to achieve milestones like closing the largest ACV deal ever in Europe and leading incredibly talented sales teams.

Being immersed in the software and cybersecurity markets for so long has given me a unique perspective on the industry’s trajectory and the tech trends shaping the future. These insights eventually led me to co-found SplxAI in 2023. As the CEO, my primary focus is driving our global sales and go-to-market strategy, with a strong emphasis on the U.S., where our headquarters is based. I’m also responsible for establishing strategic partnerships and overseeing our fundraising efforts.

The transition from my role at Zscaler, a global leader in cloud security, to launching SplxAI was a natural next step. I co-founded the company with Ante Gojsalic, our CTO, who brings a wealth of expertise and experience in data science and AI. Together, we identified a critical need: as organizations race to adopt generative AI (GenAI), large language models (LLMs) introduce whole new types of risks that existing cybersecurity solutions are not able to address effectively.

In nearly every conversation we have with customers and prospects, we see a similar pattern – many of them are building in-house solutions on top of LLMs, but due to the inherent risks, these apps rarely make it into production. This results in wasted time, resources, and missed opportunities. SplxAI was born out of our mission to help enterprises and organizations safely and securely adopt these new technologies while keeping the risks at a minimum. It’s an exciting journey, and I’m excited to be at the forefront of this rapidly evolving industry.

What are the core services SplxAI offers and how does it stand out in the growing field of AI security?

At SplxAI, we’ve positioned ourselves as leaders in automating the offensive side of AI security. Our core services include comprehensive AI risk assessments and AI red teaming, all powered by a cutting-edge platform designed to help organizations deploy secure large language model (LLM) systems.

What sets us apart is our focus on offensive AI security and our shift-left approach. While many players in the AI security space emphasize defensive measures – such as providing guardrails for inline protection to filter inputs and outputs of LLMs – SplxAI addresses the need for offensive strategies. We enable enterprises to proactively identify vulnerabilities before attackers can exploit them.

What makes our platform stand out among competitors are the complexity and depth of our attack database, which is continuously updated with the latest zero-day vulnerabilities specific to LLMs. Beyond that, we are the only platform that fully automates testing for multi-modal vulnerabilities. This means we go beyond text-to-text interactions to address inputs from image uploads and voice-to-voice interactions – modalities that are quickly becoming the new standard as we move into 2025.

Another key differentiator is our dynamic remediation feature. Once vulnerabilities are identified, our platform automatically hardens system prompts – the core instructions for LLM apps – helping to secure these systems by embedding security and safety policies at the model layer. Additionally, we offer monitoring capabilities that actively scan for adversarial activity in real time. By analyzing logs and automating the threat-hunting process, we can identify and flag malicious inputs as they occur. This unique combination of offensive automation, multi-modal vulnerability testing, remediation features, and real-time monitoring positions SplxAI as a leader in an increasingly crowded AI security landscape, as most recently recognized by the OWASP community as well. We’re helping organizations stay ahead of security and safety risks in Generative AI and use the technology as a competitive advantage.

AI security is becoming increasingly critical. What are some of the biggest trends you’re seeing in AI security, particularly with Generative AI applications?

Generative AI continues to captivate the enterprise world, and its momentum is not slowing down. However, the fast pace of innovation is creating significant challenges for security teams, as they struggle to stay ahead of emerging and evolving threats. This year, in particular, is shaping up to be crucial for the AI security industry as we witness the rise of new technologies like Voice AI and Agentic AI workflows.

Voice AI assistants, built on top of large language models (LLMs), introduce completely new and unique security concerns. These assistants are inherently more complex to protect compared to traditional text-based Generative AI systems. One of the more alarming risks is the growing prevalence of deepfake exploits, which are becoming more sophisticated and frequently observed in real-world scenarios.

Agentic AI workflows, where autonomous AI systems communicate and execute tasks without direct human oversight, represent another emerging challenge. The lack of human intervention and the agents’ ability to “think” for themselves increase the complexity of securing these systems, as their autonomy can lead to vulnerabilities that currently no solution is able to address completely.

These trends highlight the urgent need for the industry to double down on active research. Unlike traditional cybersecurity, AI security is highly dependent on continuous research efforts due to the non-deterministic nature of Generative AI. We’re already seeing significant investments from major companies into AI research and dedicated security teams, as they recognize the importance of making this technology safe and trustworthy for widespread adoption.

At SplxAI, we believe that research-driven solutions are the key to addressing these challenges, and we’re committed to helping organizations navigate the rapidly evolving landscape of AI security.

What types of vulnerabilities do you find most common in AI systems, and how can organizations proactively address them?

One of the most common vulnerabilities we encounter are AI assistants that unintentionally expose sensitive and confidential business logic. These issues often arise from poorly engineered system prompts or misconfigured guardrails, leaving the system vulnerable to adversarial manipulation. In such cases, attackers can exploit these weaknesses to extract sensitive data that the large language model (LLM) has access to.

Another prevalent issue we find in internal AI assistants and agents designed to retrieve information from enterprise knowledge bases, such as Jira or Confluence. While these tools are invaluable for streamlining information retrieval for employees, they often compromise security by exposing confidential business information and data that should only be accessible to specific groups. The obscure nature of these AI workflows, especially when integrated with existing databases, makes securing them in real-time particularly challenging.

To address these risks, organizations must adopt a proactive approach. Tools like the SplxAI Platform enable thorough risk assessments and red teaming, which are critical for identifying vulnerabilities early in the design and development phases. By incorporating proactive testing and risk mitigation procedures upfront, organizations can protect their systems from vulnerabilities when launched into production.

Unfortunately, many companies still treat AI security as an afterthought, only addressing it after an exploit has occurred. This reactive approach is risky, as we’ve already seen numerous incidents where sensitive company data was compromised due to unsecured LLM applications. The lesson here is clear: AI security needs to be a priority from the start, not an after-the-fact consideration. By embedding robust security practices into the development lifecycle, organizations can safeguard their AI systems and protect their sensitive data.

As companies adopt AI at scale, what key steps should they take to safeguard their systems against potential risks?

Safeguarding Generative AI systems at scale requires organizations to adopt proven best practices. The first and most critical step is extensive testing and evaluation before deploying these systems, particularly when they’re intended for public or enterprise-wide use. It’s important to note that no AI system can ever be completely risk-free – there will always be unknown or undiscovered vulnerabilities. However, the goal is to minimize risks to an acceptable level by protecting against all currently known attack vectors and strategies relevant to the specific domain of the AI application.

One of the most effective ways to launch secure Generative AI systems is by engineering hardened system prompts. These prompts should embed comprehensive security and safety policies tailored to the application’s use case. Additionally, AI engineering teams should implement custom guardrails that filter LLM queries and protect sensitive data from accidental exposure. These guardrails need to strike a careful balance – they must not be overly restrictive, which could limit the system’s functionality, or overly permissive, which could be a door opener for new vulnerabilities.

Input and output validation through the user interface (UI) is another highly effective measure. By placing filters between the end-user and the LLM, organizations can ensure that all incoming queries and outgoing responses are screened for potential exploits or malicious activities. This added layer of protection helps reduce the likelihood of security breaches and ensures that the AI system operates within safe boundaries.

Ultimately, safeguarding AI systems at scale is about creating a comprehensive security framework that incorporates testing, hardening, guardrails, and continuous validation. By prioritizing these measures from the design phase onward, companies can mitigate risks and confidently adopt AI technologies at scale.

What role do you see platforms like SplxAI playing in shaping the future of secure AI development and deployment?

Platforms like SplxAI are essential for empowering developers to build and deploy AI systems without constantly worrying about potential security and safety risks. By providing automated tools and frameworks for AI risk assessments, red teaming, and vulnerability testing, we enable AI and security teams to focus on innovation while ensuring their systems are robust and secure.

A key aspect of platforms like ours is the ability to track and address zero-day vulnerabilities continuously. These vulnerabilities often emerge quickly and unpredictably in the ever-evolving GenAI landscape. To stay ahead, SplxAI relies on continuous usage and crowd-sourced insights to monitor the latest attack types and vulnerabilities that teams might encounter with their specific AI apps and assistants.

This ongoing feedback loop is paramount to staying ahead of the threat landscape. It ensures we can provide our users with the newest zero-day attack scenarios and keep their AI security postures up to date. By integrating these capabilities into a streamlined platform, SplxAI helps organizations adopt secure AI development practices from the ground up, shaping a future where safe and trustworthy AI becomes the standard.

About the Author
Shauli Zacks
Shauli Zacks
Content Editor
Updated on: January 13, 2025

About the Author

Shauli Zacks is a content editor at SafetyDetectives.

He has worked in the tech industry for over a decade as a writer and journalist. Shauli has interviewed executives from more than 350 companies to hear their stories, advice, and insights on industry trends. As a writer, he has conducted in-depth reviews and comparisons of VPNs, antivirus software, and parental control apps, offering advice both online and offline on which apps are best based on users' needs.

Shauli began his career as a journalist for his college newspaper, breaking stories about sports and campus news. After a brief stint in the online gaming industry, he joined a high-tech company and discovered his passion for online security. Leveraging his journalistic training, he researched not only his company’s software but also its competitors, gaining a unique perspective on what truly sets products apart.

He joined SafetyDetectives during the COVID years, finding that it allows him to combine his professional passions without being confined to focusing on a single product. This role provides him with the flexibility and freedom he craves, while helping others stay safe online.

Leave a Comment