SplxAI grabs $7M  to protect AI systems from emerging threats

SplxAI founders

Traditional security testing methods often fall short and struggle to match the swift progress in AI technology. With the rising necessity to protect AI chatbots and conversational AI systems — widely adopted by businesses yet fraught with considerable security risks — SplxAI introduces a forward-thinking approach to AI security. Consistently monitoring and testing AI systems for new threats, the company ensures that organisations remain one step ahead of possible attacks.

New York-based SplxAI has raised $7M in seed funding to enhance its platform for securing agentic AI systems. LAUNCHub Ventures led the funding round, which also included Rain Capital, Inovo, Runtime Ventures, DNV Ventures, and South Central Ventures.

This investment will accelerate the development and adoption of SplxAI’s platform, enabling organisations to secure their internal AI agents and customer-facing applications through automated testing, dynamic remediation, and continuous monitoring.

Leading the way in automated AI security

SplxAI was founded in 2023 by Kristian Kamber and Ante Gojsalić. Kamber, with a background in software and IT sales, including roles at companies like Zscaler, and Gojsalić, an AI consultant, recognized the critical need to address the security vulnerabilities in AI chatbots and large language models (LLMs) as organizations rapidly adopt these technologies.

Additionally, AI red teamers and researchers who excelled in capture-the-flag contests with Wiz and Black Hat are part of the team. Since launching its platform in August 2024, the company has achieved 127% quarter-over-quarter growth. Major clients include KPMG, Infobip, Brand Engagement Network, and Glean. SplxAI recently introduced Agentic Radar, an open-source software tool that maps dependencies in agentic workflows and identifies security gaps through static code analysis.

SplxAI has also achieved SOC2 Type I compliance, demonstrating its commitment to maintaining stringent security standards and operational security.

SplxAI was founded to address the “gaping security holes” in conversational AI systems, which are increasingly used by businesses but pose significant security risks. The founders recognised that traditional security solutions fall short in protecting applications built on LLMs, leading to a need for specialized AI security solutions

Kristian Kamber, CEO and co-founder of SplxAI, emphasised the company’s mission: “Our mission is to re-define how security leaders and AI practitioners test their AI applications and agentic workflows.” He highlighted the complexity of deploying AI agents at scale, noting that manual testing is no longer feasible given the rapid advancement of large language models (LLMs). “SplxAI’s advanced platform is the only scalable solution for securing agentic AI, providing security leaders with the tools they need to embrace AI confidently,” Kamber added.

Addressing the evolving AI security threat landscape

SplxAI’s mission is to secure and safeguard GenAI-powered conversational apps by providing advanced security and pentesting solutions. AI agents can enhance efficiency in financial institutions but also pose security risks. SplxAI’s platform offers ongoing automated security assessments to ensure these AI systems function safely and effectively.

Their goal is to empower enterprises to confidently adopt AI by delivering solutions tailored to the unique vulnerabilities of LLM-powered systems. They aim to ensure that AI assistants and agents remain effective, secure, and reliable, allowing organizations to leverage AI’s transformative power without compromising security.

SplxAI has collaborated with Hackrate, combining ethical hacking with automated pentesting to enhance AI security. This partnership highlights SplxAI’s proactive approach to addressing evolving AI security challenges.

Projections indicate that 33% of enterprise applications will incorporate agentic AI by 2028. This transition from simple LLM assistants to complex AI workflows introduces new security risks. SplxAI’s platform mitigates these threats by simulating sophisticated adversarial scenarios. It mimics the tactics of highly skilled attackers. By automatically detecting and neutralizing potential attack vectors, the platform enables businesses to deploy AI agents confidently. They won’t have to worry about prompt injections, off-topic responses, or hallucinations.

In line with broader cybersecurity trends, such as the adoption of AI for threat hunting and zero-trust architectures, SplxAI’s innovative approach is well-timed. Their solutions not only protect AI systems but also safeguard the reputation and operational integrity of businesses relying on AI chatbots.

Stan Sirakov, General Partner at LAUNCHub Ventures, expressed confidence in SplxAI’s approach: “SplxAI is the only vendor with a plan for managing this risk at scale. We are proud to advance the state of agentic AI security through our investment in SplxAI.” Sirakov will join SplxAI’s Board of Directors with this funding. Additionally, Sandy Dunn, former Brand Engagement Network CISO, will lead the development of SplxAI’s Governance, Risk, and Compliance offering as Chief Information Security Officer.

The post SplxAI grabs $7M  to protect AI systems from emerging threats appeared first on Tech Funding News.

Facebook
Twitter
LinkedIn

Related Posts

Scroll to Top