Rick Caccia, CEO and Co-Founding father of WitnessAI, has intensive expertise in launching safety and compliance merchandise. He has held management roles in product and advertising at Palo Alto Networks, Google, and Symantec. Caccia beforehand led product advertising at ArcSight by its IPO and subsequent operations as a public firm and served as the primary Chief Advertising Officer at Exabeam. He holds a number of levels from the College of California, Berkeley.
WitnessAI is creating a safety platform targeted on guaranteeing the secure and safe use of AI in enterprises. With every main technological shift—reminiscent of net, cell, and cloud computing—new safety challenges emerge, creating alternatives for business leaders to emerge. AI represents the subsequent frontier on this evolution.
The corporate goals to ascertain itself as a frontrunner in AI safety by combining experience in machine studying, cybersecurity, and large-scale cloud operations. Its crew brings deep expertise in AI growth, reverse engineering, and multi-cloud Kubernetes deployment, addressing the essential challenges of securing AI-driven applied sciences.
What impressed you to co-found WitnessAI, and what key challenges in AI governance and safety had been you aiming to unravel?
Once we first began the corporate, we thought that safety groups can be involved about assaults on their inner AI fashions. As a substitute, the primary 15 CISOs we spoke with mentioned the other, that widespread company LLM rollout was a great distance off, however the pressing downside was defending their workers’ use of different folks’s AI apps. We took a step again and noticed that the issue wasn’t keeping off scary cyberattacks, it was safely enabling firms to make use of AI productively. Whereas governance possibly much less horny than cyberattacks, it’s what safety and privateness groups truly wanted. They wanted visibility of what their workers had been doing with third-party AI, a method to implement acceptable use insurance policies, and a method to defend knowledge with out blocking use of that knowledge. In order that’s what we constructed.
Given your intensive expertise at Google Cloud, Palo Alto Networks, and different cybersecurity companies, how did these roles affect your method to constructing WitnessAI?
I’ve spoken with many CISOs through the years. One of the crucial frequent issues I hear from CISOs at the moment is, “I don’t wish to be ‘Physician No’ on the subject of AI; I wish to assist our workers use it to be higher.” As somebody who has labored with cybersecurity distributors for a very long time, this can be a very completely different assertion. It’s extra harking back to the dotcom-era, again when the Net was a brand new and transformative know-how. Once we constructed WitnessAI, we particularly began with product capabilities that helped prospects undertake AI safely; our message was that these things is like magic and naturally everybody needs to expertise magic. I feel that safety firms are too fast to play the worry card, and we wished to be completely different.
What units WitnessAI other than different AI governance and safety platforms out there at the moment?
Nicely, for one factor, most different distributors within the house are targeted totally on the safety half, and never on the governance half. To me, governance is just like the brakes on a automotive. In case you actually wish to get someplace shortly, you want efficient brakes along with a robust engine. Nobody goes to drive a Ferrari very quick if it has no brakes. On this case, your organization utilizing AI is the Ferrari, and WitnessAI is the brakes and steering wheel.
In distinction, most of our opponents give attention to theoretical scary assaults on a corporation’s AI mannequin. That may be a actual downside, however it’s a unique downside than getting visibility and management over how my workers are utilizing any of the 5,000+ AI apps already on the web. It’s quite a bit simpler for us so as to add an AI firewall (and we have now) than it’s for the AI firewall distributors so as to add efficient governance and danger administration.
How does WitnessAI steadiness the necessity for AI innovation with enterprise safety and compliance?
As I wrote earlier, we imagine that AI needs to be like magic – it will possibly enable you do superb issues. With that in thoughts, we expect AI innovation and safety are linked. In case your workers can use AI safely, they’ll use it typically and you’ll pull forward. In case you apply the standard safety mindset and lock it down, your competitor received’t do this, and they’re going to pull forward. All the pieces we do is about enabling secure adoption of AI. As one buyer advised me, “These things is magic, however most distributors deal with it prefer it was black magic, scary and one thing to worry.” At WitnessAI, we’re serving to to allow the magic.
Are you able to discuss in regards to the firm’s core philosophy concerning AI governance—do you see AI safety as an enabler moderately than a restriction?
We recurrently have CISOs come as much as us at occasions the place we have now offered, they usually inform us, “Your opponents are all about how scary AI is, and you’re the solely vendor that’s telling us the way to truly use it successfully.” Sundar Pichai at Google has mentioned that “AI might be extra profound than fireplace,” and that’s an attention-grabbing metaphor. Fireplace may be extremely damaging, as we have now seen not too long ago. However managed fireplace could make metal, which accelerates innovation. Generally at WitnessAI we speak about creating the innovation that allows our prospects to securely direct AI “fireplace” to create the equal of metal. Alternatively, in the event you assume AI is akin to magic, then maybe our aim is to offer you a magic wand, to direct and management it.
In both case, we completely imagine that safely enabling AI is the aim. Simply to offer you an instance, there are lots of knowledge loss prevention (DLP) instruments, it’s a know-how that’s been round eternally. And other people attempt to apply DLP to AI use, and possibly the DLP browser plug in sees that you’ve got typed in an extended immediate asking for assist together with your work, and that immediate advertently has a buyer ID quantity in it. What occurs? The DLP product blocks the immediate from going out, and also you by no means get a solution. That’s restriction. As a substitute, with WItnessAI, we are able to establish the identical quantity, and silently and surgically redact it on the fly, after which unredact it within the AI response, so that you just get a helpful reply whereas additionally retaining your knowledge secure. That’s enablement.
What are the largest dangers enterprises face when deploying generative AI, and the way does WitnessAI mitigate them?
The primary is visibility. Many individuals are stunned to be taught that the AI utility universe isn’t simply ChatGPT and now DeepSeek; there are actually hundreds of AI apps on the web and enterprises soak up dangers from workers utilizing these apps, so step one is getting visibility: which AI apps are my workers utilizing, what are they doing with these apps, and is it dangerous?
The second is management. Your authorized crew has constructed a complete acceptable use coverage for AI, one which ensures the security of buyer knowledge, citizen knowledge, mental property, in addition to worker security. How will you implement this coverage? Is it in your endpoint safety product? In your firewall? In your VPN? In your cloud? What if they’re all from completely different distributors? So, you want a method to outline and implement acceptable use coverage that’s constant throughout AI fashions, apps, clouds, and safety merchandise.
The third is safety of your personal apps. In 2025, we are going to see a lot sooner adoption of LLMs inside enterprises, after which sooner rollout of chat apps powered by these LLMs. So, enterprises want to ensure not solely that the apps are protected, but in addition that the apps don’t say “dumb” issues, like advocate a competitor.
We handle all three. We offer visibility into which apps individuals are accessing, how they’re utilizing these apps, coverage that’s primarily based on who you might be and what you are attempting to do, and really efficient instruments for stopping assaults reminiscent of jailbreaks or undesirable behaviors out of your bots.
How does WitnessAI’s AI observability function assist firms observe worker AI utilization and forestall “shadow AI” dangers?
WitnessAI connects to your community simply and silently builds a catalog of each AI app (and there are actually hundreds of them on the web) that your workers’ entry. We let you know the place these apps are situated, the place they host their knowledge, and many others so that you just perceive how dangerous these apps are. You possibly can activate dialog visibility, the place we use deep packet inspection to watch prompts and responses. We are able to classify prompts by danger and by intent. Intent may be “write code” or “write a company contract.” It’s necessary as a result of we then allow you to write intent-based coverage controls.
What function does AI coverage enforcement play in guaranteeing company AI compliance, and the way does WitnessAI streamline this course of?
Compliance means guaranteeing that your organization is following laws or insurance policies, and there are two elements to making sure compliance. The primary is that you need to be capable to establish problematic exercise. For instance, I have to know that an worker is utilizing buyer knowledge in a manner which may run afoul of a knowledge safety legislation. We do this with our observability platform. The second half is describing and imposing coverage towards that exercise. You don’t wish to merely know that buyer knowledge is leaking, you wish to cease it from leaking. So, we we have constructed a singular AI-specific coverage engine, Witness/CONTROL, that allows you to simply construct identification and intention-based insurance policies to guard knowledge, stop dangerous or unlawful responses, and many others. For instance, you may construct a coverage that claims one thing like, “Solely our authorized division can use ChatGPT to put in writing company contracts, and in the event that they accomplish that, routinely redact any PII.” Straightforward to say, and with WitnessAI, straightforward to implement.
How does WitnessAI handle considerations round LLM jailbreaks and immediate injection assaults?
Now we have a hardcore AI analysis crew—actually sharp. Early on, they constructed a system to create artificial assault knowledge, along with pulling in broadly obtainable coaching knowledge units. In consequence, we’ve benchmarked our immediate injection towards every part on the market, we’re over 99% efficient and recurrently catch assaults that the fashions themselves miss.
In follow, most firms we converse with wish to begin with worker app governance, after which a bit later they roll out an AI buyer app primarily based on their inner knowledge. So, they use Witness to guard their folks, then they activate the immediate injection firewall. One system, one constant method to construct insurance policies, straightforward to scale.
What are your long-term objectives for WitnessAI, and the place do you see AI governance evolving within the subsequent 5 years?
Up to now, we’ve solely talked a couple of person-to-chat app mannequin right here. Our subsequent section might be to deal with app to app, i.e agentic AI. We’ve designed the APIs in our platform to work equally effectively with each brokers and people. Past that, we imagine we’ve constructed a brand new method to get network-level visibility and coverage management within the AI age, and we’ll be rising the corporate with that in thoughts.
Thanks for the good interview, readers who want to be taught extra ought to go to WitnessAI.