
What is Zero Trust AI?

Our Approach

At Costa, our background is in network security. We take an opinionated, no-holds-barred approach to securing AI. We believe AI represents a danger greater than anything we have seen in the history of computing, and we are here to help. We call our approach Zero Trust AI, and to us, that means:
  1. Do not trust the model, no matter the “good intent” of the creator,
  2. Do not trust the model provider, no matter the “definitely next level” security they promise,
  3. Do not trust the tools, no matter how “absolutely safe” they claim to be, and
  4. Do not trust the human or agent operating the model, no matter how much they protest that they will never make a mistake.
Practically, this means we wrap every request and response in security. We secure information on its way into the model, sometimes stripping things out (like personal information) and sometimes inserting things (like dummy API keys). We secure information on its way out of the model, sometimes putting things back (like the personal information we took out earlier) and sometimes running analysis on generated code to make sure it is actually safe.
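
To make that wrapping concrete, here is a minimal sketch of the idea, assuming a simple email-address filter; all of the names (`SecurityGateway`, `redact`, `restore`) are our illustration, not Costa’s actual API.

```python
# Minimal sketch of wrapping a model call in security. Every name here
# (SecurityGateway, redact, restore) is illustrative, not Costa's API.
import re
from typing import Callable

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(prompt: str) -> tuple[str, dict[str, str]]:
    """Swap email addresses for placeholders; remember the originals."""
    mapping: dict[str, str] = {}

    def swap(match: re.Match) -> str:
        placeholder = f"<PII_{len(mapping)}>"
        mapping[placeholder] = match.group(0)
        return placeholder

    return EMAIL.sub(swap, prompt), mapping

def restore(output: str, mapping: dict[str, str]) -> str:
    """Put the original values back into the model's output."""
    for placeholder, original in mapping.items():
        output = output.replace(placeholder, original)
    return output

class SecurityGateway:
    def __init__(self, model_call: Callable[[str], str]):
        self.model_call = model_call  # the untrusted model behind the gateway

    def complete(self, prompt: str) -> str:
        safe_prompt, mapping = redact(prompt)      # inbound: strip sensitive values
        raw_output = self.model_call(safe_prompt)  # model sees sanitized text only
        return restore(raw_output, mapping)        # outbound: reinsert originals
```

The key property is that the model only ever sees sanitized text, while the user gets an untouched answer.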

AI is a Dynamic Landscape

Zero Trust AI is a moving target: there is no definitive checklist you could set up today and be done with. At Costa, we believe it’s our job to sit at the edge of cybersecurity (what we call the cybersecurity ‘coast’, hence ‘Costa’) and make sure that we always apply Current Best Practices to AI infrastructure.

Costa’s top five for security

The Costa platform has quite a few security features built in. Here are the five most important things we give you:

1. Sensitive information filtering

Every request is filtered for sensitive information (see OWASP LLM02:2025 Sensitive Information Disclosure). We extract sensitive values and replace them with “dummy” stand-ins before the request is sent to the model, then restore the originals before the response gets back to the user.
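
Continuing the hypothetical gateway sketch above, a prompt containing an email address makes the round trip like this:

```python
# Round trip through the hypothetical SecurityGateway sketched earlier:
# the model never sees the real address, yet the user gets it back intact.
def toy_model(prompt: str) -> str:
    return f"Done! I emailed {prompt.split()[-1]} for you."

gateway = SecurityGateway(toy_model)
print(gateway.complete("Please email alice@example.com"))
# The model received: "Please email <PII_0>"
# The user receives:  "Done! I emailed alice@example.com for you."
```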

2. Dynamic agency control

We use a combination of the current and prior tool requests and conversation outputs to give each individual request a Risk Score. This score is based on factors like whether the tool has read or write access to internal information, whether it talks to the outside world, how powerful the model is, and the nature of any information provided by the user. See OWASP LLM06:2025 Excessive Agency for a description of the risk and why it is critical to prevent it.
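
A toy version of such a score, with made-up factors and weights (Costa’s real scoring is necessarily more involved), might look like this:

```python
# Toy Risk Score for a single tool request. Factors and weights are
# illustrative only; a production scorer would be far more involved.
from dataclasses import dataclass

@dataclass
class ToolRequest:
    reads_internal_data: bool   # can the tool read internal information?
    writes_internal_data: bool  # can it modify internal information?
    reaches_internet: bool      # does it talk to the outside world?
    model_capability: float     # 0.0 (weak) .. 1.0 (frontier model)
    prompt_sensitivity: float   # 0.0 (public) .. 1.0 (highly sensitive input)

def risk_score(req: ToolRequest) -> float:
    """Combine the factors into a 0..1 score; higher means riskier."""
    score = 0.0
    score += 0.15 if req.reads_internal_data else 0.0
    score += 0.30 if req.writes_internal_data else 0.0
    score += 0.25 if req.reaches_internet else 0.0
    score += 0.15 * req.model_capability
    score += 0.15 * req.prompt_sensitivity
    return score

# A tool that writes internal data AND reaches the internet is the classic
# excessive-agency hazard (OWASP LLM06), so it scores high:
risky = ToolRequest(True, True, True, model_capability=0.9, prompt_sensitivity=0.8)
assert risk_score(risky) > 0.9
```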

3. Realtime output analysis

We run both synchronous and asynchronous analysis on model outputs to make sure your code is protected. We run static code analysis on model outputs while the engineer is still coding and give them feedback inside their editor. Many companies analyze code at commit time; we catch errors as they go into or come out of the models. We know which models are producing dangerous code and, if necessary, block further requests to them. See OWASP LLM05:2025 Improper Output Handling and OWASP LLM04:2025 Data and Model Poisoning for why this is necessary.
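
For a flavor of what synchronous analysis can catch, here is a toy checker that parses model-generated Python with the standard `ast` module and flags a couple of obviously dangerous calls; real static analysis goes much deeper than this denylist.

```python
# Toy synchronous output check: parse model-generated Python and flag
# obviously dangerous calls before the engineer ever sees the code.
import ast

DANGEROUS_CALLS = {"eval", "exec", "os.system"}

def flag_dangerous_code(generated: str) -> list[str]:
    findings: list[str] = []
    try:
        tree = ast.parse(generated)
    except SyntaxError:
        return ["output is not valid Python"]
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and ast.unparse(node.func) in DANGEROUS_CALLS:
            findings.append(f"line {node.lineno}: call to {ast.unparse(node.func)}")
    return findings

print(flag_dangerous_code("user_input = input()\neval(user_input)"))
# -> ['line 2: call to eval']
```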

4. Dynamic provider and model routing

At Costa, we provide 💫 Cosmic Routers, which choose the best model for each individual part of a request, sometimes switching between models multiple times in a conversation. This not only dramatically lowers cost but also protects against OWASP LLM03:2025 Supply Chain attacks.
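
Conceptually, per-request routing works something like the sketch below; the model names, tiers, and selection rule are our simplification, not the actual 💫 Cosmic Router.

```python
# Simplified per-request model routing. Model names, capability tiers, and
# the selection rule are placeholders, not the actual Cosmic Router logic.

MODELS = [
    # (name, capability 0..1, cost per 1M tokens, allowed by org policy)
    ("small-local-model", 0.40, 0.10, True),
    ("mid-tier-model",    0.70, 1.00, True),
    ("frontier-model",    0.95, 15.00, True),
]

def route(required_capability: float,
          blocked: frozenset[str] = frozenset()) -> str:
    """Pick the cheapest permitted model that clears the capability bar.

    Spreading one conversation across providers also limits exposure to
    any single compromised provider (OWASP LLM03 Supply Chain).
    """
    candidates = [
        (cost, name)
        for name, capability, cost, allowed in MODELS
        if allowed and name not in blocked and capability >= required_capability
    ]
    if not candidates:
        raise RuntimeError("no permitted model can handle this request")
    return min(candidates)[1]

print(route(0.6))                                        # -> mid-tier-model
print(route(0.6, blocked=frozenset({"mid-tier-model"}))) # -> frontier-model
```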

5. Realtime analytics

Costa gives you rich metrics: individual engineers see how they are using AI, and administrators get views covering both security and productivity. This builds a deep understanding of how your business uses AI and helps protect against OWASP LLM10:2025 Unbounded Consumption.
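
As one toy example of analytics turning into protection (the cap and names below are made up): tracking per-user token consumption lets you spot and block runaway usage before it becomes a problem.

```python
# Toy per-user token ledger with a hard daily cap: the kind of signal that
# turns usage analytics into protection against unbounded consumption
# (OWASP LLM10). The cap and names are illustrative.
from collections import defaultdict

DAILY_TOKEN_CAP = 2_000_000

class UsageLedger:
    def __init__(self) -> None:
        self.tokens_today: defaultdict[str, int] = defaultdict(int)

    def record(self, user: str, tokens: int) -> None:
        self.tokens_today[user] += tokens

    def allow(self, user: str) -> bool:
        """Deny further requests once a user blows past the daily cap."""
        return self.tokens_today[user] < DAILY_TOKEN_CAP

ledger = UsageLedger()
ledger.record("eng-42", 2_500_000)  # say, a runaway agent loop overnight
assert not ledger.allow("eng-42")   # further requests get blocked
```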

Directly control your own security

And most importantly, all of the security tools we use are controllable through your organization’s dashboard. If you don’t like a particular model, you can block it. If you want to tune the aggressiveness of the information filtering way up, you can do that. Costa provides the tools, but the power is all yours.
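
As an illustration of the kinds of knobs this means (a hypothetical policy shape, not Costa’s actual configuration schema):

```python
# Hypothetical org-level policy, not Costa's actual configuration schema.
# Every control described above is a knob the administrator owns.
ORG_POLICY = {
    "blocked_models": ["model-you-dislike"],  # don't like a model? block it
    "pii_filtering": {
        "aggressiveness": "high",             # low | medium | high
        "restore_on_output": True,            # reinsert originals on the way out
    },
    "output_analysis": {
        "synchronous": True,                  # feedback inside the editor
        "block_models_producing_dangerous_code": True,
    },
    "risk_score_threshold": 0.8,              # deny tool calls scoring above this
    "daily_token_cap": 2_000_000,
}
```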