AI agents security checklist: what to verify before deploying
Prompt injection, data residency, audit trail, zero-training. The checklist every buyer should run before signing.
The agent ecosystem is moving fast. Model capabilities improve quarterly; tooling matures; pricing pressure compounds. Treat any specific recommendation as a snapshot, not a permanent answer. The durable principles — operator gate, evaluation discipline, security posture — outlast the specific tool choices that look obvious today and dated next year.
The eight-item checklist
1. Data residency. Where does data physically live? EU buyers want EU.
2. Per-tenant isolation. No data crosses customer boundaries.
3. Encryption at rest and in transit. Standard, but verify.
4. Prompt injection defence. What happens when user input tries to override agent instructions?
5. Audit log. Every action timestamped and queryable.
6. Zero-training agreement with LLM providers. Your data is not used for training.
7. Deletion. Certificate of destruction on offboarding.
8. DPA. A Data Processing Agreement signed before production data flows.
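For buyers comparing more than one vendor, the checklist also works as a simple tracking structure. A minimal sketch, using hypothetical item names that mirror the list above:

```python
# Hypothetical sketch: the eight items as a trackable structure for
# side-by-side vendor comparison. Item names mirror the checklist above.
CHECKLIST = [
    "data_residency", "per_tenant_isolation", "encryption",
    "prompt_injection_defence", "audit_log", "zero_training_agreement",
    "deletion_with_certificate", "dpa_signed",
]

def review(vendor: str, answers: dict[str, bool]) -> None:
    # Any item missing from the vendor's written answers counts as a gap.
    gaps = [item for item in CHECKLIST if not answers.get(item, False)]
    print(f"{vendor}: " + ("pass" if not gaps else "fail: " + ", ".join(gaps)))

review("vendor-a", {item: True for item in CHECKLIST})            # pass
review("vendor-b", {"data_residency": True, "dpa_signed": True})  # fail: ...
```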
How to verify
Request written attestations. Ask for penetration test reports where available, current SOC 2 / ISO 27001 status, and sample audit log output.
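When the sample audit log output arrives, check that every entry is timestamped, tenant-scoped, attributed to an actor, and machine-queryable. A minimal sketch of the shape to look for; the field names are illustrative assumptions, not any vendor's actual schema:

```python
# Illustrative shape of a single audit log entry. Field names are
# hypothetical; what matters is that every agent action is timestamped,
# attributed to a tenant and an actor, and queryable after the fact.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditEntry:
    timestamp: str   # ISO 8601, UTC
    tenant_id: str   # which customer the action belongs to
    actor: str       # agent identity or human operator
    action: str      # e.g. "tool_call:crm_lookup"
    target: str      # resource the action touched
    outcome: str     # "success", "denied", "error"

entry = AuditEntry(
    timestamp=datetime.now(timezone.utc).isoformat(),
    tenant_id="tenant-a",
    actor="agent:content-drafter",
    action="tool_call:crm_lookup",
    target="crm/contacts/123",
    outcome="success",
)
print(json.dumps(asdict(entry), indent=2))
```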
Vendors that hesitate to answer fail the test.
Adoption usually fails for organisational reasons, not technical ones. Workflows that touch multiple teams need explicit owners and explicit handoffs; agents amplify clarity but cannot create it. Spend time defining the operator gate and the escalation path before the rollout, not after.
Why this checklist exists and how to use it
AI services security gets discussed in marketing copy and ignored in actual contracts. Most security incidents involving AI vendors trace back to questions the buyer never asked. The eight items in this checklist are not exotic — they are the questions your security team would ask of any SaaS vendor handling customer data, applied specifically to AI agent services.
Use this checklist before signing, not after. Vendors who can answer all eight items crisply in writing are operating at the level of seriousness you should expect; vendors who hedge on any of them are either still maturing their security posture or hoping you will not press. Both are signals worth heeding.
Data residency: more than "where is the database"
Data residency in the AI agent context covers four flows: storage, compute, LLM API calls, and audit logs. All four need to sit in the same jurisdiction for compliance to hold. The common failure: storage is correctly in the EU, but the LLM calls route to OpenAI or Anthropic endpoints in US East. From a GDPR standpoint, the data has left the EU even if the database has not.
The fix is vendor configuration: route to EU endpoints for the LLM provider, monitor for any exception. Ask the vendor for a data flow diagram with explicit endpoint locations. Vendors who give you the diagram in minutes have built for compliance; vendors who get back to you next week have not.
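To make "route to EU endpoints, monitor for any exception" concrete, here is a minimal sketch of residency-aware routing that fails closed. The hostnames are hypothetical placeholders, not any provider's documented endpoints:

```python
# Minimal sketch of residency-aware LLM routing. The endpoint hostnames
# below are hypothetical placeholders; substitute your provider's
# documented EU endpoints and verify them against the data flow diagram.
ALLOWED_REGION = "eu"

ENDPOINTS = {
    "eu": "https://llm.eu.example-provider.com/v1",  # hypothetical
    "us": "https://llm.us.example-provider.com/v1",  # hypothetical
}

def resolve_endpoint(region: str) -> str:
    """Fail closed: refuse to call any endpoint outside the allowed region."""
    if region != ALLOWED_REGION:
        # Raising here, rather than silently falling back to a US endpoint,
        # is what keeps a GDPR residency claim honest.
        raise RuntimeError(f"Residency violation: region {region!r} not allowed")
    return ENDPOINTS[region]

print(resolve_endpoint("eu"))   # ok
# resolve_endpoint("us")        # raises before any data leaves the EU
```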
Per-tenant isolation in practice
Per-tenant isolation prevents one customer's data from contaminating another customer's agent context. The naive implementation, a shared database keyed by a customer_id column with all customers on the same agent runtime, fails open in subtle ways. A prompt injection originating in one customer's data can read another customer's data if the runtime does not enforce the boundary at every layer.
The mature implementation runs each customer's agent in an isolated namespace with explicit access controls at storage, retrieval, and tool-use layers. Ask the vendor: how does the agent for tenant A access only tenant A's data? What enforces that boundary? What logs exist if the boundary is violated? Vendors with good answers have invested in this; vendors with vague answers have not.
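One way to picture "enforce the boundary at every layer" is a retrieval wrapper whose tenant scope is fixed by the runtime rather than by the prompt. A minimal sketch under assumed names; a real implementation would apply the same filter at the storage and tool-use layers as well:

```python
# Minimal sketch of tenant scoping at the retrieval layer. The point is
# that the tenant_id filter is applied by the runtime, not left to the
# prompt: injected instructions cannot widen the query's scope.
DOCUMENTS = [
    {"tenant_id": "tenant-a", "text": "A's quarterly notes"},
    {"tenant_id": "tenant-b", "text": "B's pricing sheet"},
]

class TenantScopedStore:
    def __init__(self, tenant_id: str):
        self.tenant_id = tenant_id  # fixed at session creation, not per-query

    def search(self, query: str) -> list[dict]:
        # Every read is filtered by the session's tenant, regardless of
        # what the agent (or an injected prompt) asked for.
        return [d for d in DOCUMENTS
                if d["tenant_id"] == self.tenant_id and query in d["text"]]

store = TenantScopedStore("tenant-a")
print(store.search("notes"))    # returns A's document
print(store.search("pricing"))  # empty: B's data is unreachable from A's session
```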
Prompt injection defence
Prompt injection is the security category that did not exist for traditional SaaS but is central to AI agent services. A malicious input designed to override agent instructions can cause the agent to leak data, take unauthorised actions, or produce harmful output. Defences are layered: input sanitisation, system prompt isolation, output filtering, action allowlists, and human approval gates for sensitive operations.
No single defence is sufficient. Vendors who claim to have "solved" prompt injection are overstating their position. The honest answer is that prompt injection is partially mitigated through layered defences and human review on high-impact actions. Ask what those layers are and what testing the vendor does against known injection patterns.
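As an illustration of two of those layers, here is a minimal sketch combining an action allowlist with a human approval gate for high-impact operations. The action names and gate mechanism are assumptions, not a specific product's design:

```python
# Minimal sketch of two defence layers: an action allowlist and a human
# approval gate. Neither blocks injection at the input; they limit what a
# successfully injected agent can actually do.
ALLOWED_ACTIONS = {"search_docs", "draft_reply", "send_email"}
REQUIRES_APPROVAL = {"send_email"}  # high-impact actions pause for a human

def execute(action: str, approved_by: str | None = None) -> str:
    if action not in ALLOWED_ACTIONS:
        return f"denied: {action!r} is not on the allowlist"
    if action in REQUIRES_APPROVAL and approved_by is None:
        return f"pending: {action!r} queued for human approval"
    suffix = f" (approved by {approved_by})" if approved_by else ""
    return f"executed: {action!r}" + suffix

print(execute("delete_records"))                 # denied outright
print(execute("send_email"))                     # held for review
print(execute("send_email", approved_by="ops"))  # proceeds with a named approver
```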
Zero-training agreements and what they actually mean
Frontier model providers — Anthropic, OpenAI, Google — offer zero-training agreements for their API products. The agreement says: data sent to the API is not used to train future versions of the model. This is meaningful and the right baseline. It is also narrower than buyers sometimes assume — it covers training only, not other internal uses by the model provider (such as abuse monitoring, which usually has a short retention window).
Ask your AI vendor: which model providers do they use? Which agreements are in place? Does the vendor itself train on your data (some do for evaluation purposes)? If yes, what is the carve-out? The answers should be in writing in the DPA.
Frequently asked questions
What about US-only vendors for EU buyers?
It requires Standard Contractual Clauses (SCCs) and additional safeguards under GDPR. Possible, but it adds compliance work; prefer EU-residency vendors when one is available.
Are open-source agent frameworks safer?
They carry a different risk profile: you own all the security work yourself. In practice that is often less safe than a managed service that has invested in it.
How do I run a security review of an AI agent vendor without an AI specialist on my team?
Use the eight-item checklist plus your standard SaaS security review (penetration test results, SOC 2, incident response plan, vendor management policies). The AI-specific items reduce to: data residency including LLM endpoints, per-tenant isolation, prompt injection defence, zero-training agreement. If you can get clear answers on those four, your gap to a full review is small.
What is the minimum security posture for handling PII through AI agents?
Encryption at rest and in transit, per-tenant isolation, EU data residency for EU subjects, zero-training agreements, signed DPA, deletion on request with certificate, audit log of every agent action. Most regulated industries also require SOC 2 Type II or ISO 27001 certification from the vendor.
Do AI agents fall under the EU AI Act?
Yes, to varying degrees by use case. High-risk applications (recruitment, credit scoring, education access, critical infrastructure) have substantial obligations including transparency, human oversight, bias testing, and registration. General productivity automation is lower-risk. Read the Act's risk classification or get legal advice specific to your use case.
How Logitelia builds and runs agents
Logitelia runs production AI agent teams across content, sales, ops, books, dev and research. Senior operator gate on every artifact, EU data residency, evaluation pipelines built into our runtime, zero-training agreements with LLM providers. Read about our approach or book a 30-minute call to discuss your specific scenario.
Security questions you don't ask before signing become customer-data incidents later. Run the checklist; vendors that pass are real; vendors that hedge are not.
Want to see how Logitelia ships this kind of work for your team?
Book intro call