NIS2 for CTOs: a technical checklist for your AI supply chain
The NIS2 technical annex nobody reads, translated into an actionable checklist: the controls your platform team should be able to answer yes or no to tomorrow morning, with the full downloadable resource at the bottom.
NIS2 took effect in October 2024. Since then, essential and important entities in the EU have had to audit their full digital supply chain. AI-as-a-service is supply chain. The model your app calls is a subprocessor. The GPU running it is a critical asset. And if any of that sits outside the EU, you need to justify it to the regulator with documented compensating controls.
This post is the checklist we hand to customers when they ask what they'd need to change to be compliant. It's not legal advice — it's the technical minimum a regulator will ask for.
Block 1 · Data governance (Art. 21.2.a)
Where is each prompt / dataset type processed?
Documented by region or datacenter, updated whenever a subprocessor changes.
What data is sent to external models?
Classified inventory: PII, customer data, proprietary code, secrets.
Where are logs, embeddings, and vector stores stored?
Explicit residency + retention policy.
Is there DLP on outbound traffic to external APIs?
Active filters, not just written policy.
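To make the last point concrete, here is a minimal sketch of an outbound DLP filter that blocks prompts containing obvious secrets or PII before they reach an external model API. Everything here (the `scan_prompt` / `allow_outbound` names, the pattern set) is illustrative, not a reference implementation; a production filter would use a far richer ruleset and a dedicated DLP engine.

```python
import re

# Illustrative detection patterns for the classified inventory:
# PII, financial identifiers, cloud credentials, private keys.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of every pattern found in the prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

def allow_outbound(prompt: str) -> bool:
    """Gate applied before any call to an external model API."""
    return not scan_prompt(prompt)
```

The point the checklist makes is that this gate runs in the traffic path: a policy document that says "don't send secrets" is not a control, a filter that rejects the request is.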
Block 2 · Subprocessor management (Art. 21.2.d)
Complete AI subprocessor list
Provider, data location, certifications, jurisdiction.
SLAs and audit clauses
Right to audit included; history of audits performed.
Migration plans for provider incidents
Documented and drilled.
Subprocessor change notifications
Provider notifies you X days before a location or processing change.
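The four items above are really one artifact: a subprocessor register with enough fields to answer an auditor's questions. A minimal sketch of such a register, with hypothetical field and function names mirroring the checklist:

```python
from dataclasses import dataclass, field

@dataclass
class Subprocessor:
    """One row of the AI subprocessor register."""
    name: str
    data_location: str                  # region or datacenter
    jurisdiction: str                   # legal jurisdiction of the entity
    certifications: list[str] = field(default_factory=list)
    right_to_audit: bool = False        # audit clause in the contract
    change_notice_days: int = 0         # advance notice for changes

def compliance_gaps(sp: Subprocessor) -> list[str]:
    """Flag which checklist items this register entry fails."""
    gaps = []
    if not sp.right_to_audit:
        gaps.append("no right-to-audit clause")
    if sp.change_notice_days <= 0:
        gaps.append("no change-notification window")
    if not sp.certifications:
        gaps.append("no certifications on file")
    return gaps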
Block 3 · Access control and authentication (Art. 21.2.i)
Is every model call logged?
User, timestamp, prompt hash, model, source IP.
Do endpoints require mTLS or equivalent?
No network trust, cert-based auth.
Are API keys auto-rotated?
Yes, max 90-day window.
Is there role separation (admin / dev / viewer)?
Active RBAC, reviewed quarterly.
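Two of the controls above reduce to a few lines of code each. This sketch shows the audit record from the checklist (user, timestamp, prompt hash rather than the raw prompt, model, source IP) and a key-age check for the 90-day rotation window; the function names are illustrative.

```python
import hashlib
import json
import time

def audit_record(user: str, prompt: str, model: str, source_ip: str) -> str:
    """Log line for one model call. Hash the prompt, never store it raw."""
    record = {
        "user": user,
        "ts": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "model": model,
        "source_ip": source_ip,
    }
    return json.dumps(record, sort_keys=True)

def key_expired(issued_at: float, now: float, max_age_days: int = 90) -> bool:
    """Checklist rule: API keys must rotate within a 90-day window."""
    return (now - issued_at) > max_age_days * 86400
```

Hashing the prompt keeps the log useful for correlation and forensics without turning the log store itself into a second copy of your sensitive data.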
Block 4 · Encryption and residency (Art. 21.2.h)
At-rest encryption for storage holding prompts/responses
AES-256 minimum.
In-transit encryption client → model → client
TLS 1.3 end-to-end, no intermediate termination.
Master key management
Customer holds KEK, provider can't decrypt.
Backups
Encrypted under the same policy as production.
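The "customer holds the KEK" item is the envelope-encryption pattern: the provider encrypts data with a data-encryption key (DEK), stores only a wrapped copy of that DEK, and cannot unwrap it without the customer's key-encryption key (KEK). The sketch below shows only the key hierarchy; a SHA-256 counter keystream stands in for AES-256 purely to keep the example stdlib-runnable, and a real deployment would use an AEAD cipher and an HSM or KMS.

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy XOR keystream for illustration only -- NOT production crypto.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

def wrap_dek(kek: bytes, dek: bytes) -> bytes:
    return _keystream_xor(kek, dek)       # customer-side operation

def unwrap_dek(kek: bytes, wrapped: bytes) -> bytes:
    return _keystream_xor(kek, wrapped)   # XOR keystream is symmetric

kek = secrets.token_bytes(32)   # held by the customer only
dek = secrets.token_bytes(32)   # provider encrypts data under this
wrapped = wrap_dek(kek, dek)    # the only form the provider stores
```

What the regulator cares about is the property this structure gives you: the provider's storage holds ciphertext plus a wrapped DEK, so no subpoena or breach on the provider's side yields plaintext without the customer's KEK.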
Block 5 · Continuity and incident response (Art. 21.2.b, 21.2.c)
Incident notification window
< 24h to customer, mirroring the Art. 23 early-warning deadline.
Recovery drills
Run at least once a year, documented.
RTO / RPO per critical AI asset
Defined and consistent with each service's criticality.
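"Consistent with criticality" is checkable mechanically: declare a maximum RTO per criticality tier and verify every asset's target against it. A minimal sketch with hypothetical tier names and thresholds:

```python
# Illustrative ceiling on recovery time (hours) per criticality tier.
TIER_MAX_RTO_H = {"critical": 4, "important": 24, "standard": 72}

def rto_coherent(asset: dict) -> bool:
    """True if the declared RTO fits the asset's criticality tier."""
    return asset["rto_hours"] <= TIER_MAX_RTO_H[asset["tier"]]

assets = [
    {"name": "inference-endpoint", "tier": "critical",
     "rto_hours": 2, "rpo_hours": 0.5},
    {"name": "vector-store", "tier": "important",
     "rto_hours": 48, "rpo_hours": 4},
]

# Assets whose declared RTO exceeds what their tier allows.
violations = [a["name"] for a in assets if not rto_coherent(a)]
```

Running a check like this in CI keeps the continuity register from silently drifting out of line with how critical each AI asset actually is.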
“Only 4% of global organizations reach 'mature' readiness to defend against today's cybersecurity threats.”
How GPU Solutions answers this checklist
The points above have a documented answer in our security package, which we share under NDA before a project starts. A summary of the part that matters most to the regulator:
- Processing 100% in Madrid, Tier III, Spanish operator incorporated in Spain (BIAI Technology Project S.L.).
- Subprocessor list constrained: us, NVIDIA (hardware support), the Tier III operator. None have access to your VM contents.
- VM-level isolation with hardware MIG (NVIDIA Multi-Instance GPU) partitioning. The slice is yours, full stop.
- ISO 27001 and ENS Medium-level certified, not 'in progress'. Available for customer verification.
- mTLS on endpoints, SSH keys or SSO/OIDC for access, immutable logs for at least 12 months.
- Formal incident notification policy < 24h, with a dedicated direct channel per customer.
If what you need isn't the operational checklist but an executive read to take into your board — with the regulatory frame, the fines, the six questions for your vendors and a three-horizon action plan — we've published a 9-page downloadable PDF guide. It's at /ai-regulation-guide.