A small classifier LLM runs in front of every prompt your team sends to AI — not a keyword list, not a regex pile. It reads each prompt and decides where it belongs. Sensitive content stays on your servers, answered by a safe in-house model. Everything else passes through to whichever frontier model your team prefers — the best of both worlds, without the leak risk.
// an llm reads every prompt — not keywords, not regex. you get frontier-grade answers when it’s safe to send and a safe in-house model when it isn’t.
the two lanes
stays inside
secure servers · small in-house model
EU-managed or self-hosted
passes outside
your frontier model of choice
any provider — you pick
// an llm picks the lane — no keyword lists, no regex DLP, no model lock-in.
01 — what the gate catches
If it shouldn’t leave the building, the gate doesn’t let it.
A small classifier runs on your infra and reads every outgoing prompt. If it spots any of the categories below, the prompt stays inside. If it comes back clean, it passes through to your frontier model of choice.
// one local model picks the lane; the lane picks where the prompt is answered.
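In pseudocode terms, the lane decision reduces to one function. A minimal sketch, assuming a hypothetical `classify()` stub in place of the real local model, with illustrative category names:

```python
# Minimal sketch of the lane decision. classify() is a trivial stub
# standing in for the local classifier model; the category names are
# illustrative, not Privacy Gate's real taxonomy.
SENSITIVE = {"citizen_id", "secret", "internal_code", "contract"}

def classify(prompt: str) -> set[str]:
    # the real gate runs a small LLM here; this stub flags one case only
    return {"citizen_id"} if "BSN" in prompt else set()

def pick_lane(prompt: str) -> str:
    # any sensitive category fires: stays inside; clean: passes outside
    return "inside" if classify(prompt) & SENSITIVE else "outside"

print(pick_lane("BSN 123456789 · Jan de Vries"))   # inside
print(pick_lane("summarise this press release"))   # outside
```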
local classifier — every prompt, before it leaves
runs on your infra
detected · BSN · passport · address
Citizen identifiers
BSN, passport numbers, addresses — anything that ties a prompt to a named resident.
// sample
BSN 123456789 · Jan de Vries · Markt 1, Delft
// once classified
any category fires
stays inside · in-house model
prompt comes back clean
passes outside · your frontier model
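One cheap way to sharpen the citizen-identifier category before the classifier even weighs in: a Dutch BSN satisfies a published checksum, the elfproef variant in which the ninth digit carries weight -1. A sketch of that check as a standalone helper:

```python
def valid_bsn(candidate: str) -> bool:
    """11-test variant used for the Dutch BSN: the last digit weighs -1."""
    digits = [int(c) for c in candidate if c.isdigit()]
    if len(digits) == 8:          # 8-digit BSNs are read with a leading zero
        digits = [0] + digits
    if len(digits) != 9:
        return False
    weights = [9, 8, 7, 6, 5, 4, 3, 2, -1]
    total = sum(w * d for w, d in zip(weights, digits))
    return total % 11 == 0 and total > 0

print(valid_bsn("111222333"))   # True: checksum holds
print(valid_bsn("123456789"))   # False: checksum fails
```

A nine-digit number that fails this test is not a BSN; one that passes still needs the classifier's judgment, since unrelated numbers can satisfy the checksum by chance.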
02 — inside the gate
From prompt to verdict, before it leaves your network.
Every prompt your team sends runs through the same steps before it can leave your network. Capture, classify, policy-check, score, filter, log. The whole thing happens locally, on CPU. We don’t pick the model — we just decide whether the prompt is safe to send to one.
// one binary call: stays inside, or passes outside. every verdict auditable after the fact.
intercept
capture · every prompt, before it can leave your network
classify
ai · PII, secrets, code, contracts — detected before they leave
policy
never-send-bsn-upstream · vendor-contract-on-prem · patient-data-blocklist
policy match
ai · compliance writes the rules; the gate runs them
risk score
scoring // scored against your policy
risk score
ai · explainable, with a confidence number
filter
fusion · stays inside if sensitive, passes outside if not
audit · arkintel/gate
- 14:02:31 · never-send-bsn-upstream → inside
- 14:02:34 · no-pii · public-policy → outside
- 14:02:38 · vendor-contract-stays-inside → inside
audit log
fusion · every decision — reroute, replay, learn
// the two lanes
one filter · two destinations
stays inside
Secure servers, small in-house model
Sensitive prompts never touch a frontier provider. They’re answered on secure servers — our managed EU cloud or your own infra — by a smaller model that’s tuned for safety, not benchmarks.
- managed EU cloud or self-hosted on your infra
- data never leaves the chosen jurisdiction
- smaller, safe model that runs on the same servers
- every verdict logged with the entities that fired
passes outside
Your frontier model of choice
Safe prompts pass straight through to whichever frontier model your team picked — OpenAI, Anthropic, Mistral, your own deployment. We don’t lock you in. We just filter what gets sent.
- keep your existing model — we wire in around it
- no broker fees, no model lock-in
- switch frontier providers without re-onboarding the gate
- the gate is the pipe in between, never the destination
// no model lock-in, no shadow ChatGPT, no data crossing borders it shouldn’t.
04 — auditability
European by jurisdiction. Auditable by default.
Privacy Gate is privacy infrastructure for the rest of your AI stack — and the place sensitive prompts actually get answered. The classifier and the in-house model both run on secure servers; the audit log is yours, every verdict tied to the entities that fired.
// where it sits is the next section — your infra or our European cloud.
05 — how it works
An LLM picks the lane. Best provider, or stays internal.
Privacy Gate has one job: read every incoming prompt and decide whether it’s safe to send to a frontier provider, or sensitive enough to answer in-house. We wire it into your stack however you need — point your apps at our endpoint, or we install everything on your own servers. Either way, we don’t keep the prompt afterwards.
// what we don’t do
- log your prompts
- keep transcripts on our servers
- lock you into one provider’s model
- make you rewrite every app around a new SDK
- ban frontier AI and hope nobody notices
// one classifier, two lanes, zero prompt logs.
endpoint · or on your servers
- 01
Wire it into your stack
We meet your apps where they are. Point them at our endpoint, or we install Privacy Gate inside your network — whichever fits your environment. No rewrites, nothing to relearn.
- 02
An LLM reads the prompt
A small classifier reads every incoming prompt and picks a lane. Anything sensitive — PII, secrets, internal-only content — stays internal. Everything else passes through.
- 03
Best provider, or stays internal
Safe prompts go to whichever frontier model your team picked. Sensitive prompts are answered by the in-house model on secure servers. Either way, the prompt isn't logged.
Building on top of Privacy Gate?
Want programmatic access? We provision an API for you, scoped to your deployment — same classifier, same two lanes.
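For illustration only, a client call might look like the sketch below; the base URL, the `/v1/route` path, and the response shape are all assumptions, not Arkintel's published API:

```python
import json
import urllib.request

GATE_URL = "https://gate.example.com"   # placeholder, not a real endpoint

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    # builds (but does not send) the call; path and fields are assumed
    return urllib.request.Request(
        f"{GATE_URL}/v1/route",
        data=json.dumps({"prompt": prompt}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# sending would be urllib.request.urlopen(build_request(...)); the assumed
# response carries the lane and the answer, e.g. {"lane": "...", "answer": "..."}
```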
06 — where it runs · sovereign by design
Your hardware,
or our European cloud.
Two places, same software. European jurisdiction, open-weight models, default-deny egress at the network — either way.
Self-hosted. Inside your perimeter.
Deploy the full stack inside your own data centre, your private cloud account, or an air-gapped rack. We install it, you run it — with our team on the other end of a support channel. Default-deny egress at the network layer, not the application. Air-gap optional, with signed offline update bundles when you choose it.
- On-prem, private cloud, or air-gapped
- Default-deny egress · enforced at network layer
- Open weights — yours to keep
Arkintel-managed. EU-resident. Operated by us.
Deployed on Hetzner’s EU regions — Falkenstein, Helsinki, Nuremberg — a German-headquartered provider, operated end-to-end by Arkintel as a Dutch entity. Outside the reach of the US CLOUD Act and any foreign subpoena. Dedicated tenancy, EU-only egress, same APIs and audit story as the self-hosted path.
- EU jurisdiction · no US exposure
- Hetzner EU regions · operated by Arkintel
- Dedicated tenancy · EU-only egress
Want the best frontier models too? → Add Privacy Gate to your self-hosted stack
Need the deployment story in detail? → Read the self-hosted deep-dive
03 — the app suite
Production AI apps,
live today.
Five live products — secure, team-ready, and fully traceable. We can host them for you or you can self-host. Select an app below to see how it works.
Chat
Private chat. Configured exactly to your liking.
A self-hosted chat platform that runs entirely in your own building, built so data and sensitive information never leave your environment — the right choice for sensitive or regulated industries.
- Runs entirely on your own infrastructure
- Configured exactly to your specific requirements
- No data leakage — safe for sensitive industries
- Everything auditable, nothing egressing without a rule
you · Compare quantum and classical encryption — one paragraph, plain English.
Auto → picked Llama — all-rounder, runs on your GPUs
// taking it further
Wondering how the apps fit together? → See the whole suite & customer builds
Need to deploy this inside your perimeter? → How self-hosted works
Public sector, healthcare, legal, finance? → See the regulated industries page
Want everything on your hardware, no exceptions? → See the self-hosted deployment
— the rest of the suite
Privacy Gate is the bridge — pick what to put behind it.
The gate routes prompts. What it routes them between is the rest of the suite: Chat as the surface, Knowledge as the retrieval, Self-hosted as the home for sensitive content.
06 — deploy it
Give your team the best AI. Keep your data yours.
Run Privacy Gate on our European cloud, or install it on your own servers. Either way, we set it up, write the first rules with you, and hand over the keys.
We write the first set of rules with you — what counts as sensitive, what’s safe to send — and hand it over for your team to extend.
// cloud or your own servers — talk to a human, not a ticket queue.
// contact
reading inbox · Email us — humans, not a ticket queue.


