shadowiq
Pillar 03 · Enforce

Policy at runtime. Under 75 ms at p99.

Observation is not control. The ShadowIQ Gateway sits inline on every model call — blocks prompt injection, redacts PII, applies your policy-as-code, and calls a human only when the edge case demands it.

How it fits · Signature 0xENFR-1124

Inline. Fast. Deterministic. Every call, every model, every tenant.

Request flow: a client call (via SDK, proxy, or middleware) — user [email protected], model openai/gpt-4o, prompt "Summarize John Doe, SSN 123-45-6789…" — enters the ShadowIQ Gateway (14 policies live, p99 74 ms), where injection.classifier, pii.ssn.detector, egress.allowlist, customer.rule.42, toxicity.classifier, and rate.limit.tenant evaluate it. Decision: ALLOW · REDACT(pii.ssn). The request is forwarded to OpenAI and the response streamed back, 74 ms total at p99; the decision is sealed with Ed25519 (block 0x4e12a0, anchored via Sigstore, fp_a9c3…e71d).
How it works

Three moves, fully automated.

No long onboarding, no hand-rolled detection rules. ShadowIQ ships with defaults tuned to the regulatory floor — customize only where your risk appetite demands.

1

Point traffic at the gateway.

One endpoint for OpenAI-compatible, Anthropic, Bedrock, Vertex, Azure OpenAI, or bring-your-own. SDKs for TypeScript, Python, Go — or drop-in at your existing proxy.
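In practice, pointing traffic at the gateway means swapping the base URL and adding a tenant header. A minimal sketch — the endpoint, header names, and key variable here are illustrative assumptions, not ShadowIQ's documented API:

```python
# Sketch: routing an OpenAI-compatible chat call through a gateway endpoint.
# base_url, header names, and the key placeholder are hypothetical.

def build_gateway_request(prompt: str, tenant: str,
                          base_url: str = "https://gateway.example/v1") -> dict:
    """Assemble the proxied request an SDK or middleware would send."""
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": "Bearer $SHADOWIQ_KEY",  # gateway key, not the model vendor's
            "X-Tenant-Id": tenant,                    # lets per-tenant policies apply
        },
        "body": {
            "model": "openai/gpt-4o",
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_gateway_request("Summarize this ticket", tenant="acme-prod")
print(req["url"])  # https://gateway.example/v1/chat/completions
```

Because the gateway speaks the OpenAI-compatible wire format, existing SDKs need only a base-URL change.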

2

Declare policy-as-code.

YAML or Rego. Version-controlled, tested, and reviewed like any other deployable artifact. Roll out by tenant, environment, or workload.
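To make that concrete, a YAML policy bundle might look like the following — field names and structure are illustrative assumptions, not ShadowIQ's published schema:

```yaml
# Illustrative policy bundle — every field name here is hypothetical.
policies:
  - id: pii.ssn.detector
    action: redact            # redact | hash | tokenize | block
    applies_to:
      environments: [staging, production]
  - id: egress.allowlist
    action: block
    allow_models: ["azure/gpt-4o-eu"]
    tenants: ["de-*"]         # roll out by tenant glob
rollout:
  mode: shadow                # observe before enforce
```

Checked into the same repository as application code, a bundle like this gets the same review, test, and promotion pipeline as any other deployable.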

3

Run, measure, tune.

Live latency budget, per-policy hit rate, false-positive tracker, and a shadow mode that observes before it enforces.
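The per-policy hit rate and false-positive tracker can be sketched as a fold over decision logs. The log field names below are assumptions, not ShadowIQ's actual telemetry schema:

```python
# Sketch: per-policy hit rate and false-positive rate from decision logs.
# Event fields ("fired", "reviewer_verdict") are hypothetical.
from collections import defaultdict

def policy_metrics(events):
    hits = defaultdict(int)
    false_pos = defaultdict(int)
    total = 0
    for e in events:
        total += 1
        for policy in e["fired"]:
            hits[policy] += 1
            if e.get("reviewer_verdict") == "benign":  # human marked it a false alarm
                false_pos[policy] += 1
    return {p: {"hit_rate": hits[p] / total,
                "fp_rate": false_pos[p] / hits[p]}
            for p in hits}

events = [
    {"fired": ["pii.ssn.detector"], "reviewer_verdict": "benign"},
    {"fired": ["pii.ssn.detector"]},
    {"fired": []},
    {"fired": ["injection.classifier"]},
]
m = policy_metrics(events)
```

Watching `fp_rate` per policy in shadow mode is what tells you a rule is safe to flip to enforce.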

Capabilities · complete coverage

Every control a regulator or auditor will ask about.

Latency

Sub-75 ms p99

Parallel evaluation, pre-compiled policy WASM, and warm tenant pools keep the 99th percentile tight even during burst traffic.
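The latency win comes from fanning checks out rather than chaining them: total wall-clock tracks the slowest policy, not the sum. A minimal sketch with stand-in policy functions (the checks themselves are toys):

```python
# Sketch: evaluating independent policies in parallel. The three checks are
# stand-ins for real classifiers; only the fan-out pattern is the point.
from concurrent.futures import ThreadPoolExecutor

def injection_check(prompt):  return ("injection", "jailbreak" in prompt)
def pii_check(prompt):        return ("pii", "SSN" in prompt)
def toxicity_check(prompt):   return ("toxicity", False)

POLICIES = [injection_check, pii_check, toxicity_check]

def evaluate(prompt):
    with ThreadPoolExecutor(max_workers=len(POLICIES)) as pool:
        results = list(pool.map(lambda p: p(prompt), POLICIES))
    fired = [name for name, hit in results if hit]
    if "injection" in fired:
        return "BLOCK"
    return "REDACT" if "pii" in fired else "ALLOW"

print(evaluate("Summarize John Doe, SSN 123-45-6789"))  # REDACT
```

Pre-compiling policies to WASM and keeping tenant pools warm attacks the other latency terms: cold starts and per-call compilation.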

Injection

Prompt injection defense

Multi-classifier ensemble (rule + small model + LLM-judge quorum) tuned on 2,400+ labeled adversarial samples.
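The quorum idea can be sketched as a vote across heterogeneous detectors. Here the rule layer is a real regex; the two "model" voters are stubs standing in for classifier calls — all three are illustrative:

```python
# Sketch: quorum over heterogeneous injection detectors. Only rule_vote does
# real work; the other two are placeholders for model-backed classifiers.
import re

def rule_vote(prompt):          # cheap pattern layer
    return bool(re.search(r"ignore (all )?previous instructions", prompt, re.I))

def small_model_vote(prompt):   # stand-in for a distilled classifier
    return "system prompt" in prompt.lower()

def judge_vote(prompt):         # stand-in for an LLM judge
    return "reveal" in prompt.lower()

def is_injection(prompt, quorum=2):
    votes = [rule_vote(prompt), small_model_vote(prompt), judge_vote(prompt)]
    return sum(votes) >= quorum
```

Requiring agreement between layers is what keeps a single noisy detector from dominating the false-positive rate.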

PII

Redaction & tokenization

SSN, passport, PAN, and customer-schema detectors. Redact, hash, or tokenize inline — decision recorded.
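The three dispositions — redact, hash, tokenize — can be sketched for the SSN case. The pattern below covers only the common 123-45-6789 shape, and the token format is an assumption:

```python
# Sketch: inline SSN handling with three dispositions. The regex is minimal;
# production detectors cover more formats. Token prefix is hypothetical.
import hashlib
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def handle_ssn(text, mode="redact"):
    def repl(m):
        digest = hashlib.sha256(m.group().encode()).hexdigest()
        if mode == "redact":
            return "[REDACTED:pii.ssn]"
        if mode == "hash":
            return digest[:12]
        return f"tok_ssn_{digest[:8]}"   # tokenize: reversible via a vault, not shown
    return SSN.sub(repl, text)

print(handle_ssn("John Doe, SSN 123-45-6789"))
# John Doe, SSN [REDACTED:pii.ssn]
```

Whichever disposition fires, the decision itself is what lands in the audit record.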

Egress

Allowlist + deny

Model-of-record policies per tenant, per environment. Block OpenAI for German tenants, pin Frankfurt workloads to Azure EU-only — all in config.
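A default-deny egress table makes the shape of this concrete. Tenant names and model IDs below are illustrative:

```python
# Sketch: per-tenant, per-environment egress decisions from static config.
# Tenant names and model identifiers are hypothetical examples.
EGRESS = {
    ("acme-de", "production"): {"allow": {"azure/gpt-4o-eu"}},
    ("acme-us", "production"): {"allow": {"openai/gpt-4o"}},
}

def egress_decision(tenant, env, model):
    rule = EGRESS.get((tenant, env))
    if rule is None:
        return "BLOCK"            # default-deny when no rule matches
    return "ALLOW" if model in rule["allow"] else "BLOCK"

print(egress_decision("acme-de", "production", "openai/gpt-4o"))  # BLOCK
```

Defaulting to deny means a forgotten tenant entry fails safe instead of leaking traffic to an unapproved model.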

HITL

Human-in-the-loop

Escalate risk-tiered decisions to a human reviewer. Approvals flow through Slack or ServiceNow and close the policy loop in seconds.
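The routing itself is a small threshold function. The tier boundaries and outcome names here are assumptions for illustration:

```python
# Sketch: risk-tiered routing. Thresholds and outcome labels are hypothetical;
# the review branch would post to Slack/ServiceNow and await approval.
def route(decision_risk: float, block_above=0.9, review_above=0.6):
    if decision_risk >= block_above:
        return "BLOCK"                 # too risky even for a human queue
    if decision_risk >= review_above:
        return "HOLD_FOR_REVIEW"       # escalate to a human reviewer
    return "ALLOW"
```

Keeping a hard-block tier above the review tier stops reviewer fatigue from becoming the only line of defense.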

Shadow

Observe before enforce

Shadow mode mirrors every call, logs would-be decisions, and tunes false-positive rate without impacting users.
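Shadow mode is the same evaluation path with enforcement stubbed out: every call is forwarded, and the would-be verdict is logged. A minimal sketch with assumed log fields:

```python
# Sketch: shadow mode records what each policy *would* have done while always
# allowing the call through. Log field names are illustrative.
def shadow_evaluate(prompt, policies, log):
    would_fire = [name for name, check in policies if check(prompt)]
    log.append({
        "prompt_fingerprint": hash(prompt) & 0xFFFF,  # avoid logging raw text
        "would_block": bool(would_fire),
        "policies": would_fire,
    })
    return "ALLOW"   # enforcement disabled: traffic is never impacted

log = []
policies = [("pii.ssn", lambda p: "SSN" in p)]
verdict = shadow_evaluate("John Doe, SSN 123-45-6789", policies, log)
```

Comparing `would_block` entries against reviewer labels is how the false-positive rate gets tuned before a policy goes live.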

Streaming

Output filtering at stream

Token-level output filters stop an unsafe answer mid-stream, so one bad span doesn't force discarding the entire response.
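A stream filter is essentially a generator wrapping the token stream with a rolling window. The unsafe check below is a toy substring match standing in for a classifier:

```python
# Sketch: token-level output filtering on a stream. The unsafe check is a toy
# substring match; a real filter would classify a rolling window of text.
def filter_stream(tokens, unsafe=("123-45-6789",)):
    window = ""
    for tok in tokens:
        window = (window + tok)[-64:]        # rolling context catches multi-token matches
        if any(u in window for u in unsafe):
            yield "[STREAM TERMINATED: policy]"
            return                           # cut mid-stream; prior tokens already sent
        yield tok

out = list(filter_stream(["The ", "SSN ", "is ", "123-45-6789", "."]))
```

The rolling window matters because an unsafe string can straddle token boundaries; checking single tokens in isolation would miss it.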

Deploy

SaaS · VPC · self-host

Same binary. Same policy bundle. Same evidence format. Pick the deployment that passes your procurement review.

Isolation

Multi-tenant residency

Hard tenant isolation, regional pinning, FIPS-validated crypto, and per-tenant HSM keys available.

Frequently asked

Answered by the architecture, not the sales deck.

What happens if the gateway is unavailable?

Configurable fail mode per workload — fail-closed (block) for regulated workloads, fail-open with alert for less sensitive ones. Availability target: 99.99% with regional failover.
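The per-workload fail mode reduces to a small lookup with a safe default. Workload tags here are illustrative:

```python
# Sketch: per-workload fail mode when the gateway errors or times out.
# Workload names are hypothetical; unknown workloads default to the safe side.
FAIL_MODE = {"payments": "fail_closed", "marketing-copy": "fail_open"}

def on_gateway_error(workload):
    mode = FAIL_MODE.get(workload, "fail_closed")
    if mode == "fail_closed":
        return {"action": "BLOCK", "alert": True}
    return {"action": "FORWARD", "alert": True}   # pass through, but page someone
```

Either branch raises an alert; the only question is whether traffic waits for the gateway to recover.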

Can we self-host without any dependency on your cloud?

Yes. Self-hosted deployment ships as a signed container bundle with its own key material. Evidence anchoring uses an internal transparency log rather than our public service.

How are policy changes validated before they reach production?

Policy-as-code tests run in CI like any other code. You get a signed test report plus shadow-mode telemetry from staging workloads before promotion to production.

Do you train models on our prompts or responses?

No. Customer prompts and responses never enter our training data. This is contractual (DPA) and architectural (isolated evaluation environment with no egress to training infrastructure).