shadowiq
Use case · Data leakage prevention

Stop PII and IP from leaking into AI — before the model sees it.

Detection logs a leak. Prevention stops it. ShadowIQ redacts, tokenizes, or denies sensitive content inline, before a request ever leaves your perimeter.

What this is

Summary

ShadowIQ prevents AI data leakage by detecting PII, PCI, PHI, and customer-schema identifiers inline at the AI gateway, applying redaction, tokenization, or denial actions in under 75ms with cryptographically signed evidence of every decision.
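That inline decision can be sketched in miniature. Everything here is illustrative, not ShadowIQ's actual API: toy regexes stand in for the context-aware detectors, and the policy table and action names are assumptions.

```python
import re

# Hypothetical per-entity policy; the real product configures this per tenant.
POLICY = {
    "ssn": "tokenize",
    "pan": "deny",
    "email": "redact",
}

# Toy detectors. The product uses context-aware detection, not regex alone.
DETECTORS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "pan": re.compile(r"\b\d{13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def inspect(prompt: str):
    """Return (action, findings) for a prompt before it leaves the perimeter."""
    findings = [
        (entity, m.group())
        for entity, rx in DETECTORS.items()
        for m in rx.finditer(prompt)
    ]
    if not findings:
        return "allow", []
    # The strictest action across all findings wins: deny > tokenize > redact.
    order = {"deny": 0, "tokenize": 1, "redact": 2}
    action = min((POLICY[e] for e, _ in findings), key=order.__getitem__)
    return action, findings

action, hits = inspect("Customer SSN is 123-45-6789, email a@b.com")
# action == "tokenize": tokenize outranks redact for the combined findings
```

The point of the sketch is the ordering rule: one request can trip several detectors, and the gateway must resolve them to a single enforceable action before forwarding anything.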

How it fits · explainer

The before / after, in one picture.

PROBLEM · BEFORE SHADOWIQ
Employees pasting SSNs and customer data into ChatGPT.

SOLUTION · WITH SHADOWIQ
SSN, passport, PAN, PHI, and customer-schema identifiers, detected with context (prompt, retrieval, tool-use) rather than brittle regex.

PILLARS ENGAGED · Enforce · Evidence
Where it hurts

You've heard this one before.

  • Employees pasting SSNs and customer data into ChatGPT.
  • Legacy DLP that doesn't understand prompt context.
  • PII redaction that mangles the response quality.
  • No record of what was almost leaked.
What we do about it

Three moves.

  1. Context-aware detection.

     SSN, passport, PAN, PHI, and customer-schema identifiers, detected with context (prompt, retrieval, tool-use) rather than brittle regex.

  2. Redact, tokenize, or deny.

     Configurable per-policy actions. Tokenization preserves answer quality; tokens are deterministic and reversible on policy approval.

  3. Signed 'almost-leaked' record.

     Every detection, whether acted on or not, is signed and queryable. Auditors can confirm your DLP worked; attackers can't claim it didn't.
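The signed record in move 3 amounts to a tamper-evident log entry. A minimal sketch follows; an HMAC stands in for whatever signing scheme the product actually uses, and the key, field names, and timestamp are all illustrative.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustrative; a real deployment holds keys in an HSM

def evidence_record(entity: str, action: str, prompt_hash: str) -> dict:
    """Build a tamper-evident 'almost-leaked' record for one detection."""
    record = {
        "entity": entity,
        "action": action,            # redact | tokenize | deny | allow
        "prompt_sha256": prompt_hash,
        "ts": 1700000000,            # fixed timestamp for a reproducible demo
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    """Recompute the MAC over everything except the sig and compare."""
    body = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

rec = evidence_record("ssn", "tokenize", hashlib.sha256(b"prompt").hexdigest())
```

Because the record stores only a hash of the prompt, the audit trail proves what was detected and what was done without itself becoming a second copy of the sensitive data.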

Outcomes

Numbers, not adjectives.

0 · PHI egress events (HIPAA tenant, 2.1M monthly calls)
99.4% · detection recall on SSN/PAN
< 0.3% · false-positive rate

Frequently asked

Asked, answered, sourced.

Does this work alongside our existing DLP?

Yes. We integrate with Microsoft Purview, Symantec, and Forcepoint DLP via classification signals and policy sync. ShadowIQ adds the AI context those tools can't see.

Doesn't tokenization ruin answer quality?

Tokenization preserves semantic structure so the model can reason over placeholders and produce a useful answer. Deterministic tokens round-trip cleanly on approved policies.
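A toy illustration of deterministic, reversible tokenization as described above. The token format, key handling, and in-memory vault are all hypothetical; a real system would use managed secrets and durable storage.

```python
import hashlib
import hmac
import re

TOKEN_KEY = b"demo-key"        # illustrative; use a managed secret in practice
_vault: dict = {}              # token -> original value, for approved reversal

def tokenize(value: str, entity: str) -> str:
    """Deterministic: the same value always maps to the same placeholder,
    so the model reasons consistently over repeated mentions."""
    digest = hmac.new(TOKEN_KEY, value.encode(), hashlib.sha256).hexdigest()[:8]
    token = f"<{entity}:{digest}>"
    _vault[token] = value
    return token

def detokenize(text: str) -> str:
    """Round-trip on policy approval: swap placeholders back for originals."""
    return re.sub(
        r"<\w+:[0-9a-f]{8}>",
        lambda m: _vault.get(m.group(), m.group()),
        text,
    )

t1 = tokenize("123-45-6789", "ssn")
t2 = tokenize("123-45-6789", "ssn")
# t1 == t2: determinism is what keeps the model's answer coherent
```

The placeholder keeps the entity type visible (`<ssn:…>`), which is the "semantic structure" the answer quality depends on: the model knows it is reasoning about an SSN without ever seeing one.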

Can you detect identifiers specific to our data model?

Upload a schema (table.column with regex or enum) and we build a detector. Customer-schema detectors are versioned and signed like policies.
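The schema-to-detector flow can be sketched like this. The `build_detector` helper, the column names, and the ACCT-number format are hypothetical, invented purely to show the regex and enum paths.

```python
import re

def build_detector(column: str, regex: str = None, enum: list = None):
    """Compile a detector from one table.column spec (regex or enum)."""
    if regex:
        pattern = re.compile(regex)
    elif enum:
        # Enum columns become an alternation of escaped literal values.
        pattern = re.compile("|".join(re.escape(v) for v in enum))
    else:
        raise ValueError("spec needs a regex or an enum")

    def detect(text: str) -> list:
        return [m.group() for m in pattern.finditer(text)]

    detect.column = column  # attach provenance so findings cite their source
    return detect

# Hypothetical customer schema: account IDs follow an ACCT-nnnnnn format.
acct = build_detector("customers.account_id", regex=r"ACCT-\d{6}")
tier = build_detector("customers.tier", enum=["platinum-internal", "gov-restricted"])
```

Versioning and signing the compiled detector, as the answer above describes, would then treat this artifact exactly like a policy: auditable, diffable, and attributable to the schema that produced it.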

Ready to see the signet in motion?

Your 30-minute demo. A signed audit trail by the end of it.

We'll wire ShadowIQ into one live workload, stop a sensitive-data leak in real time, and hand you a cryptographic receipt before the meeting ends.