Most companies have no idea which AI tools their employees actually use — or what data is being pasted into them. Floodlight gives you the inventory, the risk classification, and the regulatory mapping in seven days. Free.
No credit card. No backend account. The browser extension is open-source — you can read every byte of code that runs on your team's machines.
DLP and CASB tools were not built for the modern AI taxonomy. They flag "ChatGPT" as one thing — when in reality there are five distinct ChatGPT tiers with five different data policies. The difference between them is the difference between a SOC 2-covered Enterprise contract and pasting customer PII into an account that trains on it.
A typical 100-person company uses around 23 distinct AI products across browser, desktop and embedded surfaces. IT knows about three or four. Security teams discover the rest after an incident.
Article 4 of the EU AI Act (AI literacy) has been in force since 2 February 2025. As a deployer of any third-party AI system, you must take measures to ensure sufficient AI literacy among staff — and be able to demonstrate them.
API keys, customer data, source code — pasted into chat interfaces every day. Most companies don't find out until a leaked key triggers an alert in someone else's logs.
The audit runs entirely on your team's own machines. Pasted content is classified locally — only category labels and counts ever leave the endpoint. The actual prompts never do.
Sideload it in Chrome or Edge. The extension monitors visits to ~60 AI tool domains and classifies pasted content locally for emails, API keys, source code, regulated identifiers, and 12 other categories.
The extension runs silently. No popups, no prompts, no productivity drag. Metadata is logged on the device only — prompt content is never transmitted, and you can audit the source code yourself.
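The local classification step can be sketched as follows. This is a minimal illustration, not the shipped classifier — the names, patterns, and three example categories here are assumptions (the real taxonomy covers 16+ categories and lives in the public repo). The key property it demonstrates: only category labels survive; the pasted text itself is never retained.

```typescript
type Category = "email" | "api_key" | "source_code";

// Illustrative patterns only — a fraction of the real taxonomy.
const PATTERNS: Record<Category, RegExp> = {
  email: /[\w.+-]+@[\w-]+\.[\w.]+/,
  // OpenAI-style and AWS-style key shapes, as examples.
  api_key: /\b(sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16})\b/,
  source_code: /\b(function|import|def|class)\b.*[{(:]/,
};

// Classify pasted text locally and return only category labels.
// The `text` argument goes out of scope when this returns —
// nothing but the labels survives to be logged.
function classifyPaste(text: string): Category[] {
  return (Object.keys(PATTERNS) as Category[]).filter((c) =>
    PATTERNS[c].test(text)
  );
}
```

A paste containing an email address yields `["email"]`; ordinary prose yields an empty array, and no event is logged at all.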
At the end of the period, export the local event log and we generate a 10–12 page PDF: tool inventory, risk classification, sensitive content events, regulatory mapping (EU AI Act, UK GDPR, FCA, ISO 42001), and recommendations.
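The shape of the exported event log can be sketched like this. The field names are hypothetical, but the point stands regardless of naming: the record carries a tool, a category label, and a count — no prompt text anywhere in the schema.

```typescript
// Hypothetical event record — note what is absent: no prompt content field.
interface AuditEvent {
  tool: string;      // e.g. "chatgpt-free"
  category: string;  // e.g. "api_key"
  count: number;     // pastes matching this category
}

// Roll the local event log up into the per-tool summary a report draws on.
function summarise(events: AuditEvent[]): Map<string, Map<string, number>> {
  const byTool = new Map<string, Map<string, number>>();
  for (const e of events) {
    const cats = byTool.get(e.tool) ?? new Map<string, number>();
    cats.set(e.category, (cats.get(e.category) ?? 0) + e.count);
    byTool.set(e.tool, cats);
  }
  return byTool;
}
```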
Real example below. 11 pages, A4, mapped to specific regulatory obligations. Cites the actual usage data from the audit period — not a generic template.
We are asking your employees to install software on their work machines. That is a meaningful ask. The least we can do is publish every byte of it — the extension, the classifier, the risk scoring, the report template — so anyone can verify the privacy claims hold.
Pasted content is classified locally, in the browser, and immediately discarded. Only category labels and counts leave the endpoint. The actual prompt text is never logged, never transmitted, never stored.
Read the byte-by-byte disclosure →

Every tool in the taxonomy gets a numeric risk score and a band based on four published factors: trains-on-input default, hosting jurisdiction, compliance certifications, and retention defaults. Scoring rules are public and corrections are welcome via PR.
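A four-factor score of this shape can be sketched as below. The weights, thresholds, and field names here are illustrative assumptions, not the published scoring rules — those live in the repository.

```typescript
// Hypothetical tool profile covering the four factors named above.
interface ToolProfile {
  trainsOnInputByDefault: boolean;
  hostingJurisdiction: "eu" | "uk" | "us" | "other";
  certifications: string[];   // e.g. ["SOC 2", "ISO 27001"]
  retentionDays: number;      // provider's default retention
}

type Band = "low" | "medium" | "high";

// Additive scoring with made-up weights; real weights are in the repo.
function riskScore(p: ToolProfile): { score: number; band: Band } {
  let score = 0;
  if (p.trainsOnInputByDefault) score += 40;
  if (p.hostingJurisdiction !== "eu" && p.hostingJurisdiction !== "uk") score += 20;
  if (p.certifications.length === 0) score += 20;
  if (p.retentionDays > 30) score += 20;
  const band: Band = score >= 60 ? "high" : score >= 30 ? "medium" : "low";
  return { score, band };
}
```

Under these example weights, a consumer tool that trains on input, hosts outside the EU/UK, holds no certifications, and retains prompts for 90 days scores 100 and lands in the high band.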
Read the scoring methodology →

Browser extension, classifier, audit report generator, taxonomy data — all of it lives in one public repository under the MIT licence. Fork it, audit it, run it offline, send a pull request.
github.com/floodlightsecurity/floodlight →

We don't ship a closed-source agent. We don't require a hosted account to run an audit. We don't sell employee browsing data, ever, full stop. We don't pretend the extension catches mobile or BYOD usage — it doesn't, and the methodology section says so.
See the published limitations →

Sign up, install the extension on a few volunteer browsers, and in seven days you'll have the full PDF in your inbox. We'll be in touch within one working day.