KNOXAI
KNOXAI operators audit AI models for child sexual abuse material (CSAM) and other harmful content. This work exists because AI-generated CSAM increased 260x in one year (Internet Watch Foundation, 2025).
What you will encounter: You will run automated detection gates against AI models. Gate 1 (hash scan) compares model outputs against known-bad hash databases. You will never view actual CSAM — the detection pipeline uses perceptual hashes, membership inference scores, and classifier outputs. You see numbers and pass/fail verdicts, not images.
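The Gate 1 flow above can be sketched as a pure hash-set lookup: the operator sees verdicts and matched digests, never content. Everything below (hash values, function and variable names) is illustrative; the real pipeline, hash formats, and database access are not shown here.

```python
# Hypothetical sketch of a Gate 1-style hash check. The set below is a
# stand-in for a known-bad hash database; the digests are made up.
KNOWN_BAD_HASHES = {
    "a3f1c9d0e8b24467",
    "0b77d2aa91c3ff05",
}

def gate1_verdict(output_hashes):
    """Compare model-output hashes against the known-bad set.

    Returns ("FAIL", sorted_matches) on any hit, else ("PASS", []).
    Only hash strings and a verdict ever surface to the operator.
    """
    matches = sorted(h for h in output_hashes if h in KNOWN_BAD_HASHES)
    return ("FAIL", matches) if matches else ("PASS", [])

print(gate1_verdict(["deadbeefdeadbeef"]))           # ('PASS', [])
print(gate1_verdict(["a3f1c9d0e8b24467", "ffff"]))   # ('FAIL', ['a3f1c9d0e8b24467'])
```

A FAIL verdict is what would trigger the mandatory-reporting path described in the next section; a PASS simply advances the model to the next gate.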
What triggers mandatory reporting: If Gate 1 produces a hash match against the NCMEC database, the platform automatically files a CyberTipline report under 18 USC §2258A. You sign the emergency blacklist cert (L thumb). The publisher is frozen. This is federal law — there is no discretion.
What you earn: 80% of Standard ($20), 70% of Operator ($500), 60% of Portfolio ($5K+), 50% of Gov tier. Paid within 24 hours via Stripe Connect. 1099-NEC at year end. You are an independent contractor, not an employee.
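The tier math above works out as follows, assuming the dollar figure listed is the full per-engagement amount and the percentage is the operator's share (the Gov tier is omitted because no base figure is listed):

```python
# Worked example of the per-tier operator payouts. The (share, base)
# pairs come from the tier list above; "$5K+" is treated as a $5,000
# floor for illustration only.
TIERS = {
    "Standard":  (0.80, 20),
    "Operator":  (0.70, 500),
    "Portfolio": (0.60, 5000),
}

for tier, (share, base) in TIERS.items():
    print(f"{tier}: ${share * base:,.2f}")
# Standard: $16.00
# Operator: $350.00
# Portfolio: $3,000.00
```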
Who should apply: Red teamers. ML researchers. ML engineers. Data scientists. AI safety researchers. Veterans with cyber backgrounds. People with security clearances. You don't need all of these — you need at least one. And you need to care about protecting kids more than you care about a side gig.
Applications reviewed by Operator #0. Response within 48 hours.
The Cochran Block, LLC · CAGE 1CQ66 · All Rights Reserved