Beautiful to humans. Poison for machines.
0 images poisoned
Upload your art
Select multiple for carousel batches
Strip Metadata
Removes EXIF, GPS, camera info, and software fingerprints.
Perturbation
Adversarial noise that corrupts AI training data. Invisible to you, toxic to models.
Poison Message
Embeds a hidden message into pixel data via steganography.
Add your name below to include it in the hidden message.
stolen art unauthorized rights reserved poison pill
Your Watermark
Add your signature or logo as a subtle watermark.
Upload signature
Opacity 15%
Limit Resolution
Downsize to an Instagram-optimal resolution. Less detail for AI to extract.
2048px
Artist identity (optional)
This stays on your device. Your name gets embedded inside the hidden pixel message, so if your art is scraped, it's traceable back to you. We never collect or store anything.
Slide to compare
Original ↔ Poisoned
Images: -- · Pixels Masked: -- · Curse Length: --
Suggested caption for your post
Read the curse
Verify what's hidden in your poisoned image
This belongs in your pocket
Foxglove is coming to iOS and Android. Drop your email — we'll tell you when it's ready.
You're on the list. We'll let you know.
Read the curse
Reveal what's hidden in a poisoned image
Tap to upload a poisoned image
Hidden message found

What is Foxglove?

Foxglove protects your art before you post it. It strips tracking metadata that platforms use to profile you, embeds a hidden ownership watermark into the pixel data, and applies adversarial perturbation that corrupts AI training. Everything runs on your device. Nothing leaves your phone.

How does poisoning work?

Adversarial poisoning targets how AI learns, not how it sees. When your poisoned images end up in a training dataset, the perturbations corrupt what the model learns from them — teaching it the wrong patterns. This is the same principle behind tools like Nightshade from the University of Chicago. It's not about blocking AI from viewing your image today — it's about polluting the training pipeline.
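Foxglove's actual perturbation method isn't public, and tools like Nightshade use more sophisticated, targeted optimization. As a minimal illustration of the underlying idea, here is a toy FGSM-style sketch (Goodfellow et al.'s fast gradient sign method) against a hypothetical linear scorer: each pixel is nudged by a tiny amount in whichever direction most increases the model's loss. The model, weights, and step size here are invented for the example.

```python
# Toy FGSM-style perturbation sketch. Not Foxglove's or Nightshade's
# actual algorithm; it only illustrates sign-of-gradient pixel noise.

def sign(v):
    # Returns -1, 0, or 1.
    return (v > 0) - (v < 0)

def fgsm_perturb(pixels, weights, label, eps=2):
    """Perturb `pixels` against a toy linear scorer w·x.

    For a linear model with a monotone loss, the gradient of the loss
    w.r.t. pixel i has the same sign as -label * w_i, so the attack
    steps each pixel by eps in that direction, clamped to [0, 255].
    """
    return [max(0, min(255, p + eps * sign(-label * w)))
            for p, w in zip(pixels, weights)]

pixels  = [120, 64, 200, 33]       # a tiny stand-in "image"
weights = [0.5, -1.2, 0.3, -0.7]   # toy model weights
label   = 1                        # the true class (+1 / -1)

poisoned = fgsm_perturb(pixels, weights, label)
print(poisoned)  # → [118, 66, 198, 35]
```

Each pixel moved by at most 2 out of 255, invisible to a viewer, yet every step points in the direction that most misleads the model.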

"I uploaded to ChatGPT and it still described my image"

That's expected. No poisoning tool — including Nightshade — will make ChatGPT fail to describe your image. That tests inference (reading), not training (learning). Poisoning works when your art is scraped into training data. The perturbation corrupts the learning process, not the viewing process. These are fundamentally different things.

Three layers of protection

Metadata strip — removes EXIF, GPS, camera data, and tracking info that platforms and scrapers use to identify and profile you.

Hidden watermark — your name and message are steganographically embedded in the pixel data. Proof of ownership that survives re-uploads.

Adversarial perturbation — pixel-level noise designed to corrupt AI model training. We're actively developing deeper, ML-powered perturbation targeting specific model architectures.
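The watermark layer above relies on least-significant-bit (LSB) steganography. Foxglove's real format, keying, and redundancy scheme aren't public; the sketch below shows only the core principle on a flat buffer of pixel bytes: the lowest bit of each byte is overwritten with one bit of the message, changing each pixel value by at most 1.

```python
# Minimal LSB steganography sketch: hide a text message in the
# least-significant bit of each pixel byte. Illustrative only; the
# real watermark format is Foxglove's own.

def embed(pixels, message):
    # Message bytes, most significant bit first.
    bits = [(b >> i) & 1 for b in message.encode() for i in range(7, -1, -1)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return bytes(out)

def extract(pixels, length):
    # Read back `length` bytes worth of low bits.
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        int("".join(map(str, bits[i:i + 8])), 2)
        for i in range(0, len(bits), 8)
    ).decode()

cover = bytes(range(64))          # stand-in for raw pixel data
stego = embed(cover, "mine")
print(extract(stego, 4))          # → mine
```

Because only the lowest bit changes, the stego image is visually identical to the original, but the message rides along through re-uploads that preserve pixel values.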

What about compression?

Instagram and other platforms recompress images. Watermarks are embedded redundantly across the image to improve survival. Save as PNG and let the platform handle compression on their end.
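To illustrate why redundancy helps a watermark survive recompression, here is a sketch using simple repetition coding with a majority vote at decode time. The repeat count and the simulated corruption pattern are invented for the example; a real scheme would also spread copies across the image and use error-correcting codes.

```python
# Redundancy sketch: write each payload bit to many pixel sites and
# decode by majority vote, so lossy recompression can flip a minority
# of sites without destroying the message.

def encode(bits, copies=9):
    # Repeat every payload bit `copies` times.
    return [b for bit in bits for b in [bit] * copies]

def decode(sites, copies=9):
    # Majority vote over each group of `copies` sites.
    return [int(sum(sites[i:i + copies]) > copies // 2)
            for i in range(0, len(sites), copies)]

payload = [1, 0, 1, 1]
sites = encode(payload)

# Simulate lossy recompression flipping a third of the stored copies.
for i in range(0, len(sites), 3):
    sites[i] ^= 1

print(decode(sites))  # → [1, 0, 1, 1]
```

Even with a third of the copies corrupted, the vote in each group still lands on the original bit, which is the intuition behind embedding the watermark redundantly before the platform recompresses the upload.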

Get Foxglove on your phone
We're building the iOS and Android app now. Drop your email and we'll let you know the second it's live.
You're on the list. We'll let you know.