Foxglove protects your art before you post it. It strips tracking metadata that platforms use to profile you, embeds a hidden ownership watermark into the pixel data, and applies adversarial perturbation that corrupts AI training. Everything runs on your device. Nothing leaves your phone.
Adversarial poisoning targets how AI learns, not how it sees. When your poisoned images end up in a training dataset, the perturbations corrupt what the model learns from them — teaching it the wrong patterns. This is the same principle behind tools like Nightshade from the University of Chicago. It's not about blocking AI from viewing your image today — it's about polluting the training pipeline.
If you ask ChatGPT or another vision model to describe your protected image and it succeeds, that's expected. No poisoning tool — including Nightshade — will make ChatGPT fail to describe your image. That tests inference (reading), not training (learning). Poisoning works when your art is scraped into training data. The perturbation corrupts the learning process, not the viewing process. These are fundamentally different things.
Metadata strip — removes EXIF, GPS, camera data, and tracking info that platforms and scrapers use to identify and profile you.
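To illustrate the idea behind metadata stripping (this is a simplified sketch, not Foxglove's actual implementation): EXIF and GPS data in a JPEG live in APPn marker segments, so a stripper can walk the segment headers and copy everything except those segments.

```python
def strip_jpeg_metadata(data: bytes) -> bytes:
    """Drop APP1..APP15 and COM segments from a JPEG byte stream.

    Toy sketch: EXIF/GPS data lives in APP1, other tracking info in
    APP2..APP15 and comments. APP0 (JFIF) and the image data are kept.
    """
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            out.extend(data[i:])      # entropy-coded data: copy the rest
            break
        marker = data[i + 1]
        if marker == 0xDA:            # SOS: compressed scan follows
            out.extend(data[i:])
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i:i + 2 + length]
        # Keep the segment unless it's metadata (APP1..APP15 or a comment).
        if not (0xE1 <= marker <= 0xEF or marker == 0xFE):
            out.extend(segment)
        i += 2 + length
    return bytes(out)
```

A real stripper also has to handle malformed files and formats like PNG and HEIC, but the principle is the same: metadata is structurally separate from pixel data and can be removed without touching the image.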
Hidden watermark — your name and message are steganographically embedded in the pixel data. Proof of ownership that survives re-uploads.
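Steganographic embedding can be illustrated with classic least-significant-bit (LSB) hiding — a simplified stand-in for whatever scheme Foxglove actually uses. Changing only the lowest bit shifts each channel value by at most 1, which is imperceptible.

```python
def embed_lsb(pixels: list[int], message: bytes) -> list[int]:
    """Hide message bits in the least-significant bit of each 8-bit pixel.

    Toy scheme: a real embedder would add a length prefix and error
    correction; here the extractor is assumed to know the length.
    """
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "image too small for message"
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite only the lowest bit
    return out


def extract_lsb(pixels: list[int], n_bytes: int) -> bytes:
    """Recover n_bytes hidden by embed_lsb."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )
```

Because the message lives in the pixel values themselves rather than in metadata, it survives a re-upload as long as the pixels are preserved; surviving lossy recompression takes the redundancy described below.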
Adversarial perturbation — pixel-level noise designed to corrupt AI model training. We're actively developing deeper, ML-powered perturbation targeting specific model architectures.
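Real adversarial methods (FGSM and its descendants, as used by tools like Nightshade) compute the perturbation direction from a model's gradients; without a model on hand, the sketch below shows only the budget-and-clip structure those methods share: every pixel moves by at most a small epsilon and stays in valid range, so the change is hard to see.

```python
import random


def perturb(pixels: list[int], epsilon: int = 4, seed: int = 42) -> list[int]:
    """Apply an epsilon-bounded perturbation to 8-bit pixel values.

    Toy illustration: real adversarial perturbation picks each sign from
    a model's gradient, not at random. The shared structure is the
    per-pixel budget (|delta| <= epsilon) and the clip to [0, 255].
    """
    rng = random.Random(seed)
    return [
        min(255, max(0, p + rng.choice((-1, 1)) * epsilon))
        for p in pixels
    ]
```

The small budget is the whole point: the perturbation is below the threshold of human perception but, when gradient-directed, sits exactly where a training run is most sensitive.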
Instagram and other platforms recompress images, which can damage embedded data. To improve survival, the watermark is embedded redundantly across the image. Export as PNG and let the platform handle compression on its end.
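Why redundancy helps can be shown with a toy sketch (hypothetical helpers, not the app's real scheme): write the same payload into several tiles of the image, then recover it by per-bit majority vote, which tolerates recompression destroying a minority of the copies.

```python
def embed_redundant(tiles: list[list[int]], payload_bits: list[int]) -> list[list[int]]:
    """Write the same payload into the LSBs of every tile."""
    out = []
    for tile in tiles:
        t = tile[:]
        for i, bit in enumerate(payload_bits):
            t[i] = (t[i] & ~1) | bit
        out.append(t)
    return out


def recover_majority(tiles: list[list[int]], n_bits: int) -> list[int]:
    """Per-bit majority vote across tiles.

    The payload survives as long as most copies survive, even if
    recompression wipes out some tiles entirely.
    """
    votes = [[t[i] & 1 for t in tiles] for i in range(n_bits)]
    return [1 if sum(v) * 2 > len(v) else 0 for v in votes]
```

Production watermarking typically goes further (error-correcting codes, embedding in frequency-domain coefficients that compression preserves), but majority voting over redundant copies captures the core survival mechanism.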