Lately I’ve been interested in how to trick machine learning systems. This is a cool example of an image being imperceptibly modified so that a machine learning model misclassifies it.
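The basic idea behind those imperceptible modifications is to nudge each pixel a tiny amount in whichever direction hurts the model's score the most (the "fast gradient sign" trick). Here's a toy sketch with a made-up linear classifier standing in for an image model; the weights, input, and epsilon are all invented for illustration:

```python
import numpy as np

# Toy linear classifier: predicts class 1 if w.x + b > 0.
# (An illustrative stand-in for an image classifier; w, b, x are made up.)
w = np.array([1.0, -2.0, 3.0])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# An input the model confidently classifies as class 1 (score = 3.1).
x = np.array([0.5, -0.5, 0.5])

# Fast-gradient-sign-style perturbation: move each feature a small,
# fixed amount against the gradient of the class score (here, w).
eps = 0.6
x_adv = x - eps * np.sign(w)

print(predict(x))      # 1
print(predict(x_adv))  # 0 -- a small perturbation, a different label
```

For a real image model the same move is spread across thousands of pixels, so each one changes too little for a human to notice while the scores still flip.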
The main reason I’m interested is all the tracking going on: by advertisers, by advertising platforms, by governments, and by hackers. It’s too late and too hard to keep information from getting out, so I think it’s more interesting to create false information so that anyone watching gets an incorrect view of me.
One technique I want to try is what I’m calling “adversarial liking.” It just means liking things on facebook that I don’t actually like. For example, I’m going to ask for a list of podcasts and like them all, even though I’m never going to listen to them. I want facebook to get an incorrect picture of me. Facebook doesn’t really account for this threat model, so it should be a successful attack.
I know, as an advertiser, it’d suck to have my ads be less effective because facebook has an inaccurate picture of its users. But this kind of thing will only become more important in this age of surveillance. Some of my next projects might include google searching for things I don’t care about, creating automated web traffic, adding products to my amazon cart, signing up to and “reading” email newsletters, and creating false location trails.
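A first version of the decoy-search project could be very small: keep a list of topics I genuinely don't care about, pick from it at random, and fire off queries on a jittered, human-ish schedule. This is just a sketch of that idea; the topic list, the interval range, and the commented-out fetch step are all placeholders, not a finished tool:

```python
import random
import time

# Hypothetical decoy topics -- things I have no real interest in.
DECOY_TOPICS = [
    "vintage tractor restoration",
    "competitive axe throwing",
    "how to yodel",
    "celebrity gossip 2014",
]

def decoy_queries(n, seed=None):
    """Pick n random decoy search queries (seedable for testing)."""
    rng = random.Random(seed)
    return [rng.choice(DECOY_TOPICS) for _ in range(n)]

def run_noise(n=3):
    """Issue n decoy searches with randomized, human-ish pauses."""
    for query in decoy_queries(n):
        # A real version would hit a search engine here, e.g.:
        # requests.get("https://example.com/search", params={"q": query})
        print("searching for:", query)
        time.sleep(random.uniform(0.0, 0.1))  # jittered timing
```

The randomized timing matters as much as the queries themselves: a request every 60.0 seconds looks like a bot, and a profiler could filter it right back out.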