Mitigating nonconsensual algorithmic victimization: Fawkes go brrr

Putting your face online has never been more dangerous. Private companies are scraping public communities, purchasing access to existing databases, and (nonconsensually!) using our data to hone their facial recognition models. What steps can we, as security practitioners, take to mitigate the impact of these privacy violations? And how far are we ethically bound to go in protecting user privacy?

Fawkes is an algorithmic image cloaking tool developed by the SAND Lab at the University of Chicago. In this talk, you will learn how Fawkes cloaks images to defeat facial recognition models, what has improved in its recent releases, and how you can deploy it in your organization to protect your users from unauthorized facial scanning.
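As a taste of what the talk covers, Fawkes ships as a command-line tool. A minimal sketch of a typical workflow is shown below; the exact flags and mode names may differ between releases, so treat this as illustrative and check the project's own documentation (the directory path `./photos` is a placeholder):

```shell
# Install the Fawkes CLI (also distributed as prebuilt binaries).
pip install fawkes

# Cloak every image in ./photos; the protection mode trades
# stronger perturbation against visible artifacts.
# Cloaked copies are written alongside the originals.
fawkes -d ./photos --mode low
```

The cloaked outputs look essentially identical to a human viewer, but are intended to poison the feature representations a facial recognition model learns from them.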