How to help/fool an object detector

26 November 2020, 14:56

By: Alexandru Balan, Jan Van Looy

Category: AI, Artificial Intelligence, Computer Vision, Machine Learning, ML6, Technical Blogpost

Surveillance cameras are a growing presence in public spaces across the world. It is predicted that their number will climb above 1 billion by the end of 2021. This fact, combined with the rapid increase in the performance and availability of computing resources, has led to growing interest in developing faster and more accurate people detection systems. Nowadays, free and open-source pre-trained models, such as YOLOv4, can run in real time on commodity hardware with state-of-the-art performance and easy setup.

Given these advances, concerns regarding privacy have been rising sharply. For this reason, researchers have developed an interest in how these models work and, in some cases, how they can be tricked. Generative Adversarial Networks (GANs) have been used to this end, creating "stealth images" and even stealth t-shirts that make the wearer largely invisible to object detection models.

Our first goal in this blogpost is to explore whether we can develop our own stealth images using open-source technologies and put them to the test. Moreover, early experiments with stealth images were carried out on smaller models, such as YOLO-tiny, that are no longer state of the art. Hence, our second goal is to test whether the techniques described still work on state-of-the-art object detectors such as YOLOv4, which are much harder to fool. Finally, if we are to live in a world full of self-driving cars and robots, being seen rather than not being seen may also become a desirable goal. Our third goal, then, is to invert the whole system and see if we can design an image for a t-shirt that helps object detectors recognize and avoid humans.


Interested in how we developed a stealth t-shirt and whether object detectors can be helped to recognize and avoid humans? Read the full interactive blogpost on our Medium blog.