We propose an approach for adversarial attacks on dense prediction models. Our method generates a single perturbation that simultaneously fools multiple black-box detection and segmentation models.
Service
Conference Reviewer: WACV 2024
Teaching Assistant: EE240 Pattern Recognition, Spring 2023