ArtofRobust Workshop Schedule
Event                            Start  End
Opening Remarks                  9:00   9:10
Invited talk: Aleksander Madry   9:10   9:40
Invited talk: Deqing Sun         9:40   10:10
Invited talk: Bo Li              10:10  10:40
Invited talk: Cihang Xie         10:40  11:10
Invited talk: Alan Yuille        11:10  11:40
Oral Session                     11:40  12:00
Challenge Session                12:00  12:10
Poster Session                   12:10  12:30
Lunch                            12:30  13:30
Invited talk: Lingjuan Lyu       13:30  14:00
Invited talk: Judy Hoffman       14:00  14:30
Invited talk: Furong Huang       14:30  15:00
Invited talk: Ludwig Schmidt     15:00  15:30
Invited talk: Chaowei Xiao       15:30  16:00
Poster Session                   16:00  17:00

Call for Papers

The recent success of deep learning has attracted great attention from academia and industry. However, research has shown that deep models raise serious security and safety concerns in real-world applications. This workshop focuses on discovering and harnessing both the positive and negative aspects of adversarial machine learning, especially in computer vision. We welcome research contributions related to (but not limited to) the following topics:
  • Adversarial attacks against computer vision tasks
  • Improving the robustness of deep learning systems
  • Interpreting and understanding model robustness
  • Adversarial attacks for social good
  • Datasets and benchmarks that benefit model robustness
Format: Submitted papers (PDF format) must use the CVPR 2023 Author Kit (LaTeX/Word zip file), be anonymized, and follow the CVPR 2023 author instructions. The workshop considers two types of submissions: (1) Long Paper: limited to 6 pages, excluding references; (2) Extended Abstract: limited to 4 pages, including references.

Submission Site: https://cmt3.research.microsoft.com/3rdAdvML2023
Submission Due: March 15, 2023, Anywhere on Earth (AoE)

Accepted Long Papers

  • Certified Adversarial Robustness Within Multiple Perturbation Bounds [Paper] [Supplementary Material] oral presentation
    Soumalya Nandi (Indian Institute of Science, Bangalore)*; Sravanti Addepalli (Indian Institute of Science); Harsh Rangwani (Indian Institute of Science); Venkatesh Babu Radhakrishnan (Indian Institute of Science)
  • Adversarial Defense in Aerial Detection [Paper]
    Yuwei Chen (Aviation Industry Development Research Center of China)*; Shiyong Chu (Aviation Industry Development Research Center of China)
  • Investigating Catastrophic Overfitting in Fast Adversarial Training: A Self-fitting Perspective [Paper] [Supplementary Material]
    Zhengbao He (Shanghai Jiao Tong University)*; Tao Li (Shanghai Jiao Tong University); Sizhe Chen (Shanghai Jiao Tong University); Xiaolin Huang (Shanghai Jiao Tong University)
  • Universal Watermark Vaccine: Universal Adversarial Perturbations for Watermark Protection [Paper] oral presentation
    Jianbo Chen (Hunan University)*; Xinwei Liu (Institute of Information Engineering, Chinese Academy of Sciences); Siyuan Liang (Chinese Academy of Sciences); Xiaojun Jia (Institute of Information Engineering, Chinese Academy of Sciences); Yuan Xun (Institute of Information Engineering, Chinese Academy of Sciences)
  • Robustness with Query-efficient Adversarial Attack using Reinforcement Learning [Paper]
    Soumyendu Sarkar (Hewlett Packard Enterprise)*; Ashwin Ramesh Babu (Hewlett Packard Enterprise Labs); Sajad Mousavi (Hewlett Packard Enterprise); Sahand Ghorbanpour (HPE); Vineet Gundecha (Hewlett Packard Enterprise); Antonio Guillen (Hewlett Packard Enterprise); Ricardo Luna Gutierrez (Hewlett Packard Enterprise); Avisek Naug (Hewlett Packard Enterprise)
  • Don't FREAK Out: A Frequency-Inspired Approach to Detecting Backdoor Poisoned Samples in DNNs [Paper]
    Hasan Abed Al Kader Hammoud (King Abdullah University of Science and Technology)*; Adel Bibi (University of Oxford); Philip Torr (University of Oxford); Bernard Ghanem (KAUST)
  • Exploring Diversified Adversarial Robustness in Neural Networks via Robust Mode Connectivity [Paper]
    Ren Wang (Illinois Institute of Technology)*; Yuxuan Li (Harbin Institute of Technology); Sijia Liu (Michigan State University)
  • How many dimensions are required to find an adversarial example? [Paper] [Supplementary Material] oral presentation
    Charles Godfrey (Pacific Northwest National Lab)*; Henry Kvinge (Pacific Northwest National Lab); Elise Bishoff (Pacific Northwest National Lab); Myles Mckay (Pacific Northwest National Lab); Davis Brown (Pacific Northwest National Laboratory); Timothy Doster (Pacific Northwest National Laboratory); Eleanor Byler (Pacific Northwest National Laboratory)
  • An Extended Study of Human-like Behavior under Adversarial Training [Paper]
    Paul Gavrikov (Offenburg University)*; Janis Keuper (Offenburg University); Margret Keuper (University of Mannheim)
  • Test-time Detection and Repair of Adversarial Samples via Masked Autoencoder [Paper]
    Yun-Yun Tsai (Columbia University)*; JuChin Chao (Columbia University); Albert Wen (Columbia); Zhaoyuan Yang (GE Research); Chengzhi Mao (Columbia University); Tapan Shah (GE); Junfeng Yang (Columbia University)
  • Deep Convolutional Sparse Coding Networks for Interpretable Image Fusion [Paper]
    Zixiang Zhao (Xi'an Jiaotong University)*; Jiangshe Zhang (Xi'an Jiaotong University); Haowen Bai (Xi'an Jiaotong University); Yicheng Wang (Xi'an Jiaotong University); Yukun Cui (Xi'an Jiaotong University); Lilun Deng (Xi'an Jiaotong University); Kai Sun (Xi'an Jiaotong University); Chunxia Zhang (Xi'an Jiaotong University); Junmin Liu (Xi'an Jiaotong University); Shuang Xu (Northwestern Polytechnical University)
  • Robustness Benchmarking of Image Classifiers for Physical Adversarial Attack Detection [Paper]
    Ojaswee (Indian Institute of Science Education and Research Bhopal); Akshay Agarwal (IISER Bhopal)*
  • Generating Adversarial Samples in Mini-Batches May Be Detrimental To Adversarial Robustness [Paper]
    Timothy P Redgrave (University of Notre Dame)*; Colton R Crum (University of Notre Dame)
  • A Few Adversarial Tokens Can Break Vision Transformers [Paper] [Supplementary Material]
    Ameya Joshi (New York University)*; Sai Charitha Akula (New York University); Gauri Jagatap (New York University); Chinmay Hegde (New York University)
  • Dual-model Bounded Divergence Gating for Improved Clean Accuracy in Adversarial Robust Deep Neural Networks [Paper]
    Hossein Aboutalebi (University of Waterloo)*; Mohammad Javad Shafiee (University of Waterloo); Chi-en A Tai (University of Waterloo); Alexander Wong (University of Waterloo)
  • A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion [Paper]
    Haomin Zhuang (South China University of Technology)*; Yihua Zhang (Michigan State University); Sijia Liu (Michigan State University)
  • Implications of Solution Patterns on Adversarial Robustness [Paper]
    Hengyue Liang (University of Minnesota)*; Buyun Liang (University of Minnesota); Ying Cui (University of Minnesota); Tim Mitchell (Queens College / CUNY); Ju Sun (University of Minnesota)

Accepted Extended Abstracts

  • Benchmarking the Physical-world Adversarial Robustness of Vehicle Detection [Paper]
    Tianyuan Zhang (Beihang University)*; Yisong Xiao (Beihang University); Xiaoya Zhang (Beihang University); Li Hao (Beihang University); Lu Wang (Beihang University)
  • Benchmarking the Robustness of Quantized Models [Paper]
    Yisong Xiao (Beihang University)*; Tianyuan Zhang (Beihang University); Shunchang Liu (Beihang University); Haotong Qin (Beihang University)
  • Higher Model Robustness by Meta-Optimization for Monocular Depth Estimation [Paper] [Supplementary Material]
    Cho-Ying Wu (University of Southern California)*; Yiqi Zhong (University of Southern California); Junying Wang (University of Southern California); Ulrich Neumann (USC)
  • Neural Architecture Design and Robustness: A Dataset [Paper] [Supplementary Material]
    Steffen Jung (MPII)*; Jovita Lukasik (University of Mannheim); Margret Keuper (University of Mannheim)
  • Boosting Cross-task Transferability of Adversarial Patches with Visual Relations [Paper]
    Wentao Ma (Beihang University)*; SongZe Li (Beihang University); Yisong Xiao (Beihang University); Shunchang Liu (Beihang University)