# TransferAttack

**Repository Path**: nan-chao/TransferAttack

## Basic Information

- **Project Name**: TransferAttack
- **Description**: Transfer-based attacks. Original repository: https://github.com/Trustworthy-AI-Group/TransferAttack
- **Primary Language**: Python
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 1
- **Forks**: 0
- **Created**: 2025-05-27
- **Last Updated**: 2025-09-17

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README
| Category | Attack | Main Idea |
|---|---|---|
| Gradient-based | FGSM (Goodfellow et al., 2015) | Add a small perturbation in the direction of gradient |
| | I-FGSM (Kurakin et al., 2015) | Iterative version of FGSM |
| | MI-FGSM (Dong et al., 2018) | Integrate a momentum term into I-FGSM |
| | NI-FGSM (Lin et al., 2020) | Integrate Nesterov's accelerated gradient into I-FGSM |
| | PI-FGSM (Gao et al., 2020) | Reuse the cut noise and apply a heuristic projection strategy to generate patch-wise noise |
| | VMI-FGSM (Wang et al., 2021) | Variance-tuned MI-FGSM |
| | VNI-FGSM (Wang et al., 2021) | Variance-tuned NI-FGSM |
| | EMI-FGSM (Wang et al., 2021) | Accumulate the gradients of several data points linearly sampled in the direction of the previous gradient |
| | AI-FGTM (Zou et al., 2022) | Adopt Adam to adjust the step size and momentum using the tanh function |
| | I-FGS²M (Zhang et al., 2021) | Assign staircase weights to each interval of the gradient |
| | SMI-FGRM (Han et al., 2023) | Substitute the sign function with data rescaling and use a depth-first sampling technique to stabilize the update direction |
| | VA-I-FGSM (Zhang et al., 2022) | Adopt a larger step size and auxiliary gradients from other categories |
| | RAP (Qin et al., 2022) | Inject the worst-case perturbation when calculating the gradient |
| | PC-I-FGSM (Wan et al., 2023) | Gradient prediction-correction on MI-FGSM |
| | IE-FGSM (Peng et al., 2023) | Integrate an anticipatory data point to stabilize the update direction |
| | GRA (Zhu et al., 2023) | Correct the gradient using the average gradient of several data points sampled in the neighborhood and adjust the update gradient with a decay indicator |
| | GNP (Wu et al., 2023) | Introduce a gradient norm penalty (GNP) term into the loss function |
| | MIG (Ma et al., 2023) | Utilize integrated gradients to steer the generation of adversarial perturbations |
| | DTA (Yang et al., 2023) | Calculate the gradient on several examples using a small step size |
| | PGN (Ge et al., 2023) | Penalize the gradient norm on the original loss function |
| | MEF (Qiu et al., 2024) | Construct a max-min bi-level optimization problem aimed at finding flat adversarial regions |
| | ANDA (Fang et al., 2024) | Explicitly characterize adversarial perturbations from a learned distribution by taking advantage of the asymptotic normality property of stochastic gradient ascent |
| | GI-FGSM (Wang et al., 2024) | Use global momentum initialization to better stabilize the update direction |
| | FGSRA (Wang et al., 2024) | Leverage frequency information and introduce similarity weights to assess neighborhood contribution |
| | MUMODIG (Ren et al., 2025) | Improve integrated-gradient attacks by generating integration paths through multiple baseline samples and enforcing the monotonicity of each path |
| | GAA (Gan et al., 2025) | Aggregate adversarial examples in the neighborhood with worst-aware loss and substitute loss to obtain a flatter local minimum |
| | Foolmix (Li et al., 2025) | Strengthen the transferability of adversarial examples via dual-blending and a direction-update strategy |
| Input transformation-based | DIM (Xie et al., 2019) | Randomly resize and pad the input sample |
| | TIM (Dong et al., 2019) | Adopt a Gaussian kernel to smooth the gradient before updating the perturbation |
| | SIM (Ling et al., 2020) | Calculate the average gradient of several scaled images |
| | DEM (Zou et al., 2020) | Calculate the average gradient of several DIM-transformed images |
| | Admix (Wang et al., 2021) | Mix up the images from other categories |
| | ATTA (Wu et al., 2021) | Train an adversarial transformation network to perform the input transformation |
| | MaskBlock (Fan et al., 2022) | Calculate the average gradient of multiple randomly block-level masked images |
| | SSM (Long et al., 2022) | Randomly scale images and add noise in the frequency domain |
| | AITL (Yuan et al., 2022) | Select the most effective combination of image transformations specific to the input image |
| | PAM (Zhang et al., 2023) | Mix adversarial examples with base images, with ratios generated by a trained semantic predictor, for gradient accumulation |
| | LPM (Wei et al., 2023) | Boost adversarial transferability with learnable patch-wise masks |
| | SIA (Wang et al., 2023) | Split the image into blocks and apply various transformations to each block |
| | STM (Ge et al., 2023) | Transform the image using a style-transfer network |
| | USMM (Wang et al., 2023) | Apply a uniform scale and a mix mask from an image of a different category to the input image |
| | DeCowA (Lin et al., 2024) | Augment input examples via an elastic deformation to obtain rich local details of the augmented inputs |
| | L2T (Zhu et al., 2024) | Optimize the input-transformation trajectory along the adversarial iterations |
| | BSR (Wang et al., 2024) | Randomly shuffle and rotate the image blocks |
| | OPS (Guo et al., 2025) | Construct a stochastic optimization problem from input-transformation operators and random perturbations |
| Advanced objective | TAP (Zhou et al., 2018) | Maximize the difference of feature maps between benign sample and adversarial example and smooth the perturbation |
| | ILA (Huang et al., 2019) | Enlarge the similarity of the feature difference between the original adversarial example and the benign sample |
| | ATA (Wu et al., 2020) | Add a regularizer on the difference between the attention maps of the benign sample and the adversarial example |
| | YAILA (Li et al., 2020) | Establish a linear map between intermediate-level discrepancies and the classification loss |
| | FIA (Wang et al., 2021) | Minimize a weighted feature map in the intermediate layer |
| | IR (Wang et al., 2021) | Introduce an interaction regularizer into the objective function to minimize interaction for better transferability |
| | TRAP (Wang et al., 2021) | Utilize affine transformations and a reference feature map |
| | TAIG (Huang et al., 2022) | Adopt integrated gradients to update the perturbation |
| | FMAA (He et al., 2022) | Utilize momentum to calculate the weight matrix in FIA |
| | NAA (Zhang et al., 2022) | Compute the feature importance of each neuron by decomposing the integral |
| | RPA (Zhang et al., 2022) | Calculate the weight matrix in FIA on randomly patch-wise masked images |
| | Fuzziness_Tuned (Yang et al., 2023) | Fuzzify the logits vector using confidence-scaling and temperature-scaling mechanisms |
| | DANAA (Jin et al., 2023) | Utilize an adversarial non-linear path to compute the feature importance of each neuron by decomposing the integral |
| | ILPD (Li et al., 2023) | Decay the intermediate-level perturbation from the benign features by mixing the features of benign samples and adversarial examples |
| | BFA (Wang et al., 2024) | Calculate the weight matrix in FIA on adversarial examples generated by I-FGSM |
| | P2FA (Liu et al., 2025) | Enhance transferability by directly perturbing important features multiple times in the feature space and then inverting them back to the pixel space |
| Model-related | SGM (Wu et al., 2020) | Utilize more gradients from the skip connections in the residual blocks |
| | LinBP (Guo et al., 2020) | Calculate the forward pass as normal but backpropagate the loss as if no ReLU were encountered in the forward pass |
| | PNA-PatchOut (Wei et al., 2022) | Ignore the gradient of attention and randomly drop patches among the perturbation |
| | LLTA (Fang et al., 2022) | Adopt simple random resizing and padding for data augmentation and randomly alter backpropagation for model augmentation |
| | IAA (Zhu et al., 2022) | Replace ReLU with Softplus and decrease the weight of the residual modules |
| | SAPR (Zhou et al., 2022) | Randomly permute input tokens at each attention layer |
| | SETR (Naseer et al., 2022) | Ensemble and refine classifiers after each transformer block |
| | ATA_ViT (Wang et al., 2022) | Activate the uncertain attention and perturb the sensitive embedding to generate more transferable adversarial examples on ViTs |
| | DRA (Zhu et al., 2022) | Use fine-tuned models to push the image away from the original distribution while generating the adversarial examples |
| | MTA (Qin et al., 2023) | Train a meta-surrogate model (MSM) whose adversarial examples maximize the loss on a single or a set of pre-trained surrogate models |
| | MUP (Yang et al., 2023) | Mask unimportant parameters of surrogate models |
| | TGR (Zhang et al., 2023) | Scale the gradient and mask the maximum or minimum gradient magnitudes |
| | DSM (Yang et al., 2022) | Train surrogate models in a knowledge-distillation manner and adopt CutMix on the input |
| | DHF (Wang et al., 2023) | Mix up the features of the current examples and benign samples and randomly replace features with their means |
| | BPA (Wang et al., 2023) | Recover the truncated gradient of non-linear layers |
| | AGS (Wang et al., 2024) | Train surrogate models with adversary-centric contrastive learning and adversarial invariant learning |
| | MetaSSA (Weng et al., 2024) | Utilize low-frequency feature mixing for meta-train to compute gradients, average gradients through adversarial feature mixing during meta-test, and update adversarial examples using gradients from both steps |
| | VDC (Zhang et al., 2024) | Add virtual dense connections for dense gradient back-propagation in attention maps and MLP blocks, without altering the forward pass |
| | MA (Ma et al., 2024) | Minimize the KL divergence between the predictions of the source and witness models |
| | ATT (Ming et al., 2024) | Adaptively re-scale token gradients, patch out under semantic guidance, and truncate token gradients |
| | FPR (Ren et al., 2025) | Refine attention maps and token embeddings of Vision Transformers during forward propagation |
| | AWT (Chen et al., 2025) | Adaptively re-scale token gradients, tune surrogate model weights without extra data, and flatten local maxima to boost transferability |
| | FAUG (Wang et al., 2025) | Inject random noise into intermediate features to diversify attack gradients and mitigate model-specific overfitting, amplifying transferability |
| | ANA (Chen et al., 2025) | Use masking operations and a lightweight alignment network to make surrogate models focus on critical regions of images, generating adversarial examples with much higher transferability |
| Ensemble-based | Ens (Liu et al., 2017) | Generate the adversarial examples using multiple models |
| | Ghost (Li et al., 2020) | Densely apply dropout and random scaling on the skip connections to generate several ghost networks and average the gradient |
| | SVRE (Xiong et al., 2020) | Use the stochastic variance-reduced gradient to update the adversarial example |
| | LGV (Gubri et al., 2022) | Ensemble multiple weight sets from a few additional training epochs with a constant, high learning rate |
| | MBA (Li et al., 2023) | Maximize the average prediction loss on several models obtained by a single run of fine-tuning the surrogate model using Bayesian optimization |
| | AdaEA (Chen et al., 2023) | Adjust the weight of each surrogate model in the ensemble attack with an adjustment strategy and reduce conflicts between surrogate models by reducing the disparity of their gradients |
| | CWA (Chen et al., 2023) | Define the common weakness of an ensemble of models as a solution that lies on a flat landscape and close to the models' local optima |
| | SMER (Tang et al., 2024) | Introduce ensemble reweighting to refine ensemble weights by maximizing the attack loss based on reinforcement learning |
| Generation-based | CDTP (Naseer et al., 2019) | Train a generative model on datasets from different domains to learn domain-invariant perturbations |
| | LTP (Nakka et al., 2021) | Introduce a loss function based on mid-level features to learn an effective, transferable perturbation generator |
| | ADA (Kim et al., 2022) | Utilize a generator to stochastically perturb shared salient features across models to avoid poor local optima and explore the search space thoroughly |
| | GE-ADVGAN (Zhu et al., 2024) | Enhance the transferability of adversarial samples by incorporating gradient-editing mechanisms and frequency-domain exploration into the generative model's training process |
| | DiffAttack (Chen et al., 2024) | An unrestricted attack based on diffusion models that achieves both good transferability and imperceptibility |
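The gradient-based family above shares one loop: compute the loss gradient, stabilize it (momentum, variance tuning, sampling), then take a signed step clipped to the ε-ball. Below is a minimal pure-Python sketch of the MI-FGSM momentum update on a hypothetical 1-D "model" with an analytic gradient; it is an illustration of the update rule, not the repository's API, which operates on image tensors.

```python
def sign(v):
    """Sign of v as -1, 0, or 1."""
    return (v > 0) - (v < 0)

def mi_fgsm(x0, grad_fn, eps=0.3, steps=10, mu=1.0):
    """MI-FGSM sketch: accumulate L1-normalized gradients into a momentum
    term g, then take signed steps clipped to the eps-ball around x0."""
    alpha = eps / steps      # per-step budget
    g, x = 0.0, x0
    for _ in range(steps):
        grad = grad_fn(x)
        # momentum accumulation with L1-normalized gradient
        g = mu * g + grad / (abs(grad) + 1e-12)
        # signed update, projected back into [x0 - eps, x0 + eps]
        x = x + alpha * sign(g)
        x = min(max(x, x0 - eps), x0 + eps)
    return x

# Toy loss (x - 1)^2 to maximize; its gradient 2*(x - 1) is negative at
# x0 = 0, so the attack walks x down to the boundary x0 - eps.
adv = mi_fgsm(0.0, lambda x: 2.0 * (x - 1.0))
```

Setting `mu=0` recovers plain I-FGSM; the momentum term is what keeps the update direction stable across iterations.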
| Category | Attack | Main Idea |
|---|---|---|
| Input transformation-based | ODI (Byun et al., 2022) | Diverse inputs based on 3D objects |
| | SU (Wei et al., 2023) | Optimize the adversarial perturbation on the original and cropped images by minimizing the prediction error and maximizing their feature similarity |
| | IDAA (Liu et al., 2024) | Design local mixup to randomly mix a group of transformed adversarial images, strengthening input diversity |
| Advanced objective | AA (Inkawhich et al., 2019) | Minimize the similarity of feature difference between the original adversarial example and target benign sample |
| | PoTrip (Li et al., 2020) | Introduce the Poincaré distance as the similarity metric to make the gradient magnitude self-adaptive |
| | Logit (Zhao et al., 2021) | Replace the cross-entropy loss with a logit loss |
| | Logit-Margin (Weng et al., 2023) | Downscale the logits using a temperature factor and an adaptive margin |
| | CFM (Byun et al., 2023) | Stochastically mix feature maps of adversarial examples with clean feature maps of benign images |
| | FFT (Zeng et al., 2024) | Fine-tune a crafted adversarial example in the feature space |
| Generation-based | TTP (Naseer et al., 2021) | Train a generative model to generate adversarial examples, of which both the global distribution and local neighborhood structure in the latent feature space are matched with the target class. |
| | M3D (Zhao et al., 2023) | Minimize the maximum model discrepancy between two surrogate models to craft transferable targeted adversarial examples |
| Ensemble-based | SASD_WS (Wu et al., 2024) | Incorporate Sharpness-Aware Self-Distillation (SASD) and Weight Scaling (WS) to promote the source model's generalization capability. |
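Several of the targeted objectives above change only the loss. For instance, the Logit attack replaces softmax cross-entropy with the raw target logit, which keeps a useful gradient even when the target probability saturates. The sketch below contrasts the two losses on a toy logits vector; both functions are pure-Python stand-ins, not the repository's implementation.

```python
import math

def ce_to_target(logits, t):
    """Cross-entropy toward target class t: -log softmax(logits)[t]."""
    m = max(logits)  # subtract max for numerical stability
    log_z = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_z - logits[t]

def logit_loss(logits, t):
    """Logit attack objective (Zhao et al., 2021): negative target logit."""
    return -logits[t]

# Saturated case: the target logit already dominates, so cross-entropy
# flattens toward 0 while the logit loss keeps a constant slope.
print(ce_to_target([10.0, 0.0], 0))  # tiny (~4.5e-05): gradient nearly vanishes
print(logit_loss([10.0, 0.0], 0))    # -10.0: slope in logits[t] is unchanged
```

Minimizing either loss pushes the target class up, but only the logit loss avoids the vanishing-gradient regime over long iterative attacks.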
The columns group as: CNNs (ResNet-50, VGG-16, Mobilenet_v2, Inception_v3), ViTs (ViT, PiT, Visformer, Swin), and Defenses (AT, HGD, RS, NRP, DiffPure).

| Category | Attacks | ResNet-50 | VGG-16 | Mobilenet_v2 | Inception_v3 | ViT | PiT | Visformer | Swin | AT | HGD | RS | NRP | DiffPure |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Gradient-based | FGSM | 49.2 | 54.6 | 48.2 | 32.8 | 11.9 | 14.1 | 18.7 | 19.4 | 40.0 | 8.8 | 26.8 | 53.1 | 15.8 |
| | I-FGSM | 99.6 | 36.5 | 33.6 | 17.7 | 7.5 | 11.7 | 14.4 | 15.7 | 39.7 | 5.9 | 26.2 | 42.3 | 8.7 |
| | MI-FGSM | 99.9 | 57.9 | 53.4 | 37.4 | 14.5 | 22.5 | 26.2 | 28.1 | 40.6 | 17.9 | 27.4 | 58.5 | 13.3 |
| | NI-FGSM | 100.0 | 66.5 | 59.3 | 38.9 | 15.4 | 23.4 | 30.1 | 29.7 | 40.8 | 18.2 | 27.6 | 57.7 | 12.7 |
| | PI-FGSM | 98.8 | 74.9 | 59.2 | 57.9 | 16.3 | 16.9 | 25.3 | 20.6 | 42.1 | 35.3 | 35.0 | 60.1 | 38.3 |
| | VMI-FGSM | 99.6 | 70.8 | 66.9 | 57.3 | 31.9 | 47.0 | 54.5 | 53.5 | 41.9 | 47.5 | 30.5 | 61.3 | 22.9 |
| | VNI-FGSM | 99.9 | 76.9 | 72.9 | 60.8 | 30.0 | 47.0 | 58.5 | 55.5 | 41.9 | 48.4 | 29.8 | 61.7 | 20.9 |
| | EMI-FGSM | 100.0 | 84.7 | 81.7 | 64.3 | 25.2 | 43.1 | 56.2 | 53.1 | 43.6 | 42.5 | 30.3 | 67.3 | 19.1 |
| | AI-FGTM | 99.7 | 52.9 | 49.0 | 33.0 | 13.2 | 20.5 | 26.1 | 25.8 | 40.5 | 16.4 | 27.3 | 55.1 | 11.9 |
| | I-FGS²M | 99.8 | 45.6 | 41.5 | 24.6 | 9.5 | 14.3 | 19.1 | 20.4 | 39.8 | 9.3 | 26.7 | 47.1 | 9.9 |
| | SMI-FGRM | 99.2 | 75.1 | 73.5 | 67.0 | 25.9 | 40.9 | 51.9 | 48.9 | 46.4 | 47.5 | 34.9 | 65.8 | 23.9 |
| | VA-I-FGSM | 99.4 | 53.2 | 46.4 | 26.8 | 9.6 | 12.2 | 15.7 | 17.7 | 40.3 | 7.6 | 26.8 | 53.8 | 9.9 |
| | RAP | 99.9 | 87.8 | 85.9 | 63.0 | 23.8 | 40.8 | 54.1 | 53.7 | 42.9 | 26.5 | 32.0 | 65.6 | 21.0 |
| | PC-I-FGSM | 99.8 | 59.1 | 53.8 | 36.7 | 15.2 | 21.4 | 26.5 | 28.4 | 40.7 | 17.9 | 27.8 | 57.0 | 14.0 |
| | IE-FGSM | 100.0 | 67.3 | 63.0 | 43.8 | 17.4 | 29.0 | 37.2 | 36.7 | 40.6 | 24.6 | 27.7 | 59.1 | 13.3 |
| | GRA | 97.5 | 84.6 | 83.5 | 81.9 | 46.3 | 61.9 | 72.9 | 70.9 | 49.7 | 73.6 | 43.1 | 75.4 | 41.3 |
| | GNP | 100.0 | 68.9 | 63.8 | 45.8 | 16.9 | 28.9 | 38.3 | 36.4 | 41.2 | 25.5 | 27.8 | 59.3 | 14.1 |
| | MIG | 100.0 | 69.0 | 63.7 | 52.1 | 24.8 | 36.5 | 48.4 | 44.3 | 41.2 | 35.5 | 28.7 | 60.6 | 18.6 |
| | DTA | 100.0 | 64.2 | 58.8 | 40.3 | 16.1 | 26.3 | 35.0 | 33.6 | 40.7 | 22.3 | 27.5 | 57.9 | 13.4 |
| | PGN | 98.7 | 88.7 | 86.3 | 85.5 | 50.1 | 68.7 | 76.9 | 73.6 | 49.0 | 75.2 | 42.3 | 78.0 | 44.5 |
| | MEF | 99.3 | 95.3 | 94.1 | 91.4 | 68.4 | 80.8 | 88.7 | 88.3 | 47.6 | 86.7 | 41.2 | 81.6 | 44.3 |
| | ANDA | 99.9 | 84.5 | 81.6 | 72.4 | 41.4 | 60.7 | 71.3 | 66.8 | 43.1 | 65.5 | 30.9 | 65.1 | 28.3 |
| | GI-FGSM | 100.0 | 72.6 | 65.4 | 49.1 | 18.9 | 29.5 | 37.5 | 36.3 | 40.8 | 25.1 | 28.1 | 61.7 | 16.6 |
| | FGSRA | 97.9 | 89.7 | 89.6 | 86.2 | 45.7 | 67.8 | 75.9 | 75.9 | 46.6 | 72.5 | 36.3 | 77.4 | 32.2 |
| | MUMODIG | 98.6 | 87.4 | 85.0 | 79.9 | 45.4 | 65.6 | 75.8 | 71.7 | 43.4 | 69.7 | 31.7 | 67.0 | 27.0 |
| | GAA | 95.5 | 84.6 | 82.5 | 81.8 | 43.3 | 59.9 | 70.7 | 66.3 | 49.3 | 71.1 | 43.5 | 75.9 | 40.9 |
| | Foolmix | 99.1 | 74.2 | 70.0 | 61.7 | 30.8 | 43.6 | 55.0 | 51.8 | 42.0 | 46.3 | 30.0 | 62.9 | 22.5 |
| Input transformation-based | DIM | 98.7 | 71.0 | 66.2 | 57.1 | 27.5 | 39.7 | 49.5 | 45.3 | 41.4 | 42.0 | 28.8 | 58.3 | 19.9 |
| | TIM | 97.8 | 57.9 | 46.9 | 38.9 | 15.3 | 16.5 | 23.2 | 19.0 | 41.7 | 25.4 | 32.5 | 56.6 | 32.3 |
| | SIM | 100.0 | 70.2 | 64.4 | 52.1 | 24.5 | 36.9 | 48.1 | 43.5 | 40.8 | 35.6 | 28.3 | 60.3 | 17.7 |
| | DEM | 99.9 | 93.1 | 89.3 | 91.0 | 48.2 | 63.5 | 78.2 | 71.4 | 45.5 | 74.8 | 36.0 | 59.3 | 35.6 |
| | Admix | 100.0 | 79.9 | 77.7 | 67.7 | 32.5 | 49.3 | 62.5 | 58.2 | 41.8 | 50.7 | 30.1 | 65.1 | 20.7 |
| | ATTA | 99.9 | 59.7 | 52.0 | 39.9 | 16.9 | 27.0 | 33.6 | 33.6 | 40.7 | 20.5 | 27.9 | 57.3 | 13.9 |
| | MaskBlock | 100.0 | 64.3 | 59.6 | 41.9 | 17.0 | 26.8 | 35.0 | 34.5 | 41.0 | 21.5 | 28.0 | 58.9 | 14.4 |
| | SSM | 98.0 | 88.8 | 86.4 | 83.1 | 50.7 | 68.3 | 76.3 | 75.7 | 46.0 | 72.9 | 36.5 | 75.3 | 35.0 |
| | AITL | 94.5 | 87.2 | 84.9 | 82.8 | 52.7 | 68.0 | 75.3 | 72.7 | 45.8 | 77.2 | 36.9 | 67.6 | 40.4 |
| | PAM | 100.0 | 81.3 | 77.0 | 73.3 | 27.1 | 42.1 | 58.2 | 53.2 | 43.0 | 51.2 | 30.2 | 69.0 | 19.9 |
| | LPM | 98.6 | 61.5 | 53.2 | 39.2 | 15.2 | 24.3 | 31.5 | 31.8 | 40.3 | 19.6 | 27.7 | 58.5 | 14.3 |
| | SIA | 99.4 | 94.3 | 92.7 | 83.1 | 50.0 | 76.7 | 86.5 | 83.6 | 44.0 | 79.5 | 32.1 | 74.9 | 27.8 |
| | STM | 97.5 | 90.2 | 88.9 | 88.3 | 56.7 | 73.0 | 80.3 | 77.5 | 49.0 | 82.7 | 41.8 | 79.3 | 43.6 |
| | USMM | 99.7 | 90.1 | 88.4 | 78.8 | 37.5 | 57.5 | 72.4 | 70.1 | 44.0 | 62.8 | 32.2 | 74.8 | 22.3 |
| | DeCowA | 92.6 | 98.7 | 97.7 | 94.2 | 59.8 | 63.9 | 82.6 | 75.6 | 50.5 | 90.1 | 43.1 | 77.5 | 41.3 |
| | L2T | 99.5 | 95.1 | 94.2 | 90.7 | 63.2 | 80.6 | 88.0 | 86.4 | 48.6 | 85.5 | 39.6 | 80.2 | 42.1 |
| | BSR | 99.0 | 96.8 | 95.6 | 90.8 | 54.3 | 79.9 | 89.3 | 84.7 | 44.8 | 84.7 | 33.2 | 76.3 | 32.4 |
| | OPS | 99.5 | 98.1 | 97.8 | 98.2 | 88.8 | 93.8 | 96.7 | 95.7 | 57.8 | 96.9 | 64.6 | 90.7 | 83.5 |
| Advanced objective | TAP | 99.9 | 93.4 | 93.8 | 64.8 | 15.2 | 25.4 | 44.3 | 40.2 | 41.7 | 34.5 | 27.3 | 65.8 | 15.5 |
| | ILA | 98.7 | 68.7 | 64.6 | 33.4 | 12.8 | 23.5 | 31.9 | 32.1 | 40.0 | 16.2 | 27.1 | 52.5 | 11.3 |
| | ATA | 99.8 | 35.8 | 35.1 | 19.2 | 7.6 | 11.9 | 14.9 | 15.0 | 39.6 | 6.2 | 26.4 | 43.0 | 9.4 |
| | YAILA | 74.8 | 81.7 | 73.6 | 38.4 | 13.2 | 17.2 | 30.5 | 28.0 | 40.1 | 22.9 | 26.5 | 51.2 | 9.5 |
| | FIA | 98.0 | 71.2 | 65.8 | 40.2 | 12.6 | 19.9 | 33.2 | 33.1 | 41.1 | 19.2 | 28.0 | 58.4 | 12.1 |
| | IR | 100.0 | 59.4 | 53.4 | 36.2 | 14.8 | 22.9 | 27.7 | 28.1 | 40.6 | 18.6 | 27.3 | 57.5 | 13.2 |
| | TRAP | 99.1 | 96.1 | 93.5 | 84.7 | 33.6 | 47.7 | 64.1 | 58.6 | 41.1 | 64.3 | 27.2 | 65.3 | 16.7 |
| | TAIG | 99.8 | 48.3 | 46.4 | 30.4 | 12.1 | 20.8 | 26.1 | 26.0 | 39.8 | 14.6 | 26.6 | 47.4 | 9.7 |
| | FMAA | 98.3 | 80.1 | 77.3 | 55.8 | 18.0 | 33.0 | 50.8 | 52.7 | 42.5 | 34.4 | 29.4 | 62.2 | 13.4 |
| | NAA | 84.7 | 61.6 | 58.6 | 38.2 | 19.5 | 29.6 | 36.6 | 39.0 | 40.3 | 25.8 | 28.0 | 52.7 | 12.6 |
| | RPA | 95.1 | 85.8 | 86.0 | 76.3 | 35.9 | 47.5 | 63.4 | 62.1 | 45.0 | 58.3 | 33.2 | 70.8 | 23.6 |
| | Fuzziness_Tuned | 100.0 | 57.3 | 51.8 | 33.7 | 14.1 | 21.7 | 26.7 | 26.6 | 40.3 | 16.4 | 27.1 | 57.1 | 12.5 |
| | DANAA | 96.8 | 86.3 | 82.5 | 69.9 | 23.4 | 36.0 | 54.3 | 51.6 | 43.4 | 51.4 | 31.0 | 71.8 | 17.6 |
| | ILPD | 97.9 | 87.4 | 85.8 | 72.2 | 44.6 | 60.7 | 69.3 | 67.6 | 43.8 | 56.8 | 32.5 | 66.3 | 28.1 |
| | BFA | 98.4 | 94.1 | 92.3 | 84.8 | 52.7 | 73.2 | 85.8 | 82.3 | 44.6 | 78.0 | 33.0 | 79.7 | 26.0 |
| | P2FA | 100.0 | 97.9 | 97.6 | 84.0 | 43.7 | 64.9 | 86.0 | 83.2 | 43.9 | 66.3 | 32.2 | 79.5 | 22.2 |
| Model-related | SGM | 100.0 | 73.2 | 75.7 | 45.9 | 18.9 | 33.5 | 41.1 | 41.9 | 41.3 | 25.3 | 28.6 | 64.3 | 15.0 |
| | LinBP | 100.0 | 87.7 | 81.2 | 47.5 | 18.9 | 22.7 | 44.0 | 37.2 | 41.3 | 19.9 | 27.9 | 59.9 | 17.1 |
| | LLTA | 94.9 | 96.6 | 95.1 | 83.8 | 45.7 | 57.1 | 74.1 | 73.8 | 47.7 | 74.3 | 39.5 | 82.5 | 28.2 |
| | PNA-PatchOut | 47.3 | 69.3 | 62.7 | 53.9 | 95.3 | 55.8 | 58.7 | 64.2 | 42.9 | 37.2 | 32.7 | 51.3 | 25.4 |
| | IAA | 100.0 | 81.7 | 74.7 | 55.2 | 19.5 | 30.1 | 41.9 | 40.7 | 42.1 | 26.8 | 29.6 | 66.7 | 15.9 |
| | SAPR | 49.2 | 74.5 | 69.1 | 59.1 | 99.9 | 57.2 | 63.2 | 68.1 | 43.4 | 37.8 | 32.7 | 54.8 | 24.0 |
| | SETR | 48.9 | 82.7 | 81.2 | 63.8 | 66.3 | 41.5 | 51.1 | 66.6 | 45.9 | 38.6 | 34.5 | 62.0 | 26.1 |
| | ATA_ViT | 24.7 | 86.7 | 39.9 | 58.2 | 25.7 | 14.5 | 15.2 | 9.9 | 52.7 | 52.1 | 37.4 | 89.3 | 45.4 |
| | DRA | 94.3 | 96.8 | 97.8 | 95.0 | 75.3 | 77.7 | 88.7 | 86.4 | 70.8 | 93.2 | 85.7 | 86.4 | 75.3 |
| | MTA | 68.7 | 89.8 | 82.5 | 68.8 | 20.4 | 24.4 | 37.3 | 33.6 | 40.4 | 48.9 | 26.4 | 55.9 | 14.5 |
| | MUP | 98.8 | 76.5 | 71.0 | 53.3 | 18.5 | 32.3 | 41.3 | 40.0 | 41.3 | 27.8 | 28.4 | 61.6 | 15.3 |
| | TGR | 57.9 | 80.3 | 76.1 | 61.4 | 99.8 | 61.3 | 69.2 | 75.1 | 45.4 | 47.1 | 37.0 | 58.0 | 34.0 |
| | DSM | 82.5 | 96.5 | 93.0 | 74.9 | 28.4 | 38.4 | 59.0 | 55.8 | 45.0 | 58.6 | 33.7 | 71.6 | 20.4 |
| | DHF | 99.9 | 74.4 | 70.1 | 51.8 | 25.3 | 40.8 | 48.5 | 47.4 | 41.4 | 35.6 | 29.0 | 62.2 | 16.1 |
| | BPA | 96.2 | 96.9 | 94.5 | 84.9 | 41.4 | 50.2 | 75.1 | 65.3 | 43.9 | 76.3 | 33.2 | 76.9 | 29.4 |
| | AGS | 74.1 | 86.5 | 85.2 | 82.6 | 28.2 | 24.9 | 45.2 | 35.7 | 45.3 | 67.1 | 34.0 | 56.7 | 38.0 |
| | MetaSSA | 98.2 | 85.3 | 74.7 | 79.0 | 37.5 | 46.7 | 63.6 | 52.3 | 45.3 | 65.0 | 35.8 | 68.7 | 36.8 |
| | VDC | 51.6 | 74.2 | 70.7 | 59.1 | 100.0 | 64.0 | 68.3 | 73.8 | 44.7 | 41.6 | 35.1 | 57.9 | 29.6 |
| | MA | 96.0 | 95.8 | 92.4 | 83.5 | 38.4 | 53.2 | 74.3 | 70.9 | 44.1 | 80.9 | 34.3 | 72.7 | 23.0 |
| | ATT | 61.1 | 79.7 | 76.4 | 63.0 | 100.0 | 68.3 | 75.0 | 81.6 | 45.2 | 49.7 | 36.1 | 60.7 | 33.4 |
| | FPR | 56.6 | 83.6 | 81.4 | 70.9 | 99.5 | 54.6 | 66.9 | 71.0 | 46.4 | 47.0 | 35.4 | 63.0 | 30.9 |
| | AWT | 98.6 | 91.5 | 90.4 | 86.5 | 60.0 | 75.7 | 81.0 | 81.3 | 50.5 | 77.4 | 44.9 | 80.5 | 49.2 |
| | FAUG | 95.1 | 69.5 | 66.4 | 56.5 | 26.1 | 38.9 | 47.6 | 45.6 | 42.1 | 38.2 | 30.2 | 62.8 | 21.0 |
| | ANA | 78.5 | 81.2 | 76.4 | 60.2 | 22.3 | 30.1 | 41.3 | 37.9 | 42.6 | 43.5 | 29.9 | 64.9 | 16.3 |
| Ensemble-based | ENS | 99.4 | 100.0 | 99.9 | 99.5 | 32.2 | 52.6 | 67.5 | 65.3 | 43.4 | 63.0 | 31.4 | 93.8 | 20.6 |
| | Ghost | 99.3 | 77.4 | 70.6 | 51.6 | 18.2 | 29.3 | 41.5 | 9.9 | 41.6 | 26.3 | 28.6 | 61.4 | 14.5 |
| | SVRE | 98.8 | 100.0 | 100.0 | 99.6 | 31.5 | 49.7 | 68.1 | 67.4 | 43.8 | 58.6 | 31.1 | 95.2 | 17.4 |
| | LGV | 91.4 | 97.2 | 95.8 | 80.6 | 28.0 | 35.8 | 58.8 | 55.9 | 44.6 | 58.2 | 32.7 | 67.2 | 18.0 |
| | MBA | 99.8 | 99.9 | 99.5 | 96.5 | 53.7 | 60.7 | 87.3 | 82.1 | 48.6 | 90.8 | 36.7 | 82.3 | 24.8 |
| | AdaEA | 99.2 | 100.0 | 99.9 | 99.3 | 32.1 | 51.4 | 67.6 | 65.1 | 43.5 | 66.3 | 31.3 | 95.5 | 20.7 |
| | CWA | 87.1 | 99.9 | 100.0 | 100.0 | 23.9 | 35.7 | 49.3 | 49.9 | 43.9 | 42.7 | 30.7 | 96.0 | 16.9 |
| | SMER | 96.4 | 95.0 | 96.6 | 96.1 | 33.0 | 51.8 | 67.7 | 68.7 | 43.0 | 62.0 | 30.6 | 90.7 | 17.4 |
| Generation-based | CDTP | 97.1 | 99.2 | 98.4 | 95.9 | 38.3 | 44.6 | 91.6 | 68.7 | 42.1 | 94.8 | 31.3 | 70.8 | 32.5 |
| | LTP | 99.6 | 99.8 | 98.4 | 98.8 | 53.5 | 69.7 | 98.4 | 94.1 | 41.1 | 97.3 | 29.4 | 69.8 | 26.0 |
| | ADA | 71.7 | 89.4 | 80.6 | 75.9 | 18.1 | 20.4 | 50.9 | 39.5 | 38.4 | 1.4 | 27.9 | 32.2 | 26.3 |
| | GE-ADVGAN | | | | | | | | | | | | | |
| | DiffAttack | 92.3 | 51.3 | 51.3 | 47.1 | 30.7 | 44.3 | 44.5 | 47.1 | 45.7 | 39.0 | 39.7 | 59.2 | 34.3 |
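The percentages in these tables are attack success rates against each target model or defense. As a reference point, a minimal sketch of how such rates are computed (hypothetical prediction lists; untargeted success means the adversarial example changes the model's label, targeted success means it lands on the chosen class):

```python
def attack_success_rate(adv_preds, true_labels):
    """Untargeted ASR (%): fraction of adversarial examples misclassified."""
    hits = sum(p != y for p, y in zip(adv_preds, true_labels))
    return 100.0 * hits / len(true_labels)

def targeted_success_rate(adv_preds, targets):
    """Targeted ASR (%): fraction classified as the chosen target class."""
    hits = sum(p == t for p, t in zip(adv_preds, targets))
    return 100.0 * hits / len(targets)

# 3 of 4 adversarial examples flip the label -> 75.0% untargeted ASR
print(attack_success_rate([1, 2, 3, 3], [1, 1, 1, 1]))  # 75.0
```

The same untargeted score is reported for the surrogate itself (the near-100% diagonal entries) and for every transfer target.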
The columns group as: CNNs (ResNet-50, VGG-16, Mobilenet_v2, Inception_v3), ViTs (ViT, PiT, Visformer, Swin), and Defenses (AT, HGD, RS, NRP, DiffPure).

| Category | Attacks | ResNet-50 | VGG-16 | Mobilenet_v2 | Inception_v3 | ViT | PiT | Visformer | Swin | AT | HGD | RS | NRP | DiffPure |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Input transformation-based | ODI | | | | | | | | | | | | | |
| | SU | 100.0 | 5.4 | 7.1 | 2.8 | 0.1 | 0.8 | 12.0 | 3.0 | 0.1 | 4.5 | 0.0 | 0.0 | 0.0 |
| | IDAA | 71.7 | 5.0 | 4.8 | 2.1 | 0.3 | 2.1 | 5.0 | 3.6 | 0.1 | 0.1 | 0.0 | 0.2 | 0.0 |
| Advanced objective | AA | | | | | | | | | | | | | |
| | PoTrip | 99.8 | 4.5 | 5.6 | 6.1 | 0.9 | 4.1 | 10.8 | 4.2 | 0.0 | 6.0 | 0.0 | 0.0 | 0.0 |
| | Logit | 99.8 | 2.9 | 2.5 | 1.8 | 0.6 | 2.2 | 8.6 | 2.5 | 0.0 | 3.3 | 0.0 | 0.0 | 0.0 |
| | Logit-Margin | 99.9 | 3.1 | 2.8 | 1.7 | 0.2 | 2.5 | 7.7 | 3.4 | 0.1 | 2.3 | 0.0 | 0.1 | 0.0 |
| | CFM | 99.1 | 54.2 | 62.9 | 46.9 | 21.6 | 38.1 | 63.7 | 46.0 | 0.2 | 54.2 | 0.0 | 0.5 | 0.8 |
| | FFT | 98.7 | 8.5 | 9.4 | 2.4 | 0.5 | 4.9 | 14.4 | 7.5 | 0.1 | 5.6 | 0.0 | 0.0 | 0.0 |
| Generation-based | TTP | | | | | | | | | | | | | |
| | M3D | | | | | | | | | | | | | |
| Ensemble-based | SASD_WS | 90.5 | 82.2 | 78.9 | 66.8 | 18.3 | 27.2 | 64.5 | 41.8 | 0.2 | 76.3 | 0.0 | 0.9 | 0.7 |
- Xiaosen Wang
- Zeyuan Yin
- Zeliang Zhang
- Kunyu Wang
- Zhijin Ge
- Yuyang Luo