# FairDARTS

**Repository Path**: mabeisi/FairDARTS

## Basic Information

- **Project Name**: FairDARTS
- **Description**: Fair DARTS: Eliminating Unfair Advantages in Differentiable Architecture Search
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-03-23
- **Last Updated**: 2021-09-07

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# [ECCV'20] Fair DARTS: Eliminating Unfair Advantages in Differentiable Architecture Search

This is the official implementation of the [FairDARTS paper](https://arxiv.org/abs/1911.12126.pdf).

Differentiable Architecture Search (DARTS) is now a widely disseminated weight-sharing neural architecture search method. However, two fundamental weaknesses remain untackled. First, we observe that the well-known aggregation of skip connections during optimization is caused by an unfair advantage in an exclusive competition. Second, there is a non-negligible incongruence when discretizing continuous architectural weights to a one-hot representation. For these two reasons, DARTS delivers a biased solution that might not even be suboptimal. In this paper, we present a novel approach to cure both frailties. Specifically, since unfair advantages in a purely exclusive competition easily induce a monopoly, we relax the choice of operations to be collaborative, where each operation has an equal opportunity to develop its strength. We thus call our method Fair DARTS. Moreover, we propose a zero-one loss to directly reduce the discretization gap. Experiments are performed on two mainstream search spaces, in which we achieve new state-of-the-art networks on ImageNet.

## User Guide

### Prerequisites

> Python 3

`pip install -r requirements.txt`

The `fairdarts` folder includes our search, train and evaluation code.
The `darts` folder consists of random and noise experiments on the original DARTS.

### Run Search

`python train_search.py --aux_loss_weight 10 --learning_rate 0.005 --batch_size 128 --parse_method threshold_sparse --save 'EXP-lr_0005_alw_10'`

> The default batch size is 128.

### Single Model Training

`python train.py --auxiliary --cutout --arch FairDARTS_a --parse_method threshold --batch_size 128 --epoch 600`

### Single Model Evaluation

```bash
python evaluate_model.py --arch FairDARTS_b --model_path ../best_model/FairDARTS-b.tar --parse_method threshold
```

## Searched Architectures by FairDARTS

**Note that we select architectures by applying a threshold σ, keeping at most two input edges per node (|edge| <= 2).**

> FairDARTS_a:
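The zero-one loss mentioned in the abstract can be illustrated with a minimal sketch. Note this is an approximation written from the paper's description, not the repository's exact implementation: architecture weights are relaxed with independent sigmoids (rather than a softmax), and an auxiliary term pushes each sigmoid value toward 0 or 1, shrinking the discretization gap. The function name `zero_one_loss` and its `weight` parameter (intended to mirror `--aux_loss_weight`) are placeholders.

```python
import torch

def zero_one_loss(alpha: torch.Tensor, weight: float = 10.0) -> torch.Tensor:
    """Sketch of Fair DARTS' zero-one auxiliary loss (assumed form).

    Each architecture weight is squashed independently by a sigmoid;
    the loss is minimized when sigmoid outputs sit at 0 or 1, so adding
    it to the training objective drives weights away from the ambiguous
    0.5 region before discretization.
    """
    s = torch.sigmoid(alpha)
    # (s - 0.5)^2 peaks at 0.25 when s is 0 or 1; the minus sign turns
    # "reward polarized values" into a loss to be minimized.
    return -weight * torch.mean((s - 0.5) ** 2)
```

Minimizing this term alongside the task loss makes the continuous architecture encoding closer to its eventual one-hot (thresholded) form, which is the discretization gap the paper targets.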