DARTS: Differentiable Architecture Search

This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.


1 💡Note

1.1 What problem does the paper try to solve?

  • Dominant approaches treat architecture search as black-box optimization over a discrete domain, which requires a large number of architecture evaluations.
  • The goal is efficient architecture search.

1.2 Is this a new problem?

  • Numerous prior studies have explored various approaches based on reinforcement learning (RL), evolution, or Bayesian optimization.
  • But DARTS approaches the problem from a different angle.

1.3 What scientific hypothesis does the paper aim to verify?

Compared with previous methods, differentiable architecture search based on bilevel optimization is far more efficient, and the architectures it finds are competitive and transferable.

1.4 What related work exists? How can it be categorized? Who are the researchers worth following on this topic?

  1. non-differentiable search techniques: reinforcement learning (RL), evolution, or Bayesian optimization.
    • Low efficiency
  2. searching architectures within a continuous domain
    • These methods seek to fine-tune a specific aspect of an architecture, rather than learning high-performance building blocks with complex graph topologies within a rich search space.

1.5 🔴 What is the key to the solution proposed in the paper?

[!Note] An overview of DARTS (Figure 1 of the paper): (a) Operations on the edges are initially unknown.

  1. Continuous relaxation of the search space by placing a mixture of candidate operations on each edge.

  2. Joint optimization of the mixing probabilities and the network weights by solving a bilevel optimization problem.

  3. Inducing the final architecture from the learned mixing probabilities.


1.5.1 Search Space

DARTS searches for a computation cell to use as the building block of the final architecture.

  1. A cell is a directed acyclic graph consisting of an ordered sequence of N nodes.
  2. Each node $x^{(i)}$: a latent representation.
    • "Latent representation" refers to a representation used in machine learning and deep learning to capture hidden features or abstractions of the data (e.g., a feature map in a convolutional network).
  3. Each directed edge $(i, j)$: an operation $o^{(i,j)}$ that transforms $x^{(i)}$.
  4. Each intermediate node is computed based on all of its predecessors: $x^{(j)} = \sum_{i < j} o^{(i,j)}(x^{(i)})$.
  5. A special zero operation is also included to indicate a lack of connection between two nodes.
  6. Therefore, learning the cell reduces to learning the operations on its edges (a minimal sketch follows below).
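A minimal PyTorch sketch of this cell structure (the module layout and the placeholder convolution standing in for $o^{(i,j)}$ are illustrative assumptions, not the paper's implementation):

```python
import torch
import torch.nn as nn

class CellSketch(nn.Module):
    """Sketch of a DARTS-style cell: a DAG over N nodes in which every
    intermediate node sums one operation applied to each predecessor."""

    def __init__(self, num_nodes, channels):
        super().__init__()
        self.num_nodes = num_nodes
        # One placeholder operation per directed edge (i, j), i < j; in DARTS
        # this slot holds the (mixed or selected) candidate operation.
        self.edge_ops = nn.ModuleDict({
            f"{i}->{j}": nn.Conv2d(channels, channels, 3, padding=1)
            for j in range(2, num_nodes)
            for i in range(j)
        })

    def forward(self, s0, s1):
        # Nodes 0 and 1 are the cell inputs (outputs of the two previous cells).
        states = [s0, s1]
        for j in range(2, self.num_nodes):
            # x^(j) = sum_{i<j} o^(i,j)(x^(i))
            states.append(sum(self.edge_ops[f"{i}->{j}"](states[i])
                              for i in range(j)))
        # The cell output concatenates the intermediate nodes along channels.
        return torch.cat(states[2:], dim=1)
```

During the search phase, each `edge_ops` entry would be replaced by the mixed operation described in the next subsection.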

1.5.2 Continuous Relaxation And Optimization

  1. Relaxation: relax the categorical choice of a particular operation into a softmax over all possible operations: $\bar{o}^{(i,j)}(x) = \sum_{o \in \mathcal{O}} \frac{\exp(\alpha_o^{(i,j)})}{\sum_{o' \in \mathcal{O}} \exp(\alpha_{o'}^{(i,j)})} \, o(x)$
    • $\alpha^{(i,j)}$ (a vector of dimension $|\mathcal{O}|$): the operation mixing weights for a pair of nodes $(i, j)$
    • Architecture search thus reduces to learning a set of continuous variables $\alpha = \{\alpha^{(i,j)}\}$
  2. [[Bilevel optimization]]: After relaxation, our goal is to jointly learn the architecture $\alpha$ and the weights $w$ within all the mixed operations: $\min_\alpha \mathcal{L}_{val}(w^*(\alpha), \alpha)$ s.t. $w^*(\alpha) = \operatorname{argmin}_w \mathcal{L}_{train}(w, \alpha)$
    • Optimize the validation loss with respect to $\alpha$, but using gradient descent
    • The architecture $\alpha$ can be viewed as a special type of hyperparameter (a sketch of the mixed operation follows below)
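A minimal sketch of the mixed operation on one edge (assuming PyTorch; the candidate set is abbreviated and the class name is hypothetical):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOpSketch(nn.Module):
    """One edge of the relaxed search space: a softmax-weighted mixture
    of candidate operations, o_bar(x) = sum_o softmax(alpha)_o * o(x)."""

    def __init__(self, channels):
        super().__init__()
        # Abbreviated candidate set for illustration (the paper uses more ops).
        self.candidates = nn.ModuleList([
            nn.Identity(),                                # skip connection
            nn.Conv2d(channels, channels, 3, padding=1),  # stand-in convolution
            nn.AvgPool2d(3, stride=1, padding=1),         # 3x3 average pooling
        ])

    def forward(self, x, alpha_edge):
        # alpha_edge: unnormalized mixing weights for this edge, shape (|O|,).
        weights = F.softmax(alpha_edge, dim=-1)
        # The mixture is differentiable w.r.t. alpha and the operation weights.
        return sum(w * op(x) for w, op in zip(weights, self.candidates))

# Toy usage: alpha is an ordinary learnable tensor, optimized on the validation
# loss, while the operation weights follow the training loss (bilevel problem).
edge = MixedOpSketch(channels=8)
alpha_edge = torch.zeros(3, requires_grad=True)
out = edge(torch.randn(2, 8, 16, 16), alpha_edge)
```

Because the mixture is a smooth function of $\alpha$, the validation loss can be backpropagated into $\alpha$ just like into any other parameter.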

1.5.3 Approximate Architecture Gradient

  1. One-step approximation (similar ideas appear in Nesterov momentum and MAML): evaluating the architecture gradient exactly is prohibitive due to the expensive inner optimization, so $w^*(\alpha)$ is approximated by a single training step: $\nabla_\alpha \mathcal{L}_{val}(w^*(\alpha), \alpha) \approx \nabla_\alpha \mathcal{L}_{val}(w - \xi \nabla_w \mathcal{L}_{train}(w, \alpha), \alpha)$
    • Letting $w$ adapt by one virtual step helps the procedure converge to a better local optimum
  2. Chain rule: applying the chain rule to the approximate objective gives the architecture gradient.
    1. Let: $w' = w - \xi \nabla_w \mathcal{L}_{train}(w, \alpha)$ (the weights after one virtual step)
    2. So: $\frac{d}{d\alpha} \mathcal{L}_{val}(w'(\alpha), \alpha) = \nabla_\alpha \mathcal{L}_{val}(w', \alpha) + (\nabla_\alpha w')^{\top} \nabla_{w'} \mathcal{L}_{val}(w', \alpha)$
    3. Where: $\nabla_\alpha w' = -\xi \nabla^2_{\alpha, w} \mathcal{L}_{train}(w, \alpha)$
    4. So: the architecture gradient is $\nabla_\alpha \mathcal{L}_{val}(w', \alpha) - \xi \nabla^2_{\alpha, w} \mathcal{L}_{train}(w, \alpha) \, \nabla_{w'} \mathcal{L}_{val}(w', \alpha)$
  3. Finite difference approximation: the expression above contains an expensive matrix-vector product in its second term, which can be approximated with finite differences.
    • Central difference: $\nabla^2_{\alpha, w} \mathcal{L}_{train}(w, \alpha) \, v \approx \frac{\nabla_\alpha \mathcal{L}_{train}(w^{+}, \alpha) - \nabla_\alpha \mathcal{L}_{train}(w^{-}, \alpha)}{2 \epsilon}$
    • Taylor expansion:
      1. We have: $\nabla_\alpha \mathcal{L}_{train}(w + \epsilon v, \alpha) = \nabla_\alpha \mathcal{L}_{train}(w, \alpha) + \epsilon \, \nabla^2_{\alpha, w} \mathcal{L}_{train}(w, \alpha) \, v + O(\epsilon^2)$
      2. Then: $\nabla_\alpha \mathcal{L}_{train}(w - \epsilon v, \alpha) = \nabla_\alpha \mathcal{L}_{train}(w, \alpha) - \epsilon \, \nabla^2_{\alpha, w} \mathcal{L}_{train}(w, \alpha) \, v + O(\epsilon^2)$
      3. Subtracting one equation from the other and dividing by $2\epsilon$ yields the central difference formula above
    • Where: $v = \nabla_{w'} \mathcal{L}_{val}(w', \alpha)$, $w^{\pm} = w \pm \epsilon v$, and $\epsilon$ is a small scalar (the paper uses $\epsilon = 0.01 / \lVert v \rVert_2$)
  4. First-order approximation: setting $\xi = 0$ makes the second-order term disappear, and the architecture gradient reduces to $\nabla_\alpha \mathcal{L}_{val}(w, \alpha)$; this is faster but empirically performs worse (a toy sketch of both variants follows below).
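A self-contained toy sketch of the update above (the quadratic stand-in losses and tensor shapes are assumptions made purely for illustration; in DARTS they would be the network's training and validation losses):

```python
import torch

# Toy stand-ins: w and alpha are plain tensors and the "losses" are simple
# differentiable functions of both (not real network losses).
def loss_train(w, alpha):
    return ((w - alpha) ** 2).sum()

def loss_val(w, alpha):
    return ((w + 0.5 * alpha) ** 2).sum()

def approx_arch_grad(w, alpha, xi=0.1):
    """One-step approximation of d L_val(w*(alpha), alpha) / d alpha:
    grad_alpha L_val(w', alpha) - xi * Hessian_{alpha,w} L_train(w, alpha) @ v,
    with the Hessian-vector product estimated by central finite differences."""
    # Virtual step on the training loss: w' = w - xi * grad_w L_train(w, alpha)
    gw_train = torch.autograd.grad(loss_train(w, alpha), w)[0]
    w_prime = (w - xi * gw_train).detach().requires_grad_(True)

    # Gradients of the validation loss at (w', alpha)
    g_alpha_val, g_wprime_val = torch.autograd.grad(
        loss_val(w_prime, alpha), [alpha, w_prime])

    if xi == 0.0:
        # First-order variant: skip the second-order correction entirely.
        return g_alpha_val

    # Finite-difference estimate of grad^2_{alpha,w} L_train(w, alpha) @ v
    v = g_wprime_val
    eps = 0.01 / (v.norm() + 1e-12)  # the paper suggests eps = 0.01 / ||v||
    g_alpha_plus = torch.autograd.grad(loss_train(w + eps * v, alpha), alpha)[0]
    g_alpha_minus = torch.autograd.grad(loss_train(w - eps * v, alpha), alpha)[0]
    hessian_vec = (g_alpha_plus - g_alpha_minus) / (2 * eps)

    return g_alpha_val - xi * hessian_vec

w = torch.randn(4, requires_grad=True)
alpha = torch.randn(4, requires_grad=True)
print(approx_arch_grad(w, alpha, xi=0.1))  # second-order approximation
print(approx_arch_grad(w, alpha, xi=0.0))  # first-order approximation
```

The resulting vector is then used as the gradient for an ordinary optimizer step on $\alpha$.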

1.5.4 Deriving Discrete Architectures

  1. The strength of an operation $o$ on edge $(i, j)$ is defined as its softmax weight $\frac{\exp(\alpha_o^{(i,j)})}{\sum_{o' \in \mathcal{O}} \exp(\alpha_{o'}^{(i,j)})}$.
  2. Discretization: at the end of search, a discrete architecture is obtained by replacing each mixed operation with the most likely (strongest) non-zero operation.
  3. For each intermediate node, retain the top-k strongest incoming edges (k = 2 for convolutional cells, k = 1 for recurrent cells); a sketch of this rule follows below.
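A minimal sketch of the discretization rule (the genotype format, helper name, and toy $\alpha$ values are hypothetical; the released code differs in details):

```python
import torch

def derive_discrete_cell(alpha, op_names, k=2):
    """For each intermediate node, keep the top-k incoming edges ranked by the
    softmax weight of their strongest non-zero operation, and replace each
    retained mixed op with that argmax operation.

    alpha: dict mapping edge (i, j) -> tensor of logits over op_names.
    """
    zero_idx = op_names.index("zero")
    nodes = sorted({j for (_, j) in alpha})
    genotype = []
    for j in nodes:
        scored_edges = []
        for (i, jj), logits in alpha.items():
            if jj != j:
                continue
            strengths = torch.softmax(logits, dim=-1).clone()
            strengths[zero_idx] = -1.0  # exclude the zero op from ranking
            best_op = int(strengths.argmax())
            scored_edges.append((float(strengths[best_op]), i, best_op))
        # Retain the k strongest incoming edges for this node.
        for strength, i, best_op in sorted(scored_edges, reverse=True)[:k]:
            genotype.append((op_names[best_op], i, j))
    return genotype

# Hypothetical toy example: 3 candidate ops, one intermediate node (node 2).
op_names = ["zero", "skip_connect", "sep_conv_3x3"]
alpha = {(0, 2): torch.tensor([0.1, 1.2, 0.3]),
         (1, 2): torch.tensor([0.0, 0.2, 0.9])}
print(derive_discrete_cell(alpha, op_names, k=2))
```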

1.5.5 Complexity Analysis

The finite difference trick requires only two extra forward passes for the weights and two backward passes for $\alpha$, reducing the cost of the second-order term from $O(|\alpha| \, |w|)$ to $O(|\alpha| + |w|)$.

1.6 How are the experiments in the paper designed?

1.6.1 Architecture Search

Search for the cell architectures using DARTS, and determine the best cells based on their validation performance.

  • Operation sets (a candidate-table sketch follows after this list):
    1. Convolutional cells (ReLU-Conv-BN order; N = 7 nodes):
      1. 3 × 3 and 5 × 5 separable convolutions
      2. 3 × 3 and 5 × 5 dilated separable convolutions
      3. 3 × 3 max pooling
      4. 3 × 3 average pooling
      5. identity
      6. zero
    2. Recurrent cells (N = 12 nodes):
      1. linear transformations, each followed by one of the following activations:
        1. tanh
        2. relu
        3. sigmoid
      2. identity
      3. zero
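For concreteness, the convolutional candidate set can be written as a name-to-constructor table. This is a hedged sketch (the real DARTS separable convolution applies its block twice with ReLU-Conv-BN ordering, which is abbreviated here):

```python
import torch.nn as nn

class ZeroOp(nn.Module):
    """The special zero operation: outputs zeros, i.e. no connection."""
    def forward(self, x):
        return x * 0.0

def sep_conv(c, k):
    # Abbreviated depthwise-separable convolution (shortened for illustration).
    return nn.Sequential(
        nn.Conv2d(c, c, k, padding=k // 2, groups=c, bias=False),  # depthwise
        nn.Conv2d(c, c, 1, bias=False),                            # pointwise
        nn.BatchNorm2d(c),
    )

# Illustrative candidate table for the convolutional cell (stride-1 case).
CONV_CANDIDATES = {
    "sep_conv_3x3": lambda c: sep_conv(c, 3),
    "sep_conv_5x5": lambda c: sep_conv(c, 5),
    "dil_conv_3x3": lambda c: nn.Conv2d(c, c, 3, padding=2, dilation=2, bias=False),
    "dil_conv_5x5": lambda c: nn.Conv2d(c, c, 5, padding=4, dilation=2, bias=False),
    "max_pool_3x3": lambda c: nn.MaxPool2d(3, stride=1, padding=1),
    "avg_pool_3x3": lambda c: nn.AvgPool2d(3, stride=1, padding=1),
    "skip_connect": lambda c: nn.Identity(),
    "none":         lambda c: ZeroOp(),
}
```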

1.6.2 Architecture Evaluation

Use these cells to construct larger architectures, which we train from scratch and report their performance on the test set.

  • To evaluate the selected architecture, randomly initialize its weights (weights learned during the search process are discarded), train it from scratch, and report its performance on the test set.

1.6.3 Parameter Analysis

1.6.3.1 Alternative Optimization Strategies

  1. $\alpha$ and $w$ are jointly optimized over the union of the training and validation sets using coordinate descent
    • Even worse than random search
  2. $\alpha$ and $w$ are optimized simultaneously (without alternation) using SGD (a toy contrast sketch follows below)
    • Worse than DARTS
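A toy contrast between simultaneous single-level optimization and DARTS-style alternation (the quadratic loss is a hypothetical stand-in and the optimizer settings are illustrative only):

```python
import torch

w = torch.randn(4, requires_grad=True)      # stand-in network weights
alpha = torch.randn(4, requires_grad=True)  # stand-in architecture parameters

def loss(w, alpha, batch):
    # Hypothetical stand-in loss that couples w and alpha.
    return ((batch - w.mean()) ** 2).mean() + 0.1 * (alpha * w).sum() ** 2

train_batch, val_batch = torch.randn(8), torch.randn(8)

# Single-level baseline: optimize w and alpha simultaneously on pooled data.
joint_opt = torch.optim.SGD([w, alpha], lr=0.1)
joint_opt.zero_grad()
loss(w, alpha, torch.cat([train_batch, val_batch])).backward()
joint_opt.step()

# DARTS-style alternation: alpha follows the validation loss, w follows the
# training loss; each optimizer clears its own gradients before its step.
w_opt = torch.optim.SGD([w], lr=0.1)
alpha_opt = torch.optim.Adam([alpha], lr=3e-4)
alpha_opt.zero_grad()
loss(w, alpha, val_batch).backward()
alpha_opt.step()
w_opt.zero_grad()
loss(w, alpha, train_batch).backward()
w_opt.step()
```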

1.6.3.2 Search with Increased Depth

  1. The discrepancy in the number of channels between architecture search and final evaluation becomes larger.
  2. Searching with a deeper model might require different hyper-parameters due to the increased number of layers to back-propagate through.

1.7 Do the experiments and results in the paper adequately support the scientific hypothesis being verified?

  1. Result #1 (figure in the original post).
  2. Result #2:
    1. CIFAR-10 (table in the original post)
    2. PTB (table in the original post)
  3. Result #3: transferability
    1. CIFAR-10 -> ImageNet (table in the original post)
    2. PTB -> WT2 (table in the original post)

1.8 What are the contributions of this paper?

  1. A novel algorithm for differentiable network architecture search based on bilevel optimization.
  2. Extensive experiments showing that gradient-based architecture search achieves highly competitive results and remarkable efficiency improvement.
  3. The architectures learned by DARTS on CIFAR-10 and PTB are transferable to ImageNet and WikiText-2, respectively.

1.9 What's next? What follow-up work is worth pursuing?

  1. Differentiable architecture search on Graph neural networks.
  2. Parallel DARTS

2 Code Analysis

DARTS

Author: Haowei
Post link: http://howiehsu0126.github.io/2023/06/23/DARTS/
Copyright notice: Unless otherwise stated, all posts on this blog are licensed under CC BY-NC-SA 4.0. Please credit Haowei Hub when reposting.