We propose applying architecture search from automated machine learning to find better architectures for dynamic-inference models. Existing dynamic-inference approaches insert new classifiers at the granularity of neural blocks; concerned that this granularity limits the effectiveness of optimization, we instead propose performing dynamic inference with the automatically searched cells of the model. Because cells are smaller units, they yield a multi-classifier model with finer granularity. Our proposed architecture includes (1) densely connected cells that replace the cells of the original model, (2) a renewed search for a more optimal network structure using neural architecture search, and (3) an earlier decision-maker to accelerate inference. We evaluate the design on semantic image segmentation. Experimental results show a 1.6x speedup at accuracy close to that of the general-purpose model, and in the fast mode a 2.15x speedup with only a 2% accuracy drop.
Dynamic inference, which adaptively skips parts of model execution based on the complexity of the input data, can effectively reduce the computation cost of deep learning models during inference. However, current architectures for dynamic inference only consider exits at the block level, which may not be suitable for all applications. In this paper, we present Auto-Dynamic-DeepLab (ADD), a network architecture that enables fine-grained dynamic inference for semantic image segmentation. To allow exit points at the cell level, ADD utilizes Neural Architecture Search (NAS), built on the framework of Auto-DeepLab, to seek the optimal network structure. In addition, ADD replaces the cells in Auto-DeepLab with densely connected cells to ease the interference among multiple classifiers, and employs an earlier decision-maker to further optimize performance. Experimental results show that ADD achieves accuracy similar to that of Auto-DeepLab in terms of mIoU with a 1.6x speedup. In the fast mode, ADD achieves a 2.15x speedup with only a 2% accuracy drop compared to Auto-DeepLab.