Embedding-Free Transformer with Inference Spatial Reduction for Efficient Semantic Segmentation

1 Sogang University, 2 LG Electronics, 3 Pusan National University
*Equal Contribution

Accepted to ECCV 2024



Abstract

We present the Encoder-Decoder Attention Transformer, EDAFormer, which consists of the Embedding-Free Transformer (EFT) encoder and the all-attention decoder, both leveraging our Embedding-Free Attention (EFA) structure. The proposed EFA is a novel global context modeling mechanism that focuses on the function of the global non-linearity rather than on the specific roles of the query, key and value. For the decoder, we explore a structure optimized for capturing globality, which improves semantic segmentation performance. In addition, we propose a novel Inference Spatial Reduction (ISR) method for computational efficiency. Unlike previous spatial reduction attention methods, our ISR method further reduces the key-value resolution at the inference phase, which mitigates the computation-performance trade-off for efficient semantic segmentation. Our EDAFormer achieves state-of-the-art performance with efficient computation compared to existing transformer-based semantic segmentation models on three public benchmarks: ADE20K, Cityscapes and COCO-Stuff. Furthermore, our ISR method reduces the computational cost by up to 61% with minimal mIoU degradation on the Cityscapes dataset.

🔥 Highlights

1. We propose a novel Embedding-Free Attention (EFA) structure, which removes the query, key and value embeddings from the attention mechanism. EFA achieves competitive performance on image classification and semantic segmentation tasks, and we empirically find that it is well suited to our ISR in terms of the trade-off between computation and performance degradation (a minimal sketch of the structure follows Figure 1).

Figure 1. Comparison of the previous attention structure and our EFA structure
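As a concrete illustration of the figure above, here is a minimal PyTorch sketch of the embedding-free attention idea: the query, key and value projection layers are removed, the (spatially reduced) input tokens themselves act as query, key and value, and the softmax supplies the global non-linearity. The class name, the multi-head split, the average-pooling spatial reduction and the output projection are our own assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EmbeddingFreeAttention(nn.Module):
    """Sketch of the EFA idea: no learned Q/K/V projections; the input tokens
    themselves serve as query, key and value, and the softmax provides the
    global non-linearity. The pooling-based reduction with ratio `r` and the
    output projection are illustrative assumptions."""

    def __init__(self, dim, num_heads=8, reduction_ratio=4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.r = reduction_ratio          # key/value spatial reduction ratio
        self.proj = nn.Linear(dim, dim)   # output projection (assumption)

    def forward(self, x, H, W):
        # x: (B, N, C) token sequence with N = H * W
        B, N, C = x.shape
        q = x.reshape(B, N, self.num_heads, self.head_dim).transpose(1, 2)

        # Parameter-free spatial reduction of the key/value tokens.
        kv = x.transpose(1, 2).reshape(B, C, H, W)
        if self.r > 1:
            kv = F.avg_pool2d(kv, kernel_size=self.r, stride=self.r)
        kv = kv.flatten(2).transpose(1, 2)                                # (B, N', C)
        kv = kv.reshape(B, -1, self.num_heads, self.head_dim).transpose(1, 2)

        # The same unembedded tokens serve as both key and value.
        attn = (q @ kv.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)        # global non-linearity
        out = (attn @ kv).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

For example, on a 64x64 feature map with 96 channels, `EmbeddingFreeAttention(96)(x, 64, 64)` preserves the (B, 4096, 96) shape while attending over 16x fewer key/value tokens.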



2. We present EDAFormer, a powerful semantic segmentation model composed of the proposed Embedding-Free Transformer encoder and an all-attention decoder. The all-attention decoder employs more of the proposed EFA modules at the higher levels to capture the global context more effectively (a rough sketch follows Figure 2).

Figure 2. (a) Overall architecture of EDAFormer (b) Details of the Embedding-Free Transformer block
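Building on the `EmbeddingFreeAttention` sketch above, the following is a rough sketch of how an all-attention decoder with more EFA blocks at the higher levels could be wired up. The block counts (1, 2, 3), channel width, input dimensions and the upsample-and-concat fusion are hypothetical choices meant only to convey the structure, not the released design.

```python
class AllAttentionDecoder(nn.Module):
    """Sketch of an all-attention decoder: each encoder stage is refined by a
    stack of EFA blocks, with more blocks assigned to the higher (lower
    resolution, more semantic) stages, then the stages are upsampled, fused
    and classified. All hyperparameters here are illustrative assumptions."""

    def __init__(self, in_dims=(64, 160, 256), dim=256, num_classes=150,
                 blocks_per_stage=(1, 2, 3)):
        super().__init__()
        self.projs = nn.ModuleList(nn.Linear(c, dim) for c in in_dims)
        self.stages = nn.ModuleList(
            nn.ModuleList(EmbeddingFreeAttention(dim) for _ in range(n))
            for n in blocks_per_stage
        )
        self.classifier = nn.Conv2d(dim * len(in_dims), num_classes, kernel_size=1)

    def forward(self, feats):
        # feats: encoder features [(B, C_i, H_i, W_i), ...] from low to high stage
        outs = []
        for proj, blocks, f in zip(self.projs, self.stages, feats):
            B, _, H, W = f.shape
            x = proj(f.flatten(2).transpose(1, 2))            # (B, H*W, dim)
            for blk in blocks:
                x = x + blk(x, H, W)                          # residual EFA block
            x = x.transpose(1, 2).reshape(B, -1, H, W)
            outs.append(F.interpolate(x, size=feats[0].shape[2:],
                                      mode='bilinear', align_corners=False))
        return self.classifier(torch.cat(outs, dim=1))
```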



3. We present the Inference Spatial Reduction (ISR) method, which reduces the key-value spatial resolution further at the inference phase than at the training phase. ISR lowers the computational cost at inference with little degradation in segmentation performance, and without additional training it allows the computational cost of a trained model to be adjusted selectively (a sketch follows Figure 3).


Figure 3. Overview of the ISR method at the 1st stage of the encoder. Our ISR adjusts the reduction ratio at inference, selectively reducing the key and value tokens. This framework can be applied at every stage that contains a self-attention structure, enabling flexible reduction of the computational cost without disrupting the spatial structure.
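A minimal sketch of the inference-time adjustment, continuing the illustrative modules above: because the key/value reduction in the sketch is parameter-free pooling, the reduction ratios of an already-trained model can simply be overwritten before inference to trade a small amount of accuracy for lower attention cost. The helper name `apply_isr`, the module naming and the per-stage ratio mapping are our own assumptions, not the paper's actual interface.

```python
@torch.no_grad()
def apply_isr(model, inference_ratios):
    """Illustrative helper: walk a trained model and raise the key/value
    reduction ratio of every EmbeddingFreeAttention module at inference.
    `inference_ratios` maps a stage-name prefix (assumed naming) to the new,
    larger ratio, e.g. {'encoder.stage1': 8, 'encoder.stage2': 4}."""
    for name, module in model.named_modules():
        if isinstance(module, EmbeddingFreeAttention):
            for prefix, r in inference_ratios.items():
                if name.startswith(prefix):
                    module.r = r   # larger ratio -> fewer key/value tokens, no retraining
```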



4. EDAFormer surpasses previous transformer-based segmentation models in both efficiency and accuracy on the ADE20K, Cityscapes and COCO-Stuff benchmarks.

Experimental Results


Table 1. Performance comparison with state-of-the-art transformer-based semantic segmentation models, and performance-computation curves of our EDAFormer and existing segmentation models.


Table 2. Performance comparison with previous classification models on ImageNet.


Table 3. Computation and performance of EDAFormer with ISR on three standard benchmarks. † indicates 'w/o ISR', i.e., the same reduction ratio is applied at training and inference. ⋆ indicates fine-tuning. Bold indicates the optimal inference reduction ratio for our EDAFormer.


Table 4. (a) Performance comparison between our model with ISR and our model trained with an increased reduction ratio. (b) Ablation on the effectiveness of EFA for ISR, showing that our EFA structure is well suited to our ISR method.


Table 5. Applying our ISR to various transformer-based models without fine-tuning. These results show the generalizability of our ISR.

Visualizations

BibTeX

@article{yu2024embedding,
  title={Embedding-Free Transformer with Inference Spatial Reduction for Efficient Semantic Segmentation},
  author={Yu, Hyunwoo and Cho, Yubin and Kang, Beoungwoo and Moon, Seunghun and Kong, Kyeongbo and Kang, Suk-Ju},
  journal={arXiv preprint arXiv:2407.17261},
  year={2024}
}