We present the Encoder-Decoder Attention Transformer (EDAFormer), which consists of an Embedding-Free Transformer (EFT) encoder and an all-attention decoder, both leveraging our Embedding-Free Attention (EFA) structure. EFA is a novel global context modeling mechanism that focuses on the global non-linearity of attention rather than the specific roles of the query, key and value. For the decoder, we explore an optimized structure that accounts for this globality, improving semantic segmentation performance. In addition, we propose a novel Inference Spatial Reduction (ISR) method for computational efficiency. Unlike previous spatial reduction attention methods, our ISR further reduces the key-value resolution at the inference phase, narrowing the computation-performance trade-off gap for efficient semantic segmentation. EDAFormer achieves state-of-the-art performance with efficient computation compared to existing transformer-based semantic segmentation models on three public benchmarks: ADE20K, Cityscapes and COCO-Stuff. Furthermore, our ISR method reduces the computational cost by up to 61% with minimal mIoU degradation on the Cityscapes dataset.
1. We propose a novel Embedding-Free Attention (EFA) structure, which removes the query, key and value embeddings from the attention mechanism. EFA achieves competitive performance on image classification and semantic segmentation tasks. We empirically find that EFA is well suited to our ISR in terms of the trade-off between computation and performance degradation.
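Conceptually, the embedding-free design can be sketched as attention computed directly on the input tokens, with no learned query/key/value projection weights. The PyTorch snippet below is an illustrative approximation only, not the paper's exact formulation; the average-pooling key-value reduction and the scaling factor are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingFreeAttention(nn.Module):
    """Hypothetical sketch of an embedding-free attention layer:
    global attention over the raw input tokens, with no learned
    Wq/Wk/Wv projections. The paper's exact EFA may differ."""

    def __init__(self, reduction_ratio: int = 1):
        super().__init__()
        self.reduction_ratio = reduction_ratio  # spatial reduction of key/value tokens

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, N, C) with N = h * w tokens
        b, n, c = x.shape
        kv = x
        if self.reduction_ratio > 1:
            # Parameter-free reduction: average-pool the spatial grid.
            kv = x.transpose(1, 2).reshape(b, c, h, w)
            kv = F.avg_pool2d(kv, self.reduction_ratio)
            kv = kv.flatten(2).transpose(1, 2)  # (B, N', C)
        # Similarity between the tokens themselves (no projections).
        attn = torch.softmax(x @ kv.transpose(1, 2) / c ** 0.5, dim=-1)
        return attn @ kv

# Example: 16x16 token grid, 64 channels, key/value tokens pooled 2x.
x = torch.randn(2, 16 * 16, 64)
y = EmbeddingFreeAttention(reduction_ratio=2)(x, 16, 16)
```

The output keeps the full query resolution, since only the key/value tokens are spatially reduced.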
Figure 1. Comparison of the previous attention structure and our EFA structure.
Figure 2. (a) Overall architecture of EDAFormer. (b) Details of the Embedding-Free Transformer block.
Figure 3. Overview of the ISR method at the 1st stage of the encoder. Our ISR adjusts the reduction ratio at inference, selectively reducing the number of key and value tokens. This can be applied at every stage that contains a self-attention structure, flexibly reducing the computational cost without disrupting the spatial structure.
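Because the spatial reduction here can be parameter-free (e.g. pooling rather than a strided convolution tied to one ratio), the reduction ratio can be raised at inference without retraining. The toy function below is a hypothetical sketch of that idea under this pooling assumption, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def embedding_free_attention(x, h, w, reduction_ratio):
    """Toy sketch (assumed formulation): projection-free attention whose
    key/value tokens are average-pooled by `reduction_ratio`."""
    b, n, c = x.shape  # x: (B, N, C), N = h * w
    kv = x.transpose(1, 2).reshape(b, c, h, w)
    kv = F.avg_pool2d(kv, reduction_ratio).flatten(2).transpose(1, 2)
    attn = torch.softmax(x @ kv.transpose(1, 2) / c ** 0.5, dim=-1)
    return attn @ kv

x = torch.randn(1, 64 * 64, 32)
# Train-time setting: moderate key/value reduction.
out_train = embedding_free_attention(x, 64, 64, reduction_ratio=2)
# ISR at inference: a larger ratio shrinks the attention matrix
# (fewer key/value tokens), cutting computation with no new weights.
out_infer = embedding_free_attention(x, 64, 64, reduction_ratio=4)
```

Both calls produce output at the full query resolution; only the attention cost changes with the ratio.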
Table 1. Performance comparison with the transformer-based state-of-the-art semantic segmentation models & Performance-Computation curves of our EDAFormer and existing segmentation models.
Table 2. Performance comparison with the previous classification models on ImageNet.
Table 3. Computation and performance of EDAFormer with ISR on three standard benchmarks. † indicates 'w/o ISR', where the same reduction ratio is applied at training and inference. ⋆ indicates fine-tuning. Bold marks the optimal inference reduction ratio for our EDAFormer.
Table 4. (a) Performance comparison of our model with ISR and our model trained with the increased reduction ratio. (b) Ablation on the effectiveness of EFA for ISR. Our EFA structure is effective for applying our ISR method.
Table 5. Applying our ISR without fine-tuning to various transformer-based models. These results demonstrate the generalizability of our ISR.
@article{yu2024embedding,
title={Embedding-Free Transformer with Inference Spatial Reduction for Efficient Semantic Segmentation},
author={Yu, Hyunwoo and Cho, Yubin and Kang, Beoungwoo and Moon, Seunghun and Kong, Kyeongbo and Kang, Suk-Ju},
journal={arXiv preprint arXiv:2407.17261},
year={2024}
}