Figure 1 - uploaded by Pichao Wang
A comparison of TransReID, ViT-BoT, ResNet, and ResNeSt on MSMT17. The computational cost of ResNet50 is taken as the baseline for the inference-time comparison.

Source publication
Preprint
Full-text available
In this paper, we explore the Vision Transformer (ViT), a pure transformer-based model, for the object re-identification (ReID) task. With several adaptations, a strong baseline, ViT-BoT, is constructed with ViT as the backbone, which achieves results comparable to convolutional neural network (CNN)-based frameworks on several ReID benchmarks. Furthermore...
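The abstract above describes building a ReID baseline on a ViT backbone. A minimal sketch of the ViT input pipeline such a backbone relies on (splitting an image into patches, projecting them to tokens, and prepending a [CLS] token that typically serves as the ReID embedding) is shown below; the random projection is an assumption standing in for learned patch-embedding weights, and the function name is illustrative, not from the paper.

```python
import numpy as np

def patch_embed(img, patch=16, dim=64, rng=np.random.default_rng(0)):
    """Sketch of ViT tokenization: tile the image into non-overlapping
    patches, flatten each tile, and linearly project it to a token.
    (Assumption: a random projection stands in for learned weights.)"""
    H, W, C = img.shape
    # reshape into (rows, patch, cols, patch, C) tiles, then flatten each tile
    tiles = img.reshape(H // patch, patch, W // patch, patch, C)
    tiles = tiles.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)
    W_proj = rng.standard_normal((tiles.shape[1], dim))
    tokens = tiles @ W_proj                # (num_patches, dim)
    cls = np.zeros((1, dim))               # [CLS] token: pooled into the ReID embedding
    return np.concatenate([cls, tokens])   # (1 + num_patches, dim)

# a typical ReID input resolution of 256x128 yields a 16x8 patch grid
seq = patch_embed(np.zeros((256, 128, 3)))
print(seq.shape)  # (129, 64)
```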

Context in source publication

Context 1
... the newly designed SIE and JPM modules, we propose the final model architecture, termed TransReID. As shown in Figure 1, TransReID achieves a strong speed-accuracy trade-off. ...

Similar publications

Preprint
Full-text available
Video transformers have recently emerged as a competitive alternative to 3D CNNs for video understanding. However, due to their large number of parameters and reduced inductive biases, these models require supervised pretraining on large-scale image datasets to achieve top performance. In this paper, we empirically demonstrate that self-supervised...

Citations

... TTSR [49] restored the texture information of the image super-resolution result based on the transformer. TransReID [50] applied the transformer to the retrieval field for the first time and achieved results comparable to CNN-based methods. Yu et al. [51] extended the transformer to a Multimodal Transformer (MT) model for image captioning and significantly outperformed the previous state-of-the-art methods. ...
Preprint
Full-text available
Cross-view geo-localization is the task of matching images of the same geographic location across different views, e.g., unmanned aerial vehicle (UAV) and satellite. The most difficult challenges are position shift and uncertainty in distance and scale. Existing methods mainly aim at mining more comprehensive fine-grained information; however, they underestimate the importance of extracting robust feature representations and the impact of feature alignment. CNN-based methods have achieved great success in cross-view geo-localization, but they still have limitations, e.g., they can only extract part of the information in a neighborhood, and some scale-reduction operations lose fine-grained information. We introduce a simple and efficient transformer-based structure called Feature Segmentation and Region Alignment (FSRA) to enhance the model's ability to understand contextual information as well as the distribution of instances. Without using additional supervisory information, FSRA divides regions based on the heat distribution of the transformer's feature map and then aligns multiple specific regions across views one-to-one. Finally, FSRA integrates each region into a set of feature representations. The difference is that FSRA does not divide regions manually but automatically, based on the heat distribution of the feature map, so that specific instances can still be segmented and aligned even under significant shifts and scale changes. In addition, a multiple-sampling strategy is proposed to overcome the disparity between the number of satellite images and the number of images from other sources. Experiments show that the proposed method has superior performance and achieves the state of the art in both drone-view target localization and drone navigation. Code will be released at https://github.com/Dmmm1997/FSRA
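The abstract describes segmenting transformer patch tokens into regions by their heat and aligning regions across views by rank rather than by manual layout. The sketch below illustrates that idea under a simplifying assumption (heat taken as each token's L2 norm, equal-size heat bands); the function name and pooling choice are illustrative, not FSRA's exact method.

```python
import numpy as np

def segment_by_heat(tokens, num_regions=3):
    """Sketch of FSRA-style feature segmentation: rank patch tokens by
    heat and split them into bands, one pooled descriptor per band.
    (Assumption: heat is approximated by each token's L2 norm.)"""
    heat = np.linalg.norm(tokens, axis=1)        # (N,) per-patch heat score
    order = np.argsort(-heat)                    # patch indices, hottest first
    bands = np.array_split(order, num_regions)   # contiguous heat bands
    # regions with the same rank are comparable across views, enabling
    # one-to-one alignment without manual region division
    return np.stack([tokens[idx].mean(axis=0) for idx in bands])

# toy usage: 16 patch tokens of dimension 8 -> 3 region descriptors
regions = segment_by_heat(np.random.default_rng(0).standard_normal((16, 8)))
print(regions.shape)  # (3, 8)
```

Because regions are defined by heat rank instead of spatial position, the matching remains meaningful when the target shifts or changes scale between the drone and satellite views.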