Publications
Yu, Zifan; Tavakoli, Erfan Bank; Chen, Meida; You, Suya; Rao, Raghuveer; Agarwal, Sanjeev; Ren, Fengbo
TokenMotion: Motion-Guided Vision Transformer for Video Camouflaged Object Detection Via Learnable Token Selection Miscellaneous
2024, (arXiv:2311.02535 [cs]).
@misc{yu_tokenmotion_2024,
title = {TokenMotion: Motion-Guided Vision Transformer for Video Camouflaged Object Detection Via Learnable Token Selection},
author = {Zifan Yu and Erfan Bank Tavakoli and Meida Chen and Suya You and Raghuveer Rao and Sanjeev Agarwal and Fengbo Ren},
url = {http://arxiv.org/abs/2311.02535},
year = {2024},
date = {2024-02-01},
urldate = {2024-02-21},
publisher = {arXiv},
abstract = {The area of Video Camouflaged Object Detection (VCOD) presents unique challenges in the field of computer vision due to texture similarities between target objects and their surroundings, as well as irregular motion patterns caused by both objects and camera movement. In this paper, we introduce TokenMotion (TMNet), which employs a transformer-based model to enhance VCOD by extracting motion-guided features using a learnable token selection. Evaluated on the challenging MoCA-Mask dataset, TMNet achieves state-of-the-art performance in VCOD. It outperforms the existing state-of-the-art method by a 12.8% improvement in weighted F-measure, an 8.4% enhancement in S-measure, and a 10.7% boost in mean IoU. The results demonstrate the benefits of utilizing motion-guided features via learnable token selection within a transformer-based framework to tackle the intricate task of VCOD.},
note = {arXiv:2311.02535 [cs]},
keywords = {Computer Science - Computer Vision and Pattern Recognition, Narrative},
pubstate = {published},
tppubtype = {misc}
}