12 Mar 2024 | Hanning Chen, Wenjun Huang, Yang Ni, Sanggeon Yun, Fei Wen, Hugo Latapie, Mohsen Imani
TaskCLIP is a novel framework for task-oriented object detection, which aims to find objects suitable for a given task. The authors address the ambiguous semantics and data scarcity that make task-oriented object detection difficult by proposing a two-stage design: general object detection followed by task-guided object selection. Unlike previous all-in-one models, TaskCLIP leverages large Vision-Language Models (VLMs), which provide rich semantic knowledge and a shared embedding space for images and text. To improve the alignment between object images and their visual attributes, a transformer-based aligner re-calibrates the embeddings, and a trainable score function post-processes the VLM matching results for object selection. Experimental results on the COCO-Tasks dataset show that TaskCLIP outperforms state-of-the-art DETR-based models by 3.5% while requiring only a single NVIDIA RTX 4090 for training and inference, demonstrating superior accuracy and training efficiency.
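To make the described second stage concrete, here is a minimal, hypothetical sketch in PyTorch: a transformer-based aligner re-calibrates frozen-VLM object embeddings against task-attribute text embeddings, and a small trainable score head ranks objects by task suitability. All module names, shapes, and the fusion scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TaskGuidedSelector(nn.Module):
    """Illustrative second stage: align object embeddings with task-attribute
    text embeddings (both assumed to come from a frozen VLM such as CLIP),
    then score each detected object for task suitability."""
    def __init__(self, dim=512, num_heads=8, num_layers=2):
        super().__init__()
        # Transformer-based aligner: re-calibrates object (visual) embeddings
        # by attending over the task's attribute text embeddings.
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.aligner = nn.TransformerDecoder(layer, num_layers=num_layers)
        # Trainable score function: maps the matching result for each object
        # to a single task-suitability logit.
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, obj_emb, text_emb):
        # obj_emb:  (B, N_objects, dim) VLM image embeddings of detected objects
        # text_emb: (B, N_attrs,  dim) VLM text embeddings of task attributes
        aligned = self.aligner(tgt=obj_emb, memory=text_emb)     # re-calibrated visual embeddings
        sim = aligned @ text_emb.transpose(1, 2)                 # (B, N_objects, N_attrs) matching scores
        fused = sim.softmax(dim=-1) @ text_emb                   # attribute-weighted text context
        return self.score(aligned + fused).squeeze(-1)           # (B, N_objects) suitability logits

# Hypothetical usage with random stand-ins for VLM outputs.
selector = TaskGuidedSelector()
obj_emb = torch.randn(1, 10, 512)   # 10 objects from a general detector (stage one)
text_emb = torch.randn(1, 5, 512)   # 5 visual-attribute phrases describing the task
scores = selector(obj_emb, text_emb)
print(scores.shape)  # torch.Size([1, 10])
```

In this sketch the VLM encoders stay frozen; only the aligner and score head would be trained, which is consistent with the paper's emphasis on training efficiency, though the exact trainable components are an assumption here.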