🔥 Highlights:
(a) PixelRefer and PixelRefer-Lite surpass prior MLLMs across diverse benchmarks.
(b) They achieve top results with fewer training samples.
(c) PixelRefer-Lite reduces inference time and GPU memory usage.
Two complementary paradigms for region-level representation in our approach:
(a) the Vision-Object Framework and (b) the Object-Only Framework.
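The two paradigms differ mainly in which visual tokens reach the LLM. As a rough illustration (function and tensor names below are hypothetical, not the released API), the Vision-Object Framework feeds both global vision tokens and object tokens to the LLM, while the Object-Only Framework keeps only the compact object tokens:

```python
import torch

def build_llm_inputs(vision_tokens, object_tokens, text_embeds, object_only=False):
    """Assemble one LLM input sequence from visual and textual embeddings.

    vision_tokens: [N_v, D] global image/video tokens from the vision encoder
    object_tokens: [N_o, D] region-level tokens from the object tokenizer
    text_embeds:   [N_t, D] embedded instruction tokens
    object_only:   True selects the Object-Only (Lite-style) path
    """
    if object_only:
        visual = object_tokens  # Object-Only: only compact object tokens
    else:
        visual = torch.cat([vision_tokens, object_tokens], dim=0)  # Vision-Object
    return torch.cat([visual, text_embeds], dim=0)
```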
Architecture of our proposed Scale-Adaptive Object Tokenizer.
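A minimal sketch of what a scale-adaptive object tokenizer could look like, assuming it mask-pools vision features over the region and allocates more output tokens to larger regions; the grid schedule, thresholds, and function name are illustrative assumptions, not the paper's specification:

```python
import torch
import torch.nn.functional as F

def tokenize_object(feat_map, mask):
    """feat_map: [D, H, W] vision features; mask: [H, W] binary region mask."""
    mask = mask.float()
    area_frac = mask.mean().item()  # fraction of the frame the region covers
    # Pick an output grid from the region scale (illustrative schedule).
    grid = 1 if area_frac < 0.01 else (2 if area_frac < 0.1 else 4)

    masked = feat_map * mask.unsqueeze(0)                      # zero out non-region features
    pooled = F.adaptive_avg_pool2d(masked.unsqueeze(0), grid)  # [1, D, grid, grid]
    weight = F.adaptive_avg_pool2d(mask[None, None], grid).clamp_min(1e-6)
    tokens = (pooled / weight).squeeze(0).flatten(1).t()       # [grid*grid, D] object tokens
    return tokens
```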
Overview of datasets used for model training.
Left: Data distribution for Foundational Object Perception training (1.4M samples).
Right: Data for Visual Instruction Tuning (0.8M samples).
Performance on image-level region understanding benchmarks,
covering category-level recognition (LVIS and PACO), detailed captioning (DLC-Bench and Ref-L4 [CLAIR]),
phrase-level captioning (Ref-L4 and VG), and reasoning-level understanding (Ferret-Reasoning).
Performance comparisons on VideoRefer-Bench.
Inference time and memory usage on DLC-Bench (Image) and HC-STVG (Video).
We report per-item inference time (s/item) and peak GPU memory (GB).
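Per-item latency and peak GPU memory of this kind can be measured with standard PyTorch utilities; in the sketch below, `model.generate` stands in for whichever inference entry point the released code exposes and is an assumption:

```python
import time
import torch

def profile_once(model, inputs):
    # Reset the peak-memory counter, then time one full inference call.
    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.inference_mode():
        _ = model.generate(**inputs)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start                    # inference time (s/item)
    peak_gb = torch.cuda.max_memory_allocated() / 1024 ** 3  # peak GPU memory (GB)
    return elapsed, peak_gb
```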
@article{yuan2025pixelrefer,
title = {PixelRefer: A Unified Framework for Spatio-Temporal Object Referring with Arbitrary Granularity},
author = {Yuqian Yuan and Wenqiao Zhang and Xin Li and Shihao Wang and Kehan Li and Wentong Li and Jun Xiao and Lei Zhang and Beng Chin Ooi},
year = {2025},
journal = {arXiv},
}
@inproceedings{yuan2025videorefer,
title = {VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM},
author = {Yuqian Yuan and Hang Zhang and Wentong Li and Zesen Cheng and Boqiang Zhang and Long Li and Xin Li and Deli Zhao and Wenqiao Zhang and Yueting Zhuang and others},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference},
pages = {18970--18980},
year = {2025},
}