Revisit Large-Scale Image–Caption Data in Pre-training Multimodal Foundation Models
Authors: Zhengfeng Lai, Vasileios Saveris, Chen Chen, Hong-You Chen, Haotian Zhang, Bowen Zhang, Juan Lao Tebar, Wenze Hu, Zhe Gan, Peter Grasch, Meng Cao, Yinfei Yang
Recent advancements in multimodal models highlight the value of rewritten captions for improving performance, yet key challenges remain. Notably, the role of synthetic captions and their interaction with original web-crawled AltTexts in pre-training is still unclear. Moreover, different multimodal foundation models may prefer different caption formats, yet efforts to identify the optimal caption format for each foundation model remain limited. In this work, we introduce a novel, controllable, and scalable captioning pipeline that generates diverse caption formats tailored to various multimodal models. Using short synthetic captions (SSC) and descriptive synthetic captions (DSC) as two case studies, we systematically investigate their effects and interactions with AltTexts across models such as CLIP, multimodal LLMs, and diffusion models. Our findings reveal that a hybrid approach combining synthetic captions with AltTexts can improve both alignment and performance, with each model showing a preference for particular caption formats. Through comprehensive analysis, our work provides valuable insights into optimizing captioning strategies, advancing the pre-training of multimodal foundation models.