Chitrakshara: A Large Multilingual Multimodal Dataset for Indian languages

Published: 06 May 2025, Last Modified: 29 May 2025. VLMs4All 2025 Poster. License: CC BY 4.0
Keywords: Multimodal data, Indic Multimodal dataset, Interleaved dataset, Multilingual Multimodal dataset, Cultural VLMs
TL;DR: We introduce the Chitrakshara dataset series, a multilingual multimodal collection covering 11 Indian languages sourced from Common Crawl, comprising (1) Chitrakshara-IL, an interleaved pretraining dataset, and (2) Chitrakshara-Cap, an image-caption dataset.
Abstract: Multimodal research has predominantly focused on single-image reasoning, with limited exploration of multi-image scenarios. Recent models have sought to enhance multi-image understanding through large-scale pretraining on interleaved image-text datasets. However, most Vision-Language Models (VLMs) are trained primarily on English datasets, leading to inadequate representation of Indian languages. To address this gap, we introduce the Chitrakshara dataset series, covering 11 Indian languages sourced from Common Crawl. It comprises (1) Chitrakshara-IL, a large-scale interleaved pretraining dataset with 193M images, 30B text tokens, and 50M multilingual documents, and (2) Chitrakshara-Cap, which includes 44M image-text pairs with 733M tokens. This paper details the data collection pipeline, including curation, filtering, and processing methodologies. Additionally, we present a comprehensive quality and diversity analysis to assess the dataset’s representativeness across Indic languages and its potential for developing more culturally inclusive VLMs.
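The abstract defers the pipeline specifics (curation, filtering, processing) to the paper body. As a rough illustration only, the sketch below shows what an interleaved document record and a simple size-based filter might look like for such a corpus; the record layout, field names, and thresholds are hypothetical assumptions, not the paper's actual methodology.

```python
from dataclasses import dataclass, field

# Hypothetical record for one interleaved document: an ordered sequence
# of text spans and image references, as is common in interleaved
# image-text pretraining corpora. Not the paper's actual schema.
@dataclass
class InterleavedDoc:
    lang: str  # ISO 639-1 code, e.g. "hi", "ta"
    # Each segment is ("text", string) or ("image", url).
    segments: list = field(default_factory=list)

    def n_images(self) -> int:
        return sum(1 for kind, _ in self.segments if kind == "image")

    def n_chars(self) -> int:
        return sum(len(val) for kind, val in self.segments if kind == "text")

def keep(doc: InterleavedDoc, min_chars: int = 200, max_images: int = 30) -> bool:
    # Illustrative length/image-count filter; the thresholds are invented
    # placeholders, not the filtering criteria used for Chitrakshara.
    return doc.n_chars() >= min_chars and 1 <= doc.n_images() <= max_images

doc = InterleavedDoc(
    lang="hi",
    segments=[
        ("text", "lorem " * 50),
        ("image", "https://example.com/img.jpg"),
    ],
)
print(keep(doc))  # True
```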
Submission Number: 11