"main_content": "Introduction With the advances in generative language technologies powered by Large Language Models (LLMs; Brown et al., 2020; Rae et al., 2021; Chowdhery et al., 2022; OpenAI et al., 2023; Tay et al., 2023; Google, 2023), there has been a surge of interest in evaluating the multilingual capabilities of these models. Recent work (Ahuja et al., 2023a,b) shows a consistent performance gap between high resource languages and languages with Figure 1: Performance of state-of-the-art LLMs on different tasks in INDICGENBENCH. We observe a significant performance gap between English and Indic languages across LLMs. lower amounts of web resources available. To develop highly multilingual generative LLMs which should work equally well for 100s of languages spoken by billions of people in the world, it is crucial to evaluate their capabilities across a variety of languages to uncover performance gaps and guide future research. In this work we focus on India, a country with 1369 rationalized mother tongues spoken by more than a billion people.1 Making progress on language technologies for Indic languages will not only improve the state of affairs in this region, but will also provide valuable learning to the NLP community which will be applicable to other geographical regions and language families. There are has been much work from the community in building natural language understanding (NLU) models for Indic languages (Kakwani et al., 2020; Khanuja et al., 2021), as well as evaluation datasets (Doddapaneni et al., 2023; Mhaske et al., 2023) to support such models. In this work, our focus is to develop 1https://en.wikipedia.org/wiki/Languages_of_India arXiv:2404.16816v1 [cs.CL] 25 Apr 2024 \fTask Language Input Output #Languages Dataset Size (H / M / L) (Train / Dev / Test) CROSSSUM-IN (Cross-lingual Summarization) Hindi 9 / 7 / 13 2.9k / 2.9k / 14.5k FLORES-IN (Machine Translation) Konkani 9 / 7 / 13 / 28.9k / 29.3k XQUAD-IN (Multilingual QA) Punjabi 9 / 3 / 1.2k / 1.2k / 14.3k XORQA-IN-XX (Cross-lingual QA) Telugu 9 / 6 / 13 2.8k / 14k / 15.1k XORQA-IN-EN (Cross-lingual QA) Santali 9 / 6 / 13 2.8k / 14k / 15.1k Table 1: INDICGENBENCH, our proposed benchmark, consists of five tasks: Cross-lingual Summarization (CROSSSUM-IN), Machine Translation (FLORES-IN), Multilingual QA (XQUAD-IN) and Cross-lingual QA (XORQA-IN-XX and XORQA-IN-EN). An example from each task, the number of languages for which we collect evaluation data (divided by resourcefulness, higher (H), medium (M) and low (L)), and the number of training/validation/test instances per task is shown above. See Section 2 for details. a high-quality benchmark for evaluating generative language capabilities in a variety of Indic languages across various levels of resourcefulness. We release INDICGENBENCH, a multilingual, multi-way parallel benchmark for measuring language generation capabilities across diverse userfacing tasks in 29 Indic languages across 4 language families (Table 7). INDICGENBENCH extends existing benchmarks such as CrossSum (Bhattacharjee et al., 2023), XQuAD (Artetxe et al., 2020), XorQA (Asai et al., 2021), and FLORES (NLLB-Team et al., 2022) for additional Indic languages and is composed of tasks like cross-lingual summarization (CROSSSUM-IN), machine translation (FLORES-IN), cross-lingual reading comprehension (XORQA-IN-XX and XORQAIN-EN) and multilingual reading comprehension (XQUAD-IN). 
Each dataset consists of parallel examples in up to 29 low- to comparatively higher-resource Indic languages, and for some tasks (e.g., CROSSSUM-IN), INDICGENBENCH provides the first-ever evaluation datasets for as many as 18 of these languages. We also release a small training set for all tasks for efficient adaptation of LLMs. Our comprehensive evaluation of various state-of-the-art proprietary and open-source LLMs on INDICGENBENCH shows that there is a significant gap in performance between English and Indic languages (see Figure 1). Our contributions are as follows:

• Creation and release of INDICGENBENCH, a high-quality text benchmark covering diverse language generation tasks like summarization, question answering, and translation across 29 Indic languages. INDICGENBENCH is the largest generation benchmark for Indic languages.
• Comprehensive experimentation on SoTA LLMs (mT5, Gemma, BLOOM, LLaMA, GPT-3.5, GPT-4, PaLM-2) across various model sizes and training settings to benchmark their Indic language generation capabilities.
• A qualitative analysis assessing the gaps in current language technologies and defining potential directions for future research.

2 INDICGENBENCH

INDICGENBENCH is a high-quality, human-curated benchmark to evaluate the text generation capabilities of multilingual models on Indic languages. Our benchmark consists of 5 user-facing tasks (viz., summarization, machine translation, and question answering) across 29 Indic languages spanning 13 writing scripts and 4 language families. For certain tasks, INDICGENBENCH provides the first-ever evaluation dataset for up to 18 Indic languages. Table 1 provides a summary of INDICGENBENCH and examples of instances across the tasks present in it.

Languages in INDICGENBENCH are divided into (relatively) Higher, Medium, and Low resource categories based on the availability of web text resources (see Appendix A for details).2

Higher (9): Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Tamil, Telugu, Urdu
Medium (7): Assamese, Bhojpuri, Nepali, Odia, Punjabi, Pashto, Sanskrit
Low (13): Awadhi, Haryanvi, Tibetan, Garhwali, Konkani, Chhattisgarhi, Rajasthani, Maithili, Manipuri, Malvi, Marwari, Santali, Bodo

As evident from the lists above, our benchmark provides broad coverage of languages with respect to their resourcedness, allowing users to evaluate language models on relatively high-resource languages such as Hindi and extremely low-resource languages such as Manipuri in the Meitei script on a single benchmark.

To curate the evaluation datasets for our benchmark, we use the following existing datasets as the source: CrossSum (Bhattacharjee et al., 2023) for cross-lingual summarization, FLORES (NLLB-Team et al., 2022) for machine translation, XQuAD (Artetxe et al., 2020) for multilingual QA, and XorQA (Asai et al., 2021) for cross-lingual QA. From each of these datasets we select a subset of English examples to be a part of our benchmark, and then collect professional human translations of these examples in all target Indic languages. Some target languages are already covered by the source datasets, in which case we re-purpose this existing data and only collect translations for the remaining languages.

2 We note that the languages called relatively higher resource in this paper, e.g., Hindi or Bengali, are in fact mid-to-low web resource when compared to English and other truly high-resource languages.
For example, using Wikipedia as a proxy for language resources, compared to 6.6M+ Wikipedia articles in English, there are only 160K Hindi Wikipedia articles.

We also collect and release a small amount of training and validation data, making it possible to evaluate training techniques like fine-tuning, parameter-efficient training, in-context learning, and others.

Why extend existing benchmarks? We chose to collect human translations of existing benchmarks, as opposed to creating evaluation data from scratch, for several reasons:

• Translation-based extension of existing benchmarks results in multi-way parallel data, allowing researchers to attribute performance to task knowledge vs. language understanding, and to measure cross-lingual generalization.
• For many low-resource languages in INDICGENBENCH, a clean text knowledge corpus (e.g., Wikipedia) is not available, making it difficult to acquire source data for annotation.
• By focusing only on translation quality in the target Indic languages, we are able to leverage the quality control that went into designing the source benchmarks.

Annotators were professional data labelers working as contractors at our organization and with a vendor. Annotators were paid competitive rates in compliance with applicable labor laws and prevailing market rates. Our pay rate to annotators varied across languages, ranging from USD 2.80 per hour for Pashto to USD 15.90 per hour for Tibetan.

Cross-Lingual Summarization: CROSSSUM-IN
We create CROSSSUM-IN based on CrossSum (Bhattacharjee et al., 2023), a dataset for cross-lingual summarization, which in turn is derived from XL-Sum (Hasan et al., 2021b). CrossSum contains multi-way parallel data in 45 languages, where BBC news articles in a source language are paired with corresponding summaries in other languages. Based on their matching criteria, different languages have different amounts of source-target pairs. We sample 700 English article-summary pairs (100 each from train/dev and 500 from test) and ask human translators to translate the English summary into the target Indic languages. CrossSum already contains data for 9 of our 29 target languages; for these languages we sample 100/100/500 examples from the original dataset to maintain equity with the other languages we collect data for. CROSSSUM-IN contains a total of 20.3k examples across the 29 Indic languages in our benchmark.

Machine Translation: FLORES-IN
FLORES-200 (NLLB-Team et al., 2022) is a human-annotated, multi-way parallel machine translation (MT) benchmark for 200 languages, where the same source English sentences are translated by humans into the 200 target languages. It contains data in 22 of our 29 target languages; we extend this by collecting human translations for the remaining 7 new languages, leading to an MT benchmark in 29 Indic languages which we call FLORES-IN. FLORES-200 is divided into three splits: dev (997), devtest (1012), and test (992), of which the test set is not public. We collect translations for all 997 dev and 1012 devtest sentences, yielding 2009 sentences per language. Collectively, FLORES-IN contains 58.2k examples across 29 Indic languages.

Multilingual Question-Answering: XQUAD-IN
We create an Indic Multilingual Question Answering task, XQUAD-IN, based on the multilingual reading comprehension dataset XQuAD (Artetxe et al., 2020).
XQuAD is in turn derived from the SQuAD dataset (Rajpurkar et al., 2016), in which an English Wikipedia passage is paired with multiple question-answer (QA) pairs whose answers are short spans from the given passage. The authors of XQuAD collected human translations of 240 passages and 1190 QA pairs from the SQuAD v1.1 development set into 10 higher-resource languages (Hindi being the only Indic language). To create XQUAD-IN, we use the 240 passages and 1190 QA pairs from XQuAD as our test set. We additionally selected 20 passages and 100 QA pairs each from the original SQuAD v1.1 training and development sets to create our training and development sets. For all 280 passages and 1390 QA pairs we collect professional human translations in 12 Indic languages.3 Overall, XQUAD-IN contains 3.3k passages and 16.6k QA pairs in 12 Indic languages.

3 XQUAD-IN contains all 9 higher-resource languages (see §2) and 3 medium-resource languages, namely Assamese, Odia, and Punjabi.

Cross-Lingual Question-Answering: XORQA-IN
We create the Indic Cross-lingual Question-Answering dataset XORQA-IN based on the XOR-TYDI QA dataset (Asai et al., 2021). XOR-TYDI contains questions in non-English languages paired with English evidence passages and short-span answers from those passages (similar to SQuAD). It was created with the idea of developing NLP systems that can answer questions in a user's native language by referring to sources in a high-resource language, such as English, which is more likely to contain the answer given the information scarcity of low-resource languages on the web. The original XOR-TYDI contains data in 7 languages, of which Bengali and Telugu are the two Indic languages.

To create XORQA-IN, we select the 302 Bengali and 237 Telugu examples (Bn/Te-question, En-passage, En-answer) from the XOR-TYDI dev set as our test data.4 Additionally, we sample 600 examples (equally from Bengali and Telugu) from the training set of XOR-TYDI to create our training (100) and development (500) sets. We then follow a two-staged translation process, where we first ask the human translators to translate the Bengali or Telugu question (Bn/Te-question) into English (En-question). In the second stage, we collect translations of these English questions (En-question) into the target languages (Xx-question) and translations of the English answers (En-answer) into the target languages (Xx-answer). We create two tasks from this translated data:

1. XORQA-IN-EN: Each example contains (Xx-question, En-passage, En-answer). This task is similar to the XOR-TYDI dataset, extended to additional Indic languages.
2. XORQA-IN-XX: Each example contains (Xx-question, En-passage, Xx-answer), where the task is to generate the answer in the same language as the question.

We collect data for 28 Indic languages, resulting in 32k examples.5

4 XOR-TYDI has not publicly released its test set.
5 We do not collect translations for Nepali.
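For concreteness, a single multi-way parallel XORQA-IN record can be pictured as in the sketch below. This is only an illustration of the tuple structure described above; the field names are hypothetical and do not necessarily match the released data format.

```python
# Illustrative sketch of one XORQA-IN record (field names are hypothetical).
# The same English passage and answer are paired with a question (and answer)
# translated into each target Indic language.
example = {
    "lang": "bn",            # target Indic language code
    "question_xx": "...",    # question in the Indic language
    "passage_en": "...",     # English evidence passage
    "answer_en": "...",      # short-span answer in English
    "answer_xx": "...",      # answer translated into the Indic language
}

# XORQA-IN-EN asks a model for answer_en given (question_xx, passage_en);
# XORQA-IN-XX asks for answer_xx given the same inputs.
xorqa_in_en_target = example["answer_en"]
xorqa_in_xx_target = example["answer_xx"]
```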
3 Experiments and Analysis

We use INDICGENBENCH to benchmark the multilingual and cross-lingual language generation capabilities of various LLMs on Indic languages. We perform experiments with a variety of open-source LLMs: mT5 (Xue et al., 2021), LLaMA (Touvron et al., 2023),6 BLOOMZ (Workshop et al., 2022), and Gemma (Team et al., 2024); and proprietary LLMs: GPT-3.5, GPT-4 (OpenAI et al., 2023), and PaLM-2 (Anil et al., 2023). We compare and analyze the performance of these LLMs and their variants in terms of model sizes under different learning paradigms and settings. We first evaluate model performance with one-shot prompting (§3.1) and also measure performance across language categories based on resourcedness (§3.2). We then evaluate the effect of the number of in-context examples shown to the model as supervised data (§3.3) and the effect of prompting in a higher-resource language such as English or Hindi (§3.4). Using the training data contained in INDICGENBENCH, we measure how the performance of LLMs after fine-tuning compares with few-shot prompting (§3.5). Finally, we perform a qualitative analysis of models on INDICGENBENCH and highlight some areas of improvement for future model development (§3.7).

6 LLaMA-2 could not be used due to a restrictive licence.

Evaluation Metrics For the cross-lingual summarization and translation tasks, CROSSSUM-IN and FLORES-IN, we report the Character-F1 (ChrF) metric (Popović, 2015), since token-level metrics like ROUGE and BLEU are not reliable for low-resource languages (Bapna et al., 2022). To stay consistent with the existing literature on QA tasks, we report SQuAD-style Token-F1 on our XQUAD-IN and XORQA-IN QA tasks. On FLORES-IN, we report translation performance in both directions: translating from English to the target language (enxx) and vice versa (xxen).
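As a reference for how these metrics are defined, the sketch below computes a SQuAD-style token-level F1 between a predicted and a gold answer span, and corpus-level ChrF via the sacrebleu library. This is a minimal illustration of the metric definitions, not the exact evaluation scripts used for the benchmark, which may apply additional answer normalization.

```python
from collections import Counter

from sacrebleu.metrics import CHRF  # pip install sacrebleu


def token_f1(prediction: str, reference: str) -> float:
    """SQuAD-style token-level F1 between a predicted and a gold answer span."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


# Character-level F-score (ChrF) for summarization / translation outputs.
chrf = CHRF()
hypotheses = ["model output one", "model output two"]  # illustrative strings
references = [["reference one", "reference two"]]      # one reference stream, parallel to hypotheses

print(token_f1("virat kohli", "virat kohli scored"))   # 0.8
print(chrf.corpus_score(hypotheses, references).score)
```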
3.1 Comparison of LLMs on INDICGENBENCH

In Table 2 we evaluate the LLaMA, BLOOMZ, Gemma, GPT, and PaLM-2 families of models on all tasks of INDICGENBENCH in a one-shot prompted setting. Numbers are averaged across all languages in the evaluation data. For comparison, we also report English performance for GPT-4 and PaLM-2-L. We see across tasks that larger models from the same LLM family perform better. PaLM-2-L performs the best among all LLMs considered, except for the XORQA-IN-EN task, where PaLM-2-S performs slightly better. We find that open-source LLaMA models perform much worse than proprietary models; even the largest LLaMA-65B model significantly underperforms the smallest PaLM-2-XXS model. The Gemma-7B instruction-tuned model performs better than LLaMA-13B as well as LLaMA-65B on most tasks. BLOOMZ, an instruction-tuned version of BLOOM (Workshop et al., 2022) that is pre-trained on large-scale multilingual data, performs the best among open-source models on three out of five tasks in INDICGENBENCH; on CROSSSUM-IN and XORQA-IN-XX it falls behind LLaMA and Gemma. Compared to English, we see significant room for improvement (20+ ChrF or Token-F1 points) across all tasks.

Model (LLM) | CROSSSUM-IN (ChrF) | FLORES-IN (ChrF, enxx / xxen) | XQUAD-IN (Token-F1) | XORQA-IN-XX (Token-F1) | XORQA-IN-EN (Token-F1)
Performance in English:
GPT-4 | 30.3 | – / – | 64.8 | – | 37.9
PaLM-2-L | 41.1 | – / – | 83.7 | – | 71.4
Average performance on INDICGENBENCH:
LLaMA-7B | 3.7 | 11.5 / 21.6 | 3.8 | 7.4 | 10.4
LLaMA-13B | 4.1 | 13.3 / 24.1 | 4.5 | 10.4 | 12.1
LLaMA-65B | 4.6 | 18.1 / 32.7 | 7.1 | 16.5 | 16.3
BLOOM-7B | 3.8 | 18.3 / 31.2 | 13.8 | 7.9 | 23.6
BLOOMZ-7B | 1.2 | 40.8 / 48.4 | 53.7 | 7.0 | 49.0
Gemma-7B-PT | 0.0 | 32.1 / 50.4 | 0.5 | 11.7 | 23.8
Gemma-7B-IT | 11.6 | 18.6 / 29.2 | 35.3 | 13.5 | 24.8
GPT-3.5 | 16.3 | 29.2 / 47.7 | 33.2 | 21.6 | 35.5
GPT-4 | 17.6 | 32.1 / 54.5 | 55.7 | 23.4 | 46.0
PaLM-2-XXS | 7.2 | 24.0 / 43.4 | 34.6 | 13.5 | 36.8
PaLM-2-XS | 15.5 | 40.7 / 58.3 | 62.2 | 29.5 | 47.8
PaLM-2-S | 18.5 | 43.5 / 61.6 | 66.7 | 31.6 | 57.4
PaLM-2-L | 21.2 | 47.5 / 65.1 | 69.3 | 37.4 | 55.9

Table 2: One-shot performance on INDICGENBENCH across model sizes for all LLMs considered in our work (§3.1). For each LLM family, performance improves with increasing model size, with PaLM-2-L performing the best across most tasks. Compared to English, all models under-perform significantly, highlighting shortcomings of current SoTA LLMs. See Section 3.1 for details.

3.2 Performance across language categories

In Table 3 we report one-shot performance across the language categories defined in Section 2. We only show performance for the Gemma-7B-IT, BLOOMZ-7B, LLaMA-65B, GPT-4, and PaLM-2-L models here and report performance for the other models in Appendix B.1. We find that there is a significant performance drop going from higher-resourced languages to medium-resourced ones, and a further drop for lower-resourced languages. We would like to point out two observations here: (a) In FLORES-IN, the performance for translating English to the target language (enxx) drops significantly from higher- to lower-resourced languages (56.9 → 41.9 for PaLM-2-L), whereas the performance in the xxen direction does not fall as drastically (68.2 → 62.6). A similar trend is seen when comparing XORQA-IN-XX and XORQA-IN-EN. This highlights that current LLMs are better at understanding these lower-resourced languages than at generating fluent text in them. (b) In a few cases, we see smaller performance deltas between medium- and lower-resourced languages than between the higher and medium categories. From our analysis, this can mainly be attributed to many languages in the lower category being similar to Hindi and written in the same Devanagari script.

Model | CROSSSUM-IN (High, Med, Low) | FLORES-IN enxx/xxen (High, Med, Low) | XQUAD-IN (High, Med) | XORQA-IN-XX (High, Med, Low) | XORQA-IN-EN (High, Med, Low)
LLaMA-65B | 4.4, 4.6, 4.7 | 18.2/31.5, 15.4/30.0, 19.5/35.0 | 8.8, 1.9 | 17.7, 13.5, 17.1 | 16.4, 14.0, 17.3
Gemma-7B-IT | 13.9, 11.5, 10.0 | 17.6/33.7, 15.0/26.1, 21.3/27.7 | 38.8, 24.8 | 18.9, 8.3, 12.2 | 29.5, 23.9, 21.9
BLOOMZ-7B | 1.5, 1.7, 0.6 | 67.7/59.1, 39.4/50.2, 22.9/40.0 | 55.5, 48.1 | 10.8, 2.8, 6.2 | 64.7, 45.8, 39.5
GPT-4 | 19.4, 17.9, 16.3 | 36.2/59.6, 30.7/55.2, 29.9/50.5 | 56.1, 54.6 | 25.8, 21.6, 22.6 | 49.4, 50.0, 41.8
PaLM-2-L | 25.2, 23.1, 17.5 | 56.9/68.2, 45.9/65.6, 41.9/62.6 | 72.5, 59.8 | 41.9, 36.7, 34.6 | 57.3, 57.9, 53.9

Table 3: One-shot performance across language categories based on resourcedness, as defined in Section 2. For all tasks, we witness significantly lower performance in medium- and low-resource languages compared to the higher-resource ones. Please see Table 9 in Appendix B.1 for results on other models. See Section 3.2 for more details.

3.3 In-context learning on INDICGENBENCH

In this section we aim to understand the impact of the number of in-context examples shown to the LLM during few-shot prompting. Since CROSSSUM-IN and XQUAD-IN input passages are long, we are only able to perform 0- and 1-shot prompting. For XORQA-IN-XX and XORQA-IN-EN we perform 0- to 3-shot prompting, and for FLORES-IN we perform 0-, 1-, and 5-shot prompting. We show performance for FLORES-IN and XORQA-IN-XX in Table 4. Other results are shown in Appendix B.5 due to space limitations. Across model families and sizes, we observe that increasing the amount of supervision in the form of in-context examples improves performance.

Model (LLM) | FLORES-IN (0, 1, 5 shots) | XORQA-IN-XX (0, 1, 2, 3 shots)
LLaMA-7B | 8.0, 11.5, 11.4 | 5.0, 7.4, 9.0, 9.2
LLaMA-13B | 8.6, 13.3, 13.4 | 6.3, 10.4, 12.2, 13.1
LLaMA-65B | 14.0, 18.1, 18.3 | 12.3, 16.5, 18.7, 19.4
PaLM-2-XXS | 0.8, 24.0, 26.9 | 8.9, 13.5, 15.8, 17.5
PaLM-2-XS | 20.1, 40.7, 42.3 | 21.4, 29.5, 32.2, 33.2
PaLM-2-S | 24.9, 43.5, 45.2 | 22.7, 31.6, 33.4, 35.4
PaLM-2-L | 31.1, 47.5, 49.3 | 31.9, 37.4, 39.7, 41.1

Table 4: Performance when varying the number of in-context exemplars for LLaMA and PaLM-2 models on the FLORES-IN (enxx) and XORQA-IN-XX tasks (§3.3). Performance improves with increasing amounts of supervision provided in-context. Refer to Appendix B.2 for results on other tasks and models.
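To make the few-shot setup concrete, the sketch below shows how an n-shot prompt for XORQA-IN-XX could be assembled from training exemplars. The template and field names are hypothetical placeholders; the exact prompt format used in our experiments is not reproduced here.

```python
def build_xorqa_in_xx_prompt(exemplars, test_example, num_shots):
    """Assemble an n-shot XORQA-IN-XX prompt (hypothetical template).

    Each exemplar is a dict with 'passage_en', 'question_xx', and 'answer_xx';
    the test example supplies only the passage and the question.
    """
    parts = []
    for ex in exemplars[:num_shots]:
        parts.append(
            f"Passage: {ex['passage_en']}\n"
            f"Question: {ex['question_xx']}\n"
            f"Answer: {ex['answer_xx']}\n"
        )
    parts.append(
        f"Passage: {test_example['passage_en']}\n"
        f"Question: {test_example['question_xx']}\n"
        f"Answer:"
    )
    return "\n".join(parts)


# 0-shot through 3-shot variants of the same test instance:
# prompts = [build_xorqa_in_xx_prompt(train_pool, test_ex, k) for k in range(4)]
```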
3.4 Transfer from high-resource languages

For languages with no supervised data, one option to improve performance is to utilize existing supervised data in another language as in-context exemplars. In this section, we study whether the language in which the model is prompted plays a role in performance. In Table 5 we show performance when the model is prompted in English vs. Hindi, a representative higher-resourced Indic language. For comparison, we also show performance when the in-context exemplar is in the same language as the test instance. We find that Hindi in-context exemplars are much more useful for all models than their English counterparts. Surprisingly, for smaller models, performance with Hindi exemplars comes extremely close to prompting in the test language, and is sometimes even better.

Model (1-shot lang) | CROSSSUM-IN (Higher, Med, Low) | XQUAD-IN (Higher, Med) | XORQA-IN-XX (Higher, Med, Low) | XORQA-IN-EN (Higher, Med, Low)
PaLM-2-XXS (En) | 0.3, 0.1, 0.3 | 38.5, 31.9 | 14.0, 5.4, 7.3 | 40.3, 35.0, 30.8
PaLM-2-XXS (Hi) | 1.3, 2.1, 3.7 | 39.8, 33.3 | 17.6, 8.5, 10.5 | 45.5, 39.4, 31.9
PaLM-2-XXS (Lang) | 7.7, 7.6, 6.7 | 37.2, 26.8 | 17.7, 8.8, 12.8 | 43.6, 38.3, 31.5
PaLM-2-XS (En) | 0.3, 0.2, 0.5 | 64.3, 62.2 | 30.6, 23.9, 20.8 | 35.9, 32.1, 27.2
PaLM-2-XS (Hi) | 3.5, 5.5, 9.9 | 65.4, 63.5 | 33.2, 25.8, 22.7 | 49.3, 46.8, 40.7
PaLM-2-XS (Lang) | 18.4, 16.4, 13.0 | 65.1, 53.3 | 35.8, 27.6, 26.1 | 53.3, 51.5, 42.2
PaLM-2-S (En) | 0.4, 0.2, 0.5 | 67.4, 66.8 | 27.5, 19.9, 19.9 | 48.6, 47.1, 40.8
PaLM-2-S (Hi) | 4.4, 6.9, 13.2 | 68.5, 67.5 | 34.2, 27.0, 24.9 | 58.3, 57.0, 49.0
PaLM-2-S (Lang) | 22.4, 19.8, 15.1 | 69.9, 57.3 | 36.6, 30.3, 28.6 | 60.1, 61.4, 53.6
PaLM-2-L (En) | 0.4, 0.2, 0.6 | 71.7, 69.8 | 37.7, 33.2, 29.7 | 28.7, 27.5, 26.2
PaLM-2-L (Hi) | 4.7, 7.0, 13.8 | 72.6, 71.0 | 39.7, 34.6, 31.2 | 45.5, 44.8, 41.5
PaLM-2-L (Lang) | 25.2, 23.1, 17.5 | 72.5, 59.8 | 41.9, 36.7, 34.6 | 57.3, 57.9, 53.9

Table 5: Effect of in-context exemplar language (§3.4): Performance comparison when the one-shot exemplar is provided in English (En) or Hindi (Hi) as opposed to the language of the test instance (Lang). In-context prompting in the test language (Lang) provides the best performance, followed by Hindi (Hi) and then English (En). This follows the same order as the relatedness between the test and prompting languages, highlighting the benefit of prompting in a language more related to the test language (e.g., Hindi compared to English in this case).

3.5 Fine-tuning LLMs on INDICGENBENCH and Comparison with In-Context Learning

As outlined in Section 2, we also release a small, high-quality training set for all tasks in INDICGENBENCH (except FLORES-IN, which only has dev and test sets). This training data can be used to adapt LLMs to downstream tasks in Indic languages via fine-tuning and other training techniques. Table 6 shows our results of fine-tuning mT5 and PaLM-2 models and their comparison with in-context learning using PaLM-2. We fine-tune each model on training data from all available languages including English, use the development set for early stopping, and report numbers on the test set.
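A minimal sketch of this fine-tuning setup for the open mT5 models is shown below, using the Hugging Face transformers Seq2SeqTrainer. The hyperparameters, the column names, and the (commented-out) dataset wiring are illustrative assumptions rather than the exact configuration behind the reported numbers; the PaLM-2 fine-tuning runs use internal infrastructure.

```python
from transformers import (
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    MT5ForConditionalGeneration,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "google/mt5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)


def preprocess(batch):
    # 'source' / 'target' are illustrative column names, e.g. an English article
    # paired with its summary in the target Indic language for CROSSSUM-IN.
    features = tokenizer(batch["source"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["target"], max_length=128, truncation=True)
    features["labels"] = labels["input_ids"]
    return features


args = Seq2SeqTrainingArguments(
    output_dir="mt5-indicgenbench",
    per_device_train_batch_size=8,
    learning_rate=1e-4,
    num_train_epochs=10,
    evaluation_strategy="epoch",   # evaluate on the pooled dev set every epoch
    save_strategy="epoch",
    load_best_model_at_end=True,   # dev-based model selection (early-stopping style)
    predict_with_generate=True,
)

# train_ds / dev_ds are assumed to be datasets.Dataset objects built by pooling the
# released training and development splits over all languages, including English:
# train_ds = train_ds.map(preprocess, batched=True)
# dev_ds = dev_ds.map(preprocess, batched=True)
# trainer = Seq2SeqTrainer(
#     model=model,
#     args=args,
#     train_dataset=train_ds,
#     eval_dataset=dev_ds,
#     data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
#     tokenizer=tokenizer,
# )
# trainer.train()
```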
Model | CROSSSUM-IN (Higher, Med, Low) | XQUAD-IN (Higher, Med) | XORQA-IN-XX (Higher, Med, Low) | XORQA-IN-EN (Higher, Med, Low)
mT5 models (fine-tuned):
mT5-B | 19.5, 18.9, 15.1 | 46.2, 30.9 | 3.8, 4.0, 5.5 | 31.7, 31.4, 30.8
mT5-L | 20.5, 19.9, 15.5 | 54.3, 38.6 | 11.8, 11.0, 10.4 | 56.8, 53.7, 45.4
mT5-XL | 22.7, 21.1, 15.3 | 57.4, 40.5 | 20.7, 13.5, 15.6 | 58.2, 56.2, 46.5
mT5-XXL | 25.9, 24.2, 10.4 | 62.0, 44.4 | 28.8, 23.6, 21.9 | 70.3, 68.9, 59.1
PaLM-2 models (fine-tuned):
PaLM-2-XXS | 22.5, 19.7, 16.5 | 41.2, 18.1 | 18.1, 10.9, 12.9 | 60.2, 56.9, 50.9
PaLM-2-XS | 28.5, 25.6, 18.8 | 40.2, 16.9 | 30.4, 23.6, 19.6 | 69.1, 66.6, 56.6
PaLM-2 models (few-shot prompted):
PaLM-2-XXS (FS) | 7.7, 7.6, 6.7 | 37.2, 26.8 | 22.7, 12.3, 16.4 | 51.6, 47.1, 38.4
PaLM-2-XS (FS) | 18.4, 16.4, 13.0 | 65.1, 53.3 | 39.2, 32.0, 29.5 | 67.0, 65.3, 56.5

Table 6: (Top) Fine-tuning performance of mT5 and PaLM-2 models (§3.5). Bold represents the best numbers among fine-tuned models. PaLM-2 outperforms mT5 on the longer-form generation task (CROSSSUM-IN), whereas mT5 models do well on the short answer-span QA tasks. (Bottom) Comparison of in-context learning vs. fine-tuning for PaLM-2 models. In green, we highlight the best PaLM-2 number (among fine-tuned and few-shot). For the CROSSSUM-IN task, which requires longer-form generation, fine-tuning outperforms few-shot prompting.

For question-answering tasks that require generating short spans as answers, we find that older-generation mT5 models significantly outperform smaller PaLM-2 models in most cases.7 On CROSSSUM-IN, which requires generating a longer summary, we find that PaLM-2 models are more effective. For the question-answering tasks, as the model size increases from PaLM-2-XXS to PaLM-2-XS, we see that in-context learning yields equal or better performance compared to fine-tuning the model. For example, in XORQA-IN-XX, as the model size increases from XXS to XS, the gap between few-shot prompting and fine-tuning increases significantly, from 2-4% (XXS) to 9-10% (XS). In the case of XQUAD-IN, we see that for the larger PaLM-2-XS model it is much better to perform in-context learning than fine-tuning, for both medium- and high-resource Indic languages. For XORQA-IN-EN, in-context learning reaches the fine-tuning performance as the model size increases to PaLM-2-XS. For CROSSSUM-IN, the gap between fine-tuning and in-context learning shrinks as model size increases, which suggests that for even larger model sizes it might be better to learn in-context.

7 Since the parameter count for PaLM-2 models is not public, we cannot attribute this performance difference to model size.

3.6 Analyzing Tokenizers across Indic Languages

Figure 2: Tokenizer fertility for different languages using OpenAI's Byte Pair Encoding. We note that mid-to-low resource languages suffer from high token fertility. (Section 3.6)

Figure 3: Percentage of in-context XQUAD-IN exemplars that fit in a 1920-token context window. The high token fertility of mid-to-low resource languages (Figure 2) makes it impossible to perform few-shot prompting in these languages. (Section 3.6)

In Figure 2, we compare the token fertility (the average number of sub-words that a word is broken into by the tokenizer) across all Indic languages in INDICGENBENCH.8 We find that token fertility varies significantly across languages, from 4.1 for Pashto to 19.9 for Tibetan. A high token fertility is undesirable and can disproportionately affect a particular language's performance. For languages where text is broken into a larger number of tokens, fewer in-context examples can be fed to the LLM during inference, which can negatively impact performance (see Table 4).

8 We use OpenAI's BPE tokenizer (platform.openai.com/tokenizer). The PaLM-2 tokenizer is not publicly available.
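Token fertility as used here can be computed in a few lines; the sketch below uses the open tiktoken package with the cl100k_base BPE vocabulary as a stand-in for the web tokenizer referenced in the footnote (an assumption, since that interface does not expose its vocabulary name).

```python
import tiktoken  # pip install tiktoken

# cl100k_base is the BPE vocabulary used by recent OpenAI chat models; we treat it
# here as a proxy for the tokenizer behind platform.openai.com/tokenizer.
enc = tiktoken.get_encoding("cl100k_base")


def token_fertility(sentences: list[str]) -> float:
    """Average number of BPE tokens per whitespace-separated word."""
    num_words = sum(len(s.split()) for s in sentences)
    num_tokens = sum(len(enc.encode(s)) for s in sentences)
    return num_tokens / max(num_words, 1)


# Scripts poorly covered by the BPE vocabulary yield many more tokens per word,
# so fewer in-context exemplars fit in a fixed context window.
print(token_fertility(["The match was played yesterday in Mumbai."]))
```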
In Figure 3, we show how the percentage of data that fits in a particular context length changes with the number of in-context examples for various languages. For example, we see in Figure 3 that for medium-resource languages with high token fertility, such as Oriya and Punjabi, we can incorporate far fewer in-context examples than for Indic languages with lower token fertility, such as Hindi and Marathi.

3.7 Qualitative Analysis

We manually analyze predictions from the best-performing model, PaLM-2-L, with the aim of understanding the shortcomings of current LLMs and highlighting areas of improvement for future research. We randomly select 20 examples each from the CROSSSUM-IN and FLORES-IN tasks for the following languages, which are reviewed by native speakers: Awadhi, Haryanvi, Chhattisgarhi, Konkani, and Assamese. We found the following patterns of errors:

Generation in a related language The languages Awadhi, Haryanvi, and Chhattisgarhi are related to the higher-resource language Hindi and are written in the same script, Devanagari. We find that the model generates mixed-language output with words mixed in from Hindi, and also outputs incorrectly inflected forms of the main verbs in the output. We show a couple of examples of this phenomenon in Figure 5a in the appendix.

Hallucination and Missing Information In the cross-lingual summarization task CROSSSUM-IN, we find that the model often outputs extra information that is not present in the source article. In translation, we have observed examples where some crucial information from the source sentence is missing from the generated output. Also, in some cases, the model fails to understand polysemous English words and generates a translation for the incorrect sense. We show examples of these phenomena in Figures 4a, 4b, and 5b in the appendix.

4 Related Work

In the last few years, many multilingual LLMs have been developed, starting from mBART (Liu et al., 2020), trained on 25 languages, up to LLMs that are pre-trained on hundreds of languages, such as mT5 (Xue et al., 2021), PaLM-2 (Anil et al., 2023), GPT-4 (Achiam et al., 2023), Gemini (Google, 2023), and others. These LLMs are typically evaluated on individual multilingual tasks for translation: WMT (Farhad et al., 2021), FLORES (NLLB-Team et al., 2022); question answering: XQuAD (Artetxe et al., 2020), TyDiQA (Clark et al., 2020), XorQA (Asai et al., 2021); summarization: XLSUM (Hasan et al., 2021a), CrossSum (Bhattacharjee et al., 2023); and reasoning: MGSM (Shi et al., 2022), XCOPA (Ponti et al., 2020), to name a few; or on multilingual benchmarks such as XTREME (Hu et al., 2020) and XTREME-UP (Ruder et al., 2023). However, most of these evaluation resources contain only a handful of languages or do not contain data for low-resource languages, especially Indic ones. Moreover, cross-lingual evaluation data is even sparser. This work is an effort to bridge these gaps by releasing INDICGENBENCH, a suite of datasets covering diverse cross-lingual and multilingual generation tasks in Indic languages.

Most work on creating evaluation data for Indic languages has focused on natural language understanding (NLU) tasks. Kakwani et al. (2020) and Doddapaneni et al. (2023) have released NLU test sets in Indic languages for a wide variety of tasks such as QA and NLI.
Naamapadam (Mhaske et al., 2023) is a named entity recognition dataset specifically for Indic languages, MASSIVE (FitzGerald et al., 2022) is a slot-filling and intent classification dataset available in 7 Indic languages, IndicGLUE (Kakwani et al., 2020) is an NLU benchmark for 11 Indic languages, whereas GLUECoS (Khanuja et al., 2020) is a Hindi-English code-mixed benchmark containing various NLU tasks. The Belebele benchmark (Bandarkar et al., 2023) is a multiple-choice machine reading comprehension dataset for 122 languages, of which 17 are Indic. INDICGENBENCH, on the other hand, is a natural language generation (NLG) benchmark.

Recently, there has been work on creating evaluation benchmarks for natural language generation (NLG) in Indic languages. IndicNLG Suite (Kumar et al., 2022), consisting of 5 NLG tasks in 11 Indic languages, is a leap in this direction. The datasets in this suite are automatically created, either using data from the web (e.g., Wikipedia) or using translation systems. There are also a few works that create evaluation data for individual tasks in Indic languages. For example, IndicTrans2 (Gala et al., 2023) creates an n-way parallel dataset for machine translation in 22 scheduled Indian languages, Mukhyansh (Madasu et al., 2023) and PMIndiaSum (Urlana et al., 2023) are headline generation datasets for 8 and 14 Indic languages respectively, and TeSum (Urlana et al., 2022) is an abstractive summarization dataset in the Telugu language. Ramesh et al. (2022) introduced Samanantar, a large translation dataset covering 11 Indic languages.

Our work complements IndicNLG Suite and the other datasets in multiple ways. INDICGENBENCH is manually annotated, ensuring high-quality, noise-free text that is not typically found on the web. Our benchmark contains evaluation data for a much larger set of languages spanning low, medium, and high resource levels. Our datasets are multi-way parallel, enabling better comparison among different languages. Lastly, we focus on a complementary and challenging set of tasks, including cross-lingual summarization, cross-lingual and multilingual question answering, and translation.