Update README.md

README.md
# NoLiMa: Long-Context Evaluation Beyond Literal Matching
This repository contains the data associated with our **ICML 2025** paper, "[NoLiMa: Long-Context Evaluation Beyond Literal Matching](https://arxiv.org/abs/2502.05167)".
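Below is a minimal, hedged sketch of one way to download the raw data files with `huggingface_hub`; the repository id in it is an assumption, so substitute the id shown on this dataset card.

```python
# Sketch only: download the dataset files to a local directory.
# NOTE: the repository id below is an assumption, not taken from this card; adjust as needed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="amodaresi/NoLiMa", repo_type="dataset")
print(local_dir)  # local path now containing the downloaded dataset files
```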
## Abstract
Recent large language models (LLMs) support long contexts ranging from 128K to 1M tokens. A popular method for evaluating these capabilities is the needle-in-a-haystack (NIAH) test, which involves retrieving a "needle" (relevant information) from a "haystack" (long irrelevant context). Extensions of this approach include increasing distractors, fact chaining, and in-context reasoning. However, in these benchmarks, models can exploit existing literal matches between the needle and haystack to simplify the task. To address this, we introduce **NoLiMa**, a benchmark extending NIAH with a carefully designed needle set, where questions and needles have **minimal lexical overlap, requiring models to infer latent associations to locate the needle within the haystack**. We evaluate 12 popular LLMs that claim to support contexts of at least 128K tokens. While they perform well in short contexts (<1K), performance degrades significantly as context length increases. At 32K, for instance, 10 models drop below 50% of their strong short-length baselines. Even GPT-4o, one of the top-performing exceptions, experiences a reduction from an almost-perfect baseline of 99.3% to 69.7%. Our analysis suggests these declines stem from the increased difficulty the attention mechanism faces in longer contexts when literal matches are absent, making it harder to retrieve relevant information.

| Models | Claimed Length | Effective Length | Base Score (85% thr.) | 1K | 2K | 4K | 8K | 16K | 32K |
|--------|:--------------:|:----------------:|:---------------------:|:---:|:---:|:---:|:---:|:---:|:---:|
| Gemini 1.5 Pro | 2M | 2K | 92.6 (78.7) | <ins>86.4</ins> | <ins>82.7</ins> | 75.4 | 63.9 | 55.5 | 48.2 |
| Jamba 1.5 Mini | 256K | <1K | 92.4 (78.6) | 76.3 | 74.1 | 70.8 | 62.2 | 52.7 | *43.6* |
| Command R+ | 128K | <1K | 90.9 (77.3) | 77.0 | 73.5 | 66.3 | *39.5* | *21.3* | *7.4* |
| Llama 4 Maverick 🆕 | 1M | 2K | 90.1 (76.6) | <ins>81.6</ins> | <ins>78.3</ins> | 68.8 | ⏳ | ⏳ | ⏳ |
| Gemini 2.0 Flash 🆕 | 1M | 4K | 89.4 (76.0) | <ins>87.7</ins> | <ins>87.5</ins> | <ins>77.9</ins> | 64.7 | 48.2 | *41.0* |
| Gemma 3 27B 🆕 | 128K | <1K | 88.6 (75.3) | 73.3 | 65.6 | 48.1 | *32.7* | *20.2* | *9.5* |
| Mistral Large 2 | 128K | 2K | 87.9 (74.7) | <ins>86.1</ins> | <ins>85.5</ins> | 73.3 | 51.5 | *32.6* | *18.7* |
| Claude 3.5 Sonnet | 200K | 4K | 87.6 (74.4) | <ins>85.4</ins> | <ins>84.0</ins> | <ins>77.6</ins> | 61.7 | 45.7 | *29.8* |
| Gemma 3 12B 🆕 | 128K | 1K | 87.4 (74.3) | <ins>74.7</ins> | 61.8 | *39.9* | *27.4* | *16.8* | *7.3* |
| Gemini 1.5 Flash | 1M | <1K | 84.7 (72.0) | 68.6 | 61.6 | 51.0 | 44.4 | *35.5* | *28.6* |
| GPT-4o mini | 128K | <1K | 84.9 (72.2) | 67.7 | 58.2 | 44.1 | *32.6* | *20.6* | *13.7* |
| Llama 4 Scout 🆕 | 10M | 1K | 81.7 (69.4) | <ins>72.3</ins> | 61.8 | 50.8 | *35.5* | *26.9* | *21.6* |
| Llama 3.1 8B | 128K | 1K | 76.7 (65.2) | <ins>65.7</ins> | 54.4 | 44.1 | *31.9* | *22.6* | *14.2* |
| Gemma 3 4B 🆕 | 128K | <1K | 73.6 (62.6) | 50.3 | *35.3* | *16.4* | *7.5* | *2.3* | *0.9* |

This table presents the performance results of selected models on NoLiMa tests. The **base score** represents a model’s accuracy on the task at short contexts (250, 500, and 1K) and serves as a controlled reference to measure performance degradation at longer contexts.
The **effective length** is defined as the longest context where a model maintains at least 85% of its base score. Scores above this threshold are <ins>underlined</ins>, while scores dropping below 50% of the base score are *italicized*.
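As a hedged illustration of these two conventions, the short sketch below recomputes a row's effective length and flags its below-50% scores; the helper and the dictionary layout are ours, not part of the NoLiMa evaluation code.

```python
# Illustration only: recompute the "effective length" and the sub-50% markers for one table row.

def effective_length(base_score: float, scores: dict[int, float]) -> int | None:
    """Longest context length whose score stays at or above 85% of the base score."""
    threshold = 0.85 * base_score  # rounded, this is the value shown in parentheses next to each base score
    passing = [length for length, score in scores.items() if score >= threshold]
    return max(passing) if passing else None  # None corresponds to an effective length below 1K

# Gemini 1.5 Pro row from the table above: base score 92.6 (threshold 78.7).
row = {1_000: 86.4, 2_000: 82.7, 4_000: 75.4, 8_000: 63.9, 16_000: 55.5, 32_000: 48.2}

print(effective_length(92.6, row))                    # 2000 -> reported as "2K"
print([n for n, s in row.items() if s < 0.5 * 92.6])  # lengths that would be *italicized*: [] for this row
```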
#### ✨ Updates:

- [2025-04-10]: Added evaluation results on Gemma 3 models (4B/12B/27B), Gemini 2.0 Flash, and Llama 4 Scout. (Llama 4 Maverick evaluation in progress... ⏳)

### NoLiMa-Hard Results
| Models | Base Score | 4K | 8K | 16K | 32K |
|-----------------------|:---------:|:---:|:---:|:---:|:---:|

The evaluation code and needle set data is licensed under the Adobe Research License.
## Cite
If you use the **NoLiMa** dataset, filtering pipeline, or code, please cite the paper:
```bibtex
@inproceedings{modarressi2025nolima,
  title={NoLiMa: Long-Context Evaluation Beyond Literal Matching},
  author={Modarressi, Ali and Deilamsalehy, Hanieh and Dernoncourt, Franck and Bui, Trung and Rossi, Ryan A. and Yoon, Seunghyun and Schütze, Hinrich},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025},
  url={https://arxiv.org/abs/2502.05167}
}
```