# mmBERT Pre-training Data P3
Phase 1 of 3: a diverse multilingual pre-training data mixture (2.3T training tokens) used to train the mmBERT model suite.
**NOTE:** due to HF repository size limits, this is only part 3 (P3) of the pre-training data. You need to download all three parts and combine them into one folder.
This dataset contains the pre-training phase data used to train all mmBERT encoder models. The data is provided in MDS format, ready for use with Composer and the ModernBERT training repository.
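Once the three parts are downloaded and merged, the shards can be read with the mosaicml-streaming library that Composer builds on. A minimal sketch, assuming the merged shards live in `./mmbert-pretrain` (the directory path is an assumption for illustration, not part of this card):

```python
# Minimal sketch: read the merged MDS shards with mosaicml-streaming.
# Assumes all three parts have been downloaded and combined into
# ./mmbert-pretrain (this path is an assumption, not part of this card).
# Install with: pip install mosaicml-streaming
from streaming import StreamingDataset

# StreamingDataset reads MDS shards from a local directory (or remote URI)
# and yields one sample dict per index.
dataset = StreamingDataset(local="./mmbert-pretrain", shuffle=False)

print(len(dataset))       # number of samples across all shards
print(dataset[0].keys())  # inspect the column names stored in the shards
```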
## Data Composition
| Data Source | Tokens (B) | Percentage | Description |
|---|---|---|---|
| FineWeb2 | 1,196.6 | 60.2% | High-quality multilingual web crawl data |
| DCLM | 600.0 | 30.2% | High-quality English web crawl data |
| Starcoder | 100.6 | 5.1% | Code repositories and files |
| arXiv | 27.8 | 1.4% | Academic preprints |
| StackExchange | 18.6 | 0.9% | Q&A forums |
| Tulu Flan | 15.3 | 0.8% | Instruction-following data |
| Dolmino Math | 11.2 | 0.6% | Mathematical content |
| PeS2o | 8.4 | 0.4% | Scientific papers |
| Wikipedia (MegaWika) | 4.7 | 0.2% | Encyclopedia articles |
| Books | 4.3 | 0.2% | Literature and reference books |
| StackExchange (Dolmino) | 1.4 | 0.1% | Curated Q&A content |
| **Total** | **1,989.0** | **100.0%** | Diverse mixture for foundation training |
## Language Coverage
This phase covers 60 languages plus code, with an inverse temperature sampling schedule starting at τ=0.7 (see the sampling sketch after this list). Languages include:
- **High-resource**: English (34.5%), Russian (5.8%), German (4.4%), Spanish (4.5%), French (4.0%), Chinese (5.2%)
- **Mid-resource**: Italian, Portuguese, Japanese, Dutch, Polish, and 45 others
- **Scripts**: Latin, Cyrillic, Arabic, Chinese, Japanese, Thai, and many more
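For context, inverse temperature sampling with τ < 1 flattens the raw language distribution and upweights low-resource languages: each language's sampling probability is q_i ∝ p_i^τ. A minimal sketch of that standard formula, with illustrative proportions (not the actual mmBERT per-language counts):

```python
# Minimal sketch of inverse temperature sampling (q_i proportional to p_i^tau).
# The proportions below are illustrative, not the actual mmBERT counts.
p = {"en": 0.345, "ru": 0.058, "zh": 0.052, "es": 0.045, "de": 0.044}

tau = 0.7  # tau < 1 flattens the distribution, upweighting rarer languages
weights = {lang: prob**tau for lang, prob in p.items()}
total = sum(weights.values())
q = {lang: w / total for lang, w in weights.items()}

for lang in p:
    print(f"{lang}: raw {p[lang]:.3f} -> sampled {q[lang]:.3f}")
```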
## Usage
For pre-training, see the ModernBERT repo: https://github.com/AnswerDotAI/ModernBERT
### Direct Access
Use the script at this link to load any section of the dataset on the fly. Note that this will fail if you request too many samples, due to HF rate limiting. To download the full dataset, use HF Hub's `snapshot_download`, as sketched below.
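A minimal sketch using `huggingface_hub.snapshot_download`; the `repo_id` values are assumptions based on this card's naming and should be checked against the actual Hub repositories:

```python
# Minimal sketch: download all three parts with huggingface_hub, then
# combine them into one folder as the note above requires.
# The repo_ids are assumptions based on this card's naming; verify on the Hub.
from huggingface_hub import snapshot_download

for part in ("p1", "p2", "p3"):
    path = snapshot_download(
        repo_id=f"jhu-clsp/mmbert-pretrain-{part}",  # assumed naming
        repo_type="dataset",
        local_dir=f"./mmbert-pretrain/{part}",
    )
    print(f"Downloaded {part} to {path}")

# Merge the shard folders into ./mmbert-pretrain, then process your data...
```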
## Related Resources
- **Models**: [mmBERT Model Suite](https://huggingface.co/collections/jhu-clsp/mmbert-a-modern-multilingual-encoder-68b725831d7c6e3acc435ed4)
- **Phase 2**: [Mid-training Data](https://huggingface.co/datasets/jhu-clsp/mmbert-midtraining) (600B tokens)
- **Phase 3**: [Decay Phase Data](https://huggingface.co/datasets/jhu-clsp/mmbert-decay) (100B tokens)
- **Checkpoints**: [Training Checkpoints](https://huggingface.co/datasets/jhu-clsp/mmbert-checkpoints)
- **Paper**: [arXiv](https://arxiv.org/abs/2509.06888)
- **Code**: [GitHub Repository](https://github.com/jhu-clsp/mmBERT)
## Citation
```bibtex
@misc{marone2025mmbertmodernmultilingualencoder,
title={mmBERT: A Modern Multilingual Encoder with Annealed Language Learning},
author={Marc Marone and Orion Weller and William Fleshman and Eugene Yang and Dawn Lawrie and Benjamin Van Durme},
year={2025},
eprint={2509.06888},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2509.06888},
}
```