Tasks: Text Generation
Sub-tasks: language-modeling
Formats: parquet
Languages: Danish
Size: 10M - 100M
Kenneth Enevoldsen committed: updated paper

Files changed:
- paper/figure_baseline.png (+3, -0)
- paper/paper.md (+34, -0)
paper/figure_baseline.png
ADDED (binary file, stored via Git LFS)
paper/paper.md
CHANGED

@@ -193,3 +193,37 @@ environmental:
 - a common codebase leads to less duplication of datasets and reduces the storage required
 - continual CI running on large datasets could be a concern. Currently our tests use a total of XXX CO2-eq (estimated using codecarbon). However, we have already seen people train [@fineweb] and evaluate LLMs to approximate dataset quality; such workflows could quickly increase CO2 consumption.
+
+## Additional content
+
+Comparison table:
+
+|                        | Size | Sufficient Documentation | Data availability | Legal Status    | Quality        |
+| ---------------------- | ---- | ------------------------ | ----------------- | --------------- | -------------- |
+| Danish Dynaword (Ours) | 3.5B | Replicable^              | Open Access       | Openly Licensed | Mixed (high)   |
+| Danish Gigaword*       |      | Documentary              | Open Access       | Openly Licensed | Mixed (high)   |
+| Common Corpus (dan)    |      | Replicable               | Open Access       | Openly Licensed | OCR (low)      |
+| Fineweb (dan)          |      | Replicable               | Open Access       |                 | Mixed (medium) |
+
+<!--
+Could we create an interesting figure of this, Marton? See figure 1.
+Better notion of quality? Potentially a bit more objective?
+-->
+
+*The Danish Gigaword subsection included in Danish Dynaword, i.e. the subsection that is permissibly licensed.
+^Some datasets are derived from Danish Gigaword; some of these subsections are not (currently) replicable.
+
+This follows the scheme from figure 1 of https://arxiv.org/abs/2501.08365.
+
+<!--
+Add a token-count comparison:
+Common Corpus (DA) -
+Gigaword (DA) - Open
+M-Fineweb (DA) -
+-->
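The hunk above estimates CI emissions with codecarbon. As a minimal sketch of how such a per-run estimate can be produced (the `run_tests` workload and the project name are hypothetical stand-ins, not the repository's actual test suite):

```python
# Minimal sketch: estimating the CO2-eq footprint of a CI job with codecarbon.
# `run_tests` is a hypothetical placeholder for the repository's real dataset checks.
from codecarbon import EmissionsTracker

def run_tests() -> None:
    # Placeholder workload; a real CI job would run the dataset tests here.
    sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="dynaword-ci")  # writes emissions.csv by default
tracker.start()
try:
    run_tests()
finally:
    emissions_kg = tracker.stop()  # returns the estimated kg CO2-eq for the tracked block

print(f"Estimated CI footprint: {emissions_kg:.6f} kg CO2-eq")
```

Summing the `emissions_kg` values across CI runs would give the kind of total the paper's XXX placeholder refers to.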
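The closing TODO asks for a token-count comparison across corpora. A hedged sketch of one way to compute such counts: the Hub dataset id, the `train` split, the `text` column, and the `gpt2` tokenizer below are all assumptions, and counts are tokenizer-dependent, so the same tokenizer should be applied to every corpus in the table.

```python
# Hedged sketch for filling in the token-count comparison.
# Dataset id, split, column name, and tokenizer are assumptions, not the paper's setup.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Streaming avoids downloading the full corpus up front; iterating billions of
# tokens this way is slow but memory-safe.
ds = load_dataset("danish-foundation-models/danish-dynaword", split="train", streaming=True)

total_tokens = 0
for row in ds:
    total_tokens += len(tokenizer(row["text"])["input_ids"])

print(f"Total tokens: {total_tokens:,}")
```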