---
license: cc-by-sa-4.0
task_categories:
- translation
- text-generation
language:
- es
- en
tags:
- machine-translation
- parallel-corpus
- spanish-english
- domain-specific
- legal-administrative
- biomedical
- heritage
size_categories:
- 10M<n<100M
---

# Dataset Card for ALIA Parallel Translation Corpus

This corpus comprises **35,753,765 domain-specific parallel segments** (Spanish-English) designed for training and evaluating machine translation models in specialized domains. The corpus includes three main domains: **Legal-Administrative**, **Biomedical**, and **Heritage**, carefully curated to support document-level and multi-paragraph translation tasks beyond traditional sentence-level approaches.

## Table of Contents
- [Dataset Card for ALIA Parallel Translation Corpus](#dataset-card-for-alia-parallel-translation-corpus)
  - [Table of Contents](#table-of-contents)
  - [Dataset Details](#dataset-details)
    - [Dataset Description](#dataset-description)
    - [Dataset Sources](#dataset-sources)
    - [Uses](#uses)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Statistics](#data-statistics)
    - [Example Usage](#example-usage)
  - [Dataset Creation](#dataset-creation)
    - [Source Data](#source-data)
    - [Data Collection and Processing](#data-collection-and-processing)
    - [Annotations](#annotations)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)

## Dataset Details

### Dataset Description

The **ALIA Parallel Translation Corpus** is an extensive collection of Spanish-English parallel texts spanning three specialized domains: Legal-Administrative, Biomedical, and Heritage. With **35,753,765 parallel segments** totaling approximately **69.7 GB**, this corpus was developed as part of the ALIA project's machine translation activity to improve Spanish-English translation quality through continual pre-training and domain adaptation of language models.

The corpus prioritizes document-level and multi-paragraph translation contexts, moving beyond traditional sentence-level approaches. Each segment is identified by domain through a systematic ID prefix system (dashes below are shown for readability only; stored IDs are contiguous digit strings such as `000327881267`):
- **00-XX-XXXXXX**: Biomedical domain (IBECS: 01, MedlinePlus: 02, PubMed: 03)
- **01-XX-XXXXXX**: Heritage domain
- **02-XX-XXXXXX**: Legal-Administrative domain (EURLEX: 01, EUROPAT: 02, UNPC: 03)

- **Curated by:** SINAI Research Group (Intelligent Systems for Information Access) - Universidad de Jaén, through the Center for Advanced Studies in Information and Communication Technologies (CEATIC).
- **Funded by:** Ministerio para la Transformación Digital y de la Función Pública, funded by the EU (NextGenerationEU), within the framework of the project Desarrollo de Modelos ALIA.
- **Language(s) (NLP):** es (Spanish), en (English)
- **License:** [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)

### Dataset Sources

- **Repository:** [ALIA Project - SINAI](https://github.com/sinai-uja/ALIA-UJA)

### Uses

The corpus is intended primarily for training and evaluating Spanish-English machine translation models in specialized domains, with applications in:

- Training and fine-tuning large language models (LLMs) for domain-specific machine translation
- Continual pre-training for domain adaptation of translation models
- Evaluating translation quality using multiple metrics (BLEU, chrF++, COMET, COMET-Kiwi, TER, BLEURT, MetricX, MetricX-QE); a scoring sketch follows this list
- Document-level and multi-paragraph translation research
- Comparative analysis of translation performance across specialized domains
- Benchmarking machine translation systems in legal, biomedical, and heritage contexts
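
For the string-based metrics above, here is a minimal scoring sketch with `sacrebleu` (the sentence pair is invented; neural metrics such as COMET, COMET-Kiwi, BLEURT, and MetricX require their own model-based packages):

```python
# Minimal BLEU / chrF++ scoring sketch; the sentences are invented examples
# and the metric settings are sacrebleu defaults, not the project's setup.
from sacrebleu.metrics import BLEU, CHRF

hypotheses = ["The patient was discharged after the treatment."]
references = [["The patient was discharged following treatment."]]  # one reference stream

bleu = BLEU()
chrf = CHRF(word_order=2)  # word_order=2 turns chrF into chrF++

print(bleu.corpus_score(hypotheses, references))
print(chrf.corpus_score(hypotheses, references))
```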

## Dataset Structure

### Data Instances

Each instance in the corpus has the following structure:

```json
{
    "id": "000327881267",
    "text_es": "Análisis de costo-utilidad de la vacunación contra el virus del papiloma humano y el cribado cervical del paciente con cáncer de cuello uterino en Indonesia...",
    "text_en": "Although cervical cancer is a preventable disease, the clinical and economic burdens of cervical cancer are still substantial issues in Indonesia..."
}
```

### Data Fields

- **id** (string): Unique identifier following the domain prefix system (a parser sketch follows the ID table below):
  - First 2 digits: Domain code (00=Biomedical, 01=Heritage, 02=Legal-Administrative)
  - Next 2 digits: Source code within domain (see ID System below)
  - Remaining digits: Sequential segment identifier
- **text_es** (string): Source text in Spanish
- **text_en** (string): Target text in English

**ID System by Domain:**

| Domain | Prefix | Source | Source Code |
|--------|--------|--------|-------------|
| Biomedical | 00 | IBECS | 01 |
| Biomedical | 00 | MedlinePlus | 02 |
| Biomedical | 00 | PubMed | 03 |
| Heritage | 01 | PCI | - |
| Legal-Administrative | 02 | EURLEX | 01 |
| Legal-Administrative | 02 | EUROPAT | 02 |
| Legal-Administrative | 02 | UNPC | 03 |
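
For illustration, a minimal parser for this layout, applied to the example ID from the Data Instances section (the helper, its lookup tables, and the PCI fallback for heritage IDs are assumptions based on the table above, not project code):

```python
# Hypothetical helper mirroring the documented ID layout.
DOMAINS = {"00": "Biomedical", "01": "Heritage", "02": "Legal-Administrative"}
SOURCES = {
    ("00", "01"): "IBECS", ("00", "02"): "MedlinePlus", ("00", "03"): "PubMed",
    ("02", "01"): "EURLEX", ("02", "02"): "EUROPAT", ("02", "03"): "UNPC",
}

def parse_id(segment_id: str) -> dict:
    """Split a contiguous segment ID into domain, source, and sequence parts."""
    domain_code, source_code = segment_id[:2], segment_id[2:4]
    if domain_code == "01":
        source = "PCI"  # heritage IDs carry no documented source code
    else:
        source = SOURCES.get((domain_code, source_code), "unknown")
    return {
        "domain": DOMAINS.get(domain_code, "unknown"),
        "source": source,
        "sequence": segment_id[4:],
    }

print(parse_id("000327881267"))
# {'domain': 'Biomedical', 'source': 'PubMed', 'sequence': '27881267'}
```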

### Data Statistics

The complete dataset contains **35,753,765 parallel segments**:

| Metric | Value |
|--------|-------|
| Total Instances | 35,753,765 |
| Size (Memory) | 69,756.33 MB |
| Columns | 3 |

**Domain Distribution** (by ID prefix):

| Domain | ID Prefix | Primary Sources |
|--------|-----------|-----------------|
| Biomedical | 00-XX-XXXXXX | IBECS, MedlinePlus, PubMed |
| Heritage | 01-XX-XXXXXX | PCI |
| Legal-Administrative | 02-XX-XXXXXX | EURLEX, EUROPAT, UNPC |

*Note: Exact domain distribution to be confirmed through ID prefix analysis.*

**Segment Length Characteristics:**

| Metric | Spanish (text_es) | English (text_en) |
|--------|-------------------|-------------------|
| Shortest segment | 7 characters | 7 characters |
| Average segment length | ~800 characters | ~900 characters |
| Longest segments | >3,000 characters | >3,000 characters |
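
These figures can be approximated from a streamed sample without downloading the full corpus; a minimal sketch (the 10,000-example sample size is an arbitrary choice):

```python
from datasets import load_dataset

# Estimate average segment lengths from a streamed sample of the corpus.
stream = load_dataset("sinai-uja/ALIA-parallel-translation", split="train", streaming=True)

n = es_total = en_total = 0
for ex in stream.take(10_000):  # arbitrary sample size
    n += 1
    es_total += len(ex["text_es"])
    en_total += len(ex["text_en"])

print(f"avg es: {es_total / n:.0f} chars, avg en: {en_total / n:.0f} chars")
```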

### Example Usage

To load the dataset:

```python
from datasets import load_dataset

# Load the complete dataset
data = load_dataset("sinai-uja/ALIA-parallel-translation", trust_remote_code=True)

# Load with streaming (recommended for this large corpus)
data = load_dataset("sinai-uja/ALIA-parallel-translation", trust_remote_code=True, streaming=True)

# Process in streaming mode
for example in data['train']:
    print(f"ID: {example['id']}")
    print(f"Spanish: {example['text_es'][:100]}...")
    print(f"English: {example['text_en'][:100]}...")
    break
```

Example of filtering by domain:

```python
from datasets import load_dataset

# Load with streaming
dataset = load_dataset("sinai-uja/ALIA-parallel-translation", streaming=True, split="train")

# Filter biomedical domain (ID starts with '00')
biomedical = dataset.filter(lambda x: x['id'].startswith('00'))

# Filter legal-administrative domain (ID starts with '02')
legal = dataset.filter(lambda x: x['id'].startswith('02'))

# Filter heritage domain (ID starts with '01')
heritage = dataset.filter(lambda x: x['id'].startswith('01'))

# Filter by specific source (e.g., PubMed: '0003')
pubmed = dataset.filter(lambda x: x['id'].startswith('0003'))

# Filter by specific source (e.g., EURLEX: '0201')
eurlex = dataset.filter(lambda x: x['id'].startswith('0201'))

# Example: Process first 1000 biomedical samples
count = 0
for example in biomedical:
    # Your processing here
    count += 1
    if count >= 1000:
        break
```

Example of batch processing:

```python
from datasets import load_dataset

# Load full dataset (requires ~70GB RAM)
data = load_dataset("sinai-uja/ALIA-parallel-translation")

# Access by index
example = data['train'][0]
print(f"ID: {example['id']}")
print(f"Spanish: {example['text_es'][:200]}...")
print(f"English: {example['text_en'][:200]}...")

# Get domain statistics
biomedical_count = sum(1 for ex in data['train'] if ex['id'].startswith('00'))
heritage_count = sum(1 for ex in data['train'] if ex['id'].startswith('01'))
legal_count = sum(1 for ex in data['train'] if ex['id'].startswith('02'))

print(f"Biomedical: {biomedical_count:,}")
print(f"Heritage: {heritage_count:,}")
print(f"Legal-Administrative: {legal_count:,}")
```

## Dataset Creation

### Source Data

The corpus integrates parallel texts from multiple authoritative sources across three specialized domains:

**Biomedical Domain (ID prefix: 00-XX-XXXXXX)**
- **IBECS (00-01-XXXXXX)**: Spanish bibliographic index of health sciences journal articles
- **MedlinePlus (00-02-XXXXXX)**: Trusted health information from the U.S. National Library of Medicine
- **PubMed (00-03-XXXXXX)**: Biomedical literature abstracts and articles from international journals

**Heritage Domain (ID prefix: 01-XX-XXXXXX)**
- **PCI**: Intangible Cultural Heritage (Patrimonio Cultural Inmaterial) documentation

**Legal-Administrative Domain (ID prefix: 02-XX-XXXXXX)**
- **EURLEX (02-01-XXXXXX)**: European Union legislation, regulations, and legal documents
- **EUROPAT (02-02-XXXXXX)**: European Patent Office documentation and technical patent descriptions
- **UNPC (02-03-XXXXXX)**: United Nations Parallel Corpus including resolutions, reports, and official documents

All data come from official, publicly accessible, and authoritative sources in their respective domains.

### Data Collection and Processing

The corpus was compiled from publicly available parallel texts from official and authoritative sources. The data collection focused on three specialized domains to support domain-specific machine translation research. Each source was assigned a systematic ID prefix to enable domain identification and filtering.

Quality control procedures included the following (see the sketch after this list):
- Reformatting of corpus structure for consistency (particularly EURLEX)
- Removal of noisy or poorly aligned segments
- Deduplication of exact matches
- Validation of parallel alignment at the segment level
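
A minimal sketch of this kind of cleanup pass (the thresholds and heuristics are illustrative assumptions, not the project's actual pipeline):

```python
def clean_segments(segments):
    """Yield segments passing basic noise, length-ratio, and duplicate checks."""
    seen = set()
    for seg in segments:
        es, en = seg["text_es"].strip(), seg["text_en"].strip()
        # Drop very short segments (likely noisy alignments); the corpus's
        # documented minimum segment length is 7 characters.
        if len(es) < 7 or len(en) < 7:
            continue
        # Drop pairs with extreme length ratios (likely misalignments).
        if not 0.3 <= len(es) / len(en) <= 3.0:
            continue
        # Deduplicate exact (es, en) matches.
        key = (es, en)
        if key in seen:
            continue
        seen.add(key)
        yield seg
```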

The final corpus is stored in Parquet format (Apache Arrow columnar storage) optimized for efficient access and processing at scale.
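
As a small illustration of the columnar format, a shard can be inspected with `pyarrow` (the shard path is hypothetical; actual file names depend on the repository layout):

```python
import pyarrow.parquet as pq

pf = pq.ParquetFile("alia_shard.parquet")  # hypothetical local shard path
print(pf.metadata.num_rows, pf.schema_arrow)

# Columnar storage lets you read only the columns you need, e.g. IDs for a
# domain census without loading the much larger text columns.
ids = pf.read(columns=["id"]).column("id")
```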

### Annotations

This dataset contains **no manual annotations**. All content consists of naturally parallel texts from authoritative bilingual sources:

**Structural Metadata:**
- **Domain labels**: Automatically assigned based on source corpus and encoded in ID prefix
- **Source identification**: Embedded in ID structure for provenance tracking
- **Alignment level**: Varies by source (sentence, paragraph, or document-level)

The corpus preserves the original parallel structure as published by official sources without additional interpretive layers.

### Personal and Sensitive Information

The corpus has been cleaned to remove sensitive or identifiable information in line with data protection regulations, and all documents come from public official and scientific sources. Domain-specific considerations include:

**Biomedical Domain:**
- Patient information is de-identified in accordance with HIPAA and GDPR standards
- Research subjects appear only in aggregate statistical form
- Names of researchers, physicians, and institutions in published scientific literature

**Legal-Administrative Domain:**
- Names of public officials, legislators, and judges in official contexts
- References to public institutions and government organizations
- Patent inventor names (as required by patent law)
- Legal case references with participant anonymization where applicable

**Heritage Domain:**
- Names of cultural practitioners, artists, and heritage experts in official documentation
- References to communities and geographical locations

**User Responsibility:** Users are advised to apply additional privacy controls depending on the specific use case, particularly for applications involving personal data processing or sensitive domain applications (medical diagnosis, legal advice).

## Considerations for Using the Data

### Social Impact of Dataset

This corpus represents a significant advance in democratizing access to domain-specific machine translation resources for Spanish-English language pairs. It contributes to:

- **Improved Access to Specialized Information**: Facilitating cross-lingual access to legal, biomedical, and heritage documentation for researchers, professionals, and citizens
- **Research Advancement**: Providing standardized large-scale resources for evaluating document-level translation approaches
- **National AI Strategy**: Supporting Spain's strategic objective, via the ALIA project, of developing foundational AI models in Spanish under ethical and transparency standards
- **Reduced Language Barriers**: Enabling better communication in critical domains like healthcare, law, patent documentation, and cultural preservation
- **Professional Tool Development**: Supporting the creation of specialized translation tools for legal professionals, medical translators, and heritage workers
- **Multilingual Science**: Facilitating Spanish-language participation in international scientific discourse

### Discussion of Biases

The corpus reflects inherent biases from its source materials and domains:

**Domain-Specific Biases:**

**Biomedical Domain:**
- Predominantly reflects Western medical perspectives and research traditions
- Over-representation of clinical research from high-income countries
- Potential under-representation of traditional or alternative medical practices
- English source texts may reflect Anglo-American medical terminology

**Legal-Administrative Domain:**
- Reflects primarily EU and UN institutional language and legal frameworks
- May not represent all legal traditions, particularly non-Western systems
- Patent documentation biased toward European and international patent systems
- Administrative language reflects specific bureaucratic conventions

**Heritage Domain:**
- Limited by availability of digitized and translated heritage documentation
- Possible over-representation of officially recognized heritage over grassroots practices
- May under-represent certain cultural perspectives or minority communities
- Selection bias toward heritage deemed worthy of official documentation

**Language Biases:**
- **Spanish Varieties**: European Spanish may be over-represented compared to Latin American varieties, particularly in EU and PubMed sources
- **Register**: Formal and technical register dominates across all domains
- **Terminology**: Technical terminology may reflect specific translation conventions from source institutions
- **Translation Direction**: Some sources may be originally in English with Spanish translations, potentially affecting naturalness

**Temporal Biases:**
- More recent documents are better represented due to digitization availability
- Historical terminology evolution may not be fully captured
- Contemporary issues and concepts may be over-represented

**Socioeconomic Biases:**
- Sources primarily from institutional and governmental contexts
- May under-represent perspectives from developing regions
- Professional and academic language dominates over colloquial usage

### Other Known Limitations

**Data Quality:**
- **OCR Errors**: Historical documents may contain optical character recognition errors
- **Translation Quality**: Original translation quality varies by source and may not always meet professional standards
- **Alignment Precision**: Some segments may have approximate rather than exact alignment
- **Formatting Artifacts**: Residual formatting issues from document conversion processes

**Temporal Coverage:**
- Coverage varies significantly by source
- More complete for recent years (2000-2025) than historical periods
- Some domains have better temporal distribution than others

**Domain Specificity:**
- Vocabulary is limited to three specialized domains
- Does not generalize to other Spanish-English translation tasks (e.g., news, social media, conversational)
- Technical terminology may be too specialized for general-purpose translation

**Text Level Variability:**
- Not all sources provide consistent document-level segmentation
- Some sources artificially segment continuous documents
- Sentence-level alignments predominate despite document-level emphasis

**Alignment Granularity:**
- While document-level translation is prioritized, many sources only provide sentence-level alignments
- Mixed granularity across sources may affect training consistency

**Heritage Domain Limitations:**
- Smallest domain by volume
- May benefit from additional data collection or augmentation
- Limited coverage of certain heritage types or regions

**Source Diversity:**
- Some domains dominated by specific sources (e.g., UNPC in legal-administrative)
- Uneven distribution across source types
- Potential for domain-specific overfitting during training

---

**Contact:** [ALIA Project](https://www.alia.gob.es/) - [SINAI Research Group](https://sinai.ujaen.es) - [Universidad de Jaén](https://www.ujaen.es/)

**More Information:** [SINAI Research Group](https://sinai.ujaen.es) | [ALIA-UJA Project](https://github.com/sinai-uja/ALIA-UJA)