Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
parquet-converter committed
Commit 52a0591 · 1 Parent(s): 80e9941

Update parquet files

README.md DELETED
@@ -1,281 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - en
- license:
- - cc-by-nc-sa-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - text-retrieval
- task_ids:
- - document-retrieval
- paperswithcode_id: null
- pretty_name: the RegIR datasets
- tags:
- - document-to-document-retrieval
- dataset_info:
- - config_name: eu2uk
-   features:
-   - name: document_id
-     dtype: string
-   - name: publication_year
-     dtype: string
-   - name: text
-     dtype: string
-   - name: relevant_documents
-     sequence: string
-   splits:
-   - name: train
-     num_bytes: 20665038
-     num_examples: 1400
-   - name: test
-     num_bytes: 8844145
-     num_examples: 300
-   - name: validation
-     num_bytes: 5852814
-     num_examples: 300
-   - name: uk_corpus
-     num_bytes: 502468359
-     num_examples: 52515
-   download_size: 119685577
-   dataset_size: 537830356
- - config_name: uk2eu
-   features:
-   - name: document_id
-     dtype: string
-   - name: publication_year
-     dtype: string
-   - name: text
-     dtype: string
-   - name: relevant_documents
-     sequence: string
-   splits:
-   - name: train
-     num_bytes: 55144655
-     num_examples: 1500
-   - name: test
-     num_bytes: 14810460
-     num_examples: 300
-   - name: validation
-     num_bytes: 15175644
-     num_examples: 300
-   - name: eu_corpus
-     num_bytes: 57212422
-     num_examples: 3930
-   download_size: 31835104
-   dataset_size: 142343181
- ---
-
- # Dataset Card for the RegIR datasets
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://archive.org/details/eacl2021_regir_datasets
- - **Repository:** https://archive.org/details/eacl2021_regir_datasets
- - **Paper:** https://arxiv.org/abs/2101.10726
- - **Leaderboard:** N/A
- - **Point of Contact:** [Ilias Chalkidis](mailto:[email protected])
-
- ### Dataset Summary
-
- The European Union (EU) has a legislation scheme analogous to regulatory compliance for organizations. According to the Treaty on the Functioning of the European Union (TFEU), all published EU directives must take effect at the national level. Thus, all EU member states must adopt a law to transpose a newly issued directive within the period set by the directive (typically 2 years).
-
- Here, we have two datasets, EU2UK and UK2EU, containing EU directives and UK regulations, which can serve both as queries and documents under the ground-truth assumption that a UK law is relevant to the EU directives it transposes, and vice versa.
-
- ### Supported Tasks and Leaderboards
-
- The dataset supports:
-
- **EU2UK** (`eu2uk`): Given an EU directive *Q*, retrieve the set of relevant documents from the pool of all available UK regulations. Relevant documents are those that transpose the EU directive (*Q*).
-
- **UK2EU** (`uk2eu`): Given a UK regulation *Q*, retrieve the set of relevant documents from the pool of all available EU directives. Relevant documents are those transposed by the UK regulation (*Q*).
-
- ### Languages
-
- All documents are written in English.
-
- ## Dataset Structure
-
- ### Data Instances
-
- ```json
- {
-   "document_id": "31977L0794",
-   "publication_year": "1977",
-   "text": "Commission Directive 77/794/EEC ... of agricultural levies and customs duties",
-   "relevant_documents": ["UKPGA19800048", "UKPGA19770036"]
- }
- ```
-
- ### Data Fields
-
- The following data fields are provided for query documents (`train`, `dev`, `test`):
-
- `document_id`: (**str**) The ID of the document.\
- `publication_year`: (**str**) The publication year of the document.\
- `text`: (**str**) The text of the document.\
- `relevant_documents`: (**List[str]**) The list of relevant documents, as represented by their `document_id`.
-
- The following data fields are provided for corpus documents (`corpus`):
-
- `document_id`: (**str**) The ID of the document.\
- `publication_year`: (**str**) The publication year of the document.\
- `text`: (**str**) The text of the document.
-
- ### Data Splits
-
- #### EU2UK dataset
-
- | Split       | No. of queries | Avg. relevant documents |
- | ----------- | -------------- | ----------------------- |
- | Train       | 1,400          | 1.79                    |
- | Development | 300            | 2.09                    |
- | Test        | 300            | 1.74                    |
- Document pool (corpus): 52,515 UK regulations
-
- #### UK2EU dataset
-
- | Split       | No. of queries | Avg. relevant documents |
- | ----------- | -------------- | ----------------------- |
- | Train       | 1,500          | 1.90                    |
- | Development | 300            | 1.46                    |
- | Test        | 300            | 1.29                    |
- Document pool (corpus): 3,930 EU directives
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- The dataset was curated by Chalkidis et al. (2021).\
- The transposition pairs are made publicly available by the Publications Office of the EU (https://publications.europa.eu/en).
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- The original data are available at the EUR-Lex portal (https://eur-lex.europa.eu) and Legislation.GOV.UK (http://legislation.gov.uk/) in an unprocessed format.\
- The transposition pairs are provided by the EU member states (in our case, the UK) and were downloaded from the SPARQL endpoint of the Publications Office of the EU (http://publications.europa.eu/webapi/rdf/sparql).\
- For more information on the dataset curation, read Chalkidis et al. (2021).
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- * The original data are available at the EUR-Lex portal (https://eur-lex.europa.eu) and Legislation.GOV.UK (http://legislation.gov.uk/) in an unprocessed format.
- * The transposition pairs are provided by the EU member states (in our case, the UK) and were downloaded from the SPARQL endpoint of the Publications Office of the EU (http://publications.europa.eu/webapi/rdf/sparql).
-
- #### Who are the annotators?
-
- Publications Office of the EU (https://publications.europa.eu/en)
-
- ### Personal and Sensitive Information
-
- The dataset does not include personal or sensitive information.
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- Chalkidis et al. (2021)
-
- ### Licensing Information
-
- **EU Data**
-
- © European Union, 1998-2021
-
- The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
-
- The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
-
- Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
- Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html
-
- **UK Data**
-
- You are encouraged to use and re-use the Information that is available under this licence freely and flexibly, with only a few conditions.
-
- You are free to:
-
- - copy, publish, distribute and transmit the Information;
- - adapt the Information;
- - exploit the Information commercially and non-commercially, for example, by combining it with other Information, or by including it in your own product or application.
-
- You must (where you do any of the above):
-
- acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/.
-
- ### Citation Information
-
- *Ilias Chalkidis, Manos Fergadiotis, Nikos Manginas, Eva Katakalou and Prodromos Malakasiotis.*
- *Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations.*
- *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021). Online. 2021.*
- ```
- @inproceedings{chalkidis-etal-2021-regir,
-     title = "Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations",
-     author = "Chalkidis, Ilias and Fergadiotis, Manos and Manginas, Nikos and Katakalou, Eva and Malakasiotis, Prodromos",
-     booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)",
-     year = "2021",
-     address = "Online",
-     publisher = "Association for Computational Linguistics",
-     url = "https://arxiv.org/abs/2101.10726",
- }
- ```
-
- ### Contributions
-
- Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
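
A minimal loading sketch consistent with the card above, assuming the dataset is loaded from the Hugging Face Hub (the repo id `eu_regulatory_ir` below is a placeholder; substitute the actual dataset id):

```python
from datasets import load_dataset

# Hypothetical repo id; replace with the actual Hub id of this dataset.
eu2uk = load_dataset("eu_regulatory_ir", "eu2uk")

# Query splits plus the document pool, exposed as an extra "uk_corpus" split.
train_queries = eu2uk["train"]   # 1,400 EU directives
uk_corpus = eu2uk["uk_corpus"]   # 52,515 UK regulations

example = train_queries[0]
print(example["document_id"], example["publication_year"])
print(example["relevant_documents"])  # document_ids of the UK laws transposing this directive
```
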
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"eu2uk": {"description": "EURegIR: Regulatory Compliance IR (EU/UK)\n", "citation": "@inproceedings{chalkidis-etal-2021-regir,\n title = \"Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations\",\n author = \"Chalkidis, Ilias and Fergadiotis, Emmanouil and Manginas, Nikos and Katakalou, Eva, and Malakasiotis, Prodromos\",\n booktitle = \"Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)\",\n year = \"2021\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://arxiv.org/abs/2101.10726\",\n}\n", "homepage": "https://archive.org/details/eacl2021_regir_dataset", "license": "CC BY-SA (Creative Commons / Attribution-ShareAlike)", "features": {"document_id": {"dtype": "string", "id": null, "_type": "Value"}, "publication_year": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "relevant_documents": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "eu_regulatory_ir", "config_name": "eu2uk", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 20665038, "num_examples": 1400, "dataset_name": "eu_regulatory_ir"}, "test": {"name": "test", "num_bytes": 8844145, "num_examples": 300, "dataset_name": "eu_regulatory_ir"}, "validation": {"name": "validation", "num_bytes": 5852814, "num_examples": 300, "dataset_name": "eu_regulatory_ir"}, "uk_corpus": {"name": "uk_corpus", "num_bytes": 502468359, "num_examples": 52515, "dataset_name": "eu_regulatory_ir"}}, "download_checksums": {"https://archive.org/download/eacl2021_regir_datasets/eu2uk.zip": {"num_bytes": 119685577, "checksum": "8afc26fbd3559aa800ce0adb3c35c0e4c2c99b93529bc20d5f32ce44f55ba286"}}, "download_size": 119685577, "post_processing_size": null, "dataset_size": 537830356, "size_in_bytes": 657515933}, "uk2eu": {"description": "EURegIR: Regulatory Compliance IR (EU/UK)\n", "citation": "@inproceedings{chalkidis-etal-2021-regir,\n title = \"Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations\",\n author = \"Chalkidis, Ilias and Fergadiotis, Emmanouil and Manginas, Nikos and Katakalou, Eva, and Malakasiotis, Prodromos\",\n booktitle = \"Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)\",\n year = \"2021\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://arxiv.org/abs/2101.10726\",\n}\n", "homepage": "https://archive.org/details/eacl2021_regir_dataset", "license": "CC BY-SA (Creative Commons / Attribution-ShareAlike)", "features": {"document_id": {"dtype": "string", "id": null, "_type": "Value"}, "publication_year": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "relevant_documents": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "eu_regulatory_ir", "config_name": "uk2eu", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": 
{"name": "train", "num_bytes": 55144655, "num_examples": 1500, "dataset_name": "eu_regulatory_ir"}, "test": {"name": "test", "num_bytes": 14810460, "num_examples": 300, "dataset_name": "eu_regulatory_ir"}, "validation": {"name": "validation", "num_bytes": 15175644, "num_examples": 300, "dataset_name": "eu_regulatory_ir"}, "eu_corpus": {"name": "eu_corpus", "num_bytes": 57212422, "num_examples": 3930, "dataset_name": "eu_regulatory_ir"}}, "download_checksums": {"https://archive.org/download/eacl2021_regir_datasets/uk2eu.zip": {"num_bytes": 31835104, "checksum": "767d4edc0c5210b80ccf24c31a6d28a62b64edffc69810f7027d34b0e1401194"}}, "download_size": 31835104, "post_processing_size": null, "dataset_size": 142343181, "size_in_bytes": 174178285}}
 
 
eu2uk/eu_regulatory_ir-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e6e885d7b1a037be4e3ad8757acb5875d98ab4e6f074e83fc99167e9f9d37b5
+ size 3389691
eu2uk/eu_regulatory_ir-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1db01acb082da591a98d3f237cfbee3a33091cca1d899dbb85893ba51894ec74
+ size 8309659
eu2uk/eu_regulatory_ir-uk_corpus.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db9b9a836f41447af052118e2b69110d5debe72b13786ddb5652868b75d81b40
+ size 190706578
eu2uk/eu_regulatory_ir-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1cfd480d0723d001c99902c6f52b5d196e442b01cb06e702c94e15979722a0f
+ size 2283338
eu_regulatory_ir.py DELETED
@@ -1,149 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """EURegIR: Regulatory Compliance IR (EU/UK)"""
-
-
- import json
- import os
-
- import datasets
-
-
- _CITATION = """\
- @inproceedings{chalkidis-etal-2021-regir,
- title = "Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations",
- author = "Chalkidis, Ilias and Fergadiotis, Emmanouil and Manginas, Nikos and Katakalou, Eva, and Malakasiotis, Prodromos",
- booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)",
- year = "2021",
- address = "Online",
- publisher = "Association for Computational Linguistics",
- url = "https://arxiv.org/abs/2101.10726",
- }
- """
-
- _DESCRIPTION = """\
- EURegIR: Regulatory Compliance IR (EU/UK)
- """
-
- _HOMEPAGE = "https://archive.org/details/eacl2021_regir_dataset"
-
- _LICENSE = "CC BY-SA (Creative Commons / Attribution-ShareAlike)"
-
- _URLs = {
-     "eu2uk": "https://archive.org/download/eacl2021_regir_datasets/eu2uk.zip",
-     "uk2eu": "https://archive.org/download/eacl2021_regir_datasets/uk2eu.zip",
- }
-
-
- class EuRegulatoryIr(datasets.GeneratorBasedBuilder):
-     """EURegIR: Regulatory Compliance IR (EU/UK)"""
-
-     VERSION = datasets.Version("1.1.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="eu2uk", version=VERSION, description="EURegIR: Regulatory Compliance IR (EU2UK)"),
-         datasets.BuilderConfig(name="uk2eu", version=VERSION, description="EURegIR: Regulatory Compliance IR (UK2EU)"),
-     ]
-
-     def _info(self):
-         if self.config.name == "eu2uk":
-             features = datasets.Features(
-                 {
-                     "document_id": datasets.Value("string"),
-                     "publication_year": datasets.Value("string"),
-                     "text": datasets.Value("string"),
-                     "relevant_documents": datasets.features.Sequence(datasets.Value("string")),
-                 }
-             )
-         else:
-             features = datasets.Features(
-                 {
-                     "document_id": datasets.Value("string"),
-                     "publication_year": datasets.Value("string"),
-                     "text": datasets.Value("string"),
-                     "relevant_documents": datasets.features.Sequence(datasets.Value("string")),
-                 }
-             )
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # This defines the different columns of the dataset and their types
-             features=features,  # Here we define them above because they are different between the two configurations
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage=_HOMEPAGE,
-             # License for the dataset if available
-             license=_LICENSE,
-             # Citation for the dataset
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         my_urls = _URLs[self.config.name]
-         data_dir = dl_manager.download_and_extract(my_urls)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir, "train.jsonl"),
-                     "split": "train",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(data_dir, "test.jsonl"), "split": "test"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir, "dev.jsonl"),
-                     "split": "dev",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=f"{self.config.name.split('2')[1]}_corpus",
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir, "corpus.jsonl"),
-                     "split": f"{self.config.name.split('2')[1]}_corpus",
-                 },
-             ),
-         ]
-
-     def _generate_examples(
-         self, filepath, split  # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
-     ):
-         """Yields examples as (key, example) tuples."""
-         # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
-         # The `key` is here for legacy reason (tfds) and is not important in itself.
-
-         with open(filepath, encoding="utf-8") as f:
-             for id_, row in enumerate(f):
-                 data = json.loads(row)
-                 yield id_, {
-                     "document_id": data["document_id"],
-                     "text": data["text"],
-                     "publication_year": data["publication_year"],
-                     "relevant_documents": data["relevant_documents"]
-                     if split != f"{self.config.name.split('2')[1]}_corpus"
-                     else [],
-                 }
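
The deleted loading script above read `train.jsonl`, `dev.jsonl`, `test.jsonl` and `corpus.jsonl` from the archive.org zips. After this commit the same records ship as parquet, so they can also be read directly; a sketch using pandas (dask.dataframe.read_parquet works the same way for the larger corpus file), assuming the LFS files have been fetched locally:

```python
import pandas as pd

train = pd.read_parquet("eu2uk/eu_regulatory_ir-train.parquet")
corpus = pd.read_parquet("eu2uk/eu_regulatory_ir-uk_corpus.parquet")

print(train.columns.tolist())   # document_id, publication_year, text, relevant_documents
print(len(train), len(corpus))  # expected: 1400 queries, 52515 UK regulations
```
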
uk2eu/eu_regulatory_ir-eu_corpus.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8a9e3de074b9cbd97240f79103fbab37f4e2eb9fa1cd3330c04ebe7b41e8e032
+ size 23079935
uk2eu/eu_regulatory_ir-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35566927904492879b89bb52a499929a6e5b9a62aa6c36f79840b65813b7ea96
+ size 5478070
uk2eu/eu_regulatory_ir-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:644b96209aa356d348c43ecbf1fec59379ec87d180cc3773758e264d35b0c3e8
+ size 21019143
uk2eu/eu_regulatory_ir-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3317fbd114eb53adc33f104989febced8c8117a8e42d31b5fb83cb529aefc185
+ size 5606835
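
The three-line entries above are Git LFS pointer files rather than the parquet data itself: `oid` is the SHA-256 of the real file and `size` its length in bytes. Once the actual files are fetched (for example with `git lfs pull`), they can be checked against the pointers; a small sketch:

```python
import hashlib
from pathlib import Path

# Path from this commit; adjust to wherever the fetched file lives.
path = Path("uk2eu/eu_regulatory_ir-validation.parquet")
data = path.read_bytes()
print(len(data))                         # should equal the pointer's "size" (5606835)
print(hashlib.sha256(data).hexdigest())  # should equal the pointer's "oid sha256:..."
```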