Joelito committed
Commit baf8bab · 1 Parent(s): 57f878b

Upload 5 files

Files changed (5):
  1. README.md +125 -0
  2. prepare_data.py +35 -0
  3. test.jsonl.xz +3 -0
  4. train.jsonl.xz +3 -0
  5. val.jsonl.xz +3 -0
README.md ADDED
@@ -0,0 +1,125 @@
+ ---
+ license: apache-2.0
+ ---
+ # Dataset Card for MiningLegalArguments
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/trusthlt/mining-legal-arguments)
+ - **Repository:**
+ - **Paper:** [arXiv](https://arxiv.org/pdf/2208.06178.pdf)
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ [More Information Needed]
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ [More Information Needed]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
+
+ ### Contributions
+
+ Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
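
The card's Data Splits section is still a placeholder, but the commit itself uploads three xz-compressed JSONL splits. A minimal loading sketch, assuming the files have been fetched locally (the `json` builder in `datasets` decompresses `.xz` files transparently):

```python
from datasets import load_dataset

# Load the three splits uploaded in this commit; file names match
# the ones listed in the diff above.
dataset = load_dataset(
    "json",
    data_files={
        "train": "train.jsonl.xz",
        "validation": "val.jsonl.xz",
        "test": "test.jsonl.xz",
    },
)
print(dataset)
```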
prepare_data.py ADDED
@@ -0,0 +1,35 @@
+ import ast
+ import os
+ from typing import Union
+
+ import datasets
+ import pandas as pd
+ from datasets import load_dataset
+
+
+ def save_and_compress(dataset: Union[datasets.Dataset, pd.DataFrame], name: str, idx=None):
+     if idx is not None:
+         path = f"{name}_{idx}.jsonl"
+     else:
+         path = f"{name}.jsonl"
+
+     print("Saving to", path)
+     dataset.to_json(path, force_ascii=False, orient='records', lines=True)
+
+     print("Compressing...")
+     os.system(f'xz -zkf -T0 {path}')  # -T0 enables multithreading
+
+
+ def parse_to_list(example):
+     # The TSV source stores tokens and labels as stringified Python lists;
+     # parse them back into real lists.
+     example["tokens"] = ast.literal_eval(example["tokens"])
+     example["labels"] = ast.literal_eval(example["labels"])
+     return example
+
+
+ for subset in ["agent", "argType"]:
+     for split in ["train", "val", "test"]:
+         dataset = load_dataset("csv", data_dir=f"original_data/{split}/{subset}", sep="\t", split="train")
+         dataset = dataset.map(parse_to_list)
+         print(dataset[0])
+         save_and_compress(dataset, f"data/{subset}/{split}")
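
A quick round-trip check of the script's output, using only the standard library; the path is an assumption matching one of the files `save_and_compress` writes:

```python
import json
import lzma

# Spot-check the first record of a compressed split written above
# (hypothetical path; adjust to the subset/split actually generated).
with lzma.open("data/argType/train.jsonl.xz", mode="rt", encoding="utf-8") as f:
    first = json.loads(next(f))

# parse_to_list should have restored these fields to real lists.
assert isinstance(first["tokens"], list)
assert isinstance(first["labels"], list)
print(first["tokens"][:5], first["labels"][:5])
```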
test.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d4aa8ebcd19cf98c63f9afeaf0ea85d5ff596edfbbfc09baa2cf371daa607c8
+ size 260116
train.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb36669f1f24f9045ac36a07bc37f1bfee48f031fde37137081b86594e343d3a
+ size 2056404
val.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d91bb9de1f54f010ac12bc0e9d3d3c18e8fc02f325cdad22663e5048acaa45ca
+ size 262004
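
The three `.jsonl.xz` entries above are Git LFS pointer files: `oid` is the SHA-256 of the real object and `size` its byte count. A sketch for verifying a fetched file against its pointer, assuming `git lfs pull` has already replaced the pointer with the actual content:

```python
import hashlib
from pathlib import Path


def verify_lfs_object(path: str, expected_oid: str, expected_size: int) -> bool:
    """Compare a fetched LFS object to the sha256 oid and size from its pointer."""
    data = Path(path).read_bytes()
    return hashlib.sha256(data).hexdigest() == expected_oid and len(data) == expected_size


# Values copied from the val.jsonl.xz pointer above.
print(verify_lfs_object(
    "val.jsonl.xz",
    "d91bb9de1f54f010ac12bc0e9d3d3c18e8fc02f325cdad22663e5048acaa45ca",
    262004,
))
```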