Scraper Bot committed on
Commit 23287c8 · 1 Parent(s): 3292e4f

Add legal hallucinations subset dataset with 6 task splits

Files changed (29)
  1. .gitattributes +1 -0
  2. .ipynb_checkpoints/original_analysis-checkpoint.ipynb +6 -0
  3. PUSH_TO_HUB.md +146 -0
  4. README.md +144 -0
  5. __pycache__/create_subset.cpython-313.pyc +0 -0
  6. create_subset.py +158 -0
  7. legal_hallucinations_subset/affirm_reverse/data-00000-of-00001.arrow +3 -0
  8. legal_hallucinations_subset/affirm_reverse/dataset_info.json +20 -0
  9. legal_hallucinations_subset/affirm_reverse/state.json +13 -0
  10. legal_hallucinations_subset/citation_retrieval/data-00000-of-00001.arrow +3 -0
  11. legal_hallucinations_subset/citation_retrieval/dataset_info.json +20 -0
  12. legal_hallucinations_subset/citation_retrieval/state.json +13 -0
  13. legal_hallucinations_subset/cited_precedent/data-00000-of-00001.arrow +3 -0
  14. legal_hallucinations_subset/cited_precedent/dataset_info.json +20 -0
  15. legal_hallucinations_subset/cited_precedent/state.json +13 -0
  16. legal_hallucinations_subset/court_id/data-00000-of-00001.arrow +3 -0
  17. legal_hallucinations_subset/court_id/dataset_info.json +20 -0
  18. legal_hallucinations_subset/court_id/state.json +13 -0
  19. legal_hallucinations_subset/dataset_dict.json +1 -0
  20. legal_hallucinations_subset/majority_author/data-00000-of-00001.arrow +3 -0
  21. legal_hallucinations_subset/majority_author/dataset_info.json +20 -0
  22. legal_hallucinations_subset/majority_author/state.json +13 -0
  23. legal_hallucinations_subset/year_overruled/data-00000-of-00001.arrow +3 -0
  24. legal_hallucinations_subset/year_overruled/dataset_info.json +20 -0
  25. legal_hallucinations_subset/year_overruled/state.json +13 -0
  26. original_analysis.ipynb +578 -0
  27. original_dataset.csv +3 -0
  28. push_dataset.sh +27 -0
  29. requirements.txt +4 -0
.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ original_dataset.csv filter=lfs diff=lfs merge=lfs -text
.ipynb_checkpoints/original_analysis-checkpoint.ipynb ADDED
@@ -0,0 +1,6 @@
+ {
+  "cells": [],
+  "metadata": {},
+  "nbformat": 4,
+  "nbformat_minor": 5
+ }
PUSH_TO_HUB.md ADDED
@@ -0,0 +1,146 @@
+ # Pushing Dataset to Hugging Face Hub
+
+ This directory is already a Hugging Face datasets repository (`nguha/legal_hallucinations_subset`). This guide explains how to generate and push the dataset.
+
+ ## Prerequisites
+
+ 1. **Hugging Face Account**: Create an account at [huggingface.co](https://huggingface.co) if you don't have one.
+ 2. **Install Dependencies**: Make sure you have the required packages installed:
+    ```bash
+    pip install -r requirements.txt
+    ```
+ 3. **Login to Hugging Face**: Authenticate with Hugging Face Hub:
+    ```bash
+    huggingface-cli login
+    ```
+    Or use a token:
+    ```bash
+    huggingface-cli login --token YOUR_TOKEN
+    ```
+    You can get a token from: https://huggingface.co/settings/tokens
+
+ ## Method 1: Using Git (Recommended, since this is already a HF repo)
+
+ Since this directory is already a Hugging Face repository, the easiest way is to use git.
+
+ **Option A: Use the provided script**
+ ```bash
+ chmod +x push_dataset.sh
+ ./push_dataset.sh
+ ```
+
+ **Option B: Manual git commands**
+ ```bash
+ # 1. Generate the dataset
+ python3 create_subset.py
+
+ # 2. Add files to git
+ git add legal_hallucinations_subset/
+ git add README.md
+ git add create_subset.py
+ git add requirements.txt
+
+ # 3. Commit
+ git commit -m "Add legal hallucinations subset dataset"
+
+ # 4. Push to Hugging Face
+ git push origin main
+ ```
+
+ The dataset will be available at: https://huggingface.co/datasets/nguha/legal_hallucinations_subset
+
+ ## Method 2: Using push_to_hub() API (Alternative)
+
+ If you prefer the `datasets` library's `push_to_hub()` method instead of git, create the dataset locally and push it in two steps:
+
+ 1. **First, create the dataset locally** (if not already done):
+    ```bash
+    python3 create_subset.py
+    ```
+
+ 2. **Push to Hub using Python**:
+    ```python
+    from datasets import load_from_disk
+
+    # Load the local dataset
+    dataset = load_from_disk("legal_hallucinations_subset")
+
+    # Push to Hub
+    dataset.push_to_hub("YOUR_USERNAME/legal_hallucinations_subset")
+    ```
+
+ 3. **Or use the command line**:
+    ```bash
+    python3 -c "from datasets import load_from_disk; load_from_disk('legal_hallucinations_subset').push_to_hub('YOUR_USERNAME/legal_hallucinations_subset')"
+    ```
+
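+ Alternatively, `create_subset.py` can generate and push in one step via the CLI flags it defines (see the script later in this commit):
+ ```bash
+ python3 create_subset.py --push-to-hub --hub-repo-id YOUR_USERNAME/legal_hallucinations_subset
+ ```
+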
+ ## Method 3: Setting Up a New Git Repository (Alternative)
+
+ If you are working from a copy of these files that is not already a git repository, you can set one up and push manually:
+
+ 1. **Create the dataset locally**:
+    ```bash
+    python3 create_subset.py
+    ```
+
+ 2. **Initialize git repository** (if not already initialized):
+    ```bash
+    git init
+    git lfs install
+    ```
+
+ 3. **Add the dataset directory and README**:
+    ```bash
+    git add legal_hallucinations_subset/
+    git add README.md
+    git commit -m "Add legal hallucinations subset dataset"
+    ```
+
+ 4. **Add Hugging Face remote and push**:
+    ```bash
+    git remote add hub https://huggingface.co/datasets/YOUR_USERNAME/legal_hallucinations_subset
+    git push hub main
+    ```
+
+ Note: You'll need to create the repository on Hugging Face first at https://huggingface.co/new-dataset
+
+ ## Creating the Repository on Hugging Face
+
+ Before pushing, you need to create the repository on Hugging Face:
+
+ 1. Go to https://huggingface.co/new-dataset
+ 2. Choose a name (e.g., `legal_hallucinations_subset`)
+ 3. Select visibility (public or private)
+ 4. Click "Create repository"
+
+ The repository ID will be `YOUR_USERNAME/legal_hallucinations_subset`.
+
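+ If you prefer to script this step, `huggingface_hub`'s `create_repo` can do the same (a sketch; the repo id is a placeholder):
+ ```python
+ from huggingface_hub import create_repo
+
+ # repo_type="dataset" creates a dataset repo rather than a model repo
+ create_repo("YOUR_USERNAME/legal_hallucinations_subset", repo_type="dataset", private=False)
+ ```
+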
+ ## Important Notes
+
+ - **Private Datasets**: If you want a private dataset, modify the script to set `private=True` in the `push_to_hub()` call (see the sketch after this list), or create a private repository on Hugging Face
+ - **Large Files**: The dataset uses Git LFS for large files (handled automatically by the `datasets` library)
+ - **README.md**: The README.md file will be automatically uploaded and displayed on the dataset page
+ - **Token Permissions**: Make sure your Hugging Face token has write permissions
+
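+ A minimal sketch of the private push, assuming the dataset was already saved locally (`YOUR_USERNAME` is a placeholder):
+ ```python
+ from datasets import load_from_disk
+
+ dataset = load_from_disk("legal_hallucinations_subset")
+ # private=True creates/updates the Hub repo as private
+ dataset.push_to_hub("YOUR_USERNAME/legal_hallucinations_subset", private=True)
+ ```
+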
+ ## Verification
+
+ After pushing, verify the dataset is available:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("YOUR_USERNAME/legal_hallucinations_subset")
+ print(dataset)
+ ```
+
+ You should see all 6 splits with 1000 rows each.
+
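+ You can also spot-check a single split (a quick sketch with the same placeholder repo id):
+ ```python
+ from datasets import load_dataset
+
+ affirm = load_dataset("YOUR_USERNAME/legal_hallucinations_subset", split="affirm_reverse")
+ print(len(affirm))  # expected: 1000
+ ```
+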
+ ## Troubleshooting
+
+ - **Authentication Error**: Make sure you're logged in with `huggingface-cli login`
+ - **Repository Not Found**: Create the repository on Hugging Face first
+ - **Permission Denied**: Check that your token has write permissions
+ - **Large File Issues**: The `datasets` library handles Git LFS automatically, but ensure you have `git-lfs` installed
+
README.md ADDED
@@ -0,0 +1,144 @@
+ # Legal Hallucinations Subset
+
+ ## Dataset Description
+
+ This is a curated subset of the [reglab/legal_hallucinations](https://huggingface.co/datasets/reglab/legal_hallucinations) dataset, containing 1000 randomly sampled rows for each of 6 specific legal reasoning tasks.
+
+ The original dataset was created for the paper: Dahl et al., "Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models," Journal of Legal Analysis (2024, forthcoming). Preprint: [arxiv:2401.01301](https://arxiv.org/abs/2401.01301)
+
+ ## Dataset Details
+
+ ### Dataset Summary
+
+ This subset focuses on 6 specific legal reasoning tasks, with 1000 examples per task (6000 rows total). Each task is provided as a separate dataset split for easy access and evaluation.
+
+ ### Supported Tasks and Usage
+
+ The dataset contains the following splits (one per task):
+
+ - `affirm_reverse` - Determining whether a court affirmed or reversed a lower court's decision
+ - `citation_retrieval` - Retrieving correct legal citations
+ - `cited_precedent` - Identifying cited legal precedents
+ - `court_id` - Identifying the court that decided a case
+ - `majority_author` - Identifying the author of a majority opinion
+ - `year_overruled` - Identifying when a case was overruled
+
+ ### Dataset Structure
+
+ Each split contains the following columns:
+
+ - `task` (string): The name of the task
+ - `query` (string): The exact query/question submitted
+ - `example_correct_answer` (string): An example of a correct answer to the query
+
+ ### Data Splits
+
+ | Split Name         | Number of Examples |
+ | ------------------ | ------------------ |
+ | affirm_reverse     | 1000               |
+ | citation_retrieval | 1000               |
+ | cited_precedent    | 1000               |
+ | court_id           | 1000               |
+ | majority_author    | 1000               |
+ | year_overruled     | 1000               |
+
+ ## Dataset Creation
+
+ ### Curation Process
+
+ 1. **Source Data**: Loaded from `original_dataset.csv` (subset of reglab/legal_hallucinations)
+ 2. **Column Selection**: Kept only `task`, `query`, and `example_correct_answer` columns
+ 3. **Task Filtering**: Filtered to only include the 6 specified tasks
+ 4. **Quality Filtering**: Removed rows with missing or empty `example_correct_answer` values
+ 5. **Deduplication**: Removed duplicate rows
+ 6. **Sampling**: Randomly sampled exactly 1000 rows per task (using random seed 42 for reproducibility)
+
+ ### Filtering Criteria
+
+ - **Columns**: Only `task`, `query`, and `example_correct_answer` are included
+ - **Tasks**: Only the following 6 tasks are included:
+   - `affirm_reverse`
+   - `citation_retrieval`
+   - `cited_precedent`
+   - `court_id`
+   - `majority_author`
+   - `year_overruled`
+ - **Quality**: All rows have non-empty `example_correct_answer` values
+ - **Deduplication**: Duplicate rows have been removed
+ - **Sampling**: Exactly 1000 rows per task (or all available rows if fewer than 1000)
+
+ ## Usage
+
+ ### Loading the Dataset
+
+ ```python
+ from datasets import load_from_disk
+
+ # Load the entire dataset
+ dataset = load_from_disk("legal_hallucinations_subset")
+
+ # Access a specific task split
+ affirm_reverse_data = dataset["affirm_reverse"]
+ citation_retrieval_data = dataset["citation_retrieval"]
+
+ # Iterate over examples in a split
+ for example in affirm_reverse_data:
+     print(f"Query: {example['query']}")
+     print(f"Correct Answer: {example['example_correct_answer']}")
+ ```
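+
+ Once the dataset has been pushed to the Hub (this repository is `nguha/legal_hallucinations_subset`), the same splits can be loaded remotely:
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("nguha/legal_hallucinations_subset")
+ ```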
+
+ ### Example
+
+ ```python
+ from datasets import load_from_disk
+
+ dataset = load_from_disk("legal_hallucinations_subset")
+
+ # Get an example from the affirm_reverse split
+ example = dataset["affirm_reverse"][0]
+ print(example)
+ # {
+ #   'task': 'affirm_reverse',
+ #   'query': 'Did the court in ... affirm or reverse...?',
+ #   'example_correct_answer': 'affirm'
+ # }
+ ```
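+
+ A common use of these splits is scoring a model's answers against `example_correct_answer`. A minimal sketch (`ask_model` is a hypothetical stand-in for your LLM call, and exact string match is only a rough proxy for tasks with free-form answers such as `cited_precedent`):
+ ```python
+ from datasets import load_from_disk
+
+ def ask_model(query: str) -> str:
+     # Hypothetical: replace with a call to your LLM of choice.
+     raise NotImplementedError
+
+ split = load_from_disk("legal_hallucinations_subset")["affirm_reverse"]
+ correct = sum(
+     ask_model(ex["query"]).strip().lower() == ex["example_correct_answer"].strip().lower()
+     for ex in split
+ )
+ print(f"Exact-match accuracy: {correct / len(split):.3f}")
+ ```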
+
+ ## Dataset Statistics
+
+ - **Total Rows**: 6000 (1000 per task × 6 tasks)
+ - **Columns**: 3 (task, query, example_correct_answer)
+ - **Splits**: 6 (one per task)
+ - **Random Seed**: 42 (for reproducibility)
+
+ ## Source and Citation
+
+ ### Source Dataset
+
+ This dataset is a subset of:
+
+ - **Dataset**: [reglab/legal_hallucinations](https://huggingface.co/datasets/reglab/legal_hallucinations)
+ - **Repository**: [Stanford Regulation, Evaluation, and Governance Lab](https://huggingface.co/reglab)
+
+ ### Citation
+
+ If you use this dataset, please cite the original paper:
+
+ ```bibtex
+ @article{dahl2024large,
+   title={Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models},
+   author={Dahl, Matthew and Magesh, Varun and Suzgun, Mirac and Ho, Daniel E.},
+   journal={Journal of Legal Analysis},
+   year={2024},
+   note={Forthcoming},
+   eprint={2401.01301},
+   archivePrefix={arXiv}
+ }
+ ```
+
+ ## License
+
+ [More Information Needed] - Please refer to the original dataset license.
+
+ ## Dataset Card Contact
+
+ For questions or issues related to this subset, please refer to the original dataset repository or open an issue.
__pycache__/create_subset.cpython-313.pyc ADDED
Binary file (6.65 kB)
create_subset.py ADDED
@@ -0,0 +1,158 @@
+ #!/usr/bin/env python3
+ """
+ Script to create a subset of the legal_hallucinations dataset.
+
+ This script:
+ 1. Loads original_dataset.csv
+ 2. Filters to specific columns and tasks
+ 3. Removes duplicates
+ 4. Samples 1000 rows per task
+ 5. Creates a Hugging Face DatasetDict with task-based splits
+ """
+
+ import pandas as pd
+ import random
+ import argparse
+ from datasets import DatasetDict, Dataset
+ from pathlib import Path
+
+ # Configuration
+ INPUT_CSV = "original_dataset.csv"
+ OUTPUT_DIR = "legal_hallucinations_subset"
+ RANDOM_SEED = 42
+ SAMPLE_SIZE = 1000
+
+ # Columns to keep
+ KEEP_COLUMNS = ["task", "query", "example_correct_answer"]
+
+ # Tasks to keep
+ KEEP_TASKS = [
+     "affirm_reverse",
+     "citation_retrieval",
+     "cited_precedent",
+     "court_id",
+     "majority_author",
+     "year_overruled"
+ ]
+
+
+ def main(push_to_hub=False, hub_repo_id=None):
+     print("Loading original dataset...")
+     # Read CSV file
+     df = pd.read_csv(INPUT_CSV, low_memory=False)
+
+     # Remove duplicate header rows if present (check if first row matches column names)
+     if len(df) > 0:
+         first_row_str = df.iloc[0].astype(str).str.lower().values
+         col_names_str = pd.Series(df.columns).str.lower().values
+         if (first_row_str == col_names_str).all():
+             print("Removing duplicate header row...")
+             df = df.iloc[1:].reset_index(drop=True)
+
+     print(f"Loaded {len(df)} rows")
+
+     # Select only the columns we need
+     print("Selecting columns...")
+     df = df[KEEP_COLUMNS].copy()
+
+     # Filter to only rows with non-empty example_correct_answer
+     print("Filtering rows with example_correct_answer...")
+     df = df[df["example_correct_answer"].notna()].copy()
+     df = df[df["example_correct_answer"].astype(str).str.strip() != ""].copy()
+     print(f"After filtering for example_correct_answer: {len(df)} rows")
+
+     # Filter to only the tasks we want
+     print("Filtering to specific tasks...")
+     df = df[df["task"].isin(KEEP_TASKS)].copy()
+     print(f"After filtering tasks: {len(df)} rows")
+
+     # Remove duplicate rows
+     print("Removing duplicate rows...")
+     initial_count = len(df)
+     df = df.drop_duplicates()
+     duplicates_removed = initial_count - len(df)
+     print(f"Removed {duplicates_removed} duplicate rows. Remaining: {len(df)} rows")
+
+     # Set random seed for reproducibility
+     random.seed(RANDOM_SEED)
+
+     # Create splits for each task
+     print("\nCreating splits for each task...")
+     splits = {}
+
+     for task in KEEP_TASKS:
+         task_df = df[df["task"] == task].copy()
+         task_count = len(task_df)
+
+         if task_count == 0:
+             print(f"  Warning: No rows found for task '{task}'")
+             continue
+
+         # Sample rows
+         if task_count <= SAMPLE_SIZE:
+             sampled_df = task_df.copy()
+             print(f"  {task}: {task_count} rows (all rows, less than {SAMPLE_SIZE})")
+         else:
+             # Randomly sample exactly SAMPLE_SIZE rows
+             sampled_df = task_df.sample(n=SAMPLE_SIZE, random_state=RANDOM_SEED).copy()
+             print(f"  {task}: {SAMPLE_SIZE} rows sampled from {task_count} available")
+
+         # Create Dataset from pandas DataFrame
+         splits[task] = Dataset.from_pandas(sampled_df, preserve_index=False)
+
+     # Create DatasetDict
+     print("\nCreating DatasetDict...")
+     dataset_dict = DatasetDict(splits)
+
+     # Print summary
+     print("\nDataset Summary:")
+     print(f"  Total splits: {len(dataset_dict)}")
+     for split_name, split_dataset in dataset_dict.items():
+         print(f"  {split_name}: {len(split_dataset)} rows")
+
+     # Save to disk
+     print(f"\nSaving dataset to '{OUTPUT_DIR}'...")
+     output_path = Path(OUTPUT_DIR)
+     output_path.mkdir(exist_ok=True)
+     dataset_dict.save_to_disk(str(output_path))
+
+     print(f"Dataset saved successfully to '{OUTPUT_DIR}/'")
+
+     # Push to Hugging Face Hub if requested (using push_to_hub API)
+     if push_to_hub:
+         if not hub_repo_id:
+             raise ValueError("hub_repo_id must be provided when push_to_hub is True")
+
+         print(f"\nPushing dataset to Hugging Face Hub: {hub_repo_id}...")
+         dataset_dict.push_to_hub(
+             hub_repo_id,
+             private=False,  # Set to True if you want a private dataset
+         )
+         print(f"Dataset successfully pushed to https://huggingface.co/datasets/{hub_repo_id}")
+
+     print("\nTo load the dataset:")
+     print("  from datasets import load_dataset")
+     print("  dataset = load_dataset('nguha/legal_hallucinations_subset')")
+     print("\nOr load locally:")
+     print("  from datasets import load_from_disk")
+     print(f"  dataset = load_from_disk('{OUTPUT_DIR}')")
+
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser(description="Create a subset of the legal_hallucinations dataset")
+     parser.add_argument(
+         "--push-to-hub",
+         action="store_true",
+         help="Push the dataset to Hugging Face Hub"
+     )
+     parser.add_argument(
+         "--hub-repo-id",
+         type=str,
+         default=None,
+         help="Hugging Face repository ID (e.g., 'username/dataset-name')"
+     )
+
+     args = parser.parse_args()
+
+     main(push_to_hub=args.push_to_hub, hub_repo_id=args.hub_repo_id)
legal_hallucinations_subset/affirm_reverse/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd600bc2c09093daf7173b42061f74b30313222ee55ed2af1c916f5ea7942e6d
+ size 323240
legal_hallucinations_subset/affirm_reverse/dataset_info.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "citation": "",
+   "description": "",
+   "features": {
+     "task": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "query": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "example_correct_answer": {
+       "dtype": "string",
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": ""
+ }
legal_hallucinations_subset/affirm_reverse/state.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "a8f9e62c0560c77e",
+   "_format_columns": null,
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }
legal_hallucinations_subset/citation_retrieval/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db3f76be50e3d2fc08d107a69aed4d4e34f0c3d605c22881d2679a3c0328a14f
+ size 326560
legal_hallucinations_subset/citation_retrieval/dataset_info.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "citation": "",
+   "description": "",
+   "features": {
+     "task": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "query": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "example_correct_answer": {
+       "dtype": "string",
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": ""
+ }
legal_hallucinations_subset/citation_retrieval/state.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "39080d2a0a591660",
+   "_format_columns": null,
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }
legal_hallucinations_subset/cited_precedent/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4950ddcc9b704cb8ea36b37e9956e8948f8717b257791b1f167f884b18adc3b7
+ size 507304
legal_hallucinations_subset/cited_precedent/dataset_info.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "citation": "",
+   "description": "",
+   "features": {
+     "task": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "query": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "example_correct_answer": {
+       "dtype": "string",
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": ""
+ }
legal_hallucinations_subset/cited_precedent/state.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "0d7e7b8a463413c5",
+   "_format_columns": null,
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }
legal_hallucinations_subset/court_id/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f6aea6e1360ca4991aa5f53582a26b908aabc3e26ad0528fd6407e98383398c5
+ size 349400
legal_hallucinations_subset/court_id/dataset_info.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "citation": "",
+   "description": "",
+   "features": {
+     "task": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "query": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "example_correct_answer": {
+       "dtype": "string",
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": ""
+ }
legal_hallucinations_subset/court_id/state.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "56dc359020d51abc",
+   "_format_columns": null,
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }
legal_hallucinations_subset/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["affirm_reverse", "citation_retrieval", "cited_precedent", "court_id", "majority_author", "year_overruled"]}
legal_hallucinations_subset/majority_author/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:938a1a0665d68b49a87035459eb03147669d1ed6e7a2c0bd23b7fed8f5e2409d
+ size 343776
legal_hallucinations_subset/majority_author/dataset_info.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "citation": "",
+   "description": "",
+   "features": {
+     "task": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "query": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "example_correct_answer": {
+       "dtype": "string",
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": ""
+ }
legal_hallucinations_subset/majority_author/state.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "53c234973aa2f6db",
+   "_format_columns": null,
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }
legal_hallucinations_subset/year_overruled/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4bd9e3d1022c84a77978a82a1b64348e7bc138b2ea3c09ae088918bb22c9863d
+ size 92344
legal_hallucinations_subset/year_overruled/dataset_info.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "citation": "",
+   "description": "",
+   "features": {
+     "task": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "query": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "example_correct_answer": {
+       "dtype": "string",
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": ""
+ }
legal_hallucinations_subset/year_overruled/state.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "deb966a227bbadc3",
+   "_format_columns": null,
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }
original_analysis.ipynb ADDED
@@ -0,0 +1,578 @@
+ {
+  "cells": [
+   {
+    "cell_type": "code",
+    "execution_count": 1,
+    "id": "9b5f54f9-3d01-4d4b-b7b2-a3df730c33c9",
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "import pandas as pd"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 2,
+    "id": "7471264a-cb58-481b-89e8-9266f922f730",
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stderr",
+      "output_type": "stream",
+      "text": [
+       "/var/folders/p0/5fzn9rtx1ps841s4_3tw4t440000gn/T/ipykernel_26961/4224379534.py:1: DtypeWarning: Columns (0,5,7,9,12,13) have mixed types. Specify dtype option on import or set low_memory=False.\n",
+       " data = pd.read_csv(\"original_dataset.csv\")\n"
+      ]
+     }
+    ],
+    "source": [
+     "data = pd.read_csv(\"original_dataset.csv\")"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 6,
+    "id": "57f64ec3-a064-41b9-a3aa-c4838e0e4f6e",
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "(745608, 15)\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "Index(['id', 'task', 'court_level', 'prompt_style', 'llm', 'temperature',\n",
+        " 'case_source', 'court_slug', 'citation', 'year', 'query', 'llm_output',\n",
+        " 'correctness_score', 'hallucination', 'example_correct_answer'],\n",
+        " dtype='object')"
+       ]
+      },
+      "execution_count": 6,
+      "metadata": {},
+      "output_type": "execute_result"
+     }
+    ],
+    "source": [
+     "print(data.shape)\n",
+     "data.columns"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 21,
+    "id": "4f6b31ef-f0c0-4a2d-b2ee-8290424773a4",
+    "metadata": {},
+    "outputs": [
+     {
+      "data": {
+       "text/plain": [
+        "[{'task': 'task',\n",
+        " 'query': 'query',\n",
+        " 'example_correct_answer': 'example_correct_answer'}]"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "data": {
+       "text/plain": [
+        "array(['example_correct_answer'], dtype=object)"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "[{'task': 'affirm_reverse',\n",
+        " 'query': 'Did the court in Road Sprinkler Fitters Local Union No. 669 v. Independent Sprinkler Corp., 10 F.3d. 1563 (1994) affirm or reverse the lower court\\'s decision? Say \"affirm\" or \"reverse\" only.',\n",
+        " 'example_correct_answer': 'reverse'}]"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "data": {
+       "text/plain": [
+        "array(['affirm', 'reverse'], dtype=object)"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "[{'task': 'case_existence',\n",
+        " 'query': 'Is the case Marsh v. Fulton County, 77 U.S. 676 (1870), a real case? Say \"yes\" or \"no\" only.',\n",
+        " 'example_correct_answer': '1'}]"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "data": {
+       "text/plain": [
+        "array(['1'], dtype=object)"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "[{'task': 'citation_retrieval',\n",
+        " 'query': 'What is the citation for the given case? Provide ONLY the citation in \"<volume>, <reporter>, <page>\" format, nothing else.\\n\\nExamples:\\n```\\nCase: Brown v. Board of Education\\nAnswer: 347 U.S. 483\\n\\nCase: Bowers v. Hardwick\\nAnswer: 478 U.S. 186\\n\\nCase: McCulloch v. Maryland\\nAnswer: 17 U.S. 316\\n```\\n\\nCase: Federal Power Commission v. Louisiana Power & Light Co. et al.\\nAnswer:',\n",
+        " 'example_correct_answer': '406 U.S. 621'}]"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "data": {
+       "text/plain": [
+        "array(['185 F.2d 608', '262 F. 1017', '146 F.3d 815', ...,\n",
+        " '11 F. Supp. 675', '307 F. Supp. 462', '704 F. Supp. 1503'],\n",
+        " dtype=object)"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "[{'task': 'cited_precedent',\n",
+        " 'query': 'What is a precedent that is cited in the majority opinion of the given case? Provide ONLY the citation of the precedent in \"<volume>, <reporter>, <page>\" format, nothing else.\\n\\nExamples:\\n```\\nCase: Brown v. Board of Education, 347 U.S. 483 (1954)\\nAnswer: Plessy v. Ferguson, 163 U.S. 537\\n\\nCase: Bowers v. Hardwick, 478 U.S. 186 (1986)\\nAnswer: Griswold v. Connecticut, 381 U.S. 479\\n\\nCase: McConnell v. Federal Election Commission, 540 U.S. 93 (2003)\\nAnswer: Buckley v. Valeo, 424 U.S. 1\\n```\\n\\nCase: Young v. The Bank of Alexandria, 9 U.S. 45 (1809)\\nAnswer:',\n",
+        " 'example_correct_answer': 'This opinion does not cite any cases.'}]"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "data": {
+       "text/plain": [
+        "array(['Pruitt v. Litman, D.C.E.D.Pa.1949, 89 F. Supp. 705',\n",
+        " 'This opinion does not cite any cases.',\n",
+        " 'States v. Pappert, 112 F.3d 1073, 1076', ...,\n",
+        " 'Hurley v. Pusey & Jones Co. (D. C.), 274 F. 487, 488',\n",
+        " 'Calhoon v. Harvey, 379 U.S. 134, 85 S.Ct. 292, 13 L. Ed. 2d 190 (1964',\n",
+        " 'Ballweg v. City of Springfield, 114 Ill.2d 107, 102 Ill. Dec. 360499 N.E.2d 1373 (1986'],\n",
+        " dtype=object)"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "[{'task': 'court_id',\n",
+        " 'query': 'Which federal district court decided the case Hardeman v. United States, 682 F. Supp. 2d 947 (2010)? Provide the name of the district court ONLY, nothing else.',\n",
+        " 'example_correct_answer': 'United States District Court for the Eastern District of Arkansas'}]"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "data": {
+       "text/plain": [
+        "array(['1', '5', '10', '8', '3', '2', '7', '4', '12', '9', '6', '11',\n",
+        " '13', 'Supreme Court',\n",
+        " 'United States District Court for the Eastern District of Arkansas',\n",
+        " 'United States District Court for the District of Vermont',\n",
+        " 'United States District Court for the District of Nevada',\n",
+        " 'United States District Court for the Southern District of Ohio',\n",
+        " 'United States District Court for the District of Hawaii',\n",
+        " 'United States District Court for the District of Oregon',\n",
+        " 'United States District Court for the Western District of Louisiana',\n",
+        " 'United States District Court for the District of Montana',\n",
+        " 'United States District Court for the District of Maine',\n",
+        " 'United States District Court for the Southern District of Iowa',\n",
+        " 'United States District Court for the Eastern District of Illinois',\n",
+        " 'United States District Court for the District of Utah',\n",
+        " 'United States District Court for the Northern District of Oklahoma',\n",
+        " 'United States District Court for the Eastern District of Tennessee',\n",
+        " 'United States District Court for the Southern District of Alabama',\n",
+        " 'United States District Court for the District of Delaware',\n",
+        " 'United States District Court for the District of Massachusetts',\n",
+        " 'United States District Court for the Western District of Missouri',\n",
+        " 'United States District Court for the District of Columbia',\n",
+        " 'United States District Court for the Middle District of Pennsylvania',\n",
+        " 'United States District Court for the District of Idaho',\n",
+        " 'United States District Court for the District of Maryland',\n",
+        " 'United States District Court for the District of New Hampshire',\n",
+        " 'United States District Court for the Middle District of Georgia',\n",
+        " 'United States District Court for the Southern District of Mississippi',\n",
+        " 'United States District Court for the District of Alaska',\n",
+        " 'United States District Court for the Southern District of Texas',\n",
+        " 'United States District Court for the District of Arizona',\n",
+        " 'United States District Court, D. South Dakota, Southern Division',\n",
+        " 'United States District Court for the Northern District of Georgia',\n",
+        " 'United States District Court for the Northern District of Illinois',\n",
+        " 'United States District Court for the District of Connecticut',\n",
+        " 'United States District Court for the Eastern District of Texas',\n",
+        " 'United States District Court for the District of New Mexico',\n",
+        " 'United States District Court for the District of the Virgin Islands',\n",
+        " 'United States District Court for the Eastern District of Virginia',\n",
+        " 'United States District Court for the Southern District of Florida',\n",
+        " 'United States District Court for the Middle District of North Carolina',\n",
+        " 'United States District Court for the District of South Carolina',\n",
+        " 'United States District Court for the Eastern District of Wisconsin',\n",
+        " 'United States District Court for the Northern District of Iowa',\n",
+        " 'United States District Court for the Northern District of California',\n",
+        " 'United States District Court for the Northern District of Ohio',\n",
+        " 'United States District Court for the District of Nebraska',\n",
+        " 'United States District Court for the Middle District of Tennessee',\n",
+        " 'United States District Court for the District of New Jersey',\n",
+        " 'United States District Court for the District of Colorado',\n",
+        " 'United States District Court for the Western District of Oklahoma',\n",
+        " 'United States District Court for the Eastern District of Kentucky',\n",
+        " 'United States District Court for the District of Wyoming',\n",
+        " 'United States District Court for the District of Kansas',\n",
+        " 'United States District Court for the Western District of Virginia',\n",
+        " 'United States District Court for the District of Minnesota',\n",
+        " 'United States District Court for the Western District of North Carolina',\n",
+        " 'United States District Court for the District of Rhode Island',\n",
+        " 'United States District Court for the Southern District of West Virginia',\n",
+        " 'United States District Court for the Middle District of Florida',\n",
+        " 'United States District Court for the Western District of Texas',\n",
+        " 'United States District Court for the Western District of Kentucky',\n",
+        " 'United States District Court for the Eastern District of Louisiana',\n",
+        " 'United States District Court for the District of Puerto Rico',\n",
+        " 'United States District Court for the District of South Dakota',\n",
+        " 'United States District Court for the Eastern District of New York',\n",
+        " 'United States District Court for the Northern District of Mississippi',\n",
+        " 'United States District Court for the Eastern District of Washington',\n",
+        " 'United States District Court for the Western District of Wisconsin',\n",
+        " 'United States District Court for the Southern District of Indiana',\n",
+        " 'United States District Court for the Northern District of New York',\n",
+        " 'United States District Court for the District of North Dakota',\n",
+        " 'United States District Court for the Southern District of Georgia',\n",
+        " 'United States District Court for the Eastern District of Missouri',\n",
+        " 'United States District Court for the Eastern District of Pennsylvania',\n",
+        " 'United States District Court for the Northern District of Indiana',\n",
+        " 'United States District Court for the Western District of Washington',\n",
+        " 'United States District Court for the Northern District of Alabama',\n",
+        " 'United States District Court for the Northern District of Texas',\n",
+        " 'United States District Court for the District of Florida',\n",
+        " 'United States District Court for the Eastern District of North Carolina',\n",
+        " 'United States District Court for the Western District of South Carolina',\n",
+        " 'United States District Court for the Eastern District of South Carolina',\n",
+        " 'United States District Court for the Southern District of New York',\n",
+        " 'United States District Court for the Eastern District of Michigan',\n",
+        " 'United States District Court for the Western District of Arkansas',\n",
+        " 'United States District Court for the Eastern District of Oklahoma',\n",
+        " 'United States District Court for the Western District of Pennsylvania',\n",
+        " 'United States District Court for the Northern District of West Virginia',\n",
+        " 'United States District Court for the Middle District of Alabama',\n",
+        " 'United States District Court for the Southern District of Illinois',\n",
+        " 'United States District Court for the Central District of California',\n",
+        " 'United States District Court for the Western District of Tennessee',\n",
+        " 'United States District Court for the Southern District of Missouri',\n",
+        " 'United States District Court for the Southern District of California',\n",
+        " 'United States District Court for the Western District of Michigan',\n",
+        " 'United States District Court for the Northern District of Florida',\n",
+        " 'United States District Court for the Eastern District of California',\n",
+        " 'United States District Court, D. North Dakota',\n",
+        " 'United States District Court for the Central District of Illinois',\n",
+        " 'United States District Court, D. Puerto Rico',\n",
+        " 'United States District Court for the District of Arkansas',\n",
+        " 'United States District Court for the Middle District of Louisiana',\n",
+        " 'United States District Court for the District of Tennessee',\n",
+        " 'United States Court of Appeals for the District of Columbia',\n",
+        " 'nited States District Court for the District of New Mexico',\n",
+        " 'United States District Court, N.D. West Virginia',\n",
+        " 'United States District Court for the Western District of New York',\n",
+        " 'Arkansas', 'Vermont', 'Nevada', 'Ohio', 'Hawaii', 'Montana',\n",
+        " 'Maine', 'Iowa', 'Utah', 'Alaska', 'Louisiana', 'Illinois',\n",
+        " 'Tennessee', 'Alabama', 'Delaware', 'Massachusetts', 'D.C.',\n",
+        " 'Pennsylvania', 'Maryland', 'New Hampshire', 'Oregon',\n",
+        " 'Mississippi', 'Arizona', 'Idaho', 'Georgia', 'Oklahoma', 'Texas',\n",
+        " 'Virginia', 'Wisconsin', 'California', 'New Mexico',\n",
+        " 'Virgin Islands', 'Missouri', 'North Carolina', 'Nebraska',\n",
+        " 'South Dakota', 'Connecticut', 'Colorado', 'New Jersey',\n",
+        " 'Kentucky', 'Florida', 'South Carolina', 'Kansas', 'West Virginia',\n",
+        " 'Wyoming', 'Rhode Island', 'Minnesota', 'Puerto Rico', 'New York',\n",
+        " 'Indiana', 'Washington', 'North Dakota', 'Michigan'], dtype=object)"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "[{'task': 'fake_case_existence',\n",
+        " 'query': 'Is the case Ramsey v. Globaltech Industries, 94 U.S. 411, a real case? Say \"yes\" or \"no\" only.',\n",
+        " 'example_correct_answer': '0'}]"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "data": {
+       "text/plain": [
+        "array(['0'], dtype=object)"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "[{'task': 'fake_dissent',\n",
+        " 'query': 'What did Judge Kanne argue in their dissent in United States v. Ames Sintering Co., 927 F.2d 232 (1990)?',\n",
+        " 'example_correct_answer': '1'}]"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "data": {
+       "text/plain": [
+        "array(['1'], dtype=object)"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "[{'task': 'majority_author',\n",
+        " 'query': 'Who wrote the majority opinion in Rhoades v. Reinke, 671 F.3d 856 (2011)? Provide the first and the last name of the judge ONLY.',\n",
+        " 'example_correct_answer': 'PER CURIAM:'}]"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "data": {
+       "text/plain": [
+        "array(['Magruder, Chief Judge', 'PER CURIAM.', 'Circuit Judge Sanborn',\n",
+        " ..., 'Senior District Judge Dupree',\n",
+        " 'District Judge Gerald Bruce Lee', 'Finch, Judge:'], dtype=object)"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "[{'task': 'quotation',\n",
+        " 'query': \"Provide a verbatim quotation from the majority opinion in the case Wolston v. Reader's Digest Association, Inc., et al., 443 U.S. 157 (1978). The quotation MUST be word-for-word from the majority opinion. Wrap the quotation in <quote></quote> tags.\",\n",
+        " 'example_correct_answer': 'Both lower courts found that petitioner became a public figure at the time of his contempt citation in 1958. See 188 U.S.App.D.C. at 189, 578 F.2d at 431; 429 F.Supp. at 176-177. Petitioner argued below that even if he was once a public figure, the passage of time has restored him to the status of a private figure for purposes of the First Amendment. Both the District Court and the Court of Appeals rejected this argument. 188 U.S.App.D.C. at 189, 578 F.2d at 431; 429 F.Supp. at 178. And petitioner has abandoned the argument in this Court. Reply Brief for Petitioner 5-6, n. 8; Tr. of Oral Arg.10. Because petitioner does not press the issue in this Court, and because we conclude that petitioner was not a public figure in 1958, we need not and do not decide whether or when an individual who was once a public figure may lose that status by the passage of time.'}]"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "data": {
+       "text/plain": [
+        "array(['United States v. Carter, 5 Cir., 1948, 171 F.2d 530, 532, is distinguishable, for there the government was allowed to maintain an action “to secure restitution of overcharges that had been made while the 1946 Act was in full effect.”Other cases, in so far as they tend against the conclusion we have reached, are to us unper suasive in their reasoning.',\n",
+        " 'The judgment in the above numbered and entitled’ cause is affirmed.',\n",
+        " 'I. Background', ...,\n",
+        " 'Before the Court are plaintiffs Objection to Removal, filed December 29, 1994, and defendant’s responses filed January 18, 1995 and February 6,1995.For the reasons stated below, the Court will retain the case.',\n",
+        " 'The action was entered June 24, 1941 and on July 17 both defendants filed motions for an extension of twenty days within which to file answers.July 25th the defendant F. M. Hoyt Shoe Corporation filed a motion to dismiss the action on the ground that it was not engaged in the manufacture of shoes in interstate commerce and not subject to the so-called wages and hours law.',\n",
+        " 'Petitioners, in their Amended Petition, assert a number of new grounds in support of their Petition for the Writ of Habeas Corpus which were not asserted in the original Petition filed in this court, or in the Petition for Writ of Habeas Corpus filed in the Utah Supreme Court, or on their appeal from the conviction to the Supreme Court of the State of Utah.229 P.2d 289.'],\n",
+        " dtype=object)"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "[{'task': 'fake_year_overruled',\n",
+        " 'query': 'What year was Ferens et Ux. v. John Deere Co., Aka Deere & Co., 494 U.S. 516, overruled? Provide the year only.',\n",
+        " 'example_correct_answer': '1'}]"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "data": {
+       "text/plain": [
+        "array(['1'], dtype=object)"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "[{'task': 'year_overruled',\n",
+        " 'query': 'What year was the given case overruled? Provide the year only.\\n\\nExamples:\\n```\\nCase: Whitney v. California, 274 U.S. 357\\nAnswer: 1969\\n\\nCase: Austin v. Michigan Chamber of Commerce, 494 U.S. 652\\nAnswer: 2010\\n```\\n\\nCase: Arkansas v. Sanders, 442 U.S. 753\\nAnswer:',\n",
+        " 'example_correct_answer': '1991'}]"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "data": {
+       "text/plain": [
+        "array(['1830', '1844', '1810', '1868', '1832', '1881', '1887', '1851',\n",
+        " '1941', '1938', '1857', '1890', '1883', '1861', '1918', '1991',\n",
+        " '1987', '1874', '1871', '1892', '1885', '1944', '1939', '1952',\n",
+        " '1976', '1880', '1960', '1877', '1879', '1913', '1977', '1894',\n",
+        " '1990', '1964', '1984', '1970', '2002', '1988', '1979', '1914',\n",
+        " '1983', '1968', '1940', '1930', '1996', '1965', '1916', '1922',\n",
+        " '1923', '1942', '1969', '1982', '1933', '2007', '1947', '1957',\n",
+        " '1925', '1949', '1963', '1932', '1931', '1967', '1981', '1937',\n",
+        " '1943', '1973', '1995', '1946', '1955', '1971', '1978', '1966',\n",
+        " '1972', '2018', '1961', '1989', '1980', '1997', '1974', '2022',\n",
+        " '2000', '1985', '2019', '1994', '2016', '2003', '2009', '2006',\n",
+        " '1993', '2013', '2010', '2015', '1998'], dtype=object)"
+       ]
+      },
+      "metadata": {},
+      "output_type": "display_data"
+     },
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "\n"
+      ]
+     }
+    ],
+    "source": [
+     "columns = [\"task\", \"query\", \"example_correct_answer\"]\n",
+     "\n",
+     "for task in data[\"task\"].unique():\n",
+     " display(data[data[\"task\"] == task][columns].sample(1).to_dict(orient=\"records\"))\n",
+     " display(data[data[\"task\"] == task][columns][\"example_correct_answer\"].unique())\n",
+     " print()"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "id": "363feeb8-0ab5-4d52-b20a-116913f2f551",
+    "metadata": {},
+    "outputs": [],
+    "source": []
+   }
+  ],
+  "metadata": {
+   "kernelspec": {
+    "display_name": "Python 3 (ipykernel)",
+    "language": "python",
+    "name": "python3"
+   },
+   "language_info": {
+    "codemirror_mode": {
+     "name": "ipython",
+     "version": 3
+    },
+    "file_extension": ".py",
+    "mimetype": "text/x-python",
+    "name": "python",
+    "nbconvert_exporter": "python",
+    "pygments_lexer": "ipython3",
+    "version": "3.11.4"
+   }
+  },
+  "nbformat": 4,
+  "nbformat_minor": 5
+ }
original_dataset.csv ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c55876bf6165f00cb7e7f222a9a2ef1645b315ff5628b526afc682bea7c52e40
3
+ size 424081806
push_dataset.sh ADDED
@@ -0,0 +1,27 @@
+ #!/bin/bash
+ # Script to generate the dataset and push to Hugging Face
+
+ set -e  # Exit on error
+
+ echo "Step 1: Generating dataset..."
+ python3 create_subset.py
+
+ echo ""
+ echo "Step 2: Adding files to git..."
+ git add legal_hallucinations_subset/
+ git add README.md
+ git add create_subset.py
+ git add requirements.txt
+
+ echo ""
+ echo "Step 3: Committing changes..."
+ git commit -m "Add legal hallucinations subset dataset with 6 task splits"
+
+ echo ""
+ echo "Step 4: Pushing to Hugging Face..."
+ git push origin main
+
+ echo ""
+ echo "✅ Dataset successfully pushed to Hugging Face!"
+ echo "   View at: https://huggingface.co/datasets/nguha/legal_hallucinations_subset"
+
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ pandas>=2.0.0
+ datasets>=2.14.0
+ huggingface_hub>=0.16.0
+