---
license: cc-by-sa-4.0
task_categories:
- text-retrieval
- question-answering
- text-ranking
language:
- en
tags:
- legal
- law
- legislative
size_categories:
- n<1K
source_datasets:
- reglab/housing_qa
dataset_info:
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: test
num_examples: 500
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_examples: 340
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_examples: 107
configs:
- config_name: default
data_files:
- split: test
path: default.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
pretty_name: HousingQA (MTEB format)
---

# HousingQA (MTEB format)
This is the HousingQA evaluation dataset, reformatted into the Massive Text Embedding Benchmark (MTEB) information retrieval dataset format.
This dataset tests the ability of information retrieval models to retrieve legislation relevant to complex, reasoning-intensive legal questions.
## Structure
As per the MTEB information retrieval dataset format, this dataset comprises three splits: `default`, `corpus`, and `queries`.
The `default` split pairs questions (`query-id`) with relevant legislation (`corpus-id`), with each pair assigned a `score` of 1.
The `corpus` split contains legislation, with the text of a law stored in the `text` key and its id stored in the `_id` key. There is also a `title` column, which is deliberately set to an empty string in all cases for compatibility with the `mteb` library.
The `queries` split contains questions, with the text of a question stored in the `text` key and its id stored in the `_id` key.
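For illustration, the three splits can be loaded with the `datasets` library as sketched below. The repository id shown is a placeholder, not part of this card.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual id of this dataset on the Hub.
REPO_ID = "your-org/housing-qa-mteb"

# Relevance judgements: query-id / corpus-id pairs, each with a score of 1.
qrels = load_dataset(REPO_ID, "default", split="test")

# Legislation: _id, title (always empty), and text.
corpus = load_dataset(REPO_ID, "corpus", split="corpus")

# Questions: _id and text.
queries = load_dataset(REPO_ID, "queries", split="queries")

print(qrels[0])    # {'query-id': ..., 'corpus-id': ..., 'score': 1.0}
print(corpus[0])   # {'_id': ..., 'title': '', 'text': ...}
print(queries[0])  # {'_id': ..., 'text': ...}
```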
## Methodology
To understand how HousingQA itself was created, refer to its documentation.
This dataset was formatted by taking the test split of HousingQA and treating questions as anchors and relevant legislation as positive passages. 500 question-legislation pairs were randomly sampled to keep the size of this dataset manageable.
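The conversion can be sketched roughly as follows. The in-memory `pairs` list and its field names are illustrative stand-ins, not the actual schema of reglab/housing_qa; the sketch only shows how the three JSONL files described above are derived from question-legislation pairs.

```python
import json
import random

# Illustrative stand-ins for (question_id, question_text, statute_id, statute_text)
# tuples drawn from the HousingQA test split; the real column names in
# reglab/housing_qa may differ.
pairs = [
    ("q1", "Can a landlord evict a tenant without written notice?", "s1", "Statute text..."),
    ("q2", "Must a landlord return a security deposit within 30 days?", "s2", "Statute text..."),
]

random.seed(0)
# In the real dataset, 500 question-legislation pairs were sampled.
sampled = random.sample(pairs, k=min(500, len(pairs)))

seen_queries, seen_docs = set(), set()
with open("default.jsonl", "w") as qrels_f, \
     open("queries.jsonl", "w") as queries_f, \
     open("corpus.jsonl", "w") as corpus_f:
    for q_id, q_text, doc_id, doc_text in sampled:
        # One relevance judgement per sampled pair, always with a score of 1.
        qrels_f.write(json.dumps({"query-id": q_id, "corpus-id": doc_id, "score": 1}) + "\n")
        if q_id not in seen_queries:
            seen_queries.add(q_id)
            queries_f.write(json.dumps({"_id": q_id, "text": q_text}) + "\n")
        if doc_id not in seen_docs:
            seen_docs.add(doc_id)
            # title is deliberately left empty for compatibility with the mteb library.
            corpus_f.write(json.dumps({"_id": doc_id, "title": "", "text": doc_text}) + "\n")
```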
## License
This dataset is licensed under CC BY-SA 4.0.
## Citation
@misc{guha2023legalbench,
title={LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models},
author={Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher RΓ© and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H. Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li},
year={2023},
eprint={2308.11462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}