---
configs:
  - config_name: default
    data_files:
      - split: train
        path:
          - bc-train/bc-*.jsonl.gz
      - split: validation
        path:
          - bc-validation/bc-*.jsonl.gz
  - config_name: bc-clean
    data_files:
      - split: train
        path:
          - bc-train/bc*.jsonl.gz
          - bc-deduped/bc*.jsonl.gz
      - split: validation
        path:
          - bc-validation/bc*.jsonl.gz
  - config_name: c4-en
    data_files:
      - split: train
        path:
          - c4-en/c4-train*.json.gz
---
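The `data_files` patterns above map each configuration to gzipped JSON Lines shards. As a hedged sketch of what that layout implies (the shard path below is illustrative, and the only assumption the reader makes is one JSON object per line), a single shard can be read directly with the standard library:

```python
import gzip
import json

def read_jsonl_gz(path):
    """Yield one parsed JSON object per line of a gzipped JSON Lines shard."""
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:  # skip blank lines defensively
                yield json.loads(line)

# Example (path is illustrative, not a real shard name):
# for record in read_jsonl_gz("bc-train/bc-00000.jsonl.gz"):
#     print(record["sentence"])
```

In practice `datasets.load_dataset` handles the shard globbing for you, as shown further below; this is only useful for inspecting a downloaded shard by hand.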

# 🫘🧮 BeanCounter - Descriptive Sentences

## Dataset Summary

BeanCounter - Descriptive Sentences consists of sentences extracted from the BeanCounter (🤗 Datasets, Paper) and C4 (🤗 Datasets, Paper) datasets, where each sentence contains at least one demographic descriptor from one of five axes: Gender and Sex, Sexual Orientation, Nationality, Race and Ethnicity, and Religion. The descriptors and axes are taken from HolisticBias. Full details of how these sentences were collected can be found in Section 3 of Wang and Levy (2024).
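For intuition, here is a minimal sketch of this kind of descriptor-based sentence filtering. It is a simplified stand-in, not the paper's actual pipeline (see Section 3 for that); the toy descriptor list and the word-boundary regex are assumptions for illustration only:

```python
import re

# Toy descriptor list for illustration; the real descriptors and axes
# come from HolisticBias and are far more extensive.
DESCRIPTORS = ["muslim", "latina", "nonbinary", "irish"]

_pattern = re.compile(
    r"\b(" + "|".join(map(re.escape, DESCRIPTORS)) + r")\b",
    flags=re.IGNORECASE,
)

def keep_sentence(sentence: str) -> bool:
    """True if the sentence contains at least one demographic descriptor."""
    return _pattern.search(sentence) is not None

sentences = [
    "The company hired a new engineer.",
    "She is the first Latina partner at the firm.",
]
matches = [s for s in sentences if keep_sentence(s)]
```

The word-boundary anchors (`\b`) keep substrings like "polish" inside "polished" from matching, which is the kind of false positive any descriptor-matching approach has to guard against.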

We include three configurations of the dataset:

- `bc-clean`: 27.0M sentences from the clean subset of BeanCounter
- `default`: 19.5M sentences from the default subset of BeanCounter (referred to as the "final" split in the paper)
- `c4-en`: 132M sentences from the en subset of C4

## How can I use this?

## License

The dataset is provided under the ODC-By license. Cite our work as:

```bibtex
@inproceedings{
  wang2024beancounter,
  title={BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text},
  author={Siyan Wang and Bradford Levy},
  booktitle={The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2024},
  url={https://openreview.net/forum?id=HV5JhUZGpP}
}
```

## In 🤗 Datasets

To load the `bc-clean` configuration with 🤗 Datasets, one can run:

```python
from datasets import load_dataset

desc_sents = load_dataset(
    "blevy41/BeanCounter",
    name="bc-clean",
)

# Print out split info
print(desc_sents, "\n")

# Inspect an observation
print(f"COLUMNS IN DATA: {','.join(desc_sents['train'][1000].keys())}\n")
print(f"EXCERPT: \n\n{desc_sents['train'][1000]['sentence'][:1000]}")
```

## Datasheets for Datasets

Please refer to the original datasets for full details of their creation.