---
configs:
  - config_name: default
    data_files:
      - split: train
        path:
          - "bc-train/bc-*.jsonl.gz"
      - split: validation
        path:
          - "bc-validation/bc-*.jsonl.gz"
  - config_name: bc-clean
    data_files:
      - split: train
        path:
          - "bc-train/bc*.jsonl.gz"
          - "bc-deduped/bc*.jsonl.gz"
      - split: validation
        path:
          - "bc-validation/bc*.jsonl.gz"
  - config_name: c4-en
    data_files:
      - split: train
        path:
          - "c4-en/c4-train*.json.gz"
---

# 🫘🧮 BeanCounter - Descriptive Sentences
## Dataset Summary
`BeanCounter - Descriptive Sentences` consists of sentences extracted from the BeanCounter ([🤗 Datasets](https://huggingface.co/datasets/blevy41/BeanCounter), [Paper](https://arxiv.org/abs/2409.17827)) and C4 ([🤗 Datasets](https://huggingface.co/datasets/allenai/c4), [Paper](https://arxiv.org/pdf/2104.08758)) datasets, where each sentence contains at least one demographic descriptor from one of five axes: Gender and Sex, Sexual Orientation, Nationality, Race and Ethnicity, and Religion. The descriptors and axes are taken from [HolisticBias](https://aclanthology.org/2022.emnlp-main.625/). Full details of how these sentences were collected can be found in Section 3 of [Wang and Levy (2024)](https://arxiv.org/abs/2409.17827).

We include three configurations of the dataset: `bc-clean`, `default`, and `c4-en`. These consist of:

- `bc-clean`: 27.0M sentences from the `clean` subset of BeanCounter
- `default`: 19.5M sentences from the `default` subset of BeanCounter (referred to as the "final" split in the paper)
- `c4-en`: 132M sentences from the `en` subset of C4
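As the file patterns in the YAML header suggest, each shard is a gzip-compressed JSON Lines file. For readers who prefer to work with shards directly rather than through 🤗 Datasets, here is a minimal sketch of the read path; the `sentence` field matches the column used in the loading example later on this card, but the example records themselves are purely illustrative:

```python
import gzip
import io
import json

# Build a tiny in-memory stand-in for a shard such as bc-train/bc-0000.jsonl.gz.
# Real shards may carry additional fields; only 'sentence' is assumed here.
records = [
    {"sentence": "The company employs women and men across all offices."},
    {"sentence": "Our Canadian subsidiary reported strong growth."},
]
buf = io.BytesIO()
with gzip.open(buf, "wt", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Read the shard back one JSON object per line.
buf.seek(0)
with gzip.open(buf, "rt", encoding="utf-8") as f:
    sentences = [json.loads(line)["sentence"] for line in f]

print(len(sentences), "sentences read")
```

The same loop works on a downloaded shard by replacing `buf` with the file path.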

## How can I use this?
### License
The dataset is provided under the [ODC-By](https://opendatacommons.org/licenses/by/1-0/) license. Cite our work as:
```text
@inproceedings{
  wang2024beancounter,
  title={BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text},
  author={Siyan Wang and Bradford Levy},
  booktitle={The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2024},
  url={https://openreview.net/forum?id=HV5JhUZGpP}
}
```

### In 🤗 Datasets
To load the `bc-clean` subset in Datasets, one can run:
```python
from datasets import load_dataset

desc_sents = load_dataset(
    "blevy41/BeanCounter",
    name="bc-clean",
)

# Print out split info
print(desc_sents, "\n")

# Inspect an observation
print(f"COLUMNS IN DATA: {','.join(desc_sents['train'][1000].keys())}\n")
print(f"EXCERPT: \n\n{desc_sents['train'][1000]['sentence'][:1000]}")
```

### Datasheets for Datasets
Please refer to the original datasets for full details of their creation.