---
dataset_info:
  features:
  - name: text
    dtype: string
tags:
- corpus
- Hong Kong
- diglossia
- Cantonese
- Traditional Chinese
language:
- yue
- zh
language_details: "yue-Hant-HK; zh-Hant-HK"
language_creators:
- found
annotations_creators:
- no-annotation
source_datasets:
- LIHKG forum posts
- OpenRice user reviews
- Apple Daily news articles
- Hong Kong Citizen Media
- Citizen News
- Stand News
- Hong Kong Inmedia
- Wikipedia (zh-HK)
configs:
- config_name: appledaily
  data_files: "appledaily_article_dedup.csv"
  sep: "\n\n"
- config_name: hkcitizenmedia
  data_files: "hkcitizenmedia_article_dedup.csv"
  sep: "\n\n"
- config_name: hkcnews
  data_files: "hkcnews_article_dedup.csv"
  sep: "\n\n"
- config_name: inmediahk
  data_files: "inmedia_article_dedup.csv"
  sep: "\n\n"
- config_name: lihkg
  data_files: "lihkg_posts_dedup_demoji_128.csv"
  sep: "\n\n"
- config_name: openrice
  data_files: "openrice_review_dedup_demoji.csv"
  sep: "\n\n"
- config_name: standnews
  data_files: "thestandnews_article_dedup.csv"
  sep: "\n\n"
- config_name: wiki_hk
  data_files: "wiki_hk.csv"
  sep: "\n\n"
license: cc-by-4.0
pretty_name: "HK Content Corpus (Cantonese & Traditional Chinese)"
---
# HK Content Corpus (Cantonese & Traditional Chinese)
## Dataset Description
- **Language:** Hong Kong Cantonese, Traditional Chinese
- **Size:** 9.44 GB
- **Source:** public web sources (news sites, online forums, an online encyclopedia, and restaurant review platforms)
This dataset contains eight cleaned source-specific corpora of **Hong Kong Cantonese** and **Traditional Chinese** text, crawled from public websites and platforms.
It was initially created for the experiments reported in **https://doi.org/10.1145/3744341**, which study the **effect of diglossia on Hong Kong language modeling**.
Each file stores plain UTF-8 text, where **each record occupies one line** and **blank lines serve as separators**.
This dataset is also available on Zenodo: **https://doi.org/10.5281/zenodo.16882351**.
Compared with the Zenodo release, we only changed the file extension from `.corpus` to `.csv` and added a header row so that Hugging Face's dataset viewer can render the files.
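
The per-source configs declared in the YAML header can be loaded with the 🤗 `datasets` library. The snippet below is a minimal sketch, assuming the repository id `SolarisCipher/hk_content_corpus` and the CSV-based loading configured above:

```python
from datasets import load_dataset

# Minimal sketch: load one source-specific config by its config name.
# The repository id below is assumed from this card's namespace.
lihkg = load_dataset("SolarisCipher/hk_content_corpus", "lihkg", split="train")

print(f"{len(lihkg):,} records")
print(lihkg[0]["text"])  # each record exposes a single `text` field
```

The same pattern applies to the other config names (`appledaily`, `hkcitizenmedia`, `hkcnews`, `inmediahk`, `openrice`, `standnews`, `wiki_hk`).
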
👉 This cleaned corpus is derived from a larger MySQL database used to store raw text during the data collection stage.
If you need the original database for reprocessing or reproduction, please refer to:
https://huggingface.co/datasets/SolarisCipher/hk_content_corpus_mysql
### Files
| Filename | Description | SHA-256 hash (computed without the header row; same content as the corresponding `.corpus` file on Zenodo) |
|----------------------------------------|------------------------------------|----------------------------------------------------------------------|
| `appledaily_article_dedup.csv` | Apple Daily news articles | `0b6ad22b7a73230fd0e44af904c0cff1773cc871417f6ad3af11a783564bca15` |
| `hkcitizenmedia_article_dedup.csv` | HK Citizen Media articles | `373a5b369d5e402e58760861e2bff2e618e7ee7fd4494988a39be1156f9dba84` |
| `hkcnews_article_dedup.csv` | Hong Kong Citizen News | `67f909cfaf7d7a67df1dc79448f24622ec525a31b73b2e3293fbaad147470e69` |
| `inmedia_article_dedup.csv` | InMedia.hk articles | `135251f3e8ba7587018b7b2814b9ea5bbebf69c98adca73d3ea4c9f9b5571957` |
| `lihkg_posts_dedup_demoji_128.csv`      | LIHKG forum posts (emoji removed)  | `8e5e6de9c219aeccdaf13e9162b00cce2eeb7f595023a7bf19d4b5660395a3ee`    |
| `openrice_review_dedup_demoji.csv` | OpenRice user reviews | `dd5835a7effe49bb96a31e0c0eab43dea17ef23b3e1d9cefdc186aca276897ce` |
| `thestandnews_article_dedup.csv` | Stand News articles | `847ef0f5809481caf4bf21100e4b34207513d724dafb9f42b162ef49079e7dba` |
| `wiki_hk.csv` | Wikipedia (zh-hk) | `bd33008802797b33df8484cf1113be6a0b38547fe13515f5c4edbf4ccad270db` |
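
To verify a downloaded file against the table above, one option (a minimal sketch, not part of the original release tooling) is to drop the first line, i.e. the header row added for the dataset viewer, and hash the remaining bytes:

```python
import hashlib

def sha256_without_header(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of a file's bytes after skipping the first (header) line."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        f.readline()  # skip the header row added for the dataset viewer
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the table above; the path is a hypothetical local copy.
print(sha256_without_header("appledaily_article_dedup.csv"))
```

Whether the remaining bytes match the Zenodo `.corpus` file exactly depends on how the header row was inserted, so treat a mismatch as a prompt to diff the first few lines rather than as certain corruption.
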
## Intended Uses
- Language model pretraining and finetuning
- Hong Kong Cantonese/Traditional Chinese linguistic modeling
- Downstream tasks such as classification or generation
NOTE: The Hong Kong National Security Law (HKNSL) took effect on 2020-06-30, which may introduce bias in user-generated content created after that date. That portion of the data should be used with caution.
## Citation
If you use this dataset, please cite the following paper:
```bibtex
@article{Yung2025HKDiglossia,
  author    = {Yung, Yiu Cheong and Lin, Ying-Jia and Kao, Hung-Yu},
  title     = {Exploring the Effectiveness of Pre-training Language Models with Incorporation of Diglossia for Hong Kong Content},
  journal   = {ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)},
  volume    = {24},
  number    = {7},
  pages     = {71:1--71:16},
  year      = {2025},
  publisher = {Association for Computing Machinery},
  doi       = {10.1145/3744341}
}
```
and optionally also cite the dataset DOI:
```bibtex
@dataset{yung_2025_16882351,
  author    = {Yung, Yiu Cheong},
  title     = {HK Content Corpus (Cantonese \& Traditional Chinese)},
  month     = aug,
  year      = 2025,
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.16882351},
  url       = {https://doi.org/10.5281/zenodo.16882351},
}
```