### Training fake image classifiers
For training a fake image classifier, it is necessary to pair the generated images with "real" images (here, "real" refers to images that are not generated by AI).
In our [paper](https://arxiv.org/abs/2411.04125), we used 11 different image datasets to sample the generators and train the [classifiers](https://huggingface.co/OwensLab/commfor-model-384): [LAION](https://laion.ai/) ([our training distribution](https://huggingface.co/datasets/OwensLab/CommunityForensics/blob/main/data/Real/laion_commfor_train_subset_2M.csv)), [ImageNet](https://www.image-net.org/), [COCO](https://cocodataset.org/), [FFHQ](https://github.com/NVlabs/ffhq-dataset), [CelebA](https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html), [MetFaces](https://github.com/NVlabs/metfaces-dataset), [AFHQ-v2](https://github.com/clovaai/stargan-v2/), [Forchheim](https://faui1-files.cs.fau.de/public/mmsec/datasets/fodb/), [IMD2020](https://staff.utia.cas.cz/novozada/db/), [Landscapes HQ](https://github.com/universome/alis), and [VISION](https://lesc.dinfo.unifi.it/VISION/).
To accurately reproduce our training settings, it is necessary to download all datasets and pair them with the generated images.
We understand that this may be inconvenient for simple prototyping,
and thus we also release the [Community Forensics-Small](https://huggingface.co/datasets/OwensLab/CommunityForensics-Small) dataset, which is paired with real datasets that have redistributable licenses and contains roughly 11% of the base dataset.
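
For quick prototyping with the small subset, loading it through the 🤗 `datasets` library might look like the sketch below. This is a minimal illustration, not our training code: it assumes a `train` split exists, and the column names depend on the actual schema, so please check the dataset viewer before relying on them.

```python
# Minimal sketch for browsing Community Forensics-Small with 🤗 datasets.
# Assumption: a "train" split exists; column names depend on the actual
# schema, so inspect a few examples before wiring up a training loop.
from datasets import load_dataset

ds = load_dataset(
    "OwensLab/CommunityForensics-Small",
    split="train",
    streaming=True,  # avoids downloading the full dataset up front
)

for example in ds.take(4):
    # Print column names and value types to confirm the schema.
    print({k: type(v).__name__ for k, v in example.items()})
```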

```
VISION     0.25 %
Forchheim  0.13 %
Metfaces   0.05 %
```

We clipped the `LAION` and `ImageNet` data to around 1.08M images to keep the real/fake ratio at 1:1. We release the links to the LAION subset we used for training [here](https://huggingface.co/datasets/OwensLab/CommunityForensics/blob/main/data/Real/laion_commfor_train_subset_2M.csv). Please note that the data may not be exactly reproducible due to link rot.
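
The clipping step itself amounts to a seeded subsample of the oversized real pools. As a hypothetical illustration (not our exact script; the paths below are placeholders), it could be done as follows:

```python
# Hypothetical sketch: clip an oversized pool of "real" image paths down to
# the number of fake images, so the real/fake ratio comes out to 1:1.
import random

def clip_to_count(real_paths: list[str], num_fake: int, seed: int = 0) -> list[str]:
    """Deterministically subsample the real image paths down to num_fake items."""
    rng = random.Random(seed)
    if len(real_paths) <= num_fake:
        return list(real_paths)
    return rng.sample(real_paths, num_fake)

# Placeholder paths; in practice these would come from the LAION/ImageNet lists.
real_pool = [f"real/{i:06d}.jpg" for i in range(20_000)]
balanced_real = clip_to_count(real_pool, num_fake=10_800)
print(len(balanced_real))  # 10800
```
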
# Dataset Creation
## Curation Rationale