*Please also check our [Community Forensics-Small](https://huggingface.co/datasets/OwensLab/CommunityForensics-Small) dataset, which contains approximately 11% of the base dataset and is paired with real data under redistributable licenses.*

*Changes:* \
*10/06/25: Added the 'real' data composition used for training, and released links to the LAION subsets we used.* \
*06/06/25: Community Forensics-Small released. Updated BibTeX to be CVPR instead of arXiv.* \
*04/09/25: Initial version released.*

### Training fake image classifiers
For training a fake image classifier, it is necessary to pair the generated images with "real" images (here, "real" refers to images that are not generated by AI).
In our [paper](https://arxiv.org/abs/2411.04125), we used 11 different image datasets for sampling the generators and training the [classifiers](https://huggingface.co/OwensLab/commfor-model-384): [LAION](https://laion.ai/) ([our training distribution](https://huggingface.co/datasets/OwensLab/CommunityForensics/blob/main/data/Real/laion_commfor_train_subset_2M.csv)/[test data](https://huggingface.co/datasets/OwensLab/CommunityForensics/blob/main/data/Real/laion_commfor_test_subset_10K.csv)), [ImageNet](https://www.image-net.org/), [COCO](https://cocodataset.org/), [FFHQ](https://github.com/NVlabs/ffhq-dataset), [CelebA](https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html), [MetFaces](https://github.com/NVlabs/metfaces-dataset), [AFHQ-v2](https://github.com/clovaai/stargan-v2/), [Forchheim](https://faui1-files.cs.fau.de/public/mmsec/datasets/fodb/), [IMD2020](https://staff.utia.cas.cz/novozada/db/), [Landscapes HQ](https://github.com/universome/alis), and [VISION](https://lesc.dinfo.unifi.it/VISION/).
To accurately reproduce our training settings, it is necessary to download all of these datasets and pair them with the generated images.
We understand that this may be inconvenient for simple prototyping, and thus we also release the [Community Forensics-Small](https://huggingface.co/datasets/OwensLab/CommunityForensics-Small) dataset, which is paired with real datasets that have redistributable licenses and contains roughly 11% of the base dataset.
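A minimal sketch of this pairing step, assuming both collections have already been downloaded locally (the directory names and the `build_labeled_pairs` helper are hypothetical, not part of our released code):

```python
import random
from pathlib import Path

def build_labeled_pairs(real_paths, fake_paths, seed=0):
    """Combine real (label 0) and generated (label 1) image paths into one
    shuffled list of (path, label) training examples."""
    examples = [(p, 0) for p in real_paths] + [(p, 1) for p in fake_paths]
    random.Random(seed).shuffle(examples)  # fixed seed keeps the pairing reproducible
    return examples

# Hypothetical directory layout; point these at your own download locations.
real_paths = sorted(Path("data/real").rglob("*.jpg"))
fake_paths = sorted(Path("data/community_forensics").rglob("*.png"))
pairs = build_labeled_pairs(real_paths, fake_paths)
```

In practice you would feed `pairs` into your framework's dataset/dataloader abstraction; the key point is simply that real and generated images end up in one shuffled, labeled pool.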
### Real data composition for training
When training our [classifiers](https://huggingface.co/OwensLab/commfor-model-384), we used the following real data composition:
```
LAION          40.30 %
ImageNet       40.30 %
CelebA          7.15 %
COCO            4.39 %
LandscapesHQ    3.34 %
FFHQ            2.34 %
IMD2020         1.17 %
AFHQv2          0.59 %
VISION          0.25 %
Forchheim       0.13 %
MetFaces        0.05 %
```
We clipped the `LAION` and `ImageNet` data to around 1.08M images each to ensure that the overall real/fake ratio is 1:1. We release links to the LAION subsets we used here: [train](https://huggingface.co/datasets/OwensLab/CommunityForensics/blob/main/data/Real/laion_commfor_train_subset_2M.csv)/[test](https://huggingface.co/datasets/OwensLab/CommunityForensics/blob/main/data/Real/laion_commfor_test_subset_10K.csv).

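As a rough sanity check on the table above (assuming the real pool is sized to match the ~2.7M generated images 1:1; the variable names here are illustrative):

```python
# Per-dataset image counts implied by the composition table, assuming
# ~2.7M real images total (matching the 2.7M generated images 1:1).
composition = {
    "LAION": 40.30, "ImageNet": 40.30, "CelebA": 7.15, "COCO": 4.39,
    "LandscapesHQ": 3.34, "FFHQ": 2.34, "IMD2020": 1.17, "AFHQv2": 0.59,
    "VISION": 0.25, "Forchheim": 0.13, "MetFaces": 0.05,
}

TOTAL_REAL = 2_700_000  # approximate; mirrors the 2.7M generated images

counts = {name: round(TOTAL_REAL * pct / 100) for name, pct in composition.items()}

# LAION and ImageNet each come out to roughly 1.09M images, consistent
# with the "clipped to around 1.08M" figure above.
print(counts["LAION"])  # → 1088100
```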
# Dataset Creation
## Curation Rationale
This dataset is created to address the limited model diversity of existing datasets for generated image detection. While some existing datasets contain millions of images, they are typically sampled from a handful of generator models. We instead sample 2.7M images from 4803 generator models, approximately 34 times more generators than the most extensive previous dataset that we are aware of.