---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: caption
    dtype: string
  splits:
  - name: train
    num_bytes: 171055893.125
    num_examples: 1087
  download_size: 170841790
  dataset_size: 171055893.125
language:
- en
task_categories:
- text-to-image
annotations_creators:
- machine-generated
size_categories:
- 1K<n<10K
---

# Disclaimer

This dataset was inspired by https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions

# Dataset Card for A subset of Vivian Maier's photographs BLIP captions

The captions were generated with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).

Each row of the dataset contains an `image` key and a `caption` key: `image` is a variable-size PIL JPEG, and `caption` is the accompanying text caption. Only a train split is provided.

## Examples



> A group of people



> person floating in the water



> a person standing next to a refrigerator

## Citation

If you use this dataset, please cite it as:

```
@misc{cqueenccc2023vivian,
  author = {cQueenccc},
  title = {Vivian Maier's photograph split BLIP captions},
  year = {2023},
  howpublished = {\url{https://huggingface.co/datasets/cQueenccc/Vivian-Blip-Captions/}}
}
```