Language-based Audio Retrieval Dataset

This dataset is derived from the DCASE 2022 Challenge Task 6 (Subtask B) - Language-based Audio Retrieval evaluation dataset, originally published on Zenodo.

Overview

This dataset contains 1,000 audio files paired with natural language captions, designed for evaluating language-based audio retrieval systems. The dataset has been preprocessed and structured into parquet files for efficient loading and processing in machine learning workflows.

Dataset Structure

Files

  • corpus.parquet (1,000 entries)
    Contains the audio corpus with embedded binary audio data.

    • file_name: Name of the audio file
    • sound_id: Unique identifier for each sound (from Freesound)
    • audio: Binary audio data (WAV format)
  • query.parquet (1,000 queries)
    Contains natural language queries/captions for retrieval.

    • query_id: Identifier matching the sound_id
    • query: Natural language description of the audio
  • qrels.parquet (1,000 relevance judgments)
    Ground truth relevance judgments for evaluation.

    • query_id: Query identifier
    • corpus_id: Corpus item identifier
    • score: Relevance score (1 = relevant)
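
The three parquet files can be joined into (caption, audio, relevance) triples. Below is a minimal pandas sketch; it assumes the files are in the working directory and that corpus_id in qrels.parquet refers to sound_id in corpus.parquet, mirroring the query_id/sound_id correspondence noted above.

import pandas as pd

# Load the three files described above
corpus = pd.read_parquet('corpus.parquet')    # file_name, sound_id, audio
queries = pd.read_parquet('query.parquet')    # query_id, query
qrels = pd.read_parquet('qrels.parquet')      # query_id, corpus_id, score

# Attach each relevance judgment to its caption and its audio entry
triples = (
    qrels
    .merge(queries, on='query_id')
    .merge(corpus, left_on='corpus_id', right_on='sound_id')
)
print(triples[['query_id', 'query', 'file_name', 'score']].head())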

Original Source Files

  • retrieval_audio/: Directory containing 1,000 WAV audio files
  • retrieval_audio_metadata.csv: Metadata for each audio file including:
    • File name, keywords, sound_id, Freesound URL
    • Start/end samples, manufacturer, license information
  • retrieval_captions.csv: Natural language captions for each audio file
  • retrieval_audio.7z: Compressed archive of audio files
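
The field list above is only a summary, so the exact column names in retrieval_audio_metadata.csv may differ. A quick way to check before relying on them:

import pandas as pd

# Print the actual header and a few rows of the metadata file
metadata = pd.read_csv('retrieval_audio_metadata.csv')
print(metadata.columns.tolist())
print(metadata.head())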

Utility Files

  • dataset_creator.ipynb: Jupyter notebook used to process and create the parquet files
  • requirements.txt: Python dependencies
  • LICENSE: License information

Dataset Statistics

  • Total audio files: 1,000
  • Audio format: WAV (various sample rates from Freesound)
  • Caption format: Single natural language description per audio file
  • Audio sources: Freesound platform
  • Audio duration: typically 15-30 seconds per clip (variable)

Usage Example

import pandas as pd

# Load the corpus (pandas reads parquet via pyarrow or fastparquet)
corpus = pd.read_parquet('corpus.parquet')
print(f"Corpus shape: {corpus.shape}")

# Load queries
queries = pd.read_parquet('query.parquet')
print(f"Number of queries: {len(queries)}")

# Load relevance judgments
qrels = pd.read_parquet('qrels.parquet')
print(f"Number of relevance judgments: {len(qrels)}")

# Access the raw WAV bytes of the first corpus entry
audio_binary = corpus.iloc[0]['audio']

# Access the first caption/query
caption = queries.iloc[0]['query']
print(f"Example caption: {caption}")

Example Data

Sample Audio Caption

"A liquid continuously being poured out and hitting a bottom base."

Task Description

This dataset is designed for language-based audio retrieval, where the goal is to:

  1. Given a natural language query (caption), retrieve the most relevant audio clip(s) from the corpus
  2. Evaluate retrieval performance using standard metrics (e.g., Recall@K, Mean Average Precision)

Each query has exactly one relevant audio file in the corpus (1-to-1 mapping).
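
Because of the 1-to-1 mapping, Average Precision per query reduces to 1/rank of the single relevant item, so mAP coincides with Mean Reciprocal Rank on this dataset. A minimal NumPy sketch, where similarity is a hypothetical (num_queries x num_corpus) score matrix produced by your retrieval model and relevant_idx holds the column index of each query's relevant corpus item:

import numpy as np

def recall_at_k(similarity, relevant_idx, k=10):
    relevant_idx = np.asarray(relevant_idx)
    # Rank corpus items per query from highest to lowest score
    ranking = np.argsort(-similarity, axis=1)
    # A query counts as a hit if its single relevant item is in the top k
    hits = (ranking[:, :k] == relevant_idx[:, None]).any(axis=1)
    return hits.mean()

def mean_average_precision(similarity, relevant_idx):
    relevant_idx = np.asarray(relevant_idx)
    # With one relevant item per query, AP = 1 / rank of that item
    ranking = np.argsort(-similarity, axis=1)
    ranks = (ranking == relevant_idx[:, None]).argmax(axis=1) + 1
    return (1.0 / ranks).mean()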

Source Dataset Information

Original Dataset

  • Name: Language-based audio retrieval DCASE 2022 evaluation dataset
  • Version: 1.0
  • Published: May 29, 2022
  • Creator: Samuel Lipping (Tampere University)
  • DOI: 10.5281/zenodo.6590983

Audio Source

All audio files are sourced from the Freesound platform and are licensed under various Creative Commons licenses. Please refer to retrieval_audio_metadata.csv for specific license information for each file.

Development Dataset

This is the evaluation dataset for DCASE 2022 Task 6B. For training and development, use the Clotho v2.1 dataset available at: https://zenodo.org/record/4783391

License

  • Audio files: Licensed under various Creative Commons licenses as specified in retrieval_audio_metadata.csv (from Freesound platform)
  • Captions: Tampere University license (see LICENSE file)

Citation

If you use this dataset, please cite:

@dataset{lipping_2022_6590983,
  author       = {Lipping, Samuel},
  title        = {{Language-based audio retrieval DCASE 2022 
                   evaluation dataset}},
  month        = may,
  year         = 2022,
  publisher    = {Zenodo},
  version      = {1.0},
  doi          = {10.5281/zenodo.6590983},
  url          = {https://doi.org/10.5281/zenodo.6590983}
}

References

  1. Frederic Font, Gerard Roma, and Xavier Serra. 2013. Freesound technical demo. In Proceedings of the 21st ACM international conference on Multimedia (MM '13). ACM, New York, NY, USA, 411-412. DOI: https://doi.org/10.1145/2502081.2502245

  2. DCASE 2022 Challenge: https://dcase.community/challenge2022/

  3. Lipping, S. (2022). Language-based audio retrieval DCASE 2022 evaluation dataset (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.6590983
