The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    CastError
Message:      Couldn't cast
query_id: int64
corpus_id: int64
score: int64
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 597
to
{'file_name': Value('string'), 'sound_id': Value('int64'), 'audio': Value('binary')}
because column names don't match
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1405, in compute_config_parquet_and_info_response
                  fill_builder_info(builder, hf_endpoint=hf_endpoint, hf_token=hf_token, validate=validate)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 578, in fill_builder_info
                  ) = retry_validate_get_features_num_examples_size_and_compression_ratio(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 497, in retry_validate_get_features_num_examples_size_and_compression_ratio
                  validate(pf)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 535, in validate
                  raise TooBigRowGroupsError(
              worker.job_runners.config.parquet_and_info.TooBigRowGroupsError: Parquet file has too big row groups. First row group has 1972742805 which exceeds the limit of 300000000
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1815, in _prepare_split_single
                  for _, table in generator:
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 692, in wrapped
                  for item in generator(*args, **kwargs):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 106, in _generate_tables
                  yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 73, in _cast_table
                  pa_table = table_cast(pa_table, self.info.features.arrow_schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              query_id: int64
              corpus_id: int64
              score: int64
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 597
              to
              {'file_name': Value('string'), 'sound_id': Value('int64'), 'audio': Value('binary')}
              because column names don't match
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1428, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 994, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
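The two column sets in the `CastError` point to two different kinds of parquet files living in the same repository: a relevance-judgment table (`query_id`, `corpus_id`, `score`) and an audio table (`file_name`, `sound_id`, `audio`). The viewer infers one schema for the whole config, so casting the first file's columns to the second file's schema fails. A plausible fix (the directory names below are assumptions, not the repo's actual layout) is to declare separate configs in the dataset's README front matter so each schema is converted on its own:

```yaml
configs:
- config_name: corpus
  data_files: "corpus/*.parquet"   # file_name, sound_id, audio
- config_name: qrels
  data_files: "qrels/*.parquet"    # query_id, corpus_id, score
```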


| file_name (string) | sound_id (int64) | audio (unknown) |
|---|---|---|
| drainage pipe running.wav | 235,940 | "UklGRjJmIABXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YQ5mIAD+//7/AAAAAAAAAQABAAIAAQD+//v//f/+//7//P/(...TRUNCATED) |
| WATER SHOWER 001.wav | 176,269 | "UklGRiCoIwBXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YfynIwAtABgAGgALABUA//8PACAACwAWAPf/5/8sAFwAJwD(...TRUNCATED) |
| Chainsaw_4.wav | 345,992 | "UklGRgCxGwBXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YdywGwA5AAL/nP8+AMX+b/78/8UADwDc//r/9f4r/t39wf0(...TRUNCATED) |
| dumpster truck.wav | 149,977 | "UklGRr5tIgBXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YZptIgCZ/6r/uf+//8n/3P/r/+//8//9/wwAEwAAAND/kv9(...TRUNCATED) |
| Creek Running water.wav | 320,289 | "UklGRoirHQBXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YWSrHQD9//r//f8BAAMAAwAFAAgABQD7//P/7v/v//P/9//(...TRUNCATED) |
| CP_Whipping_Wind_Storm_Medium01.wav | 238,377 | "UklGRrxKIABXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YZhKIAABAAAAAAAAAAAAAAD+/wAA/v/+//3/AAD+//7//f/(...TRUNCATED) |
| Water - Leak, small.wav | 146,346 | "UklGRgQEHABXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YeADHAAR/xj/vP4V/7X/Z/8d/2X/ff/q/6YAggFZAYIAGAF(...TRUNCATED) |
| Crossroads with traffic lights #1.wav | 350,536 | "UklGRvR3IwBXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YdB3IwA/AEoAXQB5AIoAjwCHAH4AjACfAKwAswC+AMAAtgC(...TRUNCATED) |
| washing_machine_beginning.wav | 44,050 | "UklGRjQUJABXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YRAUJACm/4D/xP/6/w4AQwAxAHIAawByADgAVQCuAHcA2QA(...TRUNCATED) |
| Footsteps on Wood floor 1.wav | 108,019 | "UklGRvAMFgBXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YcwMFgDs/8n+OP7N/YD9Mv0C/QL97vzR/L78q/x6/ED8EPz(...TRUNCATED) |