ASPED: Audio-Based Pedestrian Detection Dataset Card
Dataset Summary
The Audio Sensing for PEdestrian Detection (ASPED) v.b dataset is a comprehensive, 1,321-hour roadside collection of audio and video recordings designed for the task of pedestrian detection in the presence of vehicular noise. As urban sound emerges as a cost-effective and privacy-preserving alternative to vision-based or GPS-based monitoring, this dataset addresses the key challenge of detecting pedestrians in realistic, noisy urban environments.
The dataset was collected from multiple camera and recorder setups at a single location ("Fifth Street") on the Georgia Institute of Technology campus, and it comprises four recording sessions conducted on different dates. Each recording includes 16 kHz mono audio synchronized with frame-level pedestrian annotations and 1 fps video thumbnails.
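Because the labels and video thumbnails are at 1 fps while the audio is at 16 kHz, each label frame plausibly corresponds to one second of audio. A minimal sketch of that mapping, assuming labels align one-to-one with 1-second audio windows (implied but not guaranteed by the card):

```python
# Map a 1 fps label frame index to its 16 kHz audio sample range.
# Assumption: each label frame covers exactly one second of audio.

SAMPLE_RATE = 16_000  # Hz, per the dataset card
FRAME_RATE = 1        # label/video frames per second

def frame_to_sample_range(frame: int) -> tuple[int, int]:
    """Return the [start, end) audio sample indices covered by a label frame."""
    samples_per_frame = SAMPLE_RATE // FRAME_RATE
    start = frame * samples_per_frame
    return start, start + samples_per_frame
```

For example, frame 0 maps to samples [0, 16000) and frame 3 to samples [48000, 64000) of the corresponding clip.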
This dataset is released alongside ASPED v.a, which was captured in a vehicle-free environment, to facilitate cross-dataset evaluation and research into model generalization for acoustic event detection. The official Hugging Face repository for the dataset can be found at: https://huggingface.co/datasets/urbanaudiosensing/ASPEDvb.
Supported Tasks and Leaderboards
The dataset is primarily intended for audio-based pedestrian detection. It can also be used for related tasks, such as:
- Sound Event Detection in Noisy Environments
- Domain Adaptation for Acoustic Models
- Urban Soundscape Analysis
Dataset Structure
The dataset is organized by session, then by the specific physical setup location along Fifth Street (e.g., FifthSt_A, FifthSt_B). Each of these setup locations contains its own synchronized Audio, Labels, and Video data. A single setup location can contain audio from one or two recorders.
ASPEDvb/data/
└── Session_07262023/
    └── FifthSt_A/
        ├── Audio/
        │   └── recorder1_DR-05X-01/
        │       ├── 0001.flac
        │       └── ...
        ├── Labels/
        │   ├── 0001.csv
        │   └── ...
        └── Video/
            ├── 0001.mp4
            └── ...
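Given this layout, label and audio files can be paired by their shared numeric stem. A minimal sketch using only the standard library; the directory and recorder names follow the layout above and should be adapted to your local copy:

```python
# Pair Labels/NNNN.csv with Audio/<recorder>/NNNN.flac under one setup
# directory (e.g. .../FifthSt_A), matching on the shared file stem.
from pathlib import Path

def pair_label_and_audio(setup_dir: Path, recorder: str) -> list[tuple[Path, Path]]:
    """Return (label_path, audio_path) pairs for stems present in both dirs."""
    labels = {p.stem: p for p in (setup_dir / "Labels").glob("*.csv")}
    audio = {p.stem: p for p in (setup_dir / "Audio" / recorder).glob("*.flac")}
    shared = sorted(labels.keys() & audio.keys())
    return [(labels[s], audio[s]) for s in shared]
```

Matching on the intersection of stems silently drops clips that lack a counterpart, which is one way to handle recordings excluded for technical issues.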
Data Fields
The label files (.csv) provide detailed, frame-level annotations for the presence of pedestrians.
- timestamp: The exact date and time of the frame.
- frame: The sequential frame number.
- recorder[N]_[X]m: An integer giving the number of pedestrians detected within a radius of X meters (X = 1, 3, 6, or 9) of recorder N (N = 1, 2, ...).
- view_recorder[N]_[X]m: A binary flag where 1 indicates that the recorder's view for that specific radius is visually obstructed (e.g., by a passing bus or other object), and 0 indicates the view is clear.
- busFrame: A binary flag indicating that the frame was visually obstructed by a bus. These frames were discarded during the modeling phase of the original study due to unreliable visual labels.
Data Instances
A sample row from a label file:
timestamp,frame,recorder1_1m,recorder1_3m,recorder1_6m,recorder1_9m,view_recorder1_1m,view_recorder1_3m,view_recorder1_6m,view_recorder1_9m,busFrame
2023-07-26 16:20:00,0,0,0,0,0,0,1,1,1,0
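One common way to consume these rows is to derive a binary "pedestrian present within X meters" target while discarding obstructed views and bus frames, as the original study did for busFrame rows. A minimal sketch using the sample row above; the column names follow the header shown, but the exact masking policy here is illustrative:

```python
# Derive a binary pedestrian-presence target from one label row,
# treating obstructed views (view_* == 1) and bus frames as unusable.
import csv
import io

SAMPLE = """timestamp,frame,recorder1_1m,recorder1_3m,recorder1_6m,recorder1_9m,view_recorder1_1m,view_recorder1_3m,view_recorder1_6m,view_recorder1_9m,busFrame
2023-07-26 16:20:00,0,0,0,0,0,0,1,1,1,0
"""

def pedestrian_target(row: dict, recorder: int = 1, radius_m: int = 6):
    """Return 1/0 for pedestrian presence, or None if the frame is unusable."""
    if int(row["busFrame"]) or int(row[f"view_recorder{recorder}_{radius_m}m"]):
        return None  # obstructed view: skip this frame
    return int(int(row[f"recorder{recorder}_{radius_m}m"]) > 0)

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
print(pedestrian_target(rows[0]))                # None: 6 m view is obstructed
print(pedestrian_target(rows[0], radius_m=1))    # 0: clear view, no pedestrians
```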
Dataset Creation
Curation Rationale: Pedestrian volume data provides critical insights for urban planning, safety improvements, and accessibility assessments. While vision-based systems are common, they suffer from limitations like visual occlusions and raise significant privacy concerns. Audio-based sensing offers a promising alternative as microphones are affordable, energy-efficient, and less intrusive. This dataset was created to spur research in this area, specifically by providing data that captures the challenge of detecting pedestrian-related sounds in environments with significant vehicular noise.
Data Source and Collection: The data was collected by researchers at the Center for Urban Resilience and Analytics (CURA) and the Music Informatics Group at the Georgia Institute of Technology. The ASPED v.b dataset was recorded near a road with vehicular traffic on the Georgia Tech campus in Atlanta. Audio was recorded at 16 kHz and synchronized with video from 6 GoPro cameras capturing 1 fps recordings. While data was collected simultaneously, any recordings from devices that experienced technical issues were excluded from the final dataset to ensure quality.
Citation Information
If you use this dataset in your research, please cite the following paper:
@inproceedings{kim2025audio,
author= "Kim, Yonghyun and Han, Chaeyeon and Sarode, Akash and Posner, Noah and Guhathakurta, Subhrajit and Lerch, Alexander",
title= "Audio-Based Pedestrian Detection in the Presence of Vehicular Noise",
booktitle = "Proceedings of the Detection and Classification of Acoustic Scenes and Events 2025 Workshop (DCASE2025)",
address = "Barcelona, Spain",
month = "October",
year = "2025"
}
Licensing Information
Creative Commons Attribution 4.0