---
license: cc-by-nc-4.0
task_categories:
- video-classification
language:
- en
tags:
- synthetic
- activity-recognition
- fall-detection
pretty_name: 'WanFall: A Synthetic Activity Recognition Dataset'
size_categories:
- 10K<n<100K
configs:
- config_name: labels
  data_files:
  - labels/wanfall.csv
  default: true
  description: >-
    Temporal segment labels for all videos. Load splits to get train/val/test
    paths.
- config_name: metadata
  data_files:
  - videos/metadata.csv
  description: Video level metadata.
- config_name: random
  data_files:
  - split: train
    path: splits/train.csv
  - split: validation
    path: splits/val.csv
  - split: test
    path: splits/test.csv
  description: Random 80/10/10 train/val/test split (seed 42)
---
# WanFall: A Synthetic Activity Recognition Dataset
This repository contains temporal segment annotations for WanFall, a synthetic activity recognition dataset focused on fall detection and related activities of daily living.
> **Note:** This dataset is currently under development and subject to change!
## Overview
WanFall is a large-scale synthetic dataset designed for activity recognition research, with emphasis on fall detection and posture transitions. The dataset features computer-generated videos of human actors performing various activities in controlled virtual environments.
**Key Features:**
- ~12,000 video clips with dense temporal annotations
- 16 activity classes including falls, posture transitions, and static states
- 5.0625 seconds per video clip (81 frames @ 16 fps)
- Synthetic generation enabling diverse scenarios and controlled variation
- Dense temporal segmentation with frame-level precision
## Dataset Statistics
- Total videos: 12,000
- Total temporal segments: 19,228
- Annotation format: Temporal segmentation (start/end timestamps) with rich metadata
- Video duration: 5.0625 seconds per clip
- Frame count: 81 frames per video
- Frame rate: 16 fps
- Default split: 80/10/10 train/val/test (seed 42)
  - Train: 9,600 videos
  - Validation: 1,200 videos
  - Test: 1,200 videos
- Metadata fields: 12 demographic and scene attributes per video
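
These numbers can be recomputed directly from the repository CSVs. A minimal sketch, assuming the files have been downloaded locally (e.g., with `huggingface_hub.snapshot_download`):

```python
import pandas as pd

# Recompute the headline statistics from the label and split files.
labels = pd.read_csv("labels/wanfall.csv")
print("Temporal segments:", len(labels))           # expected: 19,228
print("Unique videos:", labels["path"].nunique())  # expected: ~12,000

# The three split files should partition the videos 80/10/10.
for split in ("train", "val", "test"):
    split_paths = pd.read_csv(f"splits/{split}.csv")["path"]
    print(f"{split}: {len(split_paths)} videos")
```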
## Activity Categories
The dataset includes 16 activity classes organized into dynamic actions and static states:
### Dynamic Actions (Transitions)
- 0. walk - Walking movement, including jogging and running
- 1. fall - Falling down (from any previous state), beginning at the moment control is lost and ending at a resting state or activity change
- 3. sit_down - Transitioning from standing to sitting
- 5. lie_down - Intentionally lying down (not falling)
- 7. stand_up - Getting up from a fallen or lying state into a sitting or standing position (not only to standing)
- 10. kneel_down - Transitioning to kneeling position
- 12. squat_down - Transitioning to squatting position
- 14. crawl - Crawling movement on hands and knees
- 15. jump - Jumping action

### Static States
- 2. fallen - Person in fallen state (on ground after a fall)
- 4. sitting - Stationary sitting posture
- 6. lying - Stationary lying posture (after an intentional lie_down)
- 8. standing - Stationary standing posture
- 11. kneeling - Stationary kneeling posture
- 13. squatting - Stationary squatting posture

### Other
- 9. other - Actions not fitting the above categories
## Label Format
The `labels/wanfall.csv` file contains temporal segments with rich metadata:

```csv
path,label,start,end,subject,cam,dataset,age_group,gender_presentation,monk_skin_tone,race_ethnicity_omb,bmi_band,height_band,environment_category,camera_shot,speed,camera_elevation,camera_azimuth,camera_distance
```
**Core Fields:**
- `path`: Relative path to the video (without `.mp4` extension, e.g., `fall/fall_ch_001`)
- `label`: Activity class ID (0-15)
- `start`: Start time of the segment in seconds
- `end`: End time of the segment in seconds
- `subject`: Subject ID (`-1` for synthetic data)
- `cam`: Camera view ID (`-1` for single view)
- `dataset`: Dataset name (`wanfall`)
**Demographic Metadata:**
- `age_group`: One of 6 age categories: `toddlers_1_4`, `children_5_12`, `teenagers_13_17`, `young_adults_18_34`, `middle_aged_35_64`, `elderly_65_plus`
- `gender_presentation`: Visual gender presentation (`male`, `female`)
- `monk_skin_tone`: Monk Skin Tone scale (`mst1`-`mst10`), a 10-point scale representing diverse skin tones from lightest to darkest, developed by Dr. Ellis Monk for inclusive representation
- `race_ethnicity_omb`: OMB race/ethnicity categories:
  - `white`: White/European American
  - `black`: Black/African American
  - `asian`: Asian
  - `hispanic_latino`: Hispanic/Latino
  - `aian`: American Indian and Alaska Native
  - `nhpi`: Native Hawaiian and Pacific Islander
  - `mena`: Middle Eastern and North African
- `bmi_band`: Body type (`underweight`, `normal`, `overweight`, `obese`)
- `height_band`: Height category (`short`, `avg`, `tall`)
**Scene Metadata:**
- `environment_category`: Scene location (`indoor`, `outdoor`)
- `camera_shot`: Shot composition (`static_wide`, `static_medium_wide`)
- `speed`: Frame rate (`24fps_rt`, `25fps_rt`, `30fps_rt`, `std_rt`)
- `camera_elevation`: Camera height (`eye`, `low`, `high`, `top`)
- `camera_azimuth`: Camera angle (`front`, `rear`, `left`, `right`)
- `camera_distance`: Camera distance (`medium`, `far`)
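
For analysis it can be convenient to group these columns explicitly. A small sketch (the column names come from the header above; the grouping itself is just a suggested convention, not part of the dataset):

```python
import pandas as pd

labels = pd.read_csv("labels/wanfall.csv")

CORE = ["path", "label", "start", "end", "subject", "cam", "dataset"]
DEMOGRAPHIC = ["age_group", "gender_presentation", "monk_skin_tone",
               "race_ethnicity_omb", "bmi_band", "height_band"]
SCENE = ["environment_category", "camera_shot", "speed",
         "camera_elevation", "camera_azimuth", "camera_distance"]

# e.g., inspect the value distribution of every scene attribute
for col in SCENE:
    print(labels[col].value_counts(), "\n")
```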
## Split Format

Split files in the `splits/` directory list the video paths included in each partition:
```text
path
fall/fall_ch_001
fall/fall_ch_002
...
```
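
Since the splits are defined purely by `path` lists, it is easy to check that they are disjoint and cover the full dataset. A minimal sketch, assuming local copies of the CSVs:

```python
import pandas as pd

splits = {name: set(pd.read_csv(f"splits/{name}.csv")["path"])
          for name in ("train", "val", "test")}

# No video should appear in more than one partition.
assert not (splits["train"] & splits["val"])
assert not (splits["train"] & splits["test"])
assert not (splits["val"] & splits["test"])

all_paths = splits["train"] | splits["val"] | splits["test"]
print("Total videos across splits:", len(all_paths))  # expected: 12,000
```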
## Usage Example
```python
from datasets import load_dataset
import pandas as pd

# Load the datasets
print("Loading WanFall dataset...")

# Note: All segment labels land in the "train" split when loaded from the
# labels config; we join them with the actual train/val/test splits below.
labels = load_dataset("simplexsigil2/wanfall", "labels")["train"]

# Load the random 80/10/10 split
random_split = load_dataset("simplexsigil2/wanfall", "random")

# Load video metadata (optional, for demographic filtering)
video_metadata = load_dataset("simplexsigil2/wanfall", "metadata")["train"].to_pandas()
print(f"Video metadata shape: {video_metadata.shape}")

# Convert labels to a DataFrame
labels_df = labels.to_pandas()
print(f"Labels dataframe shape: {labels_df.shape}")
print(f"Total temporal segments: {len(labels_df)}")

# Process each split (train, validation, test)
for split_name, split_data in random_split.items():
    # Convert to a DataFrame
    split_df = split_data.to_pandas()

    # Join with labels on 'path'
    merged_df = pd.merge(split_df, labels_df, on="path", how="left")

    # Print statistics
    print(f"\n{split_name} split: {len(split_df)} videos, {len(merged_df)} temporal segments")

    # Print examples
    if not merged_df.empty:
        print(f"\n{split_name.upper()} EXAMPLES:")
        random_samples = merged_df.sample(min(3, len(merged_df)))
        for i, (_, row) in enumerate(random_samples.iterrows()):
            print(f"  Example {i + 1}:")
            print(f"    Path: {row['path']}")
            print(f"    Label: {row['label']} (segment {row['start']:.2f}s - {row['end']:.2f}s)")
            print(f"    Age: {row['age_group']}, Gender: {row['gender_presentation']}")
            print(f"    Ethnicity: {row['race_ethnicity_omb']}, Environment: {row['environment_category']}")
            print()

# Example: Filter by demographics
elderly_falls = labels_df[
    (labels_df["age_group"] == "elderly_65_plus")
    & (labels_df["label"] == 1)  # fall = label 1
]
print(f"\nElderly fall segments: {len(elderly_falls)} ({elderly_falls['path'].nunique()} unique videos)")
```
## Label Mapping

```python
LABEL_MAP = {
    0: 'walk', 1: 'fall', 2: 'fallen', 3: 'sit_down',
    4: 'sitting', 5: 'lie_down', 6: 'lying', 7: 'stand_up',
    8: 'standing', 9: 'other', 10: 'kneel_down', 11: 'kneeling',
    12: 'squat_down', 13: 'squatting', 14: 'crawl', 15: 'jump'
}
```
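
Applied to the `labels_df` table from the usage example above, this yields human-readable class names:

```python
# Map numeric class IDs to names and inspect the class distribution.
labels_df["label_name"] = labels_df["label"].map(LABEL_MAP)
print(labels_df["label_name"].value_counts())
```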
## Technical Properties

### Video Specifications
- Resolution: Variable (synthetic generation)
- Duration: 5.0625 seconds (consistent across all videos)
- Frame count: 81 frames
- Frame rate: 16 fps
- Format: MP4 (videos are not included in this repository and must be obtained separately)
### Annotation Properties
- Temporal precision: Sub-second (timestamps with decimal precision)
- Coverage: Most frames are labeled, with some gaps
- Overlap handling: Segments are annotated chronologically
- Activity sequences: Natural transitions (e.g., walk → fall → fallen → stand_up)
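
With a fixed 81 frames at 16 fps, segment timestamps convert directly to frame indices, which is useful for building dense per-frame targets. A sketch (filling unlabeled gaps with `-1` is this example's assumption, not an official convention):

```python
import numpy as np

FPS, NUM_FRAMES = 16, 81  # 81 frames / 16 fps = 5.0625 s

def dense_frame_labels(segments):
    """Turn (label, start_s, end_s) segments into an 81-long per-frame label array."""
    frame_labels = np.full(NUM_FRAMES, -1, dtype=np.int64)  # -1 = unlabeled gap
    for label, start, end in segments:
        lo = int(round(start * FPS))
        hi = min(int(round(end * FPS)), NUM_FRAMES)
        frame_labels[lo:hi] = label
    return frame_labels

# e.g., walk for the first 2 s, then a fall, then fallen until the clip ends
print(dense_frame_labels([(0, 0.0, 2.0), (1, 2.0, 3.5), (2, 3.5, 5.0625)]))
```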
### Motion Types
Activities are classified into two main motion types:
**Dynamic motions** (e.g., walk, fall, stand_up):
- Labeled from the first frame where the motion begins
- End when the person reaches a resting state
**Static states** (e.g., fallen, sitting, lying):
- Begin when the person comes to rest in that posture
- Continue until next motion begins
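
For metrics that treat transitions and postures differently, the class IDs can be partitioned accordingly. A plausible grouping derived from the class list above (the placement of `other`, and arguably of `crawl` and `jump`, is this example's assumption):

```python
# Hypothetical grouping of class IDs into the two motion types.
DYNAMIC = {0, 1, 3, 5, 7, 10, 12, 14, 15}  # walk, fall, sit_down, lie_down, ...
STATIC = {2, 4, 6, 8, 11, 13}              # fallen, sitting, lying, ...
OTHER = {9}                                # neither a transition nor a posture

# Count resting-posture segments in the table from the usage example.
is_static = labels_df["label"].isin(STATIC)
print("Static-state segments:", is_static.sum())
```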
### Label Sequences
Videos often contain natural sequences of activities:
- Fall sequence: walk → fall → fallen → stand_up
- Sit sequence: walk → sit_down → sitting → stand_up
- Lie sequence: walk → lie_down → lying → stand_up
Not all transitions include static states (e.g., a person might stand up immediately after falling, without an intervening fallen segment).
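
These sequences can be recovered by sorting each video's segments by start time. A minimal sketch, reusing `labels_df` from the usage example and `LABEL_MAP` from above:

```python
# Ordered activity sequence per video, e.g. ['walk', 'fall', 'fallen', 'stand_up']
sequences = (
    labels_df.sort_values(["path", "start"])
    .groupby("path")["label"]
    .apply(lambda ids: [LABEL_MAP[i] for i in ids])
)
print(sequences.head())
```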
## Demographic Diversity
The dataset includes rich demographic and scene metadata for every video, enabling bias analysis and cross-demographic evaluation. Note, however, that these attributes were only specified in the generation prompts: age, gender, and ethnicity are rendered fairly consistently, but due to model biases the generated videos can deviate from the requested attributes.
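
As a starting point for such an analysis, per-group counts can be computed directly from the segment table (a sketch, reusing `labels_df` from the usage example):

```python
# Fall segments (label 1) broken down by age group and skin tone.
falls = labels_df[labels_df["label"] == 1]
print(falls.groupby("age_group").size())
print(falls.groupby("monk_skin_tone").size())
```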
## Scene Variations
Beyond demographic diversity, the dataset includes:
- Environment: Indoor and outdoor settings
- Camera Angles: Multiple elevations (eye, low, high, top), azimuths (front, rear, left, right), and distances
- Camera Shots: Static wide and medium-wide compositions
- Frame Rates: Various speeds (24fps, 25fps, 30fps, standard real-time)
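
These scene attributes combine naturally into held-out evaluation slices, for example (a sketch, again reusing `labels_df`):

```python
# e.g., evaluate only on outdoor clips filmed from a high or top elevation
hard_view = labels_df[
    (labels_df["environment_category"] == "outdoor")
    & (labels_df["camera_elevation"].isin(["high", "top"]))
]
print(f"{hard_view['path'].nunique()} videos in the outdoor high/top-view slice")
```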
## License
The annotations and split definitions in this repository are released under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.

The video data is synthetic and must be obtained separately from the original source; more information will follow.
