---
license: cc-by-nc-4.0
task_categories:
- video-classification
language:
- en
tags:
- synthetic
- activity-recognition
- fall-detection
pretty_name: 'WanFall: A Synthetic Activity Recognition Dataset'
size_categories:
- 10K<n<100K
---

# WanFall: A Synthetic Activity Recognition Dataset

### 2. Cross-Age Split

Evaluates generalization across age groups. Train on adults (18–64), validate on teenagers, test on children, toddlers, and the elderly.

| Split | Videos | Age Groups |
|-------|--------|------------|
| **Train** | 4,000 | `young_adults_18_34` (2,000)<br>`middle_aged_35_64` (2,000) |
| **Val** | 2,000 | `teenagers_13_17` (2,000) |
| **Test** | 6,000 | `children_5_12` (2,000)<br>`toddlers_1_4` (2,000)<br>`elderly_65_plus` (2,000) |

```python
dataset = load_dataset("simplexsigil2/wanfall", "cross_age")
```

### 3. Cross-Ethnicity Split

Evaluates generalization across racial/ethnic groups with maximum phenotypic distance. Train on White/Asian/Hispanic, validate on AIAN, test on Black/MENA/NHPI.

| Split | Videos | Ethnicities |
|-------|--------|-------------|
| **Train** | 5,178 | `white` (1,709)<br>`asian` (1,691)<br>`hispanic_latino` (1,778) |
| **Val** | 1,741 | `aian` (1,741) |
| **Test** | 5,081 | `black` (1,684)<br>`mena` (1,680)<br>`nhpi` (1,717) |

```python
dataset = load_dataset("simplexsigil2/wanfall", "cross_ethnicity")
```

### 4. Cross-BMI Split

Evaluates generalization across body types. Train on normal/underweight, validate on overweight, test on obese.

| Split | Videos | BMI Bands |
|-------|--------|-----------|
| **Train** | 6,066 | `normal` (3,040)<br>`underweight` (3,026) |
| **Val** | 2,962 | `overweight` (2,962) |
| **Test** | 2,972 | `obese` (2,972) |

```python
dataset = load_dataset("simplexsigil2/wanfall", "cross_bmi")
```

**Note:** All cross-demographic splits contain the same videos, just organized differently. Total unique videos: 12,000.

## Usage

The dataset provides flexible loading options depending on your use case. The key distinction is between **segment-level** and **video-level** samples.

### Loading Modes Overview

| Mode | Sample Unit | Has start/end? | Has frame_labels? | Random Split Train Size |
|------|-------------|----------------|-------------------|-------------------------|
| **Temporal Segments** | Segment | ✅ Yes | ❌ No | 15,344 segments (9,600 videos) |
| **Frame-Wise Labels** | Video | ❌ No | ✅ Yes (81 labels) | 9,600 videos |

### 1. Temporal Segments (Default)

Load temporal segment annotations where **each sample is a segment** with start/end times. Multiple segments can come from the same video.

```python
dataset = load_dataset("simplexsigil2/wanfall", "random")

# Each example is a SEGMENT (not a video)
example = dataset['train'][0]
print(example['path'])       # "fall/fall_ch_001"
print(example['label'])      # 1 (activity class ID)
print(example['start'])      # 0.0 (start time in seconds)
print(example['end'])        # 1.006 (end time in seconds)
print(example['age_group'])  # Demographic metadata

# Dataset contains multiple segments per video
print(f"Total segments in train: {len(dataset['train'])}")  # 15,344
print(f"Unique videos: {len(set(ex['path'] for ex in dataset['train']))}")  # 9,600
```

**Use case:** Training models on activity classification where you want to extract and process only the relevant video segment for each activity.

### 2. Frame-Wise Labels

Load dense frame-level labels where **each sample is a video** with 81 frame labels. Each video appears exactly once.
```python
dataset = load_dataset("simplexsigil2/wanfall", "random", framewise=True)

# Each example is a VIDEO (not a segment)
example = dataset['train'][0]
print(example['path'])               # "fall/fall_ch_001"
print(example['frame_labels'])       # [1, 1, 1, ..., 11, 11] (81 labels)
print(len(example['frame_labels']))  # 81 frames
print(example['age_group'])          # Demographic metadata included

# Dataset contains one sample per video
print(f"Total videos in train: {len(dataset['train'])}")  # 9,600 videos
```

**Use case:** Training sequence models (e.g., temporal action segmentation) that process entire videos and predict frame-level labels.

**Key features:**
- Works with all split configs: add `framewise=True` to any split
- Efficient: 348 KB compressed archive, automatically cached
- Complete metadata: all demographic attributes included

### 3. Additional Configurations

```python
# All segments without train/val/test splits
dataset = load_dataset("simplexsigil2/wanfall", "labels")  # 19,228 segments

# Video metadata only (no labels)
dataset = load_dataset("simplexsigil2/wanfall", "metadata")  # 12,000 videos

# Paths only (minimal memory footprint)
dataset = load_dataset("simplexsigil2/wanfall", "random", paths_only=True)
```

### Practical Examples

#### Label Conversion

Labels are stored as integers (0–15) but can be converted to strings:

```python
dataset = load_dataset("simplexsigil2/wanfall", "random")
label_feature = dataset['train'].features['label']

# Convert integer to string
label_name = label_feature.int2str(1)  # "fall"

# Convert string to integer
label_id = label_feature.str2int("walk")  # 0

# Access all label names
all_labels = label_feature.names  # ['walk', 'fall', 'fallen', ...]
```

#### Filter by Demographics

```python
dataset = load_dataset("simplexsigil2/wanfall", "labels")
segments = dataset['train']

# Filter elderly fall segments
elderly_falls = [
    ex for ex in segments
    if ex['age_group'] == 'elderly_65_plus' and ex['label'] == 1
]
print(f"Found {len(elderly_falls)} elderly fall segments")

# Filter by multiple demographics
indoor_male_falls = [
    ex for ex in segments
    if ex['environment_category'] == 'indoor'
    and ex['gender_presentation'] == 'male'
    and ex['label'] == 1
]
```

#### Cross-Demographic Evaluation

```python
# Train on young adults, test on children and elderly
cross_age = load_dataset("simplexsigil2/wanfall", "cross_age", framewise=True)

# Train contains only: young_adults_18_34, middle_aged_35_64
# (note: slicing a Dataset returns a dict of columns, so use .select() to iterate examples)
for example in cross_age['train'].select(range(5)):
    print(f"Train video: {example['path']}, age: {example['age_group']}")

# Test contains: children_5_12, toddlers_1_4, elderly_65_plus
for example in cross_age['test'].select(range(5)):
    print(f"Test video: {example['path']}, age: {example['age_group']}")
```

#### Training Loop Example

```python
from datasets import load_dataset
import torch

# Load dataset with frame-wise labels
dataset = load_dataset("simplexsigil2/wanfall", "random", framewise=True)

num_epochs = 10  # example value

for epoch in range(num_epochs):
    for example in dataset['train']:
        video_path = example['path']
        frame_labels = torch.tensor(example['frame_labels'])  # (81,)

        # Load video frames (user must implement)
        # frames = load_video(video_root / f"{video_path}.mp4")  # (81, H, W, 3)

        # Forward pass
        # outputs = model(frames)
        # loss = criterion(outputs, frame_labels)
        # loss.backward()
```

## Annotation Guidelines

### Temporal Precision

Annotations use sub-second accuracy with decimal timestamps (e.g., `start: 0.0, end: 1.006`). Most frames in videos are labeled, with minimal gaps between activities.

### Activity Sequences

Videos contain natural transitions between activities.
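Because every frame carries a label, a video's activity sequence can be recovered from its `frame_labels` by run-length encoding. A minimal sketch (the helper below is illustrative, not part of the dataset API; it uses the dataset's 16 fps frame rate):

```python
def frame_labels_to_segments(frame_labels, fps=16):
    """Collapse per-frame labels into (start_s, end_s, label) runs."""
    segments = []
    start = 0
    for i in range(1, len(frame_labels) + 1):
        # Close the current run at the end of the list or on a label change
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            segments.append((start / fps, i / fps, frame_labels[start]))
            start = i
    return segments

# Short illustrative input (0 = walk, 1 = fall, 2 = fallen)
print(frame_labels_to_segments([0, 0, 0, 1, 1, 2]))
# [(0.0, 0.1875, 0), (0.1875, 0.3125, 1), (0.3125, 0.375, 2)]
```

Applied to a real `example['frame_labels']` (81 entries), this yields segments comparable to the temporal segment annotations.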
Common sequences include:

```
walk → fall → fallen → stand_up
walk → sit_down → sitting → stand_up
walk → lie_down → lying → stand_up
standing → squat_down → squatting → stand_up
```

Not all transitions include static states. For example, a person might `stand_up` immediately after falling without a `fallen` state.

### Motion Types

**Dynamic Actions** (transitions and movements):
- Labeled from the **first frame** where the motion begins
- End when the person reaches a **resting state** or begins a new action
- If one motion is followed by another, the transition occurs at the first frame showing movement not explained by the previous action

**Static States** (stationary postures):
- Begin when the person **comes to rest** in that posture
- Continue until the next motion begins
- Example for `sitting`: does not start when the body touches the chair, but when the body loses its tension and settles into the seated position

### Label Boundaries

- **Dynamic → Dynamic**: Transition at first frame of new motion
- **Dynamic → Static**: Static begins when movement stops and body settles
- **Static → Dynamic**: Dynamic begins at first frame of movement

## Demographic Distribution

Rich demographic and scene metadata enables bias analysis and cross-demographic evaluation.

![Demographic Overview](figures/demographic_overview.png)

**Note:** Metadata represents generation prompts. Due to generative model biases, actual visual attributes may deviate, particularly for ethnicity and body type. Age and gender are generally more reliable.

**Scene Variations:**
- Environments: indoor/outdoor settings
- Camera angles: 4 elevations × 4 azimuths × 2 distances
- Shot types: static wide and medium-wide

## Video Data

**Videos are NOT included in this repository.** This dataset contains only annotations and metadata.
### Video Specifications

- **Duration:** 5.0625 seconds per clip
- **Frame count:** 81 frames
- **Frame rate:** 16 fps
- **Format:** MP4 (H.264)
- **Resolution:** Variable (synthetic generation)

### Accessing Videos

Videos will be released at a later point in time. Information about access will be provided here when available.

When videos become available, they should be organized with the following structure:

```
video_root/
├── fall/
│   ├── fall_ch_001.mp4
│   ├── fall_ch_002.mp4
│   └── ...
├── fallen/
│   ├── fallen_ch_001.mp4
│   └── ...
└── ...
```

The `path` field in the CSV corresponds to the relative path without the `.mp4` extension (e.g., `"fall/fall_ch_001"` → `video_root/fall/fall_ch_001.mp4`).

## License

[![License: CC BY-NC 4.0](https://img.shields.io/badge/License-CC%20BY--NC%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc/4.0/)

Annotations and metadata are released under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/). Video data is synthetic and subject to separate terms.