WanFall: A Synthetic Activity Recognition Dataset

WanFall is a synthetic activity recognition dataset of 12,000 videos focused on fall detection and activities of daily living. It provides rich demographic metadata and multiple evaluation protocols for bias analysis.

Status: Under active development, subject to change.

Dataset Statistics

Property             Value
Videos               12,000 (5.0625 s each)
Temporal Segments    19,228
Activity Classes     16
Frames per Video     81 frames @ 16 fps
Annotation Formats   Temporal segments or frame-wise labels
Metadata Fields      12 (6 demographic + 6 scene)

Quick Start

from datasets import load_dataset

# Random split with temporal segments (default)
dataset = load_dataset("simplexsigil2/wanfall", "random")
print(f"Train: {len(dataset['train'])} segments")  # 15,344 segments

# Random split with frame-wise labels (81 per video)
dataset = load_dataset("simplexsigil2/wanfall", "random", framewise=True)
print(f"Train: {len(dataset['train'])} videos")    # 9,600 videos

# Cross-demographic evaluation
cross_age = load_dataset("simplexsigil2/wanfall", "cross_age")

# Access example
example = dataset['train'][0]
print(f"Video: {example['path']}")
print(f"Activity: {example['label']} ({example['start']:.2f}s - {example['end']:.2f}s)")
print(f"Demographics: {example['age_group']}, {example['race_ethnicity_omb']}")

Activity Classes

16 activity classes covering falls, posture transitions, and static states:

LABEL_MAP = {
    0: "walk",        # Walking movement, including jogging and running
    1: "fall",        # Falling down action (loss of control)
    2: "fallen",      # Person on ground after fall
    3: "sit_down",    # Transition from standing to sitting
    4: "sitting",     # Stationary sitting posture
    5: "lie_down",    # Intentionally lying down (not falling)
    6: "lying",       # Stationary lying posture
    7: "stand_up",    # Getting up (to sitting or standing)
    8: "standing",    # Stationary standing posture
    9: "other",       # Unclassified activities
    10: "kneel_down", # Transition to kneeling
    11: "kneeling",   # Stationary kneeling posture
    12: "squat_down", # Transition to squatting
    13: "squatting",  # Stationary squatting posture
    14: "crawl",      # Crawling movement on hands and knees
    15: "jump",       # Jumping action
}

Motion Types:

  • Dynamic (0-1, 3, 5, 7, 9-10, 12, 14-15): Transitions and movements
  • Static (2, 4, 6, 8, 11, 13): Stationary postures
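
For programmatic grouping, this partition can be written as label-ID sets; a minimal sketch mirroring the lists above:

# Label IDs grouped by motion type, mirroring the lists above.
DYNAMIC_LABELS = {0, 1, 3, 5, 7, 9, 10, 12, 14, 15}  # transitions and movements
STATIC_LABELS = {2, 4, 6, 8, 11, 13}                  # stationary postures

def motion_type(label_id: int) -> str:
    """Return the motion type for a WanFall class ID."""
    return "dynamic" if label_id in DYNAMIC_LABELS else "static"

# Sanity checks: the two sets partition all 16 classes.
assert DYNAMIC_LABELS | STATIC_LABELS == set(range(16))
assert not DYNAMIC_LABELS & STATIC_LABELS
print(motion_type(1), motion_type(2))  # dynamic static (fall vs. fallen)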

Data Format

CSV Columns (19 fields)

# Core annotation fields
path                    # Video path (e.g., "fall/fall_ch_001")
label                   # Activity class ID (0-15)
start                   # Segment start time (seconds)
end                     # Segment end time (seconds)
subject                 # -1 (synthetic data)
cam                     # -1 (single view)
dataset                 # "wanfall"

# Demographic metadata (6 fields)
age_group               # toddlers_1_4, children_5_12, teenagers_13_17, young_adults_18_34, middle_aged_35_64, elderly_65_plus
gender_presentation     # male, female
monk_skin_tone          # mst1-mst10 (Monk Skin Tone scale)
race_ethnicity_omb      # white, black, asian, hispanic_latino, aian, nhpi, mena (OMB categories)
bmi_band                # underweight, normal, overweight, obese
height_band             # short, avg, tall

# Scene metadata (6 fields)
environment_category    # indoor, outdoor
camera_shot             # static_wide, static_medium_wide
speed                   # 24fps_rt, 25fps_rt, 30fps_rt, std_rt
camera_elevation        # eye, low, high, top
camera_azimuth          # front, rear, left, right
camera_distance         # medium, far
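
All 19 fields surface as columns when loading through the Hugging Face datasets library; a quick sketch for inspecting the demographic fields of one example (field names as listed above):

from datasets import load_dataset

dataset = load_dataset("simplexsigil2/wanfall", "random")
example = dataset['train'][0]

demographic_fields = ["age_group", "gender_presentation", "monk_skin_tone",
                      "race_ethnicity_omb", "bmi_band", "height_band"]
for field in demographic_fields:
    print(f"{field}: {example[field]}")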

Split Configurations

1. Random Split (80/10/10)

Standard baseline with random video assignment (seed 42).

Split   Videos   Segments
Train    9,600     15,344
Val      1,200      1,956
Test     1,200      1,928

dataset = load_dataset("simplexsigil2/wanfall", "random")

2. Cross-Age Split

Evaluates generalization across age groups. Train on adults, test on children and elderly.

Split   Videos   Age Groups
Train    4,000   young_adults_18_34 (2,000), middle_aged_35_64 (2,000)
Val      2,000   teenagers_13_17 (2,000)
Test     6,000   children_5_12 (2,000), toddlers_1_4 (2,000), elderly_65_plus (2,000)

dataset = load_dataset("simplexsigil2/wanfall", "cross_age")

3. Cross-Ethnicity Split

Evaluates generalization across racial/ethnic groups with maximum phenotypic distance. Train on White/Asian/Hispanic, test on Black/MENA/NHPI.

Split   Videos   Ethnicities
Train    5,178   white (1,709), asian (1,691), hispanic_latino (1,778)
Val      1,741   aian (1,741)
Test     5,081   black (1,684), mena (1,680), nhpi (1,717)

dataset = load_dataset("simplexsigil2/wanfall", "cross_ethnicity")

4. Cross-BMI Split

Evaluates generalization across body types. Train on normal/underweight, test on obese.

Split   Videos   BMI Bands
Train    6,066   normal (3,040), underweight (3,026)
Val      2,962   overweight (2,962)
Test     2,972   obese (2,972)

dataset = load_dataset("simplexsigil2/wanfall", "cross_bmi")

Note: All cross-demographic splits contain the same videos, just organized differently. Total unique videos: 12,000.
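This can be checked directly; a small sketch that counts unique video paths across all splits of one cross-demographic config:

from datasets import load_dataset

cross_age = load_dataset("simplexsigil2/wanfall", "cross_age", framewise=True)

# Union of video paths over all splits (framewise mode: one row per video).
unique_videos = set()
for split in cross_age.values():
    unique_videos.update(split["path"])
print(f"Unique videos: {len(unique_videos)}")  # expected: 12,000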

Usage

The dataset provides flexible loading options depending on your use case. The key distinction is between segment-level and video-level samples.

Loading Modes Overview

Mode                Sample Unit   Has start/end?   Has frame_labels?    Random Split Train Size
Temporal Segments   Segment       ✅ Yes           ❌ No                15,344 segments (9,600 videos)
Frame-Wise Labels   Video         ❌ No            ✅ Yes (81 labels)   9,600 videos

1. Temporal Segments (Default)

Load temporal segment annotations where each sample is a segment with start/end times. Multiple segments can come from the same video.

dataset = load_dataset("simplexsigil2/wanfall", "random")

# Each example is a SEGMENT (not a video)
example = dataset['train'][0]
print(example['path'])      # "fall/fall_ch_001"
print(example['label'])     # 1 (activity class ID)
print(example['start'])     # 0.0 (start time in seconds)
print(example['end'])       # 1.006 (end time in seconds)
print(example['age_group']) # Demographic metadata

# Dataset contains multiple segments per video
print(f"Total segments in train: {len(dataset['train'])}")  # 15,344
print(f"Unique videos: {len(set([ex['path'] for ex in dataset['train']]))}") # 9,600

Use case: Training models on activity classification where you want to extract and process only the relevant video segment for each activity.
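
Because every clip is a fixed 81 frames at 16 fps, segment timestamps map directly to frame indices. A minimal sketch (the rounding convention and the load_video helper are assumptions, not part of the dataset API):

FPS = 16
NUM_FRAMES = 81

example = dataset['train'][0]
start_frame = int(round(example['start'] * FPS))
end_frame = min(int(round(example['end'] * FPS)), NUM_FRAMES)

# frames = load_video(example['path'])       # hypothetical loader, shape (81, H, W, 3)
# segment = frames[start_frame:end_frame]    # frames covering this activity
print(f"Label {example['label']}: frames {start_frame}-{end_frame}")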

2. Frame-Wise Labels

Load dense frame-level labels where each sample is a video with 81 frame labels. Each video appears exactly once.

dataset = load_dataset("simplexsigil2/wanfall", "random", framewise=True)

# Each example is a VIDEO (not a segment)
example = dataset['train'][0]
print(example['path'])         # "fall/fall_ch_001"
print(example['frame_labels']) # [1, 1, 1, ..., 11, 11] (81 labels)
print(len(example['frame_labels']))  # 81 frames
print(example['age_group'])    # Demographic metadata included

# Dataset contains one sample per video
print(f"Total videos in train: {len(dataset['train'])}")  # 9,600 videos

Use case: Training sequence models (e.g., temporal action segmentation) that process entire videos and predict frame-level labels.

Key features:

  • Works with all split configs: Add framewise=True to any split
  • Efficient: 348KB compressed archive, automatically cached
  • Complete metadata: All demographic attributes included

3. Additional Configurations

# All segments without train/val/test splits
dataset = load_dataset("simplexsigil2/wanfall", "labels")  # 19,228 segments

# Video metadata only (no labels)
dataset = load_dataset("simplexsigil2/wanfall", "metadata")  # 12,000 videos

# Paths only (minimal memory footprint)
dataset = load_dataset("simplexsigil2/wanfall", "random", paths_only=True)

Practical Examples

Label Conversion

Labels are stored as integers (0-15) but can be converted to strings:

dataset = load_dataset("simplexsigil2/wanfall", "random")
label_feature = dataset['train'].features['label']

# Convert integer to string
label_name = label_feature.int2str(1)      # "fall"

# Convert string to integer
label_id = label_feature.str2int("walk")   # 0

# Access all label names
all_labels = label_feature.names           # ['walk', 'fall', 'fallen', ...]

Filter by Demographics

dataset = load_dataset("simplexsigil2/wanfall", "labels")
segments = dataset['train']

# Filter elderly fall segments
elderly_falls = [
    ex for ex in segments
    if ex['age_group'] == 'elderly_65_plus' and ex['label'] == 1
]
print(f"Found {len(elderly_falls)} elderly fall segments")

# Filter by multiple demographics
indoor_male_falls = [
    ex for ex in segments
    if ex['environment_category'] == 'indoor'
    and ex['gender_presentation'] == 'male'
    and ex['label'] == 1
]
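
The same filtering works with the library's built-in Dataset.filter, which avoids materializing Python lists and caches the result:

# Equivalent filter using the datasets API.
elderly_falls = segments.filter(
    lambda ex: ex['age_group'] == 'elderly_65_plus' and ex['label'] == 1
)
print(f"Found {len(elderly_falls)} elderly fall segments")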

Cross-Demographic Evaluation

# Train on young adults, test on children and elderly
cross_age = load_dataset("simplexsigil2/wanfall", "cross_age", framewise=True)

# Train contains only: young_adults_18_34, middle_aged_35_64
for example in cross_age['train'].select(range(5)):
    print(f"Train video: {example['path']}, age: {example['age_group']}")

# Test contains: children_5_12, toddlers_1_4, elderly_65_plus
for example in cross_age['test'].select(range(5)):
    print(f"Test video: {example['path']}, age: {example['age_group']}")

Training Loop Example

from datasets import load_dataset
import torch

# Load dataset with frame-wise labels
dataset = load_dataset("simplexsigil2/wanfall", "random", framewise=True)

num_epochs = 10  # example value; not prescribed by the dataset

for epoch in range(num_epochs):
    for example in dataset['train']:
        video_path = example['path']
        frame_labels = torch.tensor(example['frame_labels'])  # (81,)

        # Load video frames (user must implement)
        # frames = load_video(video_root / f"{video_path}.mp4")  # (81, H, W, 3)

        # Forward pass
        # outputs = model(frames)
        # loss = criterion(outputs, frame_labels)
        # loss.backward()

Annotation Guidelines

Temporal Precision

Annotations use sub-second accuracy with decimal timestamps (e.g., start: 0.0, end: 1.006). Most frames in videos are labeled, with minimal gaps between activities.

Activity Sequences

Videos contain natural transitions between activities. Common sequences include:

walk β†’ fall β†’ fallen β†’ stand_up
walk β†’ sit_down β†’ sitting β†’ stand_up
walk β†’ lie_down β†’ lying β†’ stand_up
standing β†’ squat_down β†’ squatting β†’ stand_up

Not all transitions include static states. For example, a person might stand_up immediately after falling without a fallen state.

Motion Types

Dynamic Actions (transitions and movements):

  • Labeled from the first frame where the motion begins
  • End when the person reaches a resting state or begins a new action
  • If one motion is followed by another, the transition occurs at the first frame showing movement not explained by the previous action

Static States (stationary postures):

  • Begin when person comes to rest in that posture
  • Continue until the next motion begins
  • Example for sitting: the state does not start when the body touches the chair, but when the body loses its tension and settles into the seated position

Label Boundaries

  • Dynamic β†’ Dynamic: Transition at first frame of new motion
  • Dynamic β†’ Static: Static begins when movement stops and body settles
  • Static β†’ Dynamic: Dynamic begins at first frame of movement

Demographic Distribution

Rich demographic and scene metadata enables bias analysis and cross-demographic evaluation.

Demographic Overview

Note: Metadata represents generation prompts. Due to generative model biases, actual visual attributes may deviate, particularly for ethnicity and body type. Age and gender are generally more reliable.

Scene Variations:

  • Environments: Indoor/outdoor settings
  • Camera angles: 4 elevations Γ— 4 azimuths Γ— 2 distances
  • Shot types: Static wide and medium-wide
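
To examine how any metadata field is actually distributed, a simple count over the metadata config works (assuming the default 'train' split key for this unsplit config):

from collections import Counter
from datasets import load_dataset

metadata = load_dataset("simplexsigil2/wanfall", "metadata")

# Count videos per age group; swap in any of the 12 metadata fields.
age_counts = Counter(metadata['train']['age_group'])
for age_group, count in sorted(age_counts.items()):
    print(f"{age_group}: {count}")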

Video Data

Videos are NOT included in this repository. This dataset contains only annotations and metadata.

Video Specifications

  • Duration: 5.0625 seconds per clip
  • Frame count: 81 frames
  • Frame rate: 16 fps
  • Format: MP4 (H.264)
  • Resolution: Variable (synthetic generation)

Accessing Videos

Videos will be released at a later point in time. Information about access will be provided here when available.

When videos become available, they should be organized with the following structure:

video_root/
β”œβ”€β”€ fall/
β”‚   β”œβ”€β”€ fall_ch_001.mp4
β”‚   β”œβ”€β”€ fall_ch_002.mp4
β”‚   └── ...
β”œβ”€β”€ fallen/
β”‚   β”œβ”€β”€ fallen_ch_001.mp4
β”‚   └── ...
└── ...

The path field in the CSV corresponds to the relative path without the .mp4 extension (e.g., "fall/fall_ch_001" β†’ video_root/fall/fall_ch_001.mp4).
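
Resolving an annotation to its video file is then a single path join; a sketch (video_root is a hypothetical location where you place the released videos):

from pathlib import Path

video_root = Path("/data/wanfall_videos")  # hypothetical local video directory

example = dataset['train'][0]
video_file = video_root / f"{example['path']}.mp4"
print(video_file)  # e.g., /data/wanfall_videos/fall/fall_ch_001.mp4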

License

License: CC BY-NC 4.0

Annotations and metadata are released under CC BY-NC 4.0. The video data is synthetic and subject to separate terms.
