---
language:
- en
license: other
pretty_name: 'POLAR: Posture-Level Action Recognition Dataset'
size_categories:
- 10K<n<100K
tags:
- computer-vision
- image-classification
- object-detection
- action-recognition
- human-pose-estimation
dataset_info:
  features:
  - name: image
    dtype: image
  - name: objects
    list:
    - name: id
      dtype: int64
    - name: bbox
      sequence: float64
    - name: category
      dtype: int64
  splits:
  - name: train
  - name: val
  - name: test
supervised_keys:
- image
- objects
task_templates:
- task: object-detection
citations:
- title: 'POLAR: Posture-level Action Recognition Dataset'
  authors:
  - Wentao Ma
  - Shuang Liang
  year: 2021
  doi: 10.17632/hvnsh7rwz7.1
  url: https://data.mendeley.com/datasets/hvnsh7rwz7/1
---
# POLAR: Posture-Level Action Recognition Dataset

## Disclaimer
This dataset is a restructured and YOLO-formatted version of the original POsture-Level Action Recognition (POLAR) dataset. I do not claim ownership or licensing rights over this dataset. For full details, including original licensing and usage terms, please refer to the [original dataset on Mendeley Data](https://data.mendeley.com/datasets/hvnsh7rwz7/1).
## Motivation
The original POLAR dataset, while comprehensive, has a somewhat complex structure that can make it challenging to navigate and integrate with modern object detection frameworks like YOLO. To address this, I reorganized the dataset into a clean, split-based format and converted the annotations to YOLO-compatible labels. This makes it easier to use for training action recognition models directly.
## Description
The POLAR (POsture-Level Action Recognition) dataset focuses on nine categories of human actions directly tied to posture: bending, jumping, lying, running, sitting, squatting, standing, stretching, and walking. It contains a total of 35,324 images and covers approximately 99% of posture-level human actions in daily life, based on the authors' analysis of the PASCAL VOC dataset.
This dataset is suitable for tasks such as:
- Image Classification
- Action Recognition
- Object Detection (with YOLO-formatted bounding boxes around persons)
Each image contains one or more persons, each annotated with a bounding box labeled by their primary action/pose.
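The YAML header above describes each record as an `image` plus a list of `objects` with `id`, `bbox`, and `category` fields. Assuming that metadata is wired up on the Hub, a record could be inspected with the `datasets` library; the repo id below is a placeholder, not the dataset's actual path:

```python
from datasets import load_dataset

# Placeholder repo id: substitute this dataset's actual Hub path.
ds = load_dataset("your-username/POLAR")

sample = ds["train"][0]
print(sample["image"].size)              # decoded PIL image
for obj in sample["objects"]:            # one entry per annotated person
    print(obj["category"], obj["bbox"])  # action class ID and box coordinates
```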
## Dataset Structure
The dataset is pre-split into train, val, and test sets. The directory structure is as follows:
```
POLAR/
├── Annotations/     # Original JSON annotation files (for reference)
│   ├── test/
│   ├── train/
│   └── val/
├── images/          # Original images (.jpg)
│   ├── test/
│   ├── train/
│   └── val/
├── labels/          # YOLO-formatted .txt label files
│   ├── test/
│   ├── train/
│   └── val/
├── splits/          # Split definition files
│   ├── test.txt
│   ├── train.txt
│   └── val.txt
└── dataset.yaml     # YOLO configuration file (for training)
```
- `splits/`: Text files listing image filenames (one per line, without extensions) for each split.
- `labels/`: For each image (e.g., `images/train/p1_00001.jpg`), there is a corresponding `labels/train/p1_00001.txt` with YOLO-format annotations (class ID + normalized bounding box coordinates).
- `dataset.yaml`: Pre-configured for Ultralytics YOLO training (see the Ultralytics YOLO dataset format documentation for details).
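For illustration, a label file for an image containing two annotated persons could look like this (the values are hypothetical):

```
6 0.512 0.430 0.210 0.780
3 0.250 0.550 0.180 0.620
```

The first column is the action class ID (here 6 = standing and 3 = running, per the mapping under Changes Made), followed by the normalized box center x/y, width, and height.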
## Changes Made
Compared to the original dataset, the following modifications were applied:
**Restructured Splits:**

- Organized images and annotations into explicit `train`, `val`, and `test` subfolders.
- Used the original split definitions from the provided `.txt` files in `splits/` to ensure consistency.

**YOLO Formatting:**

- Converted JSON annotations to YOLO `.txt` files in the `labels/` folder (a conversion sketch follows this list).
- Each line in a `.txt` file follows the format `<class_id> <center_x> <center_y> <norm_width> <norm_height>`, with coordinates normalized to [0, 1].
- Class IDs map to actions as follows (0-8):
  - 0: bending
  - 1: jumping
  - 2: lying
  - 3: running
  - 4: sitting
  - 5: squatting
  - 6: standing
  - 7: stretching
  - 8: walking
- Included a ready-to-use `dataset.yaml` for YOLOv8+ training (an example appears at the end of this section).
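To make the conversion concrete, here is a minimal sketch of the per-box transformation. The input layout (`bbox` as `[x_min, y_min, width, height]` in pixels) is an assumption about the original JSON annotations, not taken from the source files:

```python
def to_yolo_line(class_id: int, bbox: list[float], img_w: int, img_h: int) -> str:
    """Convert an assumed pixel-space [x_min, y_min, width, height] box
    into one YOLO label line with coordinates normalized to [0, 1]."""
    x_min, y_min, w, h = bbox
    center_x = (x_min + w / 2) / img_w  # normalized box center, x-axis
    center_y = (y_min + h / 2) / img_h  # normalized box center, y-axis
    return f"{class_id} {center_x:.6f} {center_y:.6f} {w / img_w:.6f} {h / img_h:.6f}"

# Example: a "standing" person (class 6) in a 640x480 image.
print(to_yolo_line(6, [100, 50, 200, 400], 640, 480))
# -> 6 0.312500 0.520833 0.312500 0.833333
```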
These changes simplify setup while preserving the original data integrity.
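For reference, a `dataset.yaml` matching this layout would look roughly like the following; this is a reconstruction consistent with the structure above, not a verbatim copy of the shipped file:

```yaml
path: .              # dataset root (the POLAR/ folder)
train: images/train  # labels are resolved automatically from labels/train
val: images/val
test: images/test

names:
  0: bending
  1: jumping
  2: lying
  3: running
  4: sitting
  5: squatting
  6: standing
  7: stretching
  8: walking
```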
## Usage

### Training with YOLO (Ultralytics)

- Clone or download this dataset to your working directory.
- Install Ultralytics: `pip install ultralytics`.
- Train a model (e.g., using YOLOv8 nano):

  ```bash
  yolo detect train data=dataset.yaml model=yolov8n.pt epochs=100 imgsz=640
  ```

  - This assumes the YAML is in the root (`POLAR/`).
  - Adjust `epochs`, `imgsz`, or other hyperparameters as needed.
  - YOLO will automatically pair images with labels based on filenames.
For more details on YOLO integration, see the Ultralytics documentation.
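The same run can also be launched from Python via the Ultralytics API; a minimal sketch, assuming the working directory is the dataset root:

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 nano checkpoint and fine-tune on POLAR.
model = YOLO("yolov8n.pt")
model.train(data="dataset.yaml", epochs=100, imgsz=640)

# Evaluate on the val split defined in dataset.yaml.
metrics = model.val()
```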
## Citation
If you use this dataset in your research, please cite the original work:
Ma, Wentao; Liang, Shuang (2021), "POLAR: Posture-level Action Recognition Dataset", Mendeley Data, V1, doi: 10.17632/hvnsh7rwz7.1.
Last updated: October 20, 2025