---
language:
  - en
license: other
pretty_name: 'POLAR: Posture-Level Action Recognition Dataset'
size_categories:
  - 10K<n<100K
tags:
  - computer-vision
  - image-classification
  - object-detection
  - action-recognition
  - human-pose-estimation
dataset_info:
  features:
    - name: image
      dtype: image
    - name: objects
      list:
        - name: id
          dtype: int64
        - name: bbox
          sequence: float64
        - name: category
          dtype: int64
  splits:
    - name: train
    - name: val
    - name: test
  supervised_keys:
    - image
    - objects
  task_templates:
    - task: object-detection
citations:
  - title: 'POLAR: Posture-level Action Recognition Dataset'
    authors:
      - Wentao Ma
      - Shuang Liang
    year: 2021
    doi: 10.17632/hvnsh7rwz7.1
    url: https://data.mendeley.com/datasets/hvnsh7rwz7/1
---

# POLAR: Posture-Level Action Recognition Dataset

## Disclaimer

This dataset is a restructured and YOLO-formatted version of the original POsture-Level Action Recognition (POLAR) dataset. I do not claim ownership or licensing rights over this dataset. For full details, including the original licensing and usage terms, please refer to the [original dataset on Mendeley Data](https://data.mendeley.com/datasets/hvnsh7rwz7/1).

## Motivation

The original POLAR dataset, while comprehensive, has a somewhat complex structure that can make it challenging to navigate and integrate with modern object detection frameworks like YOLO. To address this, I reorganized the dataset into a clean, split-based format and converted the annotations to YOLO-compatible labels. This makes it easier to use for training action recognition models directly.

## Description

The POLAR (POsture-Level Action Recognition) dataset focuses on nine categories of human actions directly tied to posture: bending, jumping, lying, running, sitting, squatting, standing, stretching, and walking. It contains a total of 35,324 images and covers approximately 99% of posture-level human actions in daily life, based on the authors' analysis of the PASCAL VOC dataset.

This dataset is suitable for tasks such as:

- Image Classification
- Action Recognition
- Object Detection (with YOLO-formatted bounding boxes around persons)

Each image features one or more persons, each annotated with a bounding box labeled by their primary action/pose.

## Dataset Structure

The dataset is pre-split into train, val, and test sets. The directory structure is as follows:

```
POLAR/
├── Annotations/          # Original JSON annotation files (for reference)
│   ├── test/
│   ├── train/
│   └── val/
├── images/               # Original images (.jpg)
│   ├── test/
│   ├── train/
│   └── val/
├── labels/               # YOLO-formatted .txt label files
│   ├── test/
│   ├── train/
│   └── val/
├── splits/               # Split definition files
│   ├── test.txt
│   ├── train.txt
│   └── val.txt
└── dataset.yaml          # YOLO configuration file (for training)
```

- `splits/`: Text files listing image filenames (one per line, without extensions) for each split.
- `labels/`: For each image (e.g., `images/train/p1_00001.jpg`), there is a corresponding `labels/train/p1_00001.txt` with YOLO-format annotations (class ID plus normalized bounding box coordinates).
- `dataset.yaml`: Pre-configured for Ultralytics YOLO training (see the YOLO dataset format documentation for details; a sketch of its likely contents follows).
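
A minimal `dataset.yaml` consistent with this layout might look like the sketch below. The paths and class names are taken from this README; the shipped file may differ in its details:

```yaml
# Sketch of dataset.yaml (values inferred from this README, not the shipped file)
path: .             # dataset root (the POLAR/ directory)
train: images/train
val: images/val
test: images/test

names:
  0: bending
  1: jumping
  2: lying
  3: running
  4: sitting
  5: squatting
  6: standing
  7: stretching
  8: walking
```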

## Changes Made

Compared to the original dataset, the following modifications were applied:

1. **Restructured splits:**
   - Organized images and annotations into explicit train, val, and test subfolders.
   - Used the original split definitions from the provided .txt files in `splits/` to ensure consistency.
2. **YOLO formatting:**
   - Converted the JSON annotations to YOLO .txt files in the `labels/` folder (a sketch of this conversion follows the list).
   - Each line in a .txt file follows the format `<class_id> <center_x> <center_y> <norm_width> <norm_height>`, with all coordinates normalized to [0, 1].
   - Class IDs map to actions as follows (0-8):
     - 0: bending
     - 1: jumping
     - 2: lying
     - 3: running
     - 4: sitting
     - 5: squatting
     - 6: standing
     - 7: stretching
     - 8: walking
   - Included a ready-to-use `dataset.yaml` for YOLOv8+ training.
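
For illustration, here is a minimal sketch of the JSON-to-YOLO conversion for a single object. The input schema is an assumption (the pixel-space `[x_min, y_min, x_max, y_max]` bbox convention is hypothetical); the original POLAR JSON may use a different layout:

```python
# Sketch: convert one annotation to a YOLO label line.
# NOTE: the input convention (bbox as [x_min, y_min, x_max, y_max] in pixels)
# is an assumption, not the verified POLAR JSON layout.

def to_yolo_line(category: int, bbox: list[float],
                 img_w: int, img_h: int) -> str:
    x_min, y_min, x_max, y_max = bbox
    # YOLO expects the box center and size, normalized to [0, 1].
    cx = (x_min + x_max) / 2 / img_w
    cy = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{category} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# Example: a 100x200 px box at the top-left of a 640x480 image, class 6 (standing)
print(to_yolo_line(6, [0, 0, 100, 200], 640, 480))
# -> "6 0.078125 0.208333 0.156250 0.416667"
```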

These changes simplify setup while preserving the original data integrity.

## Usage

### Training with YOLO (Ultralytics)

1. Clone or download this dataset to your working directory.
2. Install Ultralytics: `pip install ultralytics`.
3. Train a model (e.g., YOLOv8 nano):

   ```bash
   yolo detect train data=dataset.yaml model=yolov8n.pt epochs=100 imgsz=640
   ```

   - This assumes `dataset.yaml` is in the dataset root (`POLAR/`).
   - Adjust `epochs`, `imgsz`, or other hyperparameters as needed.
   - YOLO automatically pairs images with labels based on matching filenames.
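
Equivalently, a minimal sketch using the Ultralytics Python API:

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 nano checkpoint and fine-tune on POLAR.
model = YOLO("yolov8n.pt")
model.train(data="dataset.yaml", epochs=100, imgsz=640)

# Run inference with the trained model (the image path below is illustrative).
results = model.predict("images/test/p1_00001.jpg")
```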

For more details on YOLO integration, see the [Ultralytics documentation](https://docs.ultralytics.com).

## Citation

If you use this dataset in your research, please cite the original work:

Ma, Wentao; Liang, Shuang (2021), "POLAR: Posture-level Action Recognition Dataset", Mendeley Data, V1, doi: 10.17632/hvnsh7rwz7.1.


Last updated: October 20, 2025