Lightweight CNNs (156k parameters each) for chess board analysis from 32×32 pixel square images. The project includes three specialized models trained on synthetic data from chess.com/lichess boards, piece sets, arrow overlays, and centering variations:
- Pieces Model: Classifies 13 classes (6 white pieces, 6 black pieces, empty squares) for board state recognition and FEN generation
- Arrows Model: Classifies 49 classes representing arrow overlay patterns for detecting chess analysis annotations
- Snap Model: Classifies 2 classes (centered vs off-centered pieces) for automated board analysis and piece positioning validation
## Quick Start

```bash
pip install chess-cv
```

```python
from chess_cv import load_bundled_model

# Load pre-trained models (weights included in package)
pieces_model = load_bundled_model('pieces')
arrows_model = load_bundled_model('arrows')
snap_model = load_bundled_model('snap')

# Make predictions
piece_predictions = pieces_model(image_tensor)
arrow_predictions = arrows_model(image_tensor)
snap_predictions = snap_model(image_tensor)
```
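The snippet above assumes an `image_tensor` is already prepared. A minimal sketch of that step is shown below; the NHWC layout, float32 dtype, and [0, 1] scaling are assumptions about the expected preprocessing, so check the project documentation for the exact transform used at training time.

```python
import numpy as np
import mlx.core as mx
from PIL import Image

# Hypothetical preprocessing: load one square image and build a batched
# (1, 32, 32, 3) float32 tensor scaled to [0, 1]. Verify against the
# project's actual preprocessing pipeline.
img = Image.open("square.png").convert("RGB").resize((32, 32))
image_tensor = mx.array(np.asarray(img, dtype=np.float32) / 255.0)[None, ...]

# Class index with the highest score (13 classes for the pieces model)
logits = pieces_model(image_tensor)
predicted_class = int(mx.argmax(logits, axis=-1).item())
```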
### Alternative: Load the latest weights from the Hugging Face Hub

```python
from huggingface_hub import hf_hub_download
from chess_cv.model import SimpleCNN
import mlx.core as mx

# Download the latest weights from Hugging Face
model_path = hf_hub_download(repo_id="S1M0N38/chess-cv", filename="pieces.safetensors")

# Instantiate the architecture and load the downloaded weights
model = SimpleCNN(num_classes=13)
weights = mx.load(str(model_path))
model.load_weights(list(weights.items()))
model.eval()
```
## Models
This repository contains three specialized models for chess board analysis:
### Pieces Model (`pieces.safetensors`)
Overview:
The pieces model classifies chess square images into 13 classes: 6 white pieces (wP, wN, wB, wR, wQ, wK), 6 black pieces (bP, bN, bB, bR, bQ, bK), and empty squares (xx). This model is designed for board state recognition and FEN generation from chess board images.
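To illustrate the FEN-generation use case, here is a minimal sketch that assembles the piece-placement field of a FEN string from 64 per-square labels. The helper, the label-to-symbol mapping, and the a8-to-h1 square ordering are illustrative assumptions, not part of the package.

```python
# Hypothetical illustration: build the FEN piece-placement field from 64
# per-square class labels ordered rank 8 -> rank 1, file a -> file h.
# The label strings (wP, bK, xx, ...) follow the class names listed above;
# how you obtain them from model outputs depends on your label mapping.
LABEL_TO_FEN = {
    "wP": "P", "wN": "N", "wB": "B", "wR": "R", "wQ": "Q", "wK": "K",
    "bP": "p", "bN": "n", "bB": "b", "bR": "r", "bQ": "q", "bK": "k",
}

def squares_to_fen(labels: list[str]) -> str:
    """labels: 64 class names, a8..h8, a7..h7, ..., a1..h1."""
    ranks = []
    for r in range(8):
        row, empty = "", 0
        for label in labels[r * 8:(r + 1) * 8]:
            if label == "xx":          # empty square
                empty += 1
            else:
                if empty:
                    row += str(empty)
                    empty = 0
                row += LABEL_TO_FEN[label]
        if empty:
            row += str(empty)
        ranks.append(row)
    return "/".join(ranks)
```

For example, predictions for the standard starting position would yield `rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR`.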
Training:
- Architecture: SimpleCNN (156k parameters)
- Input: 32×32 px RGB square images
- Data: ~93,000 synthetic images from 55 board styles × 64 piece sets
- Augmentation: Aggressive augmentation with arrow overlays (80%), highlight overlays (25%), move overlays (50%), random crops, horizontal flips, color jitter, rotation (±10°), and Gaussian noise
- Optimizer: AdamW (weight_decay=0.001) with LR scheduler (warmup + cosine decay: 0 → 0.001 → 1e-5; see the sketch after this list)
- Training: 1000 epochs, batch size 64
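The schedule is only described at a high level (warmup followed by cosine decay from 0.001 down to 1e-5). A minimal sketch of such a schedule is below; the warmup length and exact formula are assumptions, not the project's configuration.

```python
import math

def lr_schedule(step: int, total_steps: int, warmup_steps: int = 1000,
                peak_lr: float = 1e-3, final_lr: float = 1e-5) -> float:
    """Linear warmup from 0 to peak_lr, then cosine decay down to final_lr."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return final_lr + 0.5 * (peak_lr - final_lr) * (1 + math.cos(math.pi * progress))
```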
Performance:
| Dataset | Accuracy | F1-Score (Macro) |
|---|---|---|
| Test Data | 99.90% | 99.90% |
| S1M0N38/chess-cv-openboard * | - | 98.56% |
| S1M0N38/chess-cv-chessvision * | - | 92.28% |
\* Datasets with an unbalanced class distribution (e.g. many more samples for the empty-square class), so accuracy is not representative and is omitted.
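As a toy illustration (unrelated to the actual datasets) of why accuracy is misleading here: predicting the majority class on a 90/10 split scores high accuracy but a low macro F1, which averages F1 over classes regardless of support.

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy data: 90 empty squares, 10 white pawns; classifier predicts "empty" everywhere.
y_true = ["xx"] * 90 + ["wP"] * 10
y_pred = ["xx"] * 100

print(accuracy_score(y_true, y_pred))             # 0.9
print(f1_score(y_true, y_pred, average="macro"))  # ~0.47
```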
### Arrows Model (`arrows.safetensors`)
Overview:
The arrows model classifies chess square images into 49 classes representing different arrow overlay patterns: 20 arrow heads, 12 arrow tails, 8 middle segments (for straight and diagonal arrows), 4 corner pieces (for knight-move arrows), and empty squares (xx). This model enables detection and reconstruction of arrow annotations commonly used in chess analysis interfaces. The NSEW naming convention (North/South/East/West) indicates arrow orientation and direction.
Training:
- Architecture: SimpleCNN (156k parameters, same as pieces model)
- Input: 32×32 px RGB square images
- Data: 4.5M synthetic images from 55 board styles × arrow overlays (3.14M train, ~672K val, ~672K test)
- Augmentation: Conservative augmentation with highlight overlays (25%), move overlays (50%), random crops, and minimal color jitter/noise; no horizontal flips, to preserve arrow directionality
- Optimizer: AdamW (lr=0.0005, weight_decay=0.00005)
- Training: 20 epochs, batch size 128
Performance:
| Dataset | Accuracy | F1-Score (Macro) |
|---|---|---|
| Test Data (synthetic) | 99.99% | 99.99% |
The arrows model is optimized for detecting directional annotations while maintaining spatial consistency across the board.
Limitation: Classification accuracy degrades when multiple arrow components overlap in a single square.
### Snap Model (`snap.safetensors`)
Overview:
The snap model classifies chess square images into 2 classes: centered ("ok") and off-centered ("bad") pieces. This model is designed for automated board analysis and piece positioning validation, helping ensure proper piece placement in digital chess interfaces and automated analysis systems.
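As a usage sketch (not the package's API), the snippet below collects the squares a snap model flags as off-centered; the index-to-label mapping is an assumption and should be checked against the project's class definitions.

```python
import mlx.core as mx

# Hypothetical helper: flag squares whose piece looks off-centered.
# Assumes `square_tensors` is a list of 64 preprocessed (1, 32, 32, 3)
# tensors (a8..h1) and that class index 1 means "ok" and index 0 "bad";
# verify this mapping against the project's label files.
def off_centered_squares(snap_model, square_tensors, ok_index: int = 1) -> list[int]:
    flagged = []
    for i, tensor in enumerate(square_tensors):
        pred = int(mx.argmax(snap_model(tensor), axis=-1).item())
        if pred != ok_index:
            flagged.append(i)   # indices of squares to re-check
    return flagged
```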
Training:
- Architecture: SimpleCNN (156k parameters)
- Input: 32×32 px RGB square images
- Data: 1.4M synthetic images from centered and off-centered piece positions (985,960 train, ~211,574 val, ~210,466 test)
- Augmentation: Conservative augmentation with arrow overlays (50%), highlight overlays (20%), move overlays (50%), mouse overlays (80%), horizontal flips (50%), color jitter, and Gaussian noise; no rotation or other geometric transformations, to preserve centering semantics
- Optimizer: AdamW (weight_decay=0.001) with LR scheduler (warmup + cosine decay: 0 → 0.001 → 1e-5)
- Training: 200 epochs, batch size 64
Performance:
| Dataset | Accuracy | F1-Score (Macro) |
|---|---|---|
| Test Data (synthetic) | 99.93% | 99.93% |
The snap model is optimized for detecting piece centering issues while maintaining robustness to various board styles and visual conditions.
Use Cases:
- Automated board state validation
- Piece positioning quality control
- Chess interface usability testing
- Digital chess board quality assurance
## Training Your Own Model

To train or evaluate a model yourself:

```bash
git clone https://github.com/S1M0N38/chess-cv.git
cd chess-cv
uv sync --all-extras

# Generate training data for a specific model
chess-cv preprocessing pieces  # or 'arrows' or 'snap'

# Train model
chess-cv train pieces  # or 'arrows' or 'snap'

# Evaluate model
chess-cv test pieces  # or 'arrows' or 'snap'
```
See the Setup Guide and Train and Evaluate for detailed instructions on data generation, training configuration, and evaluation.
## Limitations
- Requires precisely cropped 32×32 pixel square images; board detection is not included (see the cropping sketch after this list)
- Trained on synthetic data; may not generalize to real-world photos
- Not suitable for non-standard piece designs
- Optimized for Apple Silicon (slower on CPU)
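Because board detection is not included, the board must be cropped and split into squares beforehand. Below is a minimal sketch of that step, assuming a screenshot already tightly cropped to the 8×8 board; this helper is illustrative and not part of the package.

```python
from PIL import Image

# Hypothetical helper: split a tightly cropped board screenshot into 64
# square crops of 32x32 pixels (a8..h1, row by row). Assumes the image
# contains exactly the 8x8 board with no surrounding border.
def board_to_squares(board_path: str, size: int = 32) -> list[Image.Image]:
    board = Image.open(board_path).convert("RGB")
    w, h = board.size
    sw, sh = w / 8, h / 8
    squares = []
    for row in range(8):          # rank 8 down to rank 1
        for col in range(8):      # file a to file h
            box = (round(col * sw), round(row * sh),
                   round((col + 1) * sw), round((row + 1) * sh))
            squares.append(board.crop(box).resize((size, size)))
    return squares
```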
For detailed documentation, architecture details, and advanced usage, see the full documentation.
## Citation

```bibtex
@software{bertolotto2025chesscv,
  author = {Bertolotto, Simone},
  title  = {{Chess CV}},
  url    = {https://github.com/S1M0N38/chess-cv},
  year   = {2025}
}
```
Repo: github.com/S1M0N38/chess-cv • PyPI: pypi.org/project/chess-cv
## Evaluation results

All metrics are self-reported.

| Dataset | Accuracy | F1-Score (Macro) |
|---|---|---|
| Chess CV Test Dataset | 0.999 | 0.999 |
| Chess CV OpenBoard Dataset | 0.993 | 0.986 |
| Chess CV ChessVision Dataset | 0.931 | 0.923 |
| Chess CV Arrows Test Dataset | 1.000 | 1.000 |
| Chess CV Snap Test Dataset | 0.999 | 0.999 |