---
|
|
license: mit |
|
|
task_categories: |
|
|
- time-series-forecasting |
|
|
- robotics |
|
|
- video-classification |
|
|
- feature-extraction |
|
|
tags: |
|
|
- blender |
|
|
- camera-tracking |
|
|
- vfx |
|
|
- optical-flow |
|
|
- computer-vision |
|
|
pretty_name: AutoSolve Telemetry |
|
|
size_categories: |
|
|
- n<1K |
|
|
--- |
|
|
|
|
|
# 🧪 AutoSolve Research Dataset (Beta)
|
|
|
|
|
> **Community-driven telemetry for 3D Camera Tracking** |
|
|
|
|
|
This dataset collects anonymized tracking sessions from the [AutoSolve Blender Addon](https://github.com/UsamaSQ/AutoSolve). The data trains an adaptive learning system that predicts optimal tracking settings (Search Size, Pattern Size, Motion Model) from footage characteristics.
|
|
|
|
|
--- |
|
|
|
|
|
## 🤝 How to Contribute
|
|
|
|
|
Your data makes AutoSolve smarter for everyone. |
|
|
|
|
|
### Step 1: Export from Blender |
|
|
|
|
|
1. Open Blender and go to the **Movie Clip Editor**. |
|
|
2. In the **AutoSolve** panel, find the **Research Beta** sub-panel. |
|
|
3. Click **Export** (exports as `autosolve_telemetry_YYYYMMDD_HHMMSS.zip`). |
|
|
|
|
|
### Step 2: Upload Here |
|
|
|
|
|
1. Click the **"Files and versions"** tab at the top of this page. |
|
|
2. Click **"Add file"** β **"Upload file"**. (You need to be Logged-In to HuggingFace to upload) |
|
|
3. Drag and drop your `.zip` file. |
|
|
4. _(Optional)_ Add a brief description: e.g., "10 drone shots, 4K 30fps, outdoor" |
|
|
5. Click **"Commit changes"** (creates a Pull Request). |
|
|
|
|
|
**Note:** Contributions are reviewed before merging to ensure data quality and privacy compliance. |
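If you prefer to script the upload, the `huggingface_hub` client can open the same Pull Request from Python. A minimal sketch, assuming `pip install huggingface_hub`, a prior `huggingface-cli login`, and that your export sits in the current directory:

```python
# A sketch of a scripted contribution via huggingface_hub; it mirrors the
# web "Upload file" flow and opens a Pull Request for review.
from huggingface_hub import HfApi

export_name = "autosolve_telemetry_20251212_103045.zip"  # your export file

api = HfApi()
api.upload_file(
    path_or_fileobj=export_name,
    path_in_repo=export_name,
    repo_id="UsamaSQ/autosolve-telemetry",
    repo_type="dataset",
    create_pr=True,  # contributions are reviewed before merging
    commit_message="10 drone shots, 4K 30fps, outdoor",  # brief description
)
```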
|
|
|
|
|
### Step 3: Join the Community |
|
|
|
|
|
Have questions or want to discuss your contributions? |
|
|
|
|
|
**Discord:** [Join our community](https://discord.gg/qUvrXHP9PU) |
|
|
**Documentation:** [Full contribution guide](https://github.com/UsamaSQ/AutoSolve/blob/main/CONTRIBUTING_DATA.md) |
|
|
|
|
|
--- |
|
|
|
|
|
## 📁 Dataset Structure
|
|
|
|
|
Each ZIP file contains anonymized numerical telemetry: |
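The layout below is inferred from the record types described in this section and the manifest read in the loading example further down:

```
autosolve_telemetry_YYYYMMDD_HHMMSS.zip
├── manifest.json     # export version, session/behavior counts
├── sessions/*.json   # one record per tracking attempt
├── behavior/*.json   # one record per expert-edit iteration
└── model.json        # the contributor's local model state
```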
|
|
|
|
|
### 1. Session Records (`/sessions/*.json`) |
|
|
|
|
|
Individual tracking attempts with complete metrics. |
|
|
|
|
|
**What's Included:** |
|
|
|
|
|
- **Footage Metadata:** Resolution, FPS, Frame Count |
|
|
- **Settings Used:** Pattern Size, Search Size, Correlation, Motion Model |
|
|
- **Results:** Solve Error, Bundle Count, Success/Failure |
|
|
- **Camera Intrinsics:** Focal Length, Sensor Size, Distortion Coefficients (K1, K2, K3) |
|
|
- **Motion Analysis:** Motion Class (LOW/MEDIUM/HIGH), Parallax Score, Velocity Statistics |
|
|
- **Feature Density:** Count of trackable features in each region of a 3×3 grid (from Blender's `detect_features`)
|
|
- **Time Series:** Per-frame active tracks, dropout rates, velocity profiles |
|
|
- **Track Lifecycle:** Per-marker survival, jitter, reprojection error |
|
|
- **Track Healing:** Anchor tracks, healing attempts, gap interpolation results |
|
|
- **Track Averaging:** Merged segment counts |
|
|
|
|
|
**Example Session:** |
|
|
|
|
|
```json |
|
|
{
  "schema_version": 1,
  "timestamp": "2025-12-12T10:30:00",
  "resolution": [1920, 1080],
  "fps": 30,
  "frame_count": 240,
  "settings": {
    "pattern_size": 17,
    "search_size": 91,
    "correlation": 0.68,
    "motion_model": "LocRot"
  },
  "success": true,
  "solve_error": 0.42,
  "bundle_count": 45,
  "motion_class": "MEDIUM",
  "visual_features": {
    "feature_density": {
      "center": 12,
      "top-left": 8,
      "top-right": 6
    },
    "motion_magnitude": 0.015,
    "edge_density": {
      "center": 0.85,
      "top-left": 0.42
    }
  },
  "healing_stats": {
    "candidates_found": 5,
    "heals_attempted": 3,
    "heals_successful": 2,
    "avg_gap_frames": 15.0
  }
}
|
|
``` |
|
|
|
|
|
### 2. Behavior Records (`/behavior/*.json`) |
|
|
|
|
|
**The key learning data:** how experts improve tracking.
|
|
|
|
|
**What's Captured:** |
|
|
|
|
|
- **Track Additions:** Which markers users manually add (region, position, quality)
|
|
- **Track Deletions:** Which markers users remove (region, lifespan, error, reason) |
|
|
- **Settings Adjustments:** Which parameters users changed (before/after values) |
|
|
- **Re-solve Results:** Whether user changes improved solve error |
|
|
- **Marker Refinements:** Manual position adjustments |
|
|
- **Net Track Change:** How many tracks were added vs removed |
|
|
- **Region Reinforcement:** Which regions pros manually populated |
|
|
|
|
|
**Purpose:** Teaches the AI how experts **improve** tracking, not just how they clean up.
|
|
|
|
|
**Example Behavior:** |
|
|
|
|
|
```json |
|
|
{
  "schema_version": 1,
  "clip_fingerprint": "a7f3c89b2e71d6f0",
  "contributor_id": "x7f2k9a1",
  "iteration": 3,
  "track_additions": [
    {
      "track_name": "Track.042",
      "region": "center",
      "initial_frame": 45,
      "position": [0.52, 0.48],
      "lifespan_achieved": 145,
      "had_bundle": true,
      "reprojection_error": 0.32
    }
  ],
  "track_deletions": [
    {
      "track_name": "Track.003",
      "region": "top-right",
      "lifespan": 12,
      "had_bundle": false,
      "reprojection_error": 2.8,
      "inferred_reason": "high_error"
    }
  ],
  "net_track_change": 3,
  "region_additions": { "center": 2, "bottom-center": 1 },
  "re_solve": {
    "attempted": true,
    "error_before": 0.87,
    "error_after": 0.42,
    "improvement": 0.45,
    "improved": true
  }
}
|
|
``` |
|
|
|
|
|
### 3. Model State (`model.json`) |
|
|
|
|
|
The user's local statistical model state showing learned patterns. |
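Its schema is internal to the addon and may change between versions, so the sketch below only inspects the top-level keys:

```python
# Peek at the contributed model state without assuming its schema.
import json
import zipfile

with zipfile.ZipFile('autosolve_telemetry_20251212_103045.zip') as zf:
    model_state = json.loads(zf.read('model.json'))

print(sorted(model_state.keys()))
```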
|
|
|
|
|
--- |
|
|
|
|
|
## 📊 What Gets Collected
|
|
|
|
|
Each contribution includes: |
|
|
|
|
|
✅ **Numerical Metrics**
|
|
|
|
|
- Tracking settings that worked (or failed) |
|
|
- Motion analysis (velocity, direction, parallax) |
|
|
- Per-track survival and quality metrics |
|
|
- Feature density counts per region |
|
|
|
|
|
✅ **Camera Characteristics**
|
|
|
|
|
- Focal length and sensor size |
|
|
- Lens distortion coefficients |
|
|
- Principal point coordinates |
|
|
|
|
|
✅ **Time Series Data**
|
|
|
|
|
- Per-frame active track counts |
|
|
- Track dropout rates |
|
|
- Velocity profiles over time |
|
|
|
|
|
--- |
|
|
|
|
|
## 🔒 Data Privacy & Ethics
|
|
|
|
|
We take privacy seriously. This dataset contains **numerical telemetry only**. |
|
|
|
|
|
❌ **NOT Collected:**
|
|
|
|
|
- Images, video frames, or pixel data |
|
|
- File paths or project names |
|
|
- User identifiers (IPs, usernames, emails) |
|
|
- System information |
|
|
|
|
|
✅ **Only Collected:**
|
|
|
|
|
- Resolution, FPS, frame count |
|
|
- Mathematical motion vectors |
|
|
- Tracking settings and success metrics |
|
|
- Feature density counts (not actual features) |
|
|
|
|
|
_For complete schema documentation, see [TRAINING_DATA.md](https://github.com/UsamaSQ/AutoSolve/blob/main/TRAINING_DATA.md)_ |
|
|
|
|
|
--- |
|
|
|
|
|
## 🎓 Usage for Researchers
|
|
|
|
|
This data is ideal for training models related to: |
|
|
|
|
|
### Hyperparameter Optimization |
|
|
|
|
|
Predicting optimal tracking settings (Search Size, Pattern Size, Correlation, Motion Model) from footage characteristics and motion analysis.
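A toy sketch of the idea: regress the chosen sizes on a few footage features. The two records below are synthetic stand-ins; with real data you would build `X` and `y` from the `sessions` list shown in the loading example.

```python
# Sketch: predict [pattern_size, search_size] from footage characteristics.
from sklearn.ensemble import RandomForestRegressor

sessions = [  # synthetic placeholders for real session records
    {"resolution": [1920, 1080], "fps": 30, "motion_class": "MEDIUM",
     "settings": {"pattern_size": 17, "search_size": 91}},
    {"resolution": [3840, 2160], "fps": 24, "motion_class": "HIGH",
     "settings": {"pattern_size": 31, "search_size": 121}},
]

MOTION = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}
X = [[s["resolution"][0], s["fps"], MOTION[s["motion_class"]]] for s in sessions]
y = [[s["settings"]["pattern_size"], s["settings"]["search_size"]] for s in sessions]

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([[1920, 30, 1]]))  # suggested [pattern_size, search_size]
```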
|
|
|
|
|
### Outlier Detection |
|
|
|
|
|
Identifying "bad" 2D tracks before the camera solve, using lifecycle and jitter patterns.
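A threshold-based sketch of such a filter; `lifespan` and `reprojection_error` mirror the documented lifecycle metrics, while the `jitter` key and all threshold values are assumptions:

```python
# Sketch: flag short-lived, jittery, or high-error tracks before the solve.
def is_outlier(track, max_error=2.0, min_lifespan=20, max_jitter=0.5):
    return (track.get("reprojection_error", 0.0) > max_error
            or track.get("lifespan", 0) < min_lifespan
            or track.get("jitter", 0.0) > max_jitter)

tracks = [  # synthetic track-lifecycle records
    {"track_name": "Track.003", "lifespan": 12, "jitter": 0.8, "reprojection_error": 2.8},
    {"track_name": "Track.042", "lifespan": 145, "jitter": 0.1, "reprojection_error": 0.32},
]
print([t["track_name"] for t in tracks if is_outlier(t)])  # ['Track.003']
```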
|
|
|
|
|
### Motion Classification |
|
|
|
|
|
Classifying camera motion types (Drone, Handheld, Tripod) from sparse optical flow and feature density. |
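A minimal sketch with a linear classifier; the `[mean velocity, parallax score]` feature pairs and the labels are synthetic illustrations built from the documented motion-analysis fields:

```python
# Sketch: classify camera motion type from two motion-analysis features.
from sklearn.linear_model import LogisticRegression

X = [[0.001, 0.01], [0.30, 0.10], [0.02, 0.05],
     [0.25, 0.85], [0.28, 0.80], [0.32, 0.15]]  # [velocity, parallax]
y = ["tripod", "handheld", "tripod", "drone", "drone", "handheld"]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[0.27, 0.75]]))  # likely ['drone']
```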
|
|
|
|
|
### Temporal Modeling |
|
|
|
|
|
Predicting track dropout with an RNN/LSTM trained on the per-frame time series data.
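Before any RNN work comes the data prep. A sketch that turns a per-frame active-track series into fixed-length windows with a "did the count drop on the next frame" label; the series is synthetic and the `active_tracks` key name is an assumption:

```python
# Sketch: window a per-frame time series for a sequence model (e.g. an LSTM).
import numpy as np

active_tracks = np.array([40, 39, 39, 37, 35, 30, 28, 28, 27, 25])  # synthetic
window = 4

X = np.stack([active_tracks[i:i + window]
              for i in range(len(active_tracks) - window)])
y = active_tracks[window:] < active_tracks[window - 1:-1]  # dropout next frame?
print(X.shape, y)  # (6, 4) [ True  True  True False  True  True]
```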
|
|
|
|
|
--- |
|
|
|
|
|
## 💻 Loading the Dataset
|
|
|
|
|
### Python Example |
|
|
|
|
|
```python |
|
|
import json
import zipfile
from pathlib import Path
from collections import defaultdict

# Load a contributed ZIP
zip_path = Path('autosolve_telemetry_20251212_103045.zip')

with zipfile.ZipFile(zip_path, 'r') as zf:
    # Read manifest
    manifest = json.loads(zf.read('manifest.json'))
    print(f"Export Version: {manifest['export_version']}")
    print(f"Sessions: {manifest['session_count']}")
    print(f"Behaviors: {manifest['behavior_count']}")

    # Load all sessions
    sessions = []
    for filename in zf.namelist():
        if filename.startswith('sessions/') and filename.endswith('.json'):
            session_data = json.loads(zf.read(filename))
            sessions.append(session_data)

# Analyze by footage class
by_class = defaultdict(list)
for s in sessions:
    width = s['resolution'][0]
    fps = s['fps']
    motion = s.get('motion_class', 'MEDIUM')
    cls = f"{'HD' if width >= 1920 else 'SD'}_{int(fps)}fps_{motion}"
    by_class[cls].append(s['success'])

# Success rates per class
print("\nSuccess Rates by Footage Class:")
for cls, results in sorted(by_class.items()):
    rate = sum(results) / len(results)
    print(f"  {cls}: {rate:.1%} ({len(results)} sessions)")
``` |
|
|
|
|
|
### Feature Extraction Example |
|
|
|
|
|
```python |
|
|
import pandas as pd

# Extract feature density patterns
feature_densities = []
for session in sessions:
    vf = session.get('visual_features', {})
    density = vf.get('feature_density', {})
    if density:
        feature_densities.append({
            'motion_class': session.get('motion_class'),
            'center': density.get('center', 0),
            'edges': sum([
                density.get('top-left', 0),
                density.get('top-right', 0),
                density.get('bottom-left', 0),
                density.get('bottom-right', 0)
            ]) / 4,
            'success': session['success']
        })

# Analyze: Do edge-heavy clips succeed more?
df = pd.DataFrame(feature_densities)
print(df.groupby('success')['edges'].mean())
``` |
|
|
|
|
|
--- |
|
|
|
|
|
## 📈 Dataset Statistics
|
|
|
|
|
**Current Status:** Beta Collection Phase |
|
|
|
|
|
**Target:** |
|
|
|
|
|
- 100+ unique footage types |
|
|
- 500+ successful tracking sessions |
|
|
- Diverse motion classes and resolutions |
|
|
|
|
|
**Contribute** to help us reach a production-ready dataset size! 🚀
|
|
|
|
|
--- |
|
|
|
|
|
## 📖 Citation
|
|
|
|
|
If you use this dataset in your research, please cite: |
|
|
|
|
|
```bibtex |
|
|
@misc{autosolve-telemetry-2025,
  title={AutoSolve Telemetry: Community-Driven Camera Tracking Dataset},
  author={Bin Shahid, Usama},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/UsamaSQ/autosolve-telemetry}
}
|
|
``` |
|
|
|
|
|
--- |
|
|
|
|
|
## 🤝 Community & Support
|
|
|
|
|
**Repository:** [GitHub.com/UsamaSQ/AutoSolve](https://github.com/UsamaSQ/AutoSolve) |
|
|
**Discord:** [Join our community](https://discord.gg/qUvrXHP9PU) |
|
|
**Maintainer:** Usama Bin Shahid |
|
|
|
|
|
Your contributions make AutoSolve better for everyone! 🙏
|
|
|