Commit 346b70f
Parent(s): initial commit
Files changed:
- .claude/commands/nourish.md +51 -0
- .claude/commands/peel.md +34 -0
- .claude/commands/plant.md +41 -0
- .gitattributes +2 -0
- .gitignore +68 -0
- .python-version +1 -0
- CLAUDE.md +36 -0
- LICENSE +21 -0
- README.md +13 -0
- app.py +685 -0
- context.md +39 -0
- examples/Duck.glb +3 -0
- examples/Lantern.glb +3 -0
- layers/context-template.md +34 -0
- layers/structure.md +68 -0
- pyproject.toml +36 -0
- requirements.txt +8 -0
- src/__init__.py +105 -0
- src/atlas/__init__.py +7 -0
- src/atlas/builder.py +59 -0
- src/atlas/context.md +33 -0
- src/context.md +38 -0
- src/extraction/__init__.py +12 -0
- src/extraction/context.md +35 -0
- src/extraction/reader.py +136 -0
- src/extraction/sampler.py +42 -0
- src/mesh/__init__.py +7 -0
- src/mesh/context.md +32 -0
- src/mesh/uvmapper.py +85 -0
- src/palette/__init__.py +9 -0
- src/palette/color_space.py +129 -0
- src/palette/context.md +36 -0
- src/palette/mapper.py +41 -0
- src/palette/quantizer.py +51 -0
- src/preprocessing/__init__.py +3 -0
- src/preprocessing/context.md +31 -0
- src/preprocessing/simplifier.py +29 -0
.claude/commands/nourish.md
ADDED
@@ -0,0 +1,51 @@
# 🍃 /nourish

Complete the conversation by updating context and applying cleanup.

## Auto-Loaded Context:

@CLAUDE.md
@layers/structure.md
@layers/context-template.md

User arguments: "$ARGUMENTS"

## Steps

### 1. Identify Changes

- Detect modified, added, or deleted files in the conversation
- Map changes to their parent folders and components

### 2. Update Context Chain

Traverse upward through the context tiers (see CLAUDE.md):

- **Tier 2**: Update relevant `context.md` files to reflect the current code state
- **Tier 1**: Update `layers/structure.md` if the structure, commands, or stack changed
- Follow all rules from CLAUDE.md, especially the "No History" principle

### 3. Apply Cleanup

Fix obvious issues encountered:

- Remove comments; code should be self-explanatory without them
- Remove dead code and unused files
- Consolidate duplicate patterns
- Apply CLAUDE.md principles (simplicity, reuse, single responsibility)

### 4. Verify

- Context accurately reflects the current state
- The project is leaner than, or the same size as, before
- No history references remain in code or context

## Output

Report conversation completion: the context files updated and the improvements made.

## Guidelines

- When updating context, don't over-specify implementation details
- If changes were internal (e.g. business logic), updating context may not be necessary
- Context should ideally be shorter after updates, to avoid context rot
.claude/commands/peel.md
ADDED
@@ -0,0 +1,34 @@
# 🧄 /peel

Load relevant context for the current conversation.

## Auto-Loaded Context:

@CLAUDE.md
@layers/structure.md

User arguments: "$ARGUMENTS"

## Steps

### 1. Parse Work Area

- Analyze the user request or arguments
- Determine the relevant components or features
- Assess the required context depth

### 2. Load Targeted Context

- Read the relevant `context.md` files for the identified areas
- Skip unrelated component contexts to minimize tokens

### 3. Confirm Scope

- Briefly summarize the loaded context
- State your understanding of the work focus
- Note any assumptions made

## Guidelines

- Load only what's needed for the current task
- Defer code reading until necessary
.claude/commands/plant.md
ADDED
@@ -0,0 +1,41 @@
# 🌱 /plant

Initialize the context management structure for the project.

## Auto-Loaded Context:

@CLAUDE.md
@layers/structure.md
@layers/context-template.md

User arguments: "$ARGUMENTS"

## Steps

### 1. Analyze Project

- Scan the technology stack, build tools, and directory structure
- Identify major components and entry points
- Understand existing patterns and conventions

### 2. Fill Core Templates

- Update `CLAUDE.md` with project-specific details
- Complete `layers/structure.md` with the actual stack, commands, and layout
- Remove template placeholders

### 3. Create Component Context

- Generate `context.md` files for major folders using the templates
- Document the purpose, scope, and dependencies of each component
- Place them in the appropriate directories

### 4. Validate

- Ensure coverage of the main components
- Check that the context hierarchy makes sense
- Verify no placeholder text remains

## Output

List created/updated files by tier and any areas needing attention.
.gitattributes
ADDED
@@ -0,0 +1,2 @@
*.glb filter=lfs diff=lfs merge=lfs -text
*.gltf filter=lfs diff=lfs merge=lfs -text
.gitignore
ADDED
@@ -0,0 +1,68 @@
# Environment variables
.env
.env.local
.env.*.local

# Python
__pycache__/
*.py[cod]
*$py.class
*.so
*.egg
*.egg-info/
dist/
build/
eggs/
.eggs/
*.manifest
*.spec

# Virtual environments
.venv/
venv/
ENV/
env/

# Testing
.pytest_cache/
.coverage
.coverage.*
htmlcov/
.tox/
.nox/
*.cover
.hypothesis/

# Project
input/
output/
dev/

# IDE
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store

# Logs
*.log
pip-log.txt

# Blender
*.blend1
*.blend2

# Temporary files
*.tmp
*.bak
.cache/
temp/
tmp/

.ruff_cache/
.claude/settings.local.json

# UV specific
uv.lock
.python-version
ADDED
@@ -0,0 +1 @@
3.10
CLAUDE.md
ADDED
@@ -0,0 +1,36 @@
# AI Context - Working Agreement

<project-description>
Mesh Palettizer - A web application for simplifying 3D model textures using optimized color palettes. Processes GLB/GLTF models to create clean, palettized textures for stylized rendering and game development.
</project-description>

**Required**: Read [layers/structure.md](layers/structure.md) before proceeding with any task

## Context Management System

- **Tier 0 — global**: `CLAUDE.md` (root). Global standards and system overview
- **Tier 1 — project**: `layers/structure.md`. Project map (stack, commands, layout, entry points)
- **Tier 2 — folder context**: `context.md` in any folder; one per folder; explains the purpose/structure of that folder
- **Tier 3 — implementation**: Code files (scripts)

## Rules

- **Priority**: Your number one priority is to manage your own context; always load appropriate context before doing anything else
- **No History**: CRITICAL - Code and context must NEVER reference their own history. Write everything as the current, final state. Never include comments like "changed from X to Y" or "previously was Z". This is a severe form of context rot
- **Simplicity**: Keep code simple, elegant, concise, and readable
- **Structure**: Keep files small and single-responsibility; separate concerns (MVC/ECS as appropriate)
- **Reuse**: Reuse before adding new code; avoid repetition
- **Comments**: Code should be self-explanatory without comments; use concise comments only when necessary
- **State**: Single source of truth; everything else is a cache or derivation
- **Data**: Favor data-driven/declarative design
- **Fail Fast**: Make bugs immediately visible rather than hiding them; favor simplicity over defensive patterns
- **Backwards Compatibility**: Unless stated otherwise, favor simplicity over backwards compatibility; the design rules above should make breaking changes easy to trace and fix

## Security

- **Inputs & secrets**: Validate inputs; keep secrets only in env; never log sensitive data
- **Auth**: Gateway auth; server-side token validation; sanitize inputs

## Tools

- **Context7**: Use as needed to fetch documentation
LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 Dylan Ebert

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
README.md
ADDED
@@ -0,0 +1,13 @@
---
title: MeshPalettizer
emoji: 😻
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.46.1
app_file: app.py
pinned: false
short_description: Convert textured meshes to palettized meshes
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
app.py
ADDED
@@ -0,0 +1,685 @@
#!/usr/bin/env python3
import gradio as gr
import trimesh
import numpy as np
import tempfile
import zipfile
import requests
import os
from pathlib import Path
from typing import List, Tuple, Optional, Dict, Any
from src import convert_meshes


def load_mesh(
    file_path: str,
) -> Optional[Tuple[List[Tuple[str, trimesh.Trimesh]], Optional[Dict]]]:
    try:
        loaded = trimesh.load(str(file_path))
        if isinstance(loaded, trimesh.Scene):
            mesh_list = []
            scene_data = {"graph": loaded.graph, "transforms": {}}

            for geom_name, geom in loaded.geometry.items():
                if hasattr(geom, "faces") and len(geom.faces) > 0:
                    mesh_list.append((geom_name, geom))

                    # Store transform for this geometry
                    nodes = loaded.graph.geometry_nodes.get(geom_name, [])
                    if nodes:
                        scene_data["transforms"][geom_name] = loaded.graph.get(
                            nodes[0]
                        )[0]
                    else:
                        scene_data["transforms"][geom_name] = np.eye(4)

            return (mesh_list, scene_data) if mesh_list else None
        elif hasattr(loaded, "faces"):
            # Single mesh case
            return ([("mesh", loaded)], None)
        else:
            return None
    except Exception as e:
        print(f"Error loading {file_path}: {e}")
        return None


def export_processed_meshes(result, output_path, progress, total_files):
    """Export processed meshes, reconstructing scenes when appropriate."""
    processed_models = []

    # Group meshes by their original file
    file_groups = {}
    for name, mesh in result.meshes:
        # Extract file name from combined name (e.g., "Lantern_LanternPole_Body" -> "Lantern")
        if "_" in name and result.scene_metadata:
            parts = name.split("_", 1)
            file_name = parts[0]
            mesh_name = parts[1] if len(parts) > 1 else "mesh"
        else:
            file_name = name
            mesh_name = "mesh"

        if file_name not in file_groups:
            file_groups[file_name] = []
        file_groups[file_name].append((mesh_name, mesh))

    # Export each file group
    for i, (file_name, meshes) in enumerate(file_groups.items()):
        progress_desc = f"Saving {file_name}..."
        progress(
            (total_files + 1 + i / len(file_groups)) / (total_files + 2),
            desc=progress_desc,
        )

        # Check if this file had scene metadata
        has_scene = result.scene_metadata and file_name in result.scene_metadata

        if has_scene and len(meshes) > 1:
            # Reconstruct and export as Scene
            scene = trimesh.Scene()
            scene_data = result.scene_metadata[file_name]

            for mesh_name, mesh in meshes:
                # Get transform for this mesh
                transform = scene_data["transforms"].get(mesh_name, np.eye(4))

                # Add to scene with proper naming
                scene.add_geometry(
                    mesh, node_name=mesh_name, geom_name=mesh_name, transform=transform
                )

            # Export the scene
            model_path = output_path / f"{file_name}_palettized.glb"
            scene.export(str(model_path))
            processed_models.append(str(model_path))
        else:
            # Export individual meshes
            for mesh_name, mesh in meshes:
                if len(meshes) > 1:
                    model_path = output_path / f"{file_name}_{mesh_name}_palettized.glb"
                else:
                    model_path = output_path / f"{file_name}_palettized.glb"
                mesh.export(str(model_path), include_normals=True)
                processed_models.append(str(model_path))

    return processed_models


def download_from_urls(
    urls_text: str, progress=gr.Progress()
) -> Tuple[List[str], List[str]]:
    if not urls_text or not urls_text.strip():
        return [], []

    urls = [url.strip() for url in urls_text.strip().split("\n") if url.strip()]
    downloaded_files = []
    failed_urls = []

    temp_dir = tempfile.mkdtemp(prefix="glb_downloads_")

    for i, url in enumerate(urls):
        progress((i + 1) / len(urls), desc=f"Downloading {i + 1}/{len(urls)}...")

        try:
            filename = os.path.basename(url.split("?")[0])
            if not filename or not filename.endswith((".glb", ".gltf")):
                filename = f"model_{i + 1}.glb"

            file_path = os.path.join(temp_dir, filename)

            response = requests.get(url, timeout=30)
            response.raise_for_status()

            with open(file_path, "wb") as f:
                f.write(response.content)

            downloaded_files.append(file_path)
        except Exception as e:
            print(f"Failed to download {url}: {e}")
            failed_urls.append(url)

    return downloaded_files, failed_urls


def process_batch(
    files: List[Any],
    atlas_size: int,
    sample_rate: float,
    simplify_details: bool,
    detail_filter_diameter: int,
    detail_color_sigma: int,
    detail_space_sigma: int,
    progress=gr.Progress(),
) -> Tuple[Optional[str], List[str], Optional[str], str, Dict]:

    if not files:
        return None, [], None, "No files to process.", {}

    progress(0, desc="Starting batch processing...")

    output_dir = tempfile.mkdtemp(prefix="glb_atlas_")
    output_path = Path(output_dir)

    mesh_list = []
    failed_files = []
    scene_metadata = {}

    for i, file in enumerate(files):
        if hasattr(file, "name"):
            file_path = file.name
            display_name = Path(file.name).name
        else:
            file_path = file
            display_name = Path(file).name

        progress((i + 1) / (len(files) + 2), desc=f"Loading {display_name}...")

        file_name = Path(file_path).stem

        loaded_data = load_mesh(file_path)
        if loaded_data is not None:
            meshes, scene_data = loaded_data

            # Store scene data if present
            if scene_data:
                scene_metadata[file_name] = scene_data

            # Add all meshes from this file to the list
            for mesh_name, mesh in meshes:
                # Create unique name combining file and mesh names
                if len(meshes) > 1:
                    combined_name = f"{file_name}_{mesh_name}"
                else:
                    combined_name = file_name
                mesh_list.append((combined_name, mesh))
        else:
            failed_files.append(display_name)

    if not mesh_list:
        return (
            None,
            [],
            None,
            "No valid meshes could be loaded from the uploaded files.",
            {},
        )

    try:
        progress(len(files) / (len(files) + 2), desc="Generating texture atlas...")
        detail_sensitivity = (
            (detail_filter_diameter, detail_color_sigma, detail_space_sigma)
            if simplify_details
            else None
        )
        result = convert_meshes(
            mesh_list,
            atlas_size=atlas_size,
            face_sampling_ratio=sample_rate,
            simplify_details=simplify_details,
            detail_sensitivity=detail_sensitivity,
            scene_metadata=scene_metadata,
        )

        atlas_path = output_path / "shared_palette.png"
        result.atlas.save(atlas_path)

        # Export processed meshes, reconstructing scenes when appropriate
        processed_models = export_processed_meshes(
            result, output_path, progress, len(files)
        )

        status = f"✓ Processed {len(result.meshes)} model(s)\n📊 Atlas: {atlas_size}×{atlas_size} pixels"
        if failed_files:
            status += f"\n⚠ Failed: {len(failed_files)} file(s)"

        # Extract display names for the processed models
        display_names = []
        for model_path in processed_models:
            model_name = Path(model_path).stem
            if model_name.endswith("_palettized"):
                model_name = model_name[:-11]  # Remove "_palettized" suffix
            display_names.append(model_name)

        metadata = {
            "models": processed_models,
            "names": display_names,
            "atlas_path": str(atlas_path),
            "output_dir": output_dir,
            "total": len(processed_models),
        }

        progress(1.0, desc="Processing complete!")

        first_model = processed_models[0] if processed_models else None
        return str(atlas_path), processed_models, first_model, status, metadata

    except Exception as e:
        return None, [], None, f"Error during processing: {str(e)}", {}


def update_model_viewer(
    direction: str, current_index: int, metadata: Dict
) -> Tuple[Optional[str], int, str]:

    if not metadata or "models" not in metadata:
        return None, 0, "No models to display"

    models = metadata["models"]
    names = metadata["names"]
    total = metadata["total"]

    if not models:
        return None, 0, "No models available"

    if direction == "next":
        new_index = (current_index + 1) % total
    elif direction == "prev":
        new_index = (current_index - 1) % total
    else:
        new_index = 0

    model_path = models[new_index]
    model_name = names[new_index]

    label = f"Model {new_index + 1} of {total}: {model_name}"

    return model_path, new_index, label


def create_download_zip(metadata: Dict) -> Optional[str]:

    if not metadata or "output_dir" not in metadata:
        return None

    output_dir = Path(metadata["output_dir"])
    zip_path = output_dir / "glb_atlas_output.zip"

    try:
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zipf:
            if "atlas_path" in metadata:
                atlas_path = Path(metadata["atlas_path"])
                if atlas_path.exists():
                    zipf.write(atlas_path, atlas_path.name)

            if "models" in metadata:
                for model_path in metadata["models"]:
                    model_file = Path(model_path)
                    if model_file.exists():
                        zipf.write(model_file, model_file.name)

        return str(zip_path)
    except Exception as e:
        print(f"Error creating ZIP: {e}")
        return None


with gr.Blocks(
    title="Mesh Palettizer",
    theme=gr.themes.Soft(),
    css="""
    #atlas-display img {
        width: 100%;
        height: 100%;
        object-fit: contain;
        image-rendering: pixelated;
        image-rendering: -moz-crisp-edges;
        image-rendering: crisp-edges;
    }
    """,
) as demo:
    model_index = gr.State(value=0)
    processing_metadata = gr.State(value={})

    gr.Markdown(
        """
        # 🎨 Mesh Palettizer

        Simplify 3D model textures using optimized color palettes.
        Upload GLB/GLTF models to create clean, palettized textures for stylized rendering.
        """
    )

    with gr.Row():
        with gr.Column(scale=1):
            with gr.Tabs() as input_tabs:
                with gr.Tab("📁 Upload Files"):
                    file_input = gr.File(
                        label="Select GLB/GLTF Files",
                        file_count="multiple",
                        file_types=[".glb", ".gltf"],
                        type="filepath",
                    )

                    gr.Examples(
                        examples=[[["examples/Duck.glb", "examples/Lantern.glb"]]],
                        inputs=file_input,
                        label="Example Models",
                    )

                with gr.Tab("🔗 Load from URLs"):
                    url_input = gr.Textbox(
                        label="Enter URLs (one per line)",
                        placeholder="https://example.com/model1.glb\nhttps://example.com/model2.glb",
                        lines=5,
                        interactive=True,
                    )

            atlas_size = gr.Dropdown(
                choices=[8, 16, 32, 64, 128, 256, 512, 1024],
                value=32,
                label="Atlas Size",
                info="N×N pixels",
            )

            with gr.Accordion("Advanced", open=False):
                sample_rate = gr.Slider(
                    minimum=0.01,
                    maximum=1.0,
                    value=0.1,
                    step=0.01,
                    label="Sampling Rate",
                    info="% of faces to sample",
                )
                simplify_details = gr.Checkbox(
                    value=True,
                    label="Remove Texture Details",
                    info="Apply bilateral filter to remove fine details (scales, fur, etc.)",
                )

                with gr.Row(visible=True) as detail_controls:
                    detail_filter_diameter = gr.Slider(
                        minimum=5,
                        maximum=15,
                        value=9,
                        step=2,
                        label="Filter Diameter",
                        info="Pixel neighborhood diameter (higher = stronger smoothing)",
                    )

                    detail_color_sigma = gr.Slider(
                        minimum=25,
                        maximum=150,
                        value=75,
                        step=5,
                        label="Color Sensitivity",
                        info="Color difference threshold (higher = more colors mixed)",
                    )

                    detail_space_sigma = gr.Slider(
                        minimum=25,
                        maximum=150,
                        value=75,
                        step=5,
                        label="Spatial Sensitivity",
                        info="Spatial extent (higher = pixels farther apart influence each other)",
                    )

            process_btn = gr.Button("🚀 Process", variant="primary", size="lg")

            status_text = gr.Textbox(
                label="Status", lines=2, interactive=False, show_label=False
            )

        with gr.Column(scale=2):
            with gr.Tabs():
                with gr.Tab("📊 Palette"):
                    atlas_image = gr.Image(
                        label="Color Palette",
                        type="filepath",
                        show_download_button=True,
                        height=400,
                        container=True,
                        elem_id="atlas-display",
                    )

                with gr.Tab("🎮 3D Preview"):
                    model_label = gr.Markdown("")
                    model_viewer = gr.Model3D(
                        label="Model", height=400, clear_color=[0.95, 0.95, 0.95, 1.0]
                    )

                    with gr.Row():
                        prev_btn = gr.Button("◀", size="sm")
                        model_counter = gr.Markdown(
                            "Model 1 of 1", elem_id="model-counter"
                        )
                        next_btn = gr.Button("▶", size="sm")

            with gr.Row():
                download_btn = gr.Button(
                    "📦 Download All", variant="secondary", size="lg"
                )
                download_file = gr.File(label="Package", visible=False)

    def toggle_detail_controls(enabled):
        return gr.update(visible=enabled)

    simplify_details.change(
        fn=toggle_detail_controls, inputs=[simplify_details], outputs=[detail_controls]
    )

    def process_from_files(
        files,
        atlas_size,
        sample_rate,
        simplify_details,
        detail_filter_diameter,
        detail_color_sigma,
        detail_space_sigma,
    ):
        if not files:
            return (
                None,
                None,
                "Please upload files first.",
                {},
                0,
                "",
                "",
                gr.update(visible=False),
            )

        atlas_path, models, first_model, status, metadata = process_batch(
            files,
            atlas_size,
            sample_rate,
            simplify_details,
            detail_filter_diameter,
            detail_color_sigma,
            detail_space_sigma,
        )

        if models:
            viewer_label = metadata["names"][0]
            counter_text = f"Model 1 of {len(models)}"
        else:
            viewer_label = ""
            counter_text = ""

        return (
            atlas_path,
            first_model,
            status,
            metadata,
            0,
            viewer_label,
            counter_text,
            gr.update(visible=False),
        )

    def process_from_urls(
        urls_text,
        atlas_size,
        sample_rate,
        simplify_details,
        detail_filter_diameter,
        detail_color_sigma,
        detail_space_sigma,
    ):
        if not urls_text or not urls_text.strip():
            return (
                None,
                None,
                "Please enter URLs first.",
                {},
                0,
                "",
                "",
                gr.update(visible=False),
            )

        downloaded_files, failed_urls = download_from_urls(urls_text)

        if not downloaded_files:
            error_msg = "Failed to download any files."
            if failed_urls:
                error_msg += f" URLs that failed: {len(failed_urls)}"
            return None, None, error_msg, {}, 0, "", "", gr.update(visible=False)

        atlas_path, models, first_model, status, metadata = process_batch(
            downloaded_files,
            atlas_size,
            sample_rate,
            simplify_details,
            detail_filter_diameter,
            detail_color_sigma,
            detail_space_sigma,
        )

        if failed_urls:
            status += f"\n⚠ Failed to download {len(failed_urls)} URL(s)"

        if models:
            viewer_label = metadata["names"][0]
            counter_text = f"Model 1 of {len(models)}"
        else:
            viewer_label = ""
            counter_text = ""

        return (
            atlas_path,
            first_model,
            status,
            metadata,
            0,
            viewer_label,
            counter_text,
            gr.update(visible=False),
        )

    def process_wrapper(
        files,
        urls_text,
        atlas_size,
        sample_rate,
        simplify_details,
        detail_filter_diameter,
        detail_color_sigma,
        detail_space_sigma,
    ):
        if files and len(files) > 0:
            return process_from_files(
                files,
                atlas_size,
                sample_rate,
                simplify_details,
                detail_filter_diameter,
                detail_color_sigma,
                detail_space_sigma,
            )
        elif urls_text and urls_text.strip():
            return process_from_urls(
                urls_text,
                atlas_size,
                sample_rate,
                simplify_details,
                detail_filter_diameter,
                detail_color_sigma,
                detail_space_sigma,
            )
        else:
            return (
                None,
                None,
                "Please provide files or URLs.",
                {},
                0,
                "",
                "",
                gr.update(visible=False),
            )

    process_btn.click(
        fn=process_wrapper,
        inputs=[
            file_input,
            url_input,
            atlas_size,
            sample_rate,
            simplify_details,
            detail_filter_diameter,
            detail_color_sigma,
            detail_space_sigma,
        ],
        outputs=[
            atlas_image,
            model_viewer,
            status_text,
            processing_metadata,
            model_index,
            model_label,
            model_counter,
            download_file,
        ],
    )

    def navigate_prev(current_index, metadata):
        model_path, new_index, _ = update_model_viewer("prev", current_index, metadata)
        counter_text = (
            f"Model {new_index + 1} of {metadata['total']}"
            if metadata and "total" in metadata
            else ""
        )
        name_text = (
            metadata["names"][new_index] if metadata and "names" in metadata else ""
        )
        return model_path, new_index, name_text, counter_text

    def navigate_next(current_index, metadata):
        model_path, new_index, _ = update_model_viewer("next", current_index, metadata)
        counter_text = (
            f"Model {new_index + 1} of {metadata['total']}"
            if metadata and "total" in metadata
            else ""
        )
        name_text = (
            metadata["names"][new_index] if metadata and "names" in metadata else ""
        )
        return model_path, new_index, name_text, counter_text

    prev_btn.click(
        fn=navigate_prev,
        inputs=[model_index, processing_metadata],
        outputs=[model_viewer, model_index, model_label, model_counter],
    )

    next_btn.click(
        fn=navigate_next,
        inputs=[model_index, processing_metadata],
        outputs=[model_viewer, model_index, model_label, model_counter],
    )

    def prepare_download(metadata):
        zip_path = create_download_zip(metadata)
        if zip_path:
            return gr.update(value=zip_path, visible=True)
        return gr.update(visible=False)

    download_btn.click(
        fn=prepare_download, inputs=[processing_metadata], outputs=[download_file]
    )


if __name__ == "__main__":
    demo.launch(share=False, server_name="0.0.0.0", server_port=7860, show_error=True)
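One detail in `update_model_viewer` above is worth calling out: Python's `%` operator returns a non-negative result even for a negative left operand, so the `prev` branch wraps from the first model back to the last with no special case. A quick standalone check (not part of the app):

```python
total = 3
assert (0 - 1) % total == 2          # "prev" on the first model wraps to the last
assert (total - 1 + 1) % total == 0  # "next" on the last model wraps to the first
```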
context.md
ADDED
@@ -0,0 +1,39 @@
# Mesh Palettizer

Web application for simplifying 3D model textures with optimized color palettes

## Purpose

- Convert GLB/GLTF models to use palettized textures
- Provide a web interface for batch processing
- Support file uploads and URL downloads for model inputs
- Generate clean, simplified textures for stylized rendering

## Layout

```
conversion/
├── context.md          # This file
├── app.py              # Gradio web interface
├── src/                # Core conversion library
│   └── context.md      # Library context
├── examples/           # Example GLB models
│   ├── Duck.glb
│   └── Lantern.glb
└── requirements.txt    # HF Spaces dependencies
```

## Scope

- In-scope: GLB/GLTF processing, batch operations, web interface
- Out-of-scope: Direct Unity integration, texture detail preservation

## Entrypoints

- `app.py` - Web interface with example models and batch processing
- `src.convert_meshes()` - Core conversion API

## Dependencies

- Internal: src module
- External: trimesh, gradio, PIL, numpy, scikit-learn, requests
examples/Duck.glb
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:65bf938f54d6073e619e76e007820bbf980cdc3dc0daec0d94830ffc4ae54ab5
size 120484
examples/Lantern.glb
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2d7ea69e1f4b7f5b9f00854978b20f41b9effa392272c50f4c98b5a8db3152b2
size 9872848
layers/context-template.md
ADDED
@@ -0,0 +1,34 @@
# Context Template

Concise context for any folder; link to code or subfolders for deeper context

## Purpose

- [What this folder/module does]

## Layout

```
[FOLDER]/
├── context.md          # This file, folder context (Tier 2)
├── [FILE1].[ext]       # File
├── [FILE2].[ext]       # File
└── [subfolder]/        # Subfolder
    ├── context.md      # Subfolder context
    ├── [FILE3].[ext]   # File
    └── [FILE4].[ext]   # File
```

## Scope

- [In-scope]
- [Out-of-scope]

## Entrypoints

- [Primary entry files or functions and when they are called]

## Dependencies

- [Internal modules]
- [External libraries/services]
layers/structure.md
ADDED
@@ -0,0 +1,68 @@
# Project Structure

Mesh Palettizer - Web application for 3D model texture simplification

## Stack

- Runtime: Python 3.10+
- 3D Processing: Trimesh
- Image Processing: Pillow, NumPy, OpenCV
- ML/Clustering: Scikit-learn
- Spatial Indexing: SciPy
- Web Interface: Gradio

## Commands

- Web interface: `uv run python app.py`
- Install: `uv sync`
- Lint: `uv run ruff check .`
- Format: `uv run black src/`

## Layout

```
conversion/
├── CLAUDE.md              # Global context (Tier 0)
├── app.py                 # Web interface with Gradio
├── README.md              # Public documentation
├── pyproject.toml         # Dependencies
├── context.md             # Project context (Tier 2)
├── src/                   # Core tool - reusable texture palettizer
│   ├── context.md         # Module context (Tier 2)
│   ├── __init__.py        # Main API: convert_meshes()
│   ├── preprocessing/     # Texture detail removal
│   │   ├── __init__.py
│   │   └── simplifier.py  # Bilateral filter for artifact removal
│   ├── extraction/        # Color extraction from meshes
│   │   ├── __init__.py
│   │   ├── sampler.py     # Face sampling with area weighting
│   │   └── reader.py      # Texture/material color reading
│   ├── palette/           # Palette generation and mapping
│   │   ├── __init__.py
│   │   ├── quantizer.py   # K-means clustering in LAB space
│   │   ├── mapper.py      # Nearest color mapping via KD-tree
│   │   └── color_space.py # RGB/LAB conversion
│   ├── atlas/             # Texture atlas generation
│   │   ├── __init__.py
│   │   └── builder.py     # Atlas construction with UV mapping
│   └── mesh/              # Mesh transformation
│       ├── __init__.py
│       └── uvmapper.py    # UV remapping to atlas
└── layers/
    └── structure.md       # This file (Tier 1)
```

## Architecture

GLB processing with K-means color quantization and configurable palette sizes

- **Pattern**: Direct transformation pipeline
- **Flow**: Load meshes → Simplify textures → Sample colors → Quantize to palette → Create atlas → Apply → Export
- **Palette sizes**: Powers of 2 per side (8×8 = 64 colors, 16×16 = 256, 32×32 = 1024, etc.)
- **State**: None - pure transformation
- **Optimization**: Random sampling, vectorized operations, LAB color space

## Entry Points

- `app.py` - Gradio web interface with configurable detail removal sensitivity
- `src.convert_meshes()` - Core API with palette generation and detail control
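The flow above maps directly onto a short script. A minimal sketch of driving the core API outside the web interface, using only the entry points this commit defines (`trimesh.load`, `src.convert_meshes`); the output file names are illustrative:

```python
# Sketch: palettize one GLB with a 16×16 (256-color) palette.
import trimesh
from src import convert_meshes

loaded = trimesh.load("examples/Duck.glb")
if isinstance(loaded, trimesh.Scene):
    mesh_list = list(loaded.geometry.items())  # [(name, Trimesh), ...]
else:
    mesh_list = [("mesh", loaded)]

result = convert_meshes(mesh_list, atlas_size=16, face_sampling_ratio=0.1)
result.atlas.save("palette.png")  # the shared palette texture
for name, mesh in result.meshes:
    mesh.export(f"{name}_palettized.glb", include_normals=True)
```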
pyproject.toml
ADDED
@@ -0,0 +1,36 @@
[project]
name = "mesh-palettizer"
version = "0.1.0"
description = "Simplify 3D model textures using optimized color palettes"
requires-python = ">=3.10"
dependencies = [
    "trimesh[easy]>=4.0.0",
    "pillow>=10.0.0",
    "numpy>=1.24.0",
    "scipy>=1.10.0",
    "scikit-learn>=1.3.0",
    "gradio>=4.0.0",
    "requests>=2.31.0",
    "dotenv>=0.9.9",
    "opencv-python>=4.8.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=7.4.0",
    "ruff>=0.1.0",
    "requests>=2.31.0",
    "python-dotenv>=1.0.0",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src"]

[dependency-groups]
dev = [
    "black>=25.9.0",
]
requirements.txt
ADDED
@@ -0,0 +1,8 @@
trimesh[easy]>=4.0.0
pillow>=10.0.0
numpy>=1.24.0
scipy>=1.10.0
scikit-learn>=1.3.0
gradio>=4.0.0
requests>=2.31.0
opencv-python>=4.11.0
src/__init__.py
ADDED
@@ -0,0 +1,105 @@
import numpy as np
from .palette import create_palette, PaletteMapper
from .atlas import create_atlas
from .mesh import apply_atlas

__all__ = ["convert_meshes", "ConversionConfig", "ConversionResult"]


class ConversionConfig:
    def __init__(
        self,
        atlas_size=16,
        face_sampling_ratio=0.1,
        simplify_details=True,
        detail_sensitivity=None,
    ):
        self.atlas_size = atlas_size
        self.face_sampling_ratio = face_sampling_ratio
        self.simplify_details = simplify_details
        self.detail_sensitivity = detail_sensitivity
        self.total_palette_colors = atlas_size * atlas_size


class ConversionResult:
    def __init__(self, meshes, atlas, palette, scene_metadata=None):
        self.meshes = meshes
        self.atlas = atlas
        self.palette = palette
        self.scene_metadata = scene_metadata


def convert_meshes(
    mesh_list,
    atlas_size=16,
    face_sampling_ratio=0.1,
    simplify_details=True,
    detail_sensitivity=None,
    scene_metadata=None,
):
    if not mesh_list:
        raise ValueError("No meshes provided")

    config = ConversionConfig(
        atlas_size, face_sampling_ratio, simplify_details, detail_sensitivity
    )

    print(
        f"Processing {len(mesh_list)} mesh(es) with {config.total_palette_colors}-color palette"
    )

    all_sampled_colors = []
    meshes_with_valid_geometry = []
    mesh_face_colors = {}

    for mesh_name, mesh_object in mesh_list:
        if mesh_object is None or len(mesh_object.faces) == 0:
            print(f"Skipping {mesh_name}: no valid geometry")
            continue

        print(
            f"Analyzing {mesh_name}: {len(mesh_object.faces)} faces, {len(mesh_object.vertices)} vertices"
        )
        meshes_with_valid_geometry.append((mesh_name, mesh_object))

        from .extraction.reader import get_face_colors

        face_colors = get_face_colors(
            mesh_object,
            simplify_details=config.simplify_details,
            detail_sensitivity=config.detail_sensitivity,
        )
        mesh_face_colors[mesh_name] = face_colors

        from .extraction import sample_colors

        face_areas = mesh_object.area_faces if hasattr(mesh_object, "area_faces") else None
        sampled_colors = sample_colors(face_colors, config.face_sampling_ratio, face_areas)
        all_sampled_colors.extend(sampled_colors)

    if not meshes_with_valid_geometry:
        raise ValueError("No valid meshes found")

    combined_color_array = np.array(all_sampled_colors, dtype=np.uint8)
    print(
        f"Total sampled colors: {len(combined_color_array)} from {len(meshes_with_valid_geometry)} mesh(es)"
    )

    quantized_palette = create_palette(combined_color_array, size=config.atlas_size)
    texture_atlas_image, color_to_uv_mapping = create_atlas(
        quantized_palette, size=config.atlas_size
    )
    nearest_color_mapper = PaletteMapper(quantized_palette)

    atlas_applied_meshes = []
    for mesh_name, mesh_object in meshes_with_valid_geometry:
        print(f"Applying atlas to {mesh_name}")
        mesh_with_atlas_texture = apply_atlas(
            mesh_object, texture_atlas_image, color_to_uv_mapping, nearest_color_mapper,
            face_colors=mesh_face_colors[mesh_name]
        )
        atlas_applied_meshes.append((mesh_name, mesh_with_atlas_texture))

    print(f"✓ Successfully processed {len(atlas_applied_meshes)} mesh(es)")

    return ConversionResult(
        atlas_applied_meshes, texture_atlas_image, quantized_palette, scene_metadata
    )
src/atlas/__init__.py
ADDED
@@ -0,0 +1,7 @@
from .builder import build_atlas, palette_to_atlas

__all__ = ["build_atlas", "palette_to_atlas", "create_atlas"]


def create_atlas(palette, size=16):
    return palette_to_atlas(palette, atlas_size=size)
src/atlas/builder.py
ADDED
@@ -0,0 +1,59 @@
from PIL import Image
import numpy as np

DEFAULT_FILL_COLOR = [128, 128, 128]
UV_PIXEL_CENTER_OFFSET = 0.5


def build_atlas(colors, width, height):
    atlas_pixel_array = create_empty_atlas_array(width, height)
    fill_atlas_with_palette(atlas_pixel_array, colors, width, height)
    return Image.fromarray(atlas_pixel_array, "RGB")


def create_empty_atlas_array(width, height):
    return np.zeros((height, width, 3), dtype=np.uint8)


def fill_atlas_with_palette(atlas_array, palette_colors, width, height):
    palette_index = 0

    for row in range(height):
        for column in range(width):
            if palette_index < len(palette_colors):
                atlas_array[row, column] = palette_colors[palette_index]
                palette_index += 1
            else:
                atlas_array[row, column] = DEFAULT_FILL_COLOR


def palette_to_atlas(palette, atlas_size=16):
    atlas_image = build_atlas(palette, atlas_size, atlas_size)
    uv_mapping = create_color_to_uv_mapping(palette, atlas_size)
    return atlas_image, uv_mapping


def create_color_to_uv_mapping(palette, atlas_dimensions):
    color_to_uv_coordinates = {}
    palette_index = 0

    for row in range(atlas_dimensions):
        for column in range(atlas_dimensions):
            if palette_index < len(palette):
                current_color = palette[palette_index]
                rgb_tuple = convert_to_rgb_tuple(current_color)
                uv_position = calculate_uv_for_pixel(column, row, atlas_dimensions)
                color_to_uv_coordinates[rgb_tuple] = uv_position
                palette_index += 1

    return color_to_uv_coordinates


def convert_to_rgb_tuple(color_array):
    return tuple(int(channel) for channel in color_array)


def calculate_uv_for_pixel(pixel_x, pixel_y, atlas_size):
    u_coordinate = (pixel_x + UV_PIXEL_CENTER_OFFSET) / atlas_size
    v_coordinate = 1.0 - (pixel_y + UV_PIXEL_CENTER_OFFSET) / atlas_size
    return (u_coordinate, v_coordinate)
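`calculate_uv_for_pixel` samples each palette texel at its center (the 0.5 offset) and flips V so that image row 0 maps near v = 1. A quick arithmetic check of that convention for a 16×16 atlas (these exact values are halves and sixteenths, so the float comparisons are exact):

```python
# Texel (0, 0) — top-left — maps to its center in UV space.
assert ((0 + 0.5) / 16, 1.0 - (0 + 0.5) / 16) == (0.03125, 0.96875)
# Texel (15, 15) — bottom-right — mirrors it.
assert ((15 + 0.5) / 16, 1.0 - (15 + 0.5) / 16) == (0.96875, 0.03125)
```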
src/atlas/context.md
ADDED
@@ -0,0 +1,33 @@
# Atlas Module

Texture atlas generation from color palettes

## Purpose

- Create texture atlases from palettes
- Generate UV mappings for colors

## Layout

```
atlas/
├── context.md    # This file
├── __init__.py   # API: create_atlas()
└── builder.py    # Atlas construction logic
```

## Scope

- In-scope: Atlas image creation, UV coordinate generation
- Out-of-scope: Mesh processing, color quantization

## Entrypoints

- `create_atlas(palette, size)` - Main atlas creation
- `palette_to_atlas(palette, atlas_size)` - Build with UV mapping
- `build_atlas(colors, width, height)` - Low-level builder

## Dependencies

- Internal: None
- External: Pillow, NumPy
src/context.md
ADDED
@@ -0,0 +1,38 @@
+# Mesh Palettizer Core
+
+Core library for 3D model texture palette optimization
+
+## Purpose
+
+- Extract colors from mesh textures and materials
+- Generate optimized palettes via perceptual clustering
+- Build atlas textures and remap mesh UVs
+- Support multi-mesh scenes with shared textures
+
+## Layout
+
+```
+src/
+├── context.md       # This file
+├── __init__.py      # Main API: convert_meshes()
+├── preprocessing/   # Texture detail removal
+├── extraction/      # Color extraction from meshes
+├── palette/         # Palette generation and mapping
+├── atlas/           # Palette texture creation
+└── mesh/            # UV remapping
+```
+
+## Scope
+
+- In-scope: Mesh processing, color quantization, UV mapping
+- Out-of-scope: File I/O, API calls, web interface
+
+## Entrypoints
+
+- `convert_meshes(mesh_list, atlas_size, face_sampling_ratio, simplify_details, detail_sensitivity, scene_metadata)` - Main API
+- Returns: `ConversionResult` with processed meshes, palette, and scene metadata
+
+## Dependencies
+
+- Internal: All submodules
+- External: numpy, trimesh, PIL, scikit-learn, scipy
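A hedged end-to-end sketch of the main API; since the library does no file I/O, the loading step and the `(name, mesh)` pair format of `mesh_list` are assumptions inferred from the processing loop in `src/__init__.py`:

```python
import trimesh
from src import convert_meshes

# Assumption: callers load the scene themselves and pass (name, mesh) pairs.
scene = trimesh.load("examples/Duck.glb")
mesh_list = list(scene.geometry.items())

result = convert_meshes(mesh_list, atlas_size=16, face_sampling_ratio=0.1)
# result is a ConversionResult bundling the retextured meshes, the atlas
# image, the quantized palette, and the pass-through scene metadata.
```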
src/extraction/__init__.py
ADDED
@@ -0,0 +1,12 @@
+from .sampler import sample_colors
+from .reader import get_face_colors
+
+__all__ = ["sample_colors", "get_face_colors", "extract_colors"]
+
+
+def extract_colors(
+    mesh, face_sampling_ratio=0.1, simplify_details=True, detail_sensitivity=None
+):
+    face_colors = get_face_colors(mesh, simplify_details, detail_sensitivity)
+    face_areas = mesh.area_faces if hasattr(mesh, "area_faces") else None
+    return sample_colors(face_colors, face_sampling_ratio, face_areas)
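`extract_colors` chains the two exported helpers: read one color per face, then draw an area-weighted sample. A small sketch, assuming the package import path:

```python
import trimesh
from src.extraction import extract_colors

mesh = trimesh.creation.box()  # any trimesh.Trimesh works
colors = extract_colors(mesh, face_sampling_ratio=0.25)
print(colors.shape)  # (n_samples, 3) uint8 RGB rows
```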
src/extraction/context.md
ADDED
@@ -0,0 +1,35 @@
+# Color Extraction Module
+
+Extract colors from 3D mesh materials and textures
+
+## Purpose
+
+- Sample colors from mesh faces
+- Read from textures via UV coordinates
+- Apply area-weighted sampling
+
+## Layout
+
+```
+extraction/
+├── context.md    # This file
+├── __init__.py   # API: extract_colors()
+├── sampler.py    # Random sampling strategies
+└── reader.py     # Read colors from materials
+```
+
+## Scope
+
+- In-scope: Color extraction, texture sampling, UV mapping
+- Out-of-scope: Color quantization, atlas generation
+
+## Entrypoints
+
+- `extract_colors(mesh, face_sampling_ratio, simplify_details, detail_sensitivity)` - Main extraction function
+- `get_face_colors(mesh, simplify_details, detail_sensitivity)` - Read colors from mesh
+- `sample_colors(face_colors, sampling_ratio, face_areas)` - Sample with area weighting
+
+## Dependencies
+
+- Internal: preprocessing.simplify_texture
+- External: NumPy, trimesh, OpenCV
src/extraction/reader.py
ADDED
@@ -0,0 +1,136 @@
+import numpy as np
+from ..preprocessing import simplify_texture
+
+DEFAULT_GRAY_VALUE = 128
+TEXTURE_SAMPLING_CHUNK_SIZE = 10000
+
+
+def get_face_colors(mesh, simplify_details=True, detail_sensitivity=None):
+    extracted_colors = try_extract_from_material(
+        mesh, simplify_details, detail_sensitivity
+    )
+
+    if extracted_colors is None:
+        extracted_colors = try_extract_from_face_colors(mesh)
+
+    if extracted_colors is None:
+        extracted_colors = create_default_gray_colors(len(mesh.faces))
+
+    return extracted_colors
+
+
+def try_extract_from_material(mesh, simplify_details=True, detail_sensitivity=None):
+    if not hasattr(mesh.visual, "material"):
+        return None
+
+    material = mesh.visual.material
+    texture_image = get_texture_image(material)
+
+    if texture_image and has_valid_uv_coordinates(mesh):
+        # Always create a copy to avoid issues with shared textures
+        if hasattr(texture_image, "copy"):
+            texture_image = texture_image.copy()
+
+        if simplify_details:
+            if detail_sensitivity is not None:
+                d, sigma_color, sigma_space = detail_sensitivity
+                texture_image = simplify_texture(
+                    texture_image,
+                    enabled=True,
+                    d=d,
+                    sigma_color=sigma_color,
+                    sigma_space=sigma_space,
+                )
+            else:
+                texture_image = simplify_texture(texture_image)
+        return sample_colors_from_texture(mesh, texture_image)
+
+    if has_main_color(material):
+        return create_uniform_color_array(material.main_color, len(mesh.faces))
+
+    return None
+
+
+def get_texture_image(material):
+    if hasattr(material, "baseColorTexture") and material.baseColorTexture is not None:
+        return material.baseColorTexture
+    if hasattr(material, "image") and material.image is not None:
+        return material.image
+    return None
+
+
+def has_valid_uv_coordinates(mesh):
+    return hasattr(mesh.visual, "uv") and mesh.visual.uv is not None
+
+
+def has_main_color(material):
+    return hasattr(material, "main_color") and material.main_color is not None
+
+
+def create_uniform_color_array(color, face_count):
+    rgb_values = np.array(color[:3], dtype=np.uint8)
+    return np.tile(rgb_values, (face_count, 1))
+
+
+def try_extract_from_face_colors(mesh):
+    if hasattr(mesh.visual, "face_colors") and mesh.visual.face_colors is not None:
+        return mesh.visual.face_colors[:, :3].astype(np.uint8)
+    return None
+
+
+def create_default_gray_colors(face_count):
+    return np.full((face_count, 3), DEFAULT_GRAY_VALUE, dtype=np.uint8)
+
+
+def sample_colors_from_texture(mesh, texture_image):
+    try:
+        rgb_texture_array = convert_to_rgb_array(texture_image)
+        texture_height, texture_width = rgb_texture_array.shape[:2]
+
+        uv_coordinates = mesh.visual.uv
+        mesh_faces = mesh.faces
+
+        sampled_face_colors = np.zeros((len(mesh_faces), 3), dtype=np.uint8)
+
+        for chunk_start in range(0, len(mesh_faces), TEXTURE_SAMPLING_CHUNK_SIZE):
+            chunk_end = min(chunk_start + TEXTURE_SAMPLING_CHUNK_SIZE, len(mesh_faces))
+            current_chunk_faces = mesh_faces[chunk_start:chunk_end]
+
+            face_vertex_uvs = uv_coordinates[current_chunk_faces].reshape(-1, 3, 2)
+            pixel_x_coords = convert_u_to_pixel_x(
+                face_vertex_uvs[:, :, 0], texture_width
+            )
+            pixel_y_coords = convert_v_to_pixel_y(
+                face_vertex_uvs[:, :, 1], texture_height
+            )
+
+            sampled_vertex_colors = rgb_texture_array[
+                pixel_y_coords.ravel(), pixel_x_coords.ravel(), :3
+            ]
+            per_face_colors = sampled_vertex_colors.reshape(
+                len(current_chunk_faces), 3, 3
+            )
+            average_face_colors = np.mean(per_face_colors, axis=1).astype(np.uint8)
+
+            sampled_face_colors[chunk_start:chunk_end] = average_face_colors
+
+        return sampled_face_colors
+    except (IndexError, ValueError):
+        return create_default_gray_colors(len(mesh.faces))
+
+
+def convert_to_rgb_array(image):
+    if hasattr(image, "convert"):
+        image = image.convert("RGB")
+    return np.array(image, dtype=np.uint8)
+
+
+def convert_u_to_pixel_x(u_values, width):
+    pixel_values = (u_values * width).astype(int)
+    return np.clip(pixel_values, 0, width - 1)
+
+
+def convert_v_to_pixel_y(v_values, height):
+    flipped_v_values = 1 - v_values
+    pixel_values = (flipped_v_values * height).astype(int)
+    return np.clip(pixel_values, 0, height - 1)
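The UV-to-pixel helpers do nearest-texel lookup: U scales directly to columns, V is flipped to rows because image row 0 is the top of the texture, and both are clamped so a U or V of exactly 1.0 stays in bounds. For example:

```python
import numpy as np
from src.extraction.reader import convert_u_to_pixel_x, convert_v_to_pixel_y

uv = np.array([0.0, 0.5, 1.0])
print(convert_u_to_pixel_x(uv, 256))  # [  0 128 255] -- 1.0 clamped to 255
print(convert_v_to_pixel_y(uv, 256))  # [255 128   0] -- V axis flipped
```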
src/extraction/sampler.py
ADDED
@@ -0,0 +1,42 @@
+import numpy as np
+
+LARGE_MESH_THRESHOLD = 1000
+MINIMUM_SAMPLE_COUNT = 100
+MIN_AREA_WEIGHT = 1
+MAX_AREA_WEIGHT = 10
+
+
+def sample_colors(face_colors, sampling_ratio=0.1, face_areas=None):
+    total_faces = len(face_colors)
+
+    if total_faces > LARGE_MESH_THRESHOLD:
+        number_to_sample = max(int(total_faces * sampling_ratio), MINIMUM_SAMPLE_COUNT)
+        randomly_selected_indices = np.random.choice(
+            total_faces, size=number_to_sample, replace=False
+        )
+        selected_face_colors = face_colors[randomly_selected_indices]
+        print(
+            f"Sampled {number_to_sample} of {total_faces} faces ({sampling_ratio*100:.1f}%)"
+        )
+
+        if face_areas is not None:
+            selected_areas = face_areas[randomly_selected_indices]
+            area_based_repetitions = compute_area_weights(selected_areas)
+            return np.repeat(selected_face_colors, area_based_repetitions, axis=0)
+
+        return selected_face_colors
+    else:
+        print(f"Using all {total_faces} faces (below sampling threshold)")
+
+        if face_areas is not None:
+            area_based_repetitions = compute_area_weights(face_areas)
+            return np.repeat(face_colors, area_based_repetitions, axis=0)
+
+        return face_colors
+
+
+def compute_area_weights(areas):
+    mean_area = areas.mean()
+    normalized_weights = areas / mean_area
+    integer_weights = normalized_weights.astype(int)
+    return integer_weights.clip(MIN_AREA_WEIGHT, MAX_AREA_WEIGHT)
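`compute_area_weights` turns relative face area into an integer repetition count: areas are normalized by the mean, truncated to integers, then clamped to [1, 10] so tiny faces still contribute once and huge faces cannot dominate. A worked example:

```python
import numpy as np
from src.extraction.sampler import compute_area_weights

areas = np.array([0.1, 1.0, 2.0, 50.0])
# mean = 13.275; areas / mean -> [0.008, 0.075, 0.151, 3.766]
# astype(int) truncates -> [0, 0, 0, 3]; clip(1, 10) -> [1, 1, 1, 3]
print(compute_area_weights(areas))  # [1 1 1 3]
```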
src/mesh/__init__.py
ADDED
@@ -0,0 +1,7 @@
+from .uvmapper import apply_atlas_to_mesh
+
+__all__ = ["apply_atlas_to_mesh", "apply_atlas"]
+
+
+def apply_atlas(mesh, atlas, color_to_uv, mapper, face_colors=None):
+    return apply_atlas_to_mesh(mesh, atlas, color_to_uv, mapper, face_colors)
src/mesh/context.md
ADDED
@@ -0,0 +1,32 @@
+# Mesh Module
+
+Apply texture atlases to 3D meshes
+
+## Purpose
+
+- Remap mesh UVs to atlas coordinates
+- Transform meshes with new textures
+
+## Layout
+
+```
+mesh/
+├── context.md    # This file
+├── __init__.py   # API: apply_atlas()
+└── uvmapper.py   # UV remapping logic
+```
+
+## Scope
+
+- In-scope: UV remapping, mesh transformation, vertex duplication
+- Out-of-scope: Mesh loading, color extraction
+
+## Entrypoints
+
+- `apply_atlas(mesh, atlas, color_to_uv, mapper, face_colors)` - Main application
+- `apply_atlas_to_mesh(mesh, atlas, color_to_uv, mapper, face_colors)` - Detailed UV remapping with cached colors
+
+## Dependencies
+
+- Internal: extraction.reader (for get_face_colors)
+- External: NumPy, trimesh
src/mesh/uvmapper.py
ADDED
@@ -0,0 +1,85 @@
+import numpy as np
+import trimesh
+
+DEFAULT_UV_FALLBACK = (0.5, 0.5)
+
+
+def apply_atlas_to_mesh(mesh, atlas_texture, color_uv_mapping, palette_mapper, face_colors=None):
+    from ..extraction.reader import get_face_colors
+
+    if face_colors is None:
+        original_face_colors = get_face_colors(mesh)
+    else:
+        original_face_colors = face_colors
+    palette_matched_colors = palette_mapper.map_colors(original_face_colors)
+
+    uv_coordinates_per_face = generate_face_uv_coordinates(
+        palette_matched_colors, color_uv_mapping, len(mesh.faces)
+    )
+
+    duplicated_mesh = create_mesh_with_per_face_vertices(mesh, uv_coordinates_per_face)
+    apply_texture_to_mesh(duplicated_mesh, atlas_texture, uv_coordinates_per_face)
+
+    return duplicated_mesh
+
+
+def generate_face_uv_coordinates(face_colors, color_to_uv_map, total_faces):
+    uv_coordinates_array = np.zeros((total_faces, 3, 2), dtype=np.float32)
+    faces_without_mapping = 0
+
+    for face_index, face_color in enumerate(face_colors):
+        rgb_color_tuple = convert_color_to_tuple(face_color)
+        uv_position = color_to_uv_map.get(rgb_color_tuple, DEFAULT_UV_FALLBACK)
+
+        if (
+            uv_position == DEFAULT_UV_FALLBACK
+            and rgb_color_tuple not in color_to_uv_map
+        ):
+            faces_without_mapping += 1
+
+        assign_uniform_uv_to_face(uv_coordinates_array, face_index, uv_position)
+
+    if faces_without_mapping > 0:
+        print(f"Warning: {faces_without_mapping} faces had unmapped colors")
+
+    return uv_coordinates_array
+
+
+def convert_color_to_tuple(color):
+    return tuple(int(channel) for channel in color)
+
+
+def assign_uniform_uv_to_face(uv_array, face_index, uv_coordinate):
+    uv_array[face_index] = [uv_coordinate, uv_coordinate, uv_coordinate]
+
+
+def create_mesh_with_per_face_vertices(original_mesh, face_uv_coordinates):
+    face_vertex_positions = original_mesh.vertices[original_mesh.faces]
+    flattened_vertices = face_vertex_positions.reshape(-1, 3)
+    sequential_face_indices = np.arange(len(flattened_vertices)).reshape(-1, 3)
+
+    duplicated_mesh = trimesh.Trimesh(
+        vertices=flattened_vertices,
+        faces=sequential_face_indices,
+        vertex_normals=None,
+        process=False,
+    )
+
+    return duplicated_mesh
+
+
+def apply_texture_to_mesh(mesh, atlas_image, face_uv_array):
+    flattened_uv_coordinates = face_uv_array.reshape(-1, 2)
+
+    pbr_material = trimesh.visual.material.PBRMaterial(
+        baseColorTexture=atlas_image,
+        baseColorFactor=[1.0, 1.0, 1.0, 1.0],
+        roughnessFactor=1.0,
+        metallicFactor=0.0,
+    )
+
+    texture_visual = trimesh.visual.texture.TextureVisuals(
+        uv=flattened_uv_coordinates, image=atlas_image, material=pbr_material
+    )
+
+    mesh.visual = texture_visual
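Because every face must carry one uniform UV (all three corners point at the same atlas texel), vertices cannot be shared between faces with different colors; `create_mesh_with_per_face_vertices` therefore unshares them, growing the vertex count to exactly 3 × face count. A quick check, assuming the import path:

```python
import numpy as np
import trimesh
from src.mesh.uvmapper import create_mesh_with_per_face_vertices

box = trimesh.creation.box()  # 8 shared vertices, 12 triangular faces
uvs = np.zeros((len(box.faces), 3, 2), dtype=np.float32)
flat = create_mesh_with_per_face_vertices(box, uvs)
print(len(box.vertices), len(flat.vertices))  # 8 36
```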
src/palette/__init__.py
ADDED
@@ -0,0 +1,9 @@
+from .quantizer import quantize_colors
+from .mapper import PaletteMapper
+
+__all__ = ["quantize_colors", "PaletteMapper", "create_palette"]
+
+
+def create_palette(colors, size=16):
+    total_palette_colors = size * size
+    return quantize_colors(colors, target_color_count=total_palette_colors)
src/palette/color_space.py
ADDED
@@ -0,0 +1,129 @@
+import numpy as np
+
+RGB_MAX_VALUE = 255.0
+SRGB_GAMMA = 2.4
+SRGB_LINEAR_THRESHOLD = 0.04045
+SRGB_OFFSET = 0.055
+SRGB_SCALE = 1.055
+SRGB_LINEAR_SCALE = 12.92
+
+D65_ILLUMINANT_X = 0.95047
+D65_ILLUMINANT_Z = 1.08883
+
+LAB_EPSILON = 216 / 24389
+LAB_KAPPA = 24389 / 27
+LAB_DELTA = 16 / 116
+
+
+def rgb_to_lab(rgb):
+    normalized_rgb = rgb.astype(np.float32) / RGB_MAX_VALUE
+    linear_rgb = apply_inverse_srgb_gamma(normalized_rgb)
+    xyz_values = convert_linear_rgb_to_xyz(linear_rgb)
+    normalized_xyz = normalize_xyz_by_d65_illuminant(xyz_values)
+    return convert_xyz_to_lab(normalized_xyz)
+
+
+def apply_inverse_srgb_gamma(normalized_rgb):
+    above_threshold = normalized_rgb > SRGB_LINEAR_THRESHOLD
+
+    linearized_high_values = ((normalized_rgb + SRGB_OFFSET) / SRGB_SCALE) ** SRGB_GAMMA
+    linearized_low_values = normalized_rgb / SRGB_LINEAR_SCALE
+
+    return np.where(above_threshold, linearized_high_values, linearized_low_values)
+
+
+def convert_linear_rgb_to_xyz(linear_rgb):
+    srgb_to_xyz_matrix = np.array(
+        [
+            [0.4124564, 0.3575761, 0.1804375],
+            [0.2126729, 0.7151522, 0.0721750],
+            [0.0193339, 0.1191920, 0.9503041],
+        ],
+        dtype=np.float32,
+    )
+    return linear_rgb @ srgb_to_xyz_matrix.T
+
+
+def normalize_xyz_by_d65_illuminant(xyz):
+    normalized = xyz.copy()
+    normalized[:, 0] /= D65_ILLUMINANT_X
+    normalized[:, 2] /= D65_ILLUMINANT_Z
+    return normalized
+
+
+def convert_xyz_to_lab(normalized_xyz):
+    above_epsilon = normalized_xyz > LAB_EPSILON
+
+    cubic_root_values = normalized_xyz ** (1 / 3)
+    linear_scaled_values = (LAB_KAPPA * normalized_xyz + 16) / 116
+
+    f_transformed = np.where(above_epsilon, cubic_root_values, linear_scaled_values)
+
+    lab_values = np.zeros_like(normalized_xyz)
+    lab_values[:, 0] = 116 * f_transformed[:, 1] - 16
+    lab_values[:, 1] = 500 * (f_transformed[:, 0] - f_transformed[:, 1])
+    lab_values[:, 2] = 200 * (f_transformed[:, 1] - f_transformed[:, 2])
+
+    return lab_values
+
+
+def lab_to_rgb(lab):
+    xyz_values = convert_lab_to_xyz(lab)
+    linear_rgb = convert_xyz_to_linear_rgb(xyz_values)
+    normalized_rgb = apply_srgb_gamma(linear_rgb)
+    return convert_normalized_to_8bit_rgb(normalized_rgb)
+
+
+def convert_lab_to_xyz(lab):
+    f_y = (lab[:, 0] + 16) / 116
+    f_x = lab[:, 1] / 500 + f_y
+    f_z = f_y - lab[:, 2] / 200
+
+    x_above_epsilon = f_x**3 > LAB_EPSILON
+    y_above_epsilon = lab[:, 0] > LAB_KAPPA * LAB_EPSILON
+    z_above_epsilon = f_z**3 > LAB_EPSILON
+
+    xyz_values = np.zeros((len(lab), 3), dtype=np.float32)
+
+    x_cubic = f_x**3
+    x_linear = (116 * f_x - 16) / LAB_KAPPA
+    xyz_values[:, 0] = np.where(x_above_epsilon, x_cubic, x_linear) * D65_ILLUMINANT_X
+
+    y_cubic = f_y**3
+    y_linear = lab[:, 0] / LAB_KAPPA
+    xyz_values[:, 1] = np.where(y_above_epsilon, y_cubic, y_linear)
+
+    z_cubic = f_z**3
+    z_linear = (116 * f_z - 16) / LAB_KAPPA
+    xyz_values[:, 2] = np.where(z_above_epsilon, z_cubic, z_linear) * D65_ILLUMINANT_Z
+
+    return xyz_values
+
+
+def convert_xyz_to_linear_rgb(xyz):
+    xyz_to_srgb_matrix = np.array(
+        [
+            [3.2404542, -1.5371385, -0.4985314],
+            [-0.9692660, 1.8760108, 0.0415560],
+            [0.0556434, -0.2040259, 1.0572252],
+        ],
+        dtype=np.float32,
+    )
+    return xyz @ xyz_to_srgb_matrix.T
+
+
+def apply_srgb_gamma(linear_rgb):
+    SRGB_LINEAR_CUTOFF = 0.0031308
+
+    above_cutoff = linear_rgb > SRGB_LINEAR_CUTOFF
+
+    gamma_corrected_high = SRGB_SCALE * (linear_rgb ** (1 / SRGB_GAMMA)) - SRGB_OFFSET
+    gamma_corrected_low = SRGB_LINEAR_SCALE * linear_rgb
+
+    return np.where(above_cutoff, gamma_corrected_high, gamma_corrected_low)
+
+
+def convert_normalized_to_8bit_rgb(normalized_rgb):
+    scaled_values = normalized_rgb * RGB_MAX_VALUE
+    clipped_values = np.clip(scaled_values, 0, RGB_MAX_VALUE)
+    return clipped_values.astype(np.uint8)
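These conversions implement the standard sRGB → linear RGB → XYZ (D65) → CIELAB pipeline and its inverse, so round-tripping 8-bit colors is lossless up to rounding. A small sanity check, assuming the import path:

```python
import numpy as np
from src.palette.color_space import rgb_to_lab, lab_to_rgb

rgb = np.array([[255, 0, 0], [128, 128, 128], [10, 200, 90]], dtype=np.uint8)
lab = rgb_to_lab(rgb)
print(lab[0])           # roughly [53.2, 80.1, 67.2] for pure sRGB red
print(lab_to_rgb(lab))  # recovers the inputs up to rounding
```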
src/palette/context.md
ADDED
@@ -0,0 +1,36 @@
+# Palette Module
+
+Color quantization and palette generation
+
+## Purpose
+
+- Quantize colors to fixed palettes
+- Map arbitrary colors to palette entries
+- Perform color space conversions
+
+## Layout
+
+```
+palette/
+├── context.md       # This file
+├── __init__.py      # API: create_palette()
+├── quantizer.py     # K-means clustering
+├── mapper.py        # Color-to-palette mapping
+└── color_space.py   # RGB/LAB conversions
+```
+
+## Scope
+
+- In-scope: Color quantization, LAB space clustering, nearest neighbor mapping
+- Out-of-scope: Image processing, texture generation
+
+## Entrypoints
+
+- `create_palette(colors, size)` - Generate palette
+- `PaletteMapper(palette)` - Map colors to palette
+- `quantize_colors(colors, target_color_count)` - K-means quantization
+
+## Dependencies
+
+- Internal: color_space
+- External: NumPy, scikit-learn, SciPy
src/palette/mapper.py
ADDED
@@ -0,0 +1,41 @@
+import numpy as np
+from scipy.spatial import KDTree
+from .color_space import rgb_to_lab
+
+
+class PaletteMapper:
+    def __init__(self, palette):
+        self.palette = palette
+        self.perceptual_color_tree = build_perceptual_color_tree(palette)
+
+    def map_colors(self, colors):
+        if len(colors) == 0:
+            return colors
+
+        perceptual_colors = rgb_to_lab(colors)
+        nearest_palette_indices = find_nearest_palette_indices(
+            self.perceptual_color_tree, perceptual_colors
+        )
+        return self.palette[nearest_palette_indices]
+
+    def get_palette_index(self, single_color):
+        single_color_array = single_color.reshape(1, -1)
+        perceptual_color = rgb_to_lab(single_color_array)
+        _, nearest_index = self.perceptual_color_tree.query(perceptual_color)
+        return extract_scalar_index(nearest_index)
+
+
+def build_perceptual_color_tree(palette):
+    palette_in_lab_space = rgb_to_lab(palette)
+    return KDTree(palette_in_lab_space)
+
+
+def find_nearest_palette_indices(kdtree, query_colors):
+    _, nearest_indices = kdtree.query(query_colors)
+    return nearest_indices
+
+
+def extract_scalar_index(index_result):
+    if isinstance(index_result, np.ndarray):
+        return index_result[0]
+    return index_result
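Building the KDTree over the palette in LAB space means "nearest" is perceptual distance rather than raw RGB Euclidean distance. A usage sketch, assuming the import path:

```python
import numpy as np
from src.palette import PaletteMapper

palette = np.array([[0, 0, 0], [255, 255, 255], [255, 0, 0]], dtype=np.uint8)
mapper = PaletteMapper(palette)

colors = np.array([[250, 10, 10], [30, 30, 30]], dtype=np.uint8)
print(mapper.map_colors(colors))  # nearest LAB rows: red, then black
print(mapper.get_palette_index(np.array([240, 240, 240], dtype=np.uint8)))  # 1
```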
src/palette/quantizer.py
ADDED
@@ -0,0 +1,51 @@
+from sklearn.cluster import KMeans
+from .color_space import rgb_to_lab, lab_to_rgb
+
+LARGE_DATASET_THRESHOLD = 10000
+KMEANS_INIT_ATTEMPTS_FOR_LARGE_DATA = 3
+KMEANS_INIT_ATTEMPTS_FOR_SMALL_DATA = 10
+KMEANS_MAX_ITERATIONS_LARGE_DATA = 100
+KMEANS_MAX_ITERATIONS_SMALL_DATA = 300
+RANDOM_SEED = 42
+
+
+def quantize_colors(colors, target_color_count=256):
+    if len(colors) == 0:
+        raise ValueError("No colors to quantize")
+
+    perceptual_colors = rgb_to_lab(colors)
+    total_input_colors = len(perceptual_colors)
+
+    kmeans_config = determine_kmeans_parameters(total_input_colors)
+    actual_clusters = min(target_color_count, total_input_colors)
+
+    print(
+        f"Quantizing {total_input_colors} colors to {target_color_count} palette entries..."
+    )
+
+    cluster_model = KMeans(
+        n_clusters=actual_clusters,
+        n_init=kmeans_config["init_attempts"],
+        max_iter=kmeans_config["max_iterations"],
+        random_state=RANDOM_SEED,
+    )
+    cluster_model.fit(perceptual_colors)
+
+    palette_centers_in_lab = cluster_model.cluster_centers_
+    palette_in_rgb = lab_to_rgb(palette_centers_in_lab)
+
+    print(f"Created palette with {len(palette_in_rgb)} unique colors")
+    return palette_in_rgb
+
+
+def determine_kmeans_parameters(sample_count):
+    if sample_count >= LARGE_DATASET_THRESHOLD:
+        return {
+            "init_attempts": KMEANS_INIT_ATTEMPTS_FOR_LARGE_DATA,
+            "max_iterations": KMEANS_MAX_ITERATIONS_LARGE_DATA,
+        }
+    else:
+        return {
+            "init_attempts": KMEANS_INIT_ATTEMPTS_FOR_SMALL_DATA,
+            "max_iterations": KMEANS_MAX_ITERATIONS_SMALL_DATA,
+        }
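Clustering runs in LAB, so the k-means centers minimize perceptual rather than raw-RGB error; large inputs trade fewer restarts and iterations for speed. A hedged usage sketch:

```python
import numpy as np
from src.palette import quantize_colors

# 5,000 random colors quantized to a 16x16 = 256-entry palette
colors = np.random.randint(0, 256, size=(5000, 3), dtype=np.uint8)
palette = quantize_colors(colors, target_color_count=256)
print(palette.shape, palette.dtype)  # (256, 3) uint8
```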
src/preprocessing/__init__.py
ADDED
@@ -0,0 +1,3 @@
+from .simplifier import simplify_texture
+
+__all__ = ["simplify_texture"]
src/preprocessing/context.md
ADDED
@@ -0,0 +1,31 @@
+# Texture Preprocessing
+
+Simplification filters for removing fine details from textures
+
+## Purpose
+
+- Remove texture artifacts (scales, fur, feathers, noise)
+- Preserve edges and broad color zones
+- Prepare textures for cleaner palettization
+
+## Layout
+
+```
+preprocessing/
+├── context.md      # This file
+├── __init__.py     # Module exports
+└── simplifier.py   # Bilateral filter implementation
+```
+
+## Scope
+
+- In-scope: Edge-preserving texture smoothing
+- Out-of-scope: Color adjustment, resizing, format conversion
+
+## Entrypoints
+
+- `simplify_texture(image, enabled, d, sigma_color, sigma_space)` - Apply bilateral filter
+
+## Dependencies
+
+- External: opencv-python, PIL, numpy
src/preprocessing/simplifier.py
ADDED
@@ -0,0 +1,29 @@
+import numpy as np
+from PIL import Image
+import cv2
+
+DEFAULT_BILATERAL_D = 9
+DEFAULT_BILATERAL_SIGMA_COLOR = 75
+DEFAULT_BILATERAL_SIGMA_SPACE = 75
+
+
+def simplify_texture(image, enabled=True, d=None, sigma_color=None, sigma_space=None):
+    if not enabled:
+        return image
+
+    if d is None:
+        d = DEFAULT_BILATERAL_D
+    if sigma_color is None:
+        sigma_color = DEFAULT_BILATERAL_SIGMA_COLOR
+    if sigma_space is None:
+        sigma_space = DEFAULT_BILATERAL_SIGMA_SPACE
+
+    if hasattr(image, "convert"):
+        rgb_image = image.convert("RGB")
+        img_array = np.array(rgb_image, dtype=np.uint8)
+    else:
+        img_array = np.array(image, dtype=np.uint8)
+
+    filtered = cv2.bilateralFilter(img_array, d, sigma_color, sigma_space)
+
+    return Image.fromarray(filtered, "RGB")
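The bilateral filter smooths within color regions while preserving edges; larger `d` and sigma values flatten more surface detail before palettization. A usage sketch (the input path is hypothetical):

```python
from PIL import Image
from src.preprocessing import simplify_texture

texture = Image.open("examples/texture.png")  # hypothetical input
smoothed = simplify_texture(texture)          # defaults: d=9, sigmas=75
stronger = simplify_texture(texture, d=15, sigma_color=120, sigma_space=120)
smoothed.save("texture_simplified.png")
```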