Apriel-1.5-15B-Thinker — MLX 2-bit (Apple Silicon)
Format: MLX (Mac, Apple Silicon)
Quantization: 2-bit (ultra-compact)
Base: ServiceNow-AI/Apriel-1.5-15B-Thinker
Architecture: Pixtral-style LLaVA (vision encoder → 2-layer projector → decoder)
This repository provides a 2-bit MLX build of Apriel-1.5-15B-Thinker for memory-constrained Apple Silicon devices. It prioritizes a small footprint and fast load times over absolute accuracy. If quality is your primary concern, prefer the 6-bit MLX variant.
🔎 What is Apriel-1.5-15B-Thinker?
Apriel-1.5-15B-Thinker is an open multimodal reasoning model that scales a Pixtral-style VLM with depth upscaling, two-stage multimodal continual pretraining (CPT), and high-quality SFT with explicit reasoning traces (math, coding, science, tool-use). The training recipe focuses on mid-training (no RLHF/RM), delivering strong image-grounded reasoning at modest compute.
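For intuition, a Pixtral-style connector is essentially a small MLP that maps vision-encoder patch embeddings into the decoder's token-embedding space. The sketch below is illustrative only; the layer sizes, activation, and names are assumptions, not values read from this checkpoint.

```python
import mlx.core as mx
import mlx.nn as nn


class TwoLayerProjector(nn.Module):
    """Illustrative 2-layer vision -> text projector (dimensions are assumed)."""

    def __init__(self, vision_dim: int, text_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(vision_dim, text_dim)
        self.fc2 = nn.Linear(text_dim, text_dim)

    def __call__(self, patch_embeddings: mx.array) -> mx.array:
        # patch_embeddings: (num_patches, vision_dim) from the vision encoder.
        # The output lives in the decoder's embedding space and is interleaved
        # with text-token embeddings before being fed to the decoder.
        return self.fc2(nn.gelu(self.fc1(patch_embeddings)))
```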
This card documents the 2-bit MLX conversion. Expect much higher compression and a noticeable quality drop versus full-precision or higher-bit builds (e.g., the 6-bit variant), especially on fine-grained text in images, dense charts, or long-chain reasoning.
📦 What’s in this repo
- `config.json` (MLX config mapped for the Pixtral-style VLM)
- `mlx_model*.safetensors` (2-bit quantized weight shards)
- `tokenizer.json`, `tokenizer_config.json`
- `processor_config.json` / `image_processor.json`
- `model_index.json` and metadata
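A minimal way to try these files on an Apple Silicon Mac is the `mlx-vlm` package (`pip install mlx-vlm`). The snippet below is a sketch: the image path is hypothetical, and the argument order and keyword names accepted by `generate` vary between mlx-vlm releases, so check your installed version.

```python
from mlx_vlm import load, generate

# Downloads the 2-bit shards and processor config from the Hub on first use.
model, processor = load("mlx-community/Apriel-1.5-15b-Thinker-2bit-MLX")

# Single-image prompt; keep prompts short on 8-16 GB machines.
output = generate(
    model,
    processor,
    prompt="Describe the main objects in this image.",
    image="example.jpg",  # local path or URL (hypothetical file name)
    max_tokens=256,
)
print(output)
```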
✅ Intended uses
- On-device image understanding where memory is constrained (light captioning, object/layout descriptions)
- Quick triage of screenshots, UI mocks, simple charts, and forms with broad structure (see the prompt sketch after this list)
- Educational demos of VLMs on Macs with a minimal RAM budget
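As a concrete example of the screenshot-triage use above, short, structured prompts tend to hold up better at 2-bit than open-ended questions. The prompt wording and file name below are hypothetical; the load call mirrors the quick-start snippet earlier in this card.

```python
from mlx_vlm import load, generate

model, processor = load("mlx-community/Apriel-1.5-15b-Thinker-2bit-MLX")

# Hypothetical screenshot; ask for a short, structured report rather than free-form analysis.
triage_prompt = (
    "You are reviewing a UI screenshot. List: (1) the screen's purpose, "
    "(2) the main interactive elements, (3) any visible error or warning text."
)
report = generate(model, processor, prompt=triage_prompt, image="screenshot.png", max_tokens=300)
print(report)
```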
⚠️ Limitations
- 2-bit is very lossy. Expect degradation on:
  - OCR-heavy tasks, small fonts, dense tables
  - Multi-step math/coding with visual grounding
  - Long context or many images
- May hallucinate or miss small details. Human review is required for critical use.
🖥️ Apple-Silicon guidance
- Works on M1/M2 (8–16 GB) for short prompts with a single image; M3/M4 recommended for smoother throughput.
- Use the GPU: `--device mps` (see the MLX note below)
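Note that MLX itself targets the Apple GPU through Metal by default, so depending on the front-end you run, an explicit device flag may be unnecessary. The check below is a sketch using current `mlx.core` names (`mx.metal.is_available()`, `mx.set_default_device`); exact APIs can shift between MLX releases.

```python
import mlx.core as mx

# MLX runs on the Apple GPU (Metal) by default; setting it explicitly is optional.
if mx.metal.is_available():
    mx.set_default_device(mx.gpu)
else:
    mx.set_default_device(mx.cpu)  # CPU fallback on unsupported hardware

print("Default device:", mx.default_device())
```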