100 Coder/Programming - MOE, Reasoning, Reg, Imatrix, Fused.
Models (0.8B to 87B) in regular, "reasoning", "Brainstorm", and MOE (1x to 8x / 128 experts) variants, expanded to create better and stronger code, faster.
Text Generation • 53B • Updated • 67 • 7 • Note: 128-expert MOE (Mixture of Experts); all experts are coders. 256K context; uses Brainstorm 40x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page.
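Throughout this list, "128 experts (MOE)" refers to mixture-of-experts routing: a small gating network scores all experts per token and only the top-k contribute to the output. A minimal, illustrative sketch of top-k gating with scalar expert outputs (this is not the actual Qwen3 router; all dimensions here are toy values):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_route(gate_logits, expert_outputs, k=8):
    """Select the top-k experts by gate score and return their
    score-weighted combination (weights renormalized over the chosen k)."""
    probs = softmax(gate_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return sum(probs[i] / norm * expert_outputs[i] for i in top)

# 128 "experts" producing toy scalar outputs:
outputs = [float(i) for i in range(128)]
logits = [0.0] * 128
logits[5] = 100.0  # make expert 5 dominate the gate
```

With a dominant gate logit and k=1, only expert 5's output survives; with uniform logits, the result is the mean of the first k experts' outputs.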
DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M
Text Generation • 42B • Updated • 294 • 3 • Note: 128-expert MOE (Mixture of Experts); all experts are coders. 256K context; uses Brainstorm 20x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page.
DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M-512k-ctx
Text Generation • 42B • Updated • 10 • 2 • Note: 128-expert MOE (Mixture of Experts); all experts are coders. 512K context; uses Brainstorm 20x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page. Special note: even if you do not need the extra context, try this model anyway, as the context change alters generation.
DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M-1million-ctx
Text Generation • 42B • Updated • 42 • 5 • Note: 128-expert MOE (Mixture of Experts); all experts are coders. 1 million context; uses Brainstorm 20x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page. Special note: even if you do not need the extra context, try this model anyway, as the context change alters generation.
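The 256K, 512K, and 1-million-token context entries above come with a real memory cost: the KV cache grows linearly with context length. A rough sizing sketch using hypothetical GQA dimensions (48 layers, 4 KV heads, head dim 128; these are illustrative numbers, not taken from any model card on this page) and an unquantized FP16 cache:

```python
def kv_cache_bytes(ctx_len, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    """Per-sequence KV-cache size: keys + values (factor of 2) for every
    layer, each of shape (ctx_len, n_kv_heads, head_dim)."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical dimensions for illustration only:
layers, kv_heads, hdim = 48, 4, 128
for ctx in (262_144, 1_048_576):
    gib = kv_cache_bytes(ctx, layers, kv_heads, hdim) / 2**30
    print(f"{ctx:>9} tokens -> {gib:.1f} GiB of FP16 KV cache")
```

Quantized KV caches (e.g. llama.cpp's `q8_0` cache types) shrink this substantially, which is often what makes the long-context variants practical on consumer hardware.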
DavidAU/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER
Text Generation • 53B • Updated • 18 • 7 • Note: 128-expert MOE (Mixture of Experts). 256K context; uses Brainstorm 40x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page.
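Several entries above note that thinking can be activated via a system prompt. A minimal sketch of wiring that up as an OpenAI-style message list; the activation wording below is hypothetical, so check the individual model card for the exact prompt each model expects:

```python
# Hypothetical activation text -- the real wording is model-specific
# and documented on each model card, not here.
THINKING_SYSTEM_PROMPT = (
    "You are a helpful coding assistant. Think step by step inside "
    "<think>...</think> tags before giving your final answer."
)

def build_chat(user_prompt, enable_thinking=False):
    """Assemble an OpenAI-style message list, prepending the activation
    system prompt only when thinking mode is requested."""
    messages = []
    if enable_thinking:
        messages.append({"role": "system", "content": THINKING_SYSTEM_PROMPT})
    messages.append({"role": "user", "content": user_prompt})
    return messages
```

The resulting list can be passed to any chat-completions-style API or to a chat template; without `enable_thinking`, the model runs in its default non-thinking mode.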
DavidAU/Qwen3-53B-A3B-2507-TOTAL-RECALL-v2-MASTER-CODER
Text Generation • 53B • Updated • 13 • 8 • Note: 128-expert MOE (Mixture of Experts). 256K context; uses Brainstorm 40x to enhance performance. Links to GGUFs on this page. Non-thinking model => STRAIGHT to coding.
DavidAU/Qwen3-42B-A3B-2507-Thinking-TOTAL-RECALL-v2-Medium-MASTER-CODER
Text Generation • 42B • Updated • 18 • 4 • Note: 128-expert MOE (Mixture of Experts). 256K context; uses Brainstorm 20x to enhance performance. Links to GGUFs on this page. Enhanced thinking model => smarter thinking, fewer tokens, better code.
DavidAU/Qwen3-42B-A3B-2507-TOTAL-RECALL-v2-Medium-MASTER-CODER
Text Generation • 42B • Updated • 15 • 3 • Note: 128-expert MOE (Mixture of Experts). 256K context; uses Brainstorm 20x to enhance performance. Links to GGUFs on this page. Non-thinking model => STRAIGHT to coding.
DavidAU/Qwen3-53B-A3B-TOTAL-RECALL-MASTER-CODER-v1.4
Text Generation • 53B • Updated • 22 • 3 • Note: 128-expert MOE model. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses Brainstorm adapter (40x) by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-2X7B-Coder-Soar-qwen-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Updated • 14 • 2 • Note: Specialized 2-model MOE with an additional shared expert. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Mistral-Magistral-Devstral-Instruct-FUSED-CODER-Reasoning-36B
Text Generation • 36B • Updated • 50 • 3 • Note: Newest Devstral version (1.1), with even better coding abilities. Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context. This is a fused model with 62 layers and 561 tensors. Short thinking blocks -> then straight to coding.
DavidAU/Mistral-Devstral-2507-CODER-Brainstorm40x-44B
Text Generation • 44B • Updated • 65 • 2 • Note: Newest Devstral version, with even better coding abilities. Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context. Uses Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Mistral-Devstral-2505-CODER-Brainstorm40x-44B
Text Generation • 44B • Updated • 13 • 2 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context. Uses Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-21B-Brainstorm20x-128k-ctx
Text Generation • 21B • Updated • 17 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses Brainstorm adapter by DavidAU to extend model function/performance. 128k context.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-21B-Brainstorm20x
Text Generation • 21B • Updated • 17 • 2 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-128k-ctx-12B
Text Generation • 12B • Updated • 16 • 4 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context. Uses Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x
Text Generation • 12B • Updated • 106 • 1 • Note: Uses Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Jan-Nano-128k-6B-Brainstorm20x
Text Generation • 6B • Updated • 14 • 4 • Note: Uses Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Blitzar-Coder-F1-6B-Brainstorm20x
Text Generation • 6B • Updated • 25 • 3 • Note: Uses Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Polaris-Preview-128k-6B-Brainstorm20x
Text Generation • 6B • Updated • 25 • 2 • Note: Uses Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Instruct-6B-Brainstorm20x-128k-ctx
Text Generation • 6B • Updated • 11 • 1 • Note: Uses Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model. 128k context.
DavidAU/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 23 • 1 • Note: Uses Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Bootes-Quick-Coder-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 20 • Note: Uses Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 25 • Note: Uses Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32
Text Generation • 6B • Updated • 24 • 1 • Note: Uses Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Float32 enhanced.
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32-128k-ctx
Text Generation • 6B • Updated • 23 • Note: Uses Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Float32 enhanced, with 128k context.
DavidAU/Qwen2.5-Microsoft-NextCoder-Soar-Instruct-FUSED-CODER-Fast-11B
Text Generation • 11B • Updated • 10 • 2 • Note: Two models fused together to make a stronger coder model. Float32 (32-bit) source to give the model extra power. This is an instant coder -> enter your prompt, get code.
DavidAU/Qwen3-Shining-Valiant-Instruct-Fast-CODER-Reasoning-2.4B
Text Generation • 2B • Updated • 14 • 1 • Note: Model has full thinking/reasoning. Fused from 2 coder models. Source in Float32 (32-bit) for stronger performance. Generally short thinking blocks, or none at all ("Fast"). Suggest 2-4 generations.
DavidAU/Qwen3-Shining-Valiant-Instruct-CODER-Reasoning-2.7B
Text Generation • 3B • Updated • 23 • Note: Model has full thinking/reasoning. Fused from 2 coder models. Source in Float32 (32-bit) for stronger performance. Suggest 2-4 generations.
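The "suggest 2-4 generations" advice on these smaller models pairs well with a best-of-N loop: generate several candidates with different seeds and keep the first that passes a cheap check (for Python code, whether it parses). A sketch with a stub in place of a real model call; `fake_generate` is purely illustrative and stands in for whatever inference API you use:

```python
import ast

def best_of_n(generate, prompt, n=4):
    """Run up to n generations with different seeds and return the first
    candidate that parses as valid Python; fall back to the last one."""
    candidate = ""
    for seed in range(n):
        candidate = generate(prompt, seed=seed)
        try:
            ast.parse(candidate)  # cheap syntactic sanity check
            return candidate
        except SyntaxError:
            continue
    return candidate

# Illustrative stub: first seed yields broken code, second yields valid code.
def fake_generate(prompt, seed=0):
    return "def f(:" if seed == 0 else "def f():\n    return 1\n"
```

Stronger checks (running the model's own unit tests, type-checking) slot into the same loop in place of `ast.parse`.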
DavidAU/Qwen3-Shining-Lucy-CODER-3.4B-Brainstorm20x-e32
Text Generation • 3B • Updated • 42 • 1 • Note: 40k context. Good for draft, simple, or complex code blocks. Model has full thinking/reasoning. Fused from 2 coder models. Source in Float32 (32-bit) for stronger performance. This model will be stronger than the "reg" version. The Brainstorm adapter (20x) provides "out of the box" coding solutions; suggest 2-4 generations to use this feature.
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2
Text Generation • 2B • Updated • 23 • 1 • Note: 40k context. Good for draft, simple, or complex code blocks. Model has full thinking/reasoning. Fused from 2 coder models. Source in Float32 (32-bit) for stronger performance. This model will be stronger than the "reg" version.
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B
Text Generation • 2B • Updated • 31 • Note: 40k context. Good for draft, simple, or complex code blocks. Model has full thinking/reasoning. Fused from 2 coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 17.6k • 14 • Note: Uses the NEO Imatrix dataset (by DavidAU) to augment model performance. 40k context. Good for draft, simple, or complex code blocks. Model has full thinking/reasoning. Fused from 2 coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 16.1k • 8 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40k context. Good for draft, simple, or complex code blocks. Model has full thinking/reasoning. Stronger than V1. Fused from 2 coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B
Text Generation • 0.8B • Updated • 140 • 2 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40k context. Good for draft, simple, or complex code blocks. Model has full thinking/reasoning. Fused from 2 coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B
Text Generation • 0.8B • Updated • 49 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40k context. Good for draft, simple, or complex code blocks. Model has full thinking/reasoning. Stronger than V1. Fused from 2 coder models.
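Most entries above link to several GGUF quants, and picking one is mostly arithmetic: file size is roughly parameter count times bits-per-weight. A rough selection helper using approximate bits-per-weight figures for common llama.cpp quant types; the numbers below are ballpark estimates (real files vary because different tensors are quantized unevenly), not values from any model card here:

```python
# Approximate bits-per-weight for common llama.cpp quant types (ballpark
# figures for estimation only; actual GGUF files vary per model).
BPW = {"Q2_K": 2.6, "Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5}

def estimated_gguf_gib(n_params_billions, quant):
    """Estimated GGUF file size in GiB: params * bits-per-weight / 8."""
    return n_params_billions * 1e9 * BPW[quant] / 8 / 2**30

def largest_quant_that_fits(n_params_billions, budget_gib):
    """Pick the highest-quality quant whose estimated file fits the budget,
    or None if even the smallest is too large."""
    fitting = [q for q in BPW
               if estimated_gguf_gib(n_params_billions, q) <= budget_gib]
    return max(fitting, key=lambda q: BPW[q]) if fitting else None
```

For example, a 42B model in roughly 24 GiB of memory lands on a Q4-class quant; note this estimate covers weights only, with KV cache and activations on top.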
DavidAU/Openai_gpt-oss-20b-CODER-NEO-CODE-DI-MATRIX-GGUF
Text Generation • 21B • Updated • 3.51k • 12
DavidAU/Openai_gpt-oss-20b-NEO-GGUF
Text Generation • 21B • Updated • 2.67k • 14
DavidAU/Openai_gpt-oss-120b-NEO-Imatrix-GGUF
Text Generation • 117B • Updated • 1.75k • 9
DavidAU/OpenAi-GPT-oss-20b-MODERATE-uncensored-NEO-Imatrix-gguf
Text Generation • 21B • Updated • 6.82k • 9
DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x
Text Generation • 6B • Updated • 32 • 4
DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER
Text Generation • 42B • Updated • 364 • 19
DavidAU/Mistral-2x24B-MOE-Magistral-2506-Devstral-2507-1.1-Coder-Reasoning-Ultimate-44B
Text Generation • 44B • Updated • 20 • 2
DavidAU/Qwen3-MOE-4x4B-16B-Jan-Polaris-Instruct-Power-House-V1.1
Text Generation • 12B • Updated • 15 • 2
DavidAU/Qwen3-42B-A3B-2507-YOYO2-TOTAL-RECALL-Instruct
Text Generation • 42B • Updated • 10 • 1
DavidAU/Qwen3-54B-A3B-2507-YOYO2-TOTAL-RECALL-Instruct
Text Generation • 53B • Updated • 21
DavidAU/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium
Text Generation • 17B • Updated • 19
DavidAU/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL
Text Generation • 42B • Updated • 37