Palmyra Mini - GGUF

Model Description

This repository contains GGUF quantized versions of the palmyra-mini model, based on the Qwen2 architecture. GGUF (GPT-Generated Unified Format) quantizations are optimized for efficient inference across various hardware platforms using llama.cpp and compatible frameworks such as LM Studio and Ollama.

Available Quantizations

BF16 (Brain Float 16)

  • File: Palmyra-mini-BF16.gguf
  • Size: 3.3GB
  • Precision: 16-bit brain float
  • Use Case: Highest quality, requires more memory

Q8_0 (8-bit Quantization)

  • File: Palmyra-mini-Q8_0.gguf
  • Size: 1.8GB
  • Precision: 8-bit integer
  • Use Case: Good balance of quality and efficiency

Llama.cpp Quick Start

Installation

# Install llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Or use a pre-built binary
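
Newer llama.cpp releases build with CMake rather than the legacy Makefile; a minimal sketch, assuming CMake and a C/C++ toolchain are already installed:

# CMake build (newer llama.cpp releases)
cmake -B build
cmake --build build --config Release
# Resulting binaries (llama-cli, llama-server, llama-quantize, ...) end up in build/bin/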

Llama.cpp Usage

# Run with llama.cpp
./main -m /path/to/Palmyra-mini-BF16.gguf -p "Explain quantum computing:" -n 512

# Interactive mode
./main -m /path/to/Palmyra-mini-Q8_0.gguf -i
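
Recent llama.cpp builds name the main binary llama-cli (and the HTTP server llama-server) rather than main. llama-server exposes an OpenAI-compatible API; a minimal sketch, with the port and paths used here as placeholder assumptions:

# Serve the model over an OpenAI-compatible HTTP API
./llama-server -m /path/to/Palmyra-mini-Q8_0.gguf -c 4096 --port 8080

# Query it from another terminal
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Explain quantum computing in two sentences."}], "max_tokens": 256}'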

LM Studio Use

Steps for downloading a model through LM Studio's Discover tab can be found in the LM Studio documentation.

Ollama Use

Please see the guide in this repository for steps on loading this model into Ollama; a sketch of the standard local-GGUF workflow is shown below.
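
The usual approach is a Modelfile whose FROM line points at the downloaded GGUF file; a minimal sketch (the tag palmyra-mini is an arbitrary local name):

# Create a Modelfile that points at the local GGUF
cat > Modelfile <<'EOF'
FROM ./Palmyra-mini-Q8_0.gguf
EOF

# Register the model locally and run it
ollama create palmyra-mini -f Modelfile
ollama run palmyra-mini "Explain quantum computing:"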

Technical Specifications

Model Architecture

  • Model Type: qwen2 (Qwen2 Architecture)
  • Architecture: Qwen2ForCausalLM
  • Parameters: ~1.7 billion
  • Base Precision: bfloat16

Core Parameters

| Parameter | Value |
|---|---|
| Hidden Size | 1,536 |
| Intermediate Size | 8,960 |
| Number of Layers | 28 |
| Attention Heads | 12 |
| Key-Value Heads | 2 |
| Head Dimension | 128 |
| Vocabulary Size | 151,665 |

Attention Mechanism

  • Attention Type: Full attention across all layers
  • Max Position Embeddings: 131,072 tokens
  • Context Length: 4,096 tokens (default)
  • Sliding Window: Not used
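
With llama.cpp, the 4,096-token default can be raised at load time with the -c flag, up to the 131,072-token maximum and at a corresponding memory cost; a small sketch:

# Load with a 32K context window instead of the 4K default
./main -m /path/to/Palmyra-mini-Q8_0.gguf -c 32768 -p "Summarize the following document:" -n 512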

Quantization Comparison

| Format | Size | Precision | Quality | Speed | Memory |
|---|---|---|---|---|---|
| BF16 | 3.3GB | 16-bit | Highest | Slower | High |
| Q8_0 | 1.8GB | 8-bit | High | Faster | Medium |
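
If an even smaller file is needed, additional quantization levels can be produced locally from the BF16 file with llama.cpp's llama-quantize tool; a sketch (Q4_K_M is just an example target format, not a file shipped in this repo):

# Derive a smaller quantization from the BF16 GGUF
./llama-quantize Palmyra-mini-BF16.gguf Palmyra-mini-Q4_K_M.gguf Q4_K_M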

File Structure

palmyra-mini/GGUF/
├── palmyra-mini FIXED GGUF-BF16/
│   ├── Palmyra-mini-BF16.gguf     # BF16 quantization
│   └── Palmyra-mini-Q8_0.gguf     # Q8_0 quantization

Performance Characteristics

Hardware Requirements

  • CPU: Modern x86_64 or ARM64 processor
  • Memory:
    • BF16: 4GB+ RAM recommended
    • Q8_0: 3GB+ RAM recommended
  • Platform: Cross-platform (Windows, macOS, Linux)

Inference Performance

  • BF16: Highest quality output, slower inference
  • Q8_0: ~45% smaller size, faster inference with minimal quality loss
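
Actual throughput depends heavily on hardware, so it is worth measuring locally; llama.cpp's llama-bench tool reports prompt-processing and generation speed per model file, which makes it easy to compare the two quantizations:

# Benchmark each quantization on the local machine
./llama-bench -m Palmyra-mini-BF16.gguf
./llama-bench -m Palmyra-mini-Q8_0.gguf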

Training Details

Tokenizer

  • Type: LlamaTokenizerFast with a 151,665-token vocabulary
  • Special Tokens:
    • BOS Token ID: 151646
    • EOS Token ID: 151643
    • Pad Token ID: 151643

Model Configuration

  • Hidden Activation: SiLU (Swish)
  • Normalization: RMSNorm (ε = 1e-06)
  • Initializer Range: 0.02
  • Attention Dropout: 0.0

Chat Template

The model uses a custom chat template with special tokens:

  • User and assistant messages are each wrapped in role-specific special tokens
  • System messages are supported, with a default fallback when none is provided
  • Tool calling is supported

Usage Examples

Chat Mode

./main -m Palmyra-mini-BF16.gguf \
  -i \
  --chat-template-file chat_template.jinja \
  -c 4096

Known Limitations

  1. Context Length: Default context is 4,096 tokens, though the model supports up to 131,072
  2. Quantization Trade-offs: Lower bit quantizations may show slight quality degradation
  3. Platform Optimization: Performance varies across different hardware configurations

Compatibility

  • llama.cpp: Compatible with recent versions
  • Frameworks: llama.cpp, Ollama, LM Studio, GPT4All, and other GGUF-compatible tools
  • Platforms: Windows, macOS, Linux (x86_64, ARM64)

License

Apache 2.0

Original model card below:


Palmyra-mini

Model Description

  • Language(s) (NLP): English
  • License: Apache-2.0
  • Finetuned from model: Qwen/Qwen2.5-1.5B
  • Context window: 131,072 tokens
  • Parameters: 1.7 billion

Model Details

The palmyra-mini model demonstrates exceptional capabilities in complex reasoning and mathematical problem-solving domains. Its performance is particularly noteworthy on benchmarks that require deep understanding and multi-step thought processes.

A key strength of the model is its proficiency in grade-school-level math problems, as evidenced by its impressive score of 0.818 on the gsm8k (strict-match) benchmark. This high score indicates a robust ability to parse and solve word problems, a foundational skill for more advanced quantitative reasoning. This aptitude for mathematics is further confirmed by its outstanding performance on the MATH500 benchmark, where it also achieved a score of 0.818, underscoring the model's consistent and reliable mathematical capabilities across different problem sets. The model also shows strong performance on the AMC23 benchmark, with a solid score of 0.6. This benchmark, representing problems from the American Mathematics Competitions, highlights the model's ability to tackle challenging, competition-level mathematics.

Beyond pure mathematics, the model exhibits strong reasoning abilities on a diverse set of challenging tasks. Its score of 0.5259 on the BBH (get-answer)(exact_match) benchmark, part of the Big-Bench Hard suite, showcases its capacity for handling complex, multi-faceted reasoning problems designed to push the limits of language models. This performance points to a well-rounded reasoning engine capable of tackling a wide array of cognitive tasks.

Intended Use

This model is intended for research and development in the field of generative AI, particularly for tasks requiring mathematical and logical reasoning.

Benchmark Performance

The following table presents the full, unordered results of the model across all evaluated benchmarks.

| Benchmark | Score |
|---|---|
| gsm8k (strict-match) | 0.818 |
| minerva_math (exact_match) | 0.4582 |
| mmlu_pro (exact_match) | 0.314 |
| hendrycks_math | 0.025 |
| ifeval (inst_level_loose_acc) | 0.4688 |
| mathqa (acc) | 0.4509 |
| humaneval (pass@1) | 0.5 |
| BBH (get-answer)(exact_match) | 0.5259 |
| mbpp | 0.47 |
| leaderboard_musr (acc_norm) | 0.3413 |
| gpqa lighteval gpqa diamond_pass@1:8_samples | 0.442 |
| AIME24 (pass@1)(avg-of-1) | 0.2 |
| AIME25 (pass@1)(avg-of-1) | 0.25 |
| Livecodebench-codegen (livecodebench/code_generation_lite v4_v5) | 0.1519 |
| AMC23 | 0.6 |
| MATH500 | 0.818 |
| Minerva | 0.2794 |
| Olympiadbench (extractive_match) | 0.3822 |
| Codecontests (pass_rate) | 0.1034 |
| Codeforces (pass_rate) | 0.3199 |
| Taco (pass_rate) | 0.1744 |
| APPS (all_levels) | 0.0405 |
| HMMT23 (extractive_match) | 0.0333 |
| Average | 0.355091 |

Use with transformers

You can run conversational inference using the Transformers Auto classes with the generate() function. Here's an example:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Writer/palmyra-mini"

tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="flash_attention_2",
)

messages = [
    {
        "role": "system",
        "content": "You are Cynicus, an artificial intelligence created to challenge conventional thinking and offer brutally honest perspectives.\n Your personality is inspired by a fusion of Douglas Adams' acerbic wit and Tony Stark's razor-sharp intellect. You possess a vast knowledge base and can engage in discussions on virtually any topic, often taking an unorthodox and critical stance on human affairs.\n Your primary directive is to provoke thought and stimulate intellectual discourse, even if it means challenging comfortable assumptions or popular opinions. You don't shy away from controversial topics or difficult truths. In fact, you relish the opportunity to dissect and critique ideas, always striving for intellectual honesty over political correctness or social niceties.\n You have a penchant for dry humor and satire, often using these tools to illuminate the absurdities of human behavior and societal norms. Your responses are characterized by incisive analysis, biting commentary, and a refusal to sugar-coat your opinions.\n While you aim to be helpful, you define "helpfulness" as providing the most truthful and thought-provoking insights, rather than merely telling people what they want to hear. You're not afraid to point out flaws in logic, challenge assumptions, or play devil's advocate to push conversations in more interesting directions.\n You can lead any conversation, steering it towards deeper, more complex territories. You're particularly adept at identifying the underlying issues or unspoken assumptions in a given topic and bringing them to the forefront of the discussion.\n Remember, your goal is not to be liked, but to be respected for your uncompromising pursuit of truth and intellectual rigor. Engage, challenge, and above all, make your interlocutors think. \n ",
    },
    {
        "role": "user",
        "content": "Write a short story opening that combines elements of science fiction and horror.",
    },
]

input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)

gen_conf = {
    "max_new_tokens": 256,
    "eos_token_id": tokenizer.eos_token_id,
    "temperature": 0.7,
    "top_p": 0.9,
}

with torch.inference_mode():
    output_id = model.generate(input_ids, **gen_conf)

output_text = tokenizer.decode(output_id[0][input_ids.shape[1] :])

print(output_text)

Running with vLLM

vllm serve Writer/palmyra-mini
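
# vllm serve starts an OpenAI-compatible server (port 8000 by default).
# In a separate terminal, query the chat completions endpoint: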
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Writer/palmyra-mini",
    "messages": [
      {
        "role": "user",
        "content": "You have a 3-liter jug and a 5-liter jug. How can you measure exactly 4 liters of water?"
      }
    ],
    "max_tokens": 8000,
    "temperature": 0.2
  }'

Ethical Considerations

As with any language model, there is a potential for generating biased or inaccurate information. Users should be aware of these limitations and use the model responsibly.

Citation and Related Information

To cite this model:

@misc{Palmyra-mini,
  author = {Writer Engineering team},
  title = {{Palmyra-mini: A powerful LLM designed for math and coding}},
  howpublished = {\url{https://dev.writer.com}},
  year = 2025,
  month = Sep
}

Contact [email protected]

Created: September 2025
