---
base_model: Daemontatox/Zirel-3
tags:
- text-generation-inference
- transformers
- unsloth
- glm4_moe
- pruning
- REAP
- MOE
- glm
- zirel
- mlx
- mlx-my-repo
license: apache-2.0
language:
- en
library_name: transformers
---

# Wwayu/Zirel-3-mlx-4Bit

The model [Wwayu/Zirel-3-mlx-4Bit](https://huggingface.co/Wwayu/Zirel-3-mlx-4Bit) was converted to MLX format from [Daemontatox/Zirel-3](https://huggingface.co/Daemontatox/Zirel-3) using mlx-lm version **0.26.4**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub.
model, tokenizer = load("Wwayu/Zirel-3-mlx-4Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
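
For quick tests without writing Python, mlx-lm also installs a command-line entry point. A minimal sketch, assuming the `mlx_lm.generate` console script that ships with mlx-lm (the flags below come from the mlx-lm CLI, not from this card):

```bash
# One-off generation from the terminal; fetches the model on first run.
mlx_lm.generate --model Wwayu/Zirel-3-mlx-4Bit --prompt "hello" --max-tokens 256
```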