Is it compatible with vLLM?

#1
by bullerwins - opened

Hi!

Title. Is it compatible with vLLM? I'm trying to launch it with:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6 VLLM_PP_LAYER_PARTITION=8,6,23,6,6,6,7 vllm serve \
    /mnt/llms/models/ModelCloud/MiniMax-M2-GPTQMODEL-W4A16/ \
    --served-model-name MiniMax-M2-AWQ \
    --enable-auto-tool-choice \
    --tool-call-parser minimax_m2 \
    --reasoning-parser minimax_m2_append_think \
    --swap-space 16 \
    --max-num-seqs 32 \
    --max-model-len 32000 \
    --gpu-memory-utilization 0.9 \
    --tensor-parallel-size 1 -pp 7 \
    --enable-expert-parallel \
    --trust-remote-code \
    --disable-log-requests \
    --host 0.0.0.0 \
    --port 5000

But it didn't work. I'm using today's vLLM nightly.

GPUs:
CUDA0=5090
CUDA1=3090
CUDA2=rtx6000
CUDA3=3090
CUDA4=3090
CUDA5=3090
CUDA6=5090

Output error log:
https://pastebin.com/sNJQdcmK

ModelCloud.AI org

There is a bug in minimax m2 modeling code in vllm that is causing this error. Should be fixed within the week.

Same issue here. Should we wait for a weight fix or for a vLLM nightly build update?

I was able to run this model with vLLM using this fix made by Gemini:
https://github.com/avtc/vllm/commit/cd3f7a4e9121fbdeff9f52e088bc3d9fa33ebfd2
The branch: https://github.com/avtc/vllm/tree/feature/fix-gptq-m2-load-gemini
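If you want to try it before the fix lands upstream, here is a minimal install sketch for that branch. One assumption: the fix only touches Python modeling code, so VLLM_USE_PRECOMPILED can reuse prebuilt kernels instead of doing a full CUDA source build; drop that variable if your setup needs to compile the kernels.

# Sketch: install vLLM from the fix branch (adjust paths/env to your setup)
git clone -b feature/fix-gptq-m2-load-gemini https://github.com/avtc/vllm.git
cd vllm
# Reuse prebuilt kernels to skip the long CUDA build (Python-only change)
VLLM_USE_PRECOMPILED=1 pip install -e .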

The command to run vLLM on 8 × 3090:

export VLLM_ATTENTION_BACKEND="FLASHINFER"
export TORCH_CUDA_ARCH_LIST="8.6"
export VLLM_SLEEP_WHEN_IDLE=1
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export VLLM_MARLIN_USE_ATOMIC_ADD=1
export SAFETENSORS_FAST_GPU=1

vllm serve /home/ubuntu/models/MiniMax-M2-GPTQMODEL-W4A16-ModelCloud \
    -tp 8 \
    --port 8000 \
    --host 0.0.0.0 \
    --uvicorn-log-level info \
    --trust-remote-code \
    --gpu-memory-utilization 0.925 \
    --max-num-seqs 1 \
    --dtype=float16 \
    --seed 1234 \
    --max-model-len 196608 \
    --tool-call-parser minimax_m2 \
    --reasoning-parser minimax_m2_append_think \
    --enable-auto-tool-choice \
    --enable-sleep-mode \
    --compilation-config '{"level": 3, "cudagraph_capture_sizes": [1], "cudagraph_mode": "PIECEWISE"}'

With sampling params:

{
    "top_p": 0.95,
    "temperature": 1.0,
    "repetition_penalty": 1.00,
    "top_k": 40,
    "min_p": 0.0
}

The result for the request "Create a Playable Synth Keyboard using html, css, js in a single html file":

[screenshot: the rendered synth keyboard]

It works well.

The heptagon-with-balls test also looks good, and unusual thanks to the shiny balls:

[screenshot: heptagon with bouncing balls]
