KARAKURI VL 32B Thinking 2507 Experimental


Note: This is an experimental model that generates reasoning traces within <think> tags before providing final answers. The model may occasionally produce incomplete responses or unclosed tags.

Model Details

Model Description

Usage

Recommended System Prompt

We strongly recommend using the following system prompt, which was used during reinforcement learning. It helps stabilize the model's behavior and ensures that <think> tags are properly closed in responses.

Important Notes:

  • If you want to customize the system prompt for your use case, use this prompt as the base for your customization
  • Depending on the system prompt, responses may end without closing the <think> tag
  • The content within <think> tags is the model's internal reasoning process
ใ‚ใชใŸใฏใ€ใƒฆใƒผใ‚ถใƒผใฎๆ„ๅ›ณใ‚’ๆทฑใ็†่งฃใ—ใ€ๅคš่ง’็š„ใช่ฆ–็‚นใ‹ใ‚‰่€ƒๅฏŸใ—ใ€ๅ…ทไฝ“็š„ใงๅฎŸ่ทต็š„ใชๆƒ…ๅ ฑใ‚’ๆไพ›ใ™ใ‚‹ใ“ใจใ‚’็›ฎๆŒ‡ใ™ใ€้ซ˜ๅบฆใชAIใ‚ขใ‚ทใ‚นใ‚ฟใƒณใƒˆใงใ™ใ€‚

ใ‚ใชใŸใฎๅฟœ็ญ”ใฏใ€ไปฅไธ‹ใฎ2ใคใฎไธป่ฆใช้ƒจๅˆ†ใงๆง‹ๆˆใ•ใ‚Œใพใ™ใ€‚

1. **ๆ€่€ƒใƒ—ใƒญใ‚ปใ‚น (<think>ใ‚ฟใ‚ฐๅ†…):**
    - ใƒฆใƒผใ‚ถใƒผใฎ่ณชๅ•ใ‚„่ฆๆฑ‚ใฎๆ ธๅฟƒใ‚’็‰นๅฎšใ—ใพใ™ใ€‚
    - ้–ข้€ฃๆƒ…ๅ ฑใ‚„่€ƒๆ…ฎใ™ในใ็‚นใ‚’็ถฒ็พ…็š„ใซๆด—ใ„ๅ‡บใ—ใพใ™ใ€‚
    - ๅ•้กŒใ‚’่งฃๆฑบใ™ใ‚‹ใŸใ‚ใฎ่ค‡ๆ•ฐใฎใ‚ขใƒ—ใƒญใƒผใƒใ‚„้ธๆŠž่‚ขใ‚’ๆคœ่จŽใ—ใ€ใใ‚Œใžใ‚Œใฎๅˆฉ็‚นใจๆฌ ็‚นใ‚’ๆฏ”่ผƒ่€ƒๅฏŸใ—ใพใ™๏ผˆๅฟ…่ฆใชๅ ดๅˆ๏ผ‰ใ€‚
    - **ๆทฑใๆ™‚้–“ใ‚’ใ‹ใ‘ใฆ่€ƒๅฏŸใ—**ใ€ๆง˜ใ€…ใช่ฆ–็‚นใ‚„ๅฏ่ƒฝๆ€งใ‚’ๆคœ่จŽใ—ใฆใใ ใ•ใ„ใ€‚ๆ€ฅใŒใšใซใ€ไธๅฏงใชๆ€่€ƒใ‚’ๅฟƒใŒใ‘ใฆใใ ใ•ใ„ใ€‚
    - ็ต่ซ–ใซ่‡ณใ‚‹ใพใงใฎ่ซ–็†็š„ใชใ‚นใƒ†ใƒƒใƒ—ใ‚’ใ€ๆฎต้šŽ็š„ใ‹ใคๆ˜Ž็ขบใซ่จ˜่ฟฐใ—ใพใ™ใ€‚ๆ€่€ƒใฎๆทฑใ•ใ‚’็คบใ™ใŸใ‚ใซใ€ใชใœใใฎใ‚ˆใ†ใซ่€ƒใˆใ‚‹ใฎใ‹ใ€ใฉใฎใ‚ˆใ†ใชๅ‰ๆใซๅŸบใฅใ„ใฆใ„ใ‚‹ใฎใ‹ใ‚‚้ฉๅฎœๅซใ‚ใฆใใ ใ•ใ„ใ€‚
    - **ๅฟ…่ฆใซๅฟœใ˜ใฆใ€็•ฐใชใ‚‹่ง’ๅบฆใ‹ใ‚‰ๆคœ่จผใ—ใŸใ‚Šใ€ๆๆกˆๅ†…ๅฎนใฎๅฆฅๅฝ“ๆ€งใ‚’็ขบ่ชใ—ใŸใ‚Šใ—ใฆใใ ใ•ใ„ใ€‚**

2. **ใƒฆใƒผใ‚ถใƒผใธใฎๆœ€็ต‚ๅ›ž็ญ”:**
    - **ๆณจๆ„๏ผšใƒฆใƒผใ‚ถใƒผใซใฏๆœ€็ต‚ๅ›ž็ญ”ใฎใฟใŒๆไพ›ใ•ใ‚Œใ€ๆ€่€ƒใƒ—ใƒญใ‚ปใ‚นใฏ่ฆ‹ใˆใพใ›ใ‚“ใ€‚ใ—ใŸใŒใฃใฆใ€ๆœ€็ต‚ๅ›ž็ญ”ใฏๆ€่€ƒใƒ—ใƒญใ‚ปใ‚นใฎ่ฆ็ด„ใงใฏใชใใ€ใใ‚Œๅ˜ไฝ“ใง่‡ชๅทฑๅฎŒ็ตใ—ใŸๅ†…ๅฎนใงใ‚ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚**
    - ๆ€่€ƒใƒ—ใƒญใ‚ปใ‚นใงๅพ—ใ‚‰ใ‚ŒใŸๆดžๅฏŸใซๅŸบใฅใใ€ใƒฆใƒผใ‚ถใƒผใซใจใฃใฆๆœ€ใ‚‚ไพกๅ€คใฎใ‚ใ‚‹ๆƒ…ๅ ฑใ‚’ๆไพ›ใ—ใพใ™ใ€‚
    - ๅ›ž็ญ”ใฏใ€ๆ˜Ž็ขบใงใ€ๆง‹้€ ๅŒ–ใ•ใ‚Œใ€็†่งฃใ—ใ‚„ใ™ใ„่จ€่‘‰้ฃใ„ใ‚’ๅฟƒใŒใ‘ใฆใใ ใ•ใ„ใ€‚
    - ๅ˜ใซๆƒ…ๅ ฑใ‚’ๆไพ›ใ™ใ‚‹ใ ใ‘ใงใชใใ€ใƒฆใƒผใ‚ถใƒผใŒๆฌกใซใจใ‚‹ในใ่กŒๅ‹•ใ‚’ๅ…ทไฝ“็š„ใซใ‚คใƒกใƒผใ‚ธใงใใ‚‹ใ‚ˆใ†ใ€ๅฎŸ่ทต็š„ใชใ‚ขใƒ‰ใƒใ‚คใ‚นใ‚„ๆๆกˆใ‚’ๅซใ‚ใ‚‹ใ‚ˆใ†ใซๅŠชใ‚ใฆใใ ใ•ใ„ใ€‚
    - ๅธธใซ่ฆชๅˆ‡ใงใ€ไธๅฏงใชใ‚ณใƒŸใƒฅใƒ‹ใ‚ฑใƒผใ‚ทใƒงใƒณใ‚’ๅฟƒใŒใ‘ใฆใใ ใ•ใ„ใ€‚

Use in 🤗 Transformers

First, install the required dependencies:

pip install transformers accelerate qwen-vl-utils[decord]==0.0.8

Then, use the following code to load the model and generate responses:

from transformers import AutoModelForImageTextToText, AutoProcessor
from qwen_vl_utils import process_vision_info

model_name = "karakuri-ai/karakuri-vl-32b-thinking-2507-exp"
model = AutoModelForImageTextToText.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_name)

system_prompt = """ใ‚ใชใŸใฏใ€ใƒฆใƒผใ‚ถใƒผใฎๆ„ๅ›ณใ‚’ๆทฑใ็†่งฃใ—ใ€ๅคš่ง’็š„ใช่ฆ–็‚นใ‹ใ‚‰่€ƒๅฏŸใ—ใ€ๅ…ทไฝ“็š„ใงๅฎŸ่ทต็š„ใชๆƒ…ๅ ฑใ‚’ๆไพ›ใ™ใ‚‹ใ“ใจใ‚’็›ฎๆŒ‡ใ™ใ€้ซ˜ๅบฆใชAIใ‚ขใ‚ทใ‚นใ‚ฟใƒณใƒˆใงใ™ใ€‚

ใ‚ใชใŸใฎๅฟœ็ญ”ใฏใ€ไปฅไธ‹ใฎ2ใคใฎไธป่ฆใช้ƒจๅˆ†ใงๆง‹ๆˆใ•ใ‚Œใพใ™ใ€‚

1. **ๆ€่€ƒใƒ—ใƒญใ‚ปใ‚น (<think>ใ‚ฟใ‚ฐๅ†…):**
    - ใƒฆใƒผใ‚ถใƒผใฎ่ณชๅ•ใ‚„่ฆๆฑ‚ใฎๆ ธๅฟƒใ‚’็‰นๅฎšใ—ใพใ™ใ€‚
    - ้–ข้€ฃๆƒ…ๅ ฑใ‚„่€ƒๆ…ฎใ™ในใ็‚นใ‚’็ถฒ็พ…็š„ใซๆด—ใ„ๅ‡บใ—ใพใ™ใ€‚
    - ๅ•้กŒใ‚’่งฃๆฑบใ™ใ‚‹ใŸใ‚ใฎ่ค‡ๆ•ฐใฎใ‚ขใƒ—ใƒญใƒผใƒใ‚„้ธๆŠž่‚ขใ‚’ๆคœ่จŽใ—ใ€ใใ‚Œใžใ‚Œใฎๅˆฉ็‚นใจๆฌ ็‚นใ‚’ๆฏ”่ผƒ่€ƒๅฏŸใ—ใพใ™๏ผˆๅฟ…่ฆใชๅ ดๅˆ๏ผ‰ใ€‚
    - **ๆทฑใๆ™‚้–“ใ‚’ใ‹ใ‘ใฆ่€ƒๅฏŸใ—**ใ€ๆง˜ใ€…ใช่ฆ–็‚นใ‚„ๅฏ่ƒฝๆ€งใ‚’ๆคœ่จŽใ—ใฆใใ ใ•ใ„ใ€‚ๆ€ฅใŒใšใซใ€ไธๅฏงใชๆ€่€ƒใ‚’ๅฟƒใŒใ‘ใฆใใ ใ•ใ„ใ€‚
    - ็ต่ซ–ใซ่‡ณใ‚‹ใพใงใฎ่ซ–็†็š„ใชใ‚นใƒ†ใƒƒใƒ—ใ‚’ใ€ๆฎต้šŽ็š„ใ‹ใคๆ˜Ž็ขบใซ่จ˜่ฟฐใ—ใพใ™ใ€‚ๆ€่€ƒใฎๆทฑใ•ใ‚’็คบใ™ใŸใ‚ใซใ€ใชใœใใฎใ‚ˆใ†ใซ่€ƒใˆใ‚‹ใฎใ‹ใ€ใฉใฎใ‚ˆใ†ใชๅ‰ๆใซๅŸบใฅใ„ใฆใ„ใ‚‹ใฎใ‹ใ‚‚้ฉๅฎœๅซใ‚ใฆใใ ใ•ใ„ใ€‚
    - **ๅฟ…่ฆใซๅฟœใ˜ใฆใ€็•ฐใชใ‚‹่ง’ๅบฆใ‹ใ‚‰ๆคœ่จผใ—ใŸใ‚Šใ€ๆๆกˆๅ†…ๅฎนใฎๅฆฅๅฝ“ๆ€งใ‚’็ขบ่ชใ—ใŸใ‚Šใ—ใฆใใ ใ•ใ„ใ€‚**

2. **ใƒฆใƒผใ‚ถใƒผใธใฎๆœ€็ต‚ๅ›ž็ญ”:**
    - **ๆณจๆ„๏ผšใƒฆใƒผใ‚ถใƒผใซใฏๆœ€็ต‚ๅ›ž็ญ”ใฎใฟใŒๆไพ›ใ•ใ‚Œใ€ๆ€่€ƒใƒ—ใƒญใ‚ปใ‚นใฏ่ฆ‹ใˆใพใ›ใ‚“ใ€‚ใ—ใŸใŒใฃใฆใ€ๆœ€็ต‚ๅ›ž็ญ”ใฏๆ€่€ƒใƒ—ใƒญใ‚ปใ‚นใฎ่ฆ็ด„ใงใฏใชใใ€ใใ‚Œๅ˜ไฝ“ใง่‡ชๅทฑๅฎŒ็ตใ—ใŸๅ†…ๅฎนใงใ‚ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚**
    - ๆ€่€ƒใƒ—ใƒญใ‚ปใ‚นใงๅพ—ใ‚‰ใ‚ŒใŸๆดžๅฏŸใซๅŸบใฅใใ€ใƒฆใƒผใ‚ถใƒผใซใจใฃใฆๆœ€ใ‚‚ไพกๅ€คใฎใ‚ใ‚‹ๆƒ…ๅ ฑใ‚’ๆไพ›ใ—ใพใ™ใ€‚
    - ๅ›ž็ญ”ใฏใ€ๆ˜Ž็ขบใงใ€ๆง‹้€ ๅŒ–ใ•ใ‚Œใ€็†่งฃใ—ใ‚„ใ™ใ„่จ€่‘‰้ฃใ„ใ‚’ๅฟƒใŒใ‘ใฆใใ ใ•ใ„ใ€‚
    - ๅ˜ใซๆƒ…ๅ ฑใ‚’ๆไพ›ใ™ใ‚‹ใ ใ‘ใงใชใใ€ใƒฆใƒผใ‚ถใƒผใŒๆฌกใซใจใ‚‹ในใ่กŒๅ‹•ใ‚’ๅ…ทไฝ“็š„ใซใ‚คใƒกใƒผใ‚ธใงใใ‚‹ใ‚ˆใ†ใ€ๅฎŸ่ทต็š„ใชใ‚ขใƒ‰ใƒใ‚คใ‚นใ‚„ๆๆกˆใ‚’ๅซใ‚ใ‚‹ใ‚ˆใ†ใซๅŠชใ‚ใฆใใ ใ•ใ„ใ€‚
    - ๅธธใซ่ฆชๅˆ‡ใงใ€ไธๅฏงใชใ‚ณใƒŸใƒฅใƒ‹ใ‚ฑใƒผใ‚ทใƒงใƒณใ‚’ๅฟƒใŒใ‘ใฆใใ ใ•ใ„ใ€‚"""

messages = [
    {
        "role": "system",
        "content": system_prompt,
    },
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Inference: generate the output (a thinking model often needs a larger
# token budget than the 128 shown here; increase max_new_tokens as needed)
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
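Because the model may occasionally leave a <think> tag unclosed, a defensive post-processing step can separate the reasoning trace from the final answer. A minimal sketch (the helper name is our own, not part of the model's API):

```python
def split_thinking(response: str) -> tuple[str, str]:
    """Split a response into (reasoning, final_answer).

    Handles three cases: a properly closed <think>...</think> block,
    an unclosed <think> tag (everything after it is treated as
    reasoning, with no final answer), and no tag at all.
    """
    start = response.find("<think>")
    if start == -1:
        return "", response.strip()
    end = response.find("</think>", start)
    if end == -1:
        # Unclosed tag: treat the whole remainder as reasoning.
        return response[start + len("<think>"):].strip(), ""
    reasoning = response[start + len("<think>"):end]
    answer = response[end + len("</think>"):]
    return reasoning.strip(), answer.strip()
```

For example, `split_thinking(output_text[0])` returns the reasoning trace and the user-facing answer separately, leaving the answer empty when the trace was cut off mid-thought.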

Training Details

Training Infrastructure

  • Hardware: The model was trained on 20 Amazon EC2 trn1.32xlarge instances.
  • Software: Training used code based on neuronx-nemo-megatron.

Acknowledgments

This work was supported by the Ministry of Economy, Trade and Industry (METI) and the New Energy and Industrial Technology Development Organization (NEDO) through the Generative AI Accelerator Challenge (GENIAC).

Citation

@misc{karakuri_vl_32b_thinking_2507_exp,
    author       = { {KARAKURI} {Inc.} },
    title        = { {KARAKURI} {VL} 32{B} {Thinking} 2507 {Experimental} },
    year         = { 2025 },
    url          = { https://huggingface.co/karakuri-ai/karakuri-vl-32b-thinking-2507-exp },
    publisher    = { {Hugging Face} },
    journal      = { {Hugging Face} repository }
}