Heron-NVILA-Lite-33B

Heron-NVILA-Lite-33B is a vision-language model trained for Japanese, based on the NVILA-Lite architecture. This implementation is compatible with `transformers.AutoModelForImageTextToText`.

Model Overview

Setup

We developed this implementation with `transformers==4.57.1`. Other versions of transformers may also work, but we have not tested them.

pip install "transformers[torch]==4.57.1" pillow
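
To confirm which version is installed before running the examples, a quick standard-library check:

import importlib.metadata
print(importlib.metadata.version("transformers"))  # should print 4.57.1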

Usage

from transformers import AutoConfig, AutoModelForImageTextToText, AutoProcessor

MODEL_NAME = "turing-motors/Heron-NVILA-Lite-33B-hf"

# Option 1: build the model from its config (weights are randomly initialized)
config = AutoConfig.from_pretrained(MODEL_NAME, trust_remote_code=True)
model = AutoModelForImageTextToText.from_config(config, trust_remote_code=True)

# Option 2: load the pretrained weights directly
model = AutoModelForImageTextToText.from_pretrained(MODEL_NAME, trust_remote_code=True, device_map="auto")
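
# Optional, untested sketch: 4-bit quantized loading to reduce GPU memory.
# This assumes the standard `bitsandbytes` integration (`pip install bitsandbytes`)
# also works with this remote-code implementation, which we have not verified.
# import torch
# from transformers import BitsAndBytesConfig
# model = AutoModelForImageTextToText.from_pretrained(
#     MODEL_NAME,
#     trust_remote_code=True,
#     device_map="auto",
#     quantization_config=BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16),
# )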

# load processor
processor = AutoProcessor.from_pretrained(MODEL_NAME, trust_remote_code=True)

# show chat_template
print(processor.tokenizer.chat_template)

def generate_content(content: str, images: list | None = None, **kwargs) -> str:
    """Apply the chat template, generate, and decode only the newly generated tokens."""
    conversation = [{"role": "user", "content": content}]
    text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
    encoding = processor(text=text, images=images, return_tensors="pt").to(model.device)
    output = model.generate(**encoding, **kwargs)
    # slice off the prompt tokens so only the model's reply is decoded
    return processor.decode(output[0, len(encoding["input_ids"][0]):], skip_special_tokens=True)
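
Any extra keyword arguments are forwarded to model.generate, which is how the examples below pass sampling parameters or a GenerationConfig.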

# example: generate with text only ("こんにちは" = "Hello")
response = generate_content("こんにちは")
print(response)
print("---" * 40)

# example: generate with text + image ("画像を説明してください。" = "Please describe the image.")
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
response = generate_content("<image>\n画像を説明してください。", images=[image])
print(response)
print("---" * 40)

# example: generate using an explicit GenerationConfig
from PIL import Image
import requests
from transformers import GenerationConfig
generation_config = GenerationConfig(
    max_new_tokens=512,
    temperature=0.5,
    do_sample=True,
)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
response = generate_content(
    "<image>\n画像を説明してください。",
    images=[image],
    generation_config=generation_config
)
print(response)
print("---" * 40)

# example: generate with interleaved text + image + text + image + text
from PIL import Image
import requests
url_list = [
    "https://images.unsplash.com/photo-1694831404826-3400c48c188d?q=80&w=2070&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D",
    "https://images.unsplash.com/photo-1693240876439-473af88b4ed7?q=80&w=1974&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D"
]
images = [
    Image.open(requests.get(url, stream=True).raw).convert("RGB") for url in url_list
]
# prompt: "This is an image of Japan" / "This is an image of Austria" /
# "Explain the differences between the images"
response = generate_content("".join([
    "<image>\n",
    "これは日本の画像です",
    "<image>\n",
    "これはオーストリアの画像です",
    "各画像の違いを説明して"]), images)
print(response)
print("---" * 40)

Training Summary

Evaluation

We used llm-jp-eval-mm for this evaluation. Scores for models other than Heron-NVILA-Lite and Sarashina2-Vision-14B were taken from the llm-jp-eval-mm leaderboard (as of March 2025) and the Asagi website. Heron-NVILA-Lite and Sarashina2-Vision-14B were evaluated with LLM-as-a-judge using "gpt-4o-2024-05-13". The official Sarashina2-Vision-14B blog used "gpt-4o-2024-08-06" as the judge; because the evaluation conditions differ, its results should be treated as reference only.

| Model | LLM Size | Heron-Bench overall LLM (%) | JA-VLM-Bench-In-the-Wild LLM (/5.0) | JA-VG-VQA-500 LLM (/5.0) |
|---|---|---|---|---|
| Heron-NVILA-Lite-1B | 0.5B | 45.9 | 2.92 | 3.16 |
| Heron-NVILA-Lite-2B | 1.5B | 52.8 | 3.52 | 3.50 |
| Heron-NVILA-Lite-15B | 14B | 59.6 | 4.2 | 3.82 |
| Heron-NVILA-Lite-33B | 32B | 61.1 | 4.0 | 3.85 |
| LLaVA-CALM2-SigLIP | 7B | 43.3 | 3.15 | 3.21 |
| Llama-3-EvoVLM-JP-v2 | 8B | 39.3 | 2.92 | 2.96 |
| VILA-jp | 13B | 57.2 | 3.69 | 3.62 |
| Asagi-14B | 13B | 55.8 | 3.44 | 3.84 |
| Sarashina2-Vision-14B | 13B | 50.9 | 4.1 | 3.43 |
| Qwen2-VL 7B Instruct | 7B | 55.5 | 3.61 | 3.6 |
| GPT-4o | - | 87.6 | 3.85 | 3.58 |

Risks and Limitations

This model is experimental and has not been thoroughly calibrated for ethical compliance or legal standards. Caution is advised for sensitive applications.

License

Acknowledgements

This model is based on results obtained from a project, JPNP20017, subsidized by the New Energy and Industrial Technology Development Organization (NEDO).

We would like to acknowledge the use of the following open-source repositories:


Model tree for turing-motors/Heron-NVILA-Lite-33B-hf

Base model: Qwen/Qwen2.5-32B (this model is a fine-tune)