Model Card for Gemma-SEA-LION-v4-27B-VL

Last updated: 2025-10-16

SEA-LION-VL is an instruction-tuned vision-text model for the Southeast Asia (SEA) region.

Gemma-SEA-LION-v4-27B-VL was created by post-training Gemma-SEA-LION-v4-27B-IT on instruction-image pair datasets in Burmese, English, Indonesian, Khmer, Lao, Malay, Mandarin, Tagalog, Tamil, Thai and Vietnamese, comprising approximately 540k samples in total.

Gemma-SEA-LION-v4-27B-VL inherits Gemma 3's:

  • Large 128K context length
  • Image and text understanding capabilities, including document comprehension, visual Q&A, and image-grounded reasoning
  • Advanced function calling and structured outputs to allow for seamless integration into larger systems (see the sketch after this list)
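As an illustration of structured outputs, the sketch below asks the model to answer in a fixed JSON schema described in the system turn and parses the reply. It mirrors the pipeline pattern from the How to Get Started section; the schema and image URL are hypothetical placeholders, not part of this card.

import json
import torch
from transformers import pipeline

# Load the vision-text pipeline (same pattern as the getting-started example below).
pipe = pipeline(
    "image-text-to-text",
    model="aisingapore/Gemma-SEA-LION-v4-27B-VL",
    device="cuda",
    torch_dtype=torch.bfloat16
)

# Describe the desired JSON schema in the system turn (hypothetical schema).
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": 'Reply with JSON only, using the schema {"label": string, "confidence": number}.'}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/document.png"},  # placeholder URL
            {"type": "text", "text": "What type of document is this?"}
        ]
    }
]

raw = pipe(text=messages, max_new_tokens=100)[0]["generated_text"][-1]["content"]
result = json.loads(raw)  # may raise if the model strays from the schema; validate in practice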

Model Details

Model Description

We performed post-training in English and SEA languages on Gemma-SEA-LION-v4-27B-IT, a decoder model using the Gemma 3 architecture, to create Gemma-SEA-LION-v4-27B-VL.

For tokenization, the model employs the default tokenizer used in Gemma 3 27B IT.
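For reference, the tokenizer can be loaded on its own via the standard Transformers API; a minimal sketch (the sample string is arbitrary):

from transformers import AutoTokenizer

# Loads the default Gemma 3 27B IT tokenizer that ships with this model.
tokenizer = AutoTokenizer.from_pretrained("aisingapore/Gemma-SEA-LION-v4-27B-VL")

# Inspect how a mixed-language string is split into tokens.
ids = tokenizer("Selamat pagi! ສະບາຍດີ")["input_ids"]
print(len(ids), tokenizer.convert_ids_to_tokens(ids))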

  • Developed by: SEACrowd and Products Pillar, AI Singapore
  • Funded by: Singapore NRF
  • Shared by: SEACrowd and Products Pillar, AI Singapore
  • Model type: Decoder
  • Context length: 128k tokens
  • Language(s) (NLP): Burmese, English, Indonesian, Khmer, Lao, Malay, Mandarin, Tagalog, Tamil, Thai and Vietnamese
  • License: Gemma Terms of Use
  • Finetuned from model: Gemma-SEA-LION-v4-27B-IT

As of 15 October 2025, Gemma-SEA-LION-v4-27B-VL excels at Southeast Asian (SEA) tasks when compared to other open models with fewer than 200 billion parameters and demonstrates performance comparable to that of larger and top closed models. For detailed rankings, please refer to the leaderboard.

Uses

Out-of-Scope Use

The model has not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.

Bias, Risks, and Limitations

The model has not been tested for robustness against adversarial prompting, and users should be aware of its limitations. Like many LLMs, it can hallucinate, occasionally generating irrelevant content or fictional elements that are not grounded in the provided context. Users should therefore exercise caution when interpreting and validating the model's responses, as outputs may be inconsistent.

Limitations

In terms of text capability, Gemma-SEA-LION-v4-27B-VL was post-trained exclusively on vision-text data. As a result, its text-only capabilities are expected to be comparable to those of Gemma-SEA-LION-v4-27B-IT, without significant improvements or differences in this area.

How to Get Started with the Model

Use the code below to get started with the model using the 🤗 Transformers library.

from transformers import pipeline
import torch

# Load the vision-text pipeline in bfloat16 on a CUDA device.
pipe = pipeline(
    "image-text-to-text",
    model="aisingapore/Gemma-SEA-LION-v4-27B-VL",
    device="cuda",
    torch_dtype=torch.bfloat16
)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    }
]

# Generate a reply and print the assistant's final message.
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
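For explicit control over generation (sampling settings, batching, multi-GPU placement), the same messages list can be run without the pipeline helper. A minimal sketch using the standard processor and model classes:

from transformers import AutoProcessor, AutoModelForImageTextToText
import torch

model_id = "aisingapore/Gemma-SEA-LION-v4-27B-VL"

# Load the model across available GPUs in bfloat16, plus its processor.
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Tokenize the chat (including the image) with the model's chat template.
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

with torch.inference_mode():
    generated = model.generate(**inputs, max_new_tokens=200)

# Decode only the newly generated tokens.
print(processor.decode(generated[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))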

Training Details

Training Data

The dataset comprises vision-text pairs in Burmese, English, Indonesian, Khmer, Lao, Malay, Mandarin, Tagalog, Tamil, Thai and Vietnamese, collected from a mixture of sources including web data, code, and open-source datasets.

Training Procedure

Training Hyperparameters

  • Training regime: We performed SFT on approximately 540k vision-text samples in the languages listed above, then performed model merging with Gemma3-27B-IT to preserve general vision-text knowledge (see the sketch below).
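The exact merging recipe is not specified here. As an illustration only, the simplest form of model merging linearly interpolates parameter tensors between two checkpoints with identical architectures; the alpha value below is a hypothetical choice, not the one used for this model:

import torch

def linear_merge(state_dict_a, state_dict_b, alpha=0.5):
    # Interpolate matching tensors: alpha=1.0 keeps model A, alpha=0.0 keeps model B.
    return {
        name: alpha * tensor_a + (1.0 - alpha) * state_dict_b[name]
        for name, tensor_a in state_dict_a.items()
    }

# Hypothetical usage with two loaded nn.Modules of identical architecture,
# e.g. the SFT checkpoint and Gemma3-27B-IT:
# merged = linear_merge(sft_model.state_dict(), base_model.state_dict(), alpha=0.5)
# sft_model.load_state_dict(merged)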

Evaluation

Testing Data

To assess its cultural and visual understanding, we evaluated the model on two types of tasks using datasets focused on Southeast Asian examples:

  • Visual Question Answering (VQA): We used Multiple Choice Question (MCQ) style VQA tasks, including MaRVL, CVQA, and WorldCuisines.
  • Image Captioning: We used the XM3600 dataset, evaluating only on examples relevant to SEA.

Metrics

The following metrics were used to measure performance:

  • Normalized accuracy was the primary metric for the VQA tasks (CVQA, MaRVL, and WorldCuisines); see the sketch after this list.
  • RefCLIP Score was used for the XM3600 image captioning task.
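The exact normalization used is not stated in this card; a common convention for MCQ tasks rescales raw accuracy so that random guessing maps to 0 and perfect accuracy to 1. A minimal sketch under that assumption:

def normalized_accuracy(correct: int, total: int, num_choices: int) -> float:
    # Chance-adjusted accuracy: 0.0 at the random-guessing baseline, 1.0 when perfect.
    raw = correct / total
    chance = 1.0 / num_choices
    return (raw - chance) / (1.0 - chance)

# Example: 150 of 200 four-way MCQs correct -> raw 0.75, normalized ~0.667.
print(normalized_accuracy(150, 200, 4))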

Results

For details on Gemma-SEA-LION-v4-27B-VL performance, please refer to the SEA-LION.ai blog post, "SEA-LION v4 VL new members".

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019); a sketch of the underlying arithmetic follows the list below.

  • Hardware Type: Nvidia H200 140GB GPUs
  • Hours used: 13 hrs
  • Cloud Provider: SMC H200
  • Compute Region: Singapore
  • Carbon Emitted: approx. 27 kg CO2e
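Calculator-style estimates of this kind reduce to simple arithmetic: GPU energy drawn (kWh) multiplied by the grid's carbon intensity. The sketch below uses illustrative numbers; the GPU count, TDP, PUE, and grid intensity are assumptions, not figures from this card:

def estimate_co2_kg(gpu_count, gpu_tdp_kw, hours, pue, grid_kg_per_kwh):
    # Energy drawn by the GPUs, scaled by datacentre overhead (PUE),
    # times the grid's carbon intensity.
    return gpu_count * gpu_tdp_kw * hours * pue * grid_kg_per_kwh

# Illustrative only: 8x H200 at 0.7 kW for 13 h, PUE 1.2, ~0.4 kg CO2e/kWh.
print(estimate_co2_kg(8, 0.7, 13, 1.2, 0.4))  # ~35 kg CO2e, same order as reported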

More Information

This is the repository for the commercial instruction-tuned model. The model has not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.

AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.

For more information, please contact us at [email protected], [email protected].

Contact

[email protected], [email protected]

Team

SEACrowd team members, and AI Singapore: Ahn Jeongmi, Antonyrex Sajeban, Chan Hok Teng Adwin, Cheng Zi Yi Nicholas, Choa Hsueh Mei Esther, Heng Jonathan, Huang Yuli, Hulagadri Adithya Venkatadri, Jann Railey Estrada Montalan, Kang Siow Wei Bryan, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Muhammad Ridzuan Bin Mokhtar, Nagarajan Karthik, Ng Boon Cheong Raymond, Ngee Chia Tai, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Jin Jie Brandon, Ong Tat-Wee David, Ong Zhi Hao, Pereira Mark, Rengarajan Hamsawardhini, Susanto Yosephine, Sutaveephamochanon Anocha, Tan Choon Meng, Tan Chor Phin Evelyn, Tan Siao Wei Jessica, Tan Yixian, Tee Jun Yun, Teng Kok Wai Walter, Teo Eng Sipp Leslie, Tjhi William, Yeo Yeow Tong, Yong Xianbin, Zhang Zhou, Liew Rachel, Liu Bing Jie Darius
