SAnocha committed
Commit 93ce1dc · verified · 1 Parent(s): 796003d

Update README

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -31,8 +31,8 @@ Last updated: 2025-08-22
 **SEA-LION** is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned
 for the Southeast Asia (SEA) region.
 
-Gemma-SEA-LION-v4-27B has undergone post-training using a QA pairs dataset in Bahasa Indonesia,
-Burmese, Chinese, English, Khmer, Lao, Malay, Tagalog, Tamil, Thai, and Vietnamese, comprising approximately 10M samples in total, to create *Gemma-SEA-LION-v4-27B-IT*.
+Gemma-SEA-LION-v4-27B has undergone post-training using a QA pairs dataset in Bahasa Indonesia, Burmese, English,
+Khmer, Lao, Malay, Mandarin, Tagalog, Tamil, Thai and Vietnamese, comprising approximately 10M samples in total, to create *Gemma-SEA-LION-v4-27B-IT*.
 
 Gemma-SEA-LION-v4-27B-IT inherits Gemma 3's:
 
@@ -61,7 +61,7 @@ For tokenization, the model employs the default tokenizer used in Gemma 3 27B IT
 - **Shared by:** Products Pillar, AI Singapore
 - **Model type:** Decoder
 - **Context length:** 128k tokens
-- **Language(s) (NLP):** Bahasa Indonesia, Burmese, Chinese, English, Khmer, Lao, Malay, Tagalog, Tamil, Thai and Vietnamese
+- **Language(s) (NLP):** Bahasa Indonesia, Burmese, English, Khmer, Lao, Malay, Mandarin, Tagalog, Tamil, Thai and Vietnamese
 - **License:** [Gemma Terms of Use](https://ai.google.dev/gemma/terms)
 - **Finetuned from model:** Gemma-SEA-LION-v4-27B
 
@@ -281,7 +281,7 @@ Kang Siow Wei Bryan, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limk
 Ngee Chia Tai, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Jin Jie Brandon, Ong Tat-Wee David, Ong Zhi Hao, Pereira Mark, Rengarajan Hamsawardhini, Susanto Yosephine,
 Sutaveephamochanon Anocha, Tan Choon Meng, Tan Chor Phin Evelyn, Tan Siao Wei Jessica, Teng Kok Wai Walter, Teo Eng Sipp Leslie, Tjhi William, Yeo Yeow Tong, Yong Xianbin,
 Liew Rachel, Liu Bing Jie Darius, Teo Wei Yi, Zhou Lin (NCS), Gopalakrishnan Roshan (NCS), Anda Cuahtemoc (NCS), Sri Devi Wijaya (NCS), Nandi Partha (NCS),
-Elliott Chris, Mohseni Mohammadreza, Sharan Mayank, Wei Fanny, Tang Jiuqiang, Xu Xiang, Yu Ting, Loh Michelle, Mangal Saurabh, Mukherjee Pratyusha, Sim Stephanie
+Elliott Chris (Google), Mohseni Mohammadreza (Google), Sharan Mayank (Google), Wei Fanny (Google), Tang Jiuqiang (Google), Xu Xiang (Google), Yu Ting (Google), Loh Michelle (Google), Mangal Saurabh (Google), Mukherjee Pratyusha (Google), Sim Stephanie (Google)
 
 
 ## Contact
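
The README described in this diff notes that Gemma-SEA-LION-v4-27B-IT is a decoder model with a 128k-token context and the default Gemma 3 27B IT tokenizer. For orientation only, here is a minimal sketch of loading and prompting the instruct-tuned model with Hugging Face `transformers`; the repository id `aisingapore/Gemma-SEA-LION-v4-27B-IT`, the device/dtype settings, and the example prompt are assumptions and are not taken from the commit.

```python
# Minimal usage sketch (assumptions noted above, not part of the README diff).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/Gemma-SEA-LION-v4-27B-IT"  # assumed Hugging Face repo id

# The README states the model uses the default Gemma 3 27B IT tokenizer,
# so the standard Auto* classes are expected to resolve both tokenizer and weights.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",      # assumes `accelerate` is installed
    torch_dtype="auto",
)

# Build a chat-formatted prompt in one of the supported SEA languages.
messages = [
    {"role": "user", "content": "Terjemahkan ke Bahasa Indonesia: Hello, how are you?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```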