---
license: other
license_name: qwen
language:
- th
- en
tags:
- openthaigpt
- qwen
---
# 🇹🇭 OpenThaiGPT 14b 1.5 Instruct
**OpenThaiGPT 14b Version 1.5** is an advanced 14-billion-parameter Thai language chat model based on Qwen v2.5, released on October 13, 2024. It has been specifically fine-tuned on over 2,000,000 Thai instruction pairs and is capable of answering Thai-specific domain questions.
**Online Demo**: https://demo72b.aieat.or.th/

**Example code for API calling**: https://github.com/OpenThaiGPT/openthaigpt1.5_api_examples

## Highlights
- State-of-the-art Thai language LLM, achieving the highest average scores across various Thai language exams compared to other open-source Thai LLMs.
- Multi-turn conversation support for extended dialogues.
- Retrieval Augmented Generation (RAG) compatibility for enhanced response generation.
- Impressive context handling: Processes up to 131,072 tokens of input and generates up to 8,192 tokens, enabling detailed and complex interactions.
- Tool calling support: Enables users to efficiently call various functions through intelligent responses.
## Benchmark on OpenThaiGPT Eval

Please take a look at `openthaigpt/openthaigpt1.5-14b-instruct` for this model's evaluation results.
| Exam names | scb10x/llama-3-typhoon-v1.5x-70b-instruct | Qwen/Qwen2.5-14B-Instruct | openthaigpt/openthaigpt1.5-14b | openthaigpt/openthaigpt1.5-72b |
|---|---|---|---|---|
| 01_a_level | 59.17% | 61.67% | 65.00% | 76.67% |
| 02_tgat | 46.00% | 44.00% | 50.00% | 46.00% |
| 03_tpat1 | 52.50% | 60.00% | 52.50% | 55.00% |
| 04_investment_consult | 60.00% | 76.00% | 72.00% | 72.00% |
| 05_facebook_beleble_th_200 | 87.50% | 84.50% | 87.00% | 90.00% |
| 06_xcopa_th_200 | 84.50% | 85.00% | 86.50% | 90.50% |
| 07_xnli2.0_th_200 | 62.50% | 69.50% | 64.50% | 70.50% |
| 08_onet_m3_thai | 76.00% | 76.00% | 84.00% | 84.00% |
| 09_onet_m3_social | 95.00% | 90.00% | 90.00% | 95.00% |
| 10_onet_m3_math | 43.75% | 43.75% | 12.50% | 37.50% |
| 11_onet_m3_science | 53.85% | 50.00% | 53.85% | 73.08% |
| 12_onet_m3_english | 93.33% | 93.33% | 93.33% | 96.67% |
| 13_onet_m6_thai | 55.38% | 52.31% | 56.92% | 56.92% |
| 14_onet_m6_math | 41.18% | 23.53% | 41.18% | 41.18% |
| 15_onet_m6_social | 67.27% | 60.00% | 61.82% | 65.45% |
| 16_onet_m6_science | 50.00% | 50.00% | 57.14% | 67.86% |
| 17_onet_m6_english | 73.08% | 82.69% | 78.85% | 90.38% |
| **Micro Average** | 69.97% | 71.00% | 71.51% | 76.73% |
Thai-language multiple-choice exams, tested on an unseen test set with zero-shot learning. Benchmark source code and exam information: https://github.com/OpenThaiGPT/openthaigpt_eval
(Updated on: 13 October 2024)
## Benchmark on scb10x/thai_exam

| Models | Thai Exam (Acc) |
|---|---|
| api/claude-3-5-sonnet-20240620 | 69.2 |
| openthaigpt/openthaigpt1.5-72b-instruct\* | 64.07 |
| api/gpt-4o-2024-05-13 | 63.89 |
| hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4 | 63.54 |
| openthaigpt/openthaigpt1.5-14b-instruct\* | 59.65 |
| scb10x/llama-3-typhoon-v1.5x-70b-instruct | 58.76 |
| Qwen/Qwen2-72B-Instruct | 58.23 |
| meta-llama/Meta-Llama-3.1-70B-Instruct | 58.23 |
| Qwen/Qwen2.5-14B-Instruct | 57.35 |
| api/gpt-4o-mini-2024-07-18 | 54.51 |
| openthaigpt/openthaigpt1.5-7b-instruct\* | 52.04 |
| SeaLLMs/SeaLLMs-v3-7B-Chat | 51.33 |
| openthaigpt/openthaigpt-1.0.0-70b-chat | 50.09 |

\* Evaluated by the OpenThaiGPT team using scb10x/thai_exam.
(Updated on: 13 October 2024)
## Licenses

- Built with Qwen
- Qwen License: allows research and commercial use, but if your user base exceeds 100 million monthly active users, you must negotiate a separate commercial license. Please see the LICENSE file for more information.
## Support
- Official website: https://openthaigpt.aieat.or.th
- Facebook page: https://web.facebook.com/groups/openthaigpt
- Discord server for discussion and support
- E-mail: [email protected]
## Prompt Format

The prompt format is based on ChatML.

```
<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n
```

System prompt (Thai for "You are an intelligent and honest question-answering assistant"):

```
คุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์
```
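For programmatic use, a small helper along these lines can assemble the ChatML string from a list of chat messages; this is an illustrative sketch, not part of the official examples:

```python
# Hypothetical helper: build the ChatML prompt string shown above
# from a list of chat messages (roles: system / user / assistant).
def build_chatml_prompt(messages):
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # The trailing assistant header asks the model to generate the next turn.
    return prompt + "<|im_start|>assistant\n"

messages = [
    {"role": "system", "content": "คุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์"},
    {"role": "user", "content": "สวัสดีครับ"},
]
print(build_chatml_prompt(messages))
```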
## Examples
#### Single Turn Conversation Example
```
<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nสวัสดีครับ<|im_end|>\n<|im_start|>assistant\n
```
#### Single Turn Conversation with Context (RAG) Example
```
<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nกรุงเทพมหานคร เป็นเมืองหลวง นครและมหานครที่มีประชากรมากที่สุดของประเทศไทย กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 8 ล้านคน\nกรุงเทพมหานครมีพื้นที่เท่าไร่<|im_end|>\n<|im_start|>assistant\n
```
#### Multi Turn Conversation Example
##### First turn
```
<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nสวัสดีครับ<|im_end|>\n<|im_start|>assistant\n
```
##### Second turn
```
<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nสวัสดีครับ<|im_end|>\n<|im_start|>assistant\nสวัสดีครับ ยินดีต้อนรับครับ คุณต้องการให้ฉันช่วยอะไรครับ?<|im_end|>\n<|im_start|>user\nกรุงเทพมหานคร ชื่อเต็มยาวๆคืออะไร<|im_end|>\n<|im_start|>assistant\n
```
##### Result
```
<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nสวัสดีครับ<|im_end|>\n<|im_start|>assistant\nสวัสดีครับ ยินดีต้อนรับครับ คุณต้องการให้ฉันช่วยอะไรครับ?<|im_end|>\n<|im_start|>user\nกรุงเทพมหานคร ชื่อเต็มยาวๆคืออะไร<|im_end|>\n<|im_start|>assistant\nชื่อเต็มของกรุงเทพมหานครคือ "กรุงเทพมหานคร อมรรัตนโกสินทร์ มหินทรายุธยา มหาดิลกภพ นพรัตนราชธานีบูรีรมย์ อุดมราชนิเวศน์มหาสถาน อมรพิมานอวตารสถิต สักกะทัตติยวิษณุกรรมประสิทธิ์"
```
## How to use

### Free API Service (hosted by Siam.AI and Float16.cloud)
#### Siam.AI
```bash
curl https://api.aieat.or.th/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer dummy" \
  -d '{
    "model": ".",
    "prompt": "<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nกรุงเทพมหานครคืออะไร<|im_end|>\n<|im_start|>assistant\n",
    "max_tokens": 512,
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 40,
    "stop": ["<|im_end|>"]
  }'
```
#### Float16

```bash
curl -X POST https://api.float16.cloud/dedicate/78y8fJLuzE/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer float16-AG0F8yNce5s1DiXm1ujcNrTaZquEdaikLwhZBRhyZQNeS7Dv0X" \
  -d '{
    "model": "openthaigpt/openthaigpt1.5-7b-instruct",
    "messages": [
      {
        "role": "system",
        "content": "คุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์"
      },
      {
        "role": "user",
        "content": "สวัสดี"
      }
    ]
  }'
```
### OpenAI Client Library (hosted by vLLM; see the vLLM section below)

```python
import openai

# Configure the OpenAI client to use a local vLLM server
openai.api_base = "http://127.0.0.1:8000/v1"
openai.api_key = "dummy"  # vLLM doesn't require a real API key

prompt = "<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nกรุงเทพมหานครคืออะไร<|im_end|>\n<|im_start|>assistant\n"

try:
    response = openai.Completion.create(
        model=".",  # specify the model you're serving with vLLM
        prompt=prompt,
        max_tokens=512,
        temperature=0.7,
        top_p=0.8,
        top_k=40,
        stop=["<|im_end|>"]
    )
    print("Generated Text:", response.choices[0].text)
except Exception as e:
    print("Error:", str(e))
```
### Hugging Face

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "openthaigpt/openthaigpt1.5-14b-instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "ประเทศไทยคืออะไร"
messages = [
    {"role": "system", "content": "คุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated answer remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### vLLM

- Install vLLM (https://github.com/vllm-project/vllm)
- Run the server:

  ```bash
  vllm serve openthaigpt/openthaigpt1.5-14b-instruct --tensor-parallel-size 4
  ```

  Note: change `--tensor-parallel-size 4` to the number of available GPU cards.

  If you wish to enable the tool calling feature, add `--enable-auto-tool-choice --tool-call-parser hermes` to the command, e.g.,

  ```bash
  vllm serve openthaigpt/openthaigpt1.5-14b-instruct --tensor-parallel-size 4 --enable-auto-tool-choice --tool-call-parser hermes
  ```

- Run inference (cURL example):
  ```bash
  curl -X POST 'http://127.0.0.1:8000/v1/completions' \
    -H 'Content-Type: application/json' \
    -d '{
      "model": ".",
      "prompt": "<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nสวัสดีครับ<|im_end|>\n<|im_start|>assistant\n",
      "max_tokens": 512,
      "temperature": 0.7,
      "top_p": 0.8,
      "top_k": 40,
      "stop": ["<|im_end|>"]
    }'
  ```
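vLLM also exposes an OpenAI-compatible `/v1/chat/completions` route, which applies the model's chat template server-side so you don't have to hand-build the ChatML string. A minimal sketch with the `requests` library; the model name and sampling values are assumptions mirroring the example above:

```python
import requests

payload = {
    "model": ".",
    "messages": [
        {"role": "system", "content": "คุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์"},
        {"role": "user", "content": "สวัสดีครับ"},
    ],
    "max_tokens": 512,
    "temperature": 0.7,
}
# The server renders these messages with the ChatML template before generating.
r = requests.post("http://127.0.0.1:8000/v1/chat/completions", json=payload, timeout=120)
r.raise_for_status()
print(r.json()["choices"][0]["message"]["content"])
```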
## Processing Long Texts

The current `config.json` is set for a context length of up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize YaRN, a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.

For supported frameworks, you can add the following to `config.json` to enable YaRN:

```json
{
  ...,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```
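If you prefer to apply this programmatically, a small script can patch a local copy of the config; the file path here is an assumption for illustration:

```python
import json

# Hypothetical local path to a downloaded copy of the model.
config_path = "openthaigpt1.5-14b-instruct/config.json"

with open(config_path) as f:
    config = json.load(f)

# Add the YaRN rope-scaling block exactly as shown above.
config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```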
## Tool Calling

The Tool Calling feature in OpenThaiGPT 1.5 enables users to efficiently call various functions through intelligent responses. This includes making external API calls to retrieve real-time data, such as current temperature information, or predicting future data simply by submitting a query. For example, a user can ask OpenThaiGPT, "What is the current temperature in San Francisco?" and the AI will execute a pre-defined function to provide an immediate response without the need for additional coding. This feature also allows for broader applications with external data sources, including the ability to call APIs for services such as weather updates, stock market information, or data from within the user's own system.

#### Example
```python
import openai

def get_temperature(location, date=None, unit="celsius"):
    """Get temperature for a location (current or specific date)."""
    if date:
        return {"temperature": 25.9, "location": location, "date": date, "unit": unit}
    return {"temperature": 26.1, "location": location, "unit": unit}

tools = [
    {
        "name": "get_temperature",
        "description": "Get temperature for a location (current or by date).",
        "parameters": {
            "location": "string", "date": "string (optional)", "unit": "enum [celsius, fahrenheit]"
        },
    }
]

messages = [{"role": "user", "content": "อุณหภูมิที่ San Francisco วันนี้และพรุ่งนี้คือเท่าไร่?"}]

# Simulated response flow using OpenThaiGPT Tool Calling
response = openai.ChatCompletion.create(
    model=".", messages=messages, tools=tools, temperature=0.7, max_tokens=512
)
print(response)
```
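In a complete round trip you would then execute the requested function and send its result back so the model can phrase the final answer. The following hedged continuation of the example above assumes an OpenAI-style `tool_calls` field; the exact response schema depends on the serving stack:

```python
import json

# Inspect the assistant's reply for tool calls (schema is an assumption).
message = response["choices"][0]["message"]
for call in message.get("tool_calls", []):
    # Run the local function with the arguments the model chose.
    args = json.loads(call["function"]["arguments"])
    result = get_temperature(**args)
    # Append the assistant's tool request and the tool result to the history.
    messages.append(message)
    messages.append({"role": "tool", "content": json.dumps(result)})

# Ask the model to produce the final natural-language answer.
final = openai.ChatCompletion.create(model=".", messages=messages)
print(final["choices"][0]["message"]["content"])
```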
Full example: https://github.com/OpenThaiGPT/openthaigpt1.5_api_examples/blob/main/api_tool_calling_powered_by_siamai.py
## GPU Memory Requirements

| Number of Parameters | FP16 | 8 bits (Quantized) | 4 bits (Quantized) | Example Graphics Card for 4 bits |
|---|---|---|---|---|
| 7b | 24 GB | 12 GB | 6 GB | Nvidia RTX 4060 8GB |
| 14b | 48 GB | 24 GB | 12 GB | Nvidia RTX 4070 16GB |
| 72b | 192 GB | 96 GB | 48 GB | Nvidia RTX 4090 24GB x 2 cards |
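As a rough illustration of the 4-bit column, the model can be loaded quantized via `bitsandbytes` in Transformers. This is a sketch, assuming a CUDA GPU and the `bitsandbytes` package are available:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "openthaigpt/openthaigpt1.5-14b-instruct"

# 4-bit NF4 quantization roughly matching the 4-bit memory figures above.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```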
## OpenThaiGPT Team
- Sumeth Yuenyong ([email protected])
- Kobkrit Viriyayudhakorn ([email protected])
- Apivadee Piyatumrong ([email protected])
- Jillaphat Jaroenkantasima ([email protected])
- Thaweewat Rugsujarit ([email protected])
- Norapat Buppodom ([email protected])
- Koravich Sangkaew ([email protected])
- Peerawat Rojratchadakorn ([email protected])
- Surapon Nonesung ([email protected])
- Chanon Utupon ([email protected])
- Sadhis Wongprayoon ([email protected])
- Nucharee Thongthungwong ([email protected])
- Chawakorn Phiantham ([email protected])
- Patteera Triamamornwooth ([email protected])
- Nattarika Juntarapaoraya ([email protected])
- Kriangkrai Saetan ([email protected])
- Pitikorn Khlaisamniang ([email protected])
## Citation

If OpenThaiGPT has been beneficial for your work, kindly consider citing it as follows:

#### BibTeX
```bibtex
@misc{yuenyong2024openthaigpt15thaicentricopen,
      title={OpenThaiGPT 1.5: A Thai-Centric Open Source Large Language Model},
      author={Sumeth Yuenyong and Kobkrit Viriyayudhakorn and Apivadee Piyatumrong and Jillaphat Jaroenkantasima},
      year={2024},
      eprint={2411.07238},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.07238},
}
```
#### APA Style (for TXT, MS Word)
Yuenyong, S., Viriyayudhakorn, K., Piyatumrong, A., & Jaroenkantasima, J. (2024). OpenThaiGPT 1.5: A Thai-Centric Open Source Large Language Model. arXiv [cs.CL]. Retrieved from http://arxiv.org/abs/2411.07238
Disclaimer: Provided responses are not guaranteed.