---
language:
- en
---
# FinGPT-MT-Llama-3-8B-LoRA-GGUF
## Original Model

LoRA adapter: [FinGPT/fingpt-mt_llama3-8b_lora](https://huggingface.co/FinGPT/fingpt-mt_llama3-8b_lora)
## Run with LlamaEdge

- LlamaEdge version: coming soon
- Context size: `8192`
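Since the exact LlamaEdge version is still marked as coming soon, the command below is a sketch of a typical `llama-api-server.wasm` invocation for a Llama-3-based GGUF model; the choice of the Q5_K_M file and the `llama-3-chat` prompt template are assumptions, not confirmed by this card:

```shell
# Hypothetical invocation: serve the model via the LlamaEdge API server.
# Adjust the GGUF filename and prompt template to match your setup.
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:FinGPT-MT-Llama-3-8B-LoRA-Q5_K_M.gguf \
  llama-api-server.wasm \
  --prompt-template llama-3-chat \
  --ctx-size 8192 \
  --model-name FinGPT-MT-Llama-3-8B-LoRA
```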
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| FinGPT-MT-Llama-3-8B-LoRA-Q2_K.gguf | Q2_K | 2 | 3.18 GB | smallest, significant quality loss - not recommended for most purposes |
| FinGPT-MT-Llama-3-8B-LoRA-Q3_K_L.gguf | Q3_K_L | 3 | 4.32 GB | small, substantial quality loss |
| FinGPT-MT-Llama-3-8B-LoRA-Q3_K_M.gguf | Q3_K_M | 3 | 4.02 GB | very small, high quality loss |
| FinGPT-MT-Llama-3-8B-LoRA-Q3_K_S.gguf | Q3_K_S | 3 | 3.66 GB | very small, high quality loss |
| FinGPT-MT-Llama-3-8B-LoRA-Q4_0.gguf | Q4_0 | 4 | 4.66 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| FinGPT-MT-Llama-3-8B-LoRA-Q4_K_M.gguf | Q4_K_M | 4 | 4.92 GB | medium, balanced quality - recommended |
| FinGPT-MT-Llama-3-8B-LoRA-Q4_K_S.gguf | Q4_K_S | 4 | 4.69 GB | small, greater quality loss |
| FinGPT-MT-Llama-3-8B-LoRA-Q5_0.gguf | Q5_0 | 5 | 5.6 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| FinGPT-MT-Llama-3-8B-LoRA-Q5_K_M.gguf | Q5_K_M | 5 | 5.73 GB | large, very low quality loss - recommended |
| FinGPT-MT-Llama-3-8B-LoRA-Q5_K_S.gguf | Q5_K_S | 5 | 5.6 GB | large, low quality loss - recommended |
| FinGPT-MT-Llama-3-8B-LoRA-Q6_K.gguf | Q6_K | 6 | 6.6 GB | very large, extremely low quality loss |
| FinGPT-MT-Llama-3-8B-LoRA-Q8_0.gguf | Q8_0 | 8 | 8.54 GB | very large, extremely low quality loss - not recommended |
| FinGPT-MT-Llama-3-8B-LoRA-f16.gguf | f16 | 16 | 16.1 GB | full precision; largest |
*Quantized with llama.cpp b3807.*
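A common rule of thumb is to pick the largest quantization whose file fits your memory budget (actual RAM use will be somewhat higher than the file size once the KV cache is allocated). A minimal helper, using the file sizes from the table above and skipping the legacy Q4_0/Q5_0 variants; the function name is illustrative, not part of any library:

```python
# File sizes in GB, taken from the quantization table above.
# Legacy quants (Q4_0, Q5_0) are omitted since the table advises against them.
QUANT_SIZES_GB = {
    "Q2_K": 3.18, "Q3_K_S": 3.66, "Q3_K_M": 4.02, "Q3_K_L": 4.32,
    "Q4_K_S": 4.69, "Q4_K_M": 4.92, "Q5_K_S": 5.6, "Q5_K_M": 5.73,
    "Q6_K": 6.6, "Q8_0": 8.54, "f16": 16.1,
}

def pick_quant(budget_gb):
    """Return the largest quantization whose file fits the memory budget.

    Note: file size understates true RAM needs (KV cache, runtime overhead),
    so leave headroom when choosing the budget.
    """
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size <= budget_gb]
    return max(fitting)[1] if fitting else None
```

For example, with roughly 8 GB free, `pick_quant(8.0)` selects Q6_K; with 5 GB it falls back to Q4_K_M.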