---
license: apache-2.0
pipeline_tag: text-generation
library_name: node-llama-cpp
tags:
  - node-llama-cpp
  - llama.cpp
  - conversational
base_model: ByteDance-Seed/Seed-OSS-36B-Instruct
quantized_by: giladgd
---

# Seed-OSS-36B-Instruct-GGUF

Static quants of ByteDance-Seed/Seed-OSS-36B-Instruct.

## Quants

| Link | URI | Quant | Size |
|------|-----|-------|------|
| GGUF | hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q2_K | Q2_K | 13.6GB |
| GGUF | hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q3_K_S | Q3_K_S | 15.9GB |
| GGUF | hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q3_K_M | Q3_K_M | 17.6GB |
| GGUF | hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q3_K_L | Q3_K_L | 19.1GB |
| GGUF | hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_0 | Q4_0 | 20.6GB |
| GGUF | hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_S | Q4_K_S | 20.7GB |
| GGUF | hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_M | Q4_K_M | 21.8GB |
| GGUF | hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q5_0 | Q5_0 | 25.0GB |
| GGUF | hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q5_K_S | Q5_K_S | 25.0GB |
| GGUF | hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q5_K_M | Q5_K_M | 25.6GB |
| GGUF | hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q6_K | Q6_K | 29.7GB |
| GGUF | hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q8_0 | Q8_0 | 38.4GB |
| GGUF | hf:giladgd/Seed-OSS-36B-Instruct-GGUF:F16 | F16 | 72.3GB |

> [!TIP]
> Download a quant using node-llama-cpp (more info):
>
> ```shell
> npx -y node-llama-cpp pull <URI>
> ```

## Usage

### Use with node-llama-cpp (recommended)

Ensure you have Node.js installed:
```shell
brew install nodejs
```

#### CLI

Chat with the model:
```shell
npx -y node-llama-cpp chat hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_M
```

#### Code

Use it in your project:
```shell
npm install node-llama-cpp
```
```typescript
import {getLlama, resolveModelFile, LlamaChatSession} from "node-llama-cpp";

const modelUri = "hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_M";

const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: await resolveModelFile(modelUri)
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

const q1 = "Hi there, how are you?";
console.log("User: " + q1);

const a1 = await session.prompt(q1);
console.log("AI: " + a1);
```

> [!TIP]
> Read the getting started guide to quickly scaffold a new node-llama-cpp project.

### Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
```shell
brew install llama.cpp
```

#### CLI

```shell
llama-cli -hf giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_M -p "The meaning to life and the universe is"
```

#### Server

```shell
llama-server -hf giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_M -c 2048
```
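Once the server is running, you can send it requests over its OpenAI-compatible HTTP API. A minimal sketch, assuming the server above is up on llama-server's default port 8080:

```shell
# Query the OpenAI-compatible chat completions endpoint of llama-server
# (assumes llama-server from the command above is running on the default port 8080)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hi there, how are you?"}
    ]
  }'
```

The response is a JSON chat-completion object; the generated text is under `choices[0].message.content`.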

## πŸ“‚ GGUF File List

| πŸ“ Filename | Quant | πŸ“¦ Size |
|-------------|-------|---------|
| Seed-OSS-36B-Instruct.F16.gguf | F16 | 67.35 GB |
| Seed-OSS-36B-Instruct.MXFP4.gguf | MXFP4 | 35.78 GB |
| Seed-OSS-36B-Instruct.Q2_K.gguf | Q2_K | 12.67 GB |
| Seed-OSS-36B-Instruct.Q3_K_L.gguf | Q3_K_L | 17.83 GB |
| Seed-OSS-36B-Instruct.Q3_K_M.gguf | Q3_K_M | 16.41 GB |
| Seed-OSS-36B-Instruct.Q3_K_S.gguf | Q3_K_S | 14.77 GB |
| Seed-OSS-36B-Instruct.Q4_0.gguf | Q4_0 (recommended) | 19.14 GB |
| Seed-OSS-36B-Instruct.Q4_K_M.gguf | Q4_K_M | 20.27 GB |
| Seed-OSS-36B-Instruct.Q4_K_S.gguf | Q4_K_S | 19.27 GB |
| Seed-OSS-36B-Instruct.Q5_0.gguf | Q5_0 | 23.26 GB |
| Seed-OSS-36B-Instruct.Q5_K_M.gguf | Q5_K_M | 23.84 GB |
| Seed-OSS-36B-Instruct.Q5_K_S.gguf | Q5_K_S | 23.26 GB |
| Seed-OSS-36B-Instruct.Q6_K.gguf | Q6_K | 27.63 GB |
| Seed-OSS-36B-Instruct.Q8_0.gguf | Q8_0 | 35.78 GB |