πŸ“‹ Model Description


base_model: X-Humanoid/Pelican1.0-VL-235B-A22B-FC
language:
  β€’ en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
  β€’ Physical AI
  β€’ embodied-ai
  β€’ multimodal-learning
  β€’ robotics

About

weighted/imatrix quants of https://huggingface.co/X-Humanoid/Pelican1.0-VL-235B-A22B-FC

For a convenient overview and download list, visit our model page for this model.

static quants are available at https://huggingface.co/mradermacher/Pelican1.0-VL-235B-A22B-FC-GGUF

This is a vision model - mmproj files (if any) will be in the static repository.

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.
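
As a rough illustration only, here is a minimal Python sketch for fetching one quant from this repository and joining split parts into a single GGUF file. The repo id, the example filename, and the `.partXofY` part naming are assumptions inferred from the links and file list in this card, not guaranteed by it, so check the actual repository contents before running anything like this.

```python
# Minimal sketch (assumptions noted inline): download one quant, which may be
# split into parts, and concatenate the parts into a single GGUF file.
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download, list_repo_files

REPO_ID = "mradermacher/Pelican1.0-VL-235B-A22B-FC-i1-GGUF"  # assumption: imatrix repo id
TARGET = "Pelican1.0-VL-235B-A22B-FC.i1-Q4_K_M.gguf"         # example filename from the list below

# Find the file itself or its split parts (part naming like "<name>.part1of2" is an assumption).
files = sorted(f for f in list_repo_files(REPO_ID) if f.startswith(TARGET))
local_parts = [Path(hf_hub_download(repo_id=REPO_ID, filename=f)) for f in files]

if len(local_parts) == 1:
    print("single file:", local_parts[0])
else:
    # Concatenate the downloaded parts byte-for-byte, in order, into one file.
    out = Path(TARGET)
    with out.open("wb") as dst:
        for part in local_parts:
            with part.open("rb") as src:
                shutil.copyfileobj(src, dst)
    print("joined", len(local_parts), "parts into", out)
```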

Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:---|:---|---:|:---|
| GGUF | imatrix | 0.6 | imatrix file (for creating your own quants) |
| GGUF | i1-IQ1_S | 48.0 | for the desperate |
| GGUF | i1-IQ1_M | 53.2 | mostly desperate |
| GGUF | i1-IQ2_XXS | 61.9 | |
| GGUF | i1-IQ2_XS | 68.9 | |
| GGUF | i1-IQ2_S | 70.3 | |
| GGUF | i1-IQ2_M | 77.3 | |
| GGUF | i1-Q2_K_S | 79.9 | very low quality |
| GGUF | i1-Q2_K | 85.8 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XXS | 90.5 | lower quality |
| GGUF | i1-IQ3_XS | 96.1 | |
| GGUF | i1-Q3_K_S | 101.5 | IQ3_XS probably better |
| GGUF | i1-IQ3_S | 101.6 | beats Q3_K* |
| GGUF | i1-IQ3_M | 103.2 | |
| GGUF | i1-Q3_K_M | 112.5 | IQ3_S probably better |
| GGUF | i1-Q3_K_L | 121.9 | IQ3_M probably better |
| GGUF | i1-IQ4_XS | 125.4 | |
| GGUF | i1-Q4_0 | 133.2 | fast, low quality |
| GGUF | i1-Q4_K_S | 133.8 | optimal size/speed/quality |
| GGUF | i1-Q4_K_M | 142.3 | fast, recommended |
| GGUF | i1-Q4_1 | 147.3 | |
| GGUF | i1-Q5_K_S | 162.0 | |
| GGUF | i1-Q5_K_M | 166.9 | |
| GGUF | i1-Q6_K | 193.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

*(image: ikawrakow's comparison graph of lower-quality quant types)*

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
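
This card does not prescribe a runtime, but as one possibility, the sketch below loads an already downloaded (and, if split, concatenated) quant with llama-cpp-python. The filename is taken from the file list in this card; the context size, GPU-offload setting, and prompt are placeholder assumptions, and the hardware needed for a quant of this size is substantial.

```python
# Minimal sketch, assuming llama-cpp-python is installed and the GGUF file is local.
from llama_cpp import Llama

llm = Llama(
    model_path="Pelican1.0-VL-235B-A22B-FC.i1-Q4_K_M.gguf",  # example file from this card
    n_ctx=4096,       # assumption: modest context window for a first test
    n_gpu_layers=-1,  # assumption: offload as many layers as possible to the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what this model is intended for."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```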

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Pelican1.0-VL-235B-A22B-FC.i1-IQ1_M.gguf
LFS
49.49 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-IQ1_S.gguf
LFS
44.66 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-IQ2_M.gguf
LFS Q2
71.85 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-IQ2_S.gguf
LFS Q2
65.4 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-IQ2_XS.gguf
LFS Q2
64.09 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-IQ2_XXS.gguf
LFS Q2
57.55 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-IQ3_M.gguf
LFS Q3
95.99 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-IQ3_S.gguf
LFS Q3
94.5 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-IQ3_XS.gguf
LFS Q3
89.36 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-IQ3_XXS.gguf
LFS Q3
84.17 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-IQ4_XS.gguf
LFS Q4
116.68 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-Q2_K.gguf
LFS Q2
79.81 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-Q2_K_S.gguf
LFS Q2
74.28 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-Q3_K_L.gguf
LFS Q3
113.46 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-Q3_K_M.gguf
LFS Q3
104.72 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-Q3_K_S.gguf
LFS Q3
94.48 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-Q4_0.gguf
Recommended LFS Q4
123.99 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-Q4_1.gguf
LFS Q4
137.12 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-Q4_K_M.gguf
LFS Q4
132.39 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-Q4_K_S.gguf
LFS Q4
124.51 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-Q5_K_M.gguf
LFS Q5
155.36 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-Q5_K_S.gguf
LFS Q5
150.76 GB Download
Pelican1.0-VL-235B-A22B-FC.i1-Q6_K.gguf
LFS Q6
179.76 GB Download
Pelican1.0-VL-235B-A22B-FC.imatrix.gguf
LFS
455.56 MB Download