---
language:
- en
- zh
tags:
- web3
- finance
- defi
- chain-of-thought
- sft
- security-audit
- on-device-ai
metrics:
- accuracy
- ponzi-detection-rate
- code-security-score
base_model:
- openai/gpt-oss-20b
---
DMind-3 GGUF Models
Model Generation Details
This model was generated using llama.cpp at commit 05fa625ea.
Quantization Beyond the IMatrix
I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the --tensor-type option in llama.cpp to manually "bump" important layers to higher precision. You can see the implementation here:
Layer bumping with llama.cpp
While this does increase model file size, it significantly improves precision for a given quantization level.
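As a rough sketch of how such a "bump" invocation might be assembled (the tensor-name patterns, the `llama-quantize` binary name, and the exact `--tensor-type` value syntax are assumptions and may differ between llama.cpp versions):

```python
import shlex

def build_bump_command(src, dst, base_type="q4_0", bumps=None):
    """Assemble a llama-quantize command that raises selected tensors
    above the base quantization type via repeated --tensor-type flags."""
    cmd = ["llama-quantize"]
    for pattern, qtype in (bumps or {}).items():
        cmd += ["--tensor-type", f"{pattern}={qtype}"]
    cmd += [src, dst, base_type]
    return cmd

# Bump attention-value and FFN-down tensors (hypothetical patterns) to Q8_0
# while quantizing everything else to Q4_0.
cmd = build_bump_command(
    "DMind-3-bf16.gguf", "DMind-3-q4_0_l.gguf",
    bumps={"attn_v": "q8_0", "ffn_down": "q8_0"},
)
print(shlex.join(cmd))
```

Building the command programmatically makes it easy to sweep different bump configurations and compare the resulting file size against perplexity.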
I'd love your feedback. Have you tried this? How does it perform for you?
Click here to get info on choosing the right GGUF model format
DMind-3: The Age of Foresight
From local logic to global foresight. In a world of isolated systems, the one who sees the whole board wins.
We have armed the individual with a shield (DMind-3-nano) and a brain (DMind-3-mini). We have enabled sovereign intelligence to defend and to reason. Yet, in the interconnected chaos of global markets, even the sharpest mind can be blindsided by a tsunami forming on the other side of the world. Local optimization is not enough. True sovereignty requires not just reaction, but pre-emption.
Web3 is a single, planet-scale financial machine. Capital flows like weather patterns, and risks cascade across protocols like lightning. To navigate this reality, one cannot merely analyze a single smart contract or a single chain. One must perceive the entire system: its flows, its pressures, its emergent properties. This requires a perspective that transcends the local, a form of intelligence that can synthesize global, cross-domain information into actionable foresight.
DMind-3 is our answer. It is not an incremental upgrade; it is a categorical leap. While nano provides intuition and mini provides logic, max delivers foresight. It is the Oracle in the cloud, the strategic command center that sees the entire battlefield. It was built not to answer questions, but to question the answers, to model the unseen, and to chart a course through the complexity of a new financial era.
DMind-3-nano is your Shield. DMind-3-mini is your Spear. DMind-3 is your Oracle.
Welcome to the Age of Foresight.
DMind-3: The Macro-Strategic Financial Engine
1. Evolution & Legacy
The DMind-3 series was conceived as a complete, multi-layered cognitive architecture for the sovereign individual. Nano secures the present transaction. Mini formulates the immediate strategy. Max defines the long-term campaign.
This final piece of the trilogy moves beyond the tactical and into the strategic. It was born from the recognition that the most significant opportunities and the most devastating risks in Web3 are systemic. They are not found in code, but in the interplay between code, capital, and human psychology at a global scale. DMind-3 is engineered to be a Macro-Strategic Financial Engine, providing institutional-grade foresight as a utility for developers, funds, and the agent ecosystems built upon the DMind stack.
2. Model Details
| Property | Value |
|---|---|
| Model Name | DMind-3 |
| Organization | DMindAI |
| Base Architecture | gpt-oss-20b (Customized Transformer w/ Multi-Scale RoPE) |
| Parameter Count | 21 Billion |
| Precision | BF16 / FP16 (Native) |
| Context Window | 256k tokens |
| Deployment | Cloud API & Private Enterprise VPC |
3. Methodology: Hierarchical Predictive Synthesis (HPS)
DMind-3 introduces Hierarchical Predictive Synthesis (HPS). While C³-SFT (used in mini) teaches the model to correct its own reasoning, HPS teaches it to synthesize multiple, conflicting, time-variant data streams into a coherent probabilistic forecast. It operates on a nested hierarchy of abstraction, from raw on-chain events to complex macroeconomic indicators.
(Figure 1: The HPS training paradigm, showing multi-level data fusion and probabilistic future state generation)
Mathematical Formalization
The HPS objective function seeks to minimize the divergence between the model's predicted distribution of future states and the actual observed outcomes, weighted by strategic importance:
$$
\mathcal{L}_{\text{HPS}}(\theta) = - \mathbb{E}_{\mathcal{D}} \left[ \sum_{t=1}^{T} \sum_{i=1}^{M} \omega_i \cdot \log P_\theta(S_{t+1} \mid S_t, A_t, M_i) \right] + \lambda \sum_{l=1}^{L} \| \Omega_l(\theta) - \Omega_l(\theta_{\text{ref}}) \|_F
$$
where:
| Symbol | Description |
|---|---|
| \\(S_t\\) | The state of the global market at time \\(t\\) |
| \\(A_t\\) | The set of all actions (transactions, governance votes) at time \\(t\\) |
| \\(M_i\\) | The \\(i\\)-th modality of data (e.g., on-chain, news, social sentiment) |
| \\(\omega_i\\) | The attention weight assigned to the strategic importance of modality \\(i\\) |
| \\(\Omega_l\\) | The parameter matrix at layer \\(l\\) of the network, regularized to prevent catastrophic forgetting |
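To make the objective concrete, here is a toy numeric evaluation of its two terms (the modality-weighted negative log-likelihood and the Frobenius drift penalty), using made-up probabilities and a single regularized layer; this illustrates the formula only, not the actual training code:

```python
import math

def hps_loss(log_probs, weights, layer_params, ref_params, lam=0.01):
    """Toy HPS objective: a modality-weighted negative log-likelihood
    plus a Frobenius-norm penalty on drift from reference parameters."""
    # First term: -sum over time steps t and modalities i of w_i * log P
    nll = -sum(weights[i] * lp
               for step in log_probs
               for i, lp in enumerate(step))
    # Second term: lambda * sum_l ||Omega_l - Omega_l_ref||_F
    drift = lam * sum(
        math.sqrt(sum((a - b) ** 2 for a, b in zip(layer, ref)))
        for layer, ref in zip(layer_params, ref_params)
    )
    return nll + drift

# Two time steps, two modalities (e.g. on-chain vs. news), toy numbers.
log_probs = [[math.log(0.9), math.log(0.6)],
             [math.log(0.8), math.log(0.7)]]
weights = [0.7, 0.3]          # omega_i
layers = [[1.0, 2.0]]         # Omega_l(theta)
ref = [[1.0, 1.9]]            # Omega_l(theta_ref)
print(round(hps_loss(log_probs, weights, layers, ref, lam=0.1), 4))
```

Note how the drift term vanishes when the layer parameters match the reference, leaving only the weighted prediction loss.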
Similar to DMind-3-mini, the model supports a dual-state inference mechanism triggered by a special token:
$$
\hat{y} =
\begin{cases}
\operatorname*{arg\,max}\limits_{y} P_\theta(y \mid x, \mathcal{C}_{\text{global}}) & \text{if } \tau = \emptyset \quad (\text{Standard Mode}) \\
\operatorname*{arg\,max}\limits_{y} P_\theta(y \mid x, \mathcal{C}_{\text{global}}, \mathcal{R}_{\text{risk}}, \mathcal{H}_{\text{hist}}) & \text{if } \tau \neq \emptyset
\end{cases}
$$
This forces the model to not just predict, but to weigh the importance of different data sources when constructing its view of the future.
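A minimal sketch of such token-gated routing follows; the trigger token name `<strategic>` and the context field names are placeholders, since the actual special token is not documented above:

```python
def infer(prompt, model, trigger="<strategic>"):
    """Route a request into standard or extended-context decoding
    depending on whether a trigger token appears in the prompt."""
    context = {"global": model["global_ctx"]}
    if trigger in prompt:
        # Extended mode: also condition on risk and history contexts.
        context["risk"] = model["risk_ctx"]
        context["history"] = model["hist_ctx"]
        prompt = prompt.replace(trigger, "").strip()
    return model["decode"](prompt, context)

# Stub "model" whose decode just echoes the prompt and context keys.
model = {
    "global_ctx": "market snapshot",
    "risk_ctx": "risk registry",
    "hist_ctx": "historical regimes",
    "decode": lambda p, c: (p, sorted(c)),
}
print(infer("<strategic> model ETH contagion", model))
```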
4. Intended Use: Institutional-Grade Web3 Intelligence
DMind-3 is designed to power the next generation of DeFi analytics, risk management platforms, and autonomous agent orchestrators.
Key Capabilities:
- Macro-Strategic Foresight: Identify emerging cross-chain narratives, predict market regime shifts, and model the impact of major economic events on crypto asset correlations.
- Automated Institutional Research: Generate deep, data-driven reports on novel protocols, perform automated tokenomics valuation, and assess long-term protocol viability.
- Systemic Risk Assessment: Model contagion risk across DeFi, detect liquidity black holes before they form, and run stress tests on entire ecosystems based on simulated market shocks.
- Agent Fleet Orchestration: Serve as the central "strategic brain" for fleets of mini and nano agents, providing high-level directives and market context.
5. The Brain, Shield & Oracle Ecosystem
The DMind-3 series is a vertically integrated stack designed for sovereign intelligence.
(Figure 2: The full DMind-3 Cognitive Architecture, from on-device reflexes to cloud-native foresight)
- The Oracle (DMind-3): Runs in the cloud. Provides macro-strategic foresight, systemic risk analysis, and orchestrates the agent fleet.
- The Brain (DMind-3-mini): Runs on your local high-performance machine. Executes complex, bespoke strategies and performs deep, focused research under the Oracle's guidance.
- The Shield (DMind-3-nano): Runs in your browser or wallet. Provides real-time, intuitive transaction security and intent recognition, acting as the final line of defense.
6. Training Data
DMind-3 was trained on a corpus of over 500,000 curated, high-signal documents and a multi-terabyte stream of structured on-chain data.
| Data Source | Proportion | Description |
|---|---|---|
| Institutional Alpha Reports | 35% | Comprehensive reports from premier crypto-native funds and TradFi institutions, deconstructed into causal models. |
| Global Macroeconomic Data | 25% | Time-series data from sources like the Federal Reserve (FRED), World Bank, and IMF, correlated with on-chain metrics. |
| Cross-Chain Indexed Data | 20% | A complete, indexed history of transactions, state changes, and logs across all major EVM chains, Solana, and Cosmos ecosystems. |
| Financial Post-Mortems & Audits | 10% | In-depth analysis of systemic failures, economic exploits, and protocol hacks, focusing on pre-mortem indicators and contagion pathways. |
| Geopolitical & Regulatory Feeds | 10% | Real-time feeds on global regulatory changes, policy proposals, and geopolitical events impacting digital asset markets. |
7. Performance Benchmarks
Evaluated on three key benchmarks: DMind Benchmark (Web3 Native Logic), FinanceQA (Financial Domain Knowledge), and AIME 2025 (Advanced Mathematical Reasoning).
(Figure 3: LLM Performance Evaluation across three benchmarks: DMind Benchmark, FinanceQA, AIME 2025)
The evaluation compares DMind-3 (21B) against top-tier frontier models (GPT-5.1, Claude Sonnet 4.5) and other efficient models. Despite its optimized size, DMind-3 demonstrates exceptional efficiency, particularly in specialized domain tasks where it outperforms significantly larger generalist models.
8. Limitations & Disclaimer
- Not a Financial Advisor (NFA): DMind-3 is a powerful analytical tool for generating insights and modeling risks. It is not a registered financial advisor. All outputs should be independently verified and are not a solicitation to trade.
- Probabilistic Nature: All forecasts are probabilistic and based on the data available up to the knowledge cutoff. The model cannot predict black swan events and is subject to the inherent unpredictability of markets.
- Knowledge Cutoff: The core model has a knowledge cutoff of June 2025. While it can process real-time data provided via the API, its foundational understanding is based on its training corpus.
If you find these models useful
Help me test my AI-Powered Quantum Network Monitor Assistant with quantum-ready security checks:
The full open-source code for the Quantum Network Monitor service is available in my GitHub repos (repos with NetworkMonitor in the name): Source Code Quantum Network Monitor. You will also find the code I use to quantize the models in GGUFModelBuilder, if you want to do it yourself.
How to test:
Choose an AI assistant type:
- TurboLLM (GPT-4.1-mini)
- HugLLM (Hugging Face open-source models)
- TestLLM (Experimental CPU-only)
What I'm Testing
I'm pushing the limits of small open-source models for AI network monitoring, specifically:
- Function calling against live network services
- How small a model can go while still handling these tasks
TestLLM (current experimental model: llama.cpp on 2 CPU threads on a Hugging Face Docker space):
- Zero-configuration setup
- 30s load time (slow inference, but no API costs). No token limit, since the cost is low.
- Help wanted! If you're into edge-device AI, let's collaborate!
Other Assistants
TurboLLM (uses gpt-4.1-mini): It performs very well, but unfortunately OpenAI charges per token, so token usage is limited. Use it for:
- Creating custom cmd processors to run .NET code on Quantum Network Monitor Agents
- Real-time network diagnostics and monitoring
- Security Audits
- Penetration testing (Nmap/Metasploit)
HugLLM (latest open-source models):
- Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
Example commands you could test:
- "Give me info on my website's SSL certificate"
- "Check if my server is using quantum safe encryption for communication"
- "Run a comprehensive security audit on my server"
- "Create a cmd processor to .. (whatever you want)" Note: you need to install a Quantum Network Monitor Agent to run the .NET code on. This is a very flexible and powerful feature. Use with caution!
Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is open source. Feel free to use whatever you find helpful.
If you appreciate the work, please consider buying me a coffee. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! ๐
GGUF File List
| Filename | Type | Size |
|---|---|---|
| DMind-3-bf16.gguf | FP16 | 38.99 GB |
| DMind-3-bf16_q8_0.gguf | Q8 | 22.02 GB |
| DMind-3-f16.gguf | FP16 | 38.99 GB |
| DMind-3-f16_q8_0.gguf | Q8 | 22.02 GB |
| DMind-3-imatrix.gguf | | 26.78 MB |
| DMind-3-iq1_m.gguf | | 11.18 GB |
| DMind-3-iq1_s.gguf | | 11.09 GB |
| DMind-3-iq2_m.gguf | Q2 | 11.18 GB |
| DMind-3-iq2_s.gguf | Q2 | 11.18 GB |
| DMind-3-iq2_xs.gguf | Q2 | 11.18 GB |
| DMind-3-iq2_xxs.gguf | Q2 | 11.09 GB |
| DMind-3-iq3_m.gguf | Q3 | 11.3 GB |
| DMind-3-iq3_s.gguf | Q3 | 11.2 GB |
| DMind-3-iq3_xs.gguf | Q3 | 11.18 GB |
| DMind-3-iq3_xxs.gguf | Q3 | 11.11 GB |
| DMind-3-iq4_nl.gguf | Q4 | 11.27 GB |
| DMind-3-iq4_xs.gguf | Q4 | 11.27 GB |
| DMind-3-q2_k_l.gguf | Q2 | 11.52 GB |
| DMind-3-q2_k_m.gguf | Q2 | 11.52 GB |
| DMind-3-q2_k_s.gguf | Q2 | 11.27 GB |
| DMind-3-q3_k_l.gguf | Q3 | 12.31 GB |
| DMind-3-q3_k_m.gguf | Q3 | 12.31 GB |
| DMind-3-q3_k_s.gguf | Q3 | 11.21 GB |
| DMind-3-q4_0.gguf (Recommended) | Q4 | 11 GB |
| DMind-3-q4_0_l.gguf | Q4 | 11.54 GB |
| DMind-3-q4_1.gguf | Q4 | 12.21 GB |
| DMind-3-q4_1_l.gguf | Q4 | 12.69 GB |
| DMind-3-q4_k_l.gguf | Q4 | 14.93 GB |
| DMind-3-q4_k_m.gguf | Q4 | 14.93 GB |
| DMind-3-q4_k_s.gguf | Q4 | 13.86 GB |
| DMind-3-q5_0.gguf | Q5 | 13.43 GB |
| DMind-3-q5_0_l.gguf | Q5 | 13.84 GB |
| DMind-3-q5_1.gguf | Q5 | 14.65 GB |
| DMind-3-q5_1_l.gguf | Q5 | 14.99 GB |
| DMind-3-q5_k_l.gguf | Q5 | 15.91 GB |
| DMind-3-q5_k_m.gguf | Q5 | 15.91 GB |
| DMind-3-q5_k_s.gguf | Q5 | 14.98 GB |
| DMind-3-q6_k_l.gguf | Q6 | 20.67 GB |
| DMind-3-q6_k_m.gguf | Q6 | 20.67 GB |
| DMind-3-q8_0.gguf | Q8 | 20.73 GB |
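Given the sizes above, a small helper can pick the highest-quality quant that fits a memory budget. The quality ordering and the 1.5 GB overhead allowance are rough assumptions; real memory use depends on context length and backend:

```python
# Subset of the files above, ordered from highest to lowest quality
# (the ordering is a rough assumption; sizes are taken from the table).
QUANTS = [
    ("DMind-3-q8_0.gguf", 20.73),
    ("DMind-3-q6_k_l.gguf", 20.67),
    ("DMind-3-q5_k_m.gguf", 15.91),
    ("DMind-3-q4_k_m.gguf", 14.93),
    ("DMind-3-q4_0.gguf", 11.00),
]

def pick_quant(budget_gb, overhead_gb=1.5):
    """Return the first (highest-quality) file whose size plus a rough
    runtime overhead allowance fits within the memory budget."""
    return next(
        (name for name, size in QUANTS if size + overhead_gb <= budget_gb),
        None,
    )

print(pick_quant(16))  # e.g. a 16 GB GPU
print(pick_quant(24))  # e.g. a 24 GB GPU
```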