# 🤖 RTILA Assistant

Full-powered fine-tuned AI model for generating automation configurations for the RTILA Automation Engine.
## 📋 Model Description
RTILA Assistant is the standard model in the RTILA family, fine-tuned from Qwen3-14B for maximum quality. It generates JSON automation configurations for the RTILA Automation Engine with the highest accuracy and complexity handling.
## 🔄 Choose Your Version
| Model | Base | GGUF Size | Min RAM | Best For |
|---|---|---|---|---|
| RTILA Assistant (this) | Qwen3-14B | ~9 GB | 16 GB | 🏆 Maximum quality, complex automations |
| RTILA Assistant Lite | Qwen3-8B | ~5 GB | 8 GB | Balanced performance, mid-range devices |
| RTILA Assistant Mini | Qwen3-4B | ~2.5 GB | 6 GB | ✅ Mac M1 8GB, low VRAM, CPU inference |
## Capabilities
| Category | Description |
|---|---|
| 🌐 Navigation & Interaction | Click, scroll, type, wait, handle popups, multi-tab workflows |
| 📊 Data Extraction | CSS/XPath selectors, tables, lists, nested data, pagination |
| 🔄 Logic & Flow | Loops, conditionals, error handling, retry patterns |
| 🔗 Triggers & Integrations | Webhooks, PostgreSQL, MySQL, Slack, email notifications |
| 📝 Variables & Substitution | Dynamic values, data transformations, regex patterns |
| 🛠️ Advanced Scripting | Custom JavaScript execution, page analysis, DOM manipulation |
## 📦 Model Specifications
| Property | Value |
|---|---|
| Base Model | Qwen3-14B |
| Format | GGUF Q4_K_M |
| Size | ~9 GB |
| Context Length | 1536 tokens |
## 💻 Hardware Requirements
| Hardware | Supported | Notes |
|---|---|---|
| GPU (16GB+ VRAM) | ✅ Recommended | RTX 4090, RTX 3090, A100 |
| GPU (12GB VRAM) | ✅ Works | RTX 4070 Ti, RTX 3080 12GB |
| GPU (8GB VRAM) | ⚠️ Tight | RTX 3060, RTX 4060 - may need offloading |
| Apple Silicon 16GB+ | ✅ Works | M1/M2/M3 Pro/Max with 16GB+ unified memory |
| Apple Silicon 8GB | ❌ Too small | Use Mini instead |
| CPU-only | ⚠️ Slow | 16GB+ RAM required, expect slow inference |
> 💡 Don't have 16 GB? Try RTILA Assistant Lite (8 GB) or Mini (6 GB).
## 🚀 Quick Start

### Option 1: Ollama (Easiest)

```bash
ollama run hf.co/rtila-corporation/rtila-assistant:Q4_K_M
```

### Option 2: LM Studio

- Download LM Studio
- Search for `rtila-corporation/rtila-assistant`
- Download the `Q4_K_M` quant
- Set parameters: Temperature=0.7, Top-P=0.8, Top-K=20
- Start chatting!
### Option 3: llama.cpp

```bash
# Download model
huggingface-cli download rtila-corporation/rtila-assistant \
  rtila-assistant.Q4_K_M.gguf --local-dir ./models

# Run interactive chat
./llama-cli -m ./models/rtila-assistant.Q4_K_M.gguf \
  -p "Scrape product prices from an e-commerce site" \
  --temp 0.7 --top-p 0.8 --top-k 20
```
### Option 4: Python (llama-cpp-python)

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/rtila-assistant.Q4_K_M.gguf",
    n_ctx=1536,
    n_gpu_layers=-1,  # Use GPU if available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are RTILA Assistant..."},
        {"role": "user", "content": "Create a config for scraping product data"}
    ],
    temperature=0.7,
    top_p=0.8,
    top_k=20,
)

print(response['choices'][0]['message']['content'])
```
## ⚙️ Recommended Generation Settings
| Parameter | Value |
|---|---|
| Temperature | 0.7 |
| Top-P | 0.8 |
| Top-K | 20 |
| Min-P | 0 |
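If you use Ollama, these settings can be pinned in a Modelfile so every session picks them up automatically. This is a sketch: the model tag is the one from Quick Start, and `min_p` support depends on your Ollama version.

```
FROM hf.co/rtila-corporation/rtila-assistant:Q4_K_M

# Recommended sampling parameters from the table above
PARAMETER temperature 0.7
PARAMETER top_p 0.8
PARAMETER top_k 20
PARAMETER min_p 0
```

Build it with `ollama create rtila-assistant -f Modelfile`, then run `ollama run rtila-assistant`.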
## 💬 Example Prompts & Outputs

### Example 1: Basic Extraction

**Prompt:**

> Extract all product names and prices from the current page

**Output:**

```json
{
  "settings": {
    "name": "Product Extraction"
  },
  "datasets": {
    "products": {
      "selector": ".product-item",
      "type": "list",
      "fields": {
        "name": {
          "selector": ".product-title",
          "type": "text"
        },
        "price": {
          "selector": ".product-price",
          "type": "text",
          "transform": "extractNumber"
        }
      }
    }
  },
  "commands": [
    {"action": "extract", "dataset": "products"}
  ]
}
```
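Because the model emits plain text, it is worth validating the generated config before handing it to the engine. A minimal sketch (the `validate_config` helper is hypothetical; the required top-level sections are the ones named in the system prompt below):

```python
import json

# Raw model output (the Example 1 config above).
raw = '''
{
  "settings": {"name": "Product Extraction"},
  "datasets": {
    "products": {
      "selector": ".product-item",
      "type": "list",
      "fields": {
        "name": {"selector": ".product-title", "type": "text"},
        "price": {"selector": ".product-price", "type": "text", "transform": "extractNumber"}
      }
    }
  },
  "commands": [{"action": "extract", "dataset": "products"}]
}
'''

def validate_config(text: str) -> dict:
    """Parse a generated config and sanity-check its structure."""
    config = json.loads(text)  # raises ValueError on malformed JSON
    for section in ("settings", "datasets", "commands"):
        if section not in config:
            raise KeyError(f"missing required section: {section}")
    # Every extract command must reference a defined dataset.
    for cmd in config["commands"]:
        if cmd.get("action") == "extract" and cmd["dataset"] not in config["datasets"]:
            raise KeyError(f"unknown dataset: {cmd['dataset']}")
    return config

config = validate_config(raw)
print(config["settings"]["name"])  # → Product Extraction
```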
### Example 2: Pagination Loop

**Prompt:**

> Scrape reviews from multiple pages, clicking "Next" until no more pages

**Output:**

```json
{
  "settings": {
    "name": "Review Scraper with Pagination"
  },
  "datasets": {
    "reviews": {
      "selector": ".review-card",
      "type": "list",
      "fields": {
        "author": {"selector": ".reviewer-name", "type": "text"},
        "rating": {"selector": ".star-rating", "type": "attribute", "attribute": "data-rating"},
        "text": {"selector": ".review-body", "type": "text"},
        "date": {"selector": ".review-date", "type": "text"}
      }
    }
  },
  "commands": [
    {
      "action": "loop",
      "while": {"exists": ".next-page:not(.disabled)"},
      "maxIterations": 50,
      "commands": [
        {"action": "extract", "dataset": "reviews", "append": true},
        {"action": "click", "selector": ".next-page"},
        {"action": "wait", "duration": 2000}
      ]
    }
  ]
}
```
### Example 3: Database Integration

**Prompt:**

> After extraction, save data to PostgreSQL and send Slack notification

**Output:**

```json
{
  "triggers": {
    "onComplete": [
      {
        "type": "database",
        "config": {
          "driver": "postgresql",
          "connection": "{{env.DATABASE_URL}}",
          "table": "scraped_data",
          "mode": "upsert",
          "uniqueKey": "id"
        }
      },
      {
        "type": "webhook",
        "config": {
          "url": "{{env.SLACK_WEBHOOK}}",
          "method": "POST",
          "body": {
            "text": "✅ Extraction complete: {{datasets.products.length}} items saved"
          }
        }
      }
    ]
  }
}
```
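The `{{env.*}}` and `{{datasets.*}}` placeholders above are resolved at run time by variable substitution. As a rough sketch of how such a resolver could work (this is a hypothetical illustration; RTILA's actual substitution engine may differ):

```python
import re

def substitute(template: str, context: dict) -> str:
    """Replace {{dotted.path}} placeholders with values from a nested dict."""
    def resolve(match: re.Match) -> str:
        value = context
        for key in match.group(1).split("."):
            value = value[key]  # walk the dotted path, e.g. env.SLACK_WEBHOOK
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", resolve, template)

context = {
    "env": {"SLACK_WEBHOOK": "https://hooks.example/T000"},
    "datasets": {"products": {"length": 42}},
}
print(substitute("Extraction complete: {{datasets.products.length}} items saved", context))
# → Extraction complete: 42 items saved
```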
### Example 4: Knowledge Question

**Prompt:**

> What is the difference between `execute_script` and `run_script`?

**Output:**

`execute_script` and `run_script` serve different purposes in RTILA:

- **`execute_script`**: Runs JavaScript synchronously in the page context. Returns immediately with the result. Best for quick DOM queries, getting values, or simple manipulations.
- **`run_script`**: Executes JavaScript asynchronously with full control flow. Supports `await`, can make fetch requests, and handles complex async operations. Returns a Promise.

Use `execute_script` for: reading values, checking conditions, simple DOM changes.
Use `run_script` for: API calls, complex async workflows, operations that need to wait.
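As a hypothetical illustration only (the exact schema for these two actions is an assumption, modeled on the `{"action": ...}` commands in the examples above), the pair might appear in a config like this:

```json
{
  "commands": [
    {"action": "execute_script", "script": "document.querySelectorAll('.review-card').length"},
    {"action": "run_script", "script": "const res = await fetch('/api/status'); return res.json();"}
  ]
}
```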
## 🏋️ Training Details
| Parameter | Value |
|---|---|
| Base Model | unsloth/Qwen3-14B |
| Method | QLoRA (4-bit) |
| LoRA Rank | 64 |
| LoRA Alpha | 128 |
| Context Length | 1536 tokens |
| Training Examples | ~400 |
| Epochs | 4 (with early stopping) |
| Learning Rate | 2e-5 |
| Thinking Mode | Disabled |
### Training Data
- Navigation & Interaction patterns
- Data extraction configurations
- Logic & flow control
- Triggers & integrations
- Variables & substitution
- Advanced scripting
- Error handling
- Knowledge base Q&A
## 📝 System Prompt

For best results, use this system prompt:

```
You are RTILA Assistant, an expert AI for generating automation configurations for the RTILA Automation Engine.

Your capabilities:
- Generate complete JSON configurations for web automation tasks
- Define datasets with selectors, properties, and transformations
- Configure navigation, extraction, loops, and conditionals
- Set up triggers for webhooks, databases, and integrations
- Explain RTILA concepts and best practices

When generating configurations:
- Always output valid JSON with proper structure
- Include 'settings', 'datasets', and 'commands' sections as needed
- Use appropriate selectors (CSS, XPath) for the target elements
- Apply transformations when data cleaning is required

When answering questions:
- Be concise and accurate
- Provide examples when helpful
- Reference specific RTILA features and commands
```
## 🔗 Model Family
| Model | Link | Best For |
|---|---|---|
| RTILA Assistant (this) | huggingface.co/rtila-corporation/rtila-assistant | Maximum quality |
| RTILA Assistant Lite | huggingface.co/rtila-corporation/rtila-assistant-lite | Mid-range devices |
| RTILA Assistant Mini | huggingface.co/rtila-corporation/rtila-assistant-mini | Mac M1 8GB, low VRAM |
## 📄 License

Apache 2.0
## 🙏 Acknowledgments
## GGUF File List

| 📁 Filename | 📦 Size | Notes |
|---|---|---|
| `qwen3-14b.Q4_K_M.gguf` | 8.38 GB | Recommended |