SoulAI LLM
The SoulAI LLM is not just another large language model. It is a finance- and agent-optimized brain, fine-tuned to reason, decide, and act autonomously within decentralized systems. Designed from the ground up for crypto logic, smart contract interaction, and workflow execution, SoulAI's LLM is the central nervous system of our autonomous agent ecosystem.
SoulAI’s LLM is a fine-tuned variant of Meta’s LLaMA 3.1–8B, trained on domain-specific datasets curated exclusively for intelligent financial reasoning, DeFi mechanics, agent workflows, and smart contract logic. While other models focus on conversation, SoulAI’s LLM is purpose-built to think, plan, and initiate action across blockchain environments.
Base Model: Meta LLaMA 3.1–8B
Tuning Method: LoRA / PEFT
Platform: Hugging Face AutoTrain + Manual Curation
Frameworks: Transformers, Text-Generation-Inference, PyTorch
Optimized For: Instruction-following, chain-of-thought, multi-step workflows
Model Format: BF16 (Safetensors)
Over 10,000 instruction-tuned examples were manually selected and synthesized to simulate:
Crypto trading logic
Tokenomics reasoning
Financial planning queries
Wallet/portfolio decision simulations
Smart contract analysis
Multi-agent task orchestration
API command generation
DeFi monitoring workflows
Conversational finance support
DAO governance logic
Risk profile assessments
SoulAI’s dataset was not built for general chit-chat — it was engineered to power a machine that understands crypto deeply, responds with precision, and supports programmable task logic.
Agentic Thinking: Trained to simulate how agents should reason and respond over multiple steps.
Chain of Thought (CoT): Supports decomposed reasoning for complex multi-variable questions (e.g., “What’s the optimal ETH staking strategy based on gas, yield, and volatility?”).
Structured Output: Responds with JSON, markdown, or bullet logic when needed — designed to be machine-parsable and chainable.
Autonomous Role-Switching: Can simulate sub-agent responses and route tasks to logical entities (e.g., a ContractChecker vs. a TradeAdvisor).
Real-World Context Adaptation: Designed to process and reason over real-time inputs (prices, news, wallet states, API data).
Example Use Cases:
DeFi Strategy Generator: Given a user’s wallet balance, risk tolerance, and current ETH/SOL/USDC prices, the model will return multiple DeFi yield options, ranked and structured.
Smart Contract Auditor: Parse Solidity or Vyper snippets and identify high-risk patterns, vulnerabilities, and gas inefficiencies.
Crypto Chat Assistant: Built-in conversational model with embedded knowledge of current crypto narratives, trends, and market dynamics.
Agent Memory Architect: Train and simulate agent memory states, feedback loops, and agent-to-agent collaboration using language only.
DAO Policy Writer: Create proposals, governance suggestions, and tokenomic updates for decentralized organizations.
HuggingFace Transformers — Run locally with full Python SDK
Text-Generation-Inference Server — Optimized GPU inference
RESTful API — Use with any front-end or back-end (chat, trading bots, dashboards)
Ollama WebUI / Custom UI Agents — Serve within autonomous agents using natural prompt pipelines
Agent Forge Integration — Drag-and-drop instruction chains connected directly to SoulAI LLM nodes
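As an illustration of the Text-Generation-Inference option above, a typical launch looks like the sketch below. The Docker image and flags follow standard TGI usage; the shafire/SoulAI model ID is the Hugging Face repository referenced later on this page, and the port mapping is an assumption to adapt to your setup.

```bash
# Serve SoulAI with Hugging Face Text-Generation-Inference (assumes Docker + an NVIDIA GPU)
docker run --gpus all --shm-size 1g -p 8080:80 \
  -v $PWD/data:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id shafire/SoulAI
```

Once the server is running, any REST client can call it — see the CURL example further down this page.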
Not Generic — Trained on hand-curated financial data and agent logic, not scraped noise.
Not Censored — Internal logic enables more transparent reasoning and autonomous behavior for agent control layers.
Not Passive — Designed to create outputs that are immediately usable by agents, smart contracts, and automated workflows.
Programmable Intelligence — You don’t just prompt this model. You instruct, deploy, and monetize it.
Fine-tuned 13B and 34B variants for high-res agent autonomy
On-chain deployment modules for EVM interaction
Self-reinforcement via interaction logs (learning from task outcomes)
GPU staking incentives tied to inference demand
LLM-based AI DAO: A decentralized intelligence collective powered by SoulAI logic cores
The SoulAI LLM is more than a foundation — it is the engine of a new class of intelligent, revenue-generating digital workers. It does not merely reply. It acts. It does not simply respond. It executes. This is the birth of agentic LLMs — and SoulAI leads the charge.
SoulAI LLM is designed to be fully deployable on consumer hardware or cloud GPUs, across a variety of front-ends and inference frameworks. Whether you're using a WebUI like Ollama or running fine-grained agent workflows via API, SoulAI is compatible, efficient, and ready to plug in.
Ollama is a minimalist, fast way to run LLMs locally using .gguf models.
✅ Requirements
SoulAI GGUF model (on HuggingFace or hosted link)
GPU (Recommended: 6GB+ VRAM)
🛠️ Install Ollama
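A minimal sketch using the official one-line installer for Linux and macOS (Windows users can download the installer from ollama.com):

```bash
# Install Ollama via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the install
ollama --version
```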
🧠 Pull the SoulAI Model
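Assuming a SoulAI build has been published to the Ollama registry, pull it by name. The soulai tag below is a placeholder; substitute the name announced in the project's release channels:

```bash
# Pull a published SoulAI build from the Ollama registry ("soulai" is a placeholder tag)
ollama pull soulai
```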
Or add a local .gguf model:
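If you have the GGUF weights on disk instead, register them with Ollama through a Modelfile (the file name below is a placeholder for whichever GGUF you downloaded):

```bash
# Point a Modelfile at the local GGUF weights
echo "FROM ./soulai-8b.gguf" > Modelfile

# Register the model with Ollama under the name "soulai"
ollama create soulai -f Modelfile
```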
🗨️ Start Chat
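Run the model you pulled or created above to open an interactive session:

```bash
# Start an interactive chat with the model
ollama run soulai
```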
oobabooga's Web UI supports .bin, .gguf, and HuggingFace Transformers models.
✅ Requirements
Python 3.10+
Git
GPU (8GB VRAM recommended for 8B model)
Model file: Safetensors or GGUF
🚀 Installation
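A typical source install looks like this; the project also ships one-click start scripts, so check its README for the variant matching your OS:

```bash
# Clone oobabooga's text-generation-webui and install its Python dependencies
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
```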
🔄 Add Model
Put soulai-model.safetensors in the models/SoulAI folder
Or download directly from Hugging Face: https://huggingface.co/shafire/SoulAI
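For example, with the huggingface_hub CLI (the target directory follows the WebUI's models/ convention):

```bash
# Download the SoulAI weights from Hugging Face into the WebUI's models folder
huggingface-cli download shafire/SoulAI --local-dir models/SoulAI
```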
▶️ Launch
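From the text-generation-webui directory:

```bash
# Start the web UI; it listens on port 7860 by default
python server.py
```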
Navigate to localhost:7860 in your browser to begin using SoulAI with all WebUI features (chat, memory, system prompts, etc.).
For devs, coders, and researchers who want complete control via Python.
🧪 Example Setup
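A minimal sketch with the Hugging Face Transformers SDK, assuming the shafire/SoulAI repository contains standard BF16 safetensors weights and a tokenizer (the prompt and sampling settings are illustrative):

```python
# Minimal local inference with Hugging Face Transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "shafire/SoulAI"  # Hugging Face repo referenced on this page

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # the model ships in BF16
    device_map="auto",           # spread layers across the available GPU(s)
)

prompt = "Outline a conservative ETH staking strategy given current gas costs."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```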
Use this method for backend automation, agent logic, subgraph parsing, and crypto dashboards.
For lightweight use cases or quick serverless interactions.
🔧 CURL Example
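A sketch against a locally running Text-Generation-Inference server (the URL matches the Docker example earlier on this page; a hosted REST endpoint would take the same request shape):

```bash
# Query a running Text-Generation-Inference server over REST
curl http://localhost:8080/generate \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
        "inputs": "Summarize the risks of providing liquidity to a new ETH/USDC pool.",
        "parameters": {"max_new_tokens": 200, "temperature": 0.7}
      }'
```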
Use this for rapid integrations into Telegram bots, Discord AI, trading assistants, or IVR agents.
| Model | VRAM | Format | Compatible With |
| --- | --- | --- | --- |
| SoulAI 8B | 6–8 GB | .gguf | Ollama, llama.cpp |
| SoulAI 8B | 12–16 GB | .safetensors | Transformers, oobabooga |
| SoulAI API | None (Cloud) | API | Hugging Face |
Add --trust-remote-code if loading a non-GGUF model in oobabooga
Use stream=True for async inference in Python (see the sketch after these tips)
Combine with agent frameworks like LangChain or AutoGPT for intelligent chaining
Enable memory modules in oobabooga or Ollama to simulate autonomous behaviors
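As a concrete version of the stream=True tip, here is a sketch using the huggingface_hub InferenceClient against a local TGI endpoint (the URL and prompt are illustrative):

```python
# Stream tokens from a Text-Generation-Inference endpoint as they are generated
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")  # TGI server from the CURL example

for token in client.text_generation(
    "List three on-chain signals that often precede a liquidity crunch.",
    max_new_tokens=150,
    stream=True,  # yield tokens incrementally instead of waiting for the full reply
):
    print(token, end="", flush=True)
```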
SoulAI is being adapted for:
GGUF 13B / 34B variants
Private swarm deployments
Agent-OS integrations
Custom wallets and DEX agents
Voice integration using Whisper + SoulAI combined