SoulAI LLM

🧠 SoulAI LLM — The Foundation of Agentic Intelligence

The SoulAI LLM is not just another large language model. It is a finance- and agent-optimized brain, fine-tuned to reason, decide, and act autonomously within decentralized systems. Designed from the ground up for crypto logic, smart contract interaction, and workflow execution, SoulAI's LLM is the central nervous system of our autonomous agent ecosystem.


🔍 What Is SoulAI’s LLM?

SoulAI’s LLM is a fine-tuned variant of Meta’s LLaMA 3.1–8B, trained on domain-specific datasets curated exclusively for intelligent financial reasoning, DeFi mechanics, agent workflows, and smart contract logic. While other models focus on conversation, SoulAI’s LLM is purpose-built to think, plan, and initiate action across blockchain environments.


⚙️ Architecture & Fine-Tuning

  • Base Model: Meta LLaMA 3.1–8B

  • Tuning Method: LoRA / PEFT (see the sketch after this list)

  • Platform: Hugging Face AutoTrain + Manual Curation

  • Frameworks: Transformers, Text-Generation-Inference, PyTorch

  • Optimized For: Instruction-following, chain-of-thought, multi-step workflows

  • Model Format: BF16 (Safetensors)

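For context on what a LoRA / PEFT setup of this kind looks like in code (as referenced in the list above), here is a minimal sketch using the Hugging Face peft library. The base checkpoint ID and the LoRA hyperparameters are illustrative assumptions, not SoulAI's actual training configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model: SoulAI fine-tunes Meta's LLaMA 3.1-8B; the exact checkpoint ID
# below is assumed for illustration.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    device_map="auto",
    torch_dtype="auto",
)

# Illustrative LoRA configuration: low-rank adapters on the attention
# projections. Rank, alpha, and target modules are typical defaults, not
# SoulAI's published hyperparameters.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```

Because only the adapter weights are updated, this style of tuning keeps an 8B model trainable on modest GPU hardware.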

🧬 Custom Instruction Dataset

More than 10,000 instruction-tuned examples were manually selected and synthesized to simulate:

  • Crypto trading logic

  • Tokenomics reasoning

  • Financial planning queries

  • Wallet/portfolio decision simulations

  • Smart contract analysis

  • Multi-agent task orchestration

  • API command generation

  • DeFi monitoring workflows

  • Conversational finance support

  • DAO governance logic

  • Risk profile assessments

SoulAI’s dataset was not built for general chit-chat — it was engineered to power a machine that understands crypto deeply, responds with precision, and supports programmable task logic.


💼 Key Capabilities

  • Agentic Thinking: Trained to simulate how agents should reason and respond over multiple steps.

  • Chain of Thought (CoT): Supports decomposed reasoning for complex multi-variable questions (e.g., “What’s the optimal ETH staking strategy based on gas, yield, and volatility?”).

  • Structured Output: Responds with JSON, markdown, or bullet logic when needed, designed to be machine-parsable and chainable (see the sketch after this list).

  • Autonomous Role-Switching: Can simulate sub-agent responses and route tasks to logical entities (e.g., a ContractChecker vs. a TradeAdvisor).

  • Real-World Context Adaptation: Designed to process and reason over real-time inputs (prices, news, wallet states, API data).

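To make the structured-output point above concrete, the sketch below asks the model for a JSON reply and parses it. generate_response is a hypothetical placeholder for whichever inference path you use (Transformers, Ollama, or the API options described later), and the JSON keys are an illustrative assumption rather than a fixed SoulAI schema.

```python
import json

def generate_response(prompt: str) -> str:
    """Hypothetical helper: send the prompt to SoulAI through whichever
    backend you deployed (Transformers, Ollama, or the hosted API) and
    return the raw text of the reply."""
    raise NotImplementedError

# Ask explicitly for machine-parsable output so the reply can be chained
# straight into the next step of an agent workflow.
prompt = (
    "A wallet holds 2 ETH and 500 USDC and the user has a medium risk tolerance. "
    "Suggest up to three DeFi yield options. Reply with JSON only: a list of "
    'objects with the keys "strategy", "expected_apy", and "risk_level".'
)

options = json.loads(generate_response(prompt))
for opt in options:
    print(opt["strategy"], opt["expected_apy"], opt["risk_level"])
```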

🌐 SoulAI LLM in Action

Example Use Cases:

  1. DeFi Strategy Generator: Given a user’s wallet balance, risk tolerance, and current ETH/SOL/USDC prices, the model will return multiple DeFi yield options, ranked and structured.

  2. Smart Contract Auditor: Parse Solidity or Vyper snippets and identify high-risk patterns, vulnerabilities, and gas inefficiencies.

  3. Crypto Chat Assistant: Built-in conversational model with embedded knowledge of current crypto narratives, trends, and market dynamics.

  4. Agent Memory Architect: Train and simulate agent memory states, feedback loops, and agent-to-agent collaboration using language only.

  5. DAO Policy Writer: Create proposals, governance suggestions, and tokenomic updates for decentralized organizations.


🔗 Integrations & Deployment Options

  • HuggingFace Transformers — Run locally with full Python SDK

  • Text-Generation-Inference Server — Optimized GPU inference

  • RESTful API — Use with any front-end or back-end (chat, trading bots, dashboards)

  • Ollama WebUI / Custom UI Agents — Serve within autonomous agents using natural prompt pipelines

  • Agent Forge Integration — Drag-and-drop instruction chains connected directly to SoulAI LLM nodes


🧠 Why SoulAI’s LLM Stands Out

  • Not Generic — Trained on hand-curated financial data and agent logic, not scraped noise.

  • Not Censored — Internal logic enables more transparent reasoning and autonomous behavior for agent control layers.

  • Not Passive — Designed to create outputs that are immediately usable by agents, smart contracts, and automated workflows.

  • Programmable Intelligence — You don’t just prompt this model. You instruct, deploy, and monetize it.


🚀 Future Roadmap

  • Fine-tuned 13B and 34B variants for high-res agent autonomy

  • On-chain deployment modules for EVM interaction

  • Self-reinforcement via interaction logs (learning from task outcomes)

  • GPU staking incentives tied to inference demand

  • LLM-based AI DAO: A decentralized intelligence collective powered by SoulAI logic cores


🧠 One Model. Infinite Agents.

The SoulAI LLM is more than a foundation — it is the engine of a new class of intelligent, revenue-generating digital workers. It does not merely reply. It acts. It does not simply respond. It executes. This is the birth of agentic LLMs — and SoulAI leads the charge.

⚙️ How to Run SoulAI — Deploy the LLM Anywhere

SoulAI LLM is designed to be fully deployable on consumer hardware or cloud GPUs, across a variety of front-ends and inference frameworks. Whether you're running it locally with Ollama or driving fine-grained agent workflows via API, SoulAI is compatible, efficient, and ready to plug in.


🖥️ Option 1: Run SoulAI with Ollama WebUI

Ollama is a minimalist, fast tool for running LLMs locally from .gguf models, with a built-in command-line chat and a local API.

✅ Requirements

  • SoulAI GGUF model (on HuggingFace or hosted link)

  • GPU (Recommended: 6GB+ VRAM)

🛠️ Install Ollama

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

🧠 Pull the SoulAI Model

```bash
ollama pull shafire/soulai-llm:latest
```

Or register a local .gguf file via a Modelfile:

```bash
echo "FROM ./soulai_model.gguf" > Modelfile   # point a Modelfile at the local GGUF weights
ollama create soulai -f Modelfile
```

🗨️ Start Chat

```bash
ollama run soulai
```

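Beyond the interactive chat, Ollama also serves a local HTTP API on port 11434, so agents and scripts can call the model programmatically. A minimal sketch, assuming the model was registered under the name soulai as above:

```python
import requests

# Ollama exposes a local REST API on port 11434 once it is running.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "soulai",                                   # name pulled/created above
        "prompt": "Explain impermanent loss in two sentences.",
        "stream": False,                                     # return the full reply at once
    },
    timeout=120,
)
print(resp.json()["response"])
```
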
🧠 Option 2: Run in oobabooga Web UI (Text Generation Web UI)

oobabooga's Text Generation Web UI supports .bin, .gguf, and Hugging Face Transformers models.

✅ Requirements

  • Python 3.10+

  • Git

  • GPU (8GB VRAM recommended for 8B model)

  • Model file: Safetensors or GGUF

🚀 Installation

```bash
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
```

🔄 Add Model

  • Put the model files (config, tokenizer, and soulai-model.safetensors) in the models/SoulAI folder

  • Or download directly from Hugging Face: https://huggingface.co/shafire/SoulAI

▶️ Launch

```bash
python server.py --model SoulAI
```

Navigate to localhost:7860 in your browser to begin using SoulAI with all WebUI features (chat, memory, system prompts, etc.).

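If you'd rather drive the Web UI from code, recent releases of text-generation-webui can also expose an OpenAI-compatible API when server.py is launched with the --api flag (port 5000 by default). The flag, port, and route below are version-dependent assumptions; check your installed version's docs. A minimal sketch:

```python
import requests

# Assumes server.py was started with: python server.py --model SoulAI --api
# The OpenAI-compatible endpoint and port 5000 are current upstream defaults
# and may differ between versions of text-generation-webui.
resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "List three risks of providing liquidity on a DEX."}
        ],
        "max_tokens": 300,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```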

🤗 Option 3: Use via Hugging Face Transformers (Python)

For devs, coders, and researchers who want complete control via Python.

🧪 Example Setup

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shafire/SoulAI"

# Load the tokenizer and model; device_map="auto" places the weights on the
# available GPU(s) and torch_dtype="auto" uses the checkpoint's native precision.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto"
).eval()

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "What’s a good DeFi strategy today?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids.to(model.device), max_new_tokens=400)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```

Use this method for backend automation, agent logic, subgraph parsing, and crypto dashboards.


🌐 Option 4: API Access via Hugging Face Inference

For lightweight use cases or quick serverless interactions.

🔧 CURL Example

```bash
curl https://api-inference.huggingface.co/models/shafire/SoulAI \
  -X POST \
  -d '{"inputs": "Summarize today’s crypto market in under 100 words."}' \
  -H "Authorization: Bearer YOUR_HF_TOKEN"
```

Use this for rapid integrations into Telegram bots, Discord AI, trading assistants, or IVR agents.

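For those integrations, the same call can be made from Python. A minimal sketch mirroring the CURL example (replace YOUR_HF_TOKEN with your Hugging Face access token):

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/shafire/SoulAI"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # your Hugging Face access token

# Serverless inference: no local GPU required.
resp = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "Summarize today's crypto market in under 100 words."},
    timeout=60,
)
print(resp.json())
```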

🧰 Hardware Recommendations

| Model Size | VRAM Needed  | Format       | Mode                    |
|------------|--------------|--------------|-------------------------|
| SoulAI 8B  | 6–8 GB       | .gguf        | Ollama, llama.cpp       |
| SoulAI 8B  | 12–16 GB     | .safetensors | Transformers, oobabooga |
| SoulAI API | None (Cloud) | API          | Hugging Face            |


🧠 Pro Tips

  • Add --trust-remote-code when loading non-GGUF (Transformers) models in oobabooga

  • Use stream=True for streaming inference in Python (see the sketch after this list)

  • Combine with agent frameworks like LangChain or AutoGPT for intelligent chaining

  • Enable memory modules in oobabooga or Ollama to simulate autonomous behaviors

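As one way to apply the stream=True tip above, here is a minimal sketch using the huggingface_hub InferenceClient, which yields tokens as they are generated instead of waiting for the full reply; the model ID is the one used throughout this page and YOUR_HF_TOKEN is a placeholder.

```python
from huggingface_hub import InferenceClient

client = InferenceClient("shafire/SoulAI", token="YOUR_HF_TOKEN")

# stream=True yields tokens as they are generated, so an agent or chat UI can
# start acting on the reply before generation finishes.
for token in client.text_generation(
    "Give a one-paragraph overview of liquid staking.",
    max_new_tokens=200,
    stream=True,
):
    print(token, end="", flush=True)
```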

🛰️ Future Compatibility

SoulAI is being adapted for:

  • GGUF 13B / 34B variants

  • Private swarm deployments

  • Agent-OS integrations

  • Custom wallets and DEX agents

  • Voice integration using Whisper + SoulAI combined

With Ollama installed (https://ollama.com), you'll have a local endpoint at localhost:11434 where you can chat with the model.