// local AI agent runtime

A local AI agent —
runs on your machine,
under your control

TARS is a local AI agent runtime that runs as a single Go binary on your machine. From the browser console you can directly inspect and control its work — agent runs, memory, scheduled jobs, Git changes, execution history.

Single binary · Local-first · Anthropic · OpenAI · Gemini · Claude Code CLI · MIT licensed
// what is tars

An AI agent
that works on your machine

The name comes from TARS, the robot in Interstellar: practical, direct, and dependable when things get complicated. TARS aims for the same.

Not an agent running somewhere in the cloud you can't see, but a local AI agent that runs on your machine and can be inspected and controlled directly. Most AI agent tools are CLI-first, or add a thin web UI on top. TARS is designed around the browser console: chat, sub-agents, scheduled jobs, memory review, Git changes, run flow, and pending approvals each get their own page.

Since the agent works with your files and tools, you should be able to see what it's doing and step in when needed — that's the starting premise. Extensions stay lean: skills load only when invoked; plugins and MCP servers are used only when explicitly allowed. The system prompt stays small, and the agent stays focused on the current task.

// the console

Where you watch
the agent work

Many local agent tools end at a CLI. TARS uses the browser console as its main interface. Open 127.0.0.1:43180/console and you get screens that actually let you inspect and control the agent — not just status pages.

/home /console

Mission Control

Pulse, Reflection, plans, runtime runs, cron jobs, disk pressure, sessions, recommended setup actions — all on one screen. See agent state and ongoing work at a glance.

/work /console/chat

Chat

Dock the panels you need: Sessions, Tasks, Health, Git Inspector, Skill Inbox, Cron, Prior Context. Branch sessions at a specific message. Get a first-turn recommendation for the model tier that fits the task.

/work /console/sessions/graph

Lineage

Conversation and work flow as a Git-log-style tree. Preview the message where each session branched. Promote insights from a branch into Memory Inbox without touching the parent.

/work /console/memory

Memory

Review what the agent wants to save as long-term memory before it is stored. Edit stored knowledge as Markdown. Compare Tool path vs Prefetch path recall.

/operate /console/agentruntime

Agent Runtime

List, tree, Gantt, and interactive Flow graph views. Replay scrubber, cost flow, file attention, Git diff timeline, checkpoint restart.

/operate /console/approvals

Approvals

Review risky cleanup plans and Git changes before they are applied. Approve or reject pending work. The Automation Audit log keeps every decision reviewable.

/operate /console/analytics

Analytics

Token use, cost per model, tool and skill call counts. Daily usage and cost flow. Daily budget chip in the header.

/work /console/extensions

Extensions

Build and sandbox-test extensions with Skill Creator and MCP Server Creator. Hub installs surface trust signals: score, last update, passing tests, install count.

Plus pages for Plans, System Prompt, Cron, Logs, Pulse, Reflection, and Settings — sidebar grouped under Home / Work / Operate / Setup.
// runtime features

Core stays small
The rest is opt-in

TARS doesn't push every feature into the system prompt at once. The base runtime stays small; the rest goes into skills and plugins.

agent_runtime

Sub-Agent Orchestration

Spawn read-only sub-agents for research and planning. Per-task model tier routing, allowlist policy, depth control. Parallel and compare modes.
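The spawn policy above can be sketched in a few lines of Go. This is a minimal illustration of the allowlist-plus-depth idea; the type and field names are hypothetical, not the actual TARS API:

```go
package main

import "fmt"

// spawnPolicy is an illustrative sketch (not TARS source): a sub-agent
// spawn is allowed only if the requested model tier is on the allowlist
// and the nesting depth stays under the cap.
type spawnPolicy struct {
	allowedTiers map[string]bool
	maxDepth     int
}

// canSpawn checks both constraints; an unknown tier fails the
// allowlist lookup automatically.
func (p spawnPolicy) canSpawn(tier string, depth int) bool {
	return p.allowedTiers[tier] && depth < p.maxDepth
}

func main() {
	p := spawnPolicy{
		allowedTiers: map[string]bool{"light": true, "standard": true},
		maxDepth:     2,
	}
	fmt.Println(p.canSpawn("light", 0)) // allowed tier, depth under cap
	fmt.Println(p.canSpawn("heavy", 0)) // tier not on the allowlist
	fmt.Println(p.canSpawn("light", 2)) // depth cap reached
}
```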

memory

Durable Memory

Markdown memory with semantic search via Gemini embeddings. Daily logs, reviewed experiences, nightly Reflection — stored on disk and auditable. Review-before-store lets you decide what gets remembered.
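The review-before-store flow boils down to an inbox that candidates must pass through before reaching long-term storage. A minimal sketch in Go (the `inbox` type and its methods are hypothetical, not TARS's real memory API):

```go
package main

import "fmt"

// inbox is an illustrative sketch of review-before-store: proposed
// memories wait in pending and reach stored only after approval.
type inbox struct {
	pending []string
	stored  []string
}

// propose queues a memory candidate for human review.
func (b *inbox) propose(note string) {
	b.pending = append(b.pending, note)
}

// review approves or drops the oldest pending candidate.
func (b *inbox) review(approve bool) {
	if len(b.pending) == 0 {
		return
	}
	note := b.pending[0]
	b.pending = b.pending[1:]
	if approve {
		b.stored = append(b.stored, note)
	}
}

func main() {
	var b inbox
	b.propose("user prefers tabs over spaces")
	b.review(true)
	fmt.Println(b.stored) // only approved notes are persisted
}
```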

pulse

Pulse Watchdog

A periodic loop that checks runtime health. Detects cron failures, stuck runs, disk pressure, Telegram errors. Calls a narrow LLM only when needed.

reflection

Nightly Reflection

Extracts experiences and memory candidates from sessions overnight. Cleans up empty sessions, refreshes memory candidates. Runs as deterministic Go without exposing LLM tools.

cron

Scheduled Jobs

30-second tick scheduler. Cron expressions and @at one-time triggers. Per-job audit history with state caps.

llm_router

3-Tier LLM Router

Three tiers — Heavy, Standard, Light. Roles bind to tiers; providers and models are managed in config. Pick lighter or stronger models depending on what the work needs.
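At its simplest, role→tier routing is a lookup with a default. A sketch in Go, assuming hypothetical role names and an illustrative mapping (the real binding lives in TARS config):

```go
package main

import "fmt"

// roleTier is an illustrative role→tier mapping; the tier names
// ("heavy", "standard", "light") mirror the docs, but the roles
// and assignments here are assumptions, not TARS's shipped config.
var roleTier = map[string]string{
	"chat":       "standard",
	"subagent":   "light",
	"reflection": "heavy",
}

// tierFor resolves a role to its model tier, falling back to
// "standard" for roles with no explicit binding.
func tierFor(role string) string {
	if t, ok := roleTier[role]; ok {
		return t
	}
	return "standard"
}

func main() {
	fmt.Println(tierFor("subagent")) // light
	fmt.Println(tierFor("unknown"))  // standard (fallback)
}
```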

extensions

Skills, Plugins, MCP

Skills are Markdown plus a runnable CLI — loaded only when invoked, so the system prompt stays small. Plugins are gated; MCP is supported as a client.

channels

Multi-Channel I/O

Beyond the browser console: Telegram bidirectional messaging, inbound webhooks, macOS Assistant popup, and a local API for scripts.

// vs others

Where TARS draws different lines

Two strong projects already exist in this space — OpenClaw and Hermes Agent. Each has its own focus. Here are the points TARS treats as important.

| Dimension | OpenClaw | Hermes Agent | TARS |
| --- | --- | --- | --- |
| Language | TypeScript | Python | Go (single binary) |
| Primary UI | CLI | CLI + API | Browser console (CLI/Telegram/webhooks too) |
| Sub-agents | ACP + subagent runtimes, Docker sandbox | ThreadPoolExecutor (max 3), ephemeral prompt | Per-task model tier, allowlist policy, depth control |
| Model routing | Per-agent model override | Per-child override, MoA (4 frontier models) | 3-tier bundles (heavy/standard/light), role→tier mapping |
| Memory | Session transcripts | Honcho/Holographic plugin hooks | Markdown + semantic + review-before-store + nightly reflection |
| Background | - | - | Pulse watchdog (1-min) + nightly reflection batch |
| Scheduling | - | - | Session-bound cron + audit logs |
| Extensibility | Built-in tools | Toolsets | Skills + companion CLIs + gated plugins/MCP |

Comparison is from the TARS perspective and intentionally simplified. Read the source for each project to form your own view.

// architecture

One binary,
separated tool surfaces

TARS runs as a single binary, but doesn't expose every tool the same way. The tools available in chat are kept separate from the tools used inside the runtime. The ops_, pulse_, and reflection_ families can't be called directly from regular chat — they are reserved for runtime-internal operations. Pulse uses a narrow Go interface and only calls the LLM when needed; Reflection is deterministic.

┌─ cmd/tars (cobra) ──────────────────────────────────────┐
│ serve · service · init · doctor · status · cron · ...   │
└──────────────────────────┬──────────────────────────────┘
                           │
            ┌──────────────▼────────────────┐
            │ tarsserver (127.0.0.1:43180)  │
            └───┬─────────┬──────────┬──────┘
                │         │          │
       ┌───────▼──┐ ┌────▼─────┐ ┌──▼─────────┐
       │  chat    │ │  pulse   │ │ reflection │
       │  agent   │ │ watchdog │ │  nightly   │
       └────┬─────┘ └────┬─────┘ └────┬───────┘
            │            │            │
       ┌────▼────────────▼────────────▼─┐
       │   memory · cron · ops · llm    │
       └────────────────────────────────┘
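The chat/runtime split above comes down to a prefix check on tool names. A minimal sketch in Go, using the tool-family prefixes named in this section (the function itself is illustrative, not TARS source):

```go
package main

import (
	"fmt"
	"strings"
)

// internalPrefixes lists the runtime-internal tool families from the
// architecture notes; these never reach the chat surface.
var internalPrefixes = []string{"ops_", "pulse_", "reflection_"}

// allowedInChat reports whether a tool may be exposed to the chat
// agent. Everything else stays reserved for runtime-internal use.
func allowedInChat(tool string) bool {
	for _, p := range internalPrefixes {
		if strings.HasPrefix(tool, p) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(allowedInChat("read_file"))   // regular chat tool
	fmt.Println(allowedInChat("ops_cleanup")) // runtime-only, blocked
}
```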
// quickstart

Get started in three steps

On first run, the setup wizard walks you through LLM provider and model tier configuration. Until an LLM is configured, the console runs in setup-only mode.

01

Install

macOS / Linux — pre-built binary with console

brew tap devlikebear/tap
brew install devlikebear/tap/tars
02

Initialize workspace

tars init
03

Start the server

Runs in the terminal until Ctrl+C.

tars serve
# console at http://127.0.0.1:43180/console