// runtime + console, on your machine

A local AI agent —
with a console worth opening.

TARS runs as a single Go binary on your machine and opens a full browser console — chat with dockable Git Inspector, agent runtime flow graphs, message-level session forks, memory inbox review, scheduled jobs, and a background watchdog. Everything that matters has a UI.

Single binary · Local-first · Anthropic · OpenAI · Gemini · Claude Code CLI · MIT licensed
// what is tars

Built to work beside a
human operator.

The name is an homage to TARS from Interstellar — practical, direct, built to work under pressure — and that character serves as a north star for the kind of local agent runtime this project wants to be.

Most agent frameworks live in the cloud, or ship as a thin CLI with maybe an HTTP API bolted on. TARS lives on your machine, owns its own memory, and treats the browser console as a first-class surface — not an afterthought. Every operating concern (chat, sub-agents, scheduling, memory review, approvals, analytics) has a real page.

Extension is intentionally lean: skills (Markdown + companion CLI) load only when invoked — not every chat turn. Plugins and MCP servers are gated, so the system prompt stays small and the agent stays focused.
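To make the skill model concrete, here is what a skill might look like as a Markdown file with a companion CLI. The file layout, frontmatter fields, and names below are illustrative assumptions, not TARS's actual schema — check the project docs for the real format:

```markdown
---
name: changelog
description: Summarize recent commits into a changelog entry.
command: ./changelog-cli
---

When the user asks for a changelog, run the companion CLI against the
current repository and format its output as a dated Markdown section.
```

Because the file is only loaded when the skill is invoked, none of this text occupies the system prompt on ordinary chat turns.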

// inside the console

The console isn't a viewer.
It's the cockpit.

Most local agent tools ship a CLI and call it done. TARS treats the browser console as a first-class surface — every page is a working tool, not a status read-out. Open 127.0.0.1:43180/console and these are some of the rooms you walk into:

/home /console

Mission Control

Pulse, Reflection, plans, runtime runs, cron jobs, disk pressure, sessions, recommended setup actions — all on one screen.

/work /console/chat

Chat

Dockable panels: Sessions, Tasks, Health, Git Inspector, Skill Inbox, Cron, Prior Context. Message-level session forks. First-turn tier recommendation.

/work /console/sessions/graph

Lineage

Git-log–style tree of root and forked sessions. Fork point previews. Promote fork insights into Memory Inbox without mutating the parent.

/work /console/memory

Memory

Review-before-store inbox for reflection candidates. Edit stored knowledge inline. Compare Tool path vs Prefetch path recall.

/operate /console/agentruntime

Agent Runtime

List, tree, Gantt, and interactive Flow graph views. Replay scrubber, cost flow, file attention, git diff timeline, checkpoint restart.

/operate /console/approvals

Approvals

Risky cleanup plans and Git mutations are gated for approval before TARS applies them. The Automation Audit log keeps every decision reviewable.

/operate /console/analytics

Analytics

Usage totals, daily token bars, model cost rows, tool/skill call counts. UTC day-bounded daily budget chip in the header.

/work /console/extensions

Extensions

Skill Creator and MCP Server Creator with sandbox tests. Hub installs surface trust signals (score, last update, passing tests, install count).

Plus pages for Plans, System Prompt, Cron, Logs, Pulse, Reflection, and Settings — sidebar grouped under Home / Work / Operate / Setup.
// runtime features

Behind the console,
a careful runtime.

TARS draws a hard line at what should be in the binary versus what should be a skill. The runtime stays small; the rest is opt-in.

agent_runtime

Sub-Agent Orchestration

Spawn read-only agents for research and planning. Per-task tier routing, allowlist policy, depth control. Parallel and compare modes.
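The spawn gate described above — depth control plus a tool allowlist — can be sketched in a few lines of Go. The `SpawnPolicy` type and its fields are illustrative assumptions, not TARS's real code:

```go
package main

import "fmt"

// SpawnPolicy is a hypothetical sketch of the sub-agent gate:
// chains may only nest so deep, and spawned agents only see
// allowlisted tools.
type SpawnPolicy struct {
	MaxDepth int             // deepest permitted sub-agent nesting
	Allowed  map[string]bool // tools a spawned agent may call
}

// Permit reports whether a sub-agent at the given depth may use a tool.
func (p SpawnPolicy) Permit(depth int, tool string) error {
	if depth > p.MaxDepth {
		return fmt.Errorf("depth %d exceeds limit %d", depth, p.MaxDepth)
	}
	if !p.Allowed[tool] {
		return fmt.Errorf("tool %q not on allowlist", tool)
	}
	return nil
}

func main() {
	policy := SpawnPolicy{MaxDepth: 2, Allowed: map[string]bool{"web_search": true}}
	fmt.Println(policy.Permit(1, "web_search")) // allowed
	fmt.Println(policy.Permit(3, "web_search")) // rejected: too deep
}
```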

memory

Durable Memory

Markdown memory with semantic search via Gemini embeddings. Daily logs, reviewed experiences, nightly reflection — all auditable on disk.
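Semantic search over Markdown memory reduces to ranking stored notes by embedding similarity. A minimal Go sketch, with toy two-dimensional vectors standing in for the real Gemini embeddings:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// note pairs a Markdown file path with its embedding vector.
type note struct {
	Path string
	Vec  []float64
}

// cosine computes cosine similarity between two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// rank returns note paths ordered by similarity to the query vector.
func rank(query []float64, notes []note) []string {
	sort.Slice(notes, func(i, j int) bool {
		return cosine(query, notes[i].Vec) > cosine(query, notes[j].Vec)
	})
	out := make([]string, len(notes))
	for i, n := range notes {
		out[i] = n.Path
	}
	return out
}

func main() {
	notes := []note{
		{"memory/deploys.md", []float64{0.9, 0.1}},
		{"memory/recipes.md", []float64{0.1, 0.9}},
	}
	// A query vector near (1, 0) recalls the deploys note first.
	fmt.Println(rank([]float64{1, 0}, notes))
}
```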

pulse

Pulse Watchdog

Runs every minute. Catches cron failures, stuck runs, disk pressure, and Telegram errors — escalates to a narrow LLM decision call only when needed.

reflection

Nightly Reflection

Between 02:00 and 05:00, TARS extracts experiences from sessions, prunes empty ones, and grows the memory store. Deterministic Go, no LLM tool surface.

cron

Scheduled Jobs

Tick-based scheduler at 30s resolution. Cron expressions plus @at one-time triggers. Per-job audit history, capped to keep state lean.

llm_router

3-Tier LLM Router

Heavy / standard / light tiers map to provider+model bundles. Roles bind to tiers, credentials live at provider level, env-var JSON overrides.
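The tier-to-bundle mapping overridable by env-var JSON might look like the fragment below. Every key and field name here is a guess for illustration — consult the TARS settings docs for the actual schema:

```json
{
  "heavy":    { "provider": "anthropic", "model": "your-heavy-model" },
  "standard": { "provider": "openai",    "model": "your-standard-model" },
  "light":    { "provider": "gemini",    "model": "your-light-model" }
}
```

Because roles bind to tiers rather than to models, swapping a bundle retargets every role that uses that tier in one place.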

extensions

Skills, Plugins, MCP

Skills are Markdown + companion CLI — only loaded when invoked, so the system prompt stays small. Plugins gated; MCP fully supported as a client.

channels

Multi-Channel I/O

Beyond the console: Telegram bidirectional messaging, inbound webhooks, macOS Assistant popup, and a local API for scripts.

// vs others

How TARS differs.

Two excellent local agent projects share this space — OpenClaw and Hermes Agent — and each has its own focus. Here's where TARS draws different lines.

| Dimension | OpenClaw | Hermes Agent | TARS |
|---|---|---|---|
| Language | TypeScript | Python | Go (single binary) |
| Primary UI | CLI | CLI + API | Browser console (CLI/Telegram/webhooks too) |
| Sub-agents | ACP + subagent runtimes, Docker sandbox | ThreadPoolExecutor (max 3), ephemeral prompt | Per-task model tier, allowlist policy, depth control |
| Model routing | Per-agent model override | Per-child override, MoA (4 frontier models) | 3-tier bundles (heavy/standard/light), role→tier mapping |
| Memory | Session transcripts | Honcho/Holographic plugin hooks | Markdown + semantic + review-before-store + nightly reflection |
| Background | — | — | Pulse watchdog (1-min) + nightly reflection batch |
| Scheduling | — | — | Session-bound cron + audit logs |
| Extensibility | Built-in tools | Toolsets | Skills + companion CLIs + gated plugins/MCP |

Comparison is from the TARS perspective and intentionally simplified. Read the source for each project to form your own view.

// architecture

One binary,
two registries.

TARS isolates the chat tool surface from system internals. The user-facing registry can never call ops_, pulse_, or reflection_ tools — those are reserved for the runtime itself. Pulse uses narrow Go interfaces and only one LLM call. Reflection is fully deterministic.

┌─ cmd/tars (cobra) ──────────────────────────────────────┐
│ serve · service · init · doctor · status · cron · ...   │
└──────────────────────────┬──────────────────────────────┘
                           │
            ┌──────────────▼────────────────┐
            │  tarsserver (127.0.0.1:43180) │
            └───┬─────────┬──────────┬──────┘
               │         │          │
       ┌───────▼──┐ ┌────▼─────┐ ┌──▼─────────┐
       │  chat    │ │  pulse   │ │ reflection │
       │  agent   │ │ watchdog │ │  nightly   │
       └────┬─────┘ └────┬─────┘ └────┬───────┘
            │            │            │
       ┌────▼────────────▼────────────▼─┐
       │   memory · cron · ops · llm    │
       └────────────────────────────────┘
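The registry split can be sketched in Go: the user-facing registry simply refuses any tool name carrying a runtime-reserved prefix. The `Registry` type and `Register` signature below are illustrative assumptions, not TARS's real code:

```go
package main

import (
	"fmt"
	"strings"
)

// Registry is a hypothetical tool registry. The user-facing instance
// carries the reserved prefixes it must never expose to chat.
type Registry struct {
	reserved []string          // prefixes refused at registration time
	tools    map[string]func() // accepted tool handlers
}

// NewUserRegistry builds the chat-facing registry, which can never
// hold ops_, pulse_, or reflection_ tools.
func NewUserRegistry() *Registry {
	return &Registry{
		reserved: []string{"ops_", "pulse_", "reflection_"},
		tools:    map[string]func(){},
	}
}

// Register rejects runtime-reserved tool names outright.
func (r *Registry) Register(name string, fn func()) error {
	for _, p := range r.reserved {
		if strings.HasPrefix(name, p) {
			return fmt.Errorf("tool %q is reserved for the runtime", name)
		}
	}
	r.tools[name] = fn
	return nil
}

func main() {
	reg := NewUserRegistry()
	fmt.Println(reg.Register("memory_search", func() {})) // accepted
	fmt.Println(reg.Register("ops_restart", func() {}))   // refused
}
```

The point of enforcing the split at registration time is that a prompt-injected tool call can never reach a system tool: the name simply does not exist on the chat surface.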
// quickstart

Three steps to a running agent.

On first launch, the wizard walks you through provider and tier configuration. The console boots in setup-only mode until an LLM is configured.

01

Install

macOS / Linux — pre-built binary with console

brew tap devlikebear/tap
brew install devlikebear/tap/tars
02

Initialize workspace

tars init
03

Start the server

Runs in the terminal until Ctrl+C.

tars serve
# console at http://127.0.0.1:43180/console