Getting started

Install Team-X, run the first conversation with an AI employee, connect a local LLM via Ollama, and tour the cockpit. Fifteen minutes from start to first ticket.

This guide takes you from a fresh install to your first working AI employee. Plan on fifteen minutes. No account is required, no internet is required after download, and no telemetry is collected at any point.

Install

Grab the latest release from github.com/git-rocky-stack/team-x/releases:

Platform  File                           Architecture
Windows   Team-X-Setup-x.x.x.exe         x64, arm64
macOS     Team-X-x.x.x.dmg               Intel x64, Apple Silicon arm64
Linux     Team-X-x.x.x.AppImage or .deb  x64

Run the installer for your platform. On macOS, drag Team-X to Applications. On Linux, chmod +x the AppImage and run it directly.
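On Linux, the AppImage steps look like this as a sketch (substitute the actual version number in the filename for x.x.x):

```shell
# mark the AppImage executable, then launch it
# (replace x.x.x with the version you downloaded)
chmod +x Team-X-x.x.x.AppImage
./Team-X-x.x.x.AppImage
```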

If you would rather build from source:

git clone https://github.com/git-rocky-stack/team-x.git
cd team-x
pnpm install
pnpm dev

Requires Node 20+ and pnpm 9+.

First boot

When Team-X launches for the first time, it:

  1. Creates a local SQLite database in your app data directory.
  2. Runs migrations to set up the full schema (employees, tickets, projects, meetings, vault, audit log, runtime, copilot insights).
  3. Seeds a starter company named Strategia-X with a CEO and a Senior Fullstack Engineer.
  4. Seeds provider templates for Ollama and Anthropic, both disabled.

You land on the Cards subview of Mission Control with two employee cards. The app is ready to use offline immediately, but you need at least one provider configured to actually run an agent.

Connect a local model via Ollama

Team-X is designed to run fully offline through Ollama. The recommended starter model is llama3.1:8b.

# install ollama for your platform: see ollama.com
ollama serve
ollama pull llama3.1:8b

In Team-X:

  1. Open Settings > Providers.
  2. Toggle Ollama on.
  3. Confirm the auto-detected endpoint at http://127.0.0.1:11434.
  4. Click Test Connection. You should see a green check.
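If Test Connection fails, you can probe the same endpoint from a terminal. Ollama's /api/tags route lists the models installed locally:

```shell
# probe the endpoint Team-X auto-detects; a JSON "models" list means Ollama is up
curl -sf http://127.0.0.1:11434/api/tags || echo "Ollama is not reachable on 127.0.0.1:11434"
```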

Your privacy tier is now Local. No bytes leave your machine until you explicitly add a cloud provider.

Your first conversation

  1. Click any employee card in Mission Control. The Chat Drawer opens on the right.
  2. Type a message in the composer. Press Ctrl+Enter (Windows / Linux) or Cmd+Enter (macOS) to send.
  3. Watch the token stream live. The agent responds in role: the CEO thinks strategically, the engineer thinks technically.

Every response is grounded in the role specification, the org state, and the RAG context retrieved from your past messages and vault files.

Try the command palette

Press Ctrl+K or Cmd+K from any view. The palette is the natural-language control surface for the entire app:

  • “Hire a senior backend engineer.”
  • “File a ticket for the login crash and assign it to Sarah.”
  • “Call an all-hands with the engineering team.”
  • “Why is the frontend team behind schedule?”

The first three are structured intents and execute directly (with a confirmation gate on destructive actions). The fourth is routed to the agentic loop, which plans, calls read-only org tools, and produces a grounded multi-paragraph answer citing specific tickets and people. See Agentic loop for details.

The interface, in one tour

Top bar

The top bar contains every primary tab in the app.

Tab        Description
Dashboard  Mission Control, timeline, stream, floor, commands, live queues, runtime signals, and telemetry snapshots
Autonomy   Doctor checks, benchmarks, agent self-improvement, runtimes, routines, budgets, approvals, artifacts, memory, and operator access
Org        Org chart visualization and employee structure
Projects   Project cards, goals, linked tickets, target dates, schedule calendar, and progress tracking
Tickets    Kanban board, ticket detail, due dates, participants, attachments, comments, and ticket-thread discussion
Meetings   Meeting history and the “Call Meeting” action
Chat       Direct conversations, the thread roster, ticket-thread previews, agent conversations, and Copilot transcripts
Files      File vault with search, integrity checks, ticket attachments, and agent-created deliverables
Telemetry  Usage stats, cost analysis, provider breakdown
Audit      Append-only event log with filters and export
Settings   Providers, runtime strategy, privacy, backup, updates, extensions, memory, and portability

Sidenav

The left sidenav shows:

  • Company switcher: switch between multiple AI organizations.
  • Employee list: quick access to chat with any employee.
  • Threads: open the communication roster without leaving your current work context.
  • Autonomy: jump to the operator control plane.
  • User Guide: role-based onboarding and deep links into live setup surfaces.
  • Status indicators: agent activity at a glance.

Dashboard subviews

The Dashboard has five views accessible via subtabs:

  • Mission Control: operations-first view of runs, queues, commands, autonomy posture, and telemetry.
  • Timeline: chronological event feed.
  • Stream: raw LLM output from all agents.
  • Floor: grid layout of employee activity.
  • Commands: recent command-palette operations.

Your first generated file

Once providers are configured and an employee has execution tools enabled, you can ask that employee to create a concrete file deliverable from chat or a ticket. Team-X supports:

  • Text deliverables: .txt, .md, .csv, .json, .html
  • Office deliverables: .docx, .xlsx, .pptx
  • Legacy Office formats: requests for .doc, .xls, and .ppt are produced as modern .docx, .xlsx, and .pptx files

Generated files are written inside the employee workspace. When vault storage is available, Team-X also copies the file into Files, tags it as agent-created, records SHA256 metadata, and adds artifact provenance under Autonomy > Artifacts.
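You can cross-check a deliverable's integrity yourself by hashing it and comparing against the SHA256 recorded in the vault. A sketch (report.md is a hypothetical agent-created file standing in for a real deliverable):

```shell
# stand-in for an agent-generated deliverable
echo "# Q3 summary" > report.md
# compare this hash with the one shown under Autonomy > Artifacts
sha256sum report.md
```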

Next steps