RuntimeInspector turns your terminal into a live, explained dashboard — sessions, frameworks, and risk signals. Local-first. No code uploaded.
The Problem
Modern dev workflows spawn dozens of invisible processes: agents forking shells, dev servers holding ports, watchers burning CPU. Without visibility into what they're all doing, you're flying blind.
How It Works
RuntimeInspector reads your process tree, identifying CLIs, agents, servers, and their child processes.
Related processes are clustered by agent, repository, and runtime into coherent sessions you can understand at a glance.
Each session gets a plain-English summary and risk signals — long-running, orphaned, duplicated, or resource-heavy.
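Conceptually, the clustering and flagging work like this. The sketch below uses Python's psutil library; the session model, threshold, and signal names are illustrative stand-ins, not RuntimeInspector's actual internals:

```python
import time
from collections import defaultdict

import psutil

LONG_RUNNING_SECS = 30 * 60  # hypothetical default; configurable via YAML


def snapshot():
    """Read process metadata only: pid, parent pid, name, start time."""
    return {
        p.info["pid"]: p.info
        for p in psutil.process_iter(["pid", "ppid", "name", "create_time"])
    }


def build_sessions(procs):
    """Cluster every process under its topmost ancestor below init."""
    def root_of(pid):
        while True:
            ppid = procs.get(pid, {}).get("ppid")
            if ppid in (None, 0, 1) or ppid not in procs:
                return pid
            pid = ppid

    sessions = defaultdict(list)
    for pid in procs:
        sessions[root_of(pid)].append(procs[pid])
    return sessions


def risk_signals(members, prev_ppids):
    """Plain-English flags for one session."""
    signals = []
    oldest = min(m["create_time"] for m in members)
    if time.time() - oldest > LONG_RUNNING_SECS:
        signals.append("long-running")
    # Orphan check: a parent we saw on the previous poll has vanished,
    # but its child is still alive (e.g. a server whose shell exited).
    for m in members:
        remembered = prev_ppids.get(m["pid"])
        if remembered and not psutil.pid_exists(remembered):
            signals.append("orphaned")
            break
    return signals
```

A real poller would keep the previous snapshot's pid-to-parent map between ticks so the orphan check can notice a parent that has since disappeared, then re-render the dashboard on each pass.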
Dashboard Preview
A real-time view of every AI agent, dev server, and runtime on your machine — grouped, explained, and scored.
Spawned 3 child shells; actively editing 12 files across src/auth/. Listening on :3000 via dev server.
Running Jest in watch mode with 23 test suites. High CPU from parallel test workers. Writing to tests/api/.
HMR active on :5173. No file changes in 47 minutes. 2 connected browser clients.
Parent process (pid 44021) no longer exists. Server still bound to :4000. No requests in 1h 38m.
2 containers active: postgres:16 on :5432, redis:7 on :6379. Healthy. Volume mounts to ./data/.
Reading .github/workflows/. Modifying deploy.yml and adding a new staging job. No child processes yet.
Features
See grouped sessions, not raw PIDs. Understand what's running at a glance instead of parsing ps aux.
Auto-detects Claude Code, Codex, Cursor, Vite, Docker, Node, and more — identifying both the tool and what it's doing.
Flags long-running, orphaned, duplicate, and resource-heavy sessions before they become problems.
Reads process metadata only. No source code, terminal output, or file contents leave your machine.
Designed for terminal-heavy developers. Works alongside your existing tools, not against them.
Set custom thresholds for alerts, mute repos, and define allowlists — all through a simple YAML config.
Privacy
RuntimeInspector is a local dashboard that runs on localhost. It reads process metadata — PIDs, command names, ports, and resource counters — never your source files or terminal output.
Who It's For
If you run AI coding agents, multiple dev servers, or terminal-heavy workflows — and you've ever wondered “what's still running?” — RuntimeInspector is for you.
Developers running 3+ AI agents and dev servers in parallel across projects.
Teams that want shared visibility into what teammates' local dev environments are doing.
Anyone who wants runtime observability without shipping logs to the cloud.
FAQ
What counts as a session?
A session is a group of related processes tied to a single intent — for example, an AI agent and all the shells, dev servers, and watchers it spawns. Instead of showing you 47 raw processes, RuntimeInspector groups them into one understandable session with context about what it's doing and why.
Which platforms are supported?
RuntimeInspector supports macOS and Linux at launch. Windows support (including WSL) is on the roadmap for the first post-launch update. The dashboard itself is served on localhost in your browser, so there are no platform restrictions on the UI side.
Does my code leave my machine?
No. RuntimeInspector is local-first by design. It reads process metadata (PIDs, command names, ports, resource usage) — not your source files. No code, file contents, or terminal output leaves your machine unless you explicitly configure it. There is no cloud dependency.
What does it detect out of the box?
Out of the box: Claude Code, OpenAI Codex CLI, Cursor Agent, Aider, GitHub Copilot CLI, and Continue. For frameworks: Node.js, Vite, Next.js, Docker, Python, Go, Rust (cargo), and more. Detection is plugin-based, so adding new tools takes minutes.
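To give a feel for that, here is a minimal sketch of what a detector plugin could look like in Python. The Detection class and detect_vite function are illustrative stand-ins, not RuntimeInspector's actual plugin API:

```python
import re
from dataclasses import dataclass
from typing import Optional


@dataclass
class Detection:
    tool: str       # e.g. "Vite"
    activity: str   # plain-English summary for the dashboard card


def detect_vite(name: str, cmdline: list[str]) -> Optional[Detection]:
    """Recognize a Vite dev server from process metadata alone."""
    joined = " ".join(cmdline)
    if name.startswith("node") and re.search(r"\bvite\b", joined):
        # Fall back to Vite's real default port if none is given.
        port = re.search(r"--port[= ](\d+)", joined)
        return Detection(
            "Vite",
            f"dev server on :{port.group(1) if port else '5173'}",
        )
    return None
```

A plugin like this only ever sees the process name and command line, which is what keeps detection consistent with the metadata-only privacy model.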
Can I customize alerts and thresholds?
Yes. You can configure thresholds for "long-running" (default: 30 minutes), CPU spikes (default: 80%), and orphan detection sensitivity. You can also mute specific processes or repos, and create custom alert rules via a simple YAML config.
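As a sketch of what such a config might look like: the key names below are hypothetical, and only the 30-minute and 80% defaults come from the answer above.

```yaml
thresholds:
  long_running: 30m        # default from above
  cpu_spike: 80            # percent; default from above
orphan_sensitivity: normal
mute:
  repos:
    - ~/scratch/playground
allowlist:
  processes:
    - postgres
    - redis-server
rules:
  - name: noisy-test-watcher
    match: "jest --watch"
    alert_if: "cpu > 50"
```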
Is this a project management tool?
Not at all. RuntimeInspector is a runtime observability tool — think of it as a flight dashboard for your local development environment. It doesn't manage tasks or track tickets. It tells you what processes are actually running, what they're doing, and whether anything looks wrong.
Be first to see what your AI agents are actually doing — as they go parallel.