A detailed capability matrix across the leading AI coding agent platforms,
scored on the dimensions that matter for serious agent work.
Coverage indicators are based on each platform's documented features as of early 2026.
Full plugin SDK with `.claude-plugin/plugin.json` manifest format supporting skills, agents, hooks, MCP servers, and LSP servers. Plugins are distributable through marketplaces, installable via `/plugin install`, and documented at the official SDK reference — a complete, production-grade plugin ecosystem.
Premier
First-class native MCP via `.mcp.json` supporting STDIO and HTTP servers; MCP tools are treated identically to built-in tools in hooks, permissions, and the subagent system, including full `mcp__<server>__<tool>` matcher syntax. MCP servers can also be bundled directly inside plugins.
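A minimal `.mcp.json` sketch of the documented shape; the server name and package here are illustrative, not taken from the source:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "${GITHUB_TOKEN}" }
    }
  }
}
```

A server registered this way would then be addressable in permission rules and hooks as `mcp__github__<tool>`.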
Premier
14 distinct lifecycle events (SessionStart through SessionEnd, including SubagentStart, TeammateIdle, PreCompact, and more) with handlers that can be shell commands, LLM evaluations, or full agent-based hooks. PreToolUse hooks can block, allow, or modify tool inputs before execution — the most mature and capable hooks system in this comparison.
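A hedged sketch of the hook configuration shape in `.claude/settings.json`; the matcher and script path are illustrative:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "./scripts/check-command.sh" }
        ]
      }
    ]
  }
}
```

The matched command-type hook runs before every Bash tool call and can block or rewrite the input based on its exit code and output.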
Premier
Reads `CLAUDE.md` at org, project-shared, user-global, and project-local scopes, plus modular `.claude/rules/*.md` with path-scoped frontmatter and an auto-memory system (`~/.claude/projects/<project>/memory/MEMORY.md`) where Claude itself writes learnings between sessions — the most comprehensive instruction layering of any tool in this comparison.
Premier
Custom subagents in `.claude/agents/` each have a dedicated markdown file with YAML frontmatter for system prompt, tool allowlist/denylist, model selection, permission mode, MCP server list, hooks, skills, and persistent memory scope — fully declarative, per-agent configuration with no equivalent in other tools.
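A sketch of one such subagent file, e.g. a hypothetical `.claude/agents/code-reviewer.md`; the frontmatter keys shown are a subset of what the description lists, and the values are illustrative:

```markdown
---
name: code-reviewer
description: Reviews diffs for correctness and style. Use after significant edits.
tools: Read, Grep, Glob
model: sonnet
---
You are a meticulous code reviewer. Examine the changed files,
flag correctness issues first, then style, and keep feedback terse.
```

Everything below the frontmatter becomes the agent's system prompt.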
Advanced
Skills in `.claude/skills/<name>/SKILL.md` create invocable `/name` slash commands with `$ARGUMENTS` placeholders, per-index argument substitution, and shell preprocessing; scoped to user, project, or plugin levels. Solid and flexible; the format follows the same open `agentskills.io` standard that Codex implements.

Premier
Ships 9 named built-in tools (Read, Write, Edit, Glob, Grep, Bash, WebFetch, WebSearch, Task) plus full MCP tool access. The combination of a structured Edit tool for precise string replacement, a dedicated Glob/Grep pair backed by ripgrep, and a Task tool for subagent spawning is notably more comprehensive than competitors.
Advanced
Glob for file pattern discovery, ripgrep-backed Grep for cross-repo content search, Read with offset/limit for large files, and Explore subagents optimized for read-only codebase traversal. CLAUDE.md files load hierarchically and auto-memory persists codebase learnings, but lacks the semantic vector indexing that Cursor or VS Code Copilot provide.
Advanced
Task tool spawns foreground or background subagents within a session; experimental Agent Teams (opt-in via env flag) run fully independent Claude Code sessions coordinated by a team lead with shared task lists and inter-agent messaging via tmux/iTerm2. Powerful but the Agent Teams feature is still experimental.
Advanced
Auto-memory at `~/.claude/projects/<project>/memory/MEMORY.md` is written by Claude and loaded into context at each session start (first 200 lines); subagents can be configured with their own persistent memory at user, project, or local scope. More capable than most competitors, though the 200-line load limit and lack of semantic retrieval leave room for improvement.
Premier
Six named permission modes (default, acceptEdits, plan, dontAsk, delegate, bypassPermissions) plus a fine-grained allow/ask/deny rule system with glob patterns for Bash commands, file paths, and domains — manageable via `/permissions` UI, settings files, or managed org-wide policy. The most granular and composable approval system in this comparison.
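A sketch of the rule syntax in a settings file, assuming the documented `allow`/`deny` array shape; the specific patterns are illustrative:

```json
{
  "permissions": {
    "defaultMode": "acceptEdits",
    "allow": ["Bash(npm run test:*)", "Read(src/**)"],
    "deny": ["Bash(curl:*)", "WebFetch"]
  }
}
```

Glob-style patterns inside `Bash(...)` scope a rule to matching command lines rather than to the tool as a whole.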
Standard
Officially supports Anthropic Claude models via Anthropic API, Amazon Bedrock, Google Vertex AI, and Microsoft Foundry. Third-party routing via `ANTHROPIC_BASE_URL` to OpenRouter or local proxies is unsupported and can degrade with non-Claude models — functional but constrained compared to model-agnostic tools.
Basic
Runs locally as a CLI/IDE extension but requires a live cloud API connection; fully offline operation is not supported. Pointing `ANTHROPIC_BASE_URL` at a local Ollama proxy is technically possible but explicitly unsupported and unreliable.
Premier
Seamlessly cloud-native with first-class Anthropic API integration plus Amazon Bedrock, Google Vertex AI, and Microsoft Foundry support. Immediate setup with no infrastructure to manage — the only officially supported deployment model.
No dedicated plugin SDK, plugin manifest format, or plugin marketplace. Extensibility is achieved through MCP servers and the Agent Skills system, both of which are general mechanisms rather than a first-party plugin ecosystem — extensible, but not a plugin system in any structured sense.
Advanced
Native MCP in CLI and IDE extension; servers configured in `~/.codex/config.toml` or project `.codex/config.toml` and launched automatically at session start. Codex can itself run as an MCP server, enabling orchestration by external agents — a notable differentiator, though the integration runs less deep than Claude Code's hook and permission treatment of MCP tools.
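A sketch of an MCP server entry in `~/.codex/config.toml`, assuming the `[mcp_servers.<name>]` table shape; the server name and package are illustrative:

```toml
[mcp_servers.github]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
```

Each table defines one server that Codex launches at session start.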
None
No hooks system has shipped as of February 2026. A community PR (#11067) proposing PreToolUse/PostToolUse/SessionStart events was closed without merging when OpenAI restricted unsolicited contributions. OpenAI confirmed active work on the feature but it is not available.
Advanced
Reads `AGENTS.md` files hierarchically from `~/.codex/AGENTS.md` (global) through repo root and subdirectories to the current directory, with support for `AGENTS.override.md` files, configurable size limits (default 32 KiB), and fallback filenames. Well-designed and flexible, though lacks auto-memory and modular path-scoped rule files.
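A short illustrative `AGENTS.md`; the content is invented for the example, since any plain markdown works:

```markdown
# AGENTS.md

## Build & test
- Run `pnpm test` before committing; CI mirrors this command.

## Conventions
- TypeScript strict mode everywhere; avoid default exports.
```

Files closer to the working directory override or extend those higher in the hierarchy.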
Basic
No native concept of individually configured named agents with dedicated instruction files. Instructions are project/user-scoped via `AGENTS.md`. Per-agent prompts in multi-agent Agents SDK workflows are programmatic code, not declarative config files accessible to end users.
Standard
Agent Skills system uses a `SKILL.md` file per skill directory with name, description, and instructions; skills invoke explicitly via `$skill-name` or auto-selected when a task matches the description. Follows the open `agentskills.io` standard but lacks argument placeholders and shell preprocessing found in Claude Code's skills.
Standard
Includes file read/edit/management, shell command execution with sandbox policies, web search (cached and live), code review via `/review`, and image input. File operations are more general-purpose and shell-command-driven rather than discrete structured tools like Claude Code's Glob, Grep, and Edit.
Standard
Designed for repository-scale operation — reads full working directory, navigates file trees, edits across multiple files, and runs shell commands for search. The cloud platform runs each task in an isolated sandbox preloaded with the full repo, but there is no semantic indexing or ripgrep-specific tooling; file search is shell-command-driven.
Standard
The Codex cloud app natively supports parallel agents in isolated sandboxes with git worktree support. The CLI itself has no built-in parallel sub-agent spawning within a session; that requires the cloud interface or programmatic Agents SDK orchestration.
Basic
Session transcripts persist in `~/.codex/history.jsonl` and are resumable via `codex resume`. Memory infrastructure (`/m_update`, `/m_drop`) was added in early 2026 and is described as 'initial plumbing' — not a mature automatic cross-session memory system, with no equivalent to Claude Code's auto-summarizing MEMORY.md.
Standard
Three approval modes via `--ask-for-approval` (untrusted, on-request, never) and three sandbox policy tiers (read-only, workspace-write, danger-full-access). Functional and configurable, but fewer modes and less granularity than Claude Code's six permission modes plus fine-grained rule system.
Advanced
The `--model` flag or `config.toml` profiles allow switching between OpenAI models; the `--oss` flag enables first-party local inference via Ollama, LM Studio, or MLX; mid-session model switching is available via `/model`; and custom provider configurations support Mistral, Azure, and custom proxy endpoints.
Advanced
Official `--oss` flag routes inference to Ollama, LM Studio, or MLX with no cloud dependency — a first-party supported, documented capability, not a workaround. Cloud mode remains the recommended primary path and some features degrade locally.
Premier
Cloud-native Codex App provides isolated sandboxes per task, built-in parallel agent support, and git worktree integration. The cloud platform is the richest feature tier and the primary recommended deployment model.
Inherits the full VS Code extension ecosystem via Open VSX Registry, giving access to thousands of extensions without any porting effort. No separate Cursor-specific plugin SDK exists, so the ceiling is capped by the VS Code Extension API rather than anything Cursor-native.
Advanced
Native MCP support via project-level and global `mcp.json` files, covering stdio and HTTP transports with a growing ecosystem of pre-built integrations. Does not yet ship a built-in MCP marketplace or implement the 2025-11-25 spec revision that VS Code supports.
Standard
Cursor 1.7 introduced hooks at 7 lifecycle events with allow/deny/ask return semantics, but the feature is in beta with noted documentation gaps. The event set is narrower than VS Code's 8 and Claude Code's 14, and production stability is not confirmed.
Advanced
The `.cursor/rules/*.mdc` system supports glob-targeted, conditionally-applied, and always-on rules with YAML frontmatter, enabling precise per-context instruction injection. The deprecated `.cursorrules` fallback and absence of org-level scoping keep it short of premier.
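A sketch of one `.mdc` rule file using the frontmatter keys the system documents; glob and body content are illustrative:

```markdown
---
description: API route conventions
globs: ["src/api/**/*.ts"]
alwaysApply: false
---
Validate all request bodies before use and return typed error
responses; never access the database directly from a route handler.
```

With `alwaysApply: false`, the rule is injected only when the agent touches files matching the globs.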
Standard
Glob-scoped `.mdc` rule files approximate per-context instructions but there is no native concept of a named agent instance with a dedicated instruction file or tool allowlist. The closest analog, `.cursor/skills/`, provides instruction bundles but not true agent-scoped configuration.
Advanced
Markdown files in `.cursor/commands/` (project) and `~/.cursor/commands/` (global) are surfaced as `/` dropdown commands that inject full prompt content with project context attached. No parameterization or templating system is documented, which limits composability.
Premier
Includes file read/write/create, multi-file editing, terminal execution, semantic codebase search, and a native embedded browser tool (Cursor 2.0) that can launch browsers, interact with UIs, capture screenshots, and inspect DOM elements. The browser tool is a differentiator no other IDE-native agent currently ships.
Premier
AST-aware chunking via tree-sitter, server-side embeddings stored in Turbopuffer, and nearest-neighbor retrieval combine to produce one of the most technically sophisticated codebase indexing pipelines among IDE agents. The `@codebase` reference and semantic search work across the full project without manual context selection.
Advanced
Cursor 2.0 supports up to 8 simultaneous agents in isolated git worktrees or remote machines with automatic CPU/memory balancing across sub-tasks. The 8-agent ceiling and worktree isolation model are well-implemented, though orchestration is manual rather than automated.
Basic
No native cross-session memory; context resets completely at session end. Persistence requires workarounds: encoding context in `.cursor/rules/` files, using third-party MCP servers like Basic Memory, or community-built Memory Bank markdown patterns. None of these are first-party or automatic.
Standard
The approval model is binary: default confirmation prompts or YOLO mode (all-or-nothing). There is no built-in command allowlist/denylist or per-tool granularity; MCP hooks can approximate this but require significant setup work.
Advanced
Supports Anthropic, OpenAI, Google, DeepSeek, and Cursor's own Composer model with a per-conversation model picker and an Auto mode. Local models via Ollama require an ngrok or Cloudflare tunnel workaround, and Cursor Tab autocomplete is cloud-only regardless of model selection.
Basic
Architecturally cloud-dependent; local Ollama requires an ngrok or Cloudflare tunnel because Cursor rejects localhost directly. The flagship Tab autocomplete feature does not work with any local model regardless of configuration.
Premier
Fully cloud-native with server-side semantic indexing, Cursor's custom models, and embedded browser automation in Cursor 2.0 — all cloud-only. Immediate frictionless setup with the complete feature set available out of the box.
A two-tier extensibility model—GitHub Copilot Extensions (GitHub Apps, cross-platform, preview SDK) plus VS Code Chat Participants (Chat API + Language Model API with deep editor/workspace access)—gives third parties more integration surface than any other IDE agent. Both tiers are formally documented and supported.
Premier
First-class MCP support with a built-in marketplace for browsing and installing servers from the GitHub MCP Registry, support for stdio and HTTP transports, and implementation of the 2025-11-25 MCP spec including URL mode elicitation and long-running task support. Most complete MCP integration among IDE agents reviewed.
Advanced
VS Code 1.109.3 introduced 8 lifecycle hook events (PreToolUse, PostToolUse, SessionStart, Stop, SubagentStart, SubagentStop, and others) in Preview, using the same format as Claude Code for cross-tool compatibility. The feature is still Preview-stage and not yet generally available.
Advanced
Auto-detected `.github/copilot-instructions.md` for workspace-wide instructions plus scoped `*.instructions.md` files targetable to specific file types or directories, mirroring Cursor's `.mdc` system. Personal and org-level instruction layers available via GitHub settings round out the hierarchy.
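A sketch of a scoped instructions file, e.g. a hypothetical `python.instructions.md`; the `applyTo` frontmatter key is the documented targeting mechanism, the body is illustrative:

```markdown
---
applyTo: "**/*.py"
---
Use type hints on all public functions and prefer pathlib over
os.path for filesystem work.
```

Instructions in the body are injected only for requests involving files that match the `applyTo` glob.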
Advanced
VS Code 1.108 Agent Skills load `SKILL.md` instruction bundles contextually when requests match skill descriptions. Custom agents in `.vscode/agents/` carry their own system prompts and tool configurations, providing true per-agent scoped instructions—a capability Cursor lacks natively.
Advanced
Prompt files (`.prompt.md`) in `.github/prompts/` are invokable as `/filename` slash commands. Built-in commands (`/fix`, `/explain`, `/tests`, `/new`, `/init`) augment the system. No Handlebars-style parameterization like Continue.dev, limiting dynamic reuse.
Advanced
Built-in tools cover file read/edit, terminal, semantic and keyword search, web fetch, symbol reference navigation, test running, and a bundled GitHub MCP Server. The plan agent (v1.107+) adds structured task decomposition. No native browser automation equivalent to Cursor 2.0's DOM-inspection tool.
Advanced
Three-tier indexing—remote GitHub index, local semantic embedding index, and keyword fallback—provides robust repo-wide retrieval. Agent mode autonomously reads, searches, and edits across files. Slightly less transparent than Cursor's pipeline in terms of tuning controls.
Advanced
VS Code 1.107 'Agent HQ' enabled parallel agent sessions in isolated git worktrees; VS Code 1.109 added orchestrator-spawned parallel subagents for research and coding subtasks. The orchestrator model with automated subagent spawning and result merging is more sophisticated than Cursor's manual 8-agent approach.
Basic
No native cross-session memory; each session starts fresh. 'Copilot Memories' is rolling out (currently more mature in Visual Studio than VS Code) and will auto-populate `copilot-instructions.md`, but is not generally available. Current persistence relies entirely on instruction files or external MCP memory servers.
Premier
Per-tool approval dialogs offer four persistence scopes (once, session, workspace, always). `github.copilot.chat.agent.terminal.allowList` and `denyList` settings enable fine-grained command-level auto-approval. A global auto-approve toggle also exists. This is the most granular approval model among the tools reviewed.
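A hedged sketch of the terminal allow/deny settings; the setting names come from the description above, but the exact value shape of the entries is an assumption and the commands are illustrative:

```json
{
  "github.copilot.chat.agent.terminal.allowList": {
    "npm test": true,
    "/^git (status|diff)/": true
  },
  "github.copilot.chat.agent.terminal.denyList": {
    "rm": true
  }
}
```

Entries can be literal command prefixes or regex patterns, letting safe commands auto-run while destructive ones still prompt.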
Advanced
Supports OpenAI (GPT-4.1, GPT-5, GPT-5 mini), Anthropic (Claude Sonnet 4.5, Haiku 4.5), and Google (Gemini Pro 2.5) with Auto mode and BYOK for hundreds of additional models. Ollama and LM Studio are natively integrated without tunneling workarounds, though Agent and Plan modes show instability with local models.
Standard
Ollama is natively integrated without the tunnel workarounds Cursor requires, and the AI Toolkit extension adds CPU/GPU/NPU local inference paths. However, Agent and Plan modes are unstable with local models, and a GitHub Copilot subscription is mandatory regardless of inference source.
Premier
GitHub Copilot is a mature managed cloud service with multi-model support (GPT-4.1, GPT-5, Claude Sonnet, Gemini Pro) available immediately via subscription. Cloud is the foundation of all Agent and Plan mode capabilities.
ACP is an open standard with SDKs in Python, TypeScript, Kotlin, and Rust, plus a JetBrains plugin marketplace and ACP Agent Registry (IDE 2025.3+). The open multi-SDK approach and Claude Agent as a first ACP integrator demonstrate real ecosystem depth, though the marketplace is smaller than VS Code's.
Advanced
Native MCP support since IntelliJ 2025.1 with STDIO and Streamable HTTP transports, plus the ability to import server configs directly from a Claude Desktop JSON file. Junie gained MCP support independently, giving two separate agent surfaces native MCP access.
Basic
No end-user hook system exists in AI Assistant's standard configuration. The Koog framework SDK provides rich hooks (onBeforeLLMCall, onToolCall, onToolCallResult, onAgentFinished) for developers building ACP-compatible agents, but these are completely inaccessible to everyday AI Assistant users.
Advanced
Markdown rules in `.aiassistant/rules/` are version-controllable and support four activation modes: always, model-decision, file glob, and manual via `@rule:`/`#rule:` syntax. A `.aiignore` file adds fine-grained file-access control, going well beyond what most competitors offer.
Basic
Rules can be scoped by file glob or activated by model decision, but there is no first-class mechanism to assign instructions exclusively to one named agent (e.g., Junie vs. Claude Agent). The Agent Skills Manager plugin adds task-specific scripts but does not close this gap.
Standard
The Prompt Library stores custom prompts with context macros (`$SELECTION`, `$SELECTION_LANG`) accessible via AI Actions, which covers typical use cases. However, the ability to select a saved prompt directly from the chat input was removed in 2025.1, making discovery less convenient than peers.
Advanced
Agent mode (Junie/Claude Agent) covers multi-file read/write, file creation/deletion, and terminal execution, and uniquely benefits from native access to the IDE's build system, type graph, and symbol index without a separate indexing step. Brave and Plan modes add meaningful workflow control.
Premier
JetBrains AI Assistant uses the IDE's native project index — including symbols, type graph, and module structure — giving it structural codebase understanding that text-search-based tools cannot match. The 2025.1.2 Codebase Mode adds RAG-based retrieval on top of this foundation, making it the strongest codebase-awareness implementation.
Basic
Multiple distinct agents (Junie, Claude Agent, ACP agents) can be active from the same chat UI, and Junie supports async task delegation via GitHub integration. However, true parallel sub-task execution within a single session is explicitly listed as a future roadmap item and is not shipped as of early 2026.
Basic
No built-in cross-session memory; each new chat starts fresh with no auto-summarization or automatic context carry-over. Persistence requires either external MCP memory servers or manually maintained project rules files checked into the repo.
Advanced
Three distinct approval tiers: default (per-action approval), Brave mode (global auto-approve), and a granular Action Allowlist for whitelisting specific operations without enabling Brave globally. This three-tier design goes beyond the binary ask/skip model most competitors ship.
Advanced
Supports OpenAI (GPT-4o, o1, o3-mini), Anthropic (Claude 3.5/3.7 Sonnet, Haiku), Google (Gemini 1.5/2.5 Pro), and local models via Ollama, LM Studio, or any OpenAI-compatible endpoint. Models can be assigned per feature (e.g., chat vs. code completion), adding meaningful configurability beyond simple model switching.
Premier
Full offline mode with local inference via Ollama, LM Studio, or llama.cpp; the 2025 free tier includes unlimited local code completion at no cost. No tunneling workarounds required — the most complete officially supported local story among IDE agents reviewed.
Advanced
JetBrains AI subscription provides polished cloud access to OpenAI, Anthropic, and Google models with remote IDE analysis. Solid managed cloud service, though local-first operation is the platform's defining differentiator.
Open source with a published `registerCustomContextProvider` API for VS Code extension developers, plus full configurability of model providers, slash commands, and MCP servers via `config.yaml`/`config.ts`. Continue Hub enables community sharing of configurations, rules, and agents, creating a real ecosystem.
Advanced
Native MCP support via YAML files in `.continue/mcpServers/` or inline in `config.yaml`, with STDIO, SSE, and Streamable HTTP transports including OAuth-authenticated servers. There is a known compatibility gap with the 2025-06-18 MCP spec revision in the VS Code extension, but standard servers work correctly.
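A sketch of an inline MCP server entry in Continue's `config.yaml`, assuming the documented list-of-servers shape; the server and package are illustrative:

```yaml
name: my-assistant
version: 0.0.1
mcpServers:
  - name: Playwright
    command: npx
    args: ["-y", "@playwright/mcp@latest"]
```

Equivalent standalone YAML files under `.continue/mcpServers/` register servers per-project without touching the main config.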
None
Continue has no lifecycle hook system whatsoever — no pre/post tool-use events, no session start/end hooks, and no mechanism for programmable behavior injection. Tool approval is handled via static policies (Ask First/Automatic/Excluded), not hooks; implementing hook-level behavior requires contributing to the core codebase.
Premier
Per-project and global user rules with `globs`, `regex`, and `alwaysApply` properties; system prompts are overridable per model and per mode (`baseSystemMessage`, `baseAgentSystemMessage`, `basePlanSystemMessage`) in `config.yaml`. This level of per-model and per-mode instruction control is more granular than any competitor.
Basic
The IDE extension has no first-class per-agent instruction scoping; all rules apply broadly across chat, agent, and edit modes. Agent-level configs (models, rules, tools) exist for Cloud Agents on Continue Hub, but that is a separate cloud product and not available locally in the IDE.
Advanced
Prompt files in `.continue/prompts/` with `invokable: true` create slash commands directly in the IDE, supporting Handlebars templating with `{{input}}` and arbitrary TypeScript logic via `config.ts`. This programmable approach goes beyond static text templates and enables genuinely dynamic prompt workflows.
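A sketch of one invokable prompt file; `invokable: true` and the `{{input}}` placeholder are the mechanisms named above, while the other frontmatter keys and body are illustrative assumptions:

```markdown
---
name: review
description: Review the selected code for bugs
invokable: true
---
Review the following code for correctness and style issues,
most severe first:

{{input}}
```

Typing `/review` in the IDE would expand this template with the user's input substituted for the placeholder.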
Standard
Built-in tools cover read file, create file, ripgrep exact search, glob search, terminal command, web search, and git diff, which handles typical use cases well. The toolset is comparable to peers but lacks a native browser automation tool; browser interaction requires an MCP server like Playwright.
Advanced
Multi-tier retrieval combining embeddings-based vector search, AST parsing via tree-sitter, and ripgrep text search, with tunable retrieval parameters (result count, re-ranking) in `config.yaml`. Agent mode uses built-in glob and ripgrep tools for autonomous repo navigation without a separate indexing product.
Basic
Frontier models can issue multiple parallel tool calls within a single agent session, but this is model-level parallelism rather than native multi-agent orchestration. Running multiple simultaneous agents requires manually running multiple `cn` CLI processes via shell scripting, with no IDE-native coordination.
Basic
No native cross-session memory; context resets completely at the end of each session. Continue Hub publishes a `continuedev/memory-mcp` knowledge-graph server (March 2025) for opt-in persistence via MCP, but this is not bundled or automatic — committing rules files remains the primary recommended pattern.
Advanced
Per-tool approval policies with three levels (Ask First, Automatic, Excluded) configurable per individual tool via the IDE toolbar. The CLI (`cn`) adds session-level `--allow`, `--ask`, and `--exclude` flags for runtime overrides, providing meaningful control depth for power users.
Premier
Supports virtually every major provider including OpenAI, Anthropic, Google Gemini, Mistral, Amazon Bedrock, Azure OpenAI, xAI, Ollama, LM Studio, and any OpenAI-compatible endpoint, with different models assignable to different tasks (e.g., local model for autocomplete, cloud model for agent chat). This is Continue's defining strength and the most comprehensive provider coverage of any tool reviewed.
Premier
Fully local operation with Ollama is a first-class, well-documented primary use case with zero data leaving the machine. The entire open-source stack — including the backend — is self-hostable, the strongest data-residency story of any tool reviewed.
Standard
Works with all major cloud AI providers (OpenAI, Anthropic, Google, Mistral, Bedrock, Azure) and Continue Hub for team-shared configurations. Cloud is fully functional but secondary to the local-first design philosophy; there is no Continue-managed cloud service.
No formal plugin SDK or native plugin marketplace; extensibility is achieved through MCP servers (via the built-in MCP Marketplace) and open-source forking. Third parties cannot build Cline-native extensions the way VS Code extensions work.
Premier
First-class MCP support with a built-in MCP Marketplace; Cline can install, manage, and scaffold brand-new MCP servers on the user's behalf, and treats MCP as its primary extensibility mechanism for connecting to external APIs and services.
Advanced
Hooks system (v3.36+) supports PreToolUse (can block or modify), PostToolUse, UserPromptSubmit, and TaskStart events in `.clinerules/hooks/`; hooks receive JSON context via stdin and return JSON to control flow. Currently macOS and Linux only; Windows is not supported.
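A minimal sketch of the decision logic such a PreToolUse hook could implement, blocking shell commands that touch `.env` files. The JSON field names (`tool_input`, `decision`, `reason`) are assumptions for illustration, not Cline's documented schema:

```python
import json
import sys


def decide(ctx: dict) -> dict:
    """Return a flow-control decision for a PreToolUse event.

    NOTE: the 'tool_input' / 'decision' field names are assumed,
    not taken from Cline's hook documentation.
    """
    command = ctx.get("tool_input", {}).get("command", "")
    if ".env" in command:
        # Block the tool call before it executes.
        return {"decision": "block", "reason": "refusing to touch .env files"}
    return {"decision": "allow"}


# A real hook script would read the event context from stdin and
# print its decision as JSON, e.g.:
#   print(json.dumps(decide(json.load(sys.stdin))))
```

The hook's stdout JSON is what lets it allow, block, or modify the pending tool call.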
Advanced
Supports `.clinerules` files or a `.clinerules/` directory of multiple markdown files for project-scoped rules, plus global instructions; both layers are distinct, version-controllable, and team-shareable. The older text-box system was deprecated in favor of this file-based approach.
Advanced
The Skills system (v3.48+) allows modular instruction sets in `.cline/skills/` (project) or `~/.cline/skills/` (global), each with YAML frontmatter defining name, description, and triggers; a skill activates only when the current request matches its description, avoiding context bloat.
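A sketch of one skill file, e.g. a hypothetical `.cline/skills/release-notes/SKILL.md`; the frontmatter keys mirror those named above and the content is illustrative:

```markdown
---
name: release-notes
description: Drafts release notes from merged PRs. Use when the
  user asks to prepare or summarize a release.
---
1. Collect merged PRs since the last tag.
2. Group changes by area and summarize each in one line.
3. Flag any breaking changes at the top.
```

Because activation is description-matched, the body loads into context only when a request actually concerns a release.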
Advanced
Slash-command workflows are stored as markdown files in `.clinerules/workflows/` or `~/Documents/Cline/Workflows/` and invoked by typing `/workflow-name.md`; Cline can also auto-generate workflow files from completed conversations.
Premier
Ships a comprehensive built-in toolset including file read/write/create with inline diff previews, terminal execution with live output, regex and AST-based search, multi-root workspace traversal, and a native headless Chromium browser (click, type, scroll, screenshot, console capture) — the latter being unique among VS Code AI coding extensions.
Advanced
Builds codebase awareness through file-tree reading, AST parsing, and regex search within the agent loop without a separate indexing step; fully supports VS Code multi-root workspaces, reading, editing, and searching across multiple repos simultaneously.
Standard
The Cline CLI exposes a gRPC API for managing multiple concurrent Cline instances on different sub-tasks, but within the VS Code extension sub-cline instances currently execute sequentially, waiting for each to complete before starting the next.
Basic
No built-in automatic memory layer; each new session starts with a clean context window. 'Memory Bank' is a user-defined convention of structured markdown files committed to the repo that must be manually instructed to load at session start — not an automatic behind-the-scenes system.
Premier
Granular Auto-Approve system with independent per-category controls for file reads, file edits, terminal commands, browser actions, and MCP tool use, plus a configurable maximum API-request limit before pausing; 'YOLO mode' provides a global override for fully autonomous operation.
Premier
Fully model-agnostic with direct connections to major providers (Anthropic, OpenAI, Gemini, xAI), aggregator/proxy services (OpenRouter, Requesty), and local inference engines (Ollama, LM Studio); models are switchable per task with no lock-in to any provider.
Premier
Runs entirely locally as a VS Code extension with no data sent to Cline's servers. When paired with Ollama or LM Studio the full stack — extension, inference, and storage — operates fully on-device with zero internet dependency.
Standard
Connects seamlessly to all major cloud AI providers (Anthropic, OpenAI, Gemini, xAI, OpenRouter) with no configuration friction. There is no Cline-managed cloud service; cloud access means bringing your own provider API key.
Skills & Plugins system with a community hub (ClawHub) lets users install, share, or build custom extensions. Agents can autonomously generate new skills during a conversation — the most open-ended extensibility model in this comparison.
Advanced
MCP Registry integration allows connecting external tools via MCP. Functional and documented, though MCP is secondary to OpenClaw's native 50+ integration connectors (Gmail, GitHub, Spotify, Obsidian, etc.) which are not MCP-based.
Advanced
Webhooks, cron jobs, and Pub/Sub integrations (e.g., Gmail Pub/Sub) enable event-driven automation at the session and task level. These are integration-focused trigger hooks rather than a tool-level PreToolUse/PostToolUse system, limiting precision compared to Claude Code or Cline.
Standard
Per-agent workspace configuration and session profiles support custom instructions, but there is no hierarchical instruction file system (like CLAUDE.md or AGENTS.md). Instruction scope is workspace-level rather than per-project or per-rule.
Advanced
Skills system stores modular instruction bundles per skill; multi-agent routing can isolate workspaces and sessions per agent. Individual agents can have dedicated skill sets and memory scopes, though configuration is less declarative than Claude Code's `.claude/agents/` approach.
Standard
Skills serve as reusable instruction modules invokable by name or description match. No explicit slash-command system or prompt file convention is documented; cron and webhook triggers handle scheduled invocation rather than user-facing prompt reuse.
Premier
Covers the full autonomous agent toolkit: file system read/write, shell command execution with configurable sandboxing, and browser automation via a dedicated Chromium instance with CDP control (click, type, screenshot, DOM inspection). Runs across macOS, Windows, and Linux.
Basic
File system access is available within the configured workspace (`~/.openclaw/workspace`), and the agent can read and write files. However, OpenClaw is not a coding-specific tool and has no semantic indexing, AST parsing, or codebase-traversal tooling oriented toward software projects.
Advanced
Multi-agent routing assigns inbound channels or accounts to isolated agents with separate workspaces and session contexts, enabling concurrent independent agents. Within a single session, parallel tool calls rely on model-level parallelism rather than orchestrated sub-agent spawning.
Premier
24/7 persistent memory system that learns user preferences across sessions; cross-agent memory sharing allows knowledge accumulated in one agent to be available to others. This is the most capable automatic memory system of any platform in this comparison.
Standard
Sandboxing options for shell command execution are configurable, and the tool emphasizes user data ownership and privacy by default. Granular per-tool allow/deny rules comparable to Claude Code or VS Code Copilot's approval systems are not documented.
Premier
Works with Anthropic Claude, OpenAI models, and locally running models (Ollama, LM Studio); includes automatic model failover and profile rotation for resilience. Users report running alternative backends including MiniMax. Not locked to any single provider.
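The failover behavior described above amounts to trying backends in priority order and falling through on error. A minimal sketch — the provider names and error type are illustrative assumptions, not OpenClaw's configuration format:

```python
def complete_with_failover(prompt, providers):
    """Try each (name, call) backend in order; fall through on failure.
    Provider names and the error type are illustrative assumptions."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

def primary(prompt):
    # Simulate an unreachable hosted backend.
    raise TimeoutError("primary backend down")

def local_fallback(prompt):
    # Simulate a locally running model (e.g. via Ollama).
    return f"local answer to: {prompt}"

used, answer = complete_with_failover("hello", [
    ("anthropic", primary),
    ("ollama", local_fallback),
])
```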
Premier
Designed to run privately on user infrastructure by default — local device, on-premises server, or remote VM accessible via Tailscale or SSH. No data is sent to OpenClaw's servers unless explicitly configured.
Standard
Supports deployment on remote Linux cloud instances via Tailscale or SSH, enabling cloud-hosted self-managed setups. There is no OpenClaw-managed SaaS option; all cloud deployments are self-provisioned and maintained by the user.
41 / 56
No platforms selected. Use the filters above to show platforms.
Last updated February 2026. Features change rapidly — verify against each platform's official documentation.
Consolidated platform changelog
Releases pulled from each platform's changelog, rewritten in spaget's voice and tagged with the matrix capabilities they touch.
Cline v3.82.0 restores VS Code foreground terminal support and adds OpenAI, SAP AI Core, and Z AI model integrations — incremental tightening on model flexibility with no level boundary crossed.
Claude Code v2.1.126 adds dynamic model discovery from compatible gateways and a project-state purge command — neither crosses a matrix boundary, but the gateway model enumeration meaningfully tightens the `ANTHROPIC_BASE_URL` workaround path that the model-flexibility note already flags as unsupported.
Cursor tightens team admin controls for its plugin marketplace, removing the repository prerequisite and adding install-behavior configuration — an ops improvement to an existing capability, not a new tier.
JetBrains adds Codex (via ChatGPT account) and Claude Agent as first-party coding agents in AI Chat, deepening the multi-agent roster but not changing the structural capability profile — agent-specific-instructions and parallel-agent-execution remain the platform's ceiling constraints.
Codex 0.128.0 ships persisted `/goal` workflows — named, pauseable, resumable goal sessions with server-side runtime continuation — which materially advances cross-session memory and task persistence beyond the 'initial plumbing' state the matrix currently notes.
JetBrains bundles Claude Agent (Anthropic SDK-backed) directly into AI Assistant's chat under the standard subscription, adding a second distinct agent surface alongside Junie — but this is an expansion of the existing multi-agent roster, not a new capability class.
JetBrains adds Claude 4 Sonnet to the chat model roster and ships pre-commit AI checks with one-click fixes — tightens model breadth and workflow integration without crossing a level boundary on either dimension.
Cursor adds always-on security agents (Security Reviewer and Vulnerability Scanner) in beta for Teams and Enterprise — a domain-specific multi-agent capability, but scoped to a vertical use case and plan-gated rather than a structural platform advancement.
Cursor opens its agent runtime to developers via an SDK, but this is a platform extension play for builders — not a new end-user capability that shifts any existing matrix dimension.
Claude Code adds Bedrock service-tier selection and a UX improvement to session resumption — incremental quality-of-life on cloud-deployment and memory-context-persistence, neither crossing a level boundary.
Three targeted capability tighteners: `alwaysLoad` on MCP servers gives operators a reliable escape hatch from tool-search deferral, while `claude plugin prune` and the `--prune` cascade clean up the plugin dependency lifecycle — none crosses a level boundary, but all three deepen an already-premier implementation.
Cursor ships async subagents with an improved worktrees experience and multi-root workspace support, meaningfully upgrading the parallelism model but staying within existing level bounds.
JetBrains adds Codex and Claude Agent as first-class coding agents in AI Chat alongside the existing Junie, expanding the roster of named agent surfaces but not changing the declarative per-agent configuration story.
Cline extends its Skills system with enterprise-managed remote skill configuration — globally defined skills pushed from enterprise remote config, with always-on enforcement and dedicated UI surfacing — strengthening the agent-specific-instructions dimension without crossing a level boundary.
Three agent-system refinements: forked subagents exposed to external builds, per-agent MCP server loading now works in main-thread sessions, and model selection persistence is tightened — no capability boundary crossed, but the subagent and per-agent-config machinery gets meaningfully more reliable.
Performance-only release: faster `/resume` on large sessions and parallelized MCP stdio startup improve operational smoothness but shift no capability dimension on the scorecard.
Single crash fix in the Agent Teams permission dialog — no capability shift, but confirms the experimental multi-agent permission flow is under active hardening.
Claude Code v2.1.113 adds `sandbox.network.deniedDomains` — a deny-override on top of the existing allowlist wildcard system — incrementally deepening the already-premier approval/permission configuration model without crossing a new level boundary.
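Concretely, such a deny-override would sit alongside the allowlist roughly like this — only `sandbox.network.deniedDomains` is named in the release note; the `allowedDomains` key is an assumption inferred from the "existing allowlist wildcard system" the entry references:

```json
{
  "sandbox": {
    "network": {
      "allowedDomains": ["*.example.com"],
      "deniedDomains": ["blocked.example.com"]
    }
  }
}
```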
Cline v3.79.0 adds Azure Blob Storage as a storage provider and exposes `globalSkills` in remote config — incremental operational extensions, not capability-level shifts; Claude Opus 4.7 model support is a model roster update with no architecture change.
Cursor adds interactive canvas responses — a new output modality, but not a capability that maps to any matrix dimension tracked in the current rubric.
Bugfix release: VS Code Copilot now discovers AGENTS.md and CLAUDE.md at workspace roots and correctly emits agent instructions in the CopilotCLI customization provider — tightens custom-ai-instructions coverage without moving the scorecard.
VS Code Copilot wires lifecycle hooks through the CopilotCLI customization provider and adds inline summarization within the agent loop for prompt cache efficiency — tightens the hooks integration and context pipeline without crossing a level boundary.
Internal plumbing and terminology alignment for the codebase index UI, plus an MCP Insiders channel setting — no capability boundary crossed, just tightening existing features.
Chunked file reading tightens Cline's multi-file awareness for large files, and Lazy Teammate Mode plus Notification hook polish refine existing UX without crossing any scorecard boundary.
Cline v3.76.0 ships a Kanban-based task management UI and loop detection for runaway tool calls — operational hygiene improvements that don't move any matrix dimension.
Cline v3.75.0 tightens hooks reliability with stabilized tests and cleans up example scaffolding — no capability boundary crossed, hooks system unchanged at level 3.
Cline v3.74.0 tightens API efficiency with free-model auto-detection and a file-read dedup cache — incremental improvements to core tooling and model flexibility, but nothing that crosses a level boundary.
Cline v3.73.0 adds W&B Inference by CoreWeave (17 models) to its provider roster and tightens parallel tool calling for OpenRouter and Cline providers — no level changes, but incrementally broadens the already-comprehensive model-flexibility and agent-core-tools surface.