Agent management and creation for the CLI.
Accepted internal output modes for CLI subcommands.
App color scheme.
The default agent name used when no -a flag is provided.
When True, compact_conversation requires human-in-the-loop (HITL) approval, like other gated tools.
Get the default coding agent instructions.
These are the immutable base instructions; the agent cannot modify them. Long-term memory (AGENTS.md) is handled separately by the middleware.
Get the glyph set for the current charset mode.
Get the default working directory for a given sandbox provider.
List subagents from user and/or project directories.
Scans for subagent definitions in the provided directories. Project subagents override user subagents with the same name.
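The override behavior can be sketched as a two-pass directory scan, where the project directory is visited last so its entries shadow user entries with the same name. The function name, the `*.md` definition format, and the argument names here are assumptions for illustration, not the CLI's actual API:

```python
from pathlib import Path


def discover_subagents(user_dir, project_dir):
    """Scan both directories for subagent definitions (assumed *.md files).

    Project subagents override user subagents with the same name because
    the project directory is scanned second.
    """
    agents = {}
    for directory in (user_dir, project_dir):
        d = Path(directory)
        if not d.is_dir():
            continue
        for path in sorted(d.glob("*.md")):
            # Later directories overwrite earlier entries with the same stem.
            agents[path.stem] = path
    return agents
```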
Check a URL for suspicious Unicode and domain spoofing patterns.
Detect deceptive or hidden Unicode code points in text.
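One minimal way to detect such code points is to flag every character in Unicode general category "Cf" (format characters), which covers bidi overrides, zero-width joiners and spaces, and the BOM. This is a sketch of the general technique, not the CLI's actual detection logic:

```python
import unicodedata


def find_hidden_code_points(text):
    """Return (index, char, name) triples for invisible format characters.

    Category "Cf" includes bidi controls such as U+202E RIGHT-TO-LEFT
    OVERRIDE as well as zero-width joiners/spaces and the BOM.
    """
    hits = []
    for i, ch in enumerate(text):
        if unicodedata.category(ch) == "Cf":
            hits.append((i, ch, unicodedata.name(ch, f"U+{ord(ch):04X}")))
    return hits
```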
Join safety warnings into a display string with an overflow indicator.
Render hidden Unicode characters as explicit markers.
Example output: abc<U+202E RIGHT-TO-LEFT OVERRIDE>def.
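Rendering that matches the example output above can be done by substituting each format character with a `<U+XXXX NAME>` marker. The function name is hypothetical; this only illustrates the marker format:

```python
import unicodedata


def render_hidden_unicode(text):
    """Replace invisible format characters with explicit <U+XXXX NAME> markers."""
    out = []
    for ch in text:
        if unicodedata.category(ch) == "Cf":
            name = unicodedata.name(ch, "UNKNOWN")
            out.append(f"<U+{ord(ch):04X} {name}>")
        else:
            out.append(ch)
    return "".join(out)
```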
Remove known dangerous/invisible Unicode characters from text.
Summarize Unicode issues for warning messages.
Deduplicates by code point. When more than max_items unique entries exist,
the summary is truncated with a "+N more entries" suffix.
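The deduplicate-then-truncate behavior can be sketched as follows; the function name and exact label format are assumptions, and only the dedupe/truncation logic mirrors the description above:

```python
import unicodedata


def summarize_unicode_issues(chars, max_items=3):
    """Summarize offending characters, deduped by code point, truncated with +N more."""
    seen = []
    for ch in chars:
        # Preserve first-seen order while deduplicating by code point.
        if ch not in seen:
            seen.append(ch)
    labels = [
        f"U+{ord(ch):04X} {unicodedata.name(ch, 'UNKNOWN')}"
        for ch in seen[:max_items]
    ]
    extra = len(seen) - max_items
    if extra > 0:
        labels.append(f"+{extra} more")
    return ", ".join(labels)
```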
List all available agents.
Reset an agent to default or copy from another agent.
Get the base system prompt for the agent.
Loads the base system prompt template from system_prompt.md and
interpolates dynamic sections (model identity, working directory,
skills path, execution mode).
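Template interpolation of this kind can be sketched with `string.Template`. The template text and the function signature here are hypothetical stand-ins for the real system_prompt.md and its loader:

```python
from string import Template

# Hypothetical template body; the real one is loaded from system_prompt.md.
_TEMPLATE = Template(
    "You are $model_identity.\n"
    "Working directory: $working_dir\n"
    "Skills path: $skills_path\n"
    "Execution mode: $execution_mode\n"
)


def build_system_prompt(model_identity, working_dir, skills_path, execution_mode):
    """Interpolate the dynamic sections into the base prompt template."""
    return _TEMPLATE.substitute(
        model_identity=model_identity,
        working_dir=working_dir,
        skills_path=skills_path,
        execution_mode=execution_mode,
    )
```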
Create a CLI-configured agent with flexible options.
This is the main entry point for creating a deepagents CLI agent, usable both internally and from external code (e.g., benchmarking frameworks).
Metadata for a connected MCP server and its tools.
Inject local context (git state, project structure, etc.) into the system prompt.
Runs a bash detection script via backend.execute() on first interaction
and again after each summarization event, stores the result in state, and
appends it to the system prompt on every model call.
Because the script runs inside the backend, it works for both local shells and remote sandboxes.
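The run-once-then-refresh-after-summarization pattern can be sketched with a small cache keyed by a summarization counter. The `state` dict, the `backend.execute()` call shape, and the epoch key are all illustrative assumptions about the middleware's internals:

```python
def inject_local_context(state, backend, script="git status --short"):
    """Run the detection script once per summarization epoch and cache the output.

    Because the script goes through backend.execute(), the same code path
    works for a local shell and a remote sandbox.
    """
    epoch = state.get("summarization_epoch", 0)
    cached = state.get("local_context")
    if cached is None or cached["epoch"] != epoch:
        # First interaction, or a summarization event bumped the epoch:
        # re-run the script and refresh the cache.
        output = backend.execute(script)
        cached = {"epoch": epoch, "output": output}
        state["local_context"] = cached
    # The cached output is appended to the system prompt on every model call.
    return cached["output"]
```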