Tools
MCPToolApprovalFunction
module-attribute
MCPToolApprovalFunction = Callable[
[MCPToolApprovalRequest],
MaybeAwaitable[MCPToolApprovalFunctionResult],
]
A function that approves or rejects a tool call.
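As a sketch, an approval function can be a plain (sync or async) callable that returns an approve/reject decision. The request attribute used below is an assumption for illustration; consult MCPToolApprovalRequest for the actual shape:

```python
# Hypothetical sketch of an MCP tool approval callback. It returns a
# dict shaped like MCPToolApprovalFunctionResult ({"approve": bool},
# with an optional "reason"). The `tool_name` attribute on the request
# is an assumption for illustration.
def approve_safe_tools(request) -> dict:
    tool_name = getattr(request, "tool_name", "")  # assumed attribute
    if tool_name.startswith("read_"):
        return {"approve": True}
    return {"approve": False, "reason": "Only read-only tools are auto-approved."}
```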
ShellApprovalFunction
module-attribute
ShellApprovalFunction = Callable[
[RunContextWrapper[Any], "ShellActionRequest", str],
MaybeAwaitable[bool],
]
A function that determines whether a shell action requires approval. Takes (run_context, action, call_id) and returns whether approval is needed.
ShellOnApprovalFunction
module-attribute
ShellOnApprovalFunction = Callable[
[RunContextWrapper[Any], "ToolApprovalItem"],
MaybeAwaitable[ShellOnApprovalFunctionResult],
]
A function that auto-approves or rejects a shell tool call when approval is needed. Takes (run_context, approval_item) and returns approval decision.
ApplyPatchApprovalFunction
module-attribute
ApplyPatchApprovalFunction = Callable[
[RunContextWrapper[Any], ApplyPatchOperation, str],
MaybeAwaitable[bool],
]
A function that determines whether an apply_patch operation requires approval. Takes (run_context, operation, call_id) and returns whether approval is needed.
ApplyPatchOnApprovalFunction
module-attribute
ApplyPatchOnApprovalFunction = Callable[
[RunContextWrapper[Any], "ToolApprovalItem"],
MaybeAwaitable[ApplyPatchOnApprovalFunctionResult],
]
A function that auto-approves or rejects an apply_patch tool call when approval is needed. Takes (run_context, approval_item) and returns approval decision.
LocalShellExecutor
module-attribute
LocalShellExecutor = Callable[
[LocalShellCommandRequest], MaybeAwaitable[str]
]
A function that executes a command on a shell.
ShellToolContainerSkill
module-attribute
ShellToolContainerSkill = Union[
ShellToolSkillReference, ShellToolInlineSkill
]
Container skill configuration.
ShellToolContainerNetworkPolicy
module-attribute
ShellToolContainerNetworkPolicy = Union[
ShellToolContainerNetworkPolicyAllowlist,
ShellToolContainerNetworkPolicyDisabled,
]
Network policy configuration for hosted shell containers.
ShellToolHostedEnvironment
module-attribute
ShellToolHostedEnvironment = Union[
ShellToolContainerAutoEnvironment,
ShellToolContainerReferenceEnvironment,
]
Hosted shell environment variants.
ShellToolEnvironment
module-attribute
ShellToolEnvironment = Union[
ShellToolLocalEnvironment, ShellToolHostedEnvironment
]
All supported shell environments.
ShellExecutor
module-attribute
ShellExecutor = Callable[
[ShellCommandRequest],
MaybeAwaitable[Union[str, ShellResult]],
]
Executes a shell command sequence and returns either text or structured output.
Tool
module-attribute
Tool = Union[
FunctionTool,
FileSearchTool,
WebSearchTool,
ComputerTool[Any],
HostedMCPTool,
ShellTool,
ApplyPatchTool,
LocalShellTool,
ImageGenerationTool,
CodeInterpreterTool,
ToolSearchTool,
]
A tool that can be used in an agent.
ToolOutputText
ToolOutputTextDict
ToolOutputImage
Bases: BaseModel
Represents a tool output that should be sent to the model as an image.
You can provide either an image_url (URL or data URL) or a file_id for previously uploaded
content. The optional detail can control vision detail.
Source code in src/agents/tool.py
check_at_least_one_required_field
check_at_least_one_required_field() -> ToolOutputImage
Validate that at least one of image_url or file_id is provided.
Source code in src/agents/tool.py
ToolOutputImageDict
Bases: TypedDict
TypedDict variant for image tool outputs.
Source code in src/agents/tool.py
ToolOutputFileContent
Bases: BaseModel
Represents a tool output that should be sent to the model as a file.
Provide one of file_data (base64), file_url, or file_id. You may also
provide an optional filename when using file_data to suggest a file name.
Source code in src/agents/tool.py
check_at_least_one_required_field
check_at_least_one_required_field() -> (
ToolOutputFileContent
)
Validate that at least one of file_data, file_url, or file_id is provided.
Source code in src/agents/tool.py
ToolOutputFileContentDict
Bases: TypedDict
TypedDict variant for file content tool outputs.
Source code in src/agents/tool.py
ComputerCreate
Bases: Protocol[ComputerT_co]
Initializes a computer for the current run context.
Source code in src/agents/tool.py
ComputerDispose
Bases: Protocol[ComputerT_contra]
Cleans up a computer initialized for a run context.
Source code in src/agents/tool.py
ComputerProvider
dataclass
Bases: Generic[ComputerT]
Configures create/dispose hooks for per-run computer lifecycle management.
Source code in src/agents/tool.py
FunctionToolResult
dataclass
Source code in src/agents/tool.py
run_item
instance-attribute
run_item: RunItem | None
The run item that was produced as a result of the tool call.
This can be None when the tool run is interrupted and no output item should be emitted yet.
interruptions
class-attribute
instance-attribute
interruptions: list[ToolApprovalItem] = field(
default_factory=list
)
Interruptions from nested agent runs (for agent-as-tool).
FunctionTool
dataclass
A tool that wraps a function. In most cases, you should use the function_tool helpers to
create a FunctionTool, as they let you easily wrap a Python function.
Source code in src/agents/tool.py
name
instance-attribute
The name of the tool, as shown to the LLM. Generally the name of the function.
params_json_schema
instance-attribute
The JSON schema for the tool's parameters.
on_invoke_tool
instance-attribute
on_invoke_tool: Callable[
[ToolContext[Any], str], Awaitable[Any]
]
A function that invokes the tool with the given context and parameters. The params passed are:
1. The tool run context.
2. The arguments from the LLM, as a JSON string.
You must return one of the structured tool output types (e.g. ToolOutputText, ToolOutputImage,
ToolOutputFileContent), a string representation of the tool output, a list of them,
or anything that str() can be called on.
In case of errors, you can either raise an Exception (which will cause the run to fail) or
return a string error message (which will be sent back to the LLM).
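A hand-rolled on_invoke_tool handler therefore parses the JSON argument string itself. Normally function_tool generates this for you; the following is only a sketch of the contract:

```python
import json

# Sketch of an on_invoke_tool implementation. The LLM's arguments arrive
# as a JSON string; a plain string is one of the accepted return types.
async def on_invoke_add(tool_context, arguments: str) -> str:
    try:
        params = json.loads(arguments)
        return str(params["a"] + params["b"])
    except (json.JSONDecodeError, KeyError) as exc:
        # Returning an error string sends it back to the LLM instead of
        # failing the run.
        return f"Invalid arguments: {exc}"
```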
strict_json_schema
class-attribute
instance-attribute
Whether the JSON schema is in strict mode. We strongly recommend setting this to True, as it increases the likelihood of correct JSON input.
is_enabled
class-attribute
instance-attribute
is_enabled: (
bool
| Callable[
[RunContextWrapper[Any], AgentBase],
MaybeAwaitable[bool],
]
) = True
Whether the tool is enabled. Either a bool or a Callable that takes the run context and agent and returns whether the tool is enabled. You can use this to dynamically enable/disable a tool based on your context/state.
tool_input_guardrails
class-attribute
instance-attribute
tool_input_guardrails: (
list[ToolInputGuardrail[Any]] | None
) = None
Optional list of input guardrails to run before invoking this tool.
tool_output_guardrails
class-attribute
instance-attribute
tool_output_guardrails: (
list[ToolOutputGuardrail[Any]] | None
) = None
Optional list of output guardrails to run after invoking this tool.
needs_approval
class-attribute
instance-attribute
needs_approval: (
bool
| Callable[
[RunContextWrapper[Any], dict[str, Any], str],
Awaitable[bool],
]
) = False
Whether the tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, tool_parameters, call_id) and returns whether this specific call needs approval.
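For instance, a per-call check can inspect the parsed tool parameters. A sketch matching the documented (run_context, tool_parameters, call_id) signature; the `amount` parameter is a hypothetical example:

```python
# Sketch of a per-call needs_approval function: only large transfers
# interrupt the run for human approval.
async def transfer_needs_approval(run_context, params: dict, call_id: str) -> bool:
    return params.get("amount", 0) > 1000
```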
timeout_seconds
class-attribute
instance-attribute
Optional timeout (seconds) for each tool invocation.
timeout_behavior
class-attribute
instance-attribute
How to handle timeout events.
- "error_as_result": return a model-visible timeout error string.
- "raise_exception": raise a ToolTimeoutError and fail the run.
timeout_error_function
class-attribute
instance-attribute
Optional formatter for timeout errors when timeout_behavior is "error_as_result".
defer_loading
class-attribute
instance-attribute
Whether the Responses API should hide this tool definition until tool search loads it.
FileSearchTool
dataclass
A hosted tool that lets the LLM search through a vector store. Currently only supported with OpenAI models, using the Responses API.
Source code in src/agents/tool.py
vector_store_ids
instance-attribute
The IDs of the vector stores to search.
max_num_results
class-attribute
instance-attribute
The maximum number of results to return.
include_search_results
class-attribute
instance-attribute
Whether to include the search results in the output produced by the LLM.
ranking_options
class-attribute
instance-attribute
Ranking options for search.
WebSearchTool
dataclass
A hosted tool that lets the LLM search the web. Currently only supported with OpenAI models, using the Responses API.
Source code in src/agents/tool.py
user_location
class-attribute
instance-attribute
Optional location for the search. Lets you customize results to be relevant to a location.
filters
class-attribute
instance-attribute
A filter to apply based on file attributes.
ComputerTool
dataclass
Bases: Generic[ComputerT]
A hosted tool that lets the LLM control a computer.
Source code in src/agents/tool.py
computer
instance-attribute
The computer implementation, or a factory that produces a computer per run.
on_safety_check
class-attribute
instance-attribute
on_safety_check: (
Callable[
[ComputerToolSafetyCheckData], MaybeAwaitable[bool]
]
| None
) = None
Optional callback to acknowledge computer tool safety checks.
ComputerToolSafetyCheckData
dataclass
Information about a computer tool safety check.
Source code in src/agents/tool.py
MCPToolApprovalRequest
dataclass
A request to approve a tool call.
Source code in src/agents/tool.py
MCPToolApprovalFunctionResult
Bases: TypedDict
The result of an MCP tool approval function.
Source code in src/agents/tool.py
ShellOnApprovalFunctionResult
Bases: TypedDict
The result of a shell tool on_approval callback.
Source code in src/agents/tool.py
ApplyPatchOnApprovalFunctionResult
Bases: TypedDict
The result of an apply_patch tool on_approval callback.
Source code in src/agents/tool.py
HostedMCPTool
dataclass
A tool that allows the LLM to use a remote MCP server. The LLM will automatically list and
call tools, without requiring a round trip back to your code.
If you want to run MCP servers locally via stdio, in a VPC or other non-publicly-accessible
environment, or you just prefer to run tool calls locally, then you can instead use the servers
in agents.mcp and pass Agent(mcp_servers=[...]) to the agent.
Source code in src/agents/tool.py
tool_config
instance-attribute
The MCP tool config, which includes the server URL and other settings.
on_approval_request
class-attribute
instance-attribute
on_approval_request: MCPToolApprovalFunction | None = None
An optional function that will be called if approval is requested for an MCP tool. If not
provided, you will need to manually add approvals/rejections to the input and call
Runner.run(...) again.
CodeInterpreterTool
dataclass
A tool that allows the LLM to execute code in a sandboxed environment.
Source code in src/agents/tool.py
ImageGenerationTool
dataclass
A tool that allows the LLM to generate images.
Source code in src/agents/tool.py
LocalShellCommandRequest
dataclass
A request to execute a command on a shell.
Source code in src/agents/tool.py
LocalShellTool
dataclass
A tool that allows the LLM to execute commands on a shell.
For more details, see: https://platform.openai.com/docs/guides/tools-local-shell
Source code in src/agents/tool.py
executor
instance-attribute
executor: LocalShellExecutor
A function that executes a command on a shell.
ShellToolLocalSkill
ShellToolSkillReference
ShellToolInlineSkillSource
ShellToolInlineSkill
ShellToolContainerNetworkPolicyDomainSecret
ShellToolContainerNetworkPolicyAllowlist
Bases: TypedDict
Allowlist network policy for hosted containers.
Source code in src/agents/tool.py
ShellToolContainerNetworkPolicyDisabled
ShellToolLocalEnvironment
ShellToolContainerAutoEnvironment
Bases: TypedDict
Auto-provisioned hosted container environment.
Source code in src/agents/tool.py
ShellToolContainerReferenceEnvironment
ShellCallOutcome
dataclass
ShellCommandOutput
dataclass
Structured output for a single shell command execution.
Source code in src/agents/tool.py
ShellResult
dataclass
ShellActionRequest
dataclass
ShellCallData
dataclass
Normalized shell call data provided to shell executors.
Source code in src/agents/tool.py
ShellCommandRequest
dataclass
ShellTool
dataclass
Next-generation shell tool. LocalShellTool will be deprecated in favor of this.
Source code in src/agents/tool.py
needs_approval
class-attribute
instance-attribute
needs_approval: bool | ShellApprovalFunction = False
Whether the shell tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, action, call_id) and returns whether this specific call needs approval.
on_approval
class-attribute
instance-attribute
on_approval: ShellOnApprovalFunction | None = None
Optional handler to auto-approve or reject when approval is required. If provided, it will be invoked immediately when an approval is needed.
environment
class-attribute
instance-attribute
environment: ShellToolEnvironment | None = None
Execution environment for shell commands.
If omitted, local mode is used.
__post_init__
Validate shell tool configuration and normalize environment fields.
Source code in src/agents/tool.py
ApplyPatchTool
dataclass
Hosted apply_patch tool. Lets the model request file mutations via unified diffs.
Source code in src/agents/tool.py
needs_approval
class-attribute
instance-attribute
needs_approval: bool | ApplyPatchApprovalFunction = False
Whether the apply_patch tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, operation, call_id) and returns whether this specific call needs approval.
on_approval
class-attribute
instance-attribute
on_approval: ApplyPatchOnApprovalFunction | None = None
Optional handler to auto-approve or reject when approval is required. If provided, it will be invoked immediately when an approval is needed.
ToolSearchTool
dataclass
A hosted Responses API tool that lets the model search deferred tools by namespace.
execution="client" is supported for manual Responses orchestration, but the standard
OpenAI Agents runner does not auto-execute client tool search calls.
Source code in src/agents/tool.py
with_function_tool_failure_error_handler
with_function_tool_failure_error_handler(
invoke_tool_impl: Callable[
[ToolContext[Any], str], Awaitable[Any]
],
on_handled_error: Callable[
[FunctionTool, Exception, str], None
],
) -> Callable[[ToolContext[Any], str], Awaitable[Any]]
Wrap a tool invoker so copied FunctionTools resolve failure policy against themselves.
Source code in src/agents/tool.py
resolve_computer
async
resolve_computer(
*,
tool: ComputerTool[Any],
run_context: RunContextWrapper[Any],
) -> ComputerLike
Resolve a computer for a given run context, initializing it if needed.
Source code in src/agents/tool.py
dispose_resolved_computers
async
dispose_resolved_computers(
*, run_context: RunContextWrapper[Any]
) -> None
Dispose any computer instances created for the provided run context.
Source code in src/agents/tool.py
tool_namespace
tool_namespace(
*,
name: str,
description: str | None,
tools: list[FunctionTool],
) -> list[FunctionTool]
Attach namespace metadata to function tools for OpenAI Responses tool search.
Source code in src/agents/tool.py
get_function_tool_responses_only_features
get_function_tool_responses_only_features(
tool: FunctionTool,
) -> tuple[str, ...]
Return Responses-only features used by a function tool.
Source code in src/agents/tool.py
ensure_function_tool_supports_responses_only_features
ensure_function_tool_supports_responses_only_features(
tool: FunctionTool, *, backend_name: str
) -> None
Reject Responses-only function-tool features on unsupported backends.
Source code in src/agents/tool.py
ensure_tool_choice_supports_backend
ensure_tool_choice_supports_backend(
tool_choice: Literal["auto", "required", "none"]
| str
| Any
| None,
*,
backend_name: str,
) -> None
Backend-specific converters should validate reserved tool choices.
is_responses_tool_search_surface
is_responses_tool_search_surface(tool: Tool) -> bool
Return True when a tool can be exposed through hosted Responses tool search.
Source code in src/agents/tool.py
has_responses_tool_search_surface
has_responses_tool_search_surface(
tools: list[Tool],
) -> bool
Return True when tool search has at least one eligible searchable surface.
is_required_tool_search_surface
is_required_tool_search_surface(tool: Tool) -> bool
Return True when a tool requires ToolSearchTool() to stay reachable.
Source code in src/agents/tool.py
has_required_tool_search_surface
has_required_tool_search_surface(tools: list[Tool]) -> bool
Return True when any enabled surface requires ToolSearchTool().
validate_responses_tool_search_configuration
validate_responses_tool_search_configuration(
tools: list[Tool],
*,
allow_opaque_search_surface: bool = False,
) -> None
Validate the Responses-only tool_search and defer-loading contract.
Source code in src/agents/tool.py
prune_orphaned_tool_search_tools
Preserve explicit ToolSearchTool entries until request conversion validates them.
Whether a tool_search definition is valid can depend on prompt-managed surfaces that are only known during request conversion, so pruning here hides misconfiguration instead of surfacing a clear error.
Source code in src/agents/tool.py
default_tool_error_function
default_tool_error_function(
ctx: RunContextWrapper[Any], error: Exception
) -> str
The default tool error function, which just returns a generic error message.
Source code in src/agents/tool.py
default_tool_timeout_error_message
Build the default message returned to the model when a tool times out.
set_function_tool_failure_error_function
set_function_tool_failure_error_function(
function_tool: FunctionTool,
failure_error_function: ToolErrorFunction
| None
| object = _UNSET_FAILURE_ERROR_FUNCTION,
) -> FunctionTool
Store internal failure formatter config for tool wrappers and runtime fallbacks.
Source code in src/agents/tool.py
resolve_function_tool_failure_error_function
resolve_function_tool_failure_error_function(
function_tool: FunctionTool,
) -> ToolErrorFunction | None
Return the configured tool failure formatter for runtime-generated error handling.
Source code in src/agents/tool.py
maybe_invoke_function_tool_failure_error_function
async
maybe_invoke_function_tool_failure_error_function(
*,
function_tool: FunctionTool,
context: RunContextWrapper[Any],
error: BaseException,
) -> str | None
Invoke the configured failure formatter, if one exists.
Source code in src/agents/tool.py
invoke_function_tool
async
invoke_function_tool(
*,
function_tool: FunctionTool,
context: ToolContext[Any],
arguments: str,
) -> Any
Invoke a function tool, enforcing timeout configuration when provided.
Source code in src/agents/tool.py
function_tool
function_tool(
func: ToolFunction[...],
*,
name_override: str | None = None,
description_override: str | None = None,
docstring_style: DocstringStyle | None = None,
use_docstring_info: bool = True,
failure_error_function: ToolErrorFunction | None = None,
strict_mode: bool = True,
is_enabled: bool
| Callable[
[RunContextWrapper[Any], AgentBase],
MaybeAwaitable[bool],
] = True,
needs_approval: bool
| Callable[
[RunContextWrapper[Any], dict[str, Any], str],
Awaitable[bool],
] = False,
tool_input_guardrails: list[ToolInputGuardrail[Any]]
| None = None,
tool_output_guardrails: list[ToolOutputGuardrail[Any]]
| None = None,
timeout: float | None = None,
timeout_behavior: ToolTimeoutBehavior = "error_as_result",
timeout_error_function: ToolErrorFunction | None = None,
defer_loading: bool = False,
) -> FunctionTool
function_tool(
*,
name_override: str | None = None,
description_override: str | None = None,
docstring_style: DocstringStyle | None = None,
use_docstring_info: bool = True,
failure_error_function: ToolErrorFunction | None = None,
strict_mode: bool = True,
is_enabled: bool
| Callable[
[RunContextWrapper[Any], AgentBase],
MaybeAwaitable[bool],
] = True,
needs_approval: bool
| Callable[
[RunContextWrapper[Any], dict[str, Any], str],
Awaitable[bool],
] = False,
tool_input_guardrails: list[ToolInputGuardrail[Any]]
| None = None,
tool_output_guardrails: list[ToolOutputGuardrail[Any]]
| None = None,
timeout: float | None = None,
timeout_behavior: ToolTimeoutBehavior = "error_as_result",
timeout_error_function: ToolErrorFunction | None = None,
defer_loading: bool = False,
) -> Callable[[ToolFunction[...]], FunctionTool]
function_tool(
func: ToolFunction[...] | None = None,
*,
name_override: str | None = None,
description_override: str | None = None,
docstring_style: DocstringStyle | None = None,
use_docstring_info: bool = True,
failure_error_function: ToolErrorFunction
| None
| object = _UNSET_FAILURE_ERROR_FUNCTION,
strict_mode: bool = True,
is_enabled: bool
| Callable[
[RunContextWrapper[Any], AgentBase],
MaybeAwaitable[bool],
] = True,
needs_approval: bool
| Callable[
[RunContextWrapper[Any], dict[str, Any], str],
Awaitable[bool],
] = False,
tool_input_guardrails: list[ToolInputGuardrail[Any]]
| None = None,
tool_output_guardrails: list[ToolOutputGuardrail[Any]]
| None = None,
timeout: float | None = None,
timeout_behavior: ToolTimeoutBehavior = "error_as_result",
timeout_error_function: ToolErrorFunction | None = None,
defer_loading: bool = False,
) -> (
FunctionTool
| Callable[[ToolFunction[...]], FunctionTool]
)
Decorator to create a FunctionTool from a function. By default, we will:
1. Parse the function signature to create a JSON schema for the tool's parameters.
2. Use the function's docstring to populate the tool's description.
3. Use the function's docstring to populate argument descriptions.
The docstring style is detected automatically, but you can override it.
If the function takes a RunContextWrapper as the first argument, it must match the
context type of the agent that uses the tool.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `func` | `ToolFunction[...] \| None` | The function to wrap. | `None` |
| `name_override` | `str \| None` | If provided, use this name for the tool instead of the function's name. | `None` |
| `description_override` | `str \| None` | If provided, use this description for the tool instead of the function's docstring. | `None` |
| `docstring_style` | `DocstringStyle \| None` | If provided, use this style for the tool's docstring. If not provided, we will attempt to auto-detect the style. | `None` |
| `use_docstring_info` | `bool` | If True, use the function's docstring to populate the tool's description and argument descriptions. | `True` |
| `failure_error_function` | `ToolErrorFunction \| None \| object` | If provided, use this function to generate an error message when the tool call fails. The error message is sent to the LLM. If you pass None, then no error message will be sent and instead an Exception will be raised. | `_UNSET_FAILURE_ERROR_FUNCTION` |
| `strict_mode` | `bool` | Whether to enable strict mode for the tool's JSON schema. We strongly recommend setting this to True, as it increases the likelihood of correct JSON input. If False, it allows non-strict JSON schemas. For example, if a parameter has a default value, it will be optional, additional properties are allowed, etc. See here for more: https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses#supported-schemas | `True` |
| `is_enabled` | `bool \| Callable[[RunContextWrapper[Any], AgentBase], MaybeAwaitable[bool]]` | Whether the tool is enabled. Can be a bool or a callable that takes the run context and agent and returns whether the tool is enabled. Disabled tools are hidden from the LLM at runtime. | `True` |
| `needs_approval` | `bool \| Callable[[RunContextWrapper[Any], dict[str, Any], str], Awaitable[bool]]` | Whether the tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, tool_parameters, call_id) and returns whether this specific call needs approval. | `False` |
| `tool_input_guardrails` | `list[ToolInputGuardrail[Any]] \| None` | Optional list of guardrails to run before invoking the tool. | `None` |
| `tool_output_guardrails` | `list[ToolOutputGuardrail[Any]] \| None` | Optional list of guardrails to run after the tool returns. | `None` |
| `timeout` | `float \| None` | Optional timeout in seconds for each tool call. | `None` |
| `timeout_behavior` | `ToolTimeoutBehavior` | Timeout handling mode. "error_as_result" returns a model-visible message, while "raise_exception" raises ToolTimeoutError and fails the run. | `'error_as_result'` |
| `timeout_error_function` | `ToolErrorFunction \| None` | Optional formatter used for timeout messages when timeout_behavior="error_as_result". | `None` |
| `defer_loading` | `bool` | Whether to hide this tool definition until Responses API tool search explicitly loads it. | `False` |
Source code in src/agents/tool.py