strands-agents-tools

| Field | Value |
|-------|-------|
| Name | strands-agents-tools |
| Version | 0.2.10 |
| Summary | A collection of specialized tools for Strands Agents |
| Upload time | 2025-10-08 16:29:07 |
| Requires Python | >=3.10 |
| License | Apache-2.0 |
            <div align="center">
  <div>
    <a href="https://strandsagents.com">
      <img src="https://strandsagents.com/latest/assets/logo-github.svg" alt="Strands Agents" width="55px" height="105px">
    </a>
  </div>

  <h1>
    Strands Agents Tools
  </h1>

  <h2>
    A model-driven approach to building AI agents in just a few lines of code.
  </h2>

  <div align="center">
    <a href="https://github.com/strands-agents/tools/graphs/commit-activity"><img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/strands-agents/tools"/></a>
    <a href="https://github.com/strands-agents/tools/issues"><img alt="GitHub open issues" src="https://img.shields.io/github/issues/strands-agents/tools"/></a>
    <a href="https://github.com/strands-agents/tools/pulls"><img alt="GitHub open pull requests" src="https://img.shields.io/github/issues-pr/strands-agents/tools"/></a>
    <a href="https://github.com/strands-agents/tools/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/strands-agents/tools"/></a>
    <a href="https://pypi.org/project/strands-agents-tools/"><img alt="PyPI version" src="https://img.shields.io/pypi/v/strands-agents-tools"/></a>
    <a href="https://python.org"><img alt="Python versions" src="https://img.shields.io/pypi/pyversions/strands-agents-tools"/></a>
  </div>

  <p>
    <a href="https://strandsagents.com/">Documentation</a>
    ◆ <a href="https://github.com/strands-agents/samples">Samples</a>
    ◆ <a href="https://github.com/strands-agents/sdk-python">Python SDK</a>
    ◆ <a href="https://github.com/strands-agents/tools">Tools</a>
    ◆ <a href="https://github.com/strands-agents/agent-builder">Agent Builder</a>
    ◆ <a href="https://github.com/strands-agents/mcp-server">MCP Server</a>
  </p>
</div>

Strands Agents Tools is a community-driven project that provides a powerful set of tools for your agents to use. It bridges the gap between large language models and practical applications by offering ready-to-use tools for file operations, system execution, API interactions, mathematical operations, and more.

## ✨ Features

- 📁 **File Operations** - Read, write, and edit files with syntax highlighting and intelligent modifications
- 🖥️ **Shell Integration** - Execute and interact with shell commands securely
- 🧠 **Memory** - Store user and agent memories across agent runs to provide personalized experiences with both Mem0 and Amazon Bedrock Knowledge Bases
- 🕸️ **Web Infrastructure** - Perform web searches, extract page content, and crawl websites with Tavily and Exa-powered tools
- 🌐 **HTTP Client** - Make API requests with comprehensive authentication support
- 💬 **Slack Client** - Real-time Slack events, message processing, and Slack API access
- 🐍 **Python Execution** - Run Python code snippets with state persistence, user confirmation for code execution, and safety features
- 🧮 **Mathematical Tools** - Perform advanced calculations with symbolic math capabilities
- ☁️ **AWS Integration** - Seamless access to AWS services
- 🖼️ **Image Processing** - Generate and process images for AI applications
- 🎥 **Video Processing** - Use models and agents to generate dynamic videos
- 🎙️ **Audio Output** - Enable models to generate audio and speak
- 🔄 **Environment Management** - Handle environment variables safely
- 📝 **Journaling** - Create and manage structured logs and journals
- ⏱️ **Task Scheduling** - Schedule and manage cron jobs
- 🧠 **Advanced Reasoning** - Tools for complex thinking and reasoning capabilities
- 🐝 **Swarm Intelligence** - Coordinate multiple AI agents for parallel problem solving with shared memory
- 🔌 **Dynamic MCP Client** - ⚠️ Dynamically connect to external MCP servers and load remote tools (use with caution - see security warnings)
- 🔄 **Multiple tools in Parallel**  - Call multiple other tools at the same time in parallel with Batch Tool
- 🔍 **Browser Tool** - Give an agent the ability to perform automated actions in a browser (Chromium)
- 📈 **Diagram** - Create AWS cloud diagrams, basic diagrams, or UML diagrams using Python libraries
- 📰 **RSS Feed Manager** - Subscribe, fetch, and process RSS feeds with content filtering and persistent storage
- 🖱️ **Computer Tool** - Automate desktop actions including mouse movements, keyboard input, screenshots, and application management

## 📦 Installation

### Quick Install

```bash
pip install strands-agents-tools
```

To install the dependencies for optional tools:

```bash
pip install "strands-agents-tools[mem0_memory,use_browser,rss,use_computer]"
```

### Development Install

```bash
# Clone the repository
git clone https://github.com/strands-agents/tools.git
cd tools

# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install in development mode
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install
```

### Tools Overview

Below is a comprehensive table of all available tools, how to use them with an agent, and typical use cases:

| Tool | Agent Usage | Use Case |
|------|-------------|----------|
| a2a_client | `provider = A2AClientToolProvider(known_agent_urls=["http://localhost:9000"]); agent = Agent(tools=provider.tools)` | Discover and communicate with A2A-compliant agents, send messages between agents |
| file_read | `agent.tool.file_read(path="path/to/file.txt")` | Reading configuration files, parsing code files, loading datasets |
| file_write | `agent.tool.file_write(path="path/to/file.txt", content="file content")` | Writing results to files, creating new files, saving output data |
| editor | `agent.tool.editor(command="view", path="path/to/file.py")` | Advanced file operations like syntax highlighting, pattern replacement, and multi-file edits |
| shell* | `agent.tool.shell(command="ls -la")` | Executing shell commands, interacting with the operating system, running scripts |
| http_request | `agent.tool.http_request(method="GET", url="https://api.example.com/data")` | Making API calls, fetching web data, sending data to external services |
| tavily_search | `agent.tool.tavily_search(query="What is artificial intelligence?", search_depth="advanced")` | Real-time web search optimized for AI agents with a variety of custom parameters |
| tavily_extract | `agent.tool.tavily_extract(urls=["www.tavily.com"], extract_depth="advanced")` | Extract clean, structured content from web pages with advanced processing and noise removal |
| tavily_crawl | `agent.tool.tavily_crawl(url="www.tavily.com", max_depth=2, instructions="Find API docs")` | Crawl websites intelligently starting from a base URL with filtering and extraction |
| tavily_map | `agent.tool.tavily_map(url="www.tavily.com", max_depth=2, instructions="Find all pages")` | Map website structure and discover URLs starting from a base URL without content extraction |
| exa_search | `agent.tool.exa_search(query="Best project management tools", text=True)` | Intelligent web search with auto mode (default) that combines neural and keyword search for optimal results |
| exa_get_contents | `agent.tool.exa_get_contents(urls=["https://example.com/article"], text=True, summary={"query": "key points"})` | Extract full content and summaries from specific URLs with live crawling fallback |
| python_repl* | `agent.tool.python_repl(code="import pandas as pd\ndf = pd.read_csv('data.csv')\nprint(df.head())")` | Running Python code snippets, data analysis, executing complex logic with user confirmation for security |
| calculator | `agent.tool.calculator(expression="2 * sin(pi/4) + log(e**2)")` | Performing mathematical operations, symbolic math, equation solving |
| code_interpreter | `code_interpreter = AgentCoreCodeInterpreter(region="us-west-2"); agent = Agent(tools=[code_interpreter.code_interpreter])` | Execute code in isolated sandbox environments with multi-language support (Python, JavaScript, TypeScript), persistent sessions, and file operations |
| use_aws | `agent.tool.use_aws(service_name="s3", operation_name="list_buckets", parameters={}, region="us-west-2")` | Interacting with AWS services, cloud resource management |
| retrieve | `agent.tool.retrieve(text="What is STRANDS?")` | Retrieving information from Amazon Bedrock Knowledge Bases |
| nova_reels | `agent.tool.nova_reels(action="create", text="A cinematic shot of mountains", s3_bucket="my-bucket")` | Create high-quality videos using Amazon Bedrock Nova Reel with configurable parameters via environment variables |
| agent_core_memory | `agent.tool.agent_core_memory(action="record", content="Hello, I like vegetarian food")` | Store and retrieve memories with Amazon Bedrock Agent Core Memory service |
| mem0_memory | `agent.tool.mem0_memory(action="store", content="Remember I like to play tennis", user_id="alex")` | Store user and agent memories across agent runs to provide personalized experience |
| bright_data | `agent.tool.bright_data(action="scrape_as_markdown", url="https://example.com")` | Web scraping, search queries, screenshot capture, and structured data extraction from websites and different data feeds|
| memory | `agent.tool.memory(action="retrieve", query="product features")` | Store, retrieve, list, and manage documents in Amazon Bedrock Knowledge Bases with configurable parameters via environment variables |
| environment | `agent.tool.environment(action="list", prefix="AWS_")` | Managing environment variables, configuration management |
| generate_image_stability | `agent.tool.generate_image_stability(prompt="A tranquil pool")` | Creating images using Stability AI models |
| generate_image | `agent.tool.generate_image(prompt="A sunset over mountains")` | Creating AI-generated images for various applications |
| image_reader | `agent.tool.image_reader(image_path="path/to/image.jpg")` | Processing and reading image files for AI analysis |
| journal | `agent.tool.journal(action="write", content="Today's progress notes")` | Creating structured logs, maintaining documentation |
| think | `agent.tool.think(thought="Complex problem to analyze", cycle_count=3)` | Advanced reasoning, multi-step thinking processes |
| load_tool | `agent.tool.load_tool(path="path/to/custom_tool.py", name="custom_tool")` | Dynamically loading custom tools and extensions |
| swarm | `agent.tool.swarm(task="Analyze this problem", swarm_size=3, coordination_pattern="collaborative")` | Coordinating multiple AI agents to solve complex problems through collective intelligence |
| current_time | `agent.tool.current_time(timezone="US/Pacific")` | Get the current time in ISO 8601 format for a specified timezone |
| sleep | `agent.tool.sleep(seconds=5)` | Pause execution for the specified number of seconds, interruptible with SIGINT (Ctrl+C) |
| agent_graph | `agent.tool.agent_graph(agents=["agent1", "agent2"], connections=[{"from": "agent1", "to": "agent2"}])` | Create and visualize agent relationship graphs for complex multi-agent systems |
| cron* | `agent.tool.cron(action="schedule", name="task", schedule="0 * * * *", command="backup.sh")` | Schedule and manage recurring tasks with cron job syntax |
| slack | `agent.tool.slack(action="post_message", channel="general", text="Hello team!")` | Interact with Slack workspace for messaging and monitoring |
| speak | `agent.tool.speak(text="Operation completed successfully", style="green", mode="polly")` | Output status messages with rich formatting and optional text-to-speech |
| stop | `agent.tool.stop(message="Process terminated by user request")` | Gracefully terminate agent execution with custom message |
| handoff_to_user | `agent.tool.handoff_to_user(message="Please confirm action", breakout_of_loop=False)` | Hand off control to user for confirmation, input, or complete task handoff |
| use_llm | `agent.tool.use_llm(prompt="Analyze this data", system_prompt="You are a data analyst")` | Create nested AI loops with customized system prompts for specialized tasks |
| workflow | `agent.tool.workflow(action="create", name="data_pipeline", steps=[{"tool": "file_read"}, {"tool": "python_repl"}])` | Define, execute, and manage multi-step automated workflows |
| mcp_client | `agent.tool.mcp_client(action="connect", connection_id="my_server", transport="stdio", command="python", args=["server.py"])` | ⚠️ **SECURITY WARNING**: Dynamically connect to external MCP servers via stdio, sse, or streamable_http, list tools, and call remote tools. This can pose security risks as agents may connect to malicious servers. Use with caution in production. |
| batch | `agent.tool.batch(invocations=[{"name": "current_time", "arguments": {"timezone": "Europe/London"}}, {"name": "stop", "arguments": {}}])` | Call multiple other tools in parallel. |
| browser | `browser = LocalChromiumBrowser(); agent = Agent(tools=[browser.browser])` | Web scraping, automated testing, form filling, web automation tasks |
| diagram | `agent.tool.diagram(diagram_type="cloud", nodes=[{"id": "s3", "type": "S3"}], edges=[])` | Create AWS cloud architecture diagrams, network diagrams, graphs, and UML diagrams (all 14 types) |
| rss | `agent.tool.rss(action="subscribe", url="https://example.com/feed.xml", feed_id="tech_news")` | Manage RSS feeds: subscribe, fetch, read, search, and update content from various sources |
| use_computer | `agent.tool.use_computer(action="click", x=100, y=200, app_name="Chrome")` | Desktop automation, GUI interaction, screen capture |
| search_video | `agent.tool.search_video(query="people discussing AI")` | Semantic video search using TwelveLabs' Marengo model |
| chat_video | `agent.tool.chat_video(prompt="What are the main topics?", video_id="video_123")` | Interactive video analysis using TwelveLabs' Pegasus model |

\* *These tools do not work on Windows.*

## 💻 Usage Examples

### File Operations

```python
from strands import Agent
from strands_tools import file_read, file_write, editor

agent = Agent(tools=[file_read, file_write, editor])

agent.tool.file_read(path="config.json")
agent.tool.file_write(path="output.txt", content="Hello, world!")
agent.tool.editor(command="view", path="script.py")
```

### Dynamic MCP Client Integration

⚠️ **SECURITY WARNING**: The Dynamic MCP Client allows agents to autonomously connect to external MCP servers and load remote tools at runtime. This poses significant security risks as agents can potentially connect to malicious servers and execute untrusted code. Use with extreme caution in production environments.

This tool is different from the static MCP server implementation in the Strands SDK (see [MCP Tools Documentation](https://github.com/strands-agents/docs/blob/main/docs/user-guide/concepts/tools/mcp-tools.md)) which uses pre-configured, trusted MCP servers.

```python
from strands import Agent
from strands_tools import mcp_client

agent = Agent(tools=[mcp_client])

# Connect to a custom MCP server via stdio
agent.tool.mcp_client(
    action="connect",
    connection_id="my_tools",
    transport="stdio",
    command="python",
    args=["my_mcp_server.py"]
)

# List available tools on the server
tools = agent.tool.mcp_client(
    action="list_tools",
    connection_id="my_tools"
)

# Call a tool from the MCP server
result = agent.tool.mcp_client(
    action="call_tool",
    connection_id="my_tools",
    tool_name="calculate",
    tool_args={"x": 10, "y": 20}
)

# Connect to a SSE-based server
agent.tool.mcp_client(
    action="connect",
    connection_id="web_server",
    transport="sse",
    server_url="http://localhost:8080/sse"
)

# Connect to a streamable HTTP server
agent.tool.mcp_client(
    action="connect",
    connection_id="http_server",
    transport="streamable_http",
    server_url="https://api.example.com/mcp",
    headers={"Authorization": "Bearer token"},
    timeout=60
)

# Load MCP tools into agent's registry for direct access
# ⚠️ WARNING: This loads external tools directly into the agent
agent.tool.mcp_client(
    action="load_tools",
    connection_id="my_tools"
)
# Now you can call MCP tools directly as: agent.tool.calculate(x=10, y=20)
```

### Shell Commands

*Note: `shell` does not work on Windows.*

```python
from strands import Agent
from strands_tools import shell

agent = Agent(tools=[shell])

# Execute a single command
result = agent.tool.shell(command="ls -la")

# Execute a sequence of commands
results = agent.tool.shell(command=["mkdir -p test_dir", "cd test_dir", "touch test.txt"])

# Execute commands with error handling
agent.tool.shell(command="risky-command", ignore_errors=True)
```

### HTTP Requests

```python
import json

from strands import Agent
from strands_tools import http_request

agent = Agent(tools=[http_request])

# Make a simple GET request
response = agent.tool.http_request(
    method="GET",
    url="https://api.example.com/data"
)

# POST request with authentication
response = agent.tool.http_request(
    method="POST",
    url="https://api.example.com/resource",
    headers={"Content-Type": "application/json"},
    body=json.dumps({"key": "value"}),
    auth_type="Bearer",
    auth_token="your_token_here"
)

# Convert HTML webpages to markdown for better readability
response = agent.tool.http_request(
    method="GET",
    url="https://example.com/article",
    convert_to_markdown=True
)
```

### Tavily Search, Extract, Crawl, and Map

```python
from strands import Agent
from strands_tools.tavily import (
    tavily_search, tavily_extract, tavily_crawl, tavily_map
)

# For async usage, call the corresponding *_async functions with await.
# Synchronous usage:
agent = Agent(tools=[tavily_search, tavily_extract, tavily_crawl, tavily_map])

# Real-time web search
result = agent.tool.tavily_search(
    query="Latest developments in renewable energy",
    search_depth="advanced",
    topic="news",
    max_results=10,
    include_raw_content=True
)

# Extract content from multiple URLs
result = agent.tool.tavily_extract(
    urls=["www.tavily.com", "www.apple.com"],
    extract_depth="advanced",
    format="markdown"
)

# Advanced crawl with instructions and filtering
result = agent.tool.tavily_crawl(
    url="www.tavily.com",
    max_depth=2,
    limit=50,
    instructions="Find all API documentation and developer guides",
    extract_depth="advanced",
    include_images=True
)

# Basic website mapping
result = agent.tool.tavily_map(url="www.tavily.com")

```

### Exa Search and Contents

```python
from strands import Agent
from strands_tools.exa import exa_search, exa_get_contents

agent = Agent(tools=[exa_search, exa_get_contents])

# Basic search (auto mode is default and recommended)
result = agent.tool.exa_search(
    query="Best project management software",
    text=True
)

# Company-specific search when needed
result = agent.tool.exa_search(
    query="Anthropic AI safety research",
    category="company",
    include_domains=["anthropic.com"],
    num_results=5,
    summary={"query": "key research areas and findings"}
)

# News search with date filtering
result = agent.tool.exa_search(
    query="AI regulation policy updates",
    category="news",
    start_published_date="2024-01-01T00:00:00.000Z",
    text=True
)

# Get detailed content from specific URLs
result = agent.tool.exa_get_contents(
    urls=[
        "https://example.com/blog-post",
        "https://github.com/microsoft/semantic-kernel"
    ],
    text={"maxCharacters": 5000, "includeHtmlTags": False},
    summary={
        "query": "main points and practical applications"
    },
    subpages=2,
    extras={"links": 5, "imageLinks": 2}
)

# Structured summary with JSON schema
result = agent.tool.exa_get_contents(
    urls=["https://example.com/article"],
    summary={
        "query": "main findings and recommendations",
        "schema": {
            "type": "object",
            "properties": {
                "main_points": {"type": "string", "description": "Key points from the article"},
                "recommendations": {"type": "string", "description": "Suggested actions or advice"},
                "conclusion": {"type": "string", "description": "Overall conclusion"},
                "relevance": {"type": "string", "description": "Why this matters"}
            },
            "required": ["main_points", "conclusion"]
        }
    }
)

```

### Python Code Execution

*Note: `python_repl` does not work on Windows.*

```python
from strands import Agent
from strands_tools import python_repl

agent = Agent(tools=[python_repl])

# Execute Python code with state persistence
result = agent.tool.python_repl(code="""
import pandas as pd

# Load and process data
data = pd.read_csv('data.csv')
processed = data.groupby('category').mean()

processed.head()
""")
```

### Code Interpreter

```python
from strands import Agent
from strands_tools.code_interpreter import AgentCoreCodeInterpreter

# Create the code interpreter tool
bedrock_agent_core_code_interpreter = AgentCoreCodeInterpreter(region="us-west-2")
agent = Agent(tools=[bedrock_agent_core_code_interpreter.code_interpreter])

# Create a session
agent.tool.code_interpreter({
    "action": {
        "type": "initSession",
        "description": "Data analysis session",
        "session_name": "analysis-session"
    }
})

# Execute Python code
agent.tool.code_interpreter({
    "action": {
        "type": "executeCode",
        "session_name": "analysis-session",
        "code": "print('Hello from sandbox!')",
        "language": "python"
    }
})
```

### Swarm Intelligence

```python
from strands import Agent
from strands_tools import swarm

agent = Agent(tools=[swarm])

# Create a collaborative swarm of agents to tackle a complex problem
result = agent.tool.swarm(
    task="Generate creative solutions for reducing plastic waste in urban areas",
    swarm_size=5,
    coordination_pattern="collaborative"
)

# Create a competitive swarm for diverse solution generation
result = agent.tool.swarm(
    task="Design an innovative product for smart home automation",
    swarm_size=3,
    coordination_pattern="competitive"
)

# Hybrid approach combining collaboration and competition
result = agent.tool.swarm(
    task="Develop marketing strategies for a new sustainable fashion brand",
    swarm_size=4,
    coordination_pattern="hybrid"
)
```

### Use AWS

```python
from strands import Agent
from strands_tools import use_aws

agent = Agent(tools=[use_aws])

# List S3 buckets
result = agent.tool.use_aws(
    service_name="s3",
    operation_name="list_buckets",
    parameters={},
    region="us-east-1",
    label="List all S3 buckets"
)

# Get the contents of a specific S3 bucket
result = agent.tool.use_aws(
    service_name="s3",
    operation_name="list_objects_v2",
    parameters={"Bucket": "example-bucket"},  # Replace with your actual bucket name
    region="us-east-1",
    label="List objects in a specific S3 bucket"
)

# Get the list of EC2 subnets
result = agent.tool.use_aws(
    service_name="ec2",
    operation_name="describe_subnets",
    parameters={},
    region="us-east-1",
    label="List all subnets"
)
```

### Batch Tool

```python
import os
import sys

from strands import Agent
from strands_tools import batch, http_request, use_aws

# Example usage of the batch with http_request and use_aws tools
agent = Agent(tools=[batch, http_request, use_aws])

result = agent.tool.batch(
    invocations=[
        {"name": "http_request", "arguments": {"method": "GET", "url": "https://api.ipify.org?format=json"}},
        {
            "name": "use_aws",
            "arguments": {
                "service_name": "s3",
                "operation_name": "list_buckets",
                "parameters": {},
                "region": "us-east-1",
                "label": "List S3 Buckets"
            }
        },
    ]
)
```

### Video Tools

```python
from strands import Agent
from strands_tools import search_video, chat_video

agent = Agent(tools=[search_video, chat_video])

# Search for video content using natural language
result = agent.tool.search_video(
    query="people discussing AI technology",
    threshold="high",
    group_by="video",
    page_limit=5
)

# Chat with existing video (no index_id needed)
result = agent.tool.chat_video(
    prompt="What are the main topics discussed in this video?",
    video_id="existing-video-id"
)

# Chat with new video file (index_id required for upload)
result = agent.tool.chat_video(
    prompt="Describe what happens in this video",
    video_path="/path/to/video.mp4",
    index_id="your-index-id"  # or set TWELVELABS_PEGASUS_INDEX_ID env var
)
```

### AgentCore Memory
```python
from strands import Agent
from strands_tools.agent_core_memory import AgentCoreMemoryToolProvider


provider = AgentCoreMemoryToolProvider(
    memory_id="memory-123abc",  # Required
    actor_id="user-456",        # Required
    session_id="session-789",   # Required
    namespace="default",        # Required
    region="us-west-2"          # Optional, defaults to us-west-2
)

agent = Agent(tools=provider.tools)

# Create a new memory
result = agent.tool.agent_core_memory(
    action="record",
    content="I am allergic to shellfish"
)

# Search for relevant memories
result = agent.tool.agent_core_memory(
    action="retrieve",
    query="user preferences"
)

# List all memories
result = agent.tool.agent_core_memory(
    action="list"
)

# Get a specific memory by ID
result = agent.tool.agent_core_memory(
    action="get",
    memory_record_id="mr-12345"
)
```

### Browser
```python
from strands import Agent
from strands_tools.browser import LocalChromiumBrowser

# Create browser tool
browser = LocalChromiumBrowser()
agent = Agent(tools=[browser.browser])

# Simple navigation
result = agent.tool.browser({
    "action": {
        "type": "navigate",
        "url": "https://example.com"
    }
})

# Initialize a session first
result = agent.tool.browser({
    "action": {
        "type": "initSession",
        "session_name": "main-session",
        "description": "Web automation session"
    }
})
```

### Handoff to User

```python
from strands import Agent
from strands_tools import handoff_to_user

agent = Agent(tools=[handoff_to_user])

# Request user confirmation and continue
response = agent.tool.handoff_to_user(
    message="I need your approval to proceed with deleting these files. Type 'yes' to confirm.",
    breakout_of_loop=False
)

# Complete handoff to user (stops agent execution)
agent.tool.handoff_to_user(
    message="Task completed. Please review the results and take any necessary follow-up actions.",
    breakout_of_loop=True
)
```

### A2A Client

```python
from strands import Agent
from strands_tools.a2a_client import A2AClientToolProvider

# Initialize the A2A client provider with known agent URLs
provider = A2AClientToolProvider(known_agent_urls=["http://localhost:9000"])
agent = Agent(tools=provider.tools)

# Use natural language to interact with A2A agents
response = agent("discover available agents and send a greeting message")

# The agent will automatically use the available tools:
# - discover_agent(url) to find agents
# - list_discovered_agents() to see all discovered agents
# - send_message(message_text, target_agent_url) to communicate
```

### Diagram

```python
from strands import Agent
from strands_tools import diagram

agent = Agent(tools=[diagram])

# Create an AWS cloud architecture diagram
result = agent.tool.diagram(
    diagram_type="cloud",
    nodes=[
        {"id": "users", "type": "Users", "label": "End Users"},
        {"id": "cloudfront", "type": "CloudFront", "label": "CDN"},
        {"id": "s3", "type": "S3", "label": "Static Assets"},
        {"id": "api", "type": "APIGateway", "label": "API Gateway"},
        {"id": "lambda", "type": "Lambda", "label": "Backend Service"}
    ],
    edges=[
        {"from": "users", "to": "cloudfront"},
        {"from": "cloudfront", "to": "s3"},
        {"from": "users", "to": "api"},
        {"from": "api", "to": "lambda"}
    ],
    title="Web Application Architecture"
)

# Create a UML class diagram
result = agent.tool.diagram(
    diagram_type="class",
    elements=[
        {
            "name": "User",
            "attributes": ["+id: int", "-name: string", "#email: string"],
            "methods": ["+login(): bool", "+logout(): void"]
        },
        {
            "name": "Order",
            "attributes": ["+id: int", "-items: List", "-total: float"],
            "methods": ["+addItem(item): void", "+calculateTotal(): float"]
        }
    ],
    relationships=[
        {"from": "User", "to": "Order", "type": "association", "multiplicity": "1..*"}
    ],
    title="E-commerce Domain Model"
)
```

### RSS Feed Management

```python
from strands import Agent
from strands_tools import rss

agent = Agent(tools=[rss])

# Subscribe to a feed
result = agent.tool.rss(
    action="subscribe",
    url="https://news.example.com/rss/technology"
)

# List all subscribed feeds
feeds = agent.tool.rss(action="list")

# Read entries from a specific feed
entries = agent.tool.rss(
    action="read",
    feed_id="news_example_com_technology",
    max_entries=5,
    include_content=True
)

# Search across all feeds
search_results = agent.tool.rss(
    action="search",
    query="machine learning",
    max_entries=10
)

# Fetch feed content without subscribing
latest_news = agent.tool.rss(
    action="fetch",
    url="https://blog.example.org/feed",
    max_entries=3
)
```

### Use Computer

```python
from strands import Agent
from strands_tools import use_computer

agent = Agent(tools=[use_computer])

# Find mouse position
result = agent.tool.use_computer(action="mouse_position")

# Automate adding text
result = agent.tool.use_computer(action="type", text="Hello, world!", app_name="Notepad")

# Analyze current computer screen
result = agent.tool.use_computer(action="analyze_screen")

result = agent.tool.use_computer(action="open_app", app_name="Calculator")
result = agent.tool.use_computer(action="close_app", app_name="Calendar")

result = agent.tool.use_computer(
    action="hotkey",
    hotkey_str="command+ctrl+f",  # For macOS
    app_name="Chrome"
)
```

## 🌍 Environment Variables Configuration

Strands Agents Tools provides extensive customization through environment variables. This allows you to configure tool behavior without modifying code, making it easy to adapt tools to different environments (development, testing, production).

### Global Environment Variables

These variables affect multiple tools:

| Environment Variable | Description | Default | Affected Tools |
|----------------------|-------------|---------|---------------|
| BYPASS_TOOL_CONSENT | Bypass consent for tool invocation, set to "true" to enable | false | All tools that require consent (e.g. shell, file_write, python_repl) |
| STRANDS_TOOL_CONSOLE_MODE | Enable rich UI for tools, set to "enabled" to enable | disabled | All tools that have optional rich UI |
| AWS_REGION | Default AWS region for AWS operations | us-west-2 | use_aws, retrieve, generate_image, memory, nova_reels |
| AWS_PROFILE | AWS profile name to use from ~/.aws/credentials | default | use_aws, retrieve |
| LOG_LEVEL | Logging level (DEBUG, INFO, WARNING, ERROR) | INFO | All tools |
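
For example, here is a minimal sketch of setting these global variables from Python before constructing an agent. The values are illustrative, and `file_write` is used simply as one of the consent-requiring tools listed above:

```python
import os

from strands import Agent
from strands_tools import file_write

# Illustrative values; see the table above for defaults and accepted settings.
os.environ["BYPASS_TOOL_CONSENT"] = "true"            # skip interactive consent prompts
os.environ["STRANDS_TOOL_CONSOLE_MODE"] = "enabled"   # enable rich console UI where supported
os.environ["AWS_REGION"] = "us-west-2"                # default region for AWS-backed tools

agent = Agent(tools=[file_write])

# With BYPASS_TOOL_CONSENT set, this write proceeds without a confirmation prompt.
agent.tool.file_write(path="output.txt", content="Hello, world!")
```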

### Tool-Specific Environment Variables

#### Calculator Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| CALCULATOR_MODE | Default calculation mode | evaluate |
| CALCULATOR_PRECISION | Number of decimal places for results | 10 |
| CALCULATOR_SCIENTIFIC | Whether to use scientific notation for numbers | False |
| CALCULATOR_FORCE_NUMERIC | Force numeric evaluation of symbolic expressions | False |
| CALCULATOR_FORCE_SCIENTIFIC_THRESHOLD | Threshold for automatic scientific notation | 1e21 |
| CALCULATOR_DERIVE_ORDER | Default order for derivatives | 1 |
| CALCULATOR_SERIES_POINT | Default point for series expansion | 0 |
| CALCULATOR_SERIES_ORDER | Default order for series expansion | 5 |
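
A minimal sketch of overriding the calculator defaults from Python (the override values are illustrative; the expression is taken from the tools table above):

```python
import os

from strands import Agent
from strands_tools import calculator

# Illustrative overrides for the defaults listed above.
os.environ["CALCULATOR_MODE"] = "evaluate"
os.environ["CALCULATOR_PRECISION"] = "4"   # round results to 4 decimal places

agent = Agent(tools=[calculator])

result = agent.tool.calculator(expression="2 * sin(pi/4) + log(e**2)")
```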

#### Current Time Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| DEFAULT_TIMEZONE | Default timezone for current_time tool | UTC |

#### Sleep Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| MAX_SLEEP_SECONDS | Maximum allowed sleep duration in seconds | 300 |
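
A short sketch covering both tools above, with illustrative environment overrides. Per the tables, `DEFAULT_TIMEZONE` applies when no timezone argument is passed and `MAX_SLEEP_SECONDS` caps sleep durations:

```python
import os

from strands import Agent
from strands_tools import current_time, sleep

os.environ["DEFAULT_TIMEZONE"] = "US/Pacific"   # fallback timezone for current_time
os.environ["MAX_SLEEP_SECONDS"] = "60"          # cap sleep duration at one minute

agent = Agent(tools=[current_time, sleep])

# Explicit timezone; omitting it would fall back to DEFAULT_TIMEZONE.
now = agent.tool.current_time(timezone="US/Pacific")

# Pause for 5 seconds (interruptible with Ctrl+C).
agent.tool.sleep(seconds=5)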

#### Tavily Search, Extract, Crawl, and Map Tools

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| TAVILY_API_KEY | Tavily API key (required for all Tavily functionality) | None |
- Visit https://www.tavily.com/ to create a free account and API key.

#### Exa Search and Contents Tools

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| EXA_API_KEY | Exa API key (required for all Exa functionality) | None |
- Visit https://dashboard.exa.ai/api-keys to create a free account and API key.

#### Mem0 Memory Tool

The Mem0 Memory Tool supports three vector store backend configurations, plus an optional graph backend:

1. **Mem0 Platform**:
   - Uses the Mem0 Platform API for memory management
   - Requires a Mem0 API key

2. **OpenSearch** (Recommended for AWS environments):
   - Uses OpenSearch as the vector store backend
   - Requires AWS credentials and OpenSearch configuration

3. **FAISS** (Default for local development):
   - Uses FAISS as the local vector store backend
   - Requires faiss-cpu package for local vector storage

4. **Neptune Analytics** (Optional Graph backend for search enhancement):
   - Uses Neptune Analytics as the graph store backend to enhance memory recall.
   - Requires AWS credentials and Neptune Analytics configuration
   ```bash
   # Configure your Neptune Analytics graph ID in the environment (e.g. in a .env file):
   export NEPTUNE_ANALYTICS_GRAPH_IDENTIFIER=sample-graph-id
   ```

   ```python
   # Or configure it from Python code:
   import os
   os.environ['NEPTUNE_ANALYTICS_GRAPH_IDENTIFIER'] = "g-sample-graph-id"
   ```

| Environment Variable | Description | Default | Required For |
|----------------------|-------------|---------|--------------|
| MEM0_API_KEY | Mem0 Platform API key | None | Mem0 Platform |
| OPENSEARCH_HOST | OpenSearch Host URL | None | OpenSearch |
| AWS_REGION | AWS Region for OpenSearch | us-west-2 | OpenSearch |
| NEPTUNE_ANALYTICS_GRAPH_IDENTIFIER | Neptune Analytics Graph Identifier | None | Neptune Analytics |
| DEV | Enable development mode (bypasses confirmations) | false | All modes |
| MEM0_LLM_PROVIDER | LLM provider for memory processing | aws_bedrock | All modes |
| MEM0_LLM_MODEL | LLM model for memory processing | anthropic.claude-3-5-haiku-20241022-v1:0 | All modes |
| MEM0_LLM_TEMPERATURE | LLM temperature (0.0-2.0) | 0.1 | All modes |
| MEM0_LLM_MAX_TOKENS | LLM maximum tokens | 2000 | All modes |
| MEM0_EMBEDDER_PROVIDER | Embedder provider for vector embeddings | aws_bedrock | All modes |
| MEM0_EMBEDDER_MODEL | Embedder model for vector embeddings | amazon.titan-embed-text-v2:0 | All modes |


**Note**:
- If `MEM0_API_KEY` is set, the tool will use the Mem0 Platform
- If `OPENSEARCH_HOST` is set, the tool will use OpenSearch
- If neither is set, the tool will default to FAISS (requires `faiss-cpu` package)
- If `NEPTUNE_ANALYTICS_GRAPH_IDENTIFIER` is set, the tool will configure Neptune Analytics as graph store to enhance memory search
- LLM configuration applies to all backend modes and allows customization of the language model used for memory processing
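
Below is a minimal sketch of using `mem0_memory` with the default FAISS backend (requires the `faiss-cpu` package). The `store` call mirrors the tools table above; the `retrieve` action and its `query` parameter are assumed here for illustration and may differ from the tool's actual action names:

```python
from strands import Agent
from strands_tools import mem0_memory

agent = Agent(tools=[mem0_memory])

# With neither MEM0_API_KEY nor OPENSEARCH_HOST set, the local FAISS backend is used.
agent.tool.mem0_memory(
    action="store",
    content="Remember I like to play tennis",
    user_id="alex",
)

# Assumed action name for searching stored memories; check the tool's docs for the full list.
agent.tool.mem0_memory(
    action="retrieve",
    query="What sports does the user like?",
    user_id="alex",
)
```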

#### Bright Data Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| BRIGHTDATA_API_KEY | Bright Data API Key | None |
| BRIGHTDATA_ZONE | Bright Data Web Unlocker Zone | web_unlocker1 |
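
A minimal sketch of configuring and calling the Bright Data tool (the API key is a placeholder; the action and parameters come from the tools table above):

```python
import os

from strands import Agent
from strands_tools import bright_data

os.environ["BRIGHTDATA_API_KEY"] = "<your-bright-data-api-key>"  # required
os.environ["BRIGHTDATA_ZONE"] = "web_unlocker1"                  # default zone

agent = Agent(tools=[bright_data])

# Scrape a page and return its content as markdown.
result = agent.tool.bright_data(
    action="scrape_as_markdown",
    url="https://example.com",
)
```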

#### Memory Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| MEMORY_DEFAULT_MAX_RESULTS | Default maximum results for list operations | 50 |
| MEMORY_DEFAULT_MIN_SCORE | Default minimum relevance score for filtering results | 0.4 |
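
A minimal sketch of the memory tool with illustrative overrides for the defaults above. It assumes the Amazon Bedrock Knowledge Base backing the tool is already configured (for example via the tool's own environment variables, not shown here):

```python
import os

from strands import Agent
from strands_tools import memory

# Illustrative overrides for the defaults listed above.
os.environ["MEMORY_DEFAULT_MAX_RESULTS"] = "20"
os.environ["MEMORY_DEFAULT_MIN_SCORE"] = "0.5"

agent = Agent(tools=[memory])

# Retrieve documents from the configured Bedrock Knowledge Base.
result = agent.tool.memory(action="retrieve", query="product features")
```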

#### Nova Reels Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| NOVA_REEL_DEFAULT_SEED | Default seed for video generation | 0 |
| NOVA_REEL_DEFAULT_FPS | Default frames per second for generated videos | 24 |
| NOVA_REEL_DEFAULT_DIMENSION | Default video resolution in WIDTHxHEIGHT format | 1280x720 |
| NOVA_REEL_DEFAULT_MAX_RESULTS | Default maximum number of jobs to return for list action | 10 |
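
A minimal sketch of starting a Nova Reel video generation job with illustrative defaults. It assumes AWS credentials and Amazon Bedrock access are already configured; the action and parameters come from the tools table above:

```python
import os

from strands import Agent
from strands_tools import nova_reels

# Illustrative overrides for the generation defaults listed above.
os.environ["NOVA_REEL_DEFAULT_FPS"] = "24"
os.environ["NOVA_REEL_DEFAULT_DIMENSION"] = "1280x720"

agent = Agent(tools=[nova_reels])

# Start a video generation job that writes its output to the given S3 bucket.
result = agent.tool.nova_reels(
    action="create",
    text="A cinematic shot of mountains",
    s3_bucket="my-bucket",
)
```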

#### Python REPL Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| PYTHON_REPL_BINARY_MAX_LEN | Maximum length for binary content before truncation | 100 |
| PYTHON_REPL_INTERACTIVE | Whether to enable interactive PTY mode | None |
| PYTHON_REPL_RESET_STATE | Whether to reset the REPL state before execution | None |

#### Shell Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| SHELL_DEFAULT_TIMEOUT | Default timeout in seconds for shell commands | 900 |

#### Slack Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| SLACK_DEFAULT_EVENT_COUNT | Default number of events to retrieve | 42 |
| STRANDS_SLACK_AUTO_REPLY | Enable automatic replies to messages | false |
| STRANDS_SLACK_LISTEN_ONLY_TAG | Only process messages containing this tag | None |
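
A minimal sketch of posting a message with the Slack tool, using illustrative values for the variables above. It assumes the Slack credentials the tool needs are already configured in the environment (not shown here):

```python
import os

from strands import Agent
from strands_tools import slack

# Illustrative settings; see the table above for defaults.
os.environ["SLACK_DEFAULT_EVENT_COUNT"] = "20"
os.environ["STRANDS_SLACK_AUTO_REPLY"] = "false"

agent = Agent(tools=[slack])

# Post a message to a channel (action and parameters from the tools table above).
agent.tool.slack(action="post_message", channel="general", text="Hello team!")
```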

#### Speak Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| SPEAK_DEFAULT_STYLE | Default style for status messages | green |
| SPEAK_DEFAULT_MODE | Default speech mode (fast/polly) | fast |
| SPEAK_DEFAULT_VOICE_ID | Default Polly voice ID | Joanna |
| SPEAK_DEFAULT_OUTPUT_PATH | Default audio output path | speech_output.mp3 |
| SPEAK_DEFAULT_PLAY_AUDIO | Whether to play audio by default | True |
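
A minimal sketch of the speak tool with illustrative defaults. Polly mode assumes AWS credentials with Amazon Polly access are already configured; the call itself mirrors the tools table above:

```python
import os

from strands import Agent
from strands_tools import speak

# Illustrative overrides: prefer Polly text-to-speech over the fast local mode.
os.environ["SPEAK_DEFAULT_MODE"] = "polly"
os.environ["SPEAK_DEFAULT_VOICE_ID"] = "Joanna"

agent = Agent(tools=[speak])

agent.tool.speak(text="Operation completed successfully", style="green", mode="polly")
```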

#### Editor Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| EDITOR_DIR_TREE_MAX_DEPTH | Maximum depth for directory tree visualization | 2 |
| EDITOR_DEFAULT_STYLE | Default style for output panels | default |
| EDITOR_DEFAULT_LANGUAGE | Default language for syntax highlighting | python |
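
A minimal sketch of adjusting the editor defaults before viewing a file (override values are illustrative; the `view` command mirrors the File Operations example above):

```python
import os

from strands import Agent
from strands_tools import editor

# Illustrative overrides for the defaults listed above.
os.environ["EDITOR_DIR_TREE_MAX_DEPTH"] = "3"
os.environ["EDITOR_DEFAULT_LANGUAGE"] = "python"

agent = Agent(tools=[editor])

# View a file with syntax highlighting.
agent.tool.editor(command="view", path="script.py")
```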

#### Environment Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| ENV_VARS_MASKED_DEFAULT | Default setting for masking sensitive values | true |
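
A minimal sketch of the environment tool, keeping sensitive values masked (the default); the `list` action and `prefix` parameter come from the tools table above:

```python
import os

from strands import Agent
from strands_tools import environment

# Keep sensitive values masked in tool output (this is the default).
os.environ["ENV_VARS_MASKED_DEFAULT"] = "true"

agent = Agent(tools=[environment])

# List environment variables that start with a given prefix.
agent.tool.environment(action="list", prefix="AWS_")
```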

#### Dynamic MCP Client Tool

| Environment Variable | Description | Default | 
|----------------------|-------------|---------|
| STRANDS_MCP_TIMEOUT | Default timeout in seconds for MCP operations | 30.0 |

#### File Read Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| FILE_READ_RECURSIVE_DEFAULT | Default setting for recursive file searching | true |
| FILE_READ_CONTEXT_LINES_DEFAULT | Default number of context lines around search matches | 2 |
| FILE_READ_START_LINE_DEFAULT | Default starting line number for lines mode | 0 |
| FILE_READ_CHUNK_OFFSET_DEFAULT | Default byte offset for chunk mode | 0 |
| FILE_READ_DIFF_TYPE_DEFAULT | Default diff type for file comparisons | unified |
| FILE_READ_USE_GIT_DEFAULT | Default setting for using git in time machine mode | true |
| FILE_READ_NUM_REVISIONS_DEFAULT | Default number of revisions to show in time machine mode | 5 |

#### Browser Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| STRANDS_DEFAULT_WAIT_TIME | Default setting for wait time with actions | 1 |
| STRANDS_BROWSER_MAX_RETRIES | Default number of retries to perform when an action fails | 3 |
| STRANDS_BROWSER_RETRY_DELAY | Default retry delay time for retry mechanisms | 1 |
| STRANDS_BROWSER_SCREENSHOTS_DIR | Default directory where screenshots will be saved | screenshots |
| STRANDS_BROWSER_USER_DATA_DIR | Default directory where data for reloading a browser instance is stored | ~/.browser_automation |
| STRANDS_BROWSER_HEADLESS | Default headless setting for launching browsers | false |
| STRANDS_BROWSER_WIDTH | Default width of the browser | 1280 |
| STRANDS_BROWSER_HEIGHT | Default height of the browser | 800 |

#### RSS Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| STRANDS_RSS_MAX_ENTRIES | Default maximum number of entries per feed | 100 |
| STRANDS_RSS_UPDATE_INTERVAL | Default interval in minutes between RSS feed updates | 60 |
| STRANDS_RSS_STORAGE_PATH | Default local storage path for RSS feed data | strands_rss_feeds (this may vary based on your system) |

#### Video Tools

| Environment Variable | Description | Default | 
|----------------------|-------------|---------|
| TWELVELABS_API_KEY | TwelveLabs API key for video analysis | None |
| TWELVELABS_MARENGO_INDEX_ID | Default index ID for search_video tool | None |
| TWELVELABS_PEGASUS_INDEX_ID | Default index ID for chat_video tool | None |


## Contributing ❤️

This is a community-driven project, powered by passionate developers like you.
We enthusiastically welcome contributions from everyone,
regardless of experience level—your unique perspective is valuable to us!

### How to Get Started?

1. **Find your first opportunity**: If you're new to the project, explore our issues labeled "good first issue" for beginner-friendly tasks.
2. **Understand our workflow**: Review our [Contributing Guide](CONTRIBUTING.md) to learn about our development setup, coding standards, and pull request process.
3. **Make your impact**: Contributions come in many forms—fixing bugs, enhancing documentation, improving performance, adding features, writing tests, or refining the user experience.
4. **Submit your work**: When you're ready, submit a well-documented pull request, and our maintainers will provide feedback to help get your changes merged.

Your questions, insights, and ideas are always welcome!

Together, we're building something meaningful that impacts real users. We look forward to collaborating with you!

## License

This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.

## Security

See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "strands-agents-tools",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.10",
    "maintainer_email": null,
    "keywords": null,
    "author": null,
    "author_email": "AWS <opensource@amazon.com>",
    "download_url": "https://files.pythonhosted.org/packages/ff/7a/5aaec4dbdbce7a35c8ea2f507a3f0c8e32bdb6a70bda72f358b47518d1ae/strands_agents_tools-0.2.10.tar.gz",
    "platform": null,
    "description": "<div align=\"center\">\n  <div>\n    <a href=\"https://strandsagents.com\">\n      <img src=\"https://strandsagents.com/latest/assets/logo-github.svg\" alt=\"Strands Agents\" width=\"55px\" height=\"105px\">\n    </a>\n  </div>\n\n  <h1>\n    Strands Agents Tools\n  </h1>\n\n  <h2>\n    A model-driven approach to building AI agents in just a few lines of code.\n  </h2>\n\n  <div align=\"center\">\n    <a href=\"https://github.com/strands-agents/tools/graphs/commit-activity\"><img alt=\"GitHub commit activity\" src=\"https://img.shields.io/github/commit-activity/m/strands-agents/tools\"/></a>\n    <a href=\"https://github.com/strands-agents/tools/issues\"><img alt=\"GitHub open issues\" src=\"https://img.shields.io/github/issues/strands-agents/tools\"/></a>\n    <a href=\"https://github.com/strands-agents/tools/pulls\"><img alt=\"GitHub open pull requests\" src=\"https://img.shields.io/github/issues-pr/strands-agents/tools\"/></a>\n    <a href=\"https://github.com/strands-agents/tools/blob/main/LICENSE\"><img alt=\"License\" src=\"https://img.shields.io/github/license/strands-agents/tools\"/></a>\n    <a href=\"https://pypi.org/project/strands-agents-tools/\"><img alt=\"PyPI version\" src=\"https://img.shields.io/pypi/v/strands-agents-tools\"/></a>\n    <a href=\"https://python.org\"><img alt=\"Python versions\" src=\"https://img.shields.io/pypi/pyversions/strands-agents-tools\"/></a>\n  </div>\n\n  <p>\n    <a href=\"https://strandsagents.com/\">Documentation</a>\n    \u25c6 <a href=\"https://github.com/strands-agents/samples\">Samples</a>\n    \u25c6 <a href=\"https://github.com/strands-agents/sdk-python\">Python SDK</a>\n    \u25c6 <a href=\"https://github.com/strands-agents/tools\">Tools</a>\n    \u25c6 <a href=\"https://github.com/strands-agents/agent-builder\">Agent Builder</a>\n    \u25c6 <a href=\"https://github.com/strands-agents/mcp-server\">MCP Server</a>\n  </p>\n</div>\n\nStrands Agents Tools is a community-driven project that provides a powerful set of tools for your agents to use. 
It bridges the gap between large language models and practical applications by offering ready-to-use tools for file operations, system execution, API interactions, mathematical operations, and more.\n\n## \u2728 Features\n\n- \ud83d\udcc1 **File Operations** - Read, write, and edit files with syntax highlighting and intelligent modifications\n- \ud83d\udda5\ufe0f **Shell Integration** - Execute and interact with shell commands securely\n- \ud83e\udde0 **Memory** - Store user and agent memories across agent runs to provide personalized experiences with both Mem0 and Amazon Bedrock Knowledge Bases\n- \ud83d\udd78\ufe0f **Web Infrastructure** - Perform web searches, extract page content, and crawl websites with Tavily and Exa-powered tools\n- \ud83c\udf10 **HTTP Client** - Make API requests with comprehensive authentication support\n- \ud83d\udcac **Slack Client** - Real-time Slack events, message processing, and Slack API access\n- \ud83d\udc0d **Python Execution** - Run Python code snippets with state persistence, user confirmation for code execution, and safety features\n- \ud83e\uddee **Mathematical Tools** - Perform advanced calculations with symbolic math capabilities\n- \u2601\ufe0f **AWS Integration** - Seamless access to AWS services\n- \ud83d\uddbc\ufe0f **Image Processing** - Generate and process images for AI applications\n- \ud83c\udfa5 **Video Processing** - Use models and agents to generate dynamic videos\n- \ud83c\udf99\ufe0f **Audio Output** - Enable models to generate audio and speak\n- \ud83d\udd04 **Environment Management** - Handle environment variables safely\n- \ud83d\udcdd **Journaling** - Create and manage structured logs and journals\n- \u23f1\ufe0f **Task Scheduling** - Schedule and manage cron jobs\n- \ud83e\udde0 **Advanced Reasoning** - Tools for complex thinking and reasoning capabilities\n- \ud83d\udc1d **Swarm Intelligence** - Coordinate multiple AI agents for parallel problem solving with shared memory\n- \ud83d\udd0c **Dynamic MCP Client** - \u26a0\ufe0f Dynamically connect to external MCP servers and load remote tools (use with caution - see security warnings)\n- \ud83d\udd04 **Multiple tools in Parallel**  - Call multiple other tools at the same time in parallel with Batch Tool\n- \ud83d\udd0d **Browser Tool** - Tool giving an agent access to perform automated actions on a browser (chromium)\n- \ud83d\udcc8 **Diagram** - Create AWS cloud diagrams, basic diagrams, or UML diagrams using python libraries\n- \ud83d\udcf0 **RSS Feed Manager** - Subscribe, fetch, and process RSS feeds with content filtering and persistent storage\n- \ud83d\uddb1\ufe0f **Computer Tool** - Automate desktop actions including mouse movements, keyboard input, screenshots, and application management\n\n## \ud83d\udce6 Installation\n\n### Quick Install\n\n```bash\npip install strands-agents-tools\n```\n\nTo install the dependencies for optional tools:\n\n```bash\npip install strands-agents-tools[mem0_memory, use_browser, rss, use_computer]\n```\n\n### Development Install\n\n```bash\n# Clone the repository\ngit clone https://github.com/strands-agents/tools.git\ncd tools\n\n# Create and activate virtual environment\npython3 -m venv .venv\nsource .venv/bin/activate  # On Windows: venv\\Scripts\\activate\n\n# Install in development mode\npip install -e \".[dev]\"\n\n# Install pre-commit hooks\npre-commit install\n```\n\n### Tools Overview\n\nBelow is a comprehensive table of all available tools, how to use them with an agent, and typical use cases:\n\n| Tool | Agent Usage | Use Case 
|\n|------|-------------|----------|\n| a2a_client | `provider = A2AClientToolProvider(known_agent_urls=[\"http://localhost:9000\"]); agent = Agent(tools=provider.tools)` | Discover and communicate with A2A-compliant agents, send messages between agents |\n| file_read | `agent.tool.file_read(path=\"path/to/file.txt\")` | Reading configuration files, parsing code files, loading datasets |\n| file_write | `agent.tool.file_write(path=\"path/to/file.txt\", content=\"file content\")` | Writing results to files, creating new files, saving output data |\n| editor | `agent.tool.editor(command=\"view\", path=\"path/to/file.py\")` | Advanced file operations like syntax highlighting, pattern replacement, and multi-file edits |\n| shell* | `agent.tool.shell(command=\"ls -la\")` | Executing shell commands, interacting with the operating system, running scripts |\n| http_request | `agent.tool.http_request(method=\"GET\", url=\"https://api.example.com/data\")` | Making API calls, fetching web data, sending data to external services |\n| tavily_search | `agent.tool.tavily_search(query=\"What is artificial intelligence?\", search_depth=\"advanced\")` | Real-time web search optimized for AI agents with a variety of custom parameters |\n| tavily_extract | `agent.tool.tavily_extract(urls=[\"www.tavily.com\"], extract_depth=\"advanced\")` | Extract clean, structured content from web pages with advanced processing and noise removal |\n| tavily_crawl | `agent.tool.tavily_crawl(url=\"www.tavily.com\", max_depth=2, instructions=\"Find API docs\")` | Crawl websites intelligently starting from a base URL with filtering and extraction |\n| tavily_map | `agent.tool.tavily_map(url=\"www.tavily.com\", max_depth=2, instructions=\"Find all pages\")` | Map website structure and discover URLs starting from a base URL without content extraction |\n| exa_search | `agent.tool.exa_search(query=\"Best project management tools\", text=True)` | Intelligent web search with auto mode (default) that combines neural and keyword search for optimal results |\n| exa_get_contents | `agent.tool.exa_get_contents(urls=[\"https://example.com/article\"], text=True, summary={\"query\": \"key points\"})` | Extract full content and summaries from specific URLs with live crawling fallback |\n| python_repl* | `agent.tool.python_repl(code=\"import pandas as pd\\ndf = pd.read_csv('data.csv')\\nprint(df.head())\")` | Running Python code snippets, data analysis, executing complex logic with user confirmation for security |\n| calculator | `agent.tool.calculator(expression=\"2 * sin(pi/4) + log(e**2)\")` | Performing mathematical operations, symbolic math, equation solving |\n| code_interpreter | `code_interpreter = AgentCoreCodeInterpreter(region=\"us-west-2\"); agent = Agent(tools=[code_interpreter.code_interpreter])` | Execute code in isolated sandbox environments with multi-language support (Python, JavaScript, TypeScript), persistent sessions, and file operations |\n| use_aws | `agent.tool.use_aws(service_name=\"s3\", operation_name=\"list_buckets\", parameters={}, region=\"us-west-2\")` | Interacting with AWS services, cloud resource management |\n| retrieve | `agent.tool.retrieve(text=\"What is STRANDS?\")` | Retrieving information from Amazon Bedrock Knowledge Bases |\n| nova_reels | `agent.tool.nova_reels(action=\"create\", text=\"A cinematic shot of mountains\", s3_bucket=\"my-bucket\")` | Create high-quality videos using Amazon Bedrock Nova Reel with configurable parameters via environment variables |\n| agent_core_memory | 
`agent.tool.agent_core_memory(action=\"record\", content=\"Hello, I like vegetarian food\")` | Store and retrieve memories with Amazon Bedrock Agent Core Memory service |\n| mem0_memory | `agent.tool.mem0_memory(action=\"store\", content=\"Remember I like to play tennis\", user_id=\"alex\")` | Store user and agent memories across agent runs to provide personalized experience |\n| bright_data | `agent.tool.bright_data(action=\"scrape_as_markdown\", url=\"https://example.com\")` | Web scraping, search queries, screenshot capture, and structured data extraction from websites and different data feeds|\n| memory | `agent.tool.memory(action=\"retrieve\", query=\"product features\")` | Store, retrieve, list, and manage documents in Amazon Bedrock Knowledge Bases with configurable parameters via environment variables |\n| environment | `agent.tool.environment(action=\"list\", prefix=\"AWS_\")` | Managing environment variables, configuration management |\n| generate_image_stability | `agent.tool.generate_image_stability(prompt=\"A tranquil pool\")` | Creating images using Stability AI models |\n| generate_image | `agent.tool.generate_image(prompt=\"A sunset over mountains\")` | Creating AI-generated images for various applications |\n| image_reader | `agent.tool.image_reader(image_path=\"path/to/image.jpg\")` | Processing and reading image files for AI analysis |\n| journal | `agent.tool.journal(action=\"write\", content=\"Today's progress notes\")` | Creating structured logs, maintaining documentation |\n| think | `agent.tool.think(thought=\"Complex problem to analyze\", cycle_count=3)` | Advanced reasoning, multi-step thinking processes |\n| load_tool | `agent.tool.load_tool(path=\"path/to/custom_tool.py\", name=\"custom_tool\")` | Dynamically loading custom tools and extensions |\n| swarm | `agent.tool.swarm(task=\"Analyze this problem\", swarm_size=3, coordination_pattern=\"collaborative\")` | Coordinating multiple AI agents to solve complex problems through collective intelligence |\n| current_time | `agent.tool.current_time(timezone=\"US/Pacific\")` | Get the current time in ISO 8601 format for a specified timezone |\n| sleep | `agent.tool.sleep(seconds=5)` | Pause execution for the specified number of seconds, interruptible with SIGINT (Ctrl+C) |\n| agent_graph | `agent.tool.agent_graph(agents=[\"agent1\", \"agent2\"], connections=[{\"from\": \"agent1\", \"to\": \"agent2\"}])` | Create and visualize agent relationship graphs for complex multi-agent systems |\n| cron* | `agent.tool.cron(action=\"schedule\", name=\"task\", schedule=\"0 * * * *\", command=\"backup.sh\")` | Schedule and manage recurring tasks with cron job syntax <br> **Does not work on Windows |\n| slack | `agent.tool.slack(action=\"post_message\", channel=\"general\", text=\"Hello team!\")` | Interact with Slack workspace for messaging and monitoring |\n| speak | `agent.tool.speak(text=\"Operation completed successfully\", style=\"green\", mode=\"polly\")` | Output status messages with rich formatting and optional text-to-speech |\n| stop | `agent.tool.stop(message=\"Process terminated by user request\")` | Gracefully terminate agent execution with custom message |\n| handoff_to_user | `agent.tool.handoff_to_user(message=\"Please confirm action\", breakout_of_loop=False)` | Hand off control to user for confirmation, input, or complete task handoff |\n| use_llm | `agent.tool.use_llm(prompt=\"Analyze this data\", system_prompt=\"You are a data analyst\")` | Create nested AI loops with customized system prompts for specialized 
tasks |\n| workflow | `agent.tool.workflow(action=\"create\", name=\"data_pipeline\", steps=[{\"tool\": \"file_read\"}, {\"tool\": \"python_repl\"}])` | Define, execute, and manage multi-step automated workflows |\n| mcp_client | `agent.tool.mcp_client(action=\"connect\", connection_id=\"my_server\", transport=\"stdio\", command=\"python\", args=[\"server.py\"])` | \u26a0\ufe0f **SECURITY WARNING**: Dynamically connect to external MCP servers via stdio, sse, or streamable_http, list tools, and call remote tools. This can pose security risks as agents may connect to malicious servers. Use with caution in production. |\n| batch| `agent.tool.batch(invocations=[{\"name\": \"current_time\", \"arguments\": {\"timezone\": \"Europe/London\"}}, {\"name\": \"stop\", \"arguments\": {}}])` | Call multiple other tools in parallel. |\n| browser | `browser = LocalChromiumBrowser(); agent = Agent(tools=[browser.browser])` | Web scraping, automated testing, form filling, web automation tasks |\n| diagram | `agent.tool.diagram(diagram_type=\"cloud\", nodes=[{\"id\": \"s3\", \"type\": \"S3\"}], edges=[])` | Create AWS cloud architecture diagrams, network diagrams, graphs, and UML diagrams (all 14 types) |\n| rss | `agent.tool.rss(action=\"subscribe\", url=\"https://example.com/feed.xml\", feed_id=\"tech_news\")` | Manage RSS feeds: subscribe, fetch, read, search, and update content from various sources |\n| use_computer | `agent.tool.use_computer(action=\"click\", x=100, y=200, app_name=\"Chrome\") ` | Desktop automation, GUI interaction, screen capture |\n| search_video | `agent.tool.search_video(query=\"people discussing AI\")` | Semantic video search using TwelveLabs' Marengo model |\n| chat_video | `agent.tool.chat_video(prompt=\"What are the main topics?\", video_id=\"video_123\")` | Interactive video analysis using TwelveLabs' Pegasus model |\n\n\\* *These tools do not work on windows*\n\n## \ud83d\udcbb Usage Examples\n\n### File Operations\n\n```python\nfrom strands import Agent\nfrom strands_tools import file_read, file_write, editor\n\nagent = Agent(tools=[file_read, file_write, editor])\n\nagent.tool.file_read(path=\"config.json\")\nagent.tool.file_write(path=\"output.txt\", content=\"Hello, world!\")\nagent.tool.editor(command=\"view\", path=\"script.py\")\n```\n\n### Dynamic MCP Client Integration\n\n\u26a0\ufe0f **SECURITY WARNING**: The Dynamic MCP Client allows agents to autonomously connect to external MCP servers and load remote tools at runtime. This poses significant security risks as agents can potentially connect to malicious servers and execute untrusted code. 
### Dynamic MCP Client Integration

⚠️ **SECURITY WARNING**: The Dynamic MCP Client allows agents to autonomously connect to external MCP servers and load remote tools at runtime. This poses significant security risks, as agents can potentially connect to malicious servers and execute untrusted code. Use with extreme caution in production environments.

This tool differs from the static MCP server implementation in the Strands SDK (see the [MCP Tools Documentation](https://github.com/strands-agents/docs/blob/main/docs/user-guide/concepts/tools/mcp-tools.md)), which uses pre-configured, trusted MCP servers.

```python
from strands import Agent
from strands_tools import mcp_client

agent = Agent(tools=[mcp_client])

# Connect to a custom MCP server via stdio
agent.tool.mcp_client(
    action="connect",
    connection_id="my_tools",
    transport="stdio",
    command="python",
    args=["my_mcp_server.py"]
)

# List available tools on the server
tools = agent.tool.mcp_client(
    action="list_tools",
    connection_id="my_tools"
)

# Call a tool from the MCP server
result = agent.tool.mcp_client(
    action="call_tool",
    connection_id="my_tools",
    tool_name="calculate",
    tool_args={"x": 10, "y": 20}
)

# Connect to an SSE-based server
agent.tool.mcp_client(
    action="connect",
    connection_id="web_server",
    transport="sse",
    server_url="http://localhost:8080/sse"
)

# Connect to a streamable HTTP server
agent.tool.mcp_client(
    action="connect",
    connection_id="http_server",
    transport="streamable_http",
    server_url="https://api.example.com/mcp",
    headers={"Authorization": "Bearer token"},
    timeout=60
)

# Load MCP tools into the agent's registry for direct access
# ⚠️ WARNING: This loads external tools directly into the agent
agent.tool.mcp_client(
    action="load_tools",
    connection_id="my_tools"
)
# Now you can call MCP tools directly, e.g. agent.tool.calculate(x=10, y=20)
```

### Shell Commands

*Note: `shell` does not work on Windows.*

```python
from strands import Agent
from strands_tools import shell

agent = Agent(tools=[shell])

# Execute a single command
result = agent.tool.shell(command="ls -la")

# Execute a sequence of commands
results = agent.tool.shell(command=["mkdir -p test_dir", "cd test_dir", "touch test.txt"])

# Execute commands with error handling
agent.tool.shell(command="risky-command", ignore_errors=True)
```

### HTTP Requests

```python
import json

from strands import Agent
from strands_tools import http_request

agent = Agent(tools=[http_request])

# Make a simple GET request
response = agent.tool.http_request(
    method="GET",
    url="https://api.example.com/data"
)

# POST request with authentication
response = agent.tool.http_request(
    method="POST",
    url="https://api.example.com/resource",
    headers={"Content-Type": "application/json"},
    body=json.dumps({"key": "value"}),
    auth_type="Bearer",
    auth_token="your_token_here"
)

# Convert HTML webpages to markdown for better readability
response = agent.tool.http_request(
    method="GET",
    url="https://example.com/article",
    convert_to_markdown=True
)
```
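### Slack

The tools overview table documents a single `post_message` call for the Slack tool. A minimal sketch based on that signature, assuming your Slack workspace credentials are already configured for the tool; the channel and text are placeholders:

```python
from strands import Agent
from strands_tools import slack

agent = Agent(tools=[slack])

# Post a message to a channel, using the call shown in the tools overview table
result = agent.tool.slack(
    action="post_message",
    channel="general",
    text="Hello team!"
)
```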
topic=\"news\",\n    max_results=10,\n    include_raw_content=True\n)\n\n# Extract content from multiple URLs\nresult = agent.tool.tavily_extract(\n    urls=[\"www.tavily.com\", \"www.apple.com\"],\n    extract_depth=\"advanced\",\n    format=\"markdown\"\n)\n\n# Advanced crawl with instructions and filtering\nresult = agent.tool.tavily_crawl(\n    url=\"www.tavily.com\",\n    max_depth=2,\n    limit=50,\n    instructions=\"Find all API documentation and developer guides\",\n    extract_depth=\"advanced\",\n    include_images=True\n)\n\n# Basic website mapping\nresult = agent.tool.tavily_map(url=\"www.tavily.com\")\n\n```\n\n### Exa Search and Contents\n\n```python\nfrom strands import Agent\nfrom strands_tools.exa import exa_search, exa_get_contents\n\nagent = Agent(tools=[exa_search, exa_get_contents])\n\n# Basic search (auto mode is default and recommended)\nresult = agent.tool.exa_search(\n    query=\"Best project management software\",\n    text=True\n)\n\n# Company-specific search when needed\nresult = agent.tool.exa_search(\n    query=\"Anthropic AI safety research\",\n    category=\"company\",\n    include_domains=[\"anthropic.com\"],\n    num_results=5,\n    summary={\"query\": \"key research areas and findings\"}\n)\n\n# News search with date filtering\nresult = agent.tool.exa_search(\n    query=\"AI regulation policy updates\",\n    category=\"news\",\n    start_published_date=\"2024-01-01T00:00:00.000Z\",\n    text=True\n)\n\n# Get detailed content from specific URLs\nresult = agent.tool.exa_get_contents(\n    urls=[\n        \"https://example.com/blog-post\",\n        \"https://github.com/microsoft/semantic-kernel\"\n    ],\n    text={\"maxCharacters\": 5000, \"includeHtmlTags\": False},\n    summary={\n        \"query\": \"main points and practical applications\"\n    },\n    subpages=2,\n    extras={\"links\": 5, \"imageLinks\": 2}\n)\n\n# Structured summary with JSON schema\nresult = agent.tool.exa_get_contents(\n    urls=[\"https://example.com/article\"],\n    summary={\n        \"query\": \"main findings and recommendations\",\n        \"schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"main_points\": {\"type\": \"string\", \"description\": \"Key points from the article\"},\n                \"recommendations\": {\"type\": \"string\", \"description\": \"Suggested actions or advice\"},\n                \"conclusion\": {\"type\": \"string\", \"description\": \"Overall conclusion\"},\n                \"relevance\": {\"type\": \"string\", \"description\": \"Why this matters\"}\n            },\n            \"required\": [\"main_points\", \"conclusion\"]\n        }\n    }\n)\n\n```\n\n### Python Code Execution\n\n*Note: `python_repl` does not work on Windows.*\n\n```python\nfrom strands import Agent\nfrom strands_tools import python_repl\n\nagent = Agent(tools=[python_repl])\n\n# Execute Python code with state persistence\nresult = agent.tool.python_repl(code=\"\"\"\nimport pandas as pd\n\n# Load and process data\ndata = pd.read_csv('data.csv')\nprocessed = data.groupby('category').mean()\n\nprocessed.head()\n\"\"\")\n```\n\n### Code Interpreter\n\n```python\nfrom strands import Agent\nfrom strands_tools.code_interpreter import AgentCoreCodeInterpreter\n\n# Create the code interpreter tool\nbedrock_agent_core_code_interpreter = AgentCoreCodeInterpreter(region=\"us-west-2\")\nagent = Agent(tools=[bedrock_agent_core_code_interpreter.code_interpreter])\n\n# Create a session\nagent.tool.code_interpreter({\n    \"action\": {\n        
\"type\": \"initSession\",\n        \"description\": \"Data analysis session\",\n        \"session_name\": \"analysis-session\"\n    }\n})\n\n# Execute Python code\nagent.tool.code_interpreter({\n    \"action\": {\n        \"type\": \"executeCode\",\n        \"session_name\": \"analysis-session\",\n        \"code\": \"print('Hello from sandbox!')\",\n        \"language\": \"python\"\n    }\n})\n```\n\n### Swarm Intelligence\n\n```python\nfrom strands import Agent\nfrom strands_tools import swarm\n\nagent = Agent(tools=[swarm])\n\n# Create a collaborative swarm of agents to tackle a complex problem\nresult = agent.tool.swarm(\n    task=\"Generate creative solutions for reducing plastic waste in urban areas\",\n    swarm_size=5,\n    coordination_pattern=\"collaborative\"\n)\n\n# Create a competitive swarm for diverse solution generation\nresult = agent.tool.swarm(\n    task=\"Design an innovative product for smart home automation\",\n    swarm_size=3,\n    coordination_pattern=\"competitive\"\n)\n\n# Hybrid approach combining collaboration and competition\nresult = agent.tool.swarm(\n    task=\"Develop marketing strategies for a new sustainable fashion brand\",\n    swarm_size=4,\n    coordination_pattern=\"hybrid\"\n)\n```\n\n### Use AWS\n\n```python\nfrom strands import Agent\nfrom strands_tools import use_aws\n\nagent = Agent(tools=[use_aws])\n\n# List S3 buckets\nresult = agent.tool.use_aws(\n    service_name=\"s3\",\n    operation_name=\"list_buckets\",\n    parameters={},\n    region=\"us-east-1\",\n    label=\"List all S3 buckets\"\n)\n\n# Get the contents of a specific S3 bucket\nresult = agent.tool.use_aws(\n    service_name=\"s3\",\n    operation_name=\"list_objects_v2\",\n    parameters={\"Bucket\": \"example-bucket\"},  # Replace with your actual bucket name\n    region=\"us-east-1\",\n    label=\"List objects in a specific S3 bucket\"\n)\n\n# Get the list of EC2 subnets\nresult = agent.tool.use_aws(\n    service_name=\"ec2\",\n    operation_name=\"describe_subnets\",\n    parameters={},\n    region=\"us-east-1\",\n    label=\"List all subnets\"\n)\n```\n\n### Batch Tool\n\n```python\nimport os\nimport sys\n\nfrom strands import Agent\nfrom strands_tools import batch, http_request, use_aws\n\n# Example usage of the batch with http_request and use_aws tools\nagent = Agent(tools=[batch, http_request, use_aws])\n\nresult = agent.tool.batch(\n    invocations=[\n        {\"name\": \"http_request\", \"arguments\": {\"method\": \"GET\", \"url\": \"https://api.ipify.org?format=json\"}},\n        {\n            \"name\": \"use_aws\",\n            \"arguments\": {\n                \"service_name\": \"s3\",\n                \"operation_name\": \"list_buckets\",\n                \"parameters\": {},\n                \"region\": \"us-east-1\",\n                \"label\": \"List S3 Buckets\"\n            }\n        },\n    ]\n)\n```\n\n### Video Tools\n\n```python\nfrom strands import Agent\nfrom strands_tools import search_video, chat_video\n\nagent = Agent(tools=[search_video, chat_video])\n\n# Search for video content using natural language\nresult = agent.tool.search_video(\n    query=\"people discussing AI technology\",\n    threshold=\"high\",\n    group_by=\"video\",\n    page_limit=5\n)\n\n# Chat with existing video (no index_id needed)\nresult = agent.tool.chat_video(\n    prompt=\"What are the main topics discussed in this video?\",\n    video_id=\"existing-video-id\"\n)\n\n# Chat with new video file (index_id required for upload)\nresult = agent.tool.chat_video(\n    
prompt=\"Describe what happens in this video\",\n    video_path=\"/path/to/video.mp4\",\n    index_id=\"your-index-id\"  # or set TWELVELABS_PEGASUS_INDEX_ID env var\n)\n```\n\n### AgentCore Memory\n```python\nfrom strands import Agent\nfrom strands_tools.agent_core_memory import AgentCoreMemoryToolProvider\n\n\nprovider = AgentCoreMemoryToolProvider(\n    memory_id=\"memory-123abc\",  # Required\n    actor_id=\"user-456\",        # Required\n    session_id=\"session-789\",   # Required\n    namespace=\"default\",        # Required\n    region=\"us-west-2\"          # Optional, defaults to us-west-2\n)\n\nagent = Agent(tools=provider.tools)\n\n# Create a new memory\nresult = agent.tool.agent_core_memory(\n    action=\"record\",\n    content=\"I am allergic to shellfish\"\n)\n\n# Search for relevant memories\nresult = agent.tool.agent_core_memory(\n    action=\"retrieve\",\n    query=\"user preferences\"\n)\n\n# List all memories\nresult = agent.tool.agent_core_memory(\n    action=\"list\"\n)\n\n# Get a specific memory by ID\nresult = agent.tool.agent_core_memory(\n    action=\"get\",\n    memory_record_id=\"mr-12345\"\n)\n```\n\n### Browser\n```python\nfrom strands import Agent\nfrom strands_tools.browser import LocalChromiumBrowser\n\n# Create browser tool\nbrowser = LocalChromiumBrowser()\nagent = Agent(tools=[browser.browser])\n\n# Simple navigation\nresult = agent.tool.browser({\n    \"action\": {\n        \"type\": \"navigate\",\n        \"url\": \"https://example.com\"\n    }\n})\n\n# Initialize a session first\nresult = agent.tool.browser({\n    \"action\": {\n        \"type\": \"initSession\",\n        \"session_name\": \"main-session\",\n        \"description\": \"Web automation session\"\n    }\n})\n```\n\n### Handoff to User\n\n```python\nfrom strands import Agent\nfrom strands_tools import handoff_to_user\n\nagent = Agent(tools=[handoff_to_user])\n\n# Request user confirmation and continue\nresponse = agent.tool.handoff_to_user(\n    message=\"I need your approval to proceed with deleting these files. Type 'yes' to confirm.\",\n    breakout_of_loop=False\n)\n\n# Complete handoff to user (stops agent execution)\nagent.tool.handoff_to_user(\n    message=\"Task completed. 
### Handoff to User

```python
from strands import Agent
from strands_tools import handoff_to_user

agent = Agent(tools=[handoff_to_user])

# Request user confirmation and continue
response = agent.tool.handoff_to_user(
    message="I need your approval to proceed with deleting these files. Type 'yes' to confirm.",
    breakout_of_loop=False
)

# Complete handoff to user (stops agent execution)
agent.tool.handoff_to_user(
    message="Task completed. Please review the results and take any necessary follow-up actions.",
    breakout_of_loop=True
)
```

### A2A Client

```python
from strands import Agent
from strands_tools.a2a_client import A2AClientToolProvider

# Initialize the A2A client provider with known agent URLs
provider = A2AClientToolProvider(known_agent_urls=["http://localhost:9000"])
agent = Agent(tools=provider.tools)

# Use natural language to interact with A2A agents
response = agent("discover available agents and send a greeting message")

# The agent will automatically use the available tools:
# - discover_agent(url) to find agents
# - list_discovered_agents() to see all discovered agents
# - send_message(message_text, target_agent_url) to communicate
```

### Diagram

```python
from strands import Agent
from strands_tools import diagram

agent = Agent(tools=[diagram])

# Create an AWS cloud architecture diagram
result = agent.tool.diagram(
    diagram_type="cloud",
    nodes=[
        {"id": "users", "type": "Users", "label": "End Users"},
        {"id": "cloudfront", "type": "CloudFront", "label": "CDN"},
        {"id": "s3", "type": "S3", "label": "Static Assets"},
        {"id": "api", "type": "APIGateway", "label": "API Gateway"},
        {"id": "lambda", "type": "Lambda", "label": "Backend Service"}
    ],
    edges=[
        {"from": "users", "to": "cloudfront"},
        {"from": "cloudfront", "to": "s3"},
        {"from": "users", "to": "api"},
        {"from": "api", "to": "lambda"}
    ],
    title="Web Application Architecture"
)

# Create a UML class diagram
result = agent.tool.diagram(
    diagram_type="class",
    elements=[
        {
            "name": "User",
            "attributes": ["+id: int", "-name: string", "#email: string"],
            "methods": ["+login(): bool", "+logout(): void"]
        },
        {
            "name": "Order",
            "attributes": ["+id: int", "-items: List", "-total: float"],
            "methods": ["+addItem(item): void", "+calculateTotal(): float"]
        }
    ],
    relationships=[
        {"from": "User", "to": "Order", "type": "association", "multiplicity": "1..*"}
    ],
    title="E-commerce Domain Model"
)
```

### RSS Feed Management

```python
from strands import Agent
from strands_tools import rss

agent = Agent(tools=[rss])

# Subscribe to a feed
result = agent.tool.rss(
    action="subscribe",
    url="https://news.example.com/rss/technology"
)

# List all subscribed feeds
feeds = agent.tool.rss(action="list")

# Read entries from a specific feed
entries = agent.tool.rss(
    action="read",
    feed_id="news_example_com_technology",
    max_entries=5,
    include_content=True
)

# Search across all feeds
search_results = agent.tool.rss(
    action="search",
    query="machine learning",
    max_entries=10
)

# Fetch feed content without subscribing
latest_news = agent.tool.rss(
    action="fetch",
    url="https://blog.example.org/feed",
    max_entries=3
)
```
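### Cron

The tools overview table documents `cron` for scheduling recurring tasks with standard cron syntax (this tool does not work on Windows). A minimal sketch based on the documented `schedule` call; the task name, schedule, and command are placeholders:

```python
from strands import Agent
from strands_tools import cron

agent = Agent(tools=[cron])

# Schedule a job that runs at the top of every hour (placeholder name and command)
result = agent.tool.cron(
    action="schedule",
    name="task",
    schedule="0 * * * *",
    command="backup.sh"
)
```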
world!\", app_name=\"Notepad\")\n\n# Analyze current computer screen\nresult = agent.tool.use_computer(action=\"analyze_screen\")\n\nresult = agent.tool.use_computer(action=\"open_app\", app_name=\"Calculator\")\nresult = agent.tool.use_computer(action=\"close_app\", app_name=\"Calendar\")\n\nresult = agent.tool.use_computer(\n    action=\"hotkey\",\n    hotkey_str=\"command+ctrl+f\",  # For macOS\n    app_name=\"Chrome\"\n)\n```\n\n## \ud83c\udf0d Environment Variables Configuration\n\nAgents Tools provides extensive customization through environment variables. This allows you to configure tool behavior without modifying code, making it ideal for different environments (development, testing, production).\n\n### Global Environment Variables\n\nThese variables affect multiple tools:\n\n| Environment Variable | Description | Default | Affected Tools |\n|----------------------|-------------|---------|---------------|\n| BYPASS_TOOL_CONSENT | Bypass consent for tool invocation, set to \"true\" to enable | false | All tools that require consent (e.g. shell, file_write, python_repl) |\n| STRANDS_TOOL_CONSOLE_MODE | Enable rich UI for tools, set to \"enabled\" to enable | disabled | All tools that have optional rich UI |\n| AWS_REGION | Default AWS region for AWS operations | us-west-2 | use_aws, retrieve, generate_image, memory, nova_reels |\n| AWS_PROFILE | AWS profile name to use from ~/.aws/credentials | default | use_aws, retrieve |\n| LOG_LEVEL | Logging level (DEBUG, INFO, WARNING, ERROR) | INFO | All tools |\n\n### Tool-Specific Environment Variables\n\n#### Calculator Tool\n\n| Environment Variable | Description | Default |\n|----------------------|-------------|---------|\n| CALCULATOR_MODE | Default calculation mode | evaluate |\n| CALCULATOR_PRECISION | Number of decimal places for results | 10 |\n| CALCULATOR_SCIENTIFIC | Whether to use scientific notation for numbers | False |\n| CALCULATOR_FORCE_NUMERIC | Force numeric evaluation of symbolic expressions | False |\n| CALCULATOR_FORCE_SCIENTIFIC_THRESHOLD | Threshold for automatic scientific notation | 1e21 |\n| CALCULATOR_DERIVE_ORDER | Default order for derivatives | 1 |\n| CALCULATOR_SERIES_POINT | Default point for series expansion | 0 |\n| CALCULATOR_SERIES_ORDER | Default order for series expansion | 5 |\n\n#### Current Time Tool\n\n| Environment Variable | Description | Default |\n|----------------------|-------------|---------|\n| DEFAULT_TIMEZONE | Default timezone for current_time tool | UTC |\n\n#### Sleep Tool\n\n| Environment Variable | Description | Default |\n|----------------------|-------------|---------|\n| MAX_SLEEP_SECONDS | Maximum allowed sleep duration in seconds | 300 |\n\n#### Tavily Search, Extract, Crawl, and Map Tools\n\n| Environment Variable | Description | Default |\n|----------------------|-------------|---------|\n| TAVILY_API_KEY | Tavily API key (required for all Tavily functionality) | None |\n- Visit https://www.tavily.com/ to create a free account and API key.\n\n#### Exa Search and Contents Tools\n\n| Environment Variable | Description | Default |\n|----------------------|-------------|---------|\n| EXA_API_KEY | Exa API key (required for all Exa functionality) | None |\n- Visit https://dashboard.exa.ai/api-keys to create a free account and API key.\n\n#### Mem0 Memory Tool\n\nThe Mem0 Memory Tool supports three different backend configurations:\n\n1. **Mem0 Platform**:\n   - Uses the Mem0 Platform API for memory management\n   - Requires a Mem0 API key\n\n2. 
### Tool-Specific Environment Variables

#### Calculator Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| CALCULATOR_MODE | Default calculation mode | evaluate |
| CALCULATOR_PRECISION | Number of decimal places for results | 10 |
| CALCULATOR_SCIENTIFIC | Whether to use scientific notation for numbers | False |
| CALCULATOR_FORCE_NUMERIC | Force numeric evaluation of symbolic expressions | False |
| CALCULATOR_FORCE_SCIENTIFIC_THRESHOLD | Threshold for automatic scientific notation | 1e21 |
| CALCULATOR_DERIVE_ORDER | Default order for derivatives | 1 |
| CALCULATOR_SERIES_POINT | Default point for series expansion | 0 |
| CALCULATOR_SERIES_ORDER | Default order for series expansion | 5 |

#### Current Time Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| DEFAULT_TIMEZONE | Default timezone for the current_time tool | UTC |

#### Sleep Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| MAX_SLEEP_SECONDS | Maximum allowed sleep duration in seconds | 300 |

#### Tavily Search, Extract, Crawl, and Map Tools

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| TAVILY_API_KEY | Tavily API key (required for all Tavily functionality) | None |

Visit https://www.tavily.com/ to create a free account and API key.

#### Exa Search and Contents Tools

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| EXA_API_KEY | Exa API key (required for all Exa functionality) | None |

Visit https://dashboard.exa.ai/api-keys to create a free account and API key.

#### Mem0 Memory Tool

The Mem0 Memory Tool supports three vector-store backends plus an optional graph backend:

1. **Mem0 Platform**:
   - Uses the Mem0 Platform API for memory management
   - Requires a Mem0 API key

2. **OpenSearch** (recommended for AWS environments):
   - Uses OpenSearch as the vector store backend
   - Requires AWS credentials and OpenSearch configuration

3. **FAISS** (default for local development):
   - Uses FAISS as the local vector store backend
   - Requires the faiss-cpu package for local vector storage

4. **Neptune Analytics** (optional graph backend for search enhancement):
   - Uses Neptune Analytics as the graph store backend to enhance memory recall
   - Requires AWS credentials and Neptune Analytics configuration
   ```
   # Configure your Neptune Analytics graph ID in the .env file:
   export NEPTUNE_ANALYTICS_GRAPH_IDENTIFIER=g-sample-graph-id

   # Or configure your Neptune Analytics graph ID in Python code:
   import os
   os.environ['NEPTUNE_ANALYTICS_GRAPH_IDENTIFIER'] = "g-sample-graph-id"
   ```

| Environment Variable | Description | Default | Required For |
|----------------------|-------------|---------|--------------|
| MEM0_API_KEY | Mem0 Platform API key | None | Mem0 Platform |
| OPENSEARCH_HOST | OpenSearch host URL | None | OpenSearch |
| AWS_REGION | AWS region for OpenSearch | us-west-2 | OpenSearch |
| NEPTUNE_ANALYTICS_GRAPH_IDENTIFIER | Neptune Analytics graph identifier | None | Neptune Analytics |
| DEV | Enable development mode (bypasses confirmations) | false | All modes |
| MEM0_LLM_PROVIDER | LLM provider for memory processing | aws_bedrock | All modes |
| MEM0_LLM_MODEL | LLM model for memory processing | anthropic.claude-3-5-haiku-20241022-v1:0 | All modes |
| MEM0_LLM_TEMPERATURE | LLM temperature (0.0-2.0) | 0.1 | All modes |
| MEM0_LLM_MAX_TOKENS | LLM maximum tokens | 2000 | All modes |
| MEM0_EMBEDDER_PROVIDER | Embedder provider for vector embeddings | aws_bedrock | All modes |
| MEM0_EMBEDDER_MODEL | Embedder model for vector embeddings | amazon.titan-embed-text-v2:0 | All modes |

**Note**:
- If `MEM0_API_KEY` is set, the tool uses the Mem0 Platform
- If `OPENSEARCH_HOST` is set, the tool uses OpenSearch
- If neither is set, the tool defaults to FAISS (requires the `faiss-cpu` package)
- If `NEPTUNE_ANALYTICS_GRAPH_IDENTIFIER` is set, the tool configures Neptune Analytics as a graph store to enhance memory search
- The LLM configuration applies to all backend modes and customizes the language model used for memory processing
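Putting this together, the backend is selected purely by which variables are present. A minimal sketch that selects the OpenSearch backend before storing a memory; the host value is a placeholder, and the memory call itself is the one documented in the tools overview table:

```python
import os

# Setting OPENSEARCH_HOST selects the OpenSearch backend (placeholder host).
# Leaving both MEM0_API_KEY and OPENSEARCH_HOST unset falls back to local FAISS instead.
os.environ["OPENSEARCH_HOST"] = "https://my-opensearch-domain.us-west-2.es.amazonaws.com"
os.environ["AWS_REGION"] = "us-west-2"

from strands import Agent
from strands_tools import mem0_memory

agent = Agent(tools=[mem0_memory])

# Store a user memory
agent.tool.mem0_memory(
    action="store",
    content="Remember I like to play tennis",
    user_id="alex"
)
```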
#### Bright Data Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| BRIGHTDATA_API_KEY | Bright Data API key | None |
| BRIGHTDATA_ZONE | Bright Data Web Unlocker zone | web_unlocker1 |

#### Memory Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| MEMORY_DEFAULT_MAX_RESULTS | Default maximum results for list operations | 50 |
| MEMORY_DEFAULT_MIN_SCORE | Default minimum relevance score for filtering results | 0.4 |

#### Nova Reels Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| NOVA_REEL_DEFAULT_SEED | Default seed for video generation | 0 |
| NOVA_REEL_DEFAULT_FPS | Default frames per second for generated videos | 24 |
| NOVA_REEL_DEFAULT_DIMENSION | Default video resolution in WIDTHxHEIGHT format | 1280x720 |
| NOVA_REEL_DEFAULT_MAX_RESULTS | Default maximum number of jobs to return for the list action | 10 |

#### Python REPL Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| PYTHON_REPL_BINARY_MAX_LEN | Maximum length for binary content before truncation | 100 |
| PYTHON_REPL_INTERACTIVE | Whether to enable interactive PTY mode | None |
| PYTHON_REPL_RESET_STATE | Whether to reset the REPL state before execution | None |

#### Shell Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| SHELL_DEFAULT_TIMEOUT | Default timeout in seconds for shell commands | 900 |

#### Slack Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| SLACK_DEFAULT_EVENT_COUNT | Default number of events to retrieve | 42 |
| STRANDS_SLACK_AUTO_REPLY | Enable automatic replies to messages | false |
| STRANDS_SLACK_LISTEN_ONLY_TAG | Only process messages containing this tag | None |

#### Speak Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| SPEAK_DEFAULT_STYLE | Default style for status messages | green |
| SPEAK_DEFAULT_MODE | Default speech mode (fast/polly) | fast |
| SPEAK_DEFAULT_VOICE_ID | Default Polly voice ID | Joanna |
| SPEAK_DEFAULT_OUTPUT_PATH | Default audio output path | speech_output.mp3 |
| SPEAK_DEFAULT_PLAY_AUDIO | Whether to play audio by default | True |

#### Editor Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| EDITOR_DIR_TREE_MAX_DEPTH | Maximum depth for directory tree visualization | 2 |
| EDITOR_DEFAULT_STYLE | Default style for output panels | default |
| EDITOR_DEFAULT_LANGUAGE | Default language for syntax highlighting | python |

#### Environment Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| ENV_VARS_MASKED_DEFAULT | Default setting for masking sensitive values | true |

#### Dynamic MCP Client Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| STRANDS_MCP_TIMEOUT | Default timeout in seconds for MCP operations | 30.0 |

#### File Read Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| FILE_READ_RECURSIVE_DEFAULT | Default setting for recursive file searching | true |
| FILE_READ_CONTEXT_LINES_DEFAULT | Default number of context lines around search matches | 2 |
| FILE_READ_START_LINE_DEFAULT | Default starting line number for lines mode | 0 |
| FILE_READ_CHUNK_OFFSET_DEFAULT | Default byte offset for chunk mode | 0 |
| FILE_READ_DIFF_TYPE_DEFAULT | Default diff type for file comparisons | unified |
| FILE_READ_USE_GIT_DEFAULT | Default setting for using git in time machine mode | true |
| FILE_READ_NUM_REVISIONS_DEFAULT | Default number of revisions to show in time machine mode | 5 |

#### Browser Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| STRANDS_DEFAULT_WAIT_TIME | Default wait time applied to browser actions | 1 |
| STRANDS_BROWSER_MAX_RETRIES | Default number of retries to perform when an action fails | 3 |
| STRANDS_BROWSER_RETRY_DELAY | Default delay between retries | 1 |
| STRANDS_BROWSER_SCREENSHOTS_DIR | Default directory where screenshots are saved | screenshots |
| STRANDS_BROWSER_USER_DATA_DIR | Default directory where data for reloading a browser instance is stored | ~/.browser_automation |
| STRANDS_BROWSER_HEADLESS | Default headless setting for launching browsers | false |
| STRANDS_BROWSER_WIDTH | Default width of the browser | 1280 |
| STRANDS_BROWSER_HEIGHT | Default height of the browser | 800 |
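As a brief illustration, and assuming the browser tool reads these variables when the browser is launched (as the table above indicates), a headless run with a custom window size might be configured like this; the values are illustrative:

```python
import os

# Illustrative values; see the Browser Tool table above for meanings and defaults
os.environ["STRANDS_BROWSER_HEADLESS"] = "true"
os.environ["STRANDS_BROWSER_WIDTH"] = "1920"
os.environ["STRANDS_BROWSER_HEIGHT"] = "1080"

from strands import Agent
from strands_tools.browser import LocalChromiumBrowser

# The browser picks up the environment configuration when it launches
browser = LocalChromiumBrowser()
agent = Agent(tools=[browser.browser])
```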
#### RSS Tool

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| STRANDS_RSS_MAX_ENTRIES | Default maximum number of entries per feed | 100 |
| STRANDS_RSS_UPDATE_INTERVAL | Default interval between RSS feed updates, in minutes | 60 |
| STRANDS_RSS_STORAGE_PATH | Default storage path where RSS feeds are stored locally | strands_rss_feeds (this may vary based on your system) |

#### Video Tools

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| TWELVELABS_API_KEY | TwelveLabs API key for video analysis | None |
| TWELVELABS_MARENGO_INDEX_ID | Default index ID for the search_video tool | None |
| TWELVELABS_PEGASUS_INDEX_ID | Default index ID for the chat_video tool | None |

## Contributing ❤️

This is a community-driven project, powered by passionate developers like you. We enthusiastically welcome contributions from everyone, regardless of experience level; your unique perspective is valuable to us!

### How to Get Started?

1. **Find your first opportunity**: If you're new to the project, explore the issues labeled "good first issue" for beginner-friendly tasks.
2. **Understand our workflow**: Review our [Contributing Guide](CONTRIBUTING.md) to learn about our development setup, coding standards, and pull request process.
3. **Make your impact**: Contributions come in many forms: fixing bugs, enhancing documentation, improving performance, adding features, writing tests, or refining the user experience.
4. **Submit your work**: When you're ready, submit a well-documented pull request, and our maintainers will provide feedback to help get your changes merged.

Your questions, insights, and ideas are always welcome!

Together, we're building something meaningful that impacts real users. We look forward to collaborating with you!

## License

This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.

## Security

See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
    "bugtrack_url": null,
    "license": "Apache-2.0",
    "summary": "A collection of specialized tools for Strands Agents",
    "version": "0.2.10",
    "project_urls": {
        "Bug Tracker": "https://github.com/strands-agents/tools/issues",
        "Documentation": "https://strandsagents.com/",
        "Homepage": "https://github.com/strands-agents/tools"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "875ea5a2a4ab0e3ce24d5c98176888c0fd8fc4095a884e2834ea003c0b9d5bf8",
                "md5": "459f7c93b89c77a1b8ccc6c8490b0db2",
                "sha256": "a368c19590e4f21030ebd3dd595872d7a1e863f691a26abd7ee6ff20291681f2"
            },
            "downloads": -1,
            "filename": "strands_agents_tools-0.2.10-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "459f7c93b89c77a1b8ccc6c8490b0db2",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.10",
            "size": 291418,
            "upload_time": "2025-10-08T16:29:05",
            "upload_time_iso_8601": "2025-10-08T16:29:05.858496Z",
            "url": "https://files.pythonhosted.org/packages/87/5e/a5a2a4ab0e3ce24d5c98176888c0fd8fc4095a884e2834ea003c0b9d5bf8/strands_agents_tools-0.2.10-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "ff7a5aaec4dbdbce7a35c8ea2f507a3f0c8e32bdb6a70bda72f358b47518d1ae",
                "md5": "7837b41879f79b860aceb176af371526",
                "sha256": "c6b519f1f6ce32d3e5681693a7ae9b45fd63c8ec3073a4291e8aba43e9d41f68"
            },
            "downloads": -1,
            "filename": "strands_agents_tools-0.2.10.tar.gz",
            "has_sig": false,
            "md5_digest": "7837b41879f79b860aceb176af371526",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.10",
            "size": 431031,
            "upload_time": "2025-10-08T16:29:07",
            "upload_time_iso_8601": "2025-10-08T16:29:07.794503Z",
            "url": "https://files.pythonhosted.org/packages/ff/7a/5aaec4dbdbce7a35c8ea2f507a3f0c8e32bdb6a70bda72f358b47518d1ae/strands_agents_tools-0.2.10.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-10-08 16:29:07",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "strands-agents",
    "github_project": "tools",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "strands-agents-tools"
}
        