7 changes: 7 additions & 0 deletions docs.json
@@ -138,6 +138,7 @@
"pages": [
"openhands/usage/llms/openhands-llms",
"openhands/usage/llms/azure-llms",
"openhands/usage/llms/bedrock-llms",
"openhands/usage/llms/google-llms",
"openhands/usage/llms/groq",
"openhands/usage/llms/local-llms",
@@ -293,6 +294,12 @@
"sdk/guides/llm-error-handling"
]
},
{
"group": "LLM Providers",
"pages": [
"sdk/guides/llm-provider-bedrock"
]
},
{
"group": "Agent Features",
"pages": [
75 changes: 75 additions & 0 deletions openhands/usage/llms/bedrock-llms.mdx
@@ -0,0 +1,75 @@
---
title: AWS Bedrock
description: OpenHands uses LiteLLM to make calls to AWS Bedrock models. You can find LiteLLM's documentation on using AWS Bedrock as a provider [here](https://docs.litellm.ai/docs/providers/bedrock).
---

## Prerequisites

AWS Bedrock requires the `boto3` library to be installed. LiteLLM uses it internally; you don't need to import it in your code.

```bash
pip install "boto3>=1.28.57"
```

## AWS Bedrock Configuration

When running OpenHands, pass your AWS credentials as environment variables using `-e` flags in the `docker run` command.

### Authentication Options

| Method | Environment Variables |
|--------|----------------------|
| Access Keys | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` |
| Session Token | `AWS_SESSION_TOKEN` (in addition to access keys) |
| AWS Profile | `AWS_PROFILE_NAME` |
| IAM Role | `AWS_ROLE_NAME`, `AWS_WEB_IDENTITY_TOKEN` |
| Bedrock API Key | `AWS_BEARER_TOKEN_BEDROCK` |

### Example with Access Keys

```bash
docker run -it --pull=always \
-e AWS_ACCESS_KEY_ID="your-access-key" \
-e AWS_SECRET_ACCESS_KEY="your-secret-key" \
-e AWS_REGION_NAME="us-east-1" \
...
```
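
If you're using temporary credentials (for example from AWS STS or SSO), pass the session token as well. All values here are placeholders:

```bash
docker run -it --pull=always \
    -e AWS_ACCESS_KEY_ID="your-access-key" \
    -e AWS_SECRET_ACCESS_KEY="your-secret-key" \
    -e AWS_SESSION_TOKEN="your-session-token" \
    -e AWS_REGION_NAME="us-east-1" \
    ...
```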

Then in the OpenHands UI Settings under the `LLM` tab:

1. Enable `Advanced` options.
2. Set `Custom Model` to `bedrock/<model-id>` (e.g., `bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0`)

## Model Names

Use the `bedrock/` prefix followed by the Bedrock model ID:

| Model | Model ID |
|-------|----------|
| Claude 3.5 Sonnet v2 | `bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0` |
| Claude 3 Opus | `bedrock/anthropic.claude-3-opus-20240229-v1:0` |
| Claude 3 Haiku | `bedrock/anthropic.claude-3-haiku-20240307-v1:0` |
| Claude 3.5 Haiku | `bedrock/anthropic.claude-3-5-haiku-20241022-v1:0` |

<Note>
You can find the full list of available Bedrock model IDs in the [AWS Bedrock documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html) or in the AWS Console under Bedrock > Model access.
</Note>

## Cross-Region Inference

AWS Bedrock supports cross-region inference for improved availability. To use it, prefix the model ID with the inference-profile geography (for example `us.` or `eu.`):

```
bedrock/us.anthropic.claude-3-5-sonnet-20241022-v2:0
```

## Troubleshooting

### Common Issues

1. **Access Denied**: Ensure your AWS credentials have the necessary permissions for Bedrock. You need the `bedrock:InvokeModel` permission (see the example policy below).

2. **Model Not Found**: Verify that the model is enabled in your AWS account. Go to AWS Console > Bedrock > Model access to enable models.

3. **Region Issues**: Make sure `AWS_REGION_NAME` is set to a region where Bedrock is available and where you have model access enabled.
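
For the **Access Denied** case above, a minimal IAM policy might look like the following; scope `Resource` down to specific model ARNs where possible, and include `bedrock:InvokeModelWithResponseStream` if you use streaming:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
```
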
1 change: 1 addition & 0 deletions openhands/usage/llms/llms.mdx
@@ -78,6 +78,7 @@

We have a few guides for running OpenHands with specific model providers:

- [AWS Bedrock](/openhands/usage/llms/bedrock-llms)
- [Azure](/openhands/usage/llms/azure-llms)
- [Google](/openhands/usage/llms/google-llms)
- [Groq](/openhands/usage/llms/groq)
@@ -95,7 +96,7 @@

LLM providers have specific settings that can be customized to optimize their performance with OpenHands, such as:

- **Custom Tokenizers**: For specialized models, you can add a suitable tokenizer.
- **Native Tool Calling**: Toggle native function/tool calling capabilities.

For detailed information about model customization, see
1 change: 1 addition & 0 deletions sdk/arch/llm.mdx
@@ -67,7 +67,7 @@

| Component | Purpose | Design |
|-----------|---------|--------|
| **[`LLM`](https://github.com/OpenHands/software-agent-sdk/blob/main/openhands-sdk/openhands/sdk/llm/llm.py)** | Configuration model | Pydantic model with provider settings |
| **[`completion()`](https://github.com/OpenHands/software-agent-sdk/blob/main/openhands-sdk/openhands/sdk/llm/llm.py)** | Chat Completions API | Handles retries, timeouts, streaming |
| **[`responses()`](https://github.com/OpenHands/software-agent-sdk/blob/main/openhands-sdk/openhands/sdk/llm/llm.py)** | Responses API | Enhanced reasoning with encrypted thinking |
| **[`LiteLLM`](https://github.com/BerriAI/litellm)** | Provider adapter | Unified API for 100+ providers |
@@ -114,7 +114,7 @@

**Environment Variable Pattern:**
- **Prefix:** All variables start with `LLM_`
- **Mapping:** `LLM_FIELD` → `field` (lowercased)
- **Types:** Auto-cast to int, float, bool, JSON, or SecretStr
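
For example, following the documented `LLM_` mapping (the field names shown here are illustrative; check the `LLM` model's actual fields):

```bash
export LLM_MODEL="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"  # → model (str)
export LLM_TEMPERATURE="0.2"           # → temperature (auto-cast to float)
export LLM_NATIVE_TOOL_CALLING="true"  # → native_tool_calling (auto-cast to bool)
```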

**Common Variables:**
Expand Down Expand Up @@ -185,7 +185,7 @@

1. **Validation:** Check required fields (model, messages)
2. **Request:** Call LiteLLM with provider-specific formatting
3. **Retry Logic:** Exponential backoff on failures (configurable; see the sketch below)
4. **Telemetry:** Record tokens, cost, latency
5. **Response:** Return completion or raise error
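
The retry step follows the standard exponential-backoff pattern. As a rough illustration only (not the SDK's actual implementation; names and defaults here are invented):

```python
import random
import time


def call_with_backoff(fn, max_retries=4, base_delay=1.0, max_delay=30.0):
    """Retry `fn`, doubling the delay after each failure, with jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted; surface the error
            delay = min(max_delay, base_delay * 2**attempt)
            time.sleep(delay + random.uniform(0, delay * 0.1))
```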

@@ -294,12 +294,13 @@
| OpenHands hosted models | [/openhands/usage/llms/openhands-llms](/openhands/usage/llms/openhands-llms) |
| OpenAI | [/openhands/usage/llms/openai-llms](/openhands/usage/llms/openai-llms) |
| Azure OpenAI | [/openhands/usage/llms/azure-llms](/openhands/usage/llms/azure-llms) |
| AWS Bedrock | [/openhands/usage/llms/bedrock-llms](/openhands/usage/llms/bedrock-llms) |
| Google Gemini / Vertex | [/openhands/usage/llms/google-llms](/openhands/usage/llms/google-llms) |
| Groq | [/openhands/usage/llms/groq](/openhands/usage/llms/groq) |
| OpenRouter | [/openhands/usage/llms/openrouter](/openhands/usage/llms/openrouter) |
| Moonshot | [/openhands/usage/llms/moonshot](/openhands/usage/llms/moonshot) |
| LiteLLM proxy | [/openhands/usage/llms/litellm-proxy](/openhands/usage/llms/litellm-proxy) |
| Local LLMs (Ollama, SGLang, vLLM, LM Studio) | [/openhands/usage/llms/local-llms](/openhands/usage/llms/local-llms) |
| Custom LLM configurations | [/openhands/usage/llms/custom-llm-configs](/openhands/usage/llms/custom-llm-configs) |

When you follow any of those guides while building with the SDK, create an
@@ -399,7 +400,7 @@

## See Also

- **[Agent Architecture](/sdk/arch/agent)** - How agents use LLMs to reason and perform actions
- **[Events](/sdk/arch/events)** - LLM request/response event types
- **[Security](/sdk/arch/security)** - Optional LLM-based security analysis
- **[Provider Setup Guides](/openhands/usage/llms/openai-llms)** - Provider-specific configuration
169 changes: 169 additions & 0 deletions sdk/guides/llm-provider-bedrock.mdx
@@ -0,0 +1,169 @@
---
title: AWS Bedrock
description: Configure the SDK to use Claude and other models via AWS Bedrock.
---

> A ready-to-run example is available [here](#ready-to-run-example)!

Use AWS Bedrock to access Claude and other foundation models through your AWS account.

## Prerequisites

AWS Bedrock requires the `boto3` library:

```bash
pip install openhands-sdk "boto3>=1.28.57"
```

<Note>
`boto3` is used internally by LiteLLM; you don't need to import it in your code.
</Note>

## Authentication Options

Configure AWS credentials using environment variables:

| Method | Environment Variables |
|--------|----------------------|
| Access Keys | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` |
| Session Token | `AWS_SESSION_TOKEN` (in addition to access keys) |
| AWS Profile | `AWS_PROFILE_NAME` |
| IAM Role | `AWS_ROLE_NAME`, `AWS_WEB_IDENTITY_TOKEN` |
| Bedrock API Key | `AWS_BEARER_TOKEN_BEDROCK` |

You must also set the AWS region:

```bash
export AWS_REGION_NAME="us-east-1"
```
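
With temporary credentials (for example from AWS STS or SSO), also export the session token; all values below are placeholders:

```bash
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_SESSION_TOKEN="your-session-token"
```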

## Model Names

Use the `bedrock/` prefix followed by the Bedrock model ID:

| Model | Model ID |
|-------|----------|
| Claude 3.5 Sonnet v2 | `bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0` |
| Claude 3 Opus | `bedrock/anthropic.claude-3-opus-20240229-v1:0` |
| Claude 3 Haiku | `bedrock/anthropic.claude-3-haiku-20240307-v1:0` |
| Claude 3.5 Haiku | `bedrock/anthropic.claude-3-5-haiku-20241022-v1:0` |

## Basic Usage

```python icon="python" focus={7}
import os
from openhands.sdk import LLM, Agent, Conversation
from openhands.sdk.tool import Tool
from openhands.tools.terminal import TerminalTool
from openhands.tools.file_editor import FileEditorTool

llm = LLM(model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0")

agent = Agent(
llm=llm,
tools=[
Tool(name=TerminalTool.name),
Tool(name=FileEditorTool.name),
],
)

conversation = Conversation(agent=agent, workspace=os.getcwd())
conversation.send_message("List the files in this directory")
conversation.run()
```

## Ready-to-run Example

<Note>
Before running, ensure you have:
1. Installed `boto3>=1.28.57`
2. Set AWS credentials via environment variables
3. Enabled the model in your AWS Bedrock console
</Note>

```python icon="python" expandable
"""
AWS Bedrock with Claude models.

Prerequisites:
    pip install openhands-sdk "boto3>=1.28.57"

Environment variables:
AWS_ACCESS_KEY_ID - Your AWS access key
AWS_SECRET_ACCESS_KEY - Your AWS secret key
AWS_REGION_NAME - AWS region (e.g., us-east-1)
"""
import os

from openhands.sdk import LLM, Agent, Conversation
from openhands.sdk.tool import Tool
from openhands.tools.terminal import TerminalTool
from openhands.tools.file_editor import FileEditorTool

# AWS credentials are read from environment variables by LiteLLM/boto3
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION_NAME

llm = LLM(
model=os.getenv("LLM_MODEL", "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"),
)

agent = Agent(
llm=llm,
tools=[
Tool(name=TerminalTool.name),
Tool(name=FileEditorTool.name),
],
)

conversation = Conversation(agent=agent, workspace=os.getcwd())
conversation.send_message("List the files in this directory and summarize what you see.")
conversation.run()

# Report cost
cost = llm.metrics.accumulated_cost
print(f"EXAMPLE_COST: {cost}")
```

### Running the Example

```bash
# Set AWS credentials
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION_NAME="us-east-1"

# Run the example
python bedrock_example.py
```

## Cross-Region Inference

AWS Bedrock supports cross-region inference. Include the region prefix in your model ID:

```python
llm = LLM(model="bedrock/us.anthropic.claude-3-5-sonnet-20241022-v2:0")
```

## Troubleshooting

### Access Denied

Ensure your AWS credentials have the `bedrock:InvokeModel` permission.

### Model Not Found

Verify the model is enabled in your AWS account:
1. Go to AWS Console > Bedrock > Model access
2. Enable the models you want to use (the snippet below shows which models your credentials can see)
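
To confirm which model IDs are visible to your credentials in a region, you can query Bedrock directly with `boto3` (a quick sanity check, assuming your credentials are already exported):

```python
import boto3

# Lists foundation models visible to the current credentials in us-east-1.
client = boto3.client("bedrock", region_name="us-east-1")
for summary in client.list_foundation_models()["modelSummaries"]:
    print(summary["modelId"])
```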

### Region Issues

Ensure `AWS_REGION_NAME` is set to a region where:
- Bedrock is available
- You have model access enabled

## Next Steps

- **[LLM Registry](/sdk/guides/llm-registry)** - Manage multiple LLM providers
- **[LLM Routing](/sdk/guides/llm-routing)** - Automatically route to different models
- **[OpenHands Bedrock Guide](/openhands/usage/llms/bedrock-llms)** - Self-hosted OpenHands configuration