5 changes: 5 additions & 0 deletions .github/workflows/e2e.yml
@@ -13,6 +13,11 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4

- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"

- name: Set up Go
uses: actions/setup-go@v5
with:
73 changes: 73 additions & 0 deletions integrations/langchain-agentcube/README.md
@@ -0,0 +1,73 @@
# LangChain + AgentCube sandbox

Wire the AgentCube **Code Interpreter** into [LangChain Deep Agents](https://docs.langchain.com/oss/python/deepagents/sandboxes) as the sandbox `backend`.

**Prerequisites**: AgentCube cluster with a `CodeInterpreter` CR deployed; see [getting-started](../../docs/getting-started.md). Local **Python >= 3.11**.

---

## 1. Install

From the repository root:

```bash
pip install -e ./sdk-python
pip install -e ./integrations/langchain-agentcube
pip install langchain-openai # DeepSeek and OpenAI-compatible APIs
# pip install langchain-anthropic # only if you use Anthropic
```

---

## 2. Cluster environment variables

After `kubectl port-forward` to `workloadmanager` and `agentcube-router`:
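A sketch of that port-forward step (the local ports match the URLs below; the Service names and target ports are assumptions to adapt to your cluster):

```bash
# Forward the two control-plane services to localhost (adjust names/ports).
kubectl port-forward svc/workloadmanager 8080:8080 -n default &
kubectl port-forward svc/agentcube-router 8081:8081 -n default &
```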

```bash
export WORKLOAD_MANAGER_URL="http://localhost:8080"
export ROUTER_URL="http://localhost:8081"
export AGENTCUBE_NAMESPACE="default" # same namespace as the CodeInterpreter CR
# export API_TOKEN="..." # if your cluster requires it
# export CODE_INTERPRETER_NAME=my-ci # optional; default my-interpreter
```

---

## 3. LLM API keys

**DeepSeek** (OpenAI-compatible; `pip install langchain-openai`):

```bash
export DEEPSEEK_API_KEY="sk-..."
# Optional: DEEPSEEK_API_BASE (default https://api.deepseek.com/v1), DEEPSEEK_MODEL (default deepseek-chat)
```

**Anthropic Claude** (`pip install langchain-anthropic`):

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
# Optional: ANTHROPIC_MODEL (default claude-haiku-4-5-20251001)
```

**OpenAI GPT** (same `langchain-openai` package):

```bash
export OPENAI_API_KEY="sk-..."
# Optional: OPENAI_MODEL (default gpt-4o-mini); set OPENAI_API_BASE for proxies or compatible gateways
```

If several keys are set, the example script prefers **DeepSeek, then Claude, then GPT**. For DeepSeek you can instead set only `OPENAI_API_KEY` + `OPENAI_API_BASE=https://api.deepseek.com/v1` + `OPENAI_MODEL=deepseek-chat` without `DEEPSEEK_API_KEY`.
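The DeepSeek-via-OpenAI-compatible variant described above, spelled out (values taken from this section):

```bash
export OPENAI_API_KEY="sk-..."                        # your DeepSeek key
export OPENAI_API_BASE="https://api.deepseek.com/v1"
export OPENAI_MODEL="deepseek-chat"
```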

---

## 4. Run the example

```bash
python integrations/langchain-agentcube/example/deep_agent_sandbox.py
```

Flags: `--interpreter`, `--namespace`, `--prompt`. The first two fall back to the `CODE_INTERPRETER_NAME` and `AGENTCUBE_NAMESPACE` environment variables when omitted.
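For example, overriding all three flags (the prompt here is just an illustration):

```bash
python integrations/langchain-agentcube/example/deep_agent_sandbox.py \
  --interpreter my-ci \
  --namespace default \
  --prompt "Write and run a Python script that prints the first 10 squares"
```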

---

The integration code below is built on **agentcube-sdk** and talks to the Router and Workload Manager services.
109 changes: 109 additions & 0 deletions integrations/langchain-agentcube/example/deep_agent_sandbox.py
@@ -0,0 +1,109 @@
#!/usr/bin/env python3
# Copyright The Volcano Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.

"""Example: create_deep_agent with AgentcubeSandbox. See ../README.md."""

from __future__ import annotations

import argparse
import os
import sys
from pathlib import Path
from typing import Any


def _require_env(name: str) -> str:
v = os.environ.get(name)
if not v:
print(f"Error: environment variable {name} is not set", file=sys.stderr)
sys.exit(1)
return v


def _make_chat_model() -> object:
if os.environ.get("DEEPSEEK_API_KEY"):
from langchain_openai import ChatOpenAI

return ChatOpenAI(
model=os.environ.get("DEEPSEEK_MODEL", "deepseek-chat"),
api_key=os.environ["DEEPSEEK_API_KEY"],
base_url=os.environ.get("DEEPSEEK_API_BASE", "https://api.deepseek.com/v1"),
temperature=0,
)
if os.environ.get("ANTHROPIC_API_KEY"):
from langchain_anthropic import ChatAnthropic

return ChatAnthropic(
model=os.environ.get("ANTHROPIC_MODEL", "claude-haiku-4-5-20251001"),
Contributor (review comment, medium): Hardcoding a specific dated model version like `claude-haiku-4-5-20251001` as a default can lead to maintenance issues when the model is deprecated or updated. Consider using a more stable alias (e.g., `claude-3-5-haiku-latest`) or requiring the user to provide the model via an environment variable without a hardcoded default.
temperature=0,
)
if os.environ.get("OPENAI_API_KEY"):
from langchain_openai import ChatOpenAI

kwargs: dict[str, Any] = {
"model": os.environ.get("OPENAI_MODEL", "gpt-4o-mini"),
"api_key": os.environ["OPENAI_API_KEY"],
"temperature": 0,
}
if os.environ.get("OPENAI_API_BASE"):
kwargs["base_url"] = os.environ["OPENAI_API_BASE"]
return ChatOpenAI(**kwargs)
print(
"Error: set DEEPSEEK_API_KEY, ANTHROPIC_API_KEY, or OPENAI_API_KEY.",
file=sys.stderr,
)
sys.exit(1)


def main() -> None:
parser = argparse.ArgumentParser(description="Deep Agents + AgentcubeSandbox")
parser.add_argument(
"--interpreter",
default=os.environ.get("CODE_INTERPRETER_NAME", "my-interpreter"),
)
parser.add_argument(
"--namespace",
default=os.environ.get("AGENTCUBE_NAMESPACE", "default"),
)
parser.add_argument(
"--prompt",
default="Write and run a Python script to print 'Hello, World!'",
)
args = parser.parse_args()

_require_env("WORKLOAD_MANAGER_URL")
_require_env("ROUTER_URL")
sys.path.insert(0, str(Path(__file__).resolve().parents[3] / "sdk-python"))

from agentcube import CodeInterpreterClient
from deepagents import create_deep_agent
from langchain_agentcube import AgentcubeSandbox

model = _make_chat_model()
with CodeInterpreterClient(
name=args.interpreter,
namespace=args.namespace,
router_url=os.environ["ROUTER_URL"],
workload_manager_url=os.environ["WORKLOAD_MANAGER_URL"],
auth_token=os.environ.get("API_TOKEN"),
) as client:
agent = create_deep_agent(
model=model,
system_prompt="You are a coding assistant with sandbox access.",
backend=AgentcubeSandbox(client=client),
)
try:
result = agent.invoke(
{"messages": [{"role": "user", "content": args.prompt}]},
)
except Exception as e:
print(f"Error: {e}", file=sys.stderr)
sys.exit(1)
print(result)


if __name__ == "__main__":
main()
@@ -0,0 +1,8 @@
# Copyright The Volcano Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.

from langchain_agentcube.sandbox import AgentcubeSandbox

__all__ = ["AgentcubeSandbox"]
117 changes: 117 additions & 0 deletions integrations/langchain-agentcube/langchain_agentcube/sandbox.py
Member (review comment): I am thinking about contributing this to the upstream LangChain organization, if possible.

Contributor (author): About what? The AgentCube Code Interpreter?

Member: This implementation; you can see AWS AgentCore is already there.

@@ -0,0 +1,117 @@
# Copyright The Volcano Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.

"""AgentCube Code Interpreter as a Deep Agents ``BaseSandbox`` backend."""

from __future__ import annotations

import os
import tempfile
from typing import TYPE_CHECKING

from deepagents.backends.protocol import (
ExecuteResponse,
FileDownloadResponse,
FileUploadResponse,
)
from deepagents.backends.sandbox import BaseSandbox

if TYPE_CHECKING:
from agentcube import CodeInterpreterClient


def _normalize_remote_path(path: str) -> str:
"""Map paths from Deep Agents (often absolute) to session workspace-relative paths."""
return path.replace("\\", "/").strip().lstrip("/")


class AgentcubeSandbox(BaseSandbox):
"""Wraps :class:`~agentcube.CodeInterpreterClient` for ``create_deep_agent(..., backend=...)``."""

def __init__(
self,
*,
client: CodeInterpreterClient,
default_timeout: int | None = 30 * 60,
) -> None:
self._client = client
self._default_timeout = default_timeout

@property
def id(self) -> str:
sid = self._client.session_id
return sid if sid else "agentcube-unknown"

def execute(
self,
command: str,
*,
timeout: int | None = None,
) -> ExecuteResponse:
eff = timeout if timeout is not None else self._default_timeout
to = float(eff) if eff is not None else None
r = self._client.execute_command_result(command, timeout=to)
out = r.get("stdout") or ""
stderr = (r.get("stderr") or "").strip()
if stderr:
out += f"\n<stderr>{stderr}</stderr>"
Contributor (review comment, medium): Appending stderr with a leading newline can result in unexpected output formatting if stdout is empty (the result will start with a newline). Consider checking whether `out` is empty before adding the newline.

Suggested change:
-            out += f"\n<stderr>{stderr}</stderr>"
+            out = f"{out}\n<stderr>{stderr}</stderr>" if out else f"<stderr>{stderr}</stderr>"
return ExecuteResponse(
output=out,
exit_code=int(r.get("exit_code", -1)),
truncated=False,
)

def upload_files(self, files: list[tuple[str, bytes]]) -> list[FileUploadResponse]:
responses: list[FileUploadResponse] = []
for path, content in files:
rel = _normalize_remote_path(path)
if not rel:
responses.append(FileUploadResponse(path=path, error="invalid_path"))
continue
tmp_path: str | None = None
try:
fd, tmp_path = tempfile.mkstemp(prefix="agentcube-upload-", suffix=".bin")
with os.fdopen(fd, "wb") as f:
f.write(content)
self._client.upload_file(tmp_path, rel)
responses.append(FileUploadResponse(path=path, error=None))
except Exception as e:
responses.append(FileUploadResponse(path=path, error=str(e)))
finally:
if tmp_path:
try:
os.unlink(tmp_path)
except OSError:
pass
return responses

def download_files(self, paths: list[str]) -> list[FileDownloadResponse]:
responses: list[FileDownloadResponse] = []
for path in paths:
rel = _normalize_remote_path(path)
if not rel:
responses.append(
FileDownloadResponse(path=path, content=None, error="invalid_path")
)
continue
fd, tmp_path = tempfile.mkstemp(prefix="agentcube-dl-", suffix=".bin")
os.close(fd)
try:
try:
self._client.download_file(rel, tmp_path)
except Exception as e: # noqa: BLE001
responses.append(
FileDownloadResponse(path=path, content=None, error=str(e))
)
continue
with open(tmp_path, "rb") as f:
data = f.read()
responses.append(FileDownloadResponse(path=path, content=data, error=None))
finally:
try:
os.unlink(tmp_path)
except OSError:
pass
return responses
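The two pure behaviors in `sandbox.py` above can be exercised without a cluster. Below is a standalone sketch: `normalize_remote_path` mirrors the module's helper, and `format_execute_output` is a hypothetical name for the stderr-tagging logic in `execute`, with the empty-stdout guard suggested in review applied.

```python
# Illustration only: standalone re-implementations of two behaviors from
# sandbox.py; format_execute_output is a hypothetical helper name.

def normalize_remote_path(path: str) -> str:
    # Backslashes become forward slashes; leading slashes are stripped so the
    # result is relative to the session workspace.
    return path.replace("\\", "/").strip().lstrip("/")


def format_execute_output(result: dict) -> str:
    # stderr, when present, is appended to stdout inside a <stderr> tag,
    # without a leading newline when stdout is empty.
    out = result.get("stdout") or ""
    stderr = (result.get("stderr") or "").strip()
    if stderr:
        out = f"{out}\n<stderr>{stderr}</stderr>" if out else f"<stderr>{stderr}</stderr>"
    return out


print(normalize_remote_path("/workspace/data.csv"))  # workspace/data.csv
print(format_execute_output({"stdout": "ok", "stderr": "warning: slow"}))
```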
24 changes: 24 additions & 0 deletions integrations/langchain-agentcube/pyproject.toml
Original file line number Diff line number Diff line change
@@ -0,0 +1,24 @@
# Copyright The Volcano Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.

[build-system]
requires = ["setuptools>=77.0", "wheel"]  # >=77 needed for the PEP 639 string form of "license" below
build-backend = "setuptools.build_meta"

[project]
name = "langchain-agentcube"
version = "0.1.0"
description = "LangChain Deep Agents sandbox backend for AgentCube Code Interpreter"
readme = "README.md"
license = "Apache-2.0"
requires-python = ">=3.11"
dependencies = [
"deepagents>=0.5.0,<0.6",
"agentcube-sdk>=0.1.0",
]

[tool.setuptools.packages.find]
where = ["."]
include = ["langchain_agentcube*"]