52 changes: 52 additions & 0 deletions docs/executors.md
@@ -193,6 +193,58 @@ config = ContainerConfig(
- **Resource limits** - CPU, memory, disk quotas
- **Clean state** - Each execution in fresh container

### Remote Session Server Mode

`ContainerExecutor` can also connect to an existing session server instead of starting a
local container itself:

```python
from py_code_mode import RedisStorage, Session
from py_code_mode.execution import ContainerExecutor

storage = RedisStorage(
url="redis://localhost:6379",
prefix="production",
workspace_id="workspace-123",
)

executor = ContainerExecutor(remote_url="http://session-server:8000")

async with Session(storage=storage, executor=executor) as session:
result = await session.run(agent_code)
```

In remote mode:

- the host storage backend supplies `workspace_id`
- the server issues the execution `session_id`
- workflows, artifacts, and workflow search are scoped to that workspace

The executor binds the session by calling `POST /sessions` and then sends the returned
session ID on subsequent execution, workflow, artifact, and info requests via
`X-Session-ID`.

Multiple sessions using the same `workspace_id` share storage state. Different
`workspace_id` values are isolated from each other.

If `workspace_id` is omitted, the remote server uses the legacy default namespace for
backward compatibility. This is one shared unscoped namespace, not access to all
workspaces.

### Remote Storage Requirements

Remote mode sends only workspace identity. The session server must be configured with
server-owned storage roots so it can rebuild workspace-scoped storage internally.

Relevant server config fields:

- `storage_base_path`: base directory for file-backed workspace storage
- `storage_prefix`: Redis prefix for Redis-backed workspace storage

The host storage and the remote server must refer to the same logical backing store.
For true remote deployments, Redis-backed storage is recommended because both sides can
share the same namespace cleanly.
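As an illustration of this split, the server owns the storage roots while the client
sends only workspace identity. The dict shapes below are assumptions for illustration;
only the field names `storage_base_path` and `storage_prefix` come from the config list
above:

```python
# Server-side storage roots (hypothetical config shape):
server_config = {
    "storage_prefix": "production",       # must match the host RedisStorage prefix
    "storage_base_path": "/var/lib/pcm",  # must match the host FileStorage base path
}

# The client never ships storage roots: it sends workspace identity only,
# and the server rebuilds workspace-scoped storage from its own roots.
client_payload = {"workspace_id": "workspace-123"}
```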

### Configuration Options

```python
26 changes: 24 additions & 2 deletions docs/production.md
@@ -94,16 +94,21 @@ else:

### 5. Isolate Storage by Tenant

Use a stable environment prefix plus `workspace_id` for multi-tenant deployments:

```python
def get_storage(tenant_id: str, redis_url: str) -> RedisStorage:
    return RedisStorage(
        url=redis_url,
        prefix="production",
        workspace_id=tenant_id,
    )
```

If `workspace_id` is omitted, the system uses the legacy default namespace. That is one
shared unscoped namespace, so multi-tenant deployments should set `workspace_id`
explicitly.

---

## Scalability Patterns
@@ -140,6 +145,23 @@ async def handle_request(agent_code: str, tenant_id: str):

Load balancer distributes requests across instances.

### Remote Session Servers

For remote `ContainerExecutor(remote_url=...)` deployments:

- the client provides `workspace_id` through the storage backend
- the session server creates an execution `session_id`
- workflow/artifact isolation is enforced by the server's workspace-scoped storage bundle

Configure the session server with server-owned storage roots:

- `storage_base_path` for file-backed storage
- `storage_prefix` for Redis-backed storage

The host storage configuration and the remote session server must refer to the same
logical backing store. In practice, Redis-backed storage is the recommended production
topology for remote deployments because both sides can share one namespace directly.

---

## Container Image Management
50 changes: 49 additions & 1 deletion docs/storage.md
@@ -81,6 +81,29 @@ RedisStorage(
)
```

### Workspace Scoping

Both storage backends accept an optional `workspace_id`:

```python
from pathlib import Path

from py_code_mode import FileStorage, RedisStorage

file_storage = FileStorage(base_path=Path("./data"), workspace_id="client-a")
redis_storage = RedisStorage(
url="redis://localhost:6379",
prefix="production",
workspace_id="client-a",
)
```

When `workspace_id` is set, workflows, artifacts, and vector caches are scoped to that
workspace and shared with any other session that uses the same ID.

When `workspace_id` is omitted, storage uses the legacy unscoped namespace. This is one
shared default namespace, **not** access to all workspaces.

---

## One Agent Learns, All Agents Benefit
@@ -125,6 +148,17 @@ tenant_a_storage = RedisStorage(url="redis://localhost:6379", prefix="tenant-a")
tenant_b_storage = RedisStorage(url="redis://localhost:6379", prefix="tenant-b")
```

For multi-tenant systems inside one environment, prefer a stable app-level prefix plus
per-session `workspace_id` values:

```python
storage = RedisStorage(
url="redis://localhost:6379",
prefix="production",
workspace_id="client-a",
)
```

---

## Migrating Between Storage Backends
@@ -199,10 +233,24 @@ python -m py_code_mode.store diff \

### Multi-Tenant

- Use a stable environment prefix (for example `prod`) plus `workspace_id` per tenant or campaign
- Consider separate Redis instances for hard isolation
- Monitor Redis memory usage

### Remote Session Servers

When using `ContainerExecutor(remote_url=...)`, the host storage object and the remote
session server must point at the same logical backing store:

- file-backed remote mode: host `FileStorage(...)` should correspond to the server's
`storage_base_path`
- Redis-backed remote mode: host `RedisStorage(prefix=..., workspace_id=...)` should
correspond to the server's `storage_prefix`

For true remote deployments, `RedisStorage` is usually the simplest and safest option
because both the host process and the remote session server can share the same Redis
namespace directly.
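Deployment glue code can encode the pairing rule above as a startup check.
`verify_pairing` is a hypothetical helper, not part of py_code_mode:

```python
def verify_pairing(host: dict, server: dict) -> None:
    # Redis-backed mode: host prefix must name the same namespace as the
    # server's storage_prefix.
    if host.get("prefix") is not None:
        assert host["prefix"] == server.get("storage_prefix"), "Redis prefix mismatch"
    # File-backed mode: host base_path must match the server's storage_base_path.
    if host.get("base_path") is not None:
        assert host["base_path"] == server.get("storage_base_path"), "base path mismatch"

# A matching Redis-backed topology passes silently:
verify_pairing({"prefix": "production"}, {"storage_prefix": "production"})
```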

### Workflow Lifecycle

```python
12 changes: 8 additions & 4 deletions src/py_code_mode/bootstrap.py
@@ -52,8 +52,10 @@ async def bootstrap_namespaces(config: dict[str, Any]) -> NamespaceBundle:

Args:
config: Dict with "type" key ("file" or "redis") and type-specific fields.
- For "file": {"type": "file", "base_path": str, "workspace_id": str|None,
"tools_path": str|None}
- For "redis": {"type": "redis", "url": str, "prefix": str,
"workspace_id": str|None,
"tools_path": str|None}
- tools_path is optional; if provided, tools load from that directory

@@ -128,7 +130,8 @@ async def _bootstrap_file_storage(config: dict[str, Any]) -> NamespaceBundle:
from py_code_mode.storage import FileStorage

base_path = Path(config["base_path"])
workspace_id = config.get("workspace_id")
storage = FileStorage(base_path, workspace_id=workspace_id)

tools_ns = await _load_tools_namespace(config.get("tools_path"))
artifact_store = storage.get_artifact_store()
@@ -159,15 +162,16 @@ async def _bootstrap_redis_storage(config: dict[str, Any]) -> NamespaceBundle:

url = config["url"]
prefix = config["prefix"]
workspace_id = config.get("workspace_id")

# Connect to Redis
storage = RedisStorage(url=url, prefix=prefix, workspace_id=workspace_id)

tools_ns = await _load_tools_namespace(config.get("tools_path"))
artifact_store = storage.get_artifact_store()

# Create deps namespace
deps_store = RedisDepsStore(storage.client, prefix=prefix)
installer = PackageInstaller()
deps_ns = DepsNamespace(deps_store, installer)
