docs/en/concepts/llms.mdx
```
</Accordion>

<Accordion title="Oracle Cloud Infrastructure (OCI)">
CrewAI provides native integration with OCI Generative AI, covering generic chat models, OCI-hosted OpenAI, Google, and Meta models, and dedicated inference endpoints.

**Recommended Cohere Models (verified in OCI on March 10, 2026):**
- `cohere.command-a-reasoning` for reasoning-heavy text workflows
- `cohere.command-a-03-2025` for general text generation and streaming
- `cohere.command-a-vision` for the top Cohere multimodal tier once OCI Cohere vision formatting is enabled in CrewAI

**Recommended Regions for Cohere in OCI:**
- Choose any subscribed OCI region where the target model is available
- Common examples include `us-chicago-1`, `us-ashburn-1`, `uk-london-1`, and `eu-paris-1`

```toml Code
# Required
OCI_COMPARTMENT_ID=ocid1.compartment.oc1..exampleuniqueID

# Optional when not passing service_endpoint directly
OCI_REGION=<your-oci-region>

# Authentication options
OCI_AUTH_TYPE=API_KEY
OCI_AUTH_PROFILE=DEFAULT
OCI_AUTH_FILE_LOCATION=~/.oci/config

# Optional explicit endpoint override
OCI_SERVICE_ENDPOINT=https://inference.generativeai.<your-oci-region>.oci.oraclecloud.com
```

**Basic Usage:**
```python Code
from crewai import LLM

llm = LLM(
    model="oci/cohere.command-a-reasoning",
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    auth_type="API_KEY",
    auth_profile="DEFAULT",
    temperature=0,
    max_tokens=512,
)
```

**Provider Routing Examples:**
```python Code
from crewai import LLM

meta_llm = LLM(
    model="oci/meta.llama-3.3-70b-instruct",
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
)

gemini_llm = LLM(
    model="oci/google.gemini-2.5-flash",
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
)

openai_llm = LLM(
    model="oci/openai.gpt-4o-mini",
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
)
```

**Async Usage:**
```python Code
import asyncio
from crewai import LLM

llm = LLM(
    model="oci/cohere.command-a-03-2025",
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
)

async def main():
    result = await llm.acall("Summarize Oracle Cloud in one sentence.")
    print(result)

asyncio.run(main())
```

**Streaming Usage:**
```python Code
from crewai import LLM

llm = LLM(
    model="oci/cohere.command-a-03-2025",
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    stream=True,
    temperature=0,
)

result = llm.call("Reply with exactly three words about Oracle Cloud.")
print(result)
```

**Multimodal Usage:**
```python Code
from crewai import LLM
from crewai_files import ImageFile

llm = LLM(
    model="oci/google.gemini-2.5-flash",
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
)

response = llm.call(
    [
        {
            "role": "user",
            "content": "Describe this architecture diagram.",
            "files": {
                "diagram": ImageFile(source="./architecture.png"),
            },
        }
    ]
)

print(response)
```

**Structured Outputs:**
```python Code
from pydantic import BaseModel
from crewai import LLM

class OCIAnswer(BaseModel):
    topic: str
    summary: str

llm = LLM(
    model="oci/cohere.command-a-reasoning",
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    temperature=0,
)

response = llm.call(
    "Return a short JSON summary about Oracle Cloud.",
    response_model=OCIAnswer,
)
print(response.summary)
```

**Agent + Tool Usage:**
```python Code
from crewai import Agent, LLM
from crewai.tools import tool

@tool
def add_numbers(a: int, b: int) -> int:
    """Add two numbers and return the sum."""
    return a + b

agent = Agent(
    role="OCI Calculator",
    goal="Use tools to solve arithmetic problems",
    backstory="You are a precise calculator assistant.",
    llm=LLM(
        model="oci/cohere.command-a-03-2025",
        compartment_id="ocid1.compartment.oc1..exampleuniqueID",
        temperature=0,
    ),
    tools=[add_numbers],
    verbose=True,
)

result = agent.kickoff(
    "Use the add_numbers tool to calculate 15 + 27. Return only the final result."
)
print(result.raw)
```

**Dedicated Endpoint Usage:**
```python Code
from crewai import LLM

llm = LLM(
    model="oci/ocid1.generativeaiendpoint.oc1.<your-oci-region>.exampleuniqueID",
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
)
```

**Supported Authentication Types:**
- `API_KEY`
- `SECURITY_TOKEN`
- `INSTANCE_PRINCIPAL`
- `RESOURCE_PRINCIPAL`
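
For example, on an OCI compute instance you can switch to instance-principal auth through environment variables alone (a sketch; in this mode no local key file or config profile is needed):

```toml Code
OCI_COMPARTMENT_ID=ocid1.compartment.oc1..exampleuniqueID
OCI_AUTH_TYPE=INSTANCE_PRINCIPAL
```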

**Features:**
- Native function calling support
- Structured outputs with JSON schema response formats
- Async `acall()` support
- Streaming responses for OCI chat models
- Multimodal generic chat inputs for text, images, documents, video, and audio
- Stop sequences support
- Dedicated endpoint routing
- Generic chat support for OCI-hosted OpenAI, Google, and Meta models
- Cohere-specific OCI chat formatting for text models
- OCI embeddings in CrewAI RAG via the `oci` embedding provider
- CrewAI-managed OCI retrieval via [`OCIKnowledgeBaseTool`](/en/tools/cloud-storage/ociknowledgebasetool)

**Current Limitations:**
- Cohere multimodal and vision-specific formatting is not implemented yet

**`langchain-oci` Sample Coverage in CrewAI:**
- `01-getting-started`: native OCI auth, provider routing, and streaming examples above
- `02-vision-and-multimodal`: supported through CrewAI file attachments on OCI multimodal models
- `03-building-ai-agents`: maps directly to CrewAI `Agent` usage with an OCI-backed `LLM`
- `04-tool-calling-mastery`: supported through CrewAI tools and OCI native function calling
- `05-structured-output`: supported through `response_model`
- `07-async-for-production`: supported through `acall()`
- `09-provider-deep-dive`: covered by provider-prefixed model routing and dedicated endpoint support
- `10-embeddings`: covered by the OCI embedding provider for text and image embeddings, plus [`OCIKnowledgeBaseTool`](/en/tools/cloud-storage/ociknowledgebasetool) for CrewAI-managed retrieval

**Install:**
```bash
uv add "crewai[oci]"
```
</Accordion>

<Accordion title="Amazon SageMaker">
```toml Code
AWS_ACCESS_KEY_ID=<your-access-key>
docs/en/tools/cloud-storage/ociknowledgebasetool.mdx
---
title: "OCI Knowledge Base Tool"
description: "Build and query a CrewAI-managed knowledge base powered by OCI embeddings"
icon: "database"
mode: "wide"
---

# `OCIKnowledgeBaseTool`

The `OCIKnowledgeBaseTool` gives your agents a CrewAI-managed retrieval tool powered by OCI embedding models. It uses CrewAI's native `RagTool` stack and defaults to OCI embeddings, so you can load documents, directories, or URLs and query them semantically from your crews.

This is the closest CrewAI-native OCI equivalent to the Bedrock knowledge-base workflow. Unlike Amazon Bedrock Knowledge Bases, the index is managed inside CrewAI's RAG system rather than an OCI managed retrieval service.

## Installation

```bash
uv add "crewai[oci]"
uv add "crewai-tools[oci]"
```

## Example

```python
from crewai import Agent
from crewai_tools import OCIKnowledgeBaseTool

kb_tool = OCIKnowledgeBaseTool(
    knowledge_source="./oracle-architecture.pdf",
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    region="<your-oci-region>",
)

agent = Agent(
    role="OCI Research Analyst",
    goal="Answer architecture questions with the indexed OCI knowledge base",
    tools=[kb_tool],
    verbose=True,
)
```

## Add Sources Dynamically

```python
from crewai_tools import OCIKnowledgeBaseTool

kb_tool = OCIKnowledgeBaseTool(
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    region="<your-oci-region>",
)

kb_tool.add("./runbooks/networking.md")
kb_tool.add("./runbooks/security/")
kb_tool.add("https://docs.oracle.com/en-us/iaas/Content/home.htm")
```

## Configuration

The tool defaults to this embedding configuration:

```python
{
    "embedding_model": {
        "provider": "oci",
        "config": {
            "model_name": "cohere.embed-english-v3.0",
            "compartment_id": "ocid1.compartment.oc1..exampleuniqueID",
            "region": "<your-oci-region>",
            "auth_type": "API_KEY",
            "auth_profile": "DEFAULT",
            "auth_file_location": "~/.oci/config",
        },
    }
}
```

You can override the vector database layer with standard `RagTool` config:

```python
from crewai_tools import OCIKnowledgeBaseTool

kb_tool = OCIKnowledgeBaseTool(
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    config={
        "vectordb": {
            "provider": "qdrant",
            "config": {
                "url": "http://localhost:6333",
                "api_key": "qdrant-key",
            },
        }
    },
)
```

## Environment Variables

```bash
OCI_COMPARTMENT_ID=ocid1.compartment.oc1..exampleuniqueID
OCI_REGION=<your-oci-region>
OCI_AUTH_TYPE=API_KEY
OCI_AUTH_PROFILE=DEFAULT
OCI_AUTH_FILE_LOCATION=~/.oci/config
OCI_EMBED_MODEL=cohere.embed-english-v3.0
```

## Notes

- Uses CrewAI's native RAG stack, not an OCI managed knowledge-base service
- Supports any source type that `RagTool` can ingest
- Defaults to OCI embeddings, but you can still override the `config` field for advanced vector store configuration
- For direct OCI text or image embedding workflows with `cohere.embed-v4.0`, use the CrewAI OCI embedding provider outside of `OCIKnowledgeBaseTool`
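
As a sketch, such a standalone embedding configuration might reuse the same field names as the default configuration above (assumptions: the exact fields accepted by the embedding provider, and `cohere.embed-v4.0` availability in your region):

```python
# Hypothetical standalone embedder config; field names mirror the
# default OCIKnowledgeBaseTool configuration, not a verified schema.
embedder_config = {
    "provider": "oci",
    "config": {
        "model_name": "cohere.embed-v4.0",
        "compartment_id": "ocid1.compartment.oc1..exampleuniqueID",
        "region": "<your-oci-region>",
        "auth_type": "API_KEY",
    },
}
print(embedder_config["config"]["model_name"])
```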
docs/en/tools/cloud-storage/ociobjectstoragereadertool.mdx
---
title: "OCI Object Storage Reader Tool"
description: "Read files from Oracle Cloud Infrastructure Object Storage"
icon: "cloud"
mode: "wide"
---

# `OCIObjectStorageReaderTool`

Use `OCIObjectStorageReaderTool` to read text files from Oracle Cloud Infrastructure Object Storage inside a CrewAI workflow.

## Installation

```bash
uv pip install 'crewai-tools[oci]'
```

## Usage

```python
from crewai import Agent
from crewai_tools import OCIObjectStorageReaderTool

oci_reader = OCIObjectStorageReaderTool(namespace_name="my-namespace")

agent = Agent(
    role="Cloud Reader",
    goal="Fetch cloud-hosted files",
    tools=[oci_reader],
)
```

## Path Formats

- `oci://bucket/path/to/file.txt`
- `oci://namespace@bucket/path/to/file.txt`

If the namespace is omitted from the path, the tool falls back to the `namespace_name` argument, then the `OCI_OBJECT_STORAGE_NAMESPACE` environment variable, and finally fetches the namespace from OCI automatically.
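
The two path formats and the namespace fallback can be illustrated with a small parser sketch (`parse_oci_path` is a hypothetical helper for illustration, not the tool's actual internals):

```python
from urllib.parse import urlparse

def parse_oci_path(path, default_namespace=None):
    """Split oci://[namespace@]bucket/key into (namespace, bucket, key).

    Illustrates the documented path formats; the real
    OCIObjectStorageReaderTool may resolve paths differently.
    """
    parsed = urlparse(path)
    if parsed.scheme != "oci":
        raise ValueError(f"expected an oci:// path, got {path!r}")
    if "@" in parsed.netloc:
        # Explicit namespace embedded in the path
        namespace, bucket = parsed.netloc.split("@", 1)
    else:
        # Fall back to the configured namespace
        namespace, bucket = default_namespace, parsed.netloc
    return namespace, bucket, parsed.path.lstrip("/")

print(parse_oci_path("oci://my-namespace@bucket/path/to/file.txt"))
print(parse_oci_path("oci://bucket/path/to/file.txt", default_namespace="my-namespace"))
```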