diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 4d0d8df90d0..565c53e5609 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -164,10 +164,14 @@ Access the application at [http://localhost:3000/](http://localhost:3000/) To use local models with Sim: -1. Pull models using our helper script: +1. Install Ollama and pull models: ```bash -./apps/sim/scripts/ollama_docker.sh pull +# Install Ollama (if not already installed) +curl -fsSL https://ollama.ai/install.sh | sh + +# Pull a model (e.g., gemma3:4b) +ollama pull gemma3:4b ``` 2. Start Sim with local model support: @@ -533,7 +537,7 @@ This visibility system ensures clean user interfaces while maintaining full flex ### Guidelines & Best Practices -- **Code Style:** Follow the project's ESLint and Prettier configurations. Use meaningful variable names and small, focused functions. +- **Code Style:** Follow the project's Biome configurations. Use meaningful variable names and small, focused functions. - **Documentation:** Clearly document the purpose, inputs, outputs, and any special behavior for your block/tool. - **Error Handling:** Implement robust error handling and provide user-friendly error messages. - **Parameter Visibility:** Always specify the appropriate visibility level for each parameter to ensure proper UI behavior and LLM integration. diff --git a/README.md b/README.md index f9855815e9f..be3b0ec97cf 100644 --- a/README.md +++ b/README.md @@ -59,27 +59,21 @@ docker compose -f docker-compose.prod.yml up -d Access the application at [http://localhost:3000/](http://localhost:3000/) -#### Using Local Models +#### Using Local Models with Ollama -To use local models with Sim: - -1. 
Pull models using our helper script: +Run Sim with local AI models using [Ollama](https://ollama.ai) - no external APIs required: ```bash -./apps/sim/scripts/ollama_docker.sh pull -``` +# Start with GPU support (automatically downloads gemma3:4b model) +docker compose -f docker-compose.ollama.yml --profile setup up -d -2. Start Sim with local model support: +# For CPU-only systems: +docker compose -f docker-compose.ollama.yml --profile cpu --profile setup up -d +``` +Wait for the model to download, then visit [http://localhost:3000](http://localhost:3000). Add more models with: ```bash -# With NVIDIA GPU support -docker compose --profile local-gpu -f docker-compose.ollama.yml up -d - -# Without GPU (CPU only) -docker compose --profile local-cpu -f docker-compose.ollama.yml up -d - -# If hosting on a server, update the environment variables in the docker-compose.prod.yml file to include the server's public IP then start again (OLLAMA_URL to i.e. http://1.1.1.1:11434) -docker compose -f docker-compose.prod.yml up -d +docker compose -f docker-compose.ollama.yml exec ollama ollama pull llama3.1:8b ``` ### Option 3: Dev Containers diff --git a/apps/docs/content/docs/blocks/agent.mdx b/apps/docs/content/docs/blocks/agent.mdx index b5b02ff7688..42c3ea9bc5f 100644 --- a/apps/docs/content/docs/blocks/agent.mdx +++ b/apps/docs/content/docs/blocks/agent.mdx @@ -61,7 +61,7 @@ The user prompt represents the primary input data for inference processing. This The Agent block supports multiple LLM providers through a unified inference interface. 
Available models include: -**OpenAI Models**: GPT-4o, o1, o3, o4-mini, gpt-4.1 (API-based inference) +**OpenAI Models**: GPT-5, GPT-4o, o1, o3, o4-mini, gpt-4.1 (API-based inference) **Anthropic Models**: Claude 3.7 Sonnet (API-based inference) **Google Models**: Gemini 2.5 Pro, Gemini 2.0 Flash (API-based inference) **Alternative Providers**: Groq, Cerebras, xAI, DeepSeek (API-based inference) diff --git a/apps/docs/content/docs/connections/data-structure.mdx b/apps/docs/content/docs/connections/data-structure.mdx index 591d364fea4..590b94483e3 100644 --- a/apps/docs/content/docs/connections/data-structure.mdx +++ b/apps/docs/content/docs/connections/data-structure.mdx @@ -84,7 +84,7 @@ Different block types produce different output structures. Here's what you can e ```json { "content": "Evaluation summary", - "model": "gpt-4o", + "model": "gpt-5", "tokens": { "prompt": 120, "completion": 85, diff --git a/apps/docs/content/docs/tools/file.mdx b/apps/docs/content/docs/tools/file.mdx index f7afb49c4e9..88a3fecf8eb 100644 --- a/apps/docs/content/docs/tools/file.mdx +++ b/apps/docs/content/docs/tools/file.mdx @@ -50,7 +50,7 @@ The File Parser tool is particularly useful for scenarios where your agents need ## Usage Instructions -Upload and extract contents from structured file formats including PDFs, CSV spreadsheets, and Word documents (DOCX). Upload files directly. Specialized parsers extract text and metadata from each format. You can upload multiple files at once and access them individually or as a combined document. +Upload and extract contents from structured file formats including PDFs, CSV spreadsheets, and Word documents (DOCX). You can either provide a URL to a file or upload files directly. Specialized parsers extract text and metadata from each format. You can upload multiple files at once and access them individually or as a combined document. 
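The block-output JSON documented above can be consumed by downstream blocks field by field. A minimal sketch of that shape, assuming the token object completes as prompt + completion (the `total` field is inferred, not shown in the excerpt):

```typescript
// Assumed shape of the agent/evaluator block output documented above.
// `total` is inferred as prompt + completion; verify against your block's real output.
interface BlockOutput {
  content: string
  model: string
  tokens: { prompt: number; completion: number; total: number }
}

const output: BlockOutput = {
  content: 'Evaluation summary',
  model: 'gpt-5',
  tokens: { prompt: 120, completion: 85, total: 205 },
}

// A downstream block might branch on token usage:
const totalTokens = output.tokens.prompt + output.tokens.completion
console.log(totalTokens) // 205
```

Referencing nested fields like `tokens.total` rather than the whole object keeps downstream connections explicit about what they depend on.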
diff --git a/apps/docs/content/docs/tools/hunter.mdx b/apps/docs/content/docs/tools/hunter.mdx index 6ac7e166f25..f7f2c17fa29 100644 --- a/apps/docs/content/docs/tools/hunter.mdx +++ b/apps/docs/content/docs/tools/hunter.mdx @@ -38,6 +38,7 @@ With Hunter.io, you can: In Sim, the Hunter.io integration enables your agents to programmatically search for and verify email addresses, discover companies, and enrich contact data using Hunter.io’s API. This allows you to automate lead generation, contact enrichment, and email verification directly within your workflows. Your agents can leverage Hunter.io’s tools to streamline outreach, keep your CRM up-to-date, and power intelligent automation scenarios for sales, recruiting, and more. {/* MANUAL-CONTENT-END */} + ## Usage Instructions Search for email addresses, verify their deliverability, discover companies, and enrich contact data using Hunter.io's powerful email finding capabilities. diff --git a/apps/docs/content/docs/tools/knowledge.mdx b/apps/docs/content/docs/tools/knowledge.mdx index b42ce28fb9e..5a1b4e6b354 100644 --- a/apps/docs/content/docs/tools/knowledge.mdx +++ b/apps/docs/content/docs/tools/knowledge.mdx @@ -64,7 +64,7 @@ Search for similar content in a knowledge base using vector similarity | Parameter | Type | Required | Description | | --------- | ---- | -------- | ----------- | | `knowledgeBaseId` | string | Yes | ID of the knowledge base to search in | -| `query` | string | Yes | Search query text | +| `query` | string | No | Search query text \(optional when using tag filters\) | | `topK` | number | No | Number of most similar results to return \(1-100\) | | `tagFilters` | any | No | Array of tag filters with tagName and tagValue properties | diff --git a/apps/docs/content/docs/tools/meta.json b/apps/docs/content/docs/tools/meta.json index db0fc6c0753..b4ba8f9277f 100644 --- a/apps/docs/content/docs/tools/meta.json +++ b/apps/docs/content/docs/tools/meta.json @@ -29,9 +29,11 @@ "mem0", "memory", 
"microsoft_excel", + "microsoft_planner", "microsoft_teams", "mistral_parse", "notion", + "onedrive", "openai", "outlook", "perplexity", @@ -41,6 +43,7 @@ "s3", "schedule", "serper", + "sharepoint", "slack", "stagehand", "stagehand_agent", diff --git a/apps/docs/content/docs/tools/microsoft_planner.mdx b/apps/docs/content/docs/tools/microsoft_planner.mdx new file mode 100644 index 00000000000..5ae82298f35 --- /dev/null +++ b/apps/docs/content/docs/tools/microsoft_planner.mdx @@ -0,0 +1,178 @@ +--- +title: Microsoft Planner +description: Read and create tasks in Microsoft Planner +--- + +import { BlockInfoCard } from "@/components/ui/block-info-card" + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + `} +/> + +{/* MANUAL-CONTENT-START:intro */} +[Microsoft Planner](https://www.microsoft.com/en-us/microsoft-365/planner) is a task management tool that helps teams organize work visually using boards, tasks, and buckets. Integrated with Microsoft 365, it offers a simple, intuitive way to manage team projects, assign responsibilities, and track progress. + +With Microsoft Planner, you can: + +- **Create and manage tasks**: Add new tasks with due dates, priorities, and assigned users +- **Organize with buckets**: Group tasks by phase, status, or category to reflect your team’s workflow +- **Visualize project status**: Use boards, charts, and filters to monitor workload and track progress +- **Stay integrated with Microsoft 365**: Seamlessly connect tasks with Teams, Outlook, and other Microsoft tools + +In Sim, the Microsoft Planner integration allows your agents to programmatically create, read, and manage tasks as part of their workflows. Agents can generate new tasks based on incoming requests, retrieve task details to drive decisions, and track status across projects — all without human intervention. 
Whether you're building workflows for client onboarding, internal project tracking, or follow-up task generation, integrating Microsoft Planner with Sim gives your agents a structured way to coordinate work, automate task creation, and keep teams aligned. +{/* MANUAL-CONTENT-END */} + + +## Usage Instructions + +Integrate Microsoft Planner functionality to manage tasks. Read all user tasks, tasks from specific plans, individual tasks, or create new tasks with various properties like title, description, due date, and assignees using OAuth authentication. + + + +## Tools + +### `microsoft_planner_read_task` + +Read tasks from Microsoft Planner - get all user tasks or all tasks from a specific plan + +#### Input + +| Parameter | Type | Required | Description | +| --------- | ---- | -------- | ----------- | +| `accessToken` | string | Yes | The access token for the Microsoft Planner API | +| `planId` | string | No | The ID of the plan to get tasks from \(if not provided, gets all user tasks\) | +| `taskId` | string | No | The ID of the task to get | + +#### Output + +| Parameter | Type | Description | +| --------- | ---- | ----------- | +| `task` | json | The Microsoft Planner task object, including details such as id, title, description, status, due date, and assignees. | +| `metadata` | json | Additional metadata about the operation, such as timestamps, request status, or other relevant information. 
| + +### `microsoft_planner_create_task` + +Create a new task in Microsoft Planner + +#### Input + +| Parameter | Type | Required | Description | +| --------- | ---- | -------- | ----------- | +| `accessToken` | string | Yes | The access token for the Microsoft Planner API | +| `planId` | string | Yes | The ID of the plan where the task will be created | +| `title` | string | Yes | The title of the task | +| `description` | string | No | The description of the task | +| `dueDateTime` | string | No | The due date and time for the task \(ISO 8601 format\) | +| `assigneeUserId` | string | No | The user ID to assign the task to | +| `bucketId` | string | No | The bucket ID to place the task in | + +#### Output + +| Parameter | Type | Description | +| --------- | ---- | ----------- | +| `task` | json | The Microsoft Planner task object, including details such as id, title, description, status, due date, and assignees. | +| `metadata` | json | Additional metadata about the operation, such as timestamps, request status, or other relevant information. | + + + +## Notes + +- Category: `tools` +- Type: `microsoft_planner` diff --git a/apps/docs/content/docs/tools/mistral_parse.mdx b/apps/docs/content/docs/tools/mistral_parse.mdx index eb4700322ff..dbd5a0f72fc 100644 --- a/apps/docs/content/docs/tools/mistral_parse.mdx +++ b/apps/docs/content/docs/tools/mistral_parse.mdx @@ -79,7 +79,7 @@ The Mistral Parse tool is particularly useful for scenarios where your agents ne ## Usage Instructions -Extract text and structure from PDF documents using Mistral's OCR API. Configure processing options and get the content in your preferred format. For URLs, they must be publicly accessible and point to a valid PDF file. Note: Google Drive, Dropbox, and other cloud storage links are not supported; use a direct download URL from a web server instead. +Extract text and structure from PDF documents using Mistral's OCR API. Either enter a URL to a PDF document or upload a PDF file directly. 
Configure processing options and get the content in your preferred format. For URLs, they must be publicly accessible and point to a valid PDF file. Note: Google Drive, Dropbox, and other cloud storage links are not supported; use a direct download URL from a web server instead. diff --git a/apps/docs/content/docs/tools/onedrive.mdx b/apps/docs/content/docs/tools/onedrive.mdx new file mode 100644 index 00000000000..7a389f238ef --- /dev/null +++ b/apps/docs/content/docs/tools/onedrive.mdx @@ -0,0 +1,127 @@ +--- +title: OneDrive +description: Create, upload, and list files +--- + +import { BlockInfoCard } from "@/components/ui/block-info-card" + + + + + + + + + `} +/> + +{/* MANUAL-CONTENT-START:intro */} +[OneDrive](https://onedrive.live.com) is Microsoft’s cloud storage and file synchronization service that allows users to securely store, access, and share files across devices. Integrated deeply into the Microsoft 365 ecosystem, OneDrive supports seamless collaboration, version control, and real-time access to content across teams and organizations. + +Learn how to integrate the OneDrive tool in Sim to automatically pull, manage, and organize your cloud files within your workflows. This tutorial walks you through connecting OneDrive, setting up file access, and using stored content to power automation. Ideal for syncing essential documents and media with your agents in real time. 
+ +With OneDrive, you can: + +- **Store files securely in the cloud**: Upload and access documents, images, and other files from any device +- **Organize your content**: Create structured folders and manage file versions with ease +- **Collaborate in real time**: Share files, edit them simultaneously with others, and track changes +- **Access across devices**: Use OneDrive from desktop, mobile, and web platforms +- **Integrate with Microsoft 365**: Work seamlessly with Word, Excel, PowerPoint, and Teams +- **Control permissions**: Share files and folders with custom access settings and expiration controls + +In Sim, the OneDrive integration enables your agents to directly interact with your cloud storage. Agents can upload new files to specific folders, retrieve and read existing files, and list folder contents to dynamically organize and access information. This integration allows your agents to incorporate file operations into intelligent workflows — automating document intake, content analysis, and structured storage management. By connecting Sim with OneDrive, you empower your agents to manage and use cloud documents programmatically, eliminating manual steps and enhancing automation with secure, real-time file access. +{/* MANUAL-CONTENT-END */} + + +## Usage Instructions + +Integrate OneDrive functionality to manage files and folders. Upload new files, create new folders, and list contents of folders using OAuth authentication. Supports file operations with custom MIME types and folder organization. 
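Under the hood, tools like `onedrive_upload` call Microsoft Graph drive endpoints with the OAuth access token. As a rough sketch of the request shape — the URL follows Graph's documented simple-upload route, and whether Sim uses exactly this route internally is an assumption:

```typescript
// Sketch: build a Microsoft Graph simple-upload request for a small text file.
// The /me/drive/root:/{path}:/content route is Graph's documented simple upload;
// the access token placeholder stands in for the OAuth token Sim manages for you.
function buildUploadRequest(fileName: string, content: string, accessToken: string) {
  return {
    url: `https://graph.microsoft.com/v1.0/me/drive/root:/${encodeURIComponent(fileName)}:/content`,
    method: 'PUT' as const,
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'text/plain',
    },
    body: content,
  }
}

const req = buildUploadRequest('notes.txt', 'hello world', '<access-token>')
console.log(req.url)
```

Graph responds to this request with a DriveItem object (id, name, size, and so on), which matches the `file` output documented below.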
+ + + +## Tools + +### `onedrive_upload` + +Upload a file to OneDrive + +#### Input + +| Parameter | Type | Required | Description | +| --------- | ---- | -------- | ----------- | +| `accessToken` | string | Yes | The access token for the OneDrive API | +| `fileName` | string | Yes | The name of the file to upload | +| `content` | string | Yes | The content of the file to upload | +| `folderSelector` | string | No | Select the folder to upload the file to | +| `folderId` | string | No | The ID of the folder to upload the file to \(internal use\) | + +#### Output + +| Parameter | Type | Description | +| --------- | ---- | ----------- | +| `file` | json | The OneDrive file object, including details such as id, name, size, and more. | +| `files` | json | An array of OneDrive file objects, each containing details such as id, name, size, and more. | + +### `onedrive_create_folder` + +Create a new folder in OneDrive + +#### Input + +| Parameter | Type | Required | Description | +| --------- | ---- | -------- | ----------- | +| `accessToken` | string | Yes | The access token for the OneDrive API | +| `folderName` | string | Yes | Name of the folder to create | +| `folderSelector` | string | No | Select the parent folder to create the folder in | +| `folderId` | string | No | ID of the parent folder \(internal use\) | + +#### Output + +| Parameter | Type | Description | +| --------- | ---- | ----------- | +| `file` | json | The OneDrive file object, including details such as id, name, size, and more. | +| `files` | json | An array of OneDrive file objects, each containing details such as id, name, size, and more. 
| + +### `onedrive_list` + +List files and folders in OneDrive + +#### Input + +| Parameter | Type | Required | Description | +| --------- | ---- | -------- | ----------- | +| `accessToken` | string | Yes | The access token for the OneDrive API | +| `folderSelector` | string | No | Select the folder to list files from | +| `folderId` | string | No | The ID of the folder to list files from \(internal use\) | +| `query` | string | No | A query to filter the files | +| `pageSize` | number | No | The number of files to return | + +#### Output + +| Parameter | Type | Description | +| --------- | ---- | ----------- | +| `file` | json | The OneDrive file object, including details such as id, name, size, and more. | +| `files` | json | An array of OneDrive file objects, each containing details such as id, name, size, and more. | + + + +## Notes + +- Category: `tools` +- Type: `onedrive` diff --git a/apps/docs/content/docs/tools/outlook.mdx b/apps/docs/content/docs/tools/outlook.mdx index 6e1873b6462..1aefc7e0777 100644 --- a/apps/docs/content/docs/tools/outlook.mdx +++ b/apps/docs/content/docs/tools/outlook.mdx @@ -158,6 +158,10 @@ Send emails using Outlook | `to` | string | Yes | Recipient email address | | `subject` | string | Yes | Email subject | | `body` | string | Yes | Email body content | +| `replyToMessageId` | string | No | Message ID to reply to \(for threading\) | +| `conversationId` | string | No | Conversation ID for threading | +| `cc` | string | No | CC recipients \(comma-separated\) | +| `bcc` | string | No | BCC recipients \(comma-separated\) | #### Output diff --git a/apps/docs/content/docs/tools/sharepoint.mdx b/apps/docs/content/docs/tools/sharepoint.mdx new file mode 100644 index 00000000000..3c44e35104e --- /dev/null +++ b/apps/docs/content/docs/tools/sharepoint.mdx @@ -0,0 +1,135 @@ +--- +title: Sharepoint +description: Read and create pages +--- + +import { BlockInfoCard } from "@/components/ui/block-info-card" + + + + + + + + + + + + `} +/> + +{/* 
MANUAL-CONTENT-START:intro */} +[SharePoint](https://www.microsoft.com/en-us/microsoft-365/sharepoint/collaboration) is a collaborative platform from Microsoft that enables users to build and manage internal websites, share documents, and organize team resources. It provides a powerful, flexible solution for creating digital workspaces and streamlining content management across organizations. + +With SharePoint, you can: + +- **Create team and communication sites**: Set up pages and portals to support collaboration, announcements, and content distribution +- **Organize and share content**: Store documents, manage files, and enable version control with secure sharing capabilities +- **Customize pages**: Add text parts to tailor each site to your team's needs +- **Improve discoverability**: Use metadata, search, and navigation tools to help users quickly find what they need +- **Collaborate securely**: Control access with robust permission settings and Microsoft 365 integration + +In Sim, the SharePoint integration empowers your agents to create and access SharePoint sites and pages as part of their workflows. This enables automated document management, knowledge sharing, and workspace creation without manual effort. Agents can generate new project pages, upload or retrieve files, and organize resources dynamically, based on workflow inputs. By connecting Sim with SharePoint, you bring structured collaboration and content management into your automation flows — giving your agents the ability to coordinate team activities, surface key information, and maintain a single source of truth across your organization. +{/* MANUAL-CONTENT-END */} + + +## Usage Instructions + +Integrate Sharepoint functionality to manage pages. Read and create pages, and list sites using OAuth authentication. Supports page operations with custom MIME types and folder organization. 
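For orientation, a page-creation payload along the lines of what the Microsoft Graph site-pages API expects might look like the sketch below. How `sharepoint_create_page` actually maps its parameters (in particular `pageContent`) onto the Graph payload is not shown in this document, so treat the mapping as an assumption:

```typescript
// Sketch: map sharepoint_create_page-style inputs onto a Graph sitePage payload.
// The '@odata.type' value follows Graph's site-pages API; the pageContent mapping
// below is an illustrative assumption, not Sim's confirmed internal behavior.
function buildCreatePagePayload(pageName: string, pageTitle?: string, pageContent?: string) {
  return {
    '@odata.type': '#microsoft.graph.sitePage',
    name: pageName.endsWith('.aspx') ? pageName : `${pageName}.aspx`,
    title: pageTitle ?? pageName, // documented default: falls back to the page name
    ...(pageContent ? { description: pageContent } : {}),
  }
}

const payload = buildCreatePagePayload('weekly-status')
console.log(payload.name, payload.title)
```

Note the title fallback mirrors the documented behavior of `pageTitle` ("defaults to page name if not provided").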
+ + + +## Tools + +### `sharepoint_create_page` + +Create a new page in a SharePoint site + +#### Input + +| Parameter | Type | Required | Description | +| --------- | ---- | -------- | ----------- | +| `accessToken` | string | Yes | The access token for the SharePoint API | +| `siteId` | string | No | The ID of the SharePoint site \(internal use\) | +| `siteSelector` | string | No | Select the SharePoint site | +| `pageName` | string | Yes | The name of the page to create | +| `pageTitle` | string | No | The title of the page \(defaults to page name if not provided\) | +| `pageContent` | string | No | The content of the page | + +#### Output + +| Parameter | Type | Description | +| --------- | ---- | ----------- | +| `sites` | json | An array of SharePoint site objects, each containing details such as id, name, and more. | + +### `sharepoint_read_page` + +Read a specific page from a SharePoint site + +#### Input + +| Parameter | Type | Required | Description | +| --------- | ---- | -------- | ----------- | +| `accessToken` | string | Yes | The access token for the SharePoint API | +| `siteSelector` | string | No | Select the SharePoint site | +| `siteId` | string | No | The ID of the SharePoint site \(internal use\) | +| `pageId` | string | No | The ID of the page to read | +| `pageName` | string | No | The name of the page to read \(alternative to pageId\) | +| `maxPages` | number | No | Maximum number of pages to return when listing all pages \(default: 10, max: 50\) | + +#### Output + +| Parameter | Type | Description | +| --------- | ---- | ----------- | +| `sites` | json | An array of SharePoint site objects, each containing details such as id, name, and more. 
| + +### `sharepoint_list_sites` + +List details of all SharePoint sites + +#### Input + +| Parameter | Type | Required | Description | +| --------- | ---- | -------- | ----------- | +| `accessToken` | string | Yes | The access token for the SharePoint API | +| `siteSelector` | string | No | Select the SharePoint site | +| `groupId` | string | No | The group ID for accessing a group team site | + +#### Output + +| Parameter | Type | Description | +| --------- | ---- | ----------- | +| `sites` | json | An array of SharePoint site objects, each containing details such as id, name, and more. | + + + +## Notes + +- Category: `tools` +- Type: `sharepoint` diff --git a/apps/sim/.env.example b/apps/sim/.env.example index 5c126fb3230..ee2c0f84d69 100644 --- a/apps/sim/.env.example +++ b/apps/sim/.env.example @@ -15,3 +15,6 @@ ENCRYPTION_KEY=your_encryption_key # Use `openssl rand -hex 32` to generate # RESEND_API_KEY= # Uncomment and add your key from https://resend.com to send actual emails # If left commented out, emails will be logged to console instead +# Local AI Models (Optional) +# OLLAMA_URL=http://localhost:11434 # URL for local Ollama server - uncomment if using local models + diff --git a/apps/sim/app/(auth)/layout.tsx b/apps/sim/app/(auth)/layout.tsx index 76a3726817d..3d1f4bcb30e 100644 --- a/apps/sim/app/(auth)/layout.tsx +++ b/apps/sim/app/(auth)/layout.tsx @@ -2,9 +2,12 @@ import Image from 'next/image' import Link from 'next/link' +import { useBrandConfig } from '@/lib/branding/branding' import { GridPattern } from '@/app/(landing)/components/grid-pattern' export default function AuthLayout({ children }: { children: React.ReactNode }) { + const brand = useBrandConfig() + return (
{/* Background pattern */} @@ -21,7 +24,17 @@ export default function AuthLayout({ children }: { children: React.ReactNode })
- Sim Logo + {brand.logoUrl ? ( + {`${brand.name} + ) : ( + {`${brand.name} + )}
diff --git a/apps/sim/app/(landing)/components/nav-client.tsx b/apps/sim/app/(landing)/components/nav-client.tsx index 6bc96ea1cc0..e312b7e09d9 100644 --- a/apps/sim/app/(landing)/components/nav-client.tsx +++ b/apps/sim/app/(landing)/components/nav-client.tsx @@ -15,6 +15,7 @@ import { SheetTitle, SheetTrigger, } from '@/components/ui/sheet' +import { useBrandConfig } from '@/lib/branding/branding' import { usePrefetchOnHover } from '@/app/(landing)/utils/prefetch' // --- Framer Motion Variants --- @@ -165,6 +166,7 @@ export default function NavClient({ const [isMobile, setIsMobile] = useState(initialIsMobile ?? false) const [isSheetOpen, setIsSheetOpen] = useState(false) const _router = useRouter() + const brand = useBrandConfig() useEffect(() => { setMounted(true) @@ -199,7 +201,17 @@ export default function NavClient({
- Sim Logo + {brand.logoUrl ? ( + {`${brand.name} + ) : ( + {`${brand.name} + )}
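Both components above apply the same fallback rule: render the configured brand logo when `brand.logoUrl` is set, otherwise fall back to the bundled default. Reduced to its selection logic (the default asset path here is a placeholder, not necessarily the path the repo uses):

```typescript
// Sketch of the brand-logo fallback introduced in the two components above.
// '/static/sim.svg' is a placeholder default; the real asset path lives in the repo.
function resolveLogoSrc(brand: { name: string; logoUrl?: string }): string {
  return brand.logoUrl ?? '/static/sim.svg'
}

const customSrc = resolveLogoSrc({ name: 'Acme', logoUrl: 'https://cdn.example.com/acme.svg' })
const defaultSrc = resolveLogoSrc({ name: 'Sim' })
console.log(customSrc) // https://cdn.example.com/acme.svg
console.log(defaultSrc) // /static/sim.svg
```

Centralizing this in `useBrandConfig` means white-label deployments only need to set one config value instead of patching each component.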
diff --git a/apps/sim/app/api/auth/oauth/connections/route.ts b/apps/sim/app/api/auth/oauth/connections/route.ts index b174564c3ab..6bcb0c6b20f 100644 --- a/apps/sim/app/api/auth/oauth/connections/route.ts +++ b/apps/sim/app/api/auth/oauth/connections/route.ts @@ -6,8 +6,6 @@ import { createLogger } from '@/lib/logs/console/logger' import { db } from '@/db' import { account, user } from '@/db/schema' -export const dynamic = 'force-dynamic' - const logger = createLogger('OAuthConnectionsAPI') interface GoogleIdToken { diff --git a/apps/sim/app/api/billing/route.ts b/apps/sim/app/api/billing/route.ts index bf92abd8fab..6769fee05a5 100644 --- a/apps/sim/app/api/billing/route.ts +++ b/apps/sim/app/api/billing/route.ts @@ -9,8 +9,6 @@ import { member } from '@/db/schema' const logger = createLogger('UnifiedBillingAPI') -export const dynamic = 'force-dynamic' - /** * Unified Billing Endpoint */ diff --git a/apps/sim/app/api/copilot/chat/file-utils.ts b/apps/sim/app/api/copilot/chat/file-utils.ts new file mode 100644 index 00000000000..48b81bafa6c --- /dev/null +++ b/apps/sim/app/api/copilot/chat/file-utils.ts @@ -0,0 +1,132 @@ +export interface FileAttachment { + id: string + s3_key: string + filename: string + media_type: string + size: number +} + +export interface AnthropicMessageContent { + type: 'text' | 'image' | 'document' + text?: string + source?: { + type: 'base64' + media_type: string + data: string + } +} + +/** + * Mapping of MIME types to Anthropic content types + */ +export const MIME_TYPE_MAPPING: Record = { + // Images + 'image/jpeg': 'image', + 'image/jpg': 'image', + 'image/png': 'image', + 'image/gif': 'image', + 'image/webp': 'image', + 'image/svg+xml': 'image', + + // Documents + 'application/pdf': 'document', + 'text/plain': 'document', + 'text/csv': 'document', + 'application/json': 'document', + 'application/xml': 'document', + 'text/xml': 'document', + 'text/html': 'document', + 
'application/vnd.openxmlformats-officedocument.wordprocessingml.document': 'document', // .docx + 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet': 'document', // .xlsx + 'application/vnd.openxmlformats-officedocument.presentationml.presentation': 'document', // .pptx + 'application/msword': 'document', // .doc + 'application/vnd.ms-excel': 'document', // .xls + 'application/vnd.ms-powerpoint': 'document', // .ppt + 'text/markdown': 'document', + 'application/rtf': 'document', +} + +/** + * Get the Anthropic content type for a given MIME type + */ +export function getAnthropicContentType(mimeType: string): 'image' | 'document' | null { + return MIME_TYPE_MAPPING[mimeType.toLowerCase()] || null +} + +/** + * Check if a MIME type is supported by Anthropic + */ +export function isSupportedFileType(mimeType: string): boolean { + return mimeType.toLowerCase() in MIME_TYPE_MAPPING +} + +/** + * Convert a file buffer to base64 + */ +export function bufferToBase64(buffer: Buffer): string { + return buffer.toString('base64') +} + +/** + * Create Anthropic message content from file data + */ +export function createAnthropicFileContent( + fileBuffer: Buffer, + mimeType: string +): AnthropicMessageContent | null { + const contentType = getAnthropicContentType(mimeType) + if (!contentType) { + return null + } + + return { + type: contentType, + source: { + type: 'base64', + media_type: mimeType, + data: bufferToBase64(fileBuffer), + }, + } +} + +/** + * Extract file extension from filename + */ +export function getFileExtension(filename: string): string { + const lastDot = filename.lastIndexOf('.') + return lastDot !== -1 ? 
filename.slice(lastDot + 1).toLowerCase() : '' +} + +/** + * Get MIME type from file extension (fallback if not provided) + */ +export function getMimeTypeFromExtension(extension: string): string { + const extensionMimeMap: Record = { + // Images + jpg: 'image/jpeg', + jpeg: 'image/jpeg', + png: 'image/png', + gif: 'image/gif', + webp: 'image/webp', + svg: 'image/svg+xml', + + // Documents + pdf: 'application/pdf', + txt: 'text/plain', + csv: 'text/csv', + json: 'application/json', + xml: 'application/xml', + html: 'text/html', + htm: 'text/html', + docx: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document', + xlsx: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', + pptx: 'application/vnd.openxmlformats-officedocument.presentationml.presentation', + doc: 'application/msword', + xls: 'application/vnd.ms-excel', + ppt: 'application/vnd.ms-powerpoint', + md: 'text/markdown', + rtf: 'application/rtf', + } + + return extensionMimeMap[extension.toLowerCase()] || 'application/octet-stream' +} diff --git a/apps/sim/app/api/copilot/chat/route.ts b/apps/sim/app/api/copilot/chat/route.ts index 6b5bfad14d0..9f042f855ed 100644 --- a/apps/sim/app/api/copilot/chat/route.ts +++ b/apps/sim/app/api/copilot/chat/route.ts @@ -13,12 +13,25 @@ import { getCopilotModel } from '@/lib/copilot/config' import { TITLE_GENERATION_SYSTEM_PROMPT, TITLE_GENERATION_USER_PROMPT } from '@/lib/copilot/prompts' import { env } from '@/lib/env' import { createLogger } from '@/lib/logs/console/logger' +import { downloadFile } from '@/lib/uploads' +import { downloadFromS3WithConfig } from '@/lib/uploads/s3/s3-client' +import { S3_COPILOT_CONFIG, USE_S3_STORAGE } from '@/lib/uploads/setup' import { db } from '@/db' import { copilotChats } from '@/db/schema' import { executeProviderRequest } from '@/providers' +import { createAnthropicFileContent, isSupportedFileType } from './file-utils' const logger = createLogger('CopilotChatAPI') +// Schema for file attachments 
+const FileAttachmentSchema = z.object({ + id: z.string(), + s3_key: z.string(), + filename: z.string(), + media_type: z.string(), + size: z.number(), +}) + // Schema for chat messages const ChatMessageSchema = z.object({ message: z.string().min(1, 'Message is required'), @@ -29,6 +42,7 @@ const ChatMessageSchema = z.object({ createNewChat: z.boolean().optional().default(false), stream: z.boolean().optional().default(true), implicitFeedback: z.string().optional(), + fileAttachments: z.array(FileAttachmentSchema).optional(), }) // Sim Agent API configuration @@ -145,6 +159,7 @@ export async function POST(req: NextRequest) { createNewChat, stream, implicitFeedback, + fileAttachments, } = ChatMessageSchema.parse(body) logger.info(`[${tracker.requestId}] Processing copilot chat request`, { @@ -195,15 +210,91 @@ export async function POST(req: NextRequest) { } } + // Process file attachments if present + const processedFileContents: any[] = [] + if (fileAttachments && fileAttachments.length > 0) { + logger.info(`[${tracker.requestId}] Processing ${fileAttachments.length} file attachments`) + + for (const attachment of fileAttachments) { + try { + // Check if file type is supported + if (!isSupportedFileType(attachment.media_type)) { + logger.warn(`[${tracker.requestId}] Unsupported file type: ${attachment.media_type}`) + continue + } + + // Download file from S3 + logger.info(`[${tracker.requestId}] Downloading file: ${attachment.s3_key}`) + let fileBuffer: Buffer + if (USE_S3_STORAGE) { + fileBuffer = await downloadFromS3WithConfig(attachment.s3_key, S3_COPILOT_CONFIG) + } else { + // Fallback to generic downloadFile for other storage providers + fileBuffer = await downloadFile(attachment.s3_key) + } + + // Convert to Anthropic format + const fileContent = createAnthropicFileContent(fileBuffer, attachment.media_type) + if (fileContent) { + processedFileContents.push(fileContent) + logger.info( + `[${tracker.requestId}] Processed file: ${attachment.filename} 
(${attachment.media_type})` + ) + } + } catch (error) { + logger.error( + `[${tracker.requestId}] Failed to process file ${attachment.filename}:`, + error + ) + // Continue processing other files + } + } + } + // Build messages array for sim agent with conversation history const messages = [] - // Add conversation history + // Add conversation history (need to rebuild these with file support if they had attachments) for (const msg of conversationHistory) { - messages.push({ - role: msg.role, - content: msg.content, - }) + if (msg.fileAttachments && msg.fileAttachments.length > 0) { + // This is a message with file attachments - rebuild with content array + const content: any[] = [{ type: 'text', text: msg.content }] + + // Process file attachments for historical messages + for (const attachment of msg.fileAttachments) { + try { + if (isSupportedFileType(attachment.media_type)) { + let fileBuffer: Buffer + if (USE_S3_STORAGE) { + fileBuffer = await downloadFromS3WithConfig(attachment.s3_key, S3_COPILOT_CONFIG) + } else { + // Fallback to generic downloadFile for other storage providers + fileBuffer = await downloadFile(attachment.s3_key) + } + const fileContent = createAnthropicFileContent(fileBuffer, attachment.media_type) + if (fileContent) { + content.push(fileContent) + } + } + } catch (error) { + logger.error( + `[${tracker.requestId}] Failed to process historical file ${attachment.filename}:`, + error + ) + } + } + + messages.push({ + role: msg.role, + content, + }) + } else { + // Regular text-only message + messages.push({ + role: msg.role, + content: msg.content, + }) + } } // Add implicit feedback if provided @@ -214,11 +305,27 @@ export async function POST(req: NextRequest) { }) } - // Add current user message - messages.push({ - role: 'user', - content: message, - }) + // Add current user message with file attachments + if (processedFileContents.length > 0) { + // Message with files - use content array format + const content: any[] = [{ type: 'text', 
text: message }] + + // Add file contents + for (const fileContent of processedFileContents) { + content.push(fileContent) + } + + messages.push({ + role: 'user', + content, + }) + } else { + // Text-only message + messages.push({ + role: 'user', + content: message, + }) + } // Start title generation in parallel if this is a new chat with first message if (actualChatId && !currentChat?.title && conversationHistory.length === 0) { @@ -270,6 +377,7 @@ export async function POST(req: NextRequest) { role: 'user', content: message, timestamp: new Date().toISOString(), + ...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }), } // Create a pass-through stream that captures the response @@ -590,6 +698,7 @@ export async function POST(req: NextRequest) { role: 'user', content: message, timestamp: new Date().toISOString(), + ...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }), } const assistantMessage = { diff --git a/apps/sim/app/api/copilot/chat/update-messages/route.ts b/apps/sim/app/api/copilot/chat/update-messages/route.ts index c7af5952516..598679e560c 100644 --- a/apps/sim/app/api/copilot/chat/update-messages/route.ts +++ b/apps/sim/app/api/copilot/chat/update-messages/route.ts @@ -24,6 +24,17 @@ const UpdateMessagesSchema = z.object({ timestamp: z.string(), toolCalls: z.array(z.any()).optional(), contentBlocks: z.array(z.any()).optional(), + fileAttachments: z + .array( + z.object({ + id: z.string(), + s3_key: z.string(), + filename: z.string(), + media_type: z.string(), + size: z.number(), + }) + ) + .optional(), }) ), }) diff --git a/apps/sim/app/api/copilot/methods/route.test.ts b/apps/sim/app/api/copilot/methods/route.test.ts index 206aa7bceda..02cae1bc637 100644 --- a/apps/sim/app/api/copilot/methods/route.test.ts +++ b/apps/sim/app/api/copilot/methods/route.test.ts @@ -354,7 +354,14 @@ describe('Copilot Methods API Route', () => { 86400 ) expect(mockRedisGet).toHaveBeenCalledWith('tool_call:tool-call-123') - 
expect(mockToolRegistryExecute).toHaveBeenCalledWith('interrupt-tool', { key: 'value' }) + expect(mockToolRegistryExecute).toHaveBeenCalledWith('interrupt-tool', { + key: 'value', + confirmationMessage: 'User approved', + fullData: { + message: 'User approved', + status: 'accepted', + }, + }) }) it('should handle tool execution with interrupt - user rejection', async () => { @@ -613,6 +620,10 @@ describe('Copilot Methods API Route', () => { expect(mockToolRegistryExecute).toHaveBeenCalledWith('no_op', { existing: 'param', confirmationMessage: 'Confirmation message', + fullData: { + message: 'Confirmation message', + status: 'accepted', + }, }) }) diff --git a/apps/sim/app/api/copilot/methods/route.ts b/apps/sim/app/api/copilot/methods/route.ts index ced17d38ac3..7a21f8a875a 100644 --- a/apps/sim/app/api/copilot/methods/route.ts +++ b/apps/sim/app/api/copilot/methods/route.ts @@ -57,7 +57,7 @@ async function addToolToRedis(toolCallId: string): Promise<void> { */ async function pollRedisForTool( toolCallId: string -): Promise<{ status: NotificationStatus; message?: string } | null> { +): Promise<{ status: NotificationStatus; message?: string; fullData?: any } | null> { const redis = getRedisClient() if (!redis) { logger.warn('pollRedisForTool: Redis client not available') @@ -86,12 +86,14 @@ async function pollRedisForTool( let status: NotificationStatus | null = null let message: string | undefined + let fullData: any = null // Try to parse as JSON (new format), fallback to string (old format) try { const parsedData = JSON.parse(redisValue) status = parsedData.status as NotificationStatus message = parsedData.message || undefined + fullData = parsedData // Store the full parsed data } catch { // Fallback to old format (direct status string) status = redisValue as NotificationStatus @@ -138,7 +140,7 @@ async function pollRedisForTool( }) } - return { status, message } + return { status, message, fullData } } // Wait before next poll @@ -163,9 +165,13 @@ async function 
pollRedisForTool( * Handle tool calls that require user interruption/approval * Returns { approved: boolean, rejected: boolean, error?: boolean, message?: string } to distinguish between rejection, timeout, and error */ -async function interruptHandler( - toolCallId: string -): Promise<{ approved: boolean; rejected: boolean; error?: boolean; message?: string }> { +async function interruptHandler(toolCallId: string): Promise<{ + approved: boolean + rejected: boolean + error?: boolean + message?: string + fullData?: any +}> { if (!toolCallId) { logger.error('interruptHandler: No tool call ID provided') return { approved: false, rejected: false, error: true, message: 'No tool call ID provided' } @@ -185,31 +191,31 @@ async function interruptHandler( return { approved: false, rejected: false } } - const { status, message } = result + const { status, message, fullData } = result if (status === 'rejected') { logger.info('Tool execution rejected by user', { toolCallId, message }) - return { approved: false, rejected: true, message } + return { approved: false, rejected: true, message, fullData } } if (status === 'accepted') { logger.info('Tool execution approved by user', { toolCallId, message }) - return { approved: true, rejected: false, message } + return { approved: true, rejected: false, message, fullData } } if (status === 'error') { logger.error('Tool execution failed with error', { toolCallId, message }) - return { approved: false, rejected: false, error: true, message } + return { approved: false, rejected: false, error: true, message, fullData } } if (status === 'background') { logger.info('Tool execution moved to background', { toolCallId, message }) - return { approved: true, rejected: false, message } + return { approved: true, rejected: false, message, fullData } } if (status === 'success') { logger.info('Tool execution completed successfully', { toolCallId, message }) - return { approved: true, rejected: false, message } + return { approved: true, rejected: 
false, message, fullData } } logger.warn('Unexpected tool call status', { toolCallId, status, message }) @@ -326,7 +332,7 @@ export async function POST(req: NextRequest) { }) // Handle interrupt flow - const { approved, rejected, error, message } = await interruptHandler(toolCallId) + const { approved, rejected, error, message, fullData } = await interruptHandler(toolCallId) if (rejected) { logger.info(`[${requestId}] Tool execution rejected by user`, { @@ -371,10 +377,13 @@ export async function POST(req: NextRequest) { message, }) - // For noop tool, pass the confirmation message as a parameter - if (methodId === 'no_op' && message) { + // For tools that need confirmation data, pass the message and/or fullData as parameters + if (message) { params.confirmationMessage = message } + if (fullData) { + params.fullData = fullData + } } // Execute the tool directly via registry diff --git a/apps/sim/app/api/files/presigned/route.ts b/apps/sim/app/api/files/presigned/route.ts index 30c33215446..a343d3a1eb3 100644 --- a/apps/sim/app/api/files/presigned/route.ts +++ b/apps/sim/app/api/files/presigned/route.ts @@ -9,9 +9,11 @@ import { getS3Client, sanitizeFilenameForMetadata } from '@/lib/uploads/s3/s3-cl import { BLOB_CHAT_CONFIG, BLOB_CONFIG, + BLOB_COPILOT_CONFIG, BLOB_KB_CONFIG, S3_CHAT_CONFIG, S3_CONFIG, + S3_COPILOT_CONFIG, S3_KB_CONFIG, } from '@/lib/uploads/setup' import { createErrorResponse, createOptionsResponse } from '@/app/api/files/utils' @@ -22,9 +24,11 @@ interface PresignedUrlRequest { fileName: string contentType: string fileSize: number + userId?: string + chatId?: string } -type UploadType = 'general' | 'knowledge-base' | 'chat' +type UploadType = 'general' | 'knowledge-base' | 'chat' | 'copilot' class PresignedUrlError extends Error { constructor( @@ -58,7 +62,7 @@ export async function POST(request: NextRequest) { throw new ValidationError('Invalid JSON in request body') } - const { fileName, contentType, fileSize } = data + const { fileName, 
contentType, fileSize, userId, chatId } = data if (!fileName?.trim()) { throw new ValidationError('fileName is required and cannot be empty') @@ -83,7 +87,16 @@ export async function POST(request: NextRequest) { ? 'knowledge-base' : uploadTypeParam === 'chat' ? 'chat' - : 'general' + : uploadTypeParam === 'copilot' + ? 'copilot' + : 'general' + + // Validate copilot-specific requirements + if (uploadType === 'copilot') { + if (!userId?.trim()) { + throw new ValidationError('userId is required for copilot uploads') + } + } if (!isUsingCloudStorage()) { throw new StorageConfigError( @@ -96,9 +109,9 @@ export async function POST(request: NextRequest) { switch (storageProvider) { case 's3': - return await handleS3PresignedUrl(fileName, contentType, fileSize, uploadType) + return await handleS3PresignedUrl(fileName, contentType, fileSize, uploadType, userId) case 'blob': - return await handleBlobPresignedUrl(fileName, contentType, fileSize, uploadType) + return await handleBlobPresignedUrl(fileName, contentType, fileSize, uploadType, userId) default: throw new StorageConfigError(`Unknown storage provider: ${storageProvider}`) } @@ -126,7 +139,8 @@ async function handleS3PresignedUrl( fileName: string, contentType: string, fileSize: number, - uploadType: UploadType + uploadType: UploadType, + userId?: string ) { try { const config = @@ -134,15 +148,26 @@ async function handleS3PresignedUrl( ? S3_KB_CONFIG : uploadType === 'chat' ? S3_CHAT_CONFIG - : S3_CONFIG + : uploadType === 'copilot' + ? S3_COPILOT_CONFIG + : S3_CONFIG if (!config.bucket || !config.region) { throw new StorageConfigError(`S3 configuration missing for ${uploadType} uploads`) } const safeFileName = fileName.replace(/\s+/g, '-').replace(/[^a-zA-Z0-9.-]/g, '_') - const prefix = uploadType === 'knowledge-base' ? 'kb/' : uploadType === 'chat' ? 
'chat/' : '' - const uniqueKey = `${prefix}${Date.now()}-${uuidv4()}-${safeFileName}` + + let prefix = '' + if (uploadType === 'knowledge-base') { + prefix = 'kb/' + } else if (uploadType === 'chat') { + prefix = 'chat/' + } else if (uploadType === 'copilot') { + prefix = `${userId}/` + } + + const uniqueKey = `${prefix}${uuidv4()}-${safeFileName}` const sanitizedOriginalName = sanitizeFilenameForMetadata(fileName) @@ -155,6 +180,9 @@ async function handleS3PresignedUrl( metadata.purpose = 'knowledge-base' } else if (uploadType === 'chat') { metadata.purpose = 'chat' + } else if (uploadType === 'copilot') { + metadata.purpose = 'copilot' + metadata.userId = userId || '' } const command = new PutObjectCommand({ @@ -210,7 +238,8 @@ async function handleBlobPresignedUrl( fileName: string, contentType: string, fileSize: number, - uploadType: UploadType + uploadType: UploadType, + userId?: string ) { try { const config = @@ -218,7 +247,9 @@ async function handleBlobPresignedUrl( ? BLOB_KB_CONFIG : uploadType === 'chat' ? BLOB_CHAT_CONFIG - : BLOB_CONFIG + : uploadType === 'copilot' + ? BLOB_COPILOT_CONFIG + : BLOB_CONFIG if ( !config.accountName || @@ -229,8 +260,17 @@ async function handleBlobPresignedUrl( } const safeFileName = fileName.replace(/\s+/g, '-').replace(/[^a-zA-Z0-9.-]/g, '_') - const prefix = uploadType === 'knowledge-base' ? 'kb/' : uploadType === 'chat' ? 
'chat/' : '' - const uniqueKey = `${prefix}${Date.now()}-${uuidv4()}-${safeFileName}` + + let prefix = '' + if (uploadType === 'knowledge-base') { + prefix = 'kb/' + } else if (uploadType === 'chat') { + prefix = 'chat/' + } else if (uploadType === 'copilot') { + prefix = `${userId}/` + } + + const uniqueKey = `${prefix}${uuidv4()}-${safeFileName}` const blobServiceClient = getBlobServiceClient() const containerClient = blobServiceClient.getContainerClient(config.containerName) @@ -282,6 +322,9 @@ async function handleBlobPresignedUrl( uploadHeaders['x-ms-meta-purpose'] = 'knowledge-base' } else if (uploadType === 'chat') { uploadHeaders['x-ms-meta-purpose'] = 'chat' + } else if (uploadType === 'copilot') { + uploadHeaders['x-ms-meta-purpose'] = 'copilot' + uploadHeaders['x-ms-meta-userid'] = encodeURIComponent(userId || '') } return NextResponse.json({ diff --git a/apps/sim/app/api/files/serve/[...path]/route.ts b/apps/sim/app/api/files/serve/[...path]/route.ts index 810bd58e108..4b18b7cf600 100644 --- a/apps/sim/app/api/files/serve/[...path]/route.ts +++ b/apps/sim/app/api/files/serve/[...path]/route.ts @@ -13,8 +13,6 @@ import { getContentType, } from '@/app/api/files/utils' -export const dynamic = 'force-dynamic' - const logger = createLogger('FilesServeAPI') async function streamToBuffer(readableStream: NodeJS.ReadableStream): Promise<Buffer> { @@ -58,7 +56,11 @@ export async function GET( if (isUsingCloudStorage() || isCloudPath) { // Extract the actual key (remove 's3/' or 'blob/' prefix if present) const cloudKey = isCloudPath ?
path.slice(1).join('/') : fullPath - return await handleCloudProxy(cloudKey) + + // Get bucket type from query parameter + const bucketType = request.nextUrl.searchParams.get('bucket') + + return await handleCloudProxy(cloudKey, bucketType) } // Use local handler for local files @@ -152,12 +154,37 @@ async function downloadKBFile(cloudKey: string): Promise<Buffer> { /** * Proxy cloud file through our server */ -async function handleCloudProxy(cloudKey: string): Promise<NextResponse> { +async function handleCloudProxy( + cloudKey: string, + bucketType?: string | null +): Promise<NextResponse> { try { // Check if this is a KB file (starts with 'kb/') const isKBFile = cloudKey.startsWith('kb/') - const fileBuffer = isKBFile ? await downloadKBFile(cloudKey) : await downloadFile(cloudKey) + let fileBuffer: Buffer + + if (isKBFile) { + fileBuffer = await downloadKBFile(cloudKey) + } else if (bucketType === 'copilot') { + // Download from copilot-specific bucket + const storageProvider = getStorageProvider() + + if (storageProvider === 's3') { + const { downloadFromS3WithConfig } = await import('@/lib/uploads/s3/s3-client') + const { S3_COPILOT_CONFIG } = await import('@/lib/uploads/setup') + fileBuffer = await downloadFromS3WithConfig(cloudKey, S3_COPILOT_CONFIG) + } else if (storageProvider === 'blob') { + // For Azure Blob, use the default downloadFile for now + // TODO: Add downloadFromBlobWithConfig when needed + fileBuffer = await downloadFile(cloudKey) + } else { + fileBuffer = await downloadFile(cloudKey) + } + } else { + // Default bucket + fileBuffer = await downloadFile(cloudKey) + } // Extract the original filename from the key (last part after last /) const originalFilename = cloudKey.split('/').pop() || 'download' diff --git a/apps/sim/app/api/folders/[id]/route.ts b/apps/sim/app/api/folders/[id]/route.ts index 06ce831139a..a686fca0bef 100644 --- a/apps/sim/app/api/folders/[id]/route.ts +++ b/apps/sim/app/api/folders/[id]/route.ts @@ -2,9 +2,6 @@ import { and, eq } from 'drizzle-orm' import { 
type NextRequest, NextResponse } from 'next/server' import { getSession } from '@/lib/auth' import { createLogger } from '@/lib/logs/console/logger' - -export const dynamic = 'force-dynamic' - import { getUserEntityPermissions } from '@/lib/permissions/utils' import { db } from '@/db' import { workflow, workflowFolder } from '@/db/schema' diff --git a/apps/sim/app/api/folders/route.ts b/apps/sim/app/api/folders/route.ts index c713b5a11cd..0451870cbd1 100644 --- a/apps/sim/app/api/folders/route.ts +++ b/apps/sim/app/api/folders/route.ts @@ -8,8 +8,6 @@ import { workflowFolder } from '@/db/schema' const logger = createLogger('FoldersAPI') -export const dynamic = 'force-dynamic' - // GET - Fetch folders for a workspace export async function GET(request: NextRequest) { try { diff --git a/apps/sim/app/api/knowledge/[id]/documents/[documentId]/chunks/[chunkId]/route.ts b/apps/sim/app/api/knowledge/[id]/documents/[documentId]/chunks/[chunkId]/route.ts index f453790ebe0..b2eeb803522 100644 --- a/apps/sim/app/api/knowledge/[id]/documents/[documentId]/chunks/[chunkId]/route.ts +++ b/apps/sim/app/api/knowledge/[id]/documents/[documentId]/chunks/[chunkId]/route.ts @@ -1,4 +1,4 @@ -import crypto from 'node:crypto' +import { createHash, randomUUID } from 'crypto' import { eq, sql } from 'drizzle-orm' import { type NextRequest, NextResponse } from 'next/server' import { z } from 'zod' @@ -22,7 +22,7 @@ export async function GET( req: NextRequest, { params }: { params: Promise<{ id: string; documentId: string; chunkId: string }> } ) { - const requestId = crypto.randomUUID().slice(0, 8) + const requestId = randomUUID().slice(0, 8) const { id: knowledgeBaseId, documentId, chunkId } = await params try { @@ -70,7 +70,7 @@ export async function PUT( req: NextRequest, { params }: { params: Promise<{ id: string; documentId: string; chunkId: string }> } ) { - const requestId = crypto.randomUUID().slice(0, 8) + const requestId = randomUUID().slice(0, 8) const { id: knowledgeBaseId, 
documentId, chunkId } = await params try { @@ -119,10 +119,7 @@ export async function PUT( updateData.contentLength = validatedData.content.length // Update token count estimation (rough approximation: 4 chars per token) updateData.tokenCount = Math.ceil(validatedData.content.length / 4) - updateData.chunkHash = crypto - .createHash('sha256') - .update(validatedData.content) - .digest('hex') + updateData.chunkHash = createHash('sha256').update(validatedData.content).digest('hex') } if (validatedData.enabled !== undefined) updateData.enabled = validatedData.enabled @@ -166,7 +163,7 @@ export async function DELETE( req: NextRequest, { params }: { params: Promise<{ id: string; documentId: string; chunkId: string }> } ) { - const requestId = crypto.randomUUID().slice(0, 8) + const requestId = randomUUID().slice(0, 8) const { id: knowledgeBaseId, documentId, chunkId } = await params try { diff --git a/apps/sim/app/api/knowledge/[id]/documents/route.ts b/apps/sim/app/api/knowledge/[id]/documents/route.ts index b7492151f15..c3b14ac4a79 100644 --- a/apps/sim/app/api/knowledge/[id]/documents/route.ts +++ b/apps/sim/app/api/knowledge/[id]/documents/route.ts @@ -1,4 +1,4 @@ -import crypto from 'node:crypto' +import { randomUUID } from 'crypto' import { and, desc, eq, inArray, isNull, sql } from 'drizzle-orm' import { type NextRequest, NextResponse } from 'next/server' import { z } from 'zod' @@ -114,7 +114,7 @@ async function processDocumentTags( // Create new tag definition if we have a slot if (targetSlot) { const newDefinition = { - id: crypto.randomUUID(), + id: randomUUID(), knowledgeBaseId, tagSlot: targetSlot as any, displayName: tagName, @@ -312,7 +312,7 @@ const BulkUpdateDocumentsSchema = z.object({ }) export async function GET(req: NextRequest, { params }: { params: Promise<{ id: string }> }) { - const requestId = crypto.randomUUID().slice(0, 8) + const requestId = randomUUID().slice(0, 8) const { id: knowledgeBaseId } = await params try { @@ -423,7 +423,7 @@ 
export async function GET(req: NextRequest, { params }: { params: Promise<{ id: } export async function POST(req: NextRequest, { params }: { params: Promise<{ id: string }> }) { - const requestId = crypto.randomUUID().slice(0, 8) + const requestId = randomUUID().slice(0, 8) const { id: knowledgeBaseId } = await params try { @@ -470,7 +470,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id: const createdDocuments = await db.transaction(async (tx) => { const documentPromises = validatedData.documents.map(async (docData) => { - const documentId = crypto.randomUUID() + const documentId = randomUUID() const now = new Date() // Process documentTagsData if provided (for knowledge base block) @@ -578,7 +578,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id: try { const validatedData = CreateDocumentSchema.parse(body) - const documentId = crypto.randomUUID() + const documentId = randomUUID() const now = new Date() // Process structured tag data if provided diff --git a/apps/sim/app/api/logs/route.ts b/apps/sim/app/api/logs/route.ts index d4832681676..ff3fd92bf70 100644 --- a/apps/sim/app/api/logs/route.ts +++ b/apps/sim/app/api/logs/route.ts @@ -41,7 +41,6 @@ function extractBlockExecutionsFromTraceSpans(traceSpans: any[]): any[] { return blockExecutions } -export const dynamic = 'force-dynamic' export const revalidate = 0 const QueryParamsSchema = z.object({ diff --git a/apps/sim/app/api/organizations/[id]/route.ts b/apps/sim/app/api/organizations/[id]/route.ts index b7e9314b9d6..2096e2a1579 100644 --- a/apps/sim/app/api/organizations/[id]/route.ts +++ b/apps/sim/app/api/organizations/[id]/route.ts @@ -7,9 +7,6 @@ import { updateOrganizationSeats, } from '@/lib/billing/validation/seat-management' import { createLogger } from '@/lib/logs/console/logger' - -export const dynamic = 'force-dynamic' - import { db } from '@/db' import { member, organization } from '@/db/schema' diff --git 
a/apps/sim/app/api/organizations/[id]/workspaces/route.ts b/apps/sim/app/api/organizations/[id]/workspaces/route.ts index 3393b60127f..8fb95ce81be 100644 --- a/apps/sim/app/api/organizations/[id]/workspaces/route.ts +++ b/apps/sim/app/api/organizations/[id]/workspaces/route.ts @@ -7,8 +7,6 @@ import { member, permissions, user, workspace } from '@/db/schema' const logger = createLogger('OrganizationWorkspacesAPI') -export const dynamic = 'force-dynamic' - /** * GET /api/organizations/[id]/workspaces * Get workspaces related to the organization with optional filtering diff --git a/apps/sim/app/api/providers/ollama/models/route.ts b/apps/sim/app/api/providers/ollama/models/route.ts new file mode 100644 index 00000000000..7c184588b60 --- /dev/null +++ b/apps/sim/app/api/providers/ollama/models/route.ts @@ -0,0 +1,52 @@ +import { type NextRequest, NextResponse } from 'next/server' +import { env } from '@/lib/env' +import { createLogger } from '@/lib/logs/console/logger' +import type { ModelsObject } from '@/providers/ollama/types' + +const logger = createLogger('OllamaModelsAPI') +const OLLAMA_HOST = env.OLLAMA_URL || 'http://localhost:11434' + +export const dynamic = 'force-dynamic' + +/** + * Get available Ollama models + */ +export async function GET(request: NextRequest) { + try { + logger.info('Fetching Ollama models', { + host: OLLAMA_HOST, + }) + + const response = await fetch(`${OLLAMA_HOST}/api/tags`, { + headers: { + 'Content-Type': 'application/json', + }, + }) + + if (!response.ok) { + logger.warn('Ollama service is not available', { + status: response.status, + statusText: response.statusText, + }) + return NextResponse.json({ models: [] }) + } + + const data = (await response.json()) as ModelsObject + const models = data.models.map((model) => model.name) + + logger.info('Successfully fetched Ollama models', { + count: models.length, + models, + }) + + return NextResponse.json({ models }) + } catch (error) { + logger.error('Failed to fetch Ollama models', 
{ + error: error instanceof Error ? error.message : 'Unknown error', + host: OLLAMA_HOST, + }) + + // Return empty array instead of error to avoid breaking the UI + return NextResponse.json({ models: [] }) + } +} diff --git a/apps/sim/app/api/templates/[id]/route.ts b/apps/sim/app/api/templates/[id]/route.ts index 8a4a4e181bc..df7f32a85a4 100644 --- a/apps/sim/app/api/templates/[id]/route.ts +++ b/apps/sim/app/api/templates/[id]/route.ts @@ -7,7 +7,6 @@ import { templates } from '@/db/schema' const logger = createLogger('TemplateByIdAPI') -export const dynamic = 'force-dynamic' export const revalidate = 0 // GET /api/templates/[id] - Retrieve a single template by ID diff --git a/apps/sim/app/api/templates/route.ts b/apps/sim/app/api/templates/route.ts index 9d87d8a7b5e..9e84092a81a 100644 --- a/apps/sim/app/api/templates/route.ts +++ b/apps/sim/app/api/templates/route.ts @@ -9,7 +9,6 @@ import { templateStars, templates, workflow } from '@/db/schema' const logger = createLogger('TemplatesAPI') -export const dynamic = 'force-dynamic' export const revalidate = 0 // Function to sanitize sensitive data from workflow state diff --git a/apps/sim/app/api/tools/microsoft_planner/tasks/route.ts b/apps/sim/app/api/tools/microsoft_planner/tasks/route.ts new file mode 100644 index 00000000000..f25802e8c89 --- /dev/null +++ b/apps/sim/app/api/tools/microsoft_planner/tasks/route.ts @@ -0,0 +1,110 @@ +import { randomUUID } from 'crypto' +import { eq } from 'drizzle-orm' +import { type NextRequest, NextResponse } from 'next/server' +import { getSession } from '@/lib/auth' +import { createLogger } from '@/lib/logs/console/logger' +import { refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils' +import { db } from '@/db' +import { account } from '@/db/schema' +import type { PlannerTask } from '@/tools/microsoft_planner/types' + +const logger = createLogger('MicrosoftPlannerTasksAPI') + +export async function GET(request: NextRequest) { + const requestId = 
randomUUID().slice(0, 8) + + try { + const session = await getSession() + + if (!session?.user?.id) { + logger.warn(`[${requestId}] Unauthenticated request rejected`) + return NextResponse.json({ error: 'User not authenticated' }, { status: 401 }) + } + + const { searchParams } = new URL(request.url) + const credentialId = searchParams.get('credentialId') + const planId = searchParams.get('planId') + + if (!credentialId) { + logger.error(`[${requestId}] Missing credentialId parameter`) + return NextResponse.json({ error: 'Credential ID is required' }, { status: 400 }) + } + + if (!planId) { + logger.error(`[${requestId}] Missing planId parameter`) + return NextResponse.json({ error: 'Plan ID is required' }, { status: 400 }) + } + + // Get the credential from the database + const credentials = await db.select().from(account).where(eq(account.id, credentialId)).limit(1) + + if (!credentials.length) { + logger.warn(`[${requestId}] Credential not found`, { credentialId }) + return NextResponse.json({ error: 'Credential not found' }, { status: 404 }) + } + + const credential = credentials[0] + + // Check if the credential belongs to the user + if (credential.userId !== session.user.id) { + logger.warn(`[${requestId}] Unauthorized credential access attempt`, { + credentialUserId: credential.userId, + requestUserId: session.user.id, + }) + return NextResponse.json({ error: 'Unauthorized' }, { status: 403 }) + } + + // Refresh access token if needed + const accessToken = await refreshAccessTokenIfNeeded(credentialId, session.user.id, requestId) + + if (!accessToken) { + logger.error(`[${requestId}] Failed to obtain valid access token`) + return NextResponse.json({ error: 'Failed to obtain valid access token' }, { status: 401 }) + } + + // Fetch tasks directly from Microsoft Graph API + const response = await fetch(`https://graph.microsoft.com/v1.0/planner/plans/${planId}/tasks`, { + headers: { + Authorization: `Bearer ${accessToken}`, + }, + }) + + if (!response.ok) { + 
const errorText = await response.text() + logger.error(`[${requestId}] Microsoft Graph API error:`, errorText) + return NextResponse.json( + { error: 'Failed to fetch tasks from Microsoft Graph' }, + { status: response.status } + ) + } + + const data = await response.json() + const tasks = data.value || [] + + // Filter tasks to only include useful fields (matching our read_task tool) + const filteredTasks = tasks.map((task: PlannerTask) => ({ + id: task.id, + title: task.title, + planId: task.planId, + bucketId: task.bucketId, + percentComplete: task.percentComplete, + priority: task.priority, + dueDateTime: task.dueDateTime, + createdDateTime: task.createdDateTime, + completedDateTime: task.completedDateTime, + hasDescription: task.hasDescription, + assignments: task.assignments ? Object.keys(task.assignments) : [], + })) + + return NextResponse.json({ + tasks: filteredTasks, + metadata: { + planId, + planUrl: `https://graph.microsoft.com/v1.0/planner/plans/${planId}`, + }, + }) + } catch (error) { + logger.error(`[${requestId}] Error fetching Microsoft Planner tasks:`, error) + return NextResponse.json({ error: 'Failed to fetch tasks' }, { status: 500 }) + } +} diff --git a/apps/sim/app/api/tools/onedrive/folder/route.ts b/apps/sim/app/api/tools/onedrive/folder/route.ts new file mode 100644 index 00000000000..d29ad7e57bd --- /dev/null +++ b/apps/sim/app/api/tools/onedrive/folder/route.ts @@ -0,0 +1,83 @@ +import { randomUUID } from 'crypto' +import { eq } from 'drizzle-orm' +import { type NextRequest, NextResponse } from 'next/server' +import { getSession } from '@/lib/auth' +import { createLogger } from '@/lib/logs/console/logger' +import { refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils' +import { db } from '@/db' +import { account } from '@/db/schema' + +export const dynamic = 'force-dynamic' + +const logger = createLogger('OneDriveFolderAPI') + +/** + * Get a single folder from Microsoft OneDrive + */ +export async function GET(request: 
NextRequest) {
+  const requestId = randomUUID().slice(0, 8)
+
+  try {
+    const session = await getSession()
+    if (!session?.user?.id) {
+      return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
+    }
+
+    const { searchParams } = new URL(request.url)
+    const credentialId = searchParams.get('credentialId')
+    const fileId = searchParams.get('fileId')
+
+    if (!credentialId || !fileId) {
+      return NextResponse.json({ error: 'Credential ID and File ID are required' }, { status: 400 })
+    }
+
+    const credentials = await db.select().from(account).where(eq(account.id, credentialId)).limit(1)
+    if (!credentials.length) {
+      return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
+    }
+
+    const credential = credentials[0]
+    if (credential.userId !== session.user.id) {
+      return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
+    }
+
+    const accessToken = await refreshAccessTokenIfNeeded(credentialId, session.user.id, requestId)
+    if (!accessToken) {
+      return NextResponse.json({ error: 'Failed to obtain valid access token' }, { status: 401 })
+    }
+
+    const response = await fetch(
+      `https://graph.microsoft.com/v1.0/me/drive/items/${fileId}?$select=id,name,folder,webUrl,createdDateTime,lastModifiedDateTime`,
+      {
+        headers: {
+          Authorization: `Bearer ${accessToken}`,
+        },
+      }
+    )
+
+    if (!response.ok) {
+      const errorData = await response.json().catch(() => ({ error: { message: 'Unknown error' } }))
+      return NextResponse.json(
+        { error: errorData.error?.message || 'Failed to fetch folder from OneDrive' },
+        { status: response.status }
+      )
+    }
+
+    const folder = await response.json()
+
+    // Transform the response to match expected format
+    const transformedFolder = {
+      id: folder.id,
+      name: folder.name,
+      mimeType: 'application/vnd.microsoft.graph.folder',
+      webViewLink: folder.webUrl,
+      createdTime: folder.createdDateTime,
+      modifiedTime: folder.lastModifiedDateTime,
+    }
+
+    return NextResponse.json({ file: transformedFolder }, { status: 200 })
+  } catch (error) {
+    logger.error(`[${requestId}] Error fetching folder from OneDrive`, error)
+    return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
+  }
+}
diff --git a/apps/sim/app/api/tools/onedrive/folders/route.ts b/apps/sim/app/api/tools/onedrive/folders/route.ts
new file mode 100644
index 00000000000..4194addfbab
--- /dev/null
+++ b/apps/sim/app/api/tools/onedrive/folders/route.ts
@@ -0,0 +1,89 @@
+import { randomUUID } from 'crypto'
+import { eq } from 'drizzle-orm'
+import { type NextRequest, NextResponse } from 'next/server'
+import { getSession } from '@/lib/auth'
+import { createLogger } from '@/lib/logs/console/logger'
+import { refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils'
+import { db } from '@/db'
+import { account } from '@/db/schema'
+
+export const dynamic = 'force-dynamic'
+
+const logger = createLogger('OneDriveFoldersAPI')
+
+import type { MicrosoftGraphDriveItem } from '@/tools/onedrive/types'
+
+/**
+ * Get folders from Microsoft OneDrive
+ */
+export async function GET(request: NextRequest) {
+  const requestId = randomUUID().slice(0, 8)
+
+  try {
+    const session = await getSession()
+    if (!session?.user?.id) {
+      return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
+    }
+
+    const { searchParams } = new URL(request.url)
+    const credentialId = searchParams.get('credentialId')
+    const query = searchParams.get('query') || ''
+
+    if (!credentialId) {
+      return NextResponse.json({ error: 'Credential ID is required' }, { status: 400 })
+    }
+
+    const credentials = await db.select().from(account).where(eq(account.id, credentialId)).limit(1)
+    if (!credentials.length) {
+      return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
+    }
+
+    const credential = credentials[0]
+    if (credential.userId !== session.user.id) {
+      return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
+    }
+
+    const accessToken = await refreshAccessTokenIfNeeded(credentialId, session.user.id, requestId)
+    if (!accessToken) {
+      return NextResponse.json({ error: 'Failed to obtain valid access token' }, { status: 401 })
+    }
+
+    // Build URL for OneDrive folders
+    let url = `https://graph.microsoft.com/v1.0/me/drive/root/children?$filter=folder ne null&$select=id,name,folder,webUrl,createdDateTime,lastModifiedDateTime&$top=50`
+
+    if (query) {
+      url += `&$search="${encodeURIComponent(query)}"`
+    }
+
+    const response = await fetch(url, {
+      headers: {
+        Authorization: `Bearer ${accessToken}`,
+      },
+    })
+
+    if (!response.ok) {
+      const errorData = await response.json().catch(() => ({ error: { message: 'Unknown error' } }))
+      return NextResponse.json(
+        { error: errorData.error?.message || 'Failed to fetch folders from OneDrive' },
+        { status: response.status }
+      )
+    }
+
+    const data = await response.json()
+    const folders = (data.value || [])
+      .filter((item: MicrosoftGraphDriveItem) => item.folder) // Only folders
+      .map((folder: MicrosoftGraphDriveItem) => ({
+        id: folder.id,
+        name: folder.name,
+        mimeType: 'application/vnd.microsoft.graph.folder',
+        webViewLink: folder.webUrl,
+        createdTime: folder.createdDateTime,
+        modifiedTime: folder.lastModifiedDateTime,
+      }))
+
+    return NextResponse.json({ files: folders }, { status: 200 })
+  } catch (error) {
+    logger.error(`[${requestId}] Error fetching folders from OneDrive`, error)
+    return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
+  }
+}
diff --git a/apps/sim/app/api/tools/sharepoint/site/route.ts b/apps/sim/app/api/tools/sharepoint/site/route.ts
new file mode 100644
index 00000000000..225bd748e7a
--- /dev/null
+++ b/apps/sim/app/api/tools/sharepoint/site/route.ts
@@ -0,0 +1,105 @@
+import { randomUUID } from 'crypto'
+import { eq } from 'drizzle-orm'
+import { type NextRequest, NextResponse } from 'next/server'
+import { getSession } from '@/lib/auth'
+import { createLogger } from '@/lib/logs/console/logger'
+import { refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils'
+import { db } from '@/db'
+import { account } from '@/db/schema'
+
+export const dynamic = 'force-dynamic'
+
+const logger = createLogger('SharePointSiteAPI')
+
+/**
+ * Get a single SharePoint site from Microsoft Graph API
+ */
+export async function GET(request: NextRequest) {
+  const requestId = randomUUID().slice(0, 8)
+
+  try {
+    const session = await getSession()
+    if (!session?.user?.id) {
+      return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
+    }
+
+    const { searchParams } = new URL(request.url)
+    const credentialId = searchParams.get('credentialId')
+    const siteId = searchParams.get('siteId')
+
+    if (!credentialId || !siteId) {
+      return NextResponse.json({ error: 'Credential ID and Site ID are required' }, { status: 400 })
+    }
+
+    const credentials = await db.select().from(account).where(eq(account.id, credentialId)).limit(1)
+    if (!credentials.length) {
+      return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
+    }
+
+    const credential = credentials[0]
+    if (credential.userId !== session.user.id) {
+      return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
+    }
+
+    const accessToken = await refreshAccessTokenIfNeeded(credentialId, session.user.id, requestId)
+    if (!accessToken) {
+      return NextResponse.json({ error: 'Failed to obtain valid access token' }, { status: 401 })
+    }
+
+    // Handle different ways to access SharePoint sites:
+    // 1. Site ID: sites/{site-id}
+    // 2. Root site: sites/root
+    // 3. Hostname: sites/{hostname}
+    // 4. Server-relative URL: sites/{hostname}:/{server-relative-path}
+    // 5. Group team site: groups/{group-id}/sites/root
+
+    let endpoint: string
+    if (siteId === 'root') {
+      endpoint = 'sites/root'
+    } else if (siteId.includes(':')) {
+      // Server-relative URL format
+      endpoint = `sites/${siteId}`
+    } else if (siteId.includes('groups/')) {
+      // Group team site format
+      endpoint = siteId
+    } else {
+      // Standard site ID or hostname
+      endpoint = `sites/${siteId}`
+    }
+
+    const response = await fetch(
+      `https://graph.microsoft.com/v1.0/${endpoint}?$select=id,name,displayName,webUrl,createdDateTime,lastModifiedDateTime`,
+      {
+        headers: {
+          Authorization: `Bearer ${accessToken}`,
+        },
+      }
+    )
+
+    if (!response.ok) {
+      const errorData = await response.json().catch(() => ({ error: { message: 'Unknown error' } }))
+      return NextResponse.json(
+        { error: errorData.error?.message || 'Failed to fetch site from SharePoint' },
+        { status: response.status }
+      )
+    }
+
+    const site = await response.json()
+
+    // Transform the response to match expected format
+    const transformedSite = {
+      id: site.id,
+      name: site.displayName || site.name,
+      mimeType: 'application/vnd.microsoft.graph.site',
+      webViewLink: site.webUrl,
+      createdTime: site.createdDateTime,
+      modifiedTime: site.lastModifiedDateTime,
+    }
+
+    logger.info(`[${requestId}] Successfully fetched SharePoint site: ${transformedSite.name}`)
+    return NextResponse.json({ site: transformedSite }, { status: 200 })
+  } catch (error) {
+    logger.error(`[${requestId}] Error fetching site from SharePoint`, error)
+    return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
+  }
+}
diff --git a/apps/sim/app/api/tools/sharepoint/sites/route.ts b/apps/sim/app/api/tools/sharepoint/sites/route.ts
new file mode 100644
index 00000000000..93bc5bd0942
--- /dev/null
+++ b/apps/sim/app/api/tools/sharepoint/sites/route.ts
@@ -0,0 +1,85 @@
+import { randomUUID } from 'crypto'
+import { eq } from 'drizzle-orm'
+import { type NextRequest, NextResponse } from 'next/server'
+import { getSession } from '@/lib/auth'
+import { createLogger } from '@/lib/logs/console/logger'
+import { refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils'
+import { db } from '@/db'
+import { account } from '@/db/schema'
+import type { SharepointSite } from '@/tools/sharepoint/types'
+
+export const dynamic = 'force-dynamic'
+
+const logger = createLogger('SharePointSitesAPI')
+
+/**
+ * Get SharePoint sites from Microsoft Graph API
+ */
+export async function GET(request: NextRequest) {
+  const requestId = randomUUID().slice(0, 8)
+
+  try {
+    const session = await getSession()
+    if (!session?.user?.id) {
+      return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
+    }
+
+    const { searchParams } = new URL(request.url)
+    const credentialId = searchParams.get('credentialId')
+    const query = searchParams.get('query') || ''
+
+    if (!credentialId) {
+      return NextResponse.json({ error: 'Credential ID is required' }, { status: 400 })
+    }
+
+    const credentials = await db.select().from(account).where(eq(account.id, credentialId)).limit(1)
+    if (!credentials.length) {
+      return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
+    }
+
+    const credential = credentials[0]
+    if (credential.userId !== session.user.id) {
+      return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
+    }
+
+    const accessToken = await refreshAccessTokenIfNeeded(credentialId, session.user.id, requestId)
+    if (!accessToken) {
+      return NextResponse.json({ error: 'Failed to obtain valid access token' }, { status: 401 })
+    }
+
+    // Build URL for SharePoint sites
+    // Use search=* to get all sites the user has access to, or search for specific query
+    const searchQuery = query || '*'
+    const url = `https://graph.microsoft.com/v1.0/sites?search=${encodeURIComponent(searchQuery)}&$select=id,name,displayName,webUrl,createdDateTime,lastModifiedDateTime&$top=50`
+
+    const response = await fetch(url, {
+      headers: {
+        Authorization: `Bearer ${accessToken}`,
+      },
+    })
+
+    if (!response.ok) {
+      const errorData = await response.json().catch(() => ({ error: { message: 'Unknown error' } }))
+      return NextResponse.json(
+        { error: errorData.error?.message || 'Failed to fetch sites from SharePoint' },
+        { status: response.status }
+      )
+    }
+
+    const data = await response.json()
+    const sites = (data.value || []).map((site: SharepointSite) => ({
+      id: site.id,
+      name: site.displayName || site.name,
+      mimeType: 'application/vnd.microsoft.graph.site',
+      webViewLink: site.webUrl,
+      createdTime: site.createdDateTime,
+      modifiedTime: site.lastModifiedDateTime,
+    }))
+
+    logger.info(`[${requestId}] Successfully fetched ${sites.length} SharePoint sites`)
+    return NextResponse.json({ files: sites }, { status: 200 })
+  } catch (error) {
+    logger.error(`[${requestId}] Error fetching sites from SharePoint`, error)
+    return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
+  }
+}
diff --git a/apps/sim/app/api/users/me/settings/route.ts b/apps/sim/app/api/users/me/settings/route.ts
index 25c20a2e64e..0cf028fd9e2 100644
--- a/apps/sim/app/api/users/me/settings/route.ts
+++ b/apps/sim/app/api/users/me/settings/route.ts
@@ -4,9 +4,6 @@ import { NextResponse } from 'next/server'
 import { z } from 'zod'
 import { getSession } from '@/lib/auth'
 import { createLogger } from '@/lib/logs/console/logger'
-
-export const dynamic = 'force-dynamic'
-
 import { db } from '@/db'
 import { settings } from '@/db/schema'
 
diff --git a/apps/sim/app/api/workflows/[id]/execute/route.test.ts b/apps/sim/app/api/workflows/[id]/execute/route.test.ts
index 237ddb4cfed..7be75d9f8ad 100644
--- a/apps/sim/app/api/workflows/[id]/execute/route.test.ts
+++ b/apps/sim/app/api/workflows/[id]/execute/route.test.ts
@@ -87,7 +87,7 @@ describe('Workflow Execution API Route', () => {
     }))
 
     vi.doMock('@/lib/workflows/db-helpers', () => ({
-      loadWorkflowFromNormalizedTables: vi.fn().mockResolvedValue({
+      loadDeployedWorkflowState: vi.fn().mockResolvedValue({
         blocks: {
           'starter-id': {
             id: 'starter-id',
@@ -121,7 +121,7 @@ describe('Workflow Execution API Route', () => {
         ],
         loops: {},
         parallels: {},
-        isFromNormalizedTables: true,
+        isFromNormalizedTables: false, // Changed to false since it's from deployed state
       }),
     }))
 
@@ -516,7 +516,7 @@ describe('Workflow Execution API Route', () => {
     }))
 
     vi.doMock('@/lib/workflows/db-helpers', () => ({
-      loadWorkflowFromNormalizedTables: vi.fn().mockResolvedValue({
+      loadDeployedWorkflowState: vi.fn().mockResolvedValue({
         blocks: {
           'starter-id': {
             id: 'starter-id',
@@ -550,7 +550,7 @@ describe('Workflow Execution API Route', () => {
         ],
         loops: {},
         parallels: {},
-        isFromNormalizedTables: true,
+        isFromNormalizedTables: false, // Changed to false since it's from deployed state
      }),
     }))
 
diff --git a/apps/sim/app/api/workflows/[id]/execute/route.ts b/apps/sim/app/api/workflows/[id]/execute/route.ts
index 893a5efa68a..927ae1f067b 100644
--- a/apps/sim/app/api/workflows/[id]/execute/route.ts
+++ b/apps/sim/app/api/workflows/[id]/execute/route.ts
@@ -9,7 +9,7 @@ import { createLogger } from '@/lib/logs/console/logger'
 import { LoggingSession } from '@/lib/logs/execution/logging-session'
 import { buildTraceSpans } from '@/lib/logs/execution/trace-spans/trace-spans'
 import { decryptSecret } from '@/lib/utils'
-import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/db-helpers'
+import { loadDeployedWorkflowState } from '@/lib/workflows/db-helpers'
 import {
   createHttpResponseFromBlock,
   updateWorkflowRunCounts,
@@ -111,20 +111,13 @@ async function executeWorkflow(workflow: any, requestId: string, input?: any): P
   runningExecutions.add(executionKey)
   logger.info(`[${requestId}] Starting workflow execution: ${workflowId}`)
 
-  // Load workflow data from normalized tables
-  logger.debug(`[${requestId}] Loading workflow ${workflowId} from normalized tables`)
-  const normalizedData = await loadWorkflowFromNormalizedTables(workflowId)
+  // Load workflow data from deployed state for API executions
+  const deployedData = await loadDeployedWorkflowState(workflowId)
 
-  if (!normalizedData) {
-    throw new Error(
-      `Workflow ${workflowId} has no normalized data available. Ensure the workflow is properly saved to normalized tables.`
-    )
-  }
-
-  // Use normalized data as primary source
-  const { blocks, edges, loops, parallels } = normalizedData
-  logger.info(`[${requestId}] Using normalized tables for workflow execution: ${workflowId}`)
-  logger.debug(`[${requestId}] Normalized data loaded:`, {
+  // Use deployed data as primary source for API executions
+  const { blocks, edges, loops, parallels } = deployedData
+  logger.info(`[${requestId}] Using deployed state for workflow execution: ${workflowId}`)
+  logger.debug(`[${requestId}] Deployed data loaded:`, {
     blocksCount: Object.keys(blocks || {}).length,
     edgesCount: (edges || []).length,
     loopsCount: Object.keys(loops || {}).length,
diff --git a/apps/sim/app/api/workspaces/route.ts b/apps/sim/app/api/workspaces/route.ts
index 55e53c2145c..ce15c76218d 100644
--- a/apps/sim/app/api/workspaces/route.ts
+++ b/apps/sim/app/api/workspaces/route.ts
@@ -3,9 +3,6 @@ import { and, desc, eq, isNull } from 'drizzle-orm'
 import { NextResponse } from 'next/server'
 import { getSession } from '@/lib/auth'
 import { createLogger } from '@/lib/logs/console/logger'
-
-export const dynamic = 'force-dynamic'
-
 import { db } from '@/db'
 import { permissions, workflow, workflowBlocks, workspace } from '@/db/schema'
 
diff --git a/apps/sim/app/api/yaml/diff/create/route.ts b/apps/sim/app/api/yaml/diff/create/route.ts
index e92792c27ea..5fb24abf51c 100644
--- a/apps/sim/app/api/yaml/diff/create/route.ts
+++ b/apps/sim/app/api/yaml/diff/create/route.ts
@@ -70,6 +70,16 @@ export async function POST(request: NextRequest) {
     // Note: This endpoint is stateless, so we need to get this from the request
     const currentWorkflowState = (body as any).currentWorkflowState
 
+    // Ensure currentWorkflowState has all required properties with proper defaults if provided
+    if (currentWorkflowState) {
+      if (!currentWorkflowState.loops) {
+        currentWorkflowState.loops = {}
+      }
+      if (!currentWorkflowState.parallels) {
+        currentWorkflowState.parallels = {}
+      }
+    }
+
     logger.info(`[${requestId}] Creating diff from YAML`, {
       contentLength: yamlContent.length,
       hasDiffAnalysis: !!diffAnalysis,
diff --git a/apps/sim/app/api/yaml/diff/merge/route.ts b/apps/sim/app/api/yaml/diff/merge/route.ts
index 8ad9b24bea8..c1ec661c3dd 100644
--- a/apps/sim/app/api/yaml/diff/merge/route.ts
+++ b/apps/sim/app/api/yaml/diff/merge/route.ts
@@ -24,8 +24,8 @@ const MergeDiffRequestSchema = z.object({
   proposedState: z.object({
     blocks: z.record(z.any()),
     edges: z.array(z.any()),
-    loops: z.record(z.any()),
-    parallels: z.record(z.any()),
+    loops: z.record(z.any()).optional(),
+    parallels: z.record(z.any()).optional(),
   }),
   diffAnalysis: z.any().optional(),
   metadata: z.object({
@@ -50,6 +50,14 @@ export async function POST(request: NextRequest) {
     const body = await request.json()
     const { existingDiff, yamlContent, diffAnalysis, options } = MergeDiffRequestSchema.parse(body)
 
+    // Ensure existingDiff.proposedState has all required properties with proper defaults
+    if (!existingDiff.proposedState.loops) {
+      existingDiff.proposedState.loops = {}
+    }
+    if (!existingDiff.proposedState.parallels) {
+      existingDiff.proposedState.parallels = {}
+    }
+
     logger.info(`[${requestId}] Merging diff from YAML`, {
       contentLength: yamlContent.length,
       existingBlockCount: Object.keys(existingDiff.proposedState.blocks).length,
diff --git a/apps/sim/app/layout.tsx b/apps/sim/app/layout.tsx
index 06e9f2d919b..07ef379bb35 100644
--- a/apps/sim/app/layout.tsx
+++ b/apps/sim/app/layout.tsx
@@ -1,8 +1,10 @@
 import { Analytics } from '@vercel/analytics/next'
 import { SpeedInsights } from '@vercel/speed-insights/next'
-import { GeistSans } from 'geist/font/sans'
 import type { Metadata, Viewport } from 'next'
 import { PublicEnvScript } from 'next-runtime-env'
+import { BrandedLayout } from '@/components/branded-layout'
+import { generateBrandedMetadata, generateStructuredData } from '@/lib/branding/metadata'
+import { env } from '@/lib/env'
 import { isHosted } from '@/lib/environment'
 import { createLogger } from '@/lib/logs/console/logger'
 import { getAssetUrl } from '@/lib/utils'
@@ -51,149 +53,20 @@ export const viewport: Viewport = {
   userScalable: false,
 }
 
-export const metadata: Metadata = {
-  title: {
-    template: '',
-    default: 'Sim',
-  },
-  description:
-    'Build and deploy AI agents using our Figma-like canvas. Build, write evals, and deploy AI agent workflows that automate workflows and streamline your business processes.',
-  applicationName: 'Sim',
-  authors: [{ name: 'Sim' }],
-  generator: 'Next.js',
-  keywords: [
-    'AI agent',
-    'AI agent builder',
-    'AI agent workflow',
-    'AI workflow automation',
-    'visual workflow editor',
-    'AI agents',
-    'workflow canvas',
-    'intelligent automation',
-    'AI tools',
-    'workflow designer',
-    'artificial intelligence',
-    'business automation',
-    'AI agent workflows',
-    'visual programming',
-  ],
-  referrer: 'origin-when-cross-origin',
-  creator: 'Sim',
-  publisher: 'Sim',
-  metadataBase: new URL('https://sim.ai'),
-  alternates: {
-    canonical: '/',
-    languages: {
-      'en-US': '/en-US',
-    },
-  },
-  robots: {
-    index: true,
-    follow: true,
-    googleBot: {
-      index: true,
-      follow: true,
-      'max-image-preview': 'large',
-      'max-video-preview': -1,
-      'max-snippet': -1,
-    },
-  },
-  openGraph: {
-    type: 'website',
-    locale: 'en_US',
-    url: 'https://sim.ai',
-    title: 'Sim',
-    description:
-      'Build and deploy AI agents using our Figma-like canvas. Build, write evals, and deploy AI agent workflows that automate workflows and streamline your business processes.',
-    siteName: 'Sim',
-    images: [
-      {
-        url: getAssetUrl('social/facebook.png'),
-        width: 1200,
-        height: 630,
-        alt: 'Sim',
-      },
-    ],
-  },
-  twitter: {
-    card: 'summary_large_image',
-    title: 'Sim',
-    description:
-      'Build and deploy AI agents using our Figma-like canvas. Build, write evals, and deploy AI agent workflows that automate workflows and streamline your business processes.',
-    images: [getAssetUrl('social/twitter.png')],
-    creator: '@simstudioai',
-    site: '@simstudioai',
-  },
-  manifest: '/favicon/site.webmanifest',
-  icons: {
-    icon: [
-      { url: '/favicon/favicon-16x16.png', sizes: '16x16', type: 'image/png' },
-      { url: '/favicon/favicon-32x32.png', sizes: '32x32', type: 'image/png' },
-      {
-        url: '/favicon/favicon-192x192.png',
-        sizes: '192x192',
-        type: 'image/png',
-      },
-      {
-        url: '/favicon/favicon-512x512.png',
-        sizes: '512x512',
-        type: 'image/png',
-      },
-      { url: '/sim.png', sizes: 'any', type: 'image/png' },
-    ],
-    apple: '/favicon/apple-touch-icon.png',
-    shortcut: '/favicon/favicon.ico',
-  },
-  appleWebApp: {
-    capable: true,
-    statusBarStyle: 'default',
-    title: 'Sim',
-  },
-  formatDetection: {
-    telephone: false,
-  },
-  category: 'technology',
-  other: {
-    'apple-mobile-web-app-capable': 'yes',
-    'mobile-web-app-capable': 'yes',
-    'msapplication-TileColor': '#ffffff',
-    'msapplication-config': '/favicon/browserconfig.xml',
-  },
-}
+// Generate dynamic metadata based on brand configuration
+export const metadata: Metadata = generateBrandedMetadata()
 
 export default function RootLayout({ children }: { children: React.ReactNode }) {
+  const structuredData = generateStructuredData()
+
   return (
-
+      {/* Structured Data for SEO */}