
Perf/codecovtrigger #152

Merged
Josephrp merged 62 commits into dev from perf/codecovtrigger
Oct 13, 2025

Conversation

@Josephrp
Collaborator

Pull Request

Description

Type of Change

  • 🐛 Bug fix (non-breaking change which fixes an issue)
  • ✨ New feature (non-breaking change which adds functionality)
  • 💥 Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • 📚 Documentation update
  • ⚡ Performance improvement
  • 🧹 Code refactoring
  • 🧪 Test addition or update
  • 🔧 Configuration change
  • 🧬 Bioinformatics enhancement
  • 🔄 Workflow improvement

Component

  • Core Workflow Engine
  • PRIME Flow (Protein Engineering)
  • Bioinformatics Flow (Data Fusion)
  • DeepSearch Flow (Web Research)
  • Challenge Flow (Experimental)
  • Tool Registry
  • Agent System
  • Configuration (Hydra)
  • Pydantic Graph
  • Documentation
  • Tests
  • Other:

Related Issues

  • Fixes #
  • Closes #
  • Related to #

Changes Made

Testing

  • I have tested these changes locally
  • I have added/updated tests for my changes
  • All existing tests pass
  • I have tested with different configurations
  • I have tested with different flows (PRIME, Bioinformatics, DeepSearch, etc.)

Test Configuration

# Example test command
uv run deepresearch question="..." app_mode=single_react

Configuration Changes

  • No configuration changes
  • Added new configuration options
  • Modified existing configuration
  • Removed configuration options

Configuration Details

# Example configuration changes
flows:
  new_flow:
    enabled: true
    params:
      new_param: "value"

Documentation

  • No documentation changes needed
  • Updated README
  • Updated API documentation
  • Updated configuration documentation
  • Added code comments
  • Updated examples

Performance Impact

  • No performance impact
  • Performance improvement
  • Performance regression (explain below)

Performance Details

  • Execution time:
  • Memory usage:
  • Other metrics:

Breaking Changes

  • No breaking changes
  • Breaking change (describe below)

Migration Guide

Checklist

  • My code follows the project's style guidelines
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published

Additional Notes

Screenshots/Output

Before

After

Reviewer Notes

MarioAderman and others added 30 commits October 6, 2025 11:06
Removes all references to the non-existent @defer decorator from the codebase.
The @defer decorator never existed in Pydantic AI. Tools are correctly
implemented using standard Pydantic AI patterns.

Changes:
- Removed 16 @defer comments from tool files
- Updated README Known Issues section
- All tools continue to work correctly (no functional changes)

Fixes #2
- Implement VLLMModel wrapper around existing VLLMClient
- Add OpenAICompatibleModel for vLLM, llama.cpp, TGI servers
- Provide factory methods (from_vllm, from_llamacpp, from_tgi)
- Include streaming support and message conversion
- Add convenience aliases for VLLMModel and LlamaCppModel
- Replace non-existent OpenAIProvider with OllamaProvider from pydantic_ai
- Remove dataclass decorator to properly inherit from OpenAIChatModel
- Fix factory methods to pass model_name as positional argument
- Add comprehensive test suite with 8 passing tests
- Skip integration tests that require actual vLLM servers
- Add from_config() method to support Hydra DictConfig
- Update all factory methods (from_vllm, from_llamacpp, from_tgi, from_custom) to accept optional config
- Support config override via direct parameters
- Extract generation settings from config (temperature, max_tokens, etc.)
- Add environment variable fallbacks (LLM_BASE_URL, LLM_API_KEY)
- Create config files for llamacpp, tgi, and vllm
- Update tests to cover both config-based and direct parameter approaches
- All 10 tests passing
- Add LLMModelConfig and GenerationConfig datatypes
- Remove redundant vllm_model.py
- Update openai_compatible_model.py with validation
- Rewrite tests to use actual config files (30 tests)
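
The factory-and-config pattern these commits describe could be sketched roughly as follows. This is a minimal illustration: the class and method names (`OpenAICompatibleModel`, `GenerationConfig`, `from_config`, `from_vllm`) and the env vars (`LLM_BASE_URL`, `LLM_API_KEY`) mirror the commit messages, but the bodies, defaults, and the example model name are hypothetical, not the project's actual implementation.

```python
import os
from dataclasses import dataclass

@dataclass
class GenerationConfig:
    temperature: float = 0.7
    max_tokens: int = 1024

@dataclass
class OpenAICompatibleModel:
    model_name: str
    base_url: str
    api_key: str
    generation: GenerationConfig

    @classmethod
    def from_config(cls, config: dict) -> "OpenAICompatibleModel":
        # Resolution order per the commits: explicit config values first,
        # then environment variable fallbacks, then a local-server default.
        base_url = config.get("base_url") or os.getenv(
            "LLM_BASE_URL", "http://localhost:8000/v1"
        )
        api_key = config.get("api_key") or os.getenv("LLM_API_KEY", "EMPTY")
        # Extract generation settings (temperature, max_tokens, ...) from config.
        gen = GenerationConfig(
            temperature=config.get("temperature", 0.7),
            max_tokens=config.get("max_tokens", 1024),
        )
        return cls(config["model_name"], base_url, api_key, gen)

    @classmethod
    def from_vllm(cls, model_name: str, **overrides) -> "OpenAICompatibleModel":
        # Factory method passing model_name positionally, with optional
        # config overrides via direct parameters.
        return cls.from_config({"model_name": model_name, **overrides})

# Hypothetical usage; the model name is only an example.
model = OpenAICompatibleModel.from_vllm("meta-llama/Llama-3-8B", temperature=0.2)
```

The same `from_config` path would accept a Hydra `DictConfig` converted to a plain dict, which is presumably how the llamacpp/tgi/vllm config files feed into the factories.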
Integrated latest upstream changes while preserving Issue #10 LLM client work.

Conflict resolutions:
- DeepResearch/src/datatypes/__init__.py: Added LLM model types to __all__ without duplicates
- Accepted upstream versions for bioinformatics_tools.py, deep_agent_tools.py, README.md
…in permissions

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Signed-off-by: Tonic <joseph@legml.ai>
Signed-off-by: Tonic <joseph.pollack@emle.eu>
Josephrp and others added 29 commits October 13, 2025 01:52
Signed-off-by: Tonic <joseph.pollack@emle.eu>
Josephrp merged commit ca3565a into dev Oct 13, 2025
5 of 6 checks passed