plugins/google: http_options factory #4627
Conversation
📝 Walkthrough: Added per-attempt tracking to LLMStream via a private attribute.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@tests/test_google_llm_http_options.py`:
- Around line 5-6: Remove the unused AsyncMock import to satisfy ruff F401: edit
the import line in tests/test_google_llm_http_options.py so it only imports the
used symbols (e.g., replace "from unittest.mock import AsyncMock, MagicMock,
patch" with "from unittest.mock import MagicMock, patch"), ensuring AsyncMock is
no longer referenced in the file.
- Around lines 53-55: The test currently assigns a lambda to factory, which triggers ruff E731; replace the lambda with a regular function named factory that accepts attempt and returns types.HttpOptions(timeout=4000 + attempt * 1000), and update any references to the old factory variable so the behavior (and the use of types.HttpOptions) is preserved. A minimal sketch follows this list.
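A minimal sketch of that replacement, matching the behavior described above:

```python
from google.genai import types


def factory(attempt: int) -> types.HttpOptions:
    # same behavior as the original lambda: the timeout grows with each attempt
    return types.HttpOptions(timeout=4000 + attempt * 1000)
```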
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
livekit-plugins/livekit-plugins-google/livekit/plugins/google/llm.py
tests/test_google_llm_http_options.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Format code with ruff
Run ruff linter and auto-fix issues
Run mypy type checker in strict mode
Maintain line length of 100 characters maximum
Ensure Python 3.9+ compatibility
Use Google-style docstrings
Files:
livekit-plugins/livekit-plugins-google/livekit/plugins/google/llm.py
tests/test_google_llm_http_options.py
🧬 Code graph analysis (2)
livekit-plugins/livekit-plugins-google/livekit/plugins/google/llm.py (1)
livekit-agents/livekit/agents/utils/misc.py (1)
is_given(25-26)
tests/test_google_llm_http_options.py (4)
livekit-agents/livekit/agents/_exceptions.py (1)
APIStatusError(45-81)
livekit-agents/livekit/agents/llm/chat_context.py (2)
ChatContext(218-656)
add_message(234-267)
livekit-agents/livekit/agents/types.py (1)
APIConnectOptions(54-88)
tests/fake_llm.py (3)
FakeLLM(45-67)
FakeLLMResponse(28-42)
FakeLLMStream(70-136)
🪛 GitHub Actions: CI
tests/test_google_llm_http_options.py
[error] 6-6: F401 'AsyncMock' imported but unused. (unittest.mock.AsyncMock)
🪛 GitHub Check: ruff
tests/test_google_llm_http_options.py
[failure] 53-55: Ruff (E731)
tests/test_google_llm_http_options.py:53:9: E731 Do not assign a lambda expression, use a def
[failure] 6-6: Ruff (F401)
tests/test_google_llm_http_options.py:6:27: F401 unittest.mock.AsyncMock imported but unused
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: unit-tests
🔇 Additional comments (7)
livekit-plugins/livekit-plugins-google/livekit/plugins/google/llm.py (2)
21-21: Nice API flexibility for per-attempt http_options.
Type updates and docstrings make the factory option clear and discoverable.
Also applies to: 80-80, 116-118, 148-148
431-438: Per-attempt resolution + fallback looks solid.
Default timeout fallback and header injection are straightforward and consistent.
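As a rough illustration of the pattern being reviewed here (a sketch with made-up names, not the PR's actual code):

```python
# Hypothetical helper showing per-attempt resolution with a default-timeout
# fallback; the helper name and default value are illustrative only.
from typing import Callable, Optional, Union

from google.genai import types

HttpOptionsOrFactory = Union[types.HttpOptions, Callable[[int], types.HttpOptions]]
_DEFAULT_TIMEOUT_MS = 60_000  # illustrative default, not the plugin's actual value


def resolve_http_options(
    http_options: Optional[HttpOptionsOrFactory], attempt: int
) -> types.HttpOptions:
    # a callable receives the 1-indexed attempt number; a static value is used as-is
    opts = http_options(attempt) if callable(http_options) else http_options
    if opts is None:
        return types.HttpOptions(timeout=_DEFAULT_TIMEOUT_MS)
    if opts.timeout is None:
        # keep the caller's headers but fall back to the default timeout
        return types.HttpOptions(timeout=_DEFAULT_TIMEOUT_MS, headers=opts.headers)
    return opts
```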
tests/test_google_llm_http_options.py (5)
20-49: Good coverage for static vs callable http_options.
The polymorphism tests clearly validate static, callable, and 1-indexed attempt behavior.
Also applies to: 57-69
72-126: Factory behavior tests are clear and targeted.
Attempt numbering, header variance, and timeout growth are well exercised.
128-175: Mock stream helpers are clean and reusable.
The simulated response/429 flow reads clearly and supports the integration tests well.
178-332: Integration coverage for retry + header propagation looks strong.
Static and callable paths are both exercised with good assertions on headers and attempts.
334-381: Attempt-number assertions add good safety.
These checks ensure the retry tracking contract holds across base and subclass streams.
I now realize that a FallbackAdapter can largely handle this use case. I'll leave this open in case the maintainers feel there's a use for it, but feel free to close as well.
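For illustration, a rough sketch of that alternative (assuming FallbackAdapter accepts a list of LLM instances to try in order, and that google.LLM accepts a static http_options; the model name and header value are placeholders):

```python
# Hypothetical sketch: approximate "priority only on retries" with FallbackAdapter.
# The http_options parameter, header value, and model name are illustrative
# assumptions based on the use case described in this PR, not a verified API.
from google.genai import types
from livekit.agents.llm import FallbackAdapter
from livekit.plugins import google

standard_llm = google.LLM(model="gemini-2.0-flash")
priority_llm = google.LLM(
    model="gemini-2.0-flash",
    http_options=types.HttpOptions(
        headers={"X-Vertex-AI-LLM-Request-Type": "priority"},  # ~1.8x cost, fewer 429s
    ),
)

# try the standard-cost instance first; fall back to the priority instance on failure
llm = FallbackAdapter([standard_llm, priority_llm])
```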
VertexAI supports some request options which users may want to enable on retried requests, but not on every request. For example, the X-Vertex-AI-LLM-Request-Type header can be set to a value of "priority" to decrease the odds of receiving a 429. However, this option incurs 1.8x the standard cost, so it would be beneficial to enable it only on retries. I'm sure there are other scenarios as well, but this is the one my team is currently facing.
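With the factory form this PR proposes, that policy could be expressed per attempt. A minimal sketch, assuming http_options accepts a callable that receives the 1-indexed attempt number (as the tests describe); the model name and timeout are placeholders:

```python
# Sketch of the per-attempt http_options factory proposed in this PR; the callable
# form of http_options is the feature under review, and the 1-indexed attempt
# number follows the behavior described in the tests. Values are illustrative.
from google.genai import types
from livekit.plugins import google


def http_options_for_attempt(attempt: int) -> types.HttpOptions:
    headers: dict[str, str] = {}
    if attempt > 1:
        # only pay the ~1.8x "priority" premium on retries
        headers["X-Vertex-AI-LLM-Request-Type"] = "priority"
    return types.HttpOptions(timeout=10_000, headers=headers)


llm = google.LLM(model="gemini-2.0-flash", http_options=http_options_for_attempt)
```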