
VER-299: Fix stage 1: Use pinned gemini version instead of gemini-flash-latest#60

Merged
quancao-ea merged 1 commit into main from fix/gemini-model-name
Feb 12, 2026

Conversation

@quancao-ea
Collaborator

@quancao-ea quancao-ea commented Feb 12, 2026

Important

Replace GeminiModel.GEMINI_FLASH_LATEST with GeminiModel.GEMINI_2_5_FLASH_PREVIEW_09_2025 across stage 1 and 3 processing functions.

  • Behavior:
    • Replace GeminiModel.GEMINI_FLASH_LATEST with GeminiModel.GEMINI_2_5_FLASH_PREVIEW_09_2025 in regenerate_timestamped_transcript() and process_audio_file() in flows.py.
    • Update initial_transcription_with_gemini(), initial_disinformation_detection_with_gemini(), transcribe_audio_file_with_timestamp_with_gemini(), and disinformation_detection_with_gemini() in tasks.py to use GeminiModel.GEMINI_2_5_FLASH_PREVIEW_09_2025.
    • Modify __structure_with_schema() in executors.py to use GeminiModel.GEMINI_2_5_FLASH_PREVIEW_09_2025.
  • Constants:
    • Add GEMINI_2_5_FLASH_PREVIEW_09_2025 to GeminiModel in constants.py.
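The constants change above can be sketched as follows. This is a hypothetical reconstruction of the `GeminiModel` enum in `constants.py` (the real enum likely has more members); the member value comes from the CodeRabbit summary below.

```python
from enum import Enum


class GeminiModel(str, Enum):
    # Existing alias that tracks whatever Google currently serves as "latest".
    GEMINI_FLASH_LATEST = "gemini-flash-latest"
    # New pinned member added by this PR, so behavior stays stable across
    # upstream model rollouts.
    GEMINI_2_5_FLASH_PREVIEW_09_2025 = "gemini-2.5-flash-preview-09-2025"
```

Because the enum subclasses `str`, a member can be passed anywhere a plain model-name string is expected.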

This description was created by Ellipsis for 3006a89. You can customize this summary. It will automatically update as commits are pushed.

Summary by CodeRabbit

  • Chores
    • Updated AI model versions across audio transcription and content analysis processing stages to utilize the latest Gemini model variant.

Updates stage 1 and stage 3 to use the specific Gemini 2.5
Flash preview model instead of the latest flash alias for
consistent model behavior across the pipeline.
@linear

linear Bot commented Feb 12, 2026

Contributor

@ellipsis-dev ellipsis-dev Bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to 3006a89 in 7 seconds.
  • Reviewed 122 lines of code in 5 files
  • Skipped 0 files when reviewing.
  • Skipped posting 0 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.

Workflow ID: wflow_z9kQMoI0EMbSlt57

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

@gemini-code-assist
Contributor

Summary of Changes

Hello @quancao-ea, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the stability and predictability of the processing pipeline by replacing the dynamic gemini-flash-latest model alias with a specific, pinned version, gemini-2.5-flash-preview-09-2025. This change ensures that all transcription, detection, and analysis tasks consistently utilize a known and stable model, mitigating potential issues arising from automatic updates to the 'latest' alias.

Highlights

  • New Gemini Model Version Added: A new specific Gemini model identifier, GEMINI_2_5_FLASH_PREVIEW_09_2025, has been added to the GeminiModel enumeration.
  • Pinned Gemini Model Usage: All instances previously using the GeminiModel.GEMINI_FLASH_LATEST alias have been updated to explicitly use the newly pinned GeminiModel.GEMINI_2_5_FLASH_PREVIEW_09_2025 across the processing pipeline.
Changelog
  • src/processing_pipeline/constants.py
    • Added GEMINI_2_5_FLASH_PREVIEW_09_2025 to the GeminiModel enum.
  • src/processing_pipeline/stage_1/flows.py
    • Updated the transcriptor model from GEMINI_FLASH_LATEST to GEMINI_2_5_FLASH_PREVIEW_09_2025 in regenerate_timestamped_transcript.
    • Updated the model_name for detection results from GEMINI_FLASH_LATEST to GEMINI_2_5_FLASH_PREVIEW_09_2025 in regenerate_timestamped_transcript.
  • src/processing_pipeline/stage_1/tasks.py
    • Changed the model_name for initial transcription from GEMINI_FLASH_LATEST to GEMINI_2_5_FLASH_PREVIEW_09_2025 in initial_transcription_with_gemini.
    • Updated the model_name for initial disinformation detection from GEMINI_FLASH_LATEST to GEMINI_2_5_FLASH_PREVIEW_09_2025 in initial_disinformation_detection_with_gemini.
    • Modified the default model_name for timestamped transcription from GEMINI_FLASH_LATEST to GEMINI_2_5_FLASH_PREVIEW_09_2025 in transcribe_audio_file_with_timestamp_with_gemini.
    • Updated the default model_name for disinformation detection from GEMINI_FLASH_LATEST to GEMINI_2_5_FLASH_PREVIEW_09_2025 in disinformation_detection_with_gemini.
    • Changed the transcriptor model from GEMINI_FLASH_LATEST to GEMINI_2_5_FLASH_PREVIEW_09_2025 in process_audio_file.
    • Updated the model_name for main detection results from GEMINI_FLASH_LATEST to GEMINI_2_5_FLASH_PREVIEW_09_2025 in process_audio_file.
  • src/processing_pipeline/stage_3/executors.py
    • Updated the model used for structuring analysis text from GEMINI_FLASH_LATEST to GEMINI_2_5_FLASH_PREVIEW_09_2025 in __structure_with_schema.
  • src/processing_pipeline/stage_3/tasks.py
    • Changed the fallback_model from GEMINI_FLASH_LATEST to GEMINI_2_5_FLASH_PREVIEW_09_2025 in analyze_snippet.
Activity
  • No specific activity (comments, reviews, etc.) has been recorded for this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@coderabbitai

coderabbitai Bot commented Feb 12, 2026

Walkthrough

This PR adds a new Gemini model variant (GEMINI_2_5_FLASH_PREVIEW_09_2025) to the enum and replaces all references from GEMINI_FLASH_LATEST to this pinned model version across stage 1 and stage 3 processing modules.

Changes

Cohort / File(s) Summary
Model Definition
src/processing_pipeline/constants.py
Added new enum member GEMINI_2_5_FLASH_PREVIEW_09_2025 to GeminiModel with value "gemini-2.5-flash-preview-09-2025".
Stage 1 Pipeline
src/processing_pipeline/stage_1/flows.py, src/processing_pipeline/stage_1/tasks.py
Updated model references from GEMINI_FLASH_LATEST to GEMINI_2_5_FLASH_PREVIEW_09_2025 in timestamped transcription and disinformation detection flows. Updated default model_name parameter in two function signatures accordingly.
Stage 3 Pipeline
src/processing_pipeline/stage_3/executors.py, src/processing_pipeline/stage_3/tasks.py
Updated model references from GEMINI_FLASH_LATEST to GEMINI_2_5_FLASH_PREVIEW_09_2025 in schema-structure step and snippet analysis fallback logic.
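The "default model_name parameter" updates mentioned above amount to changing a keyword default. The sketch below is illustrative, not the repository's exact code: the function body is a stand-in, and only the signature pattern reflects the PR.

```python
# Hypothetical sketch of a stage-1 task whose default model_name was repointed
# from the "latest" alias to the pinned preview model. The real function calls
# the Gemini API; here the call is elided and the inputs are echoed back.
def disinformation_detection_with_gemini(
    transcription: str,
    model_name: str = "gemini-2.5-flash-preview-09-2025",  # was "gemini-flash-latest"
) -> dict:
    # Stand-in for the actual Gemini request/response handling.
    return {"model_name": model_name, "transcription": transcription}
```

Callers that previously relied on the default silently picked up `gemini-flash-latest` updates; after this change they always get the pinned version unless they pass `model_name` explicitly.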

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested reviewers

  • nhphong

Poem

🐰 A shiny new model joins the crew,
Gemini 2.5 preview, pinned and true,
Stage 1 and 3 all unified,
No more "latest," we standardized!
The pipeline hops with joy anew! 🎉

🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly and accurately summarizes the main change: replacing gemini-flash-latest with a pinned Gemini version throughout stage 1 (and stage 3) to ensure consistent model behavior.
Linked Issues check ✅ Passed The PR successfully addresses VER-299 by replacing GEMINI_FLASH_LATEST with the pinned GEMINI_2_5_FLASH_PREVIEW_09_2025 model across stages 1 and 3, achieving consistent model behavior as required.
Out of Scope Changes check ✅ Passed All changes are directly related to the objective of replacing gemini-flash-latest with pinned gemini-2.5-flash-preview-09-2025. No unrelated modifications detected across the modified files.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 Pylint (4.0.4)
src/processing_pipeline/constants.py

************* Module .pylintrc
.pylintrc:1:0: F0011: error while parsing the configuration: File contains no section headers.
file: '.pylintrc', line: 1
'disable=C0116\n' (config-parse-error)
[
{
"type": "convention",
"module": "src.processing_pipeline.constants",
"obj": "",
"line": 97,
"column": 0,
"endLine": null,
"endColumn": null,
"path": "src/processing_pipeline/constants.py",
"symbol": "line-too-long",
"message": "Line too long (101/100)",
"message-id": "C0301"
},
{
"type": "convention",
"module": "src.processing_pipeline.constants",
"obj": "",
"line": 101,
"column": 0,
"endLine": null,
"endColumn": null,
"path": "src/processing_pipeline/constants.py",
"symbol": "line-too-long",
"message": "Line too long (115/100)",
"message-id": "C0301"
},
{
"type": "convention",
"module

... [truncated 7210 characters] ...

ini_timestamped_transcription_generation_prompt",
"line": 67,
"column": 0,
"endLine": 67,
"endColumn": 58,
"path": "src/processing_pipeline/constants.py",
"symbol": "missing-function-docstring",
"message": "Missing function or method docstring",
"message-id": "C0116"
},
{
"type": "warning",
"module": "src.processing_pipeline.constants",
"obj": "get_gemini_timestamped_transcription_generation_prompt",
"line": 68,
"column": 11,
"endLine": 68,
"endColumn": 85,
"path": "src/processing_pipeline/constants.py",
"symbol": "unspecified-encoding",
"message": "Using open without explicitly specifying an encoding",
"message-id": "W1514"
}
]

src/processing_pipeline/stage_1/flows.py

************* Module .pylintrc
.pylintrc:1:0: F0011: error while parsing the configuration: File contains no section headers.
file: '.pylintrc', line: 1
'disable=C0116\n' (config-parse-error)
[
{
"type": "convention",
"module": "src.processing_pipeline.stage_1.flows",
"obj": "",
"line": 41,
"column": 0,
"endLine": null,
"endColumn": null,
"path": "src/processing_pipeline/stage_1/flows.py",
"symbol": "line-too-long",
"message": "Line too long (116/100)",
"message-id": "C0301"
},
{
"type": "convention",
"module": "src.processing_pipeline.stage_1.flows",
"obj": "",
"line": 64,
"column": 0,
"endLine": null,
"endColumn": null,
"path": "src/processing_pipeline/stage_1/flows.py",
"symbol": "line-too-long",
"message": "Line too long (116/100)",
"message-id": "C0301"
},
{
"type": "convention",

... [truncated 13107 characters] ...

module": "src.processing_pipeline.stage_1.flows",
"obj": "regenerate_timestamped_transcript",
"line": 193,
"column": 0,
"endLine": 193,
"endColumn": 37,
"path": "src/processing_pipeline/stage_1/flows.py",
"symbol": "too-many-locals",
"message": "Too many local variables (16/15)",
"message-id": "R0914"
},
{
"type": "warning",
"module": "src.processing_pipeline.stage_1.flows",
"obj": "regenerate_timestamped_transcript",
"line": 216,
"column": 8,
"endLine": 216,
"endColumn": 10,
"path": "src/processing_pipeline/stage_1/flows.py",
"symbol": "redefined-builtin",
"message": "Redefining built-in 'id'",
"message-id": "W0622"
}
]

src/processing_pipeline/stage_3/tasks.py

************* Module .pylintrc
.pylintrc:1:0: F0011: error while parsing the configuration: File contains no section headers.
file: '.pylintrc', line: 1
'disable=C0116\n' (config-parse-error)
[
{
"type": "convention",
"module": "src.processing_pipeline.stage_3.tasks",
"obj": "",
"line": 21,
"column": 0,
"endLine": null,
"endColumn": null,
"path": "src/processing_pipeline/stage_3/tasks.py",
"symbol": "line-too-long",
"message": "Line too long (180/100)",
"message-id": "C0301"
},
{
"type": "convention",
"module": "src.processing_pipeline.stage_3.tasks",
"obj": "",
"line": 105,
"column": 0,
"endLine": null,
"endColumn": null,
"path": "src/processing_pipeline/stage_3/tasks.py",
"symbol": "line-too-long",
"message": "Line too long (119/100)",
"message-id": "C0301"
},
{
"type": "convention",

... [truncated 9644 characters] ...

ule": "src.processing_pipeline.stage_3.tasks",
"obj": "process_snippet",
"line": 185,
"column": 0,
"endLine": 185,
"endColumn": 19,
"path": "src/processing_pipeline/stage_3/tasks.py",
"symbol": "too-many-positional-arguments",
"message": "Too many positional arguments (6/5)",
"message-id": "R0917"
},
{
"type": "warning",
"module": "src.processing_pipeline.stage_3.tasks",
"obj": "process_snippet",
"line": 219,
"column": 11,
"endLine": 219,
"endColumn": 20,
"path": "src/processing_pipeline/stage_3/tasks.py",
"symbol": "broad-exception-caught",
"message": "Catching too general exception Exception",
"message-id": "W0718"
}
]

  • 2 others
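The repeated F0011 errors in the tool output above mean Pylint could not parse the repository's `.pylintrc`: the file starts with a bare `disable=C0116` line, but rcfiles are INI-style and every option must sit under a section header. A minimal fix, assuming disabling C0116 is the file's only intent, would be:

```ini
# .pylintrc — options must live under a section header, otherwise Pylint
# raises F0011 ("File contains no section headers") and ignores the config.
[MESSAGES CONTROL]
disable=C0116
```

With a valid section header in place, the missing-function-docstring warnings (C0116) reported above would also be suppressed as originally intended.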

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/processing_pipeline/stage_1/flows.py (1)

176-176: ⚠️ Potential issue | 🟡 Minor

Stale log message: "Gemini Flash Latest" no longer reflects the actual model used.

Line 176 still prints "Processing the timestamped transcription with Gemini Flash Latest", but disinformation_detection_with_gemini now defaults to GEMINI_2_5_FLASH_PREVIEW_09_2025. The function itself already logs the correct model name on entry (line 196 of tasks.py), so this hardcoded string is misleading.

Suggested fix
-                print("Processing the timestamped transcription with Gemini Flash Latest")
+                print("Processing the timestamped transcription with Gemini")

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request successfully replaces the use of gemini-flash-latest with a pinned version, gemini-2.5-flash-preview-09-2025, which is an excellent practice for ensuring stability and reproducibility. The changes are applied consistently across the relevant files. I have added a couple of suggestions to improve maintainability by reusing a variable for the model name within the same function, which will make future updates easier.

 metadata=metadata,
 prompt_version=detection_prompt_version,
-model_name=GeminiModel.GEMINI_FLASH_LATEST,
+model_name=GeminiModel.GEMINI_2_5_FLASH_PREVIEW_09_2025,
Contributor


medium

To improve maintainability and avoid repeating the model name, consider reusing the transcriptor variable defined on line 233. This ensures that if the model needs to be changed in the future, it only needs to be updated in one place within this block.

Suggested change
-model_name=GeminiModel.GEMINI_2_5_FLASH_PREVIEW_09_2025,
+model_name=transcriptor,

 metadata=metadata,
 prompt_version=detection_prompt_version,
-model_name=GeminiModel.GEMINI_FLASH_LATEST,
+model_name=GeminiModel.GEMINI_2_5_FLASH_PREVIEW_09_2025,
Contributor


medium

To improve maintainability and avoid repeating the model name, consider reusing the transcriptor variable defined on line 296. This ensures that if the model needs to be changed in the future, it only needs to be updated in one place within this block.

Suggested change
-model_name=GeminiModel.GEMINI_2_5_FLASH_PREVIEW_09_2025,
+model_name=transcriptor,

@quancao-ea quancao-ea merged commit 757a589 into main Feb 12, 2026
2 checks passed
@quancao-ea quancao-ea deleted the fix/gemini-model-name branch February 26, 2026 03:13
