fix: fix colpali preprocessing, add examples to readme #487
📝 Walkthrough

This pull request introduces the ability to extend dense text embeddings with custom models and support for late interaction multimodal tasks.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant U as User
    participant TE as TextEmbedding
    participant LI as LateInteractionMultimodalEmbedding
    participant ONNX as OnnxMultimodalModel
    U->>TE: Call add_custom_model(custom_model, pooling, normalization, sources, dim, model_file)
    U->>LI: Request multimodal embedding (text & image)
    LI->>TE: Process text embedding
    LI->>ONNX: Prepare image input using pixel_values
    ONNX->>ONNX: Assert tokenizer and create onnx_input
    LI->>U: Return combined embedding result
```
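The "late interaction" retrieval style these multimodal models use can be illustrated with a minimal MaxSim scorer: each query token is matched against its best document token, and the per-token maxima are summed. This is a hedged, self-contained sketch of the concept only, not the fastembed API; all names and shapes below are illustrative assumptions.

```python
# Toy MaxSim (late interaction) scoring over token-level embeddings.
# NOT the fastembed API; names and shapes are illustrative only.
import numpy as np

def maxsim_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """Sum over query tokens of the max similarity against any document
    token. Embeddings are assumed L2-normalized, so entries are cosines."""
    sim = query_emb @ doc_emb.T  # (num_query_tokens, num_doc_tokens)
    return float(sim.max(axis=1).sum())

rng = np.random.default_rng(0)
# Toy token embeddings: 4 query tokens, 16 document "patch" tokens, dim 8
q = rng.normal(size=(4, 8))
d = rng.normal(size=(16, 8))
q /= np.linalg.norm(q, axis=1, keepdims=True)
d /= np.linalg.norm(d, axis=1, keepdims=True)
print(maxsim_score(q, d))
```

A document scored against itself attains the maximum possible score (one per query token), which is a quick sanity check for an implementation like this.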
Actionable comments posted: 0
🧹 Nitpick comments (1)
README.md (1)
157-176: Consider clarifying example image paths.

The example effectively demonstrates the usage of late interaction multimodal models. However, the image paths used in the example (`./path/to/qdrant_pdf_doc_1_screenshot.jpg`) might be confusing.

Consider using more realistic example paths or adding a comment to clarify that these are placeholder paths:

```diff
 doc_images = [
-    "./path/to/qdrant_pdf_doc_1_screenshot.jpg",
-    "./path/to/colpali_pdf_doc_2_screenshot.jpg",
+    # Replace with actual image paths
+    "./examples/qdrant_doc.jpg",
+    "./examples/colpali_doc.jpg",
 ]
```
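Whichever paths a README example settles on, a caller can fail fast when placeholder paths are copy-pasted unchanged. A small hypothetical guard (the helper name and paths are illustrative, not part of fastembed):

```python
# Hypothetical guard: verify image paths exist before handing them to a
# model, so a copy-pasted placeholder fails loudly instead of silently.
from pathlib import Path

def existing_images(paths: list[str]) -> list[str]:
    """Return the paths unchanged if every one points to a real file;
    otherwise raise with the list of missing entries."""
    missing = [p for p in paths if not Path(p).is_file()]
    if missing:
        raise FileNotFoundError(f"Replace placeholder paths: {missing}")
    return paths

doc_images = [
    "./path/to/qdrant_pdf_doc_1_screenshot.jpg",
    "./path/to/colpali_pdf_doc_2_screenshot.jpg",
]
try:
    existing_images(doc_images)
except FileNotFoundError as e:
    print(e)
```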
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- README.md (2 hunks)
- fastembed/late_interaction_multimodal/colpali.py (1 hunks)
- fastembed/late_interaction_multimodal/onnx_multimodal_model.py (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (16)
- GitHub Check: Python 3.13.x on windows-latest test
- GitHub Check: Python 3.13.x on macos-latest test
- GitHub Check: Python 3.13.x on ubuntu-latest test
- GitHub Check: Python 3.12.x on windows-latest test
- GitHub Check: Python 3.12.x on macos-latest test
- GitHub Check: Python 3.12.x on ubuntu-latest test
- GitHub Check: Python 3.11.x on windows-latest test
- GitHub Check: Python 3.11.x on macos-latest test
- GitHub Check: Python 3.11.x on ubuntu-latest test
- GitHub Check: Python 3.10.x on windows-latest test
- GitHub Check: Python 3.10.x on macos-latest test
- GitHub Check: Python 3.10.x on ubuntu-latest test
- GitHub Check: Python 3.9.x on windows-latest test
- GitHub Check: Python 3.9.x on macos-latest test
- GitHub Check: Python 3.9.x on ubuntu-latest test
- GitHub Check: Python 3.13 test
🔇 Additional comments (4)
fastembed/late_interaction_multimodal/colpali.py (1)
200-205: LGTM! Fixed placeholder generation based on pixel values.

The changes correctly modify the placeholder generation to use `pixel_values` instead of `input_ids`, which aligns with the expected input structure for image processing.

fastembed/late_interaction_multimodal/onnx_multimodal_model.py (2)
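The fix described above — sizing the placeholder text inputs from `pixel_values` rather than from tokenized text — can be sketched generically. Everything here (function name, the sequence length, the image shape) is a hypothetical illustration, not the actual colpali.py code:

```python
# Hypothetical sketch: on the image path there is no real text to
# tokenize, so dummy input_ids/attention_mask are sized from the image
# batch (pixel_values), not from input_ids. seq_len is an assumption.
import numpy as np

def build_image_onnx_input(pixel_values: np.ndarray, seq_len: int = 1030) -> dict:
    """Create placeholder text inputs whose batch size matches the images."""
    batch = pixel_values.shape[0]  # number of images in the batch
    return {
        "pixel_values": pixel_values,
        "input_ids": np.zeros((batch, seq_len), dtype=np.int64),
        "attention_mask": np.ones((batch, seq_len), dtype=np.int64),
    }

# Two 3-channel 448x448 images (shape chosen for illustration)
imgs = np.zeros((2, 3, 448, 448), dtype=np.float32)
onnx_input = build_image_onnx_input(imgs)
print(onnx_input["input_ids"].shape)  # (2, 1030)
```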
77-77: LGTM! Added early validation for tokenizer initialization.

The assertion ensures that the tokenizer is properly initialized before it is used, preventing errors from an uninitialized (`None`) tokenizer later on.
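The early-assertion pattern praised in the comment above can be sketched generically — assert that a lazily initialized component exists before first use, so failure is immediate and clearly worded. The class and methods below are hypothetical, not fastembed internals:

```python
# Fail-fast pattern for lazily initialized components (illustrative only).
from typing import Optional

class ModelWrapper:
    def __init__(self) -> None:
        self.tokenizer: Optional[object] = None  # populated by load()

    def load(self) -> None:
        self.tokenizer = object()  # stand-in for real tokenizer setup

    def encode(self, text: str) -> list[int]:
        # Clear assertion here instead of an obscure AttributeError deeper in
        assert self.tokenizer is not None, "tokenizer must be initialized"
        return [ord(c) for c in text]  # stand-in for real tokenization

m = ModelWrapper()
m.load()
print(m.encode("hi"))  # [104, 105]
```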
170-170: LGTM! Simplified image input handling.

The code now directly uses `pixel_values` as the key for encoded image data, making the input structure clearer and more consistent.

README.md (1)
66-82: LGTM! Clear example for extending dense text embeddings.

The example clearly demonstrates how to add custom models with specific parameters.
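The `pooling` and `normalization` options that a custom dense model registration typically involves can be illustrated with a minimal sketch: mean-pool token embeddings under the attention mask, then L2-normalize. This mirrors the concept only, under stated assumptions about shapes; it is not the exact fastembed implementation:

```python
# Mean pooling + L2 normalization, the usual post-processing for custom
# dense text models (conceptual sketch, not fastembed's code).
import numpy as np

def mean_pool_normalize(token_embs: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """token_embs: (batch, seq, dim); mask: (batch, seq) of 0/1 ints."""
    m = mask[:, :, None].astype(token_embs.dtype)
    summed = (token_embs * m).sum(axis=1)
    counts = np.clip(m.sum(axis=1), 1e-9, None)
    pooled = summed / counts  # mean over non-padding tokens only
    norms = np.linalg.norm(pooled, axis=1, keepdims=True)
    return pooled / np.clip(norms, 1e-9, None)  # unit-length vectors

# One sequence of 3 tokens where the last token is padding (mask = 0)
embs = np.array([[[1.0, 0.0], [3.0, 0.0], [9.0, 9.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool_normalize(embs, mask))  # [[1.0, 0.0]]
```

Masking before pooling matters: without it, padding tokens (the `[9.0, 9.0]` row above) would pull the sentence vector toward arbitrary values.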
No description provided.