chore(deps): update dependency llama-index to ^0.13.0 [security]#395
Open
renovate[bot] wants to merge 1 commit into main from
This PR contains the following updates:
llama-index: `^0.12.0` → `^0.13.0`

GitHub Vulnerability Alerts
CVE-2025-6211
A vulnerability in the DocugamiReader class of the run-llama/llama_index repository, up to but excluding version 0.12.41, involves the use of MD5 hashing to generate IDs for document chunks. This approach leads to hash collisions when structurally distinct chunks contain identical text, resulting in one chunk overwriting another. This can cause loss of semantically or legally important document content, breakage of parent-child chunk hierarchies, and inaccurate or hallucinated responses in AI outputs. The issue is resolved in version 0.3.1.
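The failure mode is easy to reproduce in isolation. The sketch below is illustrative, not the actual DocugamiReader code: `vulnerable_id`, `safer_id`, and the `xpath` field are hypothetical names. It shows how deriving a chunk ID from the text alone makes two structurally distinct chunks with identical text collide, so one silently overwrites the other in an ID-keyed store, and how mixing structural position into the hash input avoids that:

```python
import hashlib

# Two structurally distinct chunks that happen to carry identical text.
# "xpath" is an illustrative structural field, not DocugamiReader's schema.
chunk_a = {"text": "Payment is due within 30 days.", "xpath": "/contract/clause[1]"}
chunk_b = {"text": "Payment is due within 30 days.", "xpath": "/contract/clause[7]"}

def vulnerable_id(chunk: dict) -> str:
    # ID derived from text alone: identical text => identical ID,
    # regardless of where the chunk sits in the document.
    return hashlib.md5(chunk["text"].encode("utf-8")).hexdigest()

def safer_id(chunk: dict) -> str:
    # Mix structural position into the hash input so equal text no longer collides.
    payload = f"{chunk['xpath']}\x00{chunk['text']}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

store = {vulnerable_id(c): c for c in (chunk_a, chunk_b)}
print(len(store))  # 1 — chunk_b silently overwrote chunk_a

store = {safer_id(c): c for c in (chunk_a, chunk_b)}
print(len(store))  # 2 — both chunks survive
```

Note the "collision" here is not a cryptographic MD5 collision; it follows directly from hashing only the text, so any ID scheme with that input would behave the same way.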
CVE-2025-7707
The llama_index library version 0.12.33 sets the NLTK data directory to a subdirectory of the codebase by default, which is world-writable in multi-user environments. This configuration allows local users to overwrite, delete, or corrupt NLTK data files, leading to potential denial of service, data tampering, or privilege escalation. The vulnerability arises from the use of a shared cache directory instead of a user-specific one, making it susceptible to local data tampering and denial of service.
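A generic hardening pattern for this class of issue is to place NLTK data in a per-user, owner-only directory and point NLTK at it via the `NLTK_DATA` environment variable, which NLTK consults when searching for data. The helper name and directory layout below are hypothetical; this is a sketch of the mitigation idea, not the fix that shipped in llama-index:

```python
import os
from pathlib import Path

def private_nltk_data_dir() -> Path:
    """Hypothetical hardening helper: give NLTK a per-user data directory
    instead of a shared, world-writable path inside the codebase."""
    base = Path(os.environ.get("XDG_CACHE_HOME", str(Path.home() / ".cache")))
    data_dir = base / "nltk_data"
    data_dir.mkdir(mode=0o700, parents=True, exist_ok=True)
    # mkdir's mode is filtered by the process umask, so re-assert
    # owner-only permissions explicitly.
    os.chmod(data_dir, 0o700)
    # NLTK honors NLTK_DATA when locating corpora and models.
    os.environ["NLTK_DATA"] = str(data_dir)
    return data_dir
```

With permissions locked to the owner, other local users can no longer overwrite or delete the cached NLTK files, which closes the tampering and denial-of-service vectors described above.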
Release Notes
run-llama/llama_index (llama-index)
v0.13.0
NOTE: All packages have been bumped to handle the latest llama-index-core version.
- llama-index-core [0.13.0]:
  - Removed `FunctionCallingAgent`, the older `ReActAgent` implementation, `AgentRunner`, all step workers, `StructuredAgentPlanner`, `OpenAIAgent`, and more. All users should migrate to the new workflow-based agents: `FunctionAgent`, `CodeActAgent`, `ReActAgent`, and `AgentWorkflow` (#19529)
  - Removed the `QueryPipeline` class and all associated code (#19554)
  - Changed `index.as_chat_engine()` to return a `CondensePlusContextChatEngine`. Agent-based chat engines have been removed (which was the previous default). If you need an agent, use the above-mentioned agent classes. (#19529)
- llama-index-embeddings-mixedbreadai [0.5.0]
- llama-index-instrumentation [0.4.0]
- llama-index-llms-google-genai [0.3.0]
- llama-index-llms-nvidia [0.4.0]
- llama-index-llms-upstage [0.6.0]
- llama-index-postprocessor-mixedbreadai-rerank [0.5.0]
- llama-index-readers-github [0.8.0]
- llama-index-readers-s3 [0.5.0]: `client_kwargs` in S3Reader (#19546)
- llama-index-tools-valyu [0.4.0]
- llama-index-voice-agents-gemini-live [0.2.0]
- llama-index-vector-stores-astradb [0.5.0]
- llama-index-vector-stores-milvus [0.9.0]
- llama-index-vector-stores-s3 [0.2.0]
- llama-index-vector-stores-postgres [0.6.0]

v0.12.52
- llama-index-core [0.12.52.post1]
- llama-index-core [0.12.52]: `stores_text=True/False` (#19501)
- llama-index-indices-managed-bge-m3 [0.5.0]
- llama-index-readers-web [0.4.5]
- llama-index-tools-jira-issue [0.1.0]
- llama-index-vector-stores-azureaisearch [0.3.10]: `**kwargs` into AzureAISearchVectorStore super init (#19500)
- llama-index-vector-stores-neo4jvector [0.4.1]

v0.12.51
- llama-index-core [0.12.51]
- llama-index-readers-azstorage-blob [0.3.2]

v0.12.50
- llama-index-core [0.12.50]:
  - `get_cache_dir()` function more secure by changing default location (#19415)
  - `session_id` metadata_filter in VectorMemoryBlock (#19427)
- llama-index-instrumentation [0.3.0]
- llama-index-llms-bedrock-converse [0.7.6]
- llama-index-llms-cloudflare-ai-gateway [0.1.0]
- llama-index-llms-google-genai [0.2.5]: `google_search` Tool Support to GoogleGenAI LLM Integration (#19406)
- llama-index-readers-confluence [0.3.2]
- llama-index-readers-service-now [0.1.0]
- llama-index-protocols-ag-ui [0.1.4]
- llama-index-tools-wikipedia [0.3.1]: `WikipediaToolSpec.load_data` tool (#19464)
- llama-index-vector-stores-baiduvectordb [0.3.1]: `**kwargs` to `super().__init__` in BaiduVectorDB (#19436)
- llama-index-vector-stores-moorcheh [0.1.1]
- llama-index-vector-stores-s3 [0.1.0]

v0.12.49
- llama-index-core [0.12.49]
- llama-index-llms-bedrock-converse [0.7.5]
- llama-index-llms-nvidia [0.3.5]
- llama-index-llms-oci-genai [0.5.2]
- llama-index-postprocessor-flashrank-rerank [0.1.0]
- llama-index-readers-web [0.4.4]
- llama-index-storage-docstore-duckdb [0.1.0] (#19282)
- llama-index-storage-index-store-duckdb [0.1.0] (#19282)
- llama-index-storage-kvstore-duckdb [0.1.3] (#19282)
- llama-index-vector-stores-duckdb [0.4.6] (#19383)
- llama-index-vector-stores-moorcheh [0.1.0]

v0.12.48
- llama-index-core [0.12.48]
- llama-index-embeddings-huggingface-optimum-intel [0.3.1]
- llama-index-indices-managed-lancedb [0.1.0]
- llama-index-indices-managed-llamacloud [0.7.10]
- llama-index-llms-google-genai [0.2.4]
- llama-index-llms-oci-genai [0.5.1]
- llama-index-readers-file [0.4.11]
- llama-index-storage-chat-stores-postgres [0.2.2]

v0.12.47
- llama-index-core [0.12.47]:
  - `max_iterations` arg to `.run()` of 20 for agents (#19035)
  - `tool_required` set to `True` for `FunctionCallingProgram` and structured LLMs where supported (#19326)
  - multi-modal features folded into the `LLM` counterpart: `LLM` classes support multi-modal features in `llama-index-core`, and `LLM` classes use `ImageBlock` internally to support multi-modal features
- llama-index-cli [0.4.4]
- llama-index-indices-managed-lancedb [0.1.0]
- llama-index-llms-anthropic [0.7.6]
- llama-index-llms-oci-genai [0.5.1]
- llama-index-readers-web [0.4.3]

v0.12.46
- llama-index-core [0.12.46]
- llama-index-embeddings-google-genai [0.2.1]
- llama-index-embeddings-nvidia [0.3.4]
- llama-index-llms-google-genai [0.2.3]
- llama-index-tools-mcp [0.2.6]
- llama-index-voice-agents-elevenlabs [0.3.0-beta]

v0.12.45
- llama-index-core [0.14.8]
- llama-index-embeddings-nvidia [0.4.2]
- llama-index-embeddings-ollama [0.8.4]
- llama-index-llms-anthropic [0.10.2]
- llama-index-llms-bedrock-converse [0.11.1]
- llama-index-llms-google-genai [0.7.3]
- llama-index-llms-nvidia [0.4.4]
- llama-index-llms-openai [0.6.8]
- llama-index-llms-upstage [0.6.5]
- llama-index-packs-streamlit-chatbot [0.5.2]
- llama-index-packs-voyage-query-engine [0.5.2]
- llama-index-postprocessor-nvidia-rerank [0.5.1]
- llama-index-readers-web [0.5.6]
- llama-index-readers-whisper [0.3.0]
- llama-index-storage-kvstore-postgres [0.4.3]
- llama-index-tools-brightdata [0.2.1]
- llama-index-tools-mcp [0.4.3]
- llama-index-utils-oracleai [0.3.1]
- llama-index-vector-stores-lancedb [0.4.2]
v0.12.44
- llama-index-core [0.12.44]: `CachePoint` content block for caching chat messages (#19193)
- llama-index-embeddings-fastembed [0.3.5]
- llama-index-embeddings-huggingface [0.5.5]: `asyncio.to_thread` (#19207)
- llama-index-llms-anthropic [0.7.4]
- llama-index-llms-google-genai [0.2.2]
- llama-index-llms-mistralai [0.6.1]
- llama-index-llms-perplexity [0.3.7]: `PPLX_API_KEY` in perplexity llm integration (#19217)
- llama-index-postprocessor-bedrock-rerank [0.4.0]
- llama-index-postprocessor-sbert-rerank [0.3.2]: `cross_encoder_kwargs` parameter for advanced configuration (#19148)
- llama-index-utils-workflow [0.3.5]
- llama-index-vector-stores-azureaisearch [0.3.8]
- llama-index-vector-stores-db2 [0.1.0]
- llama-index-vector-stores-duckdb [0.4.0]
- llama-index-vector-stores-pinecone [0.6.0]: `>=3.9,<4.0` for `llama-index-vector-stores-pinecone` (#19186)
- llama-index-vector-stores-qdrant [0.6.1]
- llama-index-voice-agents-openai [0.1.1-beta]

v0.12.43
- llama-index-core [0.12.43]:
  - `get_tqdm_iterable` in SimpleDirectoryReader (#18722)
  - `llama-index-workflows` and keeping backward compatibility (#19043)
  - `llama-index-instrumentation` (#19062)
- llama-index-llms-bedrock-converse [0.7.2]
- llama-index-llms-openai [0.4.7]
- llama-index-llms-perplexity [0.3.6]
- llama-index-postprocessor-sbert-rerank [0.3.1]
- llama-index-protocols-ag-ui [0.1.2]: `ag-ui` protocol support (#19104, #19103, #19102, #18898)
- llama-index-readers-google [0.6.2]
- llama-index-readers-hive [0.3.1]
- llama-index-readers-mongodb [0.3.2]: `alazy_load_data` for mongodb reader (#19038)
- llama-index-storage-chat-store-sqlite [0.1.1]
- llama-index-tools-hive [0.1.0]
- llama-index-utils-workflow [0.3.4]
- llama-index-vector-stores-lancedb [0.3.3]
- llama-index-vector-stores-milvus [0.8.5]: `Connections.connect()` got multiple values for argument `alias` (#19119)
- llama-index-vector-stores-opengauss [0.1.0]

v0.12.42
- llama-index-core [0.12.42]
- llama-index-embeddings-bedrock [0.5.1]
- llama-index-indices-managed-llama-cloud [0.7.7]: `raw_figure_nodes` is None type in `page_figure_nodes_to_node_with_score` (#19053)
- llama-index-llms-mistralai [0.6.0]
- llama-index-llms-openai [0.4.5]
- llama-index-llms-perplexity [0.3.5]
- llama-index-multi-modal-llms-openai-like [0.1.0]
- llama-index-postprocessor-bedrock-rerank [0.3.3]
- llama-index-readers-papers [0.3.1]
- llama-index-tools-artifact-editor [0.1.0]
- llama-index-utils-workflow [0.3.3]
- llama-index-vector-stores-opensearch [0.5.6]
- llama-index-voice-agents-elevenlabs [0.2.0-beta]

v0.12.41
v0.12.40
- llama-index-core [0.12.40]: `tool_required` to LLM args (#18922)
- llama-index-embeddings-cohere [0.5.1]
- llama-index-llms-anthropic [0.7.2]
- llama-index-llms-google-genai [0.2.1]
- llama-index-llms-openai [0.4.2]: `reasoning_effort` kwarg to `reasoning_options` dict in OpenAIResponses class (#18920)
- llama-index-tools-measurespace [0.1.0]
- llama-index-tools-mcp [0.2.3]

v0.12.39
- llama-index-core [0.12.39]: `require_tool` param to function calling LLMs (#18654)
- llama-index-indices-managed-llama-cloud [0.7.2]
- llama-index-llms-bedrock-converse [0.7.0]
- llama-index-llms-ollama [0.6.1]
- llama-index-vector-stores-milvus [0.8.3]

Configuration
📅 Schedule: Branch creation - "" in timezone America/Chicago, Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.