
logprobs in ChatOllama #34207

@paascorb

Description

Checked other resources

  • This is a feature request, not a bug report or usage question.
  • I added a clear and descriptive title that summarizes the feature request.
  • I used the GitHub search to find a similar feature request and didn't find it.
  • I checked the LangChain documentation and API reference to see if this feature already exists.
  • This is not related to the langchain-community package.

Package (Required)

  • langchain-ollama

Feature Description

Is it possible to set 'logprobs' to True for Ollama models through the ChatOllama constructor?
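
For reference, langchain-openai already exposes this pattern; a minimal sketch (the model name is just an example):

from langchain_openai import ChatOpenAI

# ChatOpenAI accepts a logprobs flag; the per-token log-probabilities
# come back in the message's response_metadata.
llm = ChatOpenAI(model="gpt-4o-mini", logprobs=True)
msg = llm.invoke("What is the capital of France?")
print(msg.response_metadata["logprobs"]["content"][:3])

The request is for ChatOllama to expose the same flag.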

Use Case

I'm trying to use white-box metrics from the uqlm library to measure hallucinations, and this is not possible when logprobs is None or False.

Proposed Solution

Let the user set the logprobs flag to True and, when the Ollama server supports it, make a call that returns the token probabilities of the generated responses.
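
A minimal sketch of the shape this could take, assuming ChatOllama remains a pydantic model (the subclass and field below are hypothetical, purely to illustrate the attribute uqlm checks for):

from typing import Optional

from langchain_ollama import ChatOllama

class ChatOllamaWithLogprobs(ChatOllama):
    # Hypothetical field: declaring it makes hasattr(llm, "logprobs") True,
    # which is all that uqlm's WhiteBoxUQ asserts on. Actually returning
    # token probabilities would still require the Ollama server to support
    # them and langchain-ollama to forward the flag and parse the response.
    logprobs: Optional[bool] = None

llm = ChatOllamaWithLogprobs(model="gpt-oss:latest", logprobs=True)

This alone only satisfies the attribute check; the probabilities would come back empty until the flag is plumbed through to the server.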

Alternatives Considered

No response

Additional Context

from langchain_ollama import ChatOllama
from uqlm import WhiteBoxUQ

# ChatOllama has no logprobs parameter today; the kwarg is silently dropped.
llm = ChatOllama(model="gpt-oss:latest", logprobs=True)

prompts = ["What is the capital of France?"]

wbuq = WhiteBoxUQ(llm=llm, scorers=["min_probability"])

# Run in a notebook or other async context; fails with the assertion below.
results = await wbuq.generate_and_score(prompts=prompts)
results.to_df()

OUTPUT:

AssertionError                            Traceback (most recent call last)
Cell In[39], line 11
      7 prompts = ["What is the capital of France?"]
      9 wbuq = WhiteBoxUQ(llm=llm, scorers=["min_probability"])
---> 11 results = await wbuq.generate_and_score(prompts=prompts)
     12 results.to_df()
     14 # bbuq = BlackBoxUQ(llm=llm, scorers=["semantic_negentropy"], use_best=True)
     15
     16 # results = await bbuq.generate_and_score(prompts=prompts, num_responses=5)
     17 # results.to_df()

File ~/.conda/envs/py312/lib/python3.12/site-packages/uqlm/scorers/white_box.py:102, in WhiteBoxUQ.generate_and_score(self, prompts, num_responses, show_progress_bars)
     81 async def generate_and_score(self, prompts: List[Union[str, List[BaseMessage]]], num_responses: Optional[int] = 5, show_progress_bars: Optional[bool] = True) -> UQResult:
     82     """
     83     Generate responses and compute white-box confidence scores based on extracted token probabilities.
    (...)
    100     UQResult containing prompts, responses, logprobs, and white-box UQ scores
    101     """
--> 102     assert hasattr(self.llm, "logprobs"), """
    103         BaseChatModel must have logprobs attribute and have logprobs=True
    104         """
    105     self.llm.logprobs = True
    106     sampled_responses = None

AssertionError:
    BaseChatModel must have logprobs attribute and have logprobs=True
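
The failing precondition can be confirmed in isolation:

from langchain_ollama import ChatOllama

llm = ChatOllama(model="gpt-oss:latest")
print(hasattr(llm, "logprobs"))  # False today, which is what trips the assert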

Metadata

Assignees: no one assigned
Labels: feature request (request for an enhancement / additional functionality), ollama
Projects: none
Milestone: none
Development: no branches or pull requests