After generating the response, we can check whether it contains hallucinations.
In a RAG context, there are several points to consider:
- does the response answer the question?
- is it based on the provided context?
- are there obvious hallucinations?
There are different ways to implement hallucination detection:
- Models specifically trained for this task, for example from Vectara
- An LLM prompted to answer the points above
- Other techniques, e.g. SelfCheckGPT
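For the SelfCheckGPT-style option, the core idea is to sample several independent responses to the same query and measure how consistent the original answer is with them; an answer the model cannot reproduce is a hallucination signal. A minimal sketch (the real method uses NLI or BERTScore models for agreement; the token-overlap scorer here is a simplified stand-in):

```python
def token_overlap(answer: str, sample: str) -> float:
    """Fraction of the answer's tokens that also appear in a sampled response."""
    ta, tb = set(answer.lower().split()), set(sample.lower().split())
    return len(ta & tb) / len(ta) if ta else 0.0

def selfcheck_consistency(answer: str, samples: list[str]) -> float:
    """Average agreement between the answer and independently sampled responses.

    A low score means the answer is not consistently reproducible, which
    SelfCheckGPT treats as evidence of hallucination.
    """
    return sum(token_overlap(answer, s) for s in samples) / len(samples)
```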
For now, I've implemented a simple LLM-based hallucination detection in ef1763b.
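Such an LLM-based check can be sketched roughly as below. The prompt wording and the `ask_llm` callable are hypothetical stand-ins for whatever LLM client is used; the actual code in ef1763b may differ:

```python
from typing import Callable

PROMPT = """Given the question, the retrieved context, and the generated answer,
reply with exactly three lines, each "yes" or "no":
1. Does the answer address the question?
2. Is the answer grounded in the context?
3. Does the answer contain statements not supported by the context?

Question: {question}
Context: {context}
Answer: {answer}"""

def detect_hallucination(question: str, context: str, answer: str,
                         ask_llm: Callable[[str], str]) -> bool:
    """Return True if the answer is likely hallucinated."""
    reply = ask_llm(PROMPT.format(question=question, context=context, answer=answer))
    verdicts = [line.strip().lower() for line in reply.splitlines() if line.strip()]
    addresses, grounded, unsupported = (v.startswith("y") for v in verdicts[:3])
    # Flag as hallucination if the answer is ungrounded or adds unsupported claims.
    return not grounded or unsupported
```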
There is also the RAG Triad (context relevance, groundedness, answer relevance), which is commonly used to check for hallucinations.
This could be done by an LLM too: we can ask it to return a score for each criterion and classify the response as a hallucination based on thresholds.
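The threshold-based classification step could look like this. The `TriadScores` type and the 0.5 defaults are illustrative assumptions; the scores themselves would come from an LLM or a dedicated evaluator:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TriadScores:
    context_relevance: float  # is the retrieved context relevant to the question?
    groundedness: float       # is the answer supported by the context?
    answer_relevance: float   # does the answer address the question?

def is_hallucination(scores: TriadScores,
                     thresholds: TriadScores = TriadScores(0.5, 0.5, 0.5)) -> bool:
    """Classify as hallucination if any criterion falls below its threshold."""
    return (scores.context_relevance < thresholds.context_relevance
            or scores.groundedness < thresholds.groundedness
            or scores.answer_relevance < thresholds.answer_relevance)
```

Keeping per-criterion thresholds (rather than a single aggregate score) makes it easy to tune, e.g., a stricter groundedness cut-off independently of the relevance checks.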