Thank you for your nice work.
However, I am curious how you evaluate the model.
In mbeir_retriever.py, the code below is used as recall.
However, I think this computes Hits@k, not Recall@k.
Is this metric Hits@k or Recall@k?
def compute_recall_at_k(relevant_docs, retrieved_indices, k):
    # Recall used by CLIP and BLIP codebase
    # Return 0 if there are no relevant documents
    if not relevant_docs:
        return 0.0
    # Get the set of indices for the top k retrieved documents
    top_k_retrieved_indices_set = set(retrieved_indices[:k])
    # Convert the relevant documents to a set
    relevant_docs_set = set(relevant_docs)
    # Check if there is an intersection between relevant docs and top k retrieved docs
    # If there is, we return 1, indicating successful retrieval; otherwise, we return 0
    if relevant_docs_set.intersection(top_k_retrieved_indices_set):
        return 1.0
    else:
        return 0.0
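For comparison, a conventional Recall@k would divide the number of relevant documents found in the top k by the total number of relevant documents. The sketch below is my own illustration of that definition, not code from the repository:

def conventional_recall_at_k(relevant_docs, retrieved_indices, k):
    # Standard Recall@k: fraction of all relevant documents that appear in the top k
    if not relevant_docs:
        return 0.0
    relevant_docs_set = set(relevant_docs)
    top_k_retrieved_indices_set = set(retrieved_indices[:k])
    hits = len(relevant_docs_set.intersection(top_k_retrieved_indices_set))
    return hits / len(relevant_docs_set)

The quoted function instead returns 1.0 as soon as any relevant document appears in the top k, which matches the usual definition of Hits@k (or Success@k).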
Thank you.