Recall vs. Hits #29

@kimwongyuda

Description

I appreciate your nice work.

However, I am curious about how you evaluate the model.

In the mbeir_retriever.py file, you use the code below to compute recall.
However, I think this computes Hits@k rather than Recall@k, since it only checks whether at least one relevant document appears in the top k.
Is this metric Hits@k or Recall@k?

def compute_recall_at_k(relevant_docs, retrieved_indices, k):
    # Recall used by CLIP and BLIP codebase
    # Return 0 if there are no relevant documents
    if not relevant_docs:
        return 0.0

    # Get the set of indices for the top k retrieved documents
    top_k_retrieved_indices_set = set(retrieved_indices[:k])

    # Convert the relevant documents to a set
    relevant_docs_set = set(relevant_docs)

    # Check if there is an intersection between relevant docs and top k retrieved docs
    # If there is, we return 1, indicating successful retrieval; otherwise, we return 0
    if relevant_docs_set.intersection(top_k_retrieved_indices_set):
        return 1.0
    else:
        return 0.0
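
For comparison, the definition of Recall@k I have in mind is the fraction of relevant documents that appear in the top k results, something like the sketch below (my own illustration, not code from your repo):

    def compute_recall_at_k_standard(relevant_docs, retrieved_indices, k):
        # Illustrative sketch of the usual Recall@k definition:
        # fraction of relevant documents found among the top k retrieved.
        if not relevant_docs:
            return 0.0
        top_k_retrieved_indices_set = set(retrieved_indices[:k])
        relevant_docs_set = set(relevant_docs)
        return len(relevant_docs_set & top_k_retrieved_indices_set) / len(relevant_docs_set)

With a single relevant document per query the two metrics coincide, but with multiple relevant documents they differ.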

Thank you.
