Replies: 1 comment
Hi! Thanks for the feedback; we could try to add a metrics evaluation script. In the meantime, you can use the official evaluation script for MS MARCO: https://github.com/microsoft/MSMARCO-Passage-Ranking/blob/master/ms_marco_eval.py
Usage:
Keep in mind that this script prints MRR@10 divided by the total number of queries in the qrels file, so you want to use dev.small (7k queries), not the full dev set (55k queries). Hope this helps!
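To illustrate the point about the denominator: MRR@10 is averaged over every query in the qrels file, so queries with no retrieved results still count as zero, which is why evaluating a dev.small ranking against the full dev qrels deflates the score. Here is a minimal sketch of that computation; the in-memory dict format and the function name `mrr_at_10` are illustrative assumptions, not the official script's API.

```python
def mrr_at_10(qrels, ranking):
    """Compute MRR@10 averaged over ALL queries in qrels.

    qrels:   {qid: set of relevant passage ids}   (illustrative format)
    ranking: {qid: list of passage ids in rank order}

    Queries missing from `ranking` contribute 0, mirroring the
    divide-by-total-qrels-queries behaviour described above.
    """
    total = 0.0
    for qid, relevant in qrels.items():
        # Only the top 10 results count toward MRR@10.
        for rank, pid in enumerate(ranking.get(qid, [])[:10], start=1):
            if pid in relevant:
                total += 1.0 / rank
                break  # only the first relevant hit matters
    return total / len(qrels)


qrels = {"q1": {"p3"}, "q2": {"p9"}, "q3": {"p1"}}
ranking = {"q1": ["p1", "p3"], "q2": ["p9"], "q3": []}
print(mrr_at_10(qrels, ranking))  # (1/2 + 1 + 0) / 3 = 0.5
```

Note that if `qrels` held 55k queries but `ranking` only covered 7k, the same sum would be divided by 55k, which is exactly the mismatch to avoid.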
Hi,
Let me start by thanking you for releasing and improving your code with new versions.
I'm using v0.2. After indexing to faiss and running end-to-end retrieval, I'm trying to get the different metrics for the retrieval results, but I couldn't find them in the stdout of retrieve.py. I assume I need to use the ranking.tsv output file, but I couldn't find anywhere in the existing code that reads this file and computes the metrics. The only place I found was test.py, which uses the actual model for reranking. I guess I'm missing something obvious.
Thanks,
-Bar