Fix .rank() method for multiple models #615
Conversation
@tqtg Could you please elaborate more? Based on my understanding, these models compute ranking scores by weighting the tradeoff between predicted ratings and top-k aspects via the following formula: We can replace …

I was thinking whether we can reuse the …
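A minimal sketch of the weighted-tradeoff ranking score described above, assuming a mixing weight `alpha` that balances the mean of the user's top-k aspect scores against the predicted rating. The function and parameter names here are hypothetical illustrations, not Cornac's actual implementation:

```python
def ranking_score(pred_rating, aspect_scores, alpha=0.85, top_k=10):
    """Hypothetical sketch: blend the mean of the top-k aspect scores
    with the predicted rating, weighted by alpha."""
    # Take the k highest aspect scores for this user.
    top_aspects = sorted(aspect_scores, reverse=True)[:top_k]
    aspect_term = sum(top_aspects) / len(top_aspects)
    # alpha trades off aspect quality against the raw rating prediction.
    return alpha * aspect_term + (1 - alpha) * pred_rating
```

With `alpha=0.5` and `top_k=2`, an item with predicted rating 4.0 and aspect scores `[1, 2, 3, 4, 5]` scores `0.5 * 4.5 + 0.5 * 4.0 = 4.25`.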
LGTM
The new `Recommender.rank()` function adds `k` as a required argument, which breaks some models that do not use `k` in ranking evaluation (e.g., ComparER, EFM, LRPPM). This commit updates `.rank()` for the mentioned models with a topK option.
Description
The new `Recommender.rank()` function adds `k` as a required argument, which breaks some models that do not use `k` in ranking evaluation (e.g., ComparER, EFM, LRPPM). I put `k` into `**kwargs` and use `k = kwargs.get("k", -1)` instead. Another option is to add `k=None` to incompatible models. I added and updated top-k ranking for the mentioned models.
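A minimal sketch of the `**kwargs` approach described above, assuming a simplified stand-in class (`RankerSketch` and its scoring logic are hypothetical, not Cornac's `Recommender`). Callers following the new contract can pass `k`, while models that ignore `k` fall back to `-1`, meaning "rank all items":

```python
class RankerSketch:
    """Hypothetical stand-in for a recommender model's ranking interface."""

    def rank(self, user_idx, item_indices=None, **kwargs):
        # Pull k from kwargs so new-style callers that pass k still work,
        # while models that do not use k default to -1 (rank everything).
        k = kwargs.get("k", -1)
        items = item_indices if item_indices is not None else list(range(5))
        # Stand-in for real model scoring: rank by item index, descending.
        ranked = sorted(items, reverse=True)
        return ranked if k == -1 else ranked[:k]
```

The alternative mentioned above, adding `k=None` to each incompatible model's signature, makes the parameter explicit in the API, whereas the `**kwargs` route avoids touching each signature at the cost of discoverability.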