Conversation

@huang3eng

  • bug: TypeError: LLM.generate() got an unexpected keyword argument 'prompt_token_ids' when using LLMJudgeRewardWorker on vLLM 0.10.2.

  • fix: The _run_local_inference method of LLMJudgeRewardWorker calls llm.generate, but the vLLM generate interface has changed: the prompt_token_ids keyword has been removed, and token ids are now passed through the prompts parameter instead. See the sketch below.
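For illustration, here is a minimal sketch of the calling-convention change, not the actual patch to _run_local_inference; the model name, sampling params, and token_ids_list are placeholders standing in for whatever the worker already has in scope.

```python
# Sketch of the vLLM >= 0.10 calling convention; the model, sampling params,
# and token_ids_list below are illustrative placeholders, not the verl code.
from vllm import LLM, SamplingParams
from vllm.inputs import TokensPrompt

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")    # hypothetical judge model
sampling_params = SamplingParams(max_tokens=64)
token_ids_list = [[1, 2, 3], [4, 5, 6]]          # placeholder tokenized prompts

# Old call, raises TypeError on vLLM 0.10.2 because the keyword was removed:
# outputs = llm.generate(prompt_token_ids=token_ids_list,
#                        sampling_params=sampling_params)

# New call: wrap each token-id list in a TokensPrompt and pass it via `prompts`.
outputs = llm.generate(
    prompts=[TokensPrompt(prompt_token_ids=ids) for ids in token_ids_list],
    sampling_params=sampling_params,
)
for out in outputs:
    print(out.outputs[0].text)
```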

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


benyi does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.
