Inspired by this paper, we're exploring ways to bootstrap a bidirectional-context LLM from a decoder-only causal LLM (e.g. Llama-3). This is very easy to do in Hugging Face transformers by passing a custom attention mask, as sketched below.
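For reference, here is a minimal sketch of the transformers side, assuming a recent version (roughly 4.37+) that accepts a pre-computed 4D additive attention mask; the exact checkpoint name and prompt are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; any Llama-style decoder-only causal LM should work.
model_name = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
batch, seq_len = inputs.input_ids.shape

# Full (non-causal) mask: every position may attend to every other position.
# transformers expects a 4D mask of shape (batch, 1, q_len, kv_len) in
# additive/inverted form: 0.0 where attention is allowed, dtype-min where
# blocked. An all-zeros mask therefore means fully bidirectional attention.
bidirectional_mask = torch.zeros((batch, 1, seq_len, seq_len), dtype=model.dtype)

with torch.no_grad():
    out = model(input_ids=inputs.input_ids, attention_mask=bidirectional_mask)

# out.logits at every position now reflect bidirectional (left and right) context.
```

Because the 4D mask bypasses the usual causal-mask construction, the model attends freely in both directions without any weight changes.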
We're looking for guidance on how to make this happen in vLLM.
TL;DR: how can we pass a custom (bidirectional) attention mask to a decoder-only model in vLLM? Help appreciated!
Replies: 1 comment

any update?