Conversation
Force-pushed 71a2933 to e560108.
@@ -39,8 +39,9 @@ dependencies:
  - sacremoses

# Warning: jiant currently depends on *both* pytorch_pretrained_bert > 0.6 _and_
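Since the environment can end up with both packages installed side by side, a quick sanity check can confirm what is actually importable. This is just a sketch using the standard library; the package names are the ones from the warning above:

```python
import importlib.util

def check_importable(names):
    """Map each module name to whether it can be found in the
    current environment (without actually importing it)."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# e.g. check_importable(["pytorch_pretrained_bert", "transformers"])
```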
Check whether this is still true: is the old package still a dependency via AllenNLP? If not, delete the warning.
allennlp v0.8.4 still depends on pytorch-pretrained-bert, and we've not changed the allennlp requirement.
Dang. I guess it's not worth the effort to update the Allen dependency, assuming that the 2.0 migration is coming up fairly soon.
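One way to double-check whether allennlp still pins pytorch-pretrained-bert, without digging through its setup.py, is to inspect the installed distribution's declared requirements. A minimal stdlib sketch (Python 3.8+; returns an empty list when the distribution is not installed):

```python
from importlib.metadata import requires, PackageNotFoundError

def declared_requirements(dist_name):
    """Return a distribution's declared dependency strings,
    or [] if it is not installed."""
    try:
        return requires(dist_name) or []
    except PackageNotFoundError:
        return []

# e.g. any(r.startswith("pytorch-pretrained-bert")
#          for r in declared_requirements("allennlp"))
```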
Looks good so far! Thanks!

Since you asked, this still looks good to me. If the testing infrastructure is all ready to go, though, it couldn't hurt to kick off tests with 2.8.0 now, too.

Updated the full table. I think we should be good to merge. Given the upcoming deadlines, I recommend waiting till after EMNLP to update transformers again.
pyeres left a comment:
Thanks @zphang — can you say a few words in the description about what in addition to the version bump is being done here (e.g., XLMRobertaTokenizer changes). This will help me make sure I understand the PR, and it'll help inform the next set of release notes.
@zphang Why delay the merge? If we've vetted it to our usual degree, then we should get some additional speedups/options out of this PR. Of course, it's not great to make major changes after a round of experiments has already started, but the solution to that would just be to maintain a separate branch for each major experiment, which is a good idea in any case.
Force-pushed dea9fc4 to 72b05d8.
@pyeres I've removed the commit concerning the XLMRoBERTaTokenizer. This PR should only update the requirements (transformers and tokenizers). @sleepinyourhat To clarify, I support merging in the update to v2.6.0 now, and putting off the update to v2.8.0.
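When staging the upgrade this way (v2.6.0 now, v2.8.0 later), it can help to gate new code paths on the installed transformers version. A minimal stdlib sketch of the comparison; real projects would more likely use `packaging.version`, and the gating itself is a hypothetical illustration:

```python
def version_tuple(v):
    """Parse a dotted release string like '2.6.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# A feature gated on >= 2.8.0 stays off while the pin is 2.6.0,
# since tuples compare element-wise.
def feature_enabled(installed, required="2.8.0"):
    return version_tuple(installed) >= version_tuple(required)
```

Tuple comparison also handles two-digit components correctly (2.10.0 > 2.8.0), which naive string comparison would get wrong.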
Unrelated:
* Transformers v2.6.0 requirements update
* Performance comparison on a set of representative tasks