Doctest for PiecewiseLinear lr scheduler #2290
Conversation
I'm new to writing doctests and felt this doctest is pretty large, but I was not able to think of a smaller, cleaner test. Also, I have added to the config a few functions that might be redundant in future doctests (as suggested here).
@DevPranjal Thank you for your work. I'm wondering if having global but hidden methods like
I am of the same opinion here; maybe we can think of a cleaner test? Or does this work well (with the methods defined within the example)?
Maybe you could use a simple model, `model = nn.Linear(1, 1)`, and define a dataset as a list of scalars. It would be worth adding outputs, for instance the learning rates.
Hello @sdesrozis, I have made the changes... Could you please review them? Also about the test output... I thought of adding the weights and biases of the network (which were supposed to be close to 1 and -1 respectively according to the example I added) but that would not be representative of the test.
Could you also please elaborate on this? I am a bit confused here. Thanks for your patience :)
@DevPranjal Sorry for the late answer, I'm quite busy at the moment, so thank you for your patience! I will review and provide a more detailed explanation ASAP.
@DevPranjal Very sorry for the delay... I was so busy :( Here is a code snippet you could use to add some outputs:

```python
import torch

from ignite.engine import Events, Engine
from ignite.handlers.param_scheduler import PiecewiseLinear

tensor = torch.zeros([1], requires_grad=True)
optimizer = torch.optim.SGD([tensor], lr=0)

milestones_values = [(1, 1.0), (3, 0.8), (5, 0.2)]
scheduler = PiecewiseLinear(optimizer, "lr", milestones_values=milestones_values)

def save_lr(engine):
    lrs.append(optimizer.param_groups[0]["lr"])

trainer = Engine(lambda engine, batch: None)
trainer.add_event_handler(Events.ITERATION_COMPLETED, scheduler)
trainer.add_event_handler(Events.ITERATION_COMPLETED, save_lr)

lrs = []
trainer.run([0] * 6, max_epochs=1)
print(lrs)
# >>> [1.0, 1.0, 0.9, 0.8, 0.5, 0.2]

lrs = []
trainer.run([0], max_epochs=1)
print(lrs)
# >>> [0.2]
```
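For intuition, the learning rates printed above follow directly from linear interpolation between the milestone points. Here is a minimal pure-Python sketch of that arithmetic; the `piecewise_lr` helper and its signature are made up for illustration, not ignite's actual implementation:

```python
# Hypothetical helper (NOT ignite's implementation) showing the linear
# interpolation that a piecewise-linear schedule applies between milestones.
def piecewise_lr(event_index, milestones_values):
    points = sorted(milestones_values)
    # Before the first milestone, hold the first value.
    if event_index <= points[0][0]:
        return points[0][1]
    # After the last milestone, hold the last value.
    if event_index >= points[-1][0]:
        return points[-1][1]
    # Otherwise, interpolate linearly between the surrounding milestones.
    for (m0, v0), (m1, v1) in zip(points, points[1:]):
        if m0 <= event_index <= m1:
            return v0 + (event_index - m0) / (m1 - m0) * (v1 - v0)

milestones_values = [(1, 1.0), (3, 0.8), (5, 0.2)]
print([round(piecewise_lr(e, milestones_values), 4) for e in range(6)])
# [1.0, 1.0, 0.9, 0.8, 0.5, 0.2]
```

These are the same six values the trainer printed, e.g. at index 2 the schedule is halfway between milestones (1, 1.0) and (3, 0.8), giving 0.9.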
Thanks a lot for this snippet! I am really impressed by the ignite API here and want to learn about more such use cases in the future :)
This is the right place to be :) |
@DevPranjal Are you still working on this? Thanks!
Sorry for the delay... We have our finals coming up at college, so I have been busy this week. I will make the changes as soon as possible.
@sdesrozis, I have added the suggested code, please review. Thanks! |
Give me a little time to merge a PR I have done on this topic; I have introduced some features we could use in this PR. I will keep you posted.
@DevPranjal Could you update this PR according to #2327? Thanks!!
Yes, I have been following #2327... And now that it is merged, I will update this one by tomorrow :) |
Force-pushed 15bfc10 to 16b5b57
Regarding ignite/ignite/handlers/param_scheduler.py, lines 978 to 988 in 60e2af4: should I remove this snippet, @sdesrozis? Also, sorry for the mess in the commit history.
Your test should reflect the current code, even adding a few comments to be very explicit about what is done. Please look at #2327 carefully; it's mostly the same.
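For reference, a doctest embeds an interactive session in a docstring, and the `doctest` module replays each `>>>` example and compares the actual output with the expected lines. A generic, minimal sketch of that mechanism (the `add` function is a made-up example, not the actual ignite docstring):

```python
import doctest

def add(a, b):
    """Return the sum of a and b.

    Examples:

        >>> add(2, 3)
        5
        >>> add(-1, 1)
        0
    """
    return a + b

# Collect the >>> examples from add's docstring and execute them with an
# explicit globals dict so the examples can resolve the name `add`.
test = doctest.DocTestFinder().find(add, globs={"add": add})[0]
results = doctest.DocTestRunner().run(test)
print(results.failed, results.attempted)  # 0 failures, 2 examples attempted
```

Sphinx's `.. testcode::` / `.. doctest::` directives used in ignite's docs build on this same `>>>` expected-output format.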
@sdesrozis Gentle ping, in case you missed the last commit. I have added the comments as suggested.
@ydcjeff Can you please also review this PR?
sdesrozis left a comment:
@DevPranjal Thank you! LGTM!
Addresses #2266
Description: Added doctest for PiecewiseLinear lr scheduler
Check list: