
Doctest for PiecewiseLinear lr scheduler#2290

Merged
sdesrozis merged 11 commits into pytorch:master from DevPranjal:param_scheduler_doctest
Nov 29, 2021

Conversation

@DevPranjal
Contributor

Addresses #2266

Description: Added doctest for PiecewiseLinear lr scheduler

Check list:

  • New tests are added (if a new feature is added)
  • New doc strings: description and/or example code are in RST format
  • Documentation is updated (if required)

@github-actions github-actions bot added docs module: handlers Core Handlers module labels Oct 23, 2021
@DevPranjal
Contributor Author

DevPranjal commented Oct 23, 2021

I'm new to writing doctests and feel this doctest is pretty large, but I was not able to think of a smaller, cleaner test.
Could anyone please suggest something better? @vfdev-5 @sdesrozis @ydcjeff @Ishan-Kumar2

Also, I have added to the config a few helper functions that would otherwise be duplicated across future doctests (as suggested here)

@sdesrozis
Contributor

@DevPranjal Thank you for your work.

I'm wondering if having global but hidden methods like generate_data_with_noise is a good option. IMO the examples must work as copy/paste. What do you think?

@DevPranjal
Contributor Author

I am with the same opinion here, maybe we can think of a cleaner test? Or does this work well (with the methods defined within the example)?

@sdesrozis
Contributor

> I am with the same opinion here, maybe we can think of a cleaner test? Or does this work well (with the methods defined within the example)?

Maybe you could use a simple model

model = nn.Linear(1, 1)

and define a dataset as a list of scalars. It would be worth adding outputs, for instance the learning rates.

@DevPranjal
Contributor Author

DevPranjal commented Oct 25, 2021

Hello @sdesrozis, I have made the changes. Could you please review them? About the test output: I thought of adding the weights and biases of the network (which, according to the example I added, should be close to 1 and -1 respectively), but that would not be representative of the test.

> It would be worth adding outputs, for instance the learning rates.

Could you also please elaborate on this? I am a bit confused here.

Thanks for the patience :)

@sdesrozis
Contributor

@DevPranjal Sorry for the late answer, I'm quite busy at the moment. Thus, thank you for your patience!

I will review and provide you with a more detailed explanation asap.

@sdesrozis
Contributor

@DevPranjal Very sorry for the delay... I was so busy :(

Here is some code that you could use to add some outputs:

import torch

from ignite.engine import Events, Engine
from ignite.handlers.param_scheduler import PiecewiseLinear

tensor = torch.zeros([1], requires_grad=True)
optimizer = torch.optim.SGD([tensor], lr=0)

milestones_values = [(1, 1.0), (3, 0.8), (5, 0.2)]
scheduler = PiecewiseLinear(optimizer, "lr", milestones_values=milestones_values)

lrs = []

def save_lr(engine):
    # Record the current learning rate after each iteration
    lrs.append(optimizer.param_groups[0]["lr"])

trainer = Engine(lambda engine, batch: None)
trainer.add_event_handler(Events.ITERATION_COMPLETED, scheduler)
trainer.add_event_handler(Events.ITERATION_COMPLETED, save_lr)

trainer.run([0] * 6, max_epochs=1)
print(lrs)
# >>> [1.0, 1.0, 0.9, 0.8, 0.5, 0.2]

lrs = []
trainer.run([0], max_epochs=1)
print(lrs)
# >>> [0.2]
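The printed schedule above can be cross-checked without running ignite at all: the scheduler holds the first value until the first milestone, interpolates linearly between consecutive milestones, and holds the last value afterwards. A minimal pure-Python sketch of that rule (an illustration only, not ignite's actual implementation; it assumes the event index starts at 0):

```python
def piecewise_linear(step, milestones_values):
    """Hold at the ends; interpolate linearly between (milestone, value) pairs."""
    if step <= milestones_values[0][0]:
        return milestones_values[0][1]
    if step >= milestones_values[-1][0]:
        return milestones_values[-1][1]
    for (m0, v0), (m1, v1) in zip(milestones_values, milestones_values[1:]):
        if m0 <= step <= m1:
            return v0 + (step - m0) / (m1 - m0) * (v1 - v0)

milestones_values = [(1, 1.0), (3, 0.8), (5, 0.2)]
lrs = [round(piecewise_linear(step, milestones_values), 4) for step in range(6)]
print(lrs)
# [1.0, 1.0, 0.9, 0.8, 0.5, 0.2]
```

This reproduces the first list printed by the ignite snippet above, which supports reading the saved lrs as the interpolated values at event indices 0 through 5.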

@DevPranjal
Contributor Author

Thanks a lot for this snippet! I am really impressed by the ignite API here and want to learn more such use cases in the future :)

@sdesrozis
Contributor

> Thanks a lot for this snippet! I am really impressed by the ignite API here, want to learn more such use cases in the future :)

This is the right place to be :)

@sdesrozis
Contributor

@DevPranjal Are you still working on this? Thanks!

@DevPranjal
Contributor Author

Sorry for the delay... We have our finals coming up at college, so I have been busy this week. I will make the changes as soon as possible.

@DevPranjal
Contributor Author

@sdesrozis, I have added the suggested code, please review. Thanks!

@sdesrozis
Contributor

Give me a little time to merge a PR I have opened on this topic. It introduces some features we could use in this PR. I'll keep you in touch.

@sdesrozis
Contributor

@DevPranjal Could you update this PR according to #2327? Thanks!!

@DevPranjal
Contributor Author

Yes, I have been following #2327... And now that it is merged, I will update this one by tomorrow :)

@DevPranjal DevPranjal force-pushed the param_scheduler_doctest branch from 15bfc10 to 16b5b57 Compare November 20, 2021 08:39
@DevPranjal
Contributor Author

.. code-block:: python

    scheduler = PiecewiseLinear(optimizer, "lr",
                                milestones_values=[(10, 0.5), (20, 0.45), (21, 0.3), (30, 0.1), (40, 0.1)])
    # Attach to the trainer
    trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)
    #
    # Sets the learning rate to 0.5 over the first 10 iterations, then decreases linearly from 0.5 to 0.45
    # between the 10th and 20th iterations. Next there is a jump to 0.3 at the 21st iteration, the LR decreases
    # linearly from 0.3 to 0.1 between the 21st and 30th iterations, and remains 0.1 until the end.
    #

Should I remove this snippet? @sdesrozis

Also, sorry for the mess in commit history
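As an aside, the schedule described in the snippet above can be spot-checked numerically. A hypothetical standalone sketch of the same hold/interpolate/hold rule (again, an illustration, not ignite's actual code), evaluated at a few representative iterations:

```python
milestones_values = [(10, 0.5), (20, 0.45), (21, 0.3), (30, 0.1), (40, 0.1)]

def lr_at(step):
    # Hold before the first and after the last milestone; interpolate in between.
    (m_first, v_first), (m_last, v_last) = milestones_values[0], milestones_values[-1]
    if step <= m_first:
        return v_first
    if step >= m_last:
        return v_last
    for (m0, v0), (m1, v1) in zip(milestones_values, milestones_values[1:]):
        if m0 <= step <= m1:
            return v0 + (step - m0) / (m1 - m0) * (v1 - v0)

print([round(lr_at(step), 3) for step in (5, 15, 21, 25, 35)])
# [0.5, 0.475, 0.3, 0.211, 0.1]
```

The values match the prose in the snippet: 0.5 held before iteration 10, linear decrease to 0.45 by 20, the jump to 0.3 at 21, linear decrease toward 0.1 by 30, then 0.1 held to the end.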

@sdesrozis
Contributor

sdesrozis commented Nov 20, 2021

Your doctest should reflect the current behaviour, even adding a few comments to be very explicit about what is done.

Please look at #2327 carefully; it's mostly the same.

@DevPranjal
Contributor Author

@sdesrozis Gentle ping, in case you missed the last commit. I have added the comments as suggested.

@vfdev-5
Collaborator

vfdev-5 commented Nov 29, 2021

@ydcjeff can you please also review this PR?

@vfdev-5 vfdev-5 requested a review from sdesrozis November 29, 2021 14:32
Contributor

@sdesrozis sdesrozis left a comment

@DevPranjal Thank you ! LGTM !

@sdesrozis sdesrozis merged commit f8c3982 into pytorch:master Nov 29, 2021
@DevPranjal DevPranjal deleted the param_scheduler_doctest branch November 29, 2021 16:57
Ishan-Kumar2 pushed a commit to Ishan-Kumar2/ignite that referenced this pull request Dec 26, 2021

3 participants