[skip ci] Doctest for Accuracy #2287
Conversation
ignite/metrics/accuracy.py
```python
def process_function(engine, batch):
    y_pred, y = batch
    return y_pred, y

engine = Engine(process_function)
metric = Accuracy()
metric.attach(engine, 'accuracy')
```
I think we could create a default evaluator as above and hide this part of the code from the user?
cc @ydcjeff
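For context, a minimal sketch of what such a hidden default evaluator could look like; the name `default_evaluator` and the pass-through step are assumptions for illustration, not code from this PR:

```python
from ignite.engine import Engine


def _forward_batch(engine, batch):
    # Each batch is already a (y_pred, y) pair, so the step just returns it.
    return batch


# Created once (e.g. in a shared doctest setup) so every docstring example
# can attach metrics to it without repeating the engine boilerplate.
default_evaluator = Engine(_forward_batch)
```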
I feel it would be good to keep this part of the code so that the example remains transparent.
Could you also explain the alternative you are proposing?
There is a way to define a global context using this:
Lines 344 to 353 in 74dabca
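For readers without the permalink at hand, here is a hedged sketch of how such a global context can be defined with Sphinx's `doctest_global_setup` option (from `sphinx.ext.doctest`); the exact contents of ignite's `conf.py` are an assumption here:

```python
# docs/source/conf.py (illustrative excerpt, contents assumed)
extensions = [
    "sphinx.ext.doctest",
]

# This string is executed as common set-up code for the doctests, so docstring
# examples can use `default_evaluator` without redefining it every time.
doctest_global_setup = """
import torch
from ignite.engine import Engine

default_evaluator = Engine(lambda engine, batch: batch)
"""
```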
@vfdev-5 I have made the necessary change.
```python
metric_name = 'accuracy'
engine = evaluator(Accuracy(), metric_name)
```
@Chandan-h-509 thanks for the update, but (here I agree with your previous comment) it is not obvious what `evaluator` is or what its particular API, `evaluator(Accuracy(), metric_name)`, does. I think a tradeoff would be something like:
```python
metric = Accuracy()
metric.attach(evaluator, "accuracy")
preds = torch.tensor([[1, 0, 0, 1], ])
target = torch.tensor([[1, 0, 0, 0], ])
state = evaluator.run([[preds, target], ])
print(state.metrics["accuracy"])
```
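Assuming `evaluator` simply forwards the `(preds, target)` pair to the metric, this snippet would presumably print 0.75, since three of the four binary entries match.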
In addition, I think it would be more helpful to show all the possible options for computing accuracy (a rough sketch follows this list):
- binary case
- multiclass case
- multilabel case
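To make the three cases concrete, here is a rough, self-contained sketch (not the PR's actual docstring); the `evaluate` helper is hypothetical and simply forwards an already-computed `(y_pred, y)` pair through an `Engine`:

```python
import torch

from ignite.engine import Engine
from ignite.metrics import Accuracy


def evaluate(metric, y_pred, y):
    # Hypothetical helper: build a tiny evaluator whose step forwards the
    # (y_pred, y) batch unchanged, attach the metric, and run it once.
    evaluator = Engine(lambda engine, batch: batch)
    metric.attach(evaluator, "accuracy")
    state = evaluator.run([[y_pred, y]])
    return state.metrics["accuracy"]


# Binary case: predictions and targets are 0/1 tensors, compared element-wise.
print(evaluate(Accuracy(),
               torch.tensor([1, 0, 1, 1]),
               torch.tensor([1, 0, 1, 0])))  # 3 of 4 correct -> 0.75

# Multiclass case: y_pred holds per-class scores, y holds class indices.
print(evaluate(Accuracy(),
               torch.tensor([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]),
               torch.tensor([1, 0, 0])))  # argmax matches 2 of 3 -> ~0.667

# Multilabel case: a sample counts as correct only if all its labels match.
print(evaluate(Accuracy(is_multilabel=True),
               torch.tensor([[1, 0, 1], [0, 1, 1]]),
               torch.tensor([[1, 0, 1], [1, 1, 1]])))  # 1 of 2 samples -> 0.5
```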
@vfdev-5 I didn't understand the solution clearly. Could you please elaborate on it?
@Chandan-h-509 the current docstring you are proposing in this PR does not look good, especially the line `engine = evaluator(Accuracy(), metric_name)`, because it is not obvious what the `evaluator` method is or what its API looks like. I was proposing that you rework the docstring and `doctest_global_setup` so that the new docstring resembles https://github.com/pytorch/ignite/pull/2287/files#r735168912.
Additionally, you need to add docstrings for the other cases, as described in https://github.com/pytorch/ignite/pull/2287/files#r735169293.
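To illustrate the requested shape, here is a hedged sketch of how the reworked docstring's doctest could read, assuming `default_evaluator` is provided by `doctest_global_setup` (this is not the merged docstring, and the wrapper function exists only to carry the example):

```python
def accuracy_docstring_sketch():
    """Sketch of the docstring layout being asked for (assumed, not final).

    Examples:

        .. testcode::

            metric = Accuracy()
            metric.attach(default_evaluator, "accuracy")
            preds = torch.tensor([[1, 0, 0, 1]])
            target = torch.tensor([[1, 0, 0, 0]])
            state = default_evaluator.run([[preds, target]])
            print(state.metrics["accuracy"])

        .. testoutput::

            0.75
    """
```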
@Chandan-h-509 Thank you for this PR. Are you still working on it?
@sdesrozis sorry for the delay. I have been very busy these last 20 days with my college exams and other activities. I will try to complete it in a few days' time.
@Chandan-h-509 Thank you for your help. I am closing this PR since a doctest for Accuracy has been merged.
Addresses #2265
Description: Added doctests for 'Accuracy'
Check list: