This repository was archived by the owner on Jul 1, 2025. It is now read-only.

Metrics code is not as educational as it could be #2

@qwitwa

Description


The latest docs state that validation_end should return a dict, optionally with two special keys, progress_bar and log. The code in this template instead puts the metrics at the top level, so they never show up in whatever logger the user has selected. This is particularly confusing because the example code on the main docs page doesn't use the log key either, so the only way to learn that it exists is to look at the validation_end signature directly. New users are therefore led to assume all metrics are logged automatically, as in issues such as Lightning-AI/pytorch-lightning#324 (comment). I stumbled upon that issue while trying to debug the same problem, but since I started with MLFlow I initially thought I must be doing something wrong with the non-default logger, and ended up somewhat confused.
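To illustrate the pattern described above, here is a minimal sketch of what a validation_end return value would need to look like under that older PyTorch Lightning API. The class and metric names ("val_loss", "avg_val_loss") are illustrative, and the module is shown as a plain class so the dict structure is the only thing in focus; in a real project it would subclass pl.LightningModule and use tensors.

```python
class MyModule:  # in practice: a pl.LightningModule subclass
    def validation_end(self, outputs):
        # `outputs` is the list of dicts returned by validation_step.
        avg_loss = sum(o["val_loss"] for o in outputs) / len(outputs)
        return {
            # Top-level keys are NOT forwarded to the logger.
            "val_loss": avg_loss,
            # Only metrics under the special "log" key reach the configured
            # logger (TensorBoard, MLFlow, ...).
            "log": {"avg_val_loss": avg_loss},
            # "progress_bar" metrics are shown in the training progress bar.
            "progress_bar": {"avg_val_loss": avg_loss},
        }


outputs = [{"val_loss": 0.5}, {"val_loss": 0.3}]
result = MyModule().validation_end(outputs)
# result["log"] holds the metrics the logger will actually receive.
```

Returning only the top-level "val_loss" key, as the template currently does, silently skips the logger entirely.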

This has been clarified in the example code talked about in the above issue; that change should probably be ported to this repo. I also think there's a good argument this is non-obvious enough to put in the "minimal example" in the docs; if agreed I'll open an issue on the main repo for that.

Metadata

Assignees: No one assigned

Labels: enhancement (New feature or request)
