Video model validation is very slow. This would be a perfect place to enable something like TinyAutoencoder, where a given model family supports one, to cheaply decode and export per-step video samples that show the progression over the generation to e.g. Discord or raw webhook consumers.
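A minimal sketch of the idea: a per-step callback that runs a cheap decoder on the current latents every N steps and hands the preview frame to a sink (a Discord or webhook uploader). Everything here is hypothetical glue, not an existing API: `tiny_decode` stands in for a TAESD-style tiny decoder, and `on_step`, `every`, and `sink` are illustrative names.

```python
import numpy as np


def tiny_decode(latents: np.ndarray) -> np.ndarray:
    """Stand-in for a tiny autoencoder decoder (TAESD-style).

    A real implementation would run the model family's lightweight
    decoder; here we just rescale latents to uint8 "pixels" so the
    callback wiring can be shown end to end.
    """
    lo, span = latents.min(), np.ptp(latents)
    x = (latents - lo) / (span + 1e-8)
    return (x * 255).astype(np.uint8)


def on_step(step: int, latents: np.ndarray, every: int = 5, sink=None):
    """Per-step validation callback.

    Every `every` steps, decode a cheap preview of the current latents
    and pass it to `sink` (e.g. a Discord/webhook uploader). Returns
    the frame when one was produced, else None.
    """
    if step % every != 0:
        return None
    frame = tiny_decode(latents)
    if sink is not None:
        sink(step, frame)
    return frame


# Simulated denoising loop: collect which steps produced previews.
collected = []
latents = np.random.default_rng(0).standard_normal((4, 8, 8))
for step in range(12):
    on_step(step, latents, every=5, sink=lambda s, f: collected.append(s))
```

With `every=5` over 12 steps, previews fire at steps 0, 5, and 10, so consumers get a handful of intermediate frames rather than one sample at the end of a long generation.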