Hi,
first of all great work!
Could you maybe explain what image data size you used for training your unpaired GAN and for testing it? Even if you train on images of size 512×512, how are you able to produce the images you present in your paper, which are clearly not square?
Here are a few points:
- In your paper under the section "4. Generator" you write: "The size of input images is fixed at 512×512."
- But then in the Results section you write: "Although the model was trained on 512×512 inputs, we have extended it so that it can handle arbitrary resolutions".
What does that mean? You train on 512×512 inputs, but at inference time you accept inputs of any size? How does that work? I cannot find that part in your code.
- In your GitHub description you write: "I directly used Lightroom to decode the images to TIF format and used Lightroom to resize the long side of the images to 512 resolution"
But that does not produce a 512×512 image; only the longer side is set to 512, right?
To me, these three points are confusing. Could you elaborate on them?
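To make the third point concrete, here is a minimal sketch of what I understand "resize the long side to 512" to mean (the function name and logic are my own assumption about the Lightroom preprocessing, not code from your repo):

```python
# Hypothetical sketch of "resize the long side of the images to 512":
# scale both dimensions by the same factor so the longer side equals 512.
# This preserves aspect ratio, so the result is generally NOT square.
def resize_long_side(width, height, target=512):
    """Return the new (width, height) after scaling the long side to `target`."""
    scale = target / max(width, height)
    return round(width * scale), round(height * scale)

# Example: a 6000x4000 landscape photo becomes 512x341, not 512x512.
print(resize_long_side(6000, 4000))  # -> (512, 341)
```

So if this is the preprocessing, the network must be seeing non-square inputs somewhere, which brings me back to the fixed 512×512 claim in the paper.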