
Conversation

@pyup-bot
Contributor

torchvision is not pinned to a specific version.

I'm pinning it to the latest version 0.2.0 for now.

These links might come in handy: PyPI | Changelog | Repo

Changelog

0.2.0

This version introduced a functional interface to the transforms, allowing for joint random transformation of inputs and targets. We also introduced a few breaking changes to some datasets and transforms (see below for more details).

Transforms
We have introduced a functional interface for the torchvision transforms, available under torchvision.transforms.functional. This now makes it possible to do joint random transformations on inputs and targets, which is especially useful in tasks like object detection, segmentation and super resolution. For example, you can now do the following:

from torchvision import transforms
import torchvision.transforms.functional as F
import random

def my_segmentation_transform(input, target):
    # Sample crop parameters once, then apply the same crop to both images
    i, j, h, w = transforms.RandomCrop.get_params(input, (100, 100))
    input = F.crop(input, i, j, h, w)
    target = F.crop(target, i, j, h, w)
    # Flip input and target together with probability 0.5
    if random.random() > 0.5:
        input = F.hflip(input)
        target = F.hflip(target)
    input, target = F.to_tensor(input), F.to_tensor(target)
    return input, target
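The key idea above — sample the random parameters once, then apply the identical transform to both input and target — can be sketched in plain Python on nested lists (a toy illustration of the pattern, not torchvision's implementation):

```python
import random

def get_crop_params(img_h, img_w, out_h, out_w):
    # Sample a top-left corner once, mirroring RandomCrop.get_params
    i = random.randint(0, img_h - out_h)
    j = random.randint(0, img_w - out_w)
    return i, j, out_h, out_w

def crop(grid, i, j, h, w):
    # Crop a 2D list the way F.crop crops an image
    return [row[j:j + w] for row in grid[i:i + h]]

# Applying the SAME parameters to an input/target pair keeps them aligned
image  = [[r * 10 + c for c in range(5)] for r in range(5)]
target = [[r * 10 + c for c in range(5)] for r in range(5)]
i, j, h, w = get_crop_params(5, 5, 2, 2)
assert crop(image, i, j, h, w) == crop(target, i, j, h, w)
```

Drawing the parameters with a separate `get_params` call is what makes the joint transformation possible; calling two independent `RandomCrop` instances would crop input and target at different locations.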

The following transforms have also been added:
- [`F.vflip` and `RandomVerticalFlip`](http://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.RandomVerticalFlip)
- [`FiveCrop`](http://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.FiveCrop) and [`TenCrop`](http://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.TenCrop)
- Various color transformations:
  - [`ColorJitter`](http://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.ColorJitter)
  - `F.adjust_brightness`
  - `F.adjust_contrast`
  - `F.adjust_saturation`
  - `F.adjust_hue`
- `LinearTransformation` for applications such as whitening
- `Grayscale` and `RandomGrayscale`
- `Rotate` and `RandomRotation`
- `ToPILImage` now supports `RGBA` images
- `ToPILImage` now accepts a `mode` argument so you can specify which colorspace the output image should use
- `RandomResizedCrop` now accepts `scale` and `ratio` ranges as input parameters
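To give a feel for what the new color functionals do, brightness adjustment conceptually scales each pixel value by a factor and clamps the result to the valid range. A toy sketch in plain Python (torchvision's `F.adjust_brightness` operates on PIL images; this just illustrates the arithmetic):

```python
def adjust_brightness(pixels, factor):
    # factor > 1 brightens, factor < 1 darkens; values clamp to [0, 255],
    # loosely mirroring the effect of F.adjust_brightness on 8-bit channels
    return [max(0, min(255, round(p * factor))) for p in pixels]

adjust_brightness([0, 100, 200], 1.5)  # [0, 150, 255]
```

`ColorJitter` composes these adjustments (brightness, contrast, saturation, hue) with randomly sampled factors, which is why each one is also exposed as a deterministic functional.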

Documentation
Documentation is now auto-generated and published to pytorch.org

Datasets:

  • SEMEION dataset of handwritten digits added
  • PhotoTour dataset patches computed via multi-scale Harris corners are now available by setting name equal to notredame_harris, yosemite_harris or liberty_harris in the PhotoTour dataset

Bug fixes:

  • Pre-trained DenseNet models are now CPU compatible (#251)

Breaking changes:
This version also introduced some breaking changes:

  • The SVHN dataset has been made consistent with other datasets: the label for the digit 0 is now 0 instead of 10, as it was previously (see #194 for more details)
  • The labels for the unlabelled STL10 dataset are now an array filled with -1
  • The order of the input args to the deprecated Scale transform has changed from (width, height) to (height, width) to be consistent with other transforms
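Code written against the old label conventions needs a small shim when upgrading. A sketch of the migration (the helper name and sentinel constant are illustrative, not part of torchvision):

```python
def old_svhn_label_to_new(label):
    # SVHN previously labelled the digit 0 as 10; as of 0.2.0 it is 0
    return 0 if label == 10 else label

# Unlabelled STL10 examples now carry -1 instead of a real class index
UNLABELLED = -1

old_svhn_label_to_new(10)  # 0
old_svhn_label_to_new(7)   # 7
```

Conversely, any training code that filtered STL10's unlabelled split by class index should now test for the `-1` sentinel.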

0.1.9

  • Ability to switch image backends between PIL and accimage
  • Added more tests
  • Various bug fixes and doc improvements

Models

Datasets

Transforms

  • transforms.Scale now accepts a tuple as new size or single integer

Utils

  • can now pass a pad value to make_grid and save_image

0.1.8

New Features
Models

  • SqueezeNet 1.0 and 1.1 models added, along with pre-trained weights
  • Add pre-trained weights for VGG models
  • Fix location of dropout in VGG
  • torchvision.models now expose num_classes as a constructor argument
  • Add InceptionV3 model and pre-trained weights
  • Add DenseNet models and pre-trained weights

Datasets

  • Add STL10 dataset
  • Add SVHN dataset
  • Add PhotoTour dataset

Transforms and Utilities

  • transforms.Pad now allows fill colors of either number tuples, or named colors like "white"
  • add normalization options to make_grid and save_image
  • ToTensor now supports more input types
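The behaviour of `transforms.Pad` with a numeric fill — surrounding the image with a constant border — can be sketched on a single-channel 2D grid (plain-Python toy, not the PIL-based implementation, which also accepts color tuples and named colors):

```python
def pad(grid, padding, fill=0):
    # Add `padding` rows/columns of `fill` around a 2D list of pixel values,
    # analogous to transforms.Pad with a numeric fill color
    width = len(grid[0]) + 2 * padding
    padded = [[fill] * width for _ in range(padding)]
    padded += [[fill] * padding + list(row) + [fill] * padding for row in grid]
    padded += [[fill] * width for _ in range(padding)]
    return padded

pad([[5]], 1, fill=0)
# [[0, 0, 0],
#  [0, 5, 0],
#  [0, 0, 0]]
```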

Performance Improvements

Bug Fixes

  • ToPILImage now supports a single image
  • Python3 compatibility bug fixes
  • ToTensor now copes with all PIL Image types, not just RGB images
  • ImageFolder now only scans subdirectories.
  • Stray files like .DS_Store no longer block ImageFolder
  • Check for non-zero number of images in ImageFolder
  • Class subdirectories are now scanned recursively for images
  • LSUN test set loads now

0.1.7

A small release, just needed a version bump because of PyPI.

0.1.6

New Features

  • Add torchvision.models: Definitions and pre-trained models for common vision models
  • ResNet, AlexNet, VGG models added with downloadable pre-trained weights
  • Added a padding option to RandomCrop; also added transforms.Pad
  • Add MNIST dataset

Performance Fixes

  • Improved performance of the LSUN dataset

Bug Fixes

  • Some Python3 fixes
  • Bug fixes in save_image; added single-channel support

0.1.5

Introduced Datasets and Transforms.

Added common datasets:

  • COCO (Captioning and Detection)
  • LSUN Classification
  • ImageFolder
  • Imagenet-12
  • CIFAR10 and CIFAR100

Also added utilities for saving images from Tensors.

Got merge conflicts? Close this PR and delete the branch. I'll create a new PR for you.

Happy merging! 🤖

@yngtodd yngtodd merged commit e807d63 into master Jan 18, 2018
@yngtodd yngtodd deleted the pyup-pin-torchvision-0.2.0 branch January 18, 2018 04:35