All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- Tidy up cgo flags
- ctype `long` caused a compile error on macOS as noted in #44. Not working on a Linux box.
- [#111, #112, #113] Fixed concurrency memory leak
- #118 Fixed incorrect cbool conversion
- Upgraded to libtorch v2.1.0
- Upgraded to libtorch v2.0.0
- Upgraded to Go 1.20
- Switched to a hybrid of Go garbage collection and manual memory management
- Fixed #100 #102
- Fixed incorrect indexing at `dutil/Dataset.Next()`
- Added `nn.MSELoss()`
- Reworked `ts.Format()`
- Added conv2d benchmark
- Fixed #88 memory leak at `example/char-rnn`
- Added missing tensor methods `Stride()` and `MustDataPtr()`, `IsMkldnn`, `MustIsMkldnn`, `IsContiguous`, `MustIsContiguous`
- Added `ts.New()`
- Added `WsName` and `BsName` fields to `nn.LayerNorm.Config`
- [#70] Upgraded to libtorch 1.11
- Added APIs `Path.Remove()` and `Path.MustRemove()`
- Fixed `dutil/MapDataset`
- [#69] Changed package name `tensor` -> `ts` for easier coding.
- [#68] Simplified the `VarStore` struct and added more APIs for `VarStore` and `Optimizer`
- Fixed pickle with zero data length
- Added `gotch.CleanCache()` API.
- Fixed wrong `cacheDir` and switched off logging.
- Added more pickle classes to handle unpickling
- Added subpackage `pickle`. Now a Python PyTorch pretrained model can be loaded directly without any Python script conversion.
- Added `gotch.CachePath()` and `gotch.ModelUrls`
- Removed Travis CI for now.
- fixed `tensor.OfSlice()` throwing an error due to "Unsupported Go type" (e.g. `[]float32`)
- added `nn.Path.Paths()` method
- added `nn.VarStore.Summary()` method
- fixed incorrect tensor method `ts.Meshgrid` -> `Meshgrid`
- added new API `ConstantPadNdWithVal` (`ato_constant_pad_nd`) with padding value
- fixed "nn/rnn NewLSTM() clashed weight names"
- fixed some old APIs at `vision/aug/function.go`
- fixed `tensor.OfSlice()` not supporting `[]int` data type
- made `tensor.ValueGo()` return `[]int` instead of `[]int32`
- added more building block modules: Dropout, MaxPool2D, Parameter, Identity
- added `nn.BatchNorm.Forward()` with default training=true
- added exposing `tensor.Ctensor()`
- added API `tensor.FromCtensor()`
- [#67]: fixed incorrect type casting at `atc_cuda_count`
- Upgraded to libtorch 1.10
- #58 Fixed incorrect converting IValue from CIValue case 1 (Tensor).
- Added Conv3DConfig and Conv3DConfig Option
- Added missing Tensor method APIs that return multiple tensors (e.g. `tensor.Svd`).
- Dropped libtch `dummy_cuda_dependency()` and `fake_cuda_dependency()` as libtorch ldd linking is okay now.
- Export nn/scheduler DefaultSchedulerOptions()
- Added nn/scheduler NewLRScheduler()
- Added nn/conv config options
- fixed cuda error `undefined reference to 'at::cuda::warp_size()'`
- Updated libtorch to 1.9. Generated 1716 APIs. There are API naming changes, e.g. `Name1` changed to `NameDim` or `NameTensor`.
- Fixed (temporary fix) the huge number of learning groups returned from C at `libtch/tensor.go AtoGetLearningRates`
- Fixed incorrect `nn.AdamWConfig` and some documentation.
- Reworked `vision.ResNet` and `vision.DenseNet` to fix incorrect layers and a memory leak
- Changed `dutil.DataLoader.Reset()` to reshuffle when resetting the DataLoader if the flag is true
- Changed `dutil.DataLoader.Next()`. Deleted the batch size == 1 case so that, for consistency, items are always returned in a slice `[]element dtype` even with batch size = 1.
- Added `nn.CrossEntropyLoss` and `nn.BCELoss`
- Fixed `tensor.ForwardIs` to return `Tuple` and `TensorList` instead of always returning `TensorList`
- Changed exported augment options and made ColorJitter forward output dtype `uint8` for chaining with other augment options.
- #45 Fixed `init/RandInt` incorrect initialization
- #48 Fixed `init/RandInit` when initializing with mean = 0.0.
- Fixed multiple memory leaks at `vision/image.go`
- Fixed memory leak at `dutil/dataloader.go`
- Fixed multiple memory leaks at `efficientnet.go`
- Added `dataloader.Len()` method
- Fixed deleting input tensor inside function at `tensor/other.go` `tensor.CrossEntropyForLogits` and `tensor.AccuracyForLogits`
- Added a warning to `varstore.LoadPartial` when tensor shapes mismatch between source and varstore.
- Fixed incorrect mismatched-tensor-shape message at `nn.Varstore.Load`
- Fixed incorrect y -> x at `vision/aug/affine.go` getParam func
- Fixed double-free tensor at `vision/aug/function.go` Equalize func.
- Changed `vision/aug`: all input images should be `uint8` (Byte) dtype, and the transformed output has the same dtype (`uint8`) so that `Compose()` can compose any transformer options.
- Fixed wrong result of `aug.RandomAdjustSharpness`
- Fixed memory leak at `aug/function.getAffineGrid`
- Changed `vision/aug` and corrected ColorJitter
- Changed `vision/aug` and corrected Resize
- Changed `dutil/sampler` to accept batch sizes from 1.
- Fixed double free in `vision/image.go/resizePreserveAspectRatio`
Skip this tag
Same as [0.3.10]
- Update installation at README.md
- [#38] fixed JIT model
- Added Optimizer Learning Rate Schedulers
- Added AdamW Optimizer
- #24, #26: fixed memory leak.
- #30: fixed varstore.Save() randomly panicking with a segfault
- #32: nn.Seq Forward return nil tensor if length of layers = 1
- [#36]: resolved image augmentation
- #20: fixed `IValue.Value()` method returning `[]interface{}` instead of `[]Tensor`
- Added trainable JIT Module APIs and example/jit-train. Now, a Python PyTorch model (`.pt`) can be loaded and then training/fine-tuning continued in Go.
- Added `dutil` sub-package that serves PyTorch `DataSet` and `DataLoader` concepts
- Added function `gotch.CudaIfAvailable()`. NOTE that `device := gotch.NewCuda().CudaIfAvailable()` will throw an error if CUDA is not available.
- Switched back to installing libtorch inside the gotch library, as the Go `init()` function is triggered after cgo is called.
- #4 Automatically download and install Libtorch and setup environment variables.
- #6: Go-native tensor printing using the `fmt.Formatter` interface. Now a tensor can be printed like `fmt.Printf("%.3f", tensor)` (for float types)
- nn/sequential: fixed missing case number of layers = 1 causing panic
- nn/varstore: fixed nil pointer at LoadPartial due to not breaking the loop
- Changed to use `map[string]*Tensor` at `nn/varstore.go`
- Changed to use `*Path` argument of `NewLayerNorm` method at `nn/layer-norm.go`
- Lots of clean-up of return variables, i.e. retVal, err
- Updated to Pytorch C++ APIs v1.7.0
- Switched back to `lib.AtoAddParametersOld` as `ato_add_parameters` has not been implemented correctly. Using the updated API causes the optimizer to stop working.
- Converted all APIs to use pointer receivers
- Added drawing image labels at the `example/yolo` example
- Added some example images and README files for `example/yolo` and `example/neural-style-transfer`
- Added `tensor.SaveMultiNew`
- Reverted the #10 changes to the original.
- #10: `ts.Drop()` and `ts.MustDrop()` can now be called multiple times without panic