From cb5e7c223d1110d4c619d756ce243a86f4f3a212 Mon Sep 17 00:00:00 2001 From: Steffen Rochel Date: Sun, 16 Dec 2018 01:42:45 -0800 Subject: [PATCH 01/28] update with release notes for 1.4.0 release --- NEWS.md | 559 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 559 insertions(+) diff --git a/NEWS.md b/NEWS.md index 68cb2b053aec..b8532303c5f9 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,6 +1,565 @@ MXNet Change Log ================ +## 1.4.0 +### New Features +#### Java Inference API + +Model inference is run and managed by software engineers in a production eco-system which is built with tools and frameworks that use Java/Scala as a primary language. Inference on a trained model has two different use-cases: + + 1. Real time or Online Inference - tasks that require immediate feedback, such as fraud detection + 2. Batch or Offline Inference - tasks that don't require immediate feedback, these are use-cases where you have massive amounts of data and want to run Inference or pre-compute inference results +Batch Inference is performed on big data platforms such as Spark using Scala or Java while Real time Inference is typically performed and deployed on popular web frameworks such as Tomcat, Netty, Jetty, etc. which use Java. With this project, we want to build a new set of APIs which are Java friendly, compatible with Java 7+, are easy to use for inference, and lowers the entry barrier of consuming MXNet for production use-cases. More details can be found at the [Java Inference API document](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Java+Inference+API). + +#### Julia API + +MXNet.jl is the Julia package of Apache MXNet. MXNet.jl brings flexible and efficient GPU computing and state-of-art deep learning to Julia. Some highlight of features include: + + * Efficient tensor/matrix computation across multiple devices, including multiple CPUs, GPUs and distributed server nodes. + * Flexible symbolic manipulation to composite and construct state-of-the-art deep learning models. + +#### Control Flow Operators + +Today we observe more and more dynamic neural network models, especially in the fields of natural language processing and graph analysis. The dynamics in these models come from multiple sources, including: + + * Models are expressed with control flow, such as conditions and loops; + * NDArrays in a model may have dynamic shapes, meaning the NDArrays of a model or some of the NDArrays have different shapes for different batches; + * Models may want to use more dynamic data structures, such as lists or dictionaries. +It's natural to express the dynamic models in frameworks with the imperative programming interface (e.g., Gluon, Pytorch, TensorFlow Eager). In this interface, users can simply use Python control flows, or NDArrays with any shape at any moment, or use Python lists and dictionaries to store data as they want. The problem of this approach is that it highly depends on the front-end programming languages (mainly Python). A model implemented in one language can only run in the same language. A common use case is that machine learning scientists want to develop their models in Python but engineers who deploy the models usually have to use a different language (e.g., Java and C). Gluon tries to close the gap between the model development and deployment. 
Machine learning scientists design and implement their models in Python with the imperative interface and Gluon turns the implementations into symbolic implementations by simply invoking hybridize() for model exporting.
+
+The goal of this project is to enhance Gluon to turn a dynamic neural network into a static computation graph (where the dynamic control flows are expressed by control flow operators) with Gluon hybridization and export them for deployment. More information can be found at [Optimize dynamic neural network models with control flow operators](https://cwiki.apache.org/confluence/display/MXNET/Optimize+dynamic+neural+network+models+with+control+flow+operators)
+
+#### SVRG Optimization
+
+SVRG stands for Stochastic Variance Reduced Gradient, which was first introduced in the 2013 paper [Accelerating Stochastic Gradient Descent using Predictive Variance Reduction](https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf). It is an optimization technique that complements SGD. SGD is known for large-scale optimization, but it suffers from slow asymptotic convergence due to its inherent variance: SGD approximates the full gradient using a small batch of samples, which introduces variance, so in order to converge it often needs to start with a small learning rate. SVRG remedies the problem by keeping a version of the estimated weights that is close to the optimal parameters and maintaining an average of the full gradient over a full pass of the data; this average is calculated w.r.t. the parameters of the last m-th epoch. SVRG has provable guarantees for strongly convex smooth functions; a more detailed proof can be found in section 3 of the paper. SVRG uses a different update rule: gradients w.r.t. the current parameters, minus gradients w.r.t. the parameters from the last m-th epoch, plus the average of gradients over all data. Key characteristics of SVRG:
+
+ * Explicit variance reduction
+ * Ability to use relatively large learning rate compared to SGD, which leads to faster convergence.
+More details can be found at [SVRG Optimization in MXNet Python Module](https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+backend+libraries)
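+
+As a rough illustration, here is a minimal sketch of driving this from Python with the `SVRGModule` added in #12376 (under `mxnet.contrib.svrg_optimization`); it mirrors the `mx.mod.Module` API, and the exact names and defaults below should be read as indicative rather than authoritative:
+
+```python
+# Hedged sketch: update_freq controls how often (in epochs) the full
+# gradient over the whole dataset is recomputed.
+import mxnet as mx
+from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
+
+data = mx.sym.Variable("data")
+label = mx.sym.Variable("softmax_label")
+net = mx.sym.SoftmaxOutput(mx.sym.FullyConnected(data, num_hidden=10), label)
+
+mod = SVRGModule(symbol=net, data_names=["data"], label_names=["softmax_label"],
+                 update_freq=2)  # recompute the full gradient every 2 epochs
+# mod.fit(train_iter, num_epoch=10, optimizer="sgd",
+#         optimizer_params={"learning_rate": 0.025})
+```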
+
+#### Subgraph API
+
+MXNet can integrate with many different kinds of backend libraries, including TVM, MKLDNN, TensorRT, Intel nGraph and more. These backends generally support a limited number of operators, so running computation in a model usually involves interaction between backend-supported operators and MXNet operators. These backend libraries share some common requirements:
+
+* TVM, MKLDNN and nGraph use customized data formats, so interaction between these backends and MXNet requires data format conversion.
+* TVM, MKLDNN, TensorRT and nGraph fuse operators.
+Integration with these backends should happen at the granularity of subgraphs instead of at the granularity of operators. To fuse operators, it's obvious that we need to divide a graph into subgraphs so that the operators in a subgraph can be fused into a single operator. To handle customized data formats, we should partition a computation graph into subgraphs as well, where each subgraph contains only TVM, MKLDNN or nGraph operators. In this way, MXNet converts data formats only when entering such a subgraph, and the operators inside a subgraph handle format conversion themselves if necessary. This makes interaction of TVM and MKLDNN with MXNet much easier. Neither the MXNet executor nor the MXNet operators need to deal with customized data formats. Even though invoking these libraries from MXNet requires similar steps, the partitioning rule and the subgraph execution of these backends can be different. As such, we define the following interface for backends to customize graph partitioning and subgraph execution inside an operator. More details can be found at PR 12157 and [Subgraph API](https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+backend+libraries).
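+
+As a concrete sketch of what this looks like from user code, the snippet below routes a symbolic model through a subgraph backend via the `MXNET_SUBGRAPH_BACKEND` environment variable (introduced later in these notes); the backend name is used here purely for illustration:
+
+```python
+# Minimal sketch: backend-supported operator chains are grouped into
+# subgraphs when the symbol is bound. Set the variable before binding.
+import os
+os.environ["MXNET_SUBGRAPH_BACKEND"] = "MKLDNN"
+
+import mxnet as mx
+
+data = mx.sym.Variable("data")
+net = mx.sym.softmax(mx.sym.FullyConnected(data, num_hidden=10))
+
+# Binding triggers graph partitioning for the chosen backend.
+exe = net.simple_bind(ctx=mx.cpu(), data=(32, 100))
+```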
+
+#### MXNet nGraph integration
+
+As the diversity of deep learning hardware accelerators increase, it is important to have an efficient abstraction layer so developers can avoid having to enable each accelerator/compute separately. Intel nGraph enables that vision. The primary goal of this integration is to provide a seamless development and deployment experience to data scientists and machine learning engineers to leverage Intel nGraph ecosystem with MXNet. As Subgraph API seamlessly integrates with MXNet frontend API, users should just be able to use or switch nGraph backend with any existing MXNet scripts, models and deployments using the symbolic interface. For more details see [MXNet nGraph integration using subgraph backend interface](https://cwiki.apache.org/confluence/display/MXNET/MXNet+nGraph+integration+using+subgraph+backend+interface)
+
+#### JVM Memory Management
+
+The MXNet Scala and Java APIs use native memory to manage NDArray, Symbol, Executor and DataIterators through the MXNet C API, which provides the interfaces to create, access and free these objects. MXNet Scala has corresponding wrappers and APIs that hold pointer references to the native memory. Before this project, JVM users (Scala/Clojure/Java...) of Apache MXNet had to manage MXNet objects manually using the dispose pattern. There are a few usability problems with this approach:
+
+* Users have to track MXNet objects manually and remember to call dispose. This is not idiomatic Java and not user-friendly; quoting a user, "this feels like I am writing C++ code which I stopped ages ago".
+* It leads to memory leaks if dispose is not called.
+* Many objects in MXNet-Scala are managed in native memory and need dispose called on them as well.
+* Code is bloated with dispose() methods.
+* Memory leaks are hard to debug.
+Goals of the project are to provide MXNet JVM users automated memory management that releases native memory when there are no more references to the corresponding JVM objects, and to manage both GPU and CPU memory automatically without performance degradation. More details can be found here: [JVM Memory Management](https://cwiki.apache.org/confluence/display/MXNET/JVM+Memory+Management)
+
+#### Topology-aware AllReduce
+For distributed training, the ring Reduce communication pattern used by NCCL and the parameter-server Reduce currently used in MXNet are not optimal for small batch sizes on p3.16xlarge instances with 8 GPUs. The approach is based on the idea of using trees to perform the Reduce and Broadcast: we can use minimum spanning trees to build a binary tree Reduce communication pattern, following the paper by Wang, Li, Edo and Smola [1]. Our strategy is to use:
+
+ * a single tree (latency-optimal for small messages) to handle Reduce on small messages
+ * multiple trees (bandwidth-optimal for large messages) to handle large messages.
+
+More details can be found here: [Topology-aware AllReduce](https://cwiki.apache.org/confluence/display/MXNET/Single+machine+All+Reduce+Topology-aware+Communication)
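+
+A hedged sketch of enabling this from Python; the `MXNET_KVSTORE_USETREE` variable name is an assumption taken from the tree-reduce work rather than from these notes, so treat it as illustrative only:
+
+```python
+# Sketch: toggle tree-based reduce before creating the kvstore used
+# for single-machine multi-GPU training.
+import os
+os.environ["MXNET_KVSTORE_USETREE"] = "1"  # assumed variable name
+
+import mxnet as mx
+kv = mx.kv.create("device")  # multi-GPU kvstore that performs the reduce
+```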
+
+### New Operators
+
+* Add trigonometric operators (#12424)
+* [MXNET-807] Support integer label type in ctc_loss operator (#12468)
+* [MXNET-876] make CachedOp a normal operator (#11641)
+* Add index_copy() operator (#12810)
+* getnnz operator for CSR matrix (#12908)
+* [MXNET-1173] Debug operators - isfinite, isinf and isnan (#12967)
+* sample_like operators (#13034)
+* Add gauss err function operator (#13229)
+* [MXNET-1030] Cosine Embedding Loss (#12750)
+* Add bytearray support back to imdecode (#12855, #12868) (#12912)
+* Add Psroipooling CPU implementation (#12738)
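+
+As a quick taste of one of these, a hedged sketch of the new index_copy() operator; the argument order is an assumption based on the operator's description, not a confirmed signature:
+
+```python
+# Sketch: contrib.index_copy copies rows of `new` into `old` at `index`.
+import mxnet as mx
+
+old = mx.nd.zeros((5, 3))
+new = mx.nd.ones((2, 3))
+index = mx.nd.array([0, 4])
+out = mx.nd.contrib.index_copy(old, index, new)  # rows 0 and 4 become ones
+```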
+
+### Feature improvements
+#### Operator
+* [MXNET-912] Refactoring ctc loss operator (#12637)
+* Refactor L2_normalization (#13059)
+* customized take forward for CPU (#12997)
+* Allow stop of arange to be inferred from dims. (#12064)
+* Make check_isfinite, check_scale optional in clip_global_norm (#12042)
+* add FListInputNames attribute to softmax_cross_entropy (#12701)
+* [MXNET-867] Pooling1D with same padding (#12594)
+* Add support for more req patterns for bilinear sampler backward (#12386)
+* [MXNET-882] Support for N-d arrays added to diag op. (#12430)
+
+#### Optimizer
+* Adagrad optimizer with row-wise learning rate (#12365)
+* Adding python SVRGModule for performing SVRG Optimization Logic (#12376)
+
+#### Sparse
+
+* Fall back when sparse arrays are passed to MKLDNN-enabled operators (#11664)
+* further bump up tolerance for sparse dot (#12527)
+* Sparse support for logic ops (#12860)
+* sparse support for take(csr, axis=0) (#12889)
+
+#### ONNX
+
+* ONNX export - Clip operator (#12457)
+* Onnx version update from 1.2.1 to 1.3 in CI (#12633)
+* Use modern onnx API to load model from file (#12777)
+* [MXNET-892] ONNX export/import: DepthToSpace, SpaceToDepth operators (#12731)
+* ONNX export: Fully connected operator w/o bias, ReduceSum, Square (#12646)
+* ONNX export/import: Selu (#12785)
+* ONNX export: Cleanup (#12878)
+* Added operators: Selu, DepthToSpace, SpaceToDepth, HardSigmoid, Logical operators
+
+#### MKLDNN
+
+* MKLDNN Forward FullyConnected op cache (#11611)
+* [MXNET-753] Fallback when using non-MKLDNN supported operators (#12019)
+* MKLDNN Backward op cache (#11301)
+* Implement mkldnn convolution fusion and quantization. (#12530)
+* Improve mkldnn fallback. (#12663)
+* Update MKL-DNN dependency (#12953)
+* Update MKLML dependency (#13181)
+* [MXNET-33] Enhance mkldnn pooling to support full convention (#11047)
+
+#### Inference
+* [MXNET-910] Multithreading inference. (#12456)
+* Tweaked the copy in c_predict_api.h (#12600)
+
+#### Other
+* support for upper triangular matrices in linalg (#12904)
+* [MXNET-918] Introduce Random module / Refact code generation (#13038)
+* [MXNET-779] Add DLPack Transformation API (#12047)
+* Draw labels name (#9496)
+* Change the way NDArrayIter handle the last batch (#12285)
+* Revert Change the way NDArrayIter handle the last batch (#12537)
+* Track epoch metric separately (#12182)
+* Set correct update on kvstore flag in dist_device_sync mode (#12786)
+
+### Frontend API updates
+
+#### Gluon
+
+* Update basic_layers.py (#13299)
+* Gluon LSTM Projection and Clipping Support (#13056)
+* Make Gluon download function to be atomic (#12572)
+* [MXNET-1004] Poisson NegativeLog Likelihood loss (#12697)
+* add activation information for mxnet.gluon.nn._Conv (#12354)
+* Gluon DataLoader: avoid recursionlimit error (#12622)
+
+#### Symbol
+* Addressed duplicate object reference issues (#13214)
+* Throw exception if MXSymbolInferShape fails. (#12733)
+* Infer dtype in SymbolBlock import from input symbol (#12412)
+
+### Language API updates
+#### Java
+* [MXNET-1198] MXNet Java API (#13162)
+
+#### R
+* Refactor R Optimizers to fix memory leak (#11374)
+* Add new Vignettes to the R package
+  * Char-level Language modeling (#12670)
+  * Multidimensional Time series forecasting (#12664)
+* Fix broken Examples and tutorials
+  * Tutorial on neural network introduction (#12117)
+  * CGAN example (#12283)
+  * Test classification with LSTMs (#12263)
+
+#### Scala
+* explain the details for Scala Experimental (#12348)
+* [MXNET-873] Bring Clojure Package Inline with New DataDesc and Layout in Scala Package (#12387)
+* [MXNET-716] Adding Scala Inference Benchmarks (#12721)
+* [MXNET-716][MIRROR #12723] Scala Benchmark Extension pack (#12758)
+* NativeResource Management in Scala (#12647)
+* Ignore generated scala files. (#12928)
+* use ResourceScope in Model/Trainer/FeedForward.scala (#12882)
+* [MXNET-1180] Scala Image API (#12995)
+* ONNX export: Scalar, Reshape - Set appropriate tensor type (#13067)
+* Port of scala Image API to clojure (#13107)
+* update log4j version of Scala package (#13131)
+* review require() usages to add meaningful messages. (#12570)
+
+#### Clojure
+* Introduction to Clojure-MXNet video link. (#12754)
+* Improve the Clojure Package README to Make it Easier to Get Started (#12881)
+
+#### Perl
+* [MXNET-1026] [Perl] Sync with recent changes in Python's API (#12739)
+
+#### Julia
+* import Julia binding
+
+### Performance improvements
+* update mshadow for omp acceleration when nvcc is not present (#12674)
+* [MXNET-860] Avoid implicit double conversions (#12361)
+
+### Bug fixes
+* Fix a bug in where op with 1-D input (#12325)
+* [MXNET-825] Fix CGAN R Example with MNIST dataset (#12283)
+* [MXNET-535] Fix bugs in LR Schedulers and add warmup (#11234)
+* Fix speech recognition example (#12291)
+* fix bug in 'device' type kvstore (#12350)
+* fix search result 404s (#12414)
+* fix help in imread (#12420)
+* fix render issue on < and > (#12482)
+* [MXNET-853] Fix for smooth_l1 operator scalar default value (#12284)
+* fix subscribe links, remove disabled icons (#12474)
+* Fix broken URLs (#12508)
+* Fix/public internal header (#12374)
+* Fix lazy record io when used with dataloader and multi_worker > 0 (#12554)
+* Fix error in try/finally block for blc (#12561)
+* add cudnn_off parameter to SpatialTransformer Op and fix the inconsistency between CPU & GPU code (#12557)
+* [MXNET-798] Fix the dtype cast from non float32 in Gradient computation (#12290)
+* Fix CodeCovs proper commit detection (#12551)
+* add TensorRT tutorial to index and fix ToC (#12587)
+* Fixed typo in c_predict_api.cc (#12601)
+* Fix typo in profiler.h (#12599)
+* Fixed NoSuchMethodError for Jenkins Job for MBCC (#12618)
+* [MXNET-922] Fix memleak in profiler (#12499)
+* [MXNET-969] Fix buffer overflow in RNNOp (#12603)
+* Fixed param coercion of clojure executor/forward (#12627) (#12630)
+* Fix version dropdown behavior (#12632)
+* Fix reference to wrong function (#12644)
+* Fix the location of the tutorial of control flow operators (#12638)
+* fix bug, issue 12613 (#12614)
+* [MXNET-780] Fix exception handling bug (#12051)
+* fix bug in prelu, issue 12061 (#12660)
+* [MXNET-833] [R] Char-level RNN tutorial fix (#12670)
+* Fix static / dynamic linking of gperftools and jemalloc (#12714)
+* Fix #12672, importing numpy scalars (zero-dimensional arrays) (#12678)
+* [MXNET-623] Fixing an integer overflow bug in large NDArray (#11742)
+* fix benchmark on control flow operators. (#12693)
+* Fix regression in MKLDNN caused by PR 12019 (#12740)
+* Fixed broken link for Baidu's WARP CTC (#12774)
+* fix cnn visualization tutorial (#12719)
+* [MXNET-979] Add fix_beta support in BatchNorm (#12625)
+* R fix metric shape (#12776)
+* Revert [MXNET-979] Add fix_beta support in BatchNorm (#12625) (#12789)
+* Fix mismatch shapes (#12793)
+* fixed symbols naming in RNNCell, LSTMCell, GRUCell (#12794)
+* Fixed `__setattr__` method of `_MXClassPropertyMetaClass` (#12811)
+* Fixed regex for matching platform type in Scala Benchmark scripts (#12826)
+* Fix broken links (#12856)
+* Fix Flaky Topk (#12798)
+* [MXNET-1033] Fix a bug in MultiboxTarget GPU implementation (#12840)
+* [MXNET-1107] Fix CPUPinned unexpected behaviour (#12031)
+* Fix `__all__` in optimizer/optimizer.py (#12886)
+* Fix Batch input issue with Scala Benchmark (#12848)
+* fix type inference in index_copy.
(#12890) +* fix the paths issue for downloading script (#12913) +* fix indpt[0] for take(csr) (#12927) +* Fix the bug of assigning large integer to NDArray (#12921) +* fix Sphinx errors for tutorials and install ToCs (#12945) +* fix readme (#13082) +* Fix variable name in tutorial code snippet (#13052) +* Fix example for mxnet.nd.contrib.cond and fix typo in src/engine (#12954) +* Fix a typo in operator guide (#13115) +* Fix variational autoencoder example (#12880) +* Fix problem with some OSX not handling the cast on imDecode (#13207) +* Sphinx failure fixes (#13213) +* Revert Sphinx failure fixes (#13230) +* [MXNET-953] Fix oob memory read (#12631) +* Fix Sphinx error in ONNX file (#13251) +* [Example] Fixing Gradcam implementation (#13196) +* fix train mnist for inception-bn and resnet (#13239) +* Fix a bug in index_copy (#13218) +* Fix Sphinx errors in box_nms (#13261) +* Fix Sphinx errors (#13252) +* fix the flag (#13293) +* Made fixes to sparse.py and sparse.md (#13305) +* [Example] Gradcam- Fixing a link (#13307) +* Manually track num_max_thread (#12380) +* [Issue #11912] throw mxnet exceptions when decoding invalid images. (#12999) +* Undefined name: load_model() --> utils.load_model() (#12867) +* Change the way NDArrayIter handle the last batch (#12545) +* Add embedding to print_summary (#12796) +* allow foreach on input with 0 length (#12471) +* [MXNET-360]auto convert str to bytes in img.imdecode when py3 (#10697) + +### Licensing updates +* Add license headers to R-package (#12559) +* License header (#13178) +* add url and license to clojure package project (#13304) + +### Improvements +#### Tutorial +* [MXNET-422] Distributed training tutorial (#10955) +* Add a tutorial for control flow operators. (#12340) +* Add tutorial Gotchas using NumPy (#12007) +* Updated Symbol tutorial with Gluon (#12190) +* improve tutorial redirection (#12607) +* Include missing import in TensorRT tutorial (#12609) +* Update Operator Implementation Tutorial (#12230) +* add a tutorial for the subgraph API. (#12698) +* Improve clojure tutorial (#12974) +* Update scala intellij tutorial (#12827) +* [Example] Gradcam consolidation in tutorial (#13255) +* [MXNET-1203] Tutorial infogan (#13144) +* [MXNET-703] Add a TensorRT walkthrough (#12548) + +#### Example +* update C++ example so it is easier to run (#12397) +* [MXNET-580] Add SN-GAN example (#12419) +* [MXNET-637] Multidimensional LSTM example for MXNetR (#12664) +* [MXNET-982] Provide example to illustrate usage of CSVIter in C++ API (#12636) +* [MXNET-947] Expand scala imclassification example with resnet (#12639) +* MKL-DNN Quantization Examples and README (#12808) +* Extending the DCGAN example implemented by gluon API to provide a more straight-forward evaluation on the generated image (#12790) +* [MXNET-1017] Updating the readme file for cpp-package and adding readme file for example directory. 
(#12773) +* Update tree lstm example (#12960) +* Update bilstm integer array sorting example (#12929) +* Updated / Deleted some examples (#12968) +* Update module example (#12961) +* Update adversary attack generation example (#12918) +* Update Gluon example folder (#12951) +* Update dec example (#12950) +* Updated capsnet example (#12934) +* Updates to several examples (#13068) +* Update multi-task learning example (#12964) +* Remove obsolete memory cost example (#13235) +* [Example] Update cpp example README (#13280) +* [Example]update NER example readme on module prediction (#13184) +* Update proposal_target.py (#12709) +* Removing the re-size for validation data, which breaking the validation accuracy of CIFAR training (#12362) + +#### Documentation +* Update ONNX API docs references (#12317) +* Documentation update related to sparse support (#12367) +* Edit shape.array doc and some style improvements (#12162) +* fixed docs/website build checkout bug (#12413) +* Add Python API docs for test_utils and visualization (#12455) +* Fix the installation doc for MKL-DNN backend (#12534) +* Added comment to docs regarding ToTensor transform (#12186) +* Pinned dockcross to a tag with fixed ABI for RPi (#12588) +* Refine the documentation of im2rec (#12606) +* Update and modify Windows docs (#12620) +* update docs to list cmake required for build from source page (#12592) +* update the distributed_training document (#12626) +* Add docstring in im2rec.py (#12621) +* [Doc] Change the description for pip packages (#12584) +* Change dependencies documentation opencv2-->opencv (#12654) +* Add documents for two new environment variables for memory pool. (#12668) +* Scala Docs - Replace old Symbol api usages (#12759) +* add/update infer_range docs (#12879) +* Fix typo in formula in docstring for GRU cell and layer and add clarification to description (gluon.rnn) (#12896) +* Fix the operator API documentation (#12942) +* fix broken docs (#12871) +* fix mac r install and windows python build from source docs (#12919) +* Document the newly added env variable (#13049) +* Add documentation on GPU performance on Quantization example (#13145) +* Fix Sphinx python docstring formatting error. (#13177) +* [Doc] Fix repo paths in Ubuntu build doc (#13101) +* Fix Sphinx document parsing error. (#13195) +* Fix #13090, Add image.imread to python API doc. (#13176) +* Fix Sphinx docstring formatting error. (#13004, #13005, #13006) (#13175) +* Fix #12944, Fix Sphinx python docstring formatting error. (#13174) +* Fix #13013, Fix Sphinx python docstring error. (#13173) +* Fixed Sparse astype doc string formatting error (#13171) +* Fixed Documentation issues (#13215) +* update the doc (#13205) +* Fix Sphinx doc errors (#13170) +* Fix Sphinx python docstring error: initializer.InitDesc (#12939) (#13148) +* Fix Sphinx python docstring error: text contrib module (#12949) (#13149) +* Fix Sphinx python docstrings (#13160) +* Add Java API docs generation (#13071) +* Fix scaladoc build errors (#13189) +* Add missing documentations for getnnz (#13128) +* Addressed ONNX module documentation warnings and added notes for short-form representation (#13259) +* Doc fixes (#13256) +* Addressed doc issues (#13165) +* stop gap fix to let website builds through; scaladoc fix pending (#13298) +* Fix Sphinx python docstring formatting error. (#13194) +* Visualization doc fix. 
Added notes for shortform (#13291) +* [Example] Add docstring for test optimizer and test score (#13286) +* Fix descriptions in scaladocs for macro ndarray/sybmol APIs (#13210) +* Sphinx error reduction (#12323) +* Sphinx errors in Gluon (#13275) +* Update env_var.md (#12702) +* Updated the Instructions for use of the label bot (#13192) +* update the README (#13186) +* Added/changed file_name, brief description comments in some files (#13033) +* Add more models to benchmark_score (#12780) +* Add resnet50-v1 to benchmark_score (#12595) + +#### Test +* Add cloverage codecov report to CI for clojure (#12335) +* Disable flaky test test_operator.test_dropout (#12330) +* add initializer test (#12196) +* [MXAPPS-581] Disable a long test in the SD nightly. (#12326) +* [MXAPPS-581] Disable a long test in the SD nightly. (#12326) (#12339) +* Disable flaky test test_ndarray.test_order (#12311) +* fix flaky test: test_broadcast_binary_op (#11875) +* [MXAPPS-581] Disable an additional long test in the SD nightly (#12343) +* [MXNET-690] Add tests for initializers in R (#12360) +* Disabled flaky test: test_mkldnn.test_activation (#12378) +* support softmin operator with unit test (#12306) +* Revert Revert Disable kvstore test (#11798) (#12279) (#12379) +* fixed flaky test issue for test_operator_gpu.test_convolution_grouping (#12385) +* fixed flaky test issue for test_operator_gpu.test_depthwise_convolution (#12402) +* adjust tolerance levels of test_l2_normalization (#12429) +* Revert fixed flaky test issue for test_operator_gpu.test_depthwise_convolution (#12402) (#12441) +* remove flaky test and add consistency test for stable testing (#12427) +* Fix flaky test test_operator_gpu.test_batchnorm_with_type (#11873) +* [MXNET-909] Disable tvm_bridge test (#12476) +* Fix flaky test: test_mkldnn.test_activation #12377 (#12418) +* Temporarily disable flaky tests (#12513) +* Temporarily disable flaky tests (#12520) +* Revert Fix flaky test: test_mkldnn.test_activation #12377 (#12418) (#12516) +* [MXNET-851] Test coverage metrics for R-package (#12391) +* redirecting navigation items to latest info (#12540) +* Disable installation nightly test (#12571) +* [MXNET-952] Check for correlation kernel size along with unittest (#12558) +* [MXNET-968] Fix MacOS python tests (#12590) +* [MXNET-908] Enable python tests in Travis (#12550) +* fix test_activation by lowering threshold + validate eps for check_numeric_gradient (#12560) +* Enable gluon multi worker data loader test (#12315) +* Remove fixed seed for test_ctc_loss (#12686) +* fix for test order (#12358) +* [MXNET-500]Test cases improvement for MKLDNN on Gluon (#10921) +* Enable test_gluon.test_export (#12688) +* Disable test batchnorm slice (#12716) +* Reenable test_gluon.test_conv (#12718) +* Update packages and tests in the straight dope nightly (#12744) +* Fix failing GPU test on single GPU host (kvstore) (#12726) +* [MXNET-915] Java Inference API core wrappers and tests (#12757) +* Disabled flaky test: test_mkldnn.test_Deconvolution (#12770) +* Re-enables test_dropout (#12717) +* [MXNET-707] Add unit test for mxnet to coreml converter (#11952) +* Added context object to run TestCharRnn example (#12841) +* [MXNET-703] Show perf info for TensorRT during tests (#12656) +* [MXNET-793] ★ Virtualized testing in CI with QEMU ★ (#12094) +* Disabled flaky test: test_gluon_gpu.test_slice_batchnorm_reshape_batchnorm (#12768) +* Refactor mkldnn test files (#12410) +* enable batchnorm unit tests (#12986) +* Disable flaky test test_operator.test_dropout (#13057) +* 
Disable flaky test test_prelu (#13060) +* [MXNET-793] Virtual testing with Qemu, refinement and extract test results to root MXNet folder (#13065) +* Disable travis tests (#13137) +* [MXNET-1194] Reenable nightly tutorials tests for Python2 and Python3 (#13099) +* Refactor kvstore test (#13140) +* Disable Flaky test test_operator.test_clip (#12902) +* Tool to ease compilation and reproduction of test results (#13202) +* Implemented a regression unit test for #11793 (#12975) +* Fix test failure due to hybridize call in test_gluon_rnn.test_layer_fill_shape (#13043) +* adding unit test for MKLDNN FullyConnected operator (#12985) +* enabling test_dropout after fixing flaky issue (#13276) +* set proper atol for check_with_uniform (#12313) + +#### Website +* adding apache conf promo to home page (#12347) +* Consistent website theme and custom 404 (#12426) +* update apachecon links to https (#12521) +* [HOLD] 1.3.0 release website updates (#12509) +* add mentions of the gluon toolkits and links to resources (#12667) +* remove apachecon promo (#12695) +* [MXNet-1002] Add GluonCV and NLP tookits, Keras, and developer wiki to navigation (#12704) + +#### MXNet Distributions +* Make the output of ci/docker/install/ubuntu_mklml.sh less verbose (#12422) +* Fix tvm dependency for docker (#12479) +* [MXNET-703] Add TensorRT runtime Dockerfile (#12549) +* [MXNET-951] Python dockerfiles built on pip binaries and build/release script (#12556) +* Change numpy version to 1.15.2 in python and docker install requirements (#12711) +* Add mkl-dnn to docker install method (#12643) +* Fix docker cleanup race condition (#13092) +* Bugfix in ci/docker_cache.py (#13249) +* Update PyPI version number (#11773) +* update download links to apache distros (#12617) + +#### Installation +* Installation instructions consolidation (#12388) +* Refine mxnet python installation (#12696) +* R install instructions update for macOS (#12832) +* remove legacy installation of Roxygen2 5.0 and add R-specific clean target (#12993) (#12998) +* Force APT cache update before executing install (#13285) +* Make the Ubuntu scripts executable after download. (#12180) +* replacing windows setup with newer instructions (#12504) +* Updated download links and verification instructions (#12651) +* Remove pip overwrites (#12604) + +#### Build and CI +* [MXNET-908] Enable minimal OSX Travis build (#12462) +* Used jom for parallel windows builds (#12533) +* [MXNET-950] Enable parallel R dep builds in CI (#12552) +* Speed up CI windows builds (#12563) +* [MXNET-908] Speed up travis builds to avoid timeouts (#12706) +* simplify mac mkldnn build (#12724) +* [MXNET-674] Speed up GPU builds in CI (#12782) +* Improved git reset for CI builds (#12784) +* Improve cpp-package example project build files. 
(#13093)
+* Add --no-cache option to build.py when building containers (#13182)
+* Addressed sphinx build issue (#13246)
+* Tighten up PyLint directives again (#12322)
+* [MXNET-859] Add a clang-tidy stage to CI (#12282)
+* A solution to prevent zombie containers locally and in CI (#12381)
+* [MXNET-696][PYTHON][UNDEFINED NAME] import logging in ci/util.py (#12488)
+* [MXNET-703] Static linking for libprotobuf with TensorRT (#12475)
+* Remove regression checks for website links (#12507)
+* [MXNET-953] - Add ASAN sanitizer, Enable in CI (#12370)
+* allow custom path and static linking for custom mallocs in make (#12645)
+* Correct PR branch detection in code coverage (#12615)
+* Update osx.mk - Added apple to USE_BLAS comment (#12819)
+* [MXNET-953] Correct ASAN cflags flag (#12659)
+* [MXNET-1025] Add Jetpack 3.3 support to Jetson (#12735)
+* Fail the broken link job when broken links are found (#12905)
+* removed unused header (#13066)
+* Maven Surefire bug workaround (#13081)
+* Add Turing and Volta support to arch_name (#13168)
+* Moves f16c autodetection to its own cmake module (#12331)
+* la_op_inline.h to la_op-inl.h for consistency (#13045)
+* [MXNET-793] Virtualized ARMv7 with Qemu CI integration (#13203)
+* remove unused variable rotateM_ (#10803)
+* Separate refactoring from #12276 in a prior PR (#12296)
+* [MXNET-860] Remove std::moves that have no effect (#12730)
+* [MXNET-860] Use emplace where helpful (#12694)
+* Enable C++ coverage (#12642)
+* [MXNET-860] Update to modern nullptr usage (#12352)
+* [MXNET-860] Reduce redundant copies, check for regressions with clang-tidy (#12355)
+
+
+#### 3rd party
+##### TVM:
+* Updated tvm submodule head (#12764)
+* Updated tvm submodule head (#12448)
+##### CUDNN:
+* [MXNET-1179] Enforce deterministic algorithms in convolution layers (#12992)
+* CudnnFind() usage improvements (#12804)
+* Add option for automatic downcasting dtype for cudnn to allow using Tensorcore for fp32 (#12722)
+##### Horovod:
+* [MXNET-1111] Remove CPUPinned in ImageRecordIter (#12666)
+
+### Deprecations
+* contrib_CTCLoss is deprecated; a deprecation message was added to the operator (#13042)
+### Other
+* Updating news, readme files and bumping master version to 1.3.1 (#12525)
+* dd new name to CONTRIBUTORS.md (#12763)
+* Update contribute.md (#12685)
+* Updated CONTRIBUTORS.md to include lebeg and gigasquid, moved mabreu to committers section (#12766)
+* Update CONTRIBUTORS.md (#12996)
+* Updated CONTRIBUTORS.md to include mxnet-label-bot (#13048)
+
+### How to build MXNet
+Please follow the instructions at https://mxnet.incubator.apache.org/install/index.html
+
+### List of submodules used by Apache MXNet (Incubating) and when they were updated last
+Submodule@commit ID :: Last updated by MXNet :: Last update in submodule
+
+* cub@ :: Jul 31, 2017 :: Jul 31, 2017
+* dlpack@ :: Oct 30, 2017 :: Aug 23, 2018
+* dmlc-core@ :: Aug 15, 2018 :: Nov 15, 2018
+* googletest@ :: July 14, 2016 :: July 14, 2016
+* mkldnn@ :: Nov 7, 2018 :: Nov 5, 2018
+* mshadow@ :: Sep 28, 2018 :: Nov 7, 2018
+* onnx-tensorrt@ :: Aug 22, 2018 :: Nov 10, 2018
+* openmp@ :: Nov 22, 2017 :: Nov 13, 2018
+* ps-lite@ :: April 25, 2018 :: Oct 9, 2018
+* tvm@ :: Oct 10, 2018 :: Oct 8, 2018
+
+
 ## 1.3.1
 ### Bug fixes

From bab464586d3445b7720aa9bb1fe6fbdfdec4f9eb Mon Sep 17 00:00:00 2001
From: Steffen Rochel
Date: Mon, 17 Dec 2018 08:49:36 -0800
Subject: [PATCH 02/28] addressed feedback from December 17

addressed feedback from December 17 except ngraph related comments.
---
 NEWS.md | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index b8532303c5f9..d178451caecb 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -47,7 +47,8 @@ Integration with these backends should happen in the granularity of subgraphs in
 #### MXNet nGraph integration
-As the diversity of deep learning hardware accelerators increase, it is important to have an efficient abstraction layer so developers can avoid having to enable each accelerator/compute separately. Intel nGraph enables that vision. The primary goal of this integration is to provide a seamless development and deployment experience to data scientists and machine learning engineers to leverage Intel nGraph ecosystem with MXNet. As Subgraph API seamlessly integrates with MXNet frontend API, users should just be able to use or switch nGraph backend with any existing MXNet scripts, models and deployments using the symbolic interface. For more details see [MXNet nGraph integration using subgraph backend interface](https://cwiki.apache.org/confluence/display/MXNET/MXNet+nGraph+integration+using+subgraph+backend+interface)
+As the diversity of deep learning hardware accelerators increase, it is important to have an efficient abstraction layer so developers can avoid having to enable each accelerator/compute separately. Intel nGraph enables that vision. The primary goal of this integration is to provide a seamless development and deployment experience to data scientists and machine learning engineers to leverage Intel nGraph ecosystem with MXNet. As Subgraph API seamlessly integrates with MXNet frontend API, users should just be able to use or switch nGraph backend with any existing MXNet scripts, models and deployments using the symbolic interface. For more details see [MXNet nGraph integration using subgraph backend interface](https://cwiki.apache.org/confluence/display/MXNET/MXNet+nGraph+integration+using+subgraph+backend+interface).
+After building MXNet with nGraph support, users can enable nGraph backend by setting MXNET_SUBGRAPH_BACKEND="ngraph" environmental variable.
 #### JVM Memory Management
@@ -67,6 +68,20 @@ For distributed training, the ring Reduce communication pattern used by NCCL and
 * multiple trees (bandwidth-optimal for large messages) to handle large messages.
 More details can be found here: [Topology-aware AllReduce](https://cwiki.apache.org/confluence/display/MXNET/Single+machine+All+Reduce+Topology-aware+Communication)
+Note: This is an experimental feature and has known problems - see [13341](https://github.com/apache/incubator-mxnet/issues/13341). Please help contribute to improving the robustness of the feature.
+
+#### MKDNN backend: Graph optimization and Quantization (experimental)
+
+Two advanced features, graph optimization (operator fusion) and reduced-precision (INT8) computation, are introduced to MKL-DNN backend in this release ([#12530](https://github.com/apache/incubator-mxnet/pull/12530), [#13297](https://github.com/apache/incubator-mxnet/pull/13297), [#13260](https://github.com/apache/incubator-mxnet/pull/13260)).
+These features significantly boost the inference performance on CPU (up to 4X) for a broad range of deep learning topologies.
+
+##### Graph Optimization
+The MKL-DNN backend takes advantage of the MXNet subgraph feature to implement most of the possible operator fusions for inference, such as Convolution + ReLU, Batch Normalization folding, etc.
When using the mxnet-mkl package, users can easily enable this feature with `export MXNET_SUBGRAPH_BACKEND=MKLDNN`.
+
+##### Quantization
+Performance of reduced-precision (INT8) computation is also dramatically improved after the graph optimization feature is applied on CPU platforms. Various models are supported and can benefit from reduced-precision computation, including symbolic models, Gluon models and even custom models. Users can run most of the pre-trained models with only a few lines of commands and the new quantization script `imagenet_gen_qsym_mkldnn.py`. The observed accuracy loss is less than 0.5% for popular CNN networks, like ResNet-50, Inception-BN, MobileNet, etc.
+
+Please find detailed information and performance/accuracy numbers here: [MKLDNN README](https://github.com/apache/incubator-mxnet/blob/master/MKLDNN_README.md), [quantization README](https://github.com/apache/incubator-mxnet/tree/master/example/quantization#1) and [design proposal](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimization+and+Quantization+based+on+subgraph+and+MKL-DNN).
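+
+A hedged sketch of what the quantization flow looks like from Python, using the `mxnet.contrib.quantization` module that the script above builds on; the function name and arguments are assumptions based on that module and should be treated as illustrative:
+
+```python
+# Sketch: produce an INT8 version of a trained symbolic model.
+# Assumes resnet-50-symbol.json / resnet-50-0000.params exist on disk.
+import mxnet as mx
+from mxnet.contrib.quantization import quantize_model
+
+sym, arg_params, aux_params = mx.model.load_checkpoint("resnet-50", 0)
+qsym, qarg_params, qaux_params = quantize_model(
+    sym=sym, arg_params=arg_params, aux_params=aux_params,
+    ctx=mx.cpu(), calib_mode="none")  # "none": skip calibration data
+```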
 ### New Operators

From c54ca847177d609804538c5d875d1bd7ae4db2c0 Mon Sep 17 00:00:00 2001
From: Aaron Markham
Date: Mon, 17 Dec 2018 15:21:35 -0800
Subject: [PATCH 03/28] Update NEWS.md

Co-Authored-By: srochel
---
 NEWS.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index d178451caecb..b508370da488 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -5,7 +5,9 @@ MXNet Change Log
 ### New Features
 #### Java Inference API
-Model inference is run and managed by software engineers in a production eco-system which is built with tools and frameworks that use Java/Scala as a primary language. Inference on a trained model has two different use-cases:
+Model inference is often managed in a production ecosystem using primarily Java/Scala tools and frameworks. This release seeks to alleviate the need for software engineers to write custom MXNet wrappers to fit their production environment.
+
+Inference on a trained model has a couple of common use cases:
 1. Real time or Online Inference - tasks that require immediate feedback, such as fraud detection
 2. Batch or Offline Inference - tasks that don't require immediate feedback, these are use-cases where you have massive amounts of data and want to run Inference or pre-compute inference results
 Batch Inference is performed on big data platforms such as Spark using Scala or Java while Real time Inference is typically performed and deployed on popular web frameworks such as Tomcat, Netty, Jetty, etc. which use Java. With this project, we want to build a new set of APIs which are Java friendly, compatible with Java 7+, are easy to use for inference, and lowers the entry barrier of consuming MXNet for production use-cases. More details can be found at the [Java Inference API document](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Java+Inference+API).

From 8156570bbb4675da657bf03413aad958b66779c6 Mon Sep 17 00:00:00 2001
From: Aaron Markham
Date: Mon, 17 Dec 2018 15:21:46 -0800
Subject: [PATCH 04/28] Update NEWS.md

Co-Authored-By: srochel
---
 NEWS.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index b508370da488..f7b44ea555f5 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -9,7 +9,7 @@ Model inference is often managed in a production ecosystem using primarily Java/
 Inference on a trained model has a couple of common use cases:
- 1. Real time or Online Inference - tasks that require immediate feedback, such as fraud detection
+ 1. Real-time or Online Inference - tasks that require immediate feedback, such as fraud detection
 2. Batch or Offline Inference - tasks that don't require immediate feedback, these are use-cases where you have massive amounts of data and want to run Inference or pre-compute inference results
 Batch Inference is performed on big data platforms such as Spark using Scala or Java while Real time Inference is typically performed and deployed on popular web frameworks such as Tomcat, Netty, Jetty, etc. which use Java. With this project, we want to build a new set of APIs which are Java friendly, compatible with Java 7+, are easy to use for inference, and lowers the entry barrier of consuming MXNet for production use-cases. More details can be found at the [Java Inference API document](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Java+Inference+API).

From 0eeabd76c5402084a7b46045a5568b17e04589cb Mon Sep 17 00:00:00 2001
From: Aaron Markham
Date: Mon, 17 Dec 2018 15:21:59 -0800
Subject: [PATCH 05/28] Update NEWS.md

Co-Authored-By: srochel
---
 NEWS.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index f7b44ea555f5..899d514ddaeb 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -10,7 +10,7 @@ Model inference is often managed in a production ecosystem using primarily Java/
 Inference on a trained model has a couple of common use cases:
 1. Real-time or Online Inference - tasks that require immediate feedback, such as fraud detection
- 2. Batch or Offline Inference - tasks that don't require immediate feedback, these are use-cases where you have massive amounts of data and want to run Inference or pre-compute inference results
+ 2. Batch or Offline Inference - tasks that don't require immediate feedback; these are use cases where you have massive amounts of data and want to run inference or pre-compute inference results
 Batch Inference is performed on big data platforms such as Spark using Scala or Java while Real time Inference is typically performed and deployed on popular web frameworks such as Tomcat, Netty, Jetty, etc. which use Java. With this project, we want to build a new set of APIs which are Java friendly, compatible with Java 7+, are easy to use for inference, and lowers the entry barrier of consuming MXNet for production use-cases. More details can be found at the [Java Inference API document](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Java+Inference+API).

From a0809816df33e53b2c25dcab5582e82a4df50306 Mon Sep 17 00:00:00 2001
From: Aaron Markham
Date: Mon, 17 Dec 2018 15:22:18 -0800
Subject: [PATCH 06/28] Update NEWS.md

Co-Authored-By: srochel
---
 NEWS.md | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index 899d514ddaeb..961ff87aed1a 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -11,7 +11,14 @@ Inference on a trained model has a couple of common use cases:
 1. Real-time or Online Inference - tasks that require immediate feedback, such as fraud detection
 2. Batch or Offline Inference - tasks that don't require immediate feedback; these are use cases where you have massive amounts of data and want to run inference or pre-compute inference results
-Batch Inference is performed on big data platforms such as Spark using Scala or Java while Real time Inference is typically performed and deployed on popular web frameworks such as Tomcat, Netty, Jetty, etc. which use Java. With this project, we want to build a new set of APIs which are Java friendly, compatible with Java 7+, are easy to use for inference, and lowers the entry barrier of consuming MXNet for production use-cases.
More details can be found at the [Java Inference API document](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Java+Inference+API).
+Real-time Inference is often performed and deployed on popular web frameworks such as Tomcat, Netty, Jetty, etc., all of which use Java.
+Batch Inference is often performed on big data platforms such as Spark using Scala or Java.
+
+With this project, we had the following goals:
+* Build a new set of APIs that are Java-friendly, compatible with Java 7+, and easy to use for inference.
+* Lower the barrier to entry of consuming MXNet for production use cases.
+
+More details can be found at the [Java Inference API document](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Java+Inference+API).

From 4e3797cee51078014e55ecdf5e61483f3b0a5629 Mon Sep 17 00:00:00 2001
From: Aaron Markham
Date: Mon, 17 Dec 2018 15:22:38 -0800
Subject: [PATCH 07/28] Update NEWS.md

Co-Authored-By: srochel
---
 NEWS.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index 961ff87aed1a..e962c301a5b2 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -509,7 +509,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN
 * [MXNET-908] Enable minimal OSX Travis build (#12462)
 * Used jom for parallel windows builds (#12533)
 * [MXNET-950] Enable parallel R dep builds in CI (#12552)
-* Speed up CI windows builds (#12563)
+* Speed up CI Windows builds (#12563)
 * [MXNET-908] Speed up travis builds to avoid timeouts (#12706)
 * simplify mac mkldnn build (#12724)
 * [MXNET-674] Speed up GPU builds in CI (#12782)
 * Improved git reset for CI builds (#12784)

From bbf775a3a71cecaad7550804ed9ef72a834284fe Mon Sep 17 00:00:00 2001
From: Aaron Markham
Date: Mon, 17 Dec 2018 15:22:49 -0800
Subject: [PATCH 08/28] Update NEWS.md

Co-Authored-By: srochel
---
 NEWS.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index e962c301a5b2..6ccd6eedca29 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -524,7 +524,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN
 * [MXNET-703] Static linking for libprotobuf with TensorRT (#12475)
 * Remove regression checks for website links (#12507)
 * [MXNET-953] - Add ASAN sanitizer, Enable in CI (#12370)
-* allow custom path and static linking for custom mallocs in make (#12645)
+* Allow custom path and static linking for custom mallocs in make (#12645)
 * Correct PR branch detection in code coverage (#12615)
 * Update osx.mk - Added apple to USE_BLAS comment (#12819)
 * [MXNET-953] Correct ASAN cflags flag (#12659)

From b6c2128a89a398628e2a0a003865b730d3242e28 Mon Sep 17 00:00:00 2001
From: Aaron Markham
Date: Mon, 17 Dec 2018 15:22:58 -0800
Subject: [PATCH 09/28] Update NEWS.md

Co-Authored-By: srochel
---
 NEWS.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index 6ccd6eedca29..5ad6927716fb 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -530,7 +530,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN
 * [MXNET-953] Correct ASAN cflags flag (#12659)
 * [MXNET-1025] Add Jetpack 3.3 support to Jetson (#12735)
 * Fail the broken link job when broken links are found (#12905)
-* removed unused header (#13066)
+* Removed unused header (#13066)
 * Maven Surefire bug workaround (#13081)
 * Add Turing and Volta support to arch_name (#13168)
 * Moves f16c autodetection to its own cmake module (#12331)

From 68f94d9f31cdea888ae6590fb5a2dc9f18cb56d7 Mon Sep 17 00:00:00 2001
From: Aaron Markham
Date: Mon, 17 Dec 2018 15:23:09 -0800
Subject: [PATCH 10/28]
Update NEWS.md Co-Authored-By: srochel --- NEWS.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/NEWS.md b/NEWS.md index 5ad6927716fb..6354bff81f7d 100644 --- a/NEWS.md +++ b/NEWS.md @@ -536,7 +536,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Moves f16c autodetection to its own cmake module (#12331) * la_op_inline.h to la_op-inl.h for consistency (#13045) * [MXNET-793] Virtualized ARMv7 with Qemu CI integration (#13203) -* remove unused variable rotateM_ (#10803) +* Remove unused variable `rotateM_` (#10803) * Separate refactoring from #12276 in a prior PR (#12296) * [MXNET-860] Remove std::moves that have no affect (#12730) * [MXNET-860] Use emplace where helpful (#12694) From f96f3c8dd3215d6679b1ce98bf6e0a8e4f039cca Mon Sep 17 00:00:00 2001 From: Aaron Markham Date: Mon, 17 Dec 2018 15:23:21 -0800 Subject: [PATCH 11/28] Update NEWS.md Co-Authored-By: srochel --- NEWS.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/NEWS.md b/NEWS.md index 6354bff81f7d..719073f5a144 100644 --- a/NEWS.md +++ b/NEWS.md @@ -560,7 +560,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Add a deprecate message (#13042) contrib_CTCLoss is deprecated. Added a message in command ### Other * Updating news, readme files and bumping master version to 1.3.1 (#12525) -* dd new name to CONTRIBUTORS.md (#12763) +* Add new name to CONTRIBUTORS.md (#12763) * Update contribute.md (#12685) * Updated CONTRIBUTORS.md to include lebeg and gigasquid, moved mabreu to committers section (#12766) * Update CONTRIBUTORS.md (#12996) From ecd2f73561c71c1f858d59420e191a83fce9eebb Mon Sep 17 00:00:00 2001 From: Aaron Markham Date: Mon, 17 Dec 2018 15:23:41 -0800 Subject: [PATCH 12/28] Update NEWS.md Co-Authored-By: srochel --- NEWS.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/NEWS.md b/NEWS.md index 719073f5a144..0f4bfcdad0ef 100644 --- a/NEWS.md +++ b/NEWS.md @@ -22,7 +22,7 @@ More details can be found at the [Java Inference API document](https://cwiki.apa #### Julia API -MXNet.jl is the Julia package of Apache MXNet. MXNet.jl brings flexible and efficient GPU computing and state-of-art deep learning to Julia. Some highlight of features include: +MXNet.jl is the Julia package of Apache MXNet. MXNet.jl brings flexible and efficient GPU computing and state-of-art deep learning to Julia. Some highlights of features include: * Efficient tensor/matrix computation across multiple devices, including multiple CPUs, GPUs and distributed server nodes. * Flexible symbolic manipulation to composite and construct state-of-the-art deep learning models. 
From 7965321da8e894fcd0d8e5e83516759e441c6c72 Mon Sep 17 00:00:00 2001 From: Aaron Markham Date: Mon, 17 Dec 2018 15:24:42 -0800 Subject: [PATCH 13/28] Update NEWS.md Co-Authored-By: srochel --- NEWS.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/NEWS.md b/NEWS.md index 0f4bfcdad0ef..263d254ac247 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,4 +1,4 @@ -MXNet Change Log +Apache MXNet (incubating) Change Log ================ ## 1.4.0 From 2020e103e4f3a9353796e52eff2893e184015932 Mon Sep 17 00:00:00 2001 From: Steffen Rochel Date: Mon, 17 Dec 2018 15:26:51 -0800 Subject: [PATCH 14/28] Update News.md removed nGraph integration from 1.4.0 release notes --- NEWS.md | 5 ----- 1 file changed, 5 deletions(-) diff --git a/NEWS.md b/NEWS.md index 263d254ac247..76782a7b088e 100644 --- a/NEWS.md +++ b/NEWS.md @@ -54,11 +54,6 @@ TVM , MKLDNN and nGraph uses customized data formats. Interaction between these TVM, MKLDNN, TensorRT and nGraph fuses operators. Integration with these backends should happen in the granularity of subgraphs instead of in the granularity of operators. To fuse operators, it's obvious that we need to divide a graph into subgraphs so that the operators in a subgraph can be fused into a single operator. To handle customized data formats, we should partition a computation graph into subgraphs as well. Each subgraph contains only TVM, MKLDNN or ngraph operators. In this way, MXNet converts data formats only when entering such a subgraph and the operators inside a subgraph handle format conversion themselves if necessary. This makes interaction of TVM and MKLDNN with MXNet much easier. Neither the MXNet executor nor the MXNet operators need to deal with customized data formats. Even though invoking these libraries from MXNet requires similar steps, the partitioning rule and the subgraph execution of these backends can be different. As such, we define the following interface for backends to customize graph partitioning and subgraph execution inside an operator. More details can be found at PR 12157 and [Subgraph API](https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+backend+libraries). -#### MXNet nGraph integration - -As the diversity of deep learning hardware accelerators increase, it is important to have an efficient abstraction layer so developers can avoid having to enable each accelerator/compute separately. Intel nGraph enables that vision. The primary goal of this integration is to provide a seamless development and deployment experience to data scientists and machine learning engineers to leverage Intel nGraph ecosystem with MXNet. As Subgraph API seamlessly integrates with MXNet frontend API, users should just be able to use or switch nGraph backend with any existing MXNet scripts, models and deployments using the symbolic interface. For more details see [MXNet nGraph integration using subgraph backend interface](https://cwiki.apache.org/confluence/display/MXNET/MXNet+nGraph+integration+using+subgraph+backend+interface). -After building MXNet with nGraph support, users can enable nGraph backend by setting MXNET_SUBGRAPH_BACKEND="ngraph" environmental variable. - #### JVM Memory Management MXNet Scala and Java API uses native memory to manage NDArray, Symbol, Executor, DataIterators using the MXNet c_api. C APIs provide appropriate interfaces to create, access and free these objects MXNet Scala has corresponding Wrappers and APIs which have pointer references to the native memory. 
Before this project, JVM users(Scala/Clojure/Java..) of Apache MXNet have to manage MXNet objects manually using the dispose pattern, there are a few usability problems with this approach:

From 4eeaa2c85cb1a9f38664d9c86e940fe43ebd7b5e Mon Sep 17 00:00:00 2001
From: Aaron Markham
Date: Mon, 17 Dec 2018 15:27:45 -0800
Subject: [PATCH 15/28] Update NEWS.md

Co-Authored-By: srochel
---
 NEWS.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index 76782a7b088e..48a1998de668 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -25,7 +25,7 @@ More details can be found at the [Java Inference API document](https://cwiki.apa
 MXNet.jl is the Julia package of Apache MXNet. MXNet.jl brings flexible and efficient GPU computing and state-of-art deep learning to Julia. Some highlights of features include:
 * Efficient tensor/matrix computation across multiple devices, including multiple CPUs, GPUs and distributed server nodes.
- * Flexible symbolic manipulation to composite and construct state-of-the-art deep learning models.
+ * Flexible manipulation of symbolic to composite for construction of state-of-the-art deep learning models.

 #### Control Flow Operators

From 0af34050e51622526deb65ba0089124c21fea75f Mon Sep 17 00:00:00 2001
From: Aaron Markham
Date: Mon, 17 Dec 2018 15:28:05 -0800
Subject: [PATCH 16/28] Update NEWS.md

Co-Authored-By: srochel
---
 NEWS.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index 48a1998de668..d2f5cb48529e 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -34,7 +34,9 @@ Today we observe more and more dynamic neural network models, especially in the
 * Models are expressed with control flow, such as conditions and loops;
 * NDArrays in a model may have dynamic shapes, meaning the NDArrays of a model or some of the NDArrays have different shapes for different batches;
 * Models may want to use more dynamic data structures, such as lists or dictionaries.
-It's natural to express the dynamic models in frameworks with the imperative programming interface (e.g., Gluon, Pytorch, TensorFlow Eager). In this interface, users can simply use Python control flows, or NDArrays with any shape at any moment, or use Python lists and dictionaries to store data as they want. The problem of this approach is that it highly depends on the front-end programming languages (mainly Python). A model implemented in one language can only run in the same language. A common use case is that machine learning scientists want to develop their models in Python but engineers who deploy the models usually have to use a different language (e.g., Java and C). Gluon tries to close the gap between the model development and deployment. Machine learning scientists design and implement their models in Python with the imperative interface and Gluon turns the implementations into symbolic implementations by simply invoking hybridize() for model exporting.
+It's natural to express dynamic models in frameworks with an imperative programming interface (e.g., Gluon, Pytorch, TensorFlow Eager). In this kind of interface, developers can use Python control flows, or NDArrays with any shape at any moment, or use Python lists and dictionaries to store data as they want. The problem of this approach is that it is highly dependent on the originating front-end programming language (mainly Python). A model implemented in one language can only run in the same language.
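+
+To make the contrast concrete, a small hedged sketch; the `mx.nd.contrib.foreach` call follows the control flow operator convention of `(body, data, init_states)`, and the details should be read as illustrative:
+
+```python
+# Sketch: a Python loop lives only in the front end, while a control flow
+# operator records the same loop inside the computation graph.
+import mxnet as mx
+
+x = mx.nd.arange(5).reshape((5, 1))
+
+# Imperative version: invisible to graph export.
+total = mx.nd.zeros((1,))
+for i in range(x.shape[0]):
+    total = total + x[i]
+
+# Control-flow-operator version: part of the graph, so it survives export.
+def step(elem, states):
+    s = states[0] + elem  # running sum carried across iterations
+    return s, [s]         # (per-step output, new loop state)
+
+outputs, final_state = mx.nd.contrib.foreach(step, x, [mx.nd.zeros((1,))])
+```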
+ +A common use case is that machine learning scientists want to develop their models in Python, whereas engineers who deploy the models usually have to use a different "production" language (e.g., Java or C). Gluon tries to close the gap between the model development and production deployment. Machine learning scientists design and implement their models in Python with the imperative interface, and then Gluon converts the implementations from imperative to symbolic by invoking `hybridize()` for model exporting. The goal of this project is to enhance Gluon to turn a dynamic neural network into a static computation graph (where the dynamic control flows are expressed by control flow operators) with Gluon hybridization and export them for deployment. More information can be found at [Optimize dynamic neural network models with control flow operators](https://cwiki.apache.org/confluence/display/MXNET/Optimize+dynamic+neural+network+models+with+control+flow+operators) From cfc4aac616ae7f65e68c8d884875cda5c68e0a62 Mon Sep 17 00:00:00 2001 From: Aaron Markham Date: Mon, 17 Dec 2018 15:28:24 -0800 Subject: [PATCH 17/28] Update NEWS.md Co-Authored-By: srochel --- NEWS.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/NEWS.md b/NEWS.md index d2f5cb48529e..8f75a930c4bd 100644 --- a/NEWS.md +++ b/NEWS.md @@ -38,7 +38,9 @@ It's natural to express dynamic models in frameworks with an imperative programm A common use case is that machine learning scientists want to develop their models in Python, whereas engineers who deploy the models usually have to use a different "production" language (e.g., Java or C). Gluon tries to close the gap between the model development and production deployment. Machine learning scientists design and implement their models in Python with the imperative interface, and then Gluon converts the implementations from imperative to symbolic by invoking `hybridize()` for model exporting. -The goal of this project is to enhance Gluon to turn a dynamic neural network into a static computation graph (where the dynamic control flows are expressed by control flow operators) with Gluon hybridization and export them for deployment. More information can be found at [Optimize dynamic neural network models with control flow operators](https://cwiki.apache.org/confluence/display/MXNET/Optimize+dynamic+neural+network+models+with+control+flow+operators) +The goal of this project is to enhance Gluon to turn a dynamic neural network into a static computation graph. The dynamic control flows are expressed by control flow operators with Gluon hybridization, and these are exported for deployment. + +More information can be found at [Optimize dynamic neural network models with control flow operators](https://cwiki.apache.org/confluence/display/MXNET/Optimize+dynamic+neural+network+models+with+control+flow+operators) #### SVRG Optimization From bb0581abd05e27a46477fe0ace058e07808a6b00 Mon Sep 17 00:00:00 2001 From: Steffen Rochel Date: Tue, 18 Dec 2018 14:51:27 -0800 Subject: [PATCH 18/28] Updates to address feedback until 12/17 updates to address feedback from szha@, pengzhao-intel@ and TaoLv@ --- NEWS.md | 34 +++++++++++++++++----------------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/NEWS.md b/NEWS.md index 8f75a930c4bd..7ac743985cf1 100644 --- a/NEWS.md +++ b/NEWS.md @@ -27,7 +27,7 @@ MXNet.jl is the Julia package of Apache MXNet. 
MXNet.jl brings flexible and effi * Efficient tensor/matrix computation across multiple devices, including multiple CPUs, GPUs and distributed server nodes. * Flexible manipulation of symbolic to composite for construction of state-of-the-art deep learning models. -#### Control Flow Operators +#### Control Flow Operators (experimental) Today we observe more and more dynamic neural network models, especially in the fields of natural language processing and graph analysis. The dynamics in these models come from multiple sources, including: @@ -50,7 +50,7 @@ SVRG stands for Stochastic Variance Reduced Gradient, which was first introduced * Ability to use relatively large learning rate compared to SGD, which leads to faster convergence. More details can be found at [SVRG Optimization in MXNet Python Module](https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+backend+libraries) -#### Subgraph API +#### Subgraph API (experimental) MXNet can integrate with many different kinds of backend libraries, including TVM, MKLDNN, TensorRT, Intel nGraph and more. These backend in general support a limited number of operators, and thus running computation in a model usually involves in interaction between backend-supported operators and MXNet operators. These backend libraries share some common requirements: @@ -69,7 +69,7 @@ bloated code with dispose() methods. hard to debug memory-leaks Goals of the project are to provide MXNet JVM Users automated memory management which can release native memory when there are no references to JVM objects, to be able to manage both GPU and CPU Memory automatically without performance degradation with automated memory management. More details can be found here: [JVM Memory Management](https://cwiki.apache.org/confluence/display/MXNET/JVM+Memory+Management) -#### Topology-aware AllReduce +#### Topology-aware AllReduce (experimental) For distributed training, the ring Reduce communication pattern used by NCCL and Parameter server Reduce currently used in MXNet are not optimal for small batch sizes on p3.16xlarge instances with 8 GPUs. The approach is based on the idea of using trees to perform the Reduce and Broadcast. We can use the idea of minimum spanning trees to do a binary tree Reduce communication pattern to improve it following this paper by Wang, Li, Edo and Smola [1]. Our strategy will be to use: * a single tree (latency-optimal for small messages) to handle Reduce on small messages @@ -78,13 +78,13 @@ For distributed training, the ring Reduce communication pattern used by NCCL and More details can be found here: [Topology-aware AllReduce](https://cwiki.apache.org/confluence/display/MXNET/Single+machine+All+Reduce+Topology-aware+Communication) Note: This is an experimental feature and has known problems - see [13341](https://github.com/apache/incubator-mxnet/issues/13341). Please help to contribute to improve the robustness of the feature. -#### MKDNN backend: Graph optimization and Quantization (experimental) +#### MKLDNN backend: Graph optimization and Quantization (experimental) -Two advanced features, graph optimization (operator fusion) and reduced-precision (INT8) computation, are introduced to MKL-DNN backend in this release ([#12530](https://github.com/apache/incubator-mxnet/pull/12530), [#13297](https://github.com/apache/incubator-mxnet/pull/13297), [#13260](https://github.com/apache/incubator-mxnet/pull/13260)). 
-These features significantly boost the inference performance on CPU (up to 4X) for a broad range of deep learning topologies. +These features significantly boost the inference performance on CPU (up to 4X) for a broad range of deep learning topologies. Currently, this feature is only available for the inference on the CPU platforms. ##### Graph Optimization -MKL-DNN backend takes advantage of MXNet subgraph to implement the most of possible operator fusions for inference, such as Convolution + ReLU, Batch Normalization folding, etc. When using mxnet-mkl package, users can easily enable this feature by setting export MXNET_SUBGRAPH_BACKEND=MKLDNN. +MKLDNN backend takes advantage of MXNet subgraph to implement most of the possible operator fusions for inference, such as Convolution + ReLU, Batch Normalization folding, etc. When using the mxnet-mkl package, users can easily enable this feature by setting export MXNET_SUBGRAPH_BACKEND=MKLDNN. ##### Quantization Performance of reduced-precision (INT8) computation is also dramatically improved after the graph optimization feature is applied on CPU platforms. Various models are supported and can benefit from reduced-precision computation, including symbolic models, Gluon models and even custom models. Users can run most of the pre-trained models with only a few commands and the new quantization script imagenet_gen_qsym_mkldnn.py. The observed accuracy loss is less than 0.5% for popular CNN networks, like ResNet-50, Inception-BN, MobileNet, etc.
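As a minimal sketch of turning the fusion pass on from Python: only the `MXNET_SUBGRAPH_BACKEND=MKLDNN` setting comes from the text above, while the model, input shape, and `static_alloc` flag are illustrative assumptions:

```python
import os
os.environ["MXNET_SUBGRAPH_BACKEND"] = "MKLDNN"  # enable the fusion pass; set this
                                                 # before the graph is first bound

import mxnet as mx
from mxnet.gluon.model_zoo import vision

net = vision.resnet50_v1(pretrained=True)
net.hybridize(static_alloc=True)
# Convolution + ReLU and similar patterns now execute as fused subgraph operators.
out = net(mx.nd.ones((1, 3, 224, 224)))
```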
@@ -571,16 +571,16 @@ Please follow the instructions at https://mxnet.incubator.apache.org/install/ind ### List of submodules used by Apache MXNet (Incubating) and when they were updated last Submodule@commit ID::Last updated by MXNet:: Last update in submodule -* cub@::Jul 31, 2017 :: Jul 31, 2017 -* dlpack@:: Oct 30, 2017 :: Aug 23, 2018 -* dmlc-core@:: Aug 15, 2018 :: Nov 15, 2018 -* googletest@:: July 14, 2016 :: July 14, 2016 -* mkldnn@:: Nov 7, 2018 :: Nov 5, 2018 -* mshadow@:: Sep 28, 2018 :: Nov 7, 2018 -* onnx-tensorrt@:: Aug 22, 2018 :: Nov 10, 2018 -* openmp@: Nov 22, 2017 :: Nov 13, 2018 -* ps-lite@: April 25, 2018 :: Oct 9, 2018 -* tvm@: Oct 10, 2018 :: Oct 8, 2018 +* cub@05eb57f::Jul 31, 2017 :: Jul 31, 2017 +* dlpack@10892ac:: Oct 30, 2017 :: Aug 23, 2018 +* dmlc-core@0a0e8ad:: Aug 15, 2018 :: Nov 15, 2018 +* googletest@ec44c6c:: July 14, 2016 :: July 14, 2016 +* mkldnn@a7c5f53:: Nov 7, 2018 :: Nov 5, 2018 +* mshadow@696803b:: Sep 28, 2018 :: Nov 7, 2018 +* onnx-tensorrt@3d8ee04:: Aug 22, 2018 :: Nov 10, 2018 +* openmp@37c7212:: Nov 22, 2017 :: Nov 13, 2018 +* ps-lite@8a76389:: April 25, 2018 :: Oct 9, 2018 +* tvm@0f053c8:: Oct 10, 2018 :: Oct 8, 2018 From f64176c03a7d6b321f24f72fbc86289071c9a7a2 Mon Sep 17 00:00:00 2001 From: Aaron Markham Date: Tue, 18 Dec 2018 14:53:41 -0800 Subject: [PATCH 19/28] including Aaron changes for SVRG Co-Authored-By: srochel --- NEWS.md | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/NEWS.md b/NEWS.md index 7ac743985cf1..b23c598ef961 100644 --- a/NEWS.md +++ b/NEWS.md @@ -44,7 +44,13 @@ More information can be found at [Optimize dynamic neural network models with co #### SVRG Optimization -SVRG stands for Stochastic Variance Reduced Gradient, which was first introduced in the paper [Accelerating Stochastic Gradient Descent using Predicative Variance Reduction in 2013](https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf). It is an optimization technique that complements SGD. SGD is known for large scale optimization but it suffers from slow convergence asymptotically due to the inherent variance. SGD approximates the full gradient using a small batch of samples which introduces variance. In order to converge faster, SGD often needs to start with a smaller learning rate. SVRG remedies the problem by keeping a version of the estimated weights that is close to the optimal parameters and maintain average of full gradient over full pass of data. The average of full gradients of all data is calculated w.r.t to parameters of last mth epochs. It has provable guarantees for strongly convex smooth functions, and a more detailed proof can be found in section 3 of the paper. SVRG uses a different update rule: gradients w.r.t current parameters minus gradients w.r.t parameters from the last mth epoch, plus the average of gradients over all data. Key Characteristics of SVRG: +SVRG stands for Stochastic Variance Reduced Gradient, which was first introduced in the paper [Accelerating Stochastic Gradient Descent using Predictive Variance Reduction in 2013](https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf). It is an optimization technique that complements SGD. + +SGD is known for large scale optimization, but it suffers from slow convergence asymptotically due to the inherent variance. SGD approximates the full gradient using a small batch of samples which introduces variance.
In order to converge faster, SGD often needs to start with a smaller learning rate. + +SVRG remedies the slow convergence problem by keeping a version of the estimated weights that is close to the optimal parameters and maintains the average of the full gradient over the full pass of data. The average of the full gradients of all data is calculated w.r.t. the parameters of the last mth epoch. It has provable guarantees for strongly convex smooth functions; a detailed proof can be found in section 3 of the [paper](https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf). SVRG uses a different update rule than SGD: gradients w.r.t. current parameters minus gradients w.r.t. parameters from the last mth epoch, plus the average of gradients over all data. + +Key Characteristics of SVRG: * Explicit variance reduction * Ability to use a relatively large learning rate compared to SGD, which leads to faster convergence. From fa1b80c4a62e2984c2ee6c11edb712ce86ce2608 Mon Sep 17 00:00:00 2001 From: Aaron Markham Date: Tue, 18 Dec 2018 15:05:16 -0800 Subject: [PATCH 20/28] Apply suggestions from code review Co-Authored-By: srochel --- NEWS.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/NEWS.md b/NEWS.md index b23c598ef961..95b6ec43cbb8 100644 --- a/NEWS.md +++ b/NEWS.md @@ -212,7 +212,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * review require() usages to add meaningful messages. (#12570) #### Clojure -* Introduction to Clojure-MXNet video link. (#12754) +* Introduction to Clojure-MXNet video link (#12754) * Improve the Clojure Package README to Make it Easier to Get Started (#12881) #### Perl @@ -512,11 +512,11 @@ #### Build and CI * [MXNET-908] Enable minimal OSX Travis build (#12462) -* Used jom for parallel windows builds (#12533) +* Use jom for parallel Windows builds (#12533) * [MXNET-950] Enable parallel R dep builds in CI (#12552) * Speed up CI Windows builds (#12563) * [MXNET-908] Speed up travis builds to avoid timeouts (#12706) -* simplify mac mkldnn build (#12724) +* Simplify mac MKLDNN build (#12724) * [MXNET-674] Speed up GPU builds in CI (#12782) * Improved git reset for CI builds (#12784) * Improve cpp-package example project build files.
(#13093) From 57aa184e6957197183d4c9fbfb9399b15bb4cc41 Mon Sep 17 00:00:00 2001 From: Aaron Markham Date: Tue, 18 Dec 2018 15:10:42 -0800 Subject: [PATCH 21/28] Apply suggestions from code review Co-Authored-By: srochel --- NEWS.md | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/NEWS.md b/NEWS.md index 95b6ec43cbb8..eed2f15a3a94 100644 --- a/NEWS.md +++ b/NEWS.md @@ -134,7 +134,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN #### ONNX * ONNX export - Clip operator (#12457) -* Onnx version update from 1.2.1 to 1.3 in CI (#12633) +* ONNX version update from 1.2.1 to 1.3 in CI (#12633) * Use modern onnx API to load model from file (#12777) * [MXNET-892] ONNX export/import: DepthToSpace, SpaceToDepth operators (#12731) * ONNX export: Fully connected operator w/o bias, ReduceSum, Square (#12646) @@ -209,7 +209,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * ONNX export: Scalar, Reshape - Set appropriate tensor type (#13067) * Port of scala Image API to clojure (#13107) * update log4j version of Scala package (#13131) -* review require() usages to add meaningful messages. (#12570) +* Review require() usages to add meaningful messages (#12570) #### Clojure * Introduction to Clojure-MXNet video link (#12754) @@ -231,20 +231,20 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * [MXNET-825] Fix CGAN R Example with MNIST dataset (#12283) * [MXNET-535] Fix bugs in LR Schedulers and add warmup (#11234) * Fix speech recognition example (#12291) -* fix bug in 'device' type kvstore (#12350) +* Fix bug in 'device' type kvstore (#12350) * fix search result 404s (#12414) -* fix help in imread (#12420) -* fix render issue on < and > (#12482) +* Fix help in imread (#12420) +* Fix render issue on < and > (#12482) * [MXNET-853] Fix for smooth_l1 operator scalar default value (#12284) -* fix subscribe links, remove disabled icons (#12474) +* Fix subscribe links, remove disabled icons (#12474) * Fix broken URLs (#12508) * Fix/public internal header (#12374) * Fix lazy record io when used with dataloader and multi_worker > 0 (#12554) * Fix error in try/finally block for blc (#12561) -* add cudnn_off parameter to SpatialTransformer Op and fix the inconsistency between CPU & GPU code (#12557) +* Add cudnn_off parameter to SpatialTransformer Op and fix the inconsistency between CPU & GPU code (#12557) * [MXNET-798] Fix the dtype cast from non float32 in Gradient computation (#12290) * Fix CodeCovs proper commit detection (#12551) -* add TensorRT tutorial to index and fix ToC (#12587) +* Add TensorRT tutorial to index and fix ToC (#12587) * Fixed typo in c_predict_api.cc (#12601) * Fix typo in profiler.h (#12599) * Fixed NoSuchMethodError for Jenkins Job for MBCC (#12618) @@ -256,20 +256,20 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Fix the location of the tutorial of control flow operators (#12638) * fix bug, issue 12613 (#12614) * [MXNET-780] Fix exception handling bug (#12051) -* fix bug in prelu , issue 12061 (#12660) +* Fix bug in prelu, issue 12061 (#12660) * [MXNET-833] [R] Char-level RNN tutorial fix (#12670) * Fix static / dynamic linking of gperftools and jemalloc (#12714) * Fix #12672, importing numpy scalars (zero-dimensional arrays) (#12678) * [MXNET-623] Fixing an integer overflow bug in large NDArray (#11742) -* fix benchmark on control flow operators. 
(#12693) +* Fix benchmark on control flow operators (#12693) * Fix regression in MKLDNN caused by PR 12019 (#12740) * Fixed broken link for Baidu's WARP CTC (#12774) -* fix cnn visualization tutorial (#12719) +* Fix CNN visualization tutorial (#12719) * [MXNET-979] Add fix_beta support in BatchNorm (#12625) * R fix metric shape (#12776) * Revert [MXNET-979] Add fix_beta support in BatchNorm (#12625) (#12789) * Fix mismatch shapes (#12793) -* fixed symbols naming in RNNCell, LSTMCell, GRUCell (#12794) +* Fixed symbols naming in RNNCell, LSTMCell, GRUCell (#12794) * Fixed __setattr__ method of _MXClassPropertyMetaClass (#12811) * Fixed regex for matching platform type in Scala Benchmark scripts (#12826) * Fix broken links (#12856) @@ -279,10 +279,10 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Fix __all__ in optimizer/optimizer.py (#12886) * Fix Batch input issue with Scala Benchmark (#12848) * fix type inference in index_copy. (#12890) -* fix the paths issue for downloading script (#12913) -* fix indpt[0] for take(csr) (#12927) +* Fix the paths issue for downloading script (#12913) +* Fix indpt[0] for take(csr) (#12927) * Fix the bug of assigning large integer to NDArray (#12921) -* fix Sphinx errors for tutorials and install ToCs (#12945) +* Fix Sphinx errors for tutorials and install ToCs (#12945) * fix readme (#13082) * Fix variable name in tutorial code snippet (#13052) * Fix example for mxnet.nd.contrib.cond and fix typo in src/engine (#12954) From d9038cd1e2e469cbf3ca521c52bb45ac168d0106 Mon Sep 17 00:00:00 2001 From: Aaron Markham Date: Tue, 18 Dec 2018 15:14:17 -0800 Subject: [PATCH 22/28] Apply suggestions from code review Co-Authored-By: srochel --- NEWS.md | 30 ++++++++++++++++-------------- 1 file changed, 16 insertions(+), 14 deletions(-) diff --git a/NEWS.md b/NEWS.md index eed2f15a3a94..3ad6e7fa5a77 100644 --- a/NEWS.md +++ b/NEWS.md @@ -58,28 +58,30 @@ More details can be found at [SVRG Optimization in MXNet Python Module](https:// #### Subgraph API (experimental) -MXNet can integrate with many different kinds of backend libraries, including TVM, MKLDNN, TensorRT, Intel nGraph and more. These backend in general support a limited number of operators, and thus running computation in a model usually involves in interaction between backend-supported operators and MXNet operators. These backend libraries share some common requirements: +MXNet can integrate with many different kinds of backend libraries, including TVM, MKLDNN, TensorRT, Intel nGraph and more. In general, these backends support a limited number of operators, so running computation in a model usually involves an interaction between backend-supported operators and MXNet operators. These backend libraries share some common requirements: -TVM , MKLDNN and nGraph uses customized data formats. Interaction between these backends with MXNet requires data format conversion. +TVM , MKLDNN and nGraph use customized data formats. Interaction between these backends with MXNet requires data format conversion. TVM, MKLDNN, TensorRT and nGraph fuses operators. -Integration with these backends should happen in the granularity of subgraphs instead of in the granularity of operators. To fuse operators, it's obvious that we need to divide a graph into subgraphs so that the operators in a subgraph can be fused into a single operator. To handle customized data formats, we should partition a computation graph into subgraphs as well. Each subgraph contains only TVM, MKLDNN or ngraph operators. 
In this way, MXNet converts data formats only when entering such a subgraph and the operators inside a subgraph handle format conversion themselves if necessary. This makes interaction of TVM and MKLDNN with MXNet much easier. Neither the MXNet executor nor the MXNet operators need to deal with customized data formats. Even though invoking these libraries from MXNet requires similar steps, the partitioning rule and the subgraph execution of these backends can be different. As such, we define the following interface for backends to customize graph partitioning and subgraph execution inside an operator. More details can be found at PR 12157 and [Subgraph API](https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+backend+libraries). +Integration with these backends should happen at the granularity of subgraphs instead of at the granularity of operators. To fuse operators, it's obvious that we need to divide a graph into subgraphs so that the operators in a subgraph can be fused into a single operator. To handle customized data formats, we should partition a computation graph into subgraphs as well. Each subgraph contains only TVM, MKLDNN or nGraph operators. In this way, MXNet converts data formats only when entering such a subgraph, and the operators inside a subgraph handle format conversion themselves if necessary. This makes interaction of TVM and MKLDNN with MXNet much easier. Neither the MXNet executor nor the MXNet operators need to deal with customized data formats. Even though invoking these libraries from MXNet requires similar steps, the partitioning rule and the subgraph execution of these backends can be different. As such, we define an interface for backends to customize graph partitioning and subgraph execution inside an operator. More details can be found at [PR 12157](https://github.com/apache/incubator-mxnet/pull/12157) and [Subgraph API](https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+backend+libraries). #### JVM Memory Management -MXNet Scala and Java API uses native memory to manage NDArray, Symbol, Executor, DataIterators using the MXNet c_api. C APIs provide appropriate interfaces to create, access and free these objects MXNet Scala has corresponding Wrappers and APIs which have pointer references to the native memory. Before this project, JVM users(Scala/Clojure/Java..) of Apache MXNet have to manage MXNet objects manually using the dispose pattern, there are a few usability problems with this approach: +The MXNet Scala and Java API uses native memory to manage NDArray, Symbol, Executor, DataIterators using MXNet's internal C APIs. The C APIs provide appropriate interfaces to create, access and free these objects. MXNet Scala has corresponding Wrappers and APIs that have pointer references to the native memory. Before this project, JVM users (e.g. Scala, Clojure, or Java) of MXNet have to manage MXNet objects manually using the dispose pattern. There are a few usability problems with this approach: -Users have to track the MXNet objects manually and remember to call dispose, this is not Java Idiomatic and not user-friendly, quoting a user "this feels like I am writing C++ code which I stopped ages ago" -Leads to memory leaks if dispose is not called. -Many Objects in MXNet-Scala are managed in native memory, needing to use dispose on them as well. -bloated code with dispose() methods.
hard to debug memory-leaks -Goals of the project are to provide MXNet JVM Users automated memory management which can release native memory when there are no references to JVM objects, to be able to manage both GPU and CPU Memory automatically without performance degradation with automated memory management. More details can be found here: [JVM Memory Management](https://cwiki.apache.org/confluence/display/MXNET/JVM+Memory+Management) +* Users have to track the MXNet objects manually and remember to call `dispose`. This is not Java idiomatic and not user friendly. Quoting a user: "this feels like I am writing C++ code which I stopped ages ago". +* Leads to memory leaks if `dispose` is not called. +* Many objects in MXNet-Scala are managed in native memory, needing to use `dispose` on them as well. +* Bloated code with `dispose()` methods. +* Hard to debug memory leaks. +Goals of the project are: +* Provide MXNet JVM users automated memory management that can release native memory when there are no references to JVM objects. +* Provide automated memory management for both GPU and CPU memory without performance degradation. More details can be found here: [JVM Memory Management](https://cwiki.apache.org/confluence/display/MXNET/JVM+Memory+Management) #### Topology-aware AllReduce (experimental) -For distributed training, the ring Reduce communication pattern used by NCCL and Parameter server Reduce currently used in MXNet are not optimal for small batch sizes on p3.16xlarge instances with 8 GPUs. The approach is based on the idea of using trees to perform the Reduce and Broadcast. We can use the idea of minimum spanning trees to do a binary tree Reduce communication pattern to improve it following this paper by Wang, Li, Edo and Smola [1]. Our strategy will be to use: +For distributed training, the `Reduce` communication patterns used by NCCL and MXNet are not optimal for small batch sizes. The `Topology-aware AllReduce` approach is based on the idea of using trees to perform the `Reduce` and `Broadcast` operations. We can use the idea of minimum spanning trees to do a binary tree `Reduce` communication pattern to improve distributed training following this paper by Wang, Li, Edo and Smola [1]. Our strategy is to use: - * a single tree (latency-optimal for small messages) to handle Reduce on small messages - * multiple trees (bandwidth-optimal for large messages) to handle large messages. + * a single tree (latency-optimal for small messages) to handle `Reduce` on small messages + * multiple trees (bandwidth-optimal for large messages) to handle `Reduce` on large messages More details can be found here: [Topology-aware AllReduce](https://cwiki.apache.org/confluence/display/MXNET/Single+machine+All+Reduce+Topology-aware+Communication) Note: This is an experimental feature and has known problems - see [13341](https://github.com/apache/incubator-mxnet/issues/13341). Please help improve the robustness of this feature by contributing.
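A minimal sketch of opting in follows; the `MXNET_KVSTORE_USETREE` switch is the one listed in MXNet's env_var.md documentation, and both the variable name and the `device` kvstore choice should be treated as assumptions here rather than part of this note:

```python
import os
os.environ["MXNET_KVSTORE_USETREE"] = "1"  # assumed switch: tree-based Reduce/Broadcast

import mxnet as mx

# Single-machine, multi-GPU kvstore; with the switch above, gradient
# aggregation follows the latency-/bandwidth-optimal trees instead of a ring.
kv = mx.kvstore.create("device")
```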
@@ -122,7 +124,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN #### Optimizer * Adagrad optimizer with row-wise learning rate (#12365) -* Adding python SVRGModule for performing SVRG Optimization Logic (#12376) +* Add a Python SVRGModule for performing SVRG Optimization Logic (#12376) #### Sparse From 315a7717ba1679ecd4d667033a17a06414b6b230 Mon Sep 17 00:00:00 2001 From: Aaron Markham Date: Tue, 18 Dec 2018 15:19:53 -0800 Subject: [PATCH 23/28] Apply suggestions from code review Co-Authored-By: srochel --- NEWS.md | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/NEWS.md b/NEWS.md index 3ad6e7fa5a77..63b5b25b031c 100644 --- a/NEWS.md +++ b/NEWS.md @@ -131,13 +131,13 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Fall back when sparse arrays are passed to MKLDNN-enabled operators (#11664) * further bump up tolerance for sparse dot (#12527) * Sparse support for logic ops (#12860) -* sparse support for take(csr, axis=0) (#12889) +* Sparse support for take(csr, axis=0) (#12889) #### ONNX * ONNX export - Clip operator (#12457) * ONNX version update from 1.2.1 to 1.3 in CI (#12633) -* Use modern onnx API to load model from file (#12777) +* Use modern ONNX API to load a model from file (#12777) * [MXNET-892] ONNX export/import: DepthToSpace, SpaceToDepth operators (#12731) * ONNX export: Fully connected operator w/o bias, ReduceSum, Square (#12646) * ONNX export/import: Selu (#12785) @@ -161,10 +161,10 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN #### Other * support for upper triangular matrices in linalg (#12904) -* [MXNET-918] Introduce Random module / Refact code generation (#13038) +* Introduce Random module / Refactor code generation (#13038) * [MXNET-779]Add DLPack Transformation API (#12047) * Draw labels name (#9496) -* Change the way NDArrayIter handle the last batch (#12285) +* Change the way NDArrayIter handles the last batch (#12285) * Revert Change the way NDArrayIter handle the last batch (#12537) * Track epoch metric separately (#12182) * Set correct update on kvstore flag in dist_device_sync mode (#12786) @@ -177,12 +177,12 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Gluon LSTM Projection and Clipping Support (#13056) * Make Gluon download function to be atomic (#12572) * [MXNET -1004] Poisson NegativeLog Likelihood loss (#12697) -* add activation information for mxnet.gluon.nn._Conv (#12354) +* Add activation information for `mxnet.gluon.nn._Conv` (#12354) * Gluon DataLoader: avoid recursionlimit error (#12622) #### Symbol * Addressed dumplicate object reference issues (#13214) -* Throw exception if MXSymbolInferShape fails. (#12733) +* Throw exception if MXSymbolInferShape fails (#12733) * Infer dtype in SymbolBlock import from input symbol (#12412) ### Language API updates @@ -200,17 +200,17 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Test classification with LSTMs - 12263 #### Scala -* explain the details for Scala Experimental (#12348) +* Explain the details for Scala Experimental (#12348) * MXNET-873 - Bring Clojure Package Inline with New DataDesc and Layout in Scala Package (#12387) * [MXNET-716] Adding Scala Inference Benchmarks (#12721) * [MXNET-716][MIRROR #12723] Scala Benchmark Extension pack (#12758) * NativeResource Management in Scala (#12647) -* Ignore generated scala files. 
(#12928) -* use ResourceScope in Model/Trainer/FeedForward.scala (#12882) +* Ignore generated Scala files (#12928) +* Use ResourceScope in Model/Trainer/FeedForward.scala (#12882) * [MXNET-1180] Scala Image API (#12995) * ONNX export: Scalar, Reshape - Set appropriate tensor type (#13067) -* Port of scala Image API to clojure (#13107) -* update log4j version of Scala package (#13131) +* Port of Scala Image API to Clojure (#13107) +* Update log4j version of Scala package (#13131) * Review require() usages to add meaningful messages (#12570) #### Clojure @@ -221,15 +221,15 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * [MXNET-1026] [Perl] Sync with recent changes in Python's API (#12739) #### Julia -* import Julia binding +* Import Julia binding ### Performance improvements -* update mshadow for omp acceleration when nvcc is not present (#12674) +* Update mshadow for omp acceleration when nvcc is not present (#12674) * [MXNET-860] Avoid implicit double conversions (#12361) ### Bug fixes * [MXNET-1234] Fix shape inference problems in Activation backward (#13409) -* Fix a bug in where op with 1-D input (#12325) +* Fix a bug in `where` op with 1-D input (#12325) * [MXNET-825] Fix CGAN R Example with MNIST dataset (#12283) * [MXNET-535] Fix bugs in LR Schedulers and add warmup (#11234) * Fix speech recognition example (#12291) From a1bba294c3fd547ef393929d6e8e675910702955 Mon Sep 17 00:00:00 2001 From: Steffen Rochel Date: Tue, 18 Dec 2018 15:51:53 -0800 Subject: [PATCH 24/28] Address code review feedback Addressed more of Aaron's comments and requests. --- NEWS.md | 42 ++++++++++++++++++++---------------------- 1 file changed, 20 insertions(+), 22 deletions(-) diff --git a/NEWS.md b/NEWS.md index 63b5b25b031c..8f4ef34edf8d 100644 --- a/NEWS.md +++ b/NEWS.md @@ -105,11 +105,11 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * [MXNET-807] Support integer label type in ctc_loss operator (#12468) * [MXNET-876] make CachedOp a normal operator (#11641) * Add index_copy() operator (#12810) -* getnnz operator for CSR matrix (#12908) +* Fix getnnz operator for CSR matrix (#12908) - issue #12872 * [MXNET-1173] Debug operators - isfinite, isinf and isnan (#12967) -* sample_like operators (#13034) +* Add sample_like operators (#13034) * Add gauss err function operator (#13229) -* [MXNET -1030] Cosine Embedding Loss (#12750) +* [MXNET -1030] Enhanced Cosine Embedding Loss (#12750) * Add bytearray support back to imdecode (#12855, #12868) (#12912) * Add Psroipooling CPU implementation (#12738) @@ -117,21 +117,20 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN #### Operator * [MXNET-912] Refactoring ctc loss operator (#12637) * Refactor L2_normalization (#13059) -* customized take forward for CPU (#12997) -* Allow stop of arange to be inferred from dims. (#12064) +* Customized and faster `TakeOpForward` operator on CPU (#12997) +* Allow stop of arange operator to be inferred from dims. (#12064) * Make check_isfinite, check_scale optional in clip_global_norm (#12042) add FListInputNames attribute to softmax_cross_entropy (#12701) [MXNET-867] Pooling1D with same padding (#12594) * Add support for more req patterns for bilinear sampler backward (#12386) [MXNET-882] Support for N-d arrays added to diag op. 
(#12430) #### Optimizer -* Adagrad optimizer with row-wise learning rate (#12365) +* Add a special version of Adagrad optimizer with row-wise learning rate (#12365) * Add a Python SVRGModule for performing SVRG Optimization Logic (#12376) #### Sparse * Fall back when sparse arrays are passed to MKLDNN-enabled operators (#11664) -* further bump up tolerance for sparse dot (#12527) -* Sparse support for logic ops (#12860) -* Sparse support for take(csr, axis=0) (#12889) +* Add Sparse support for logic operators (#12860) +* Add Sparse support for take(csr, axis=0) (#12889) #### ONNX @@ -143,6 +142,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * ONNX export/import: Selu (#12785) * ONNX export: Cleanup (#12878) * Added operators: Selu, DepthToSpace, SpaceToDepth, HardSigmoid, Logical operators +* ONNX export: Scalar, Reshape - Set appropriate tensor type (#13067) #### MKLDNN @@ -163,9 +163,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * support for upper triangular matrices in linalg (#12904) * Introduce Random module / Refactor code generation (#13038) * [MXNET-779]Add DLPack Transformation API (#12047) -* Draw labels name (#9496) -* Change the way NDArrayIter handles the last batch (#12285) -* Revert Change the way NDArrayIter handle the last batch (#12537) +* Draw label name next to corresponding bounding boxes when the mapping of id to names is specified (#9496) * Track epoch metric separately (#12182) * Set correct update on kvstore flag in dist_device_sync mode (#12786) @@ -208,10 +206,10 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Ignore generated Scala files (#12928) * Use ResourceScope in Model/Trainer/FeedForward.scala (#12882) * [MXNET-1180] Scala Image API (#12995) -* ONNX export: Scalar, Reshape - Set appropriate tensor type (#13067) * Port of Scala Image API to Clojure (#13107) * Update log4j version of Scala package (#13131) * Review require() usages to add meaningful messages (#12570) +* Fix Scala readme (#13082) #### Clojure * Introduction to Clojure-MXNet video link (#12754) @@ -228,6 +226,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * [MXNET-860] Avoid implicit double conversions (#12361) ### Bug fixes +* Fix for #10920 - increase tolerance for sparse dot (#12527) * [MXNET-1234] Fix shape inference problems in Activation backward (#13409) * Fix a bug in `where` op with 1-D input (#12325) * [MXNET-825] Fix CGAN R Example with MNIST dataset (#12283) @@ -256,7 +255,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Fix version dropdown behavior (#12632) * Fix reference to wrong function (#12644) * Fix the location of the tutorial of control flow operators (#12638) -* fix bug, issue 12613 (#12614) +* Fix issue 12613 (#12614) * [MXNET-780] Fix exception handling bug (#12051) * Fix bug in prelu, issue 12061 (#12660) * [MXNET-833] [R] Char-level RNN tutorial fix (#12670) @@ -285,7 +284,6 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Fix indpt[0] for take(csr) (#12927) * Fix the bug of assigning large integer to NDArray (#12921) * Fix Sphinx errors for tutorials and install ToCs (#12945) -* fix readme (#13082) * Fix variable name in tutorial code snippet (#13052) * Fix example for mxnet.nd.contrib.cond and fix typo in src/engine (#12954) * Fix a typo in operator guide (#13115) @@ -296,11 +294,11 @@ Please find detailed information and 
performance/accuracy numbers here: [MKLDNN * [MXNET-953] Fix oob memory read (#12631) * Fix Sphinx error in ONNX file (#13251) * [Example] Fixing Gradcam implementation (#13196) -* fix train mnist for inception-bn and resnet (#13239) +* Fix train mnist for inception-bn and resnet (#13239) * Fix a bug in index_copy (#13218) * Fix Sphinx errors in box_nms (#13261) * Fix Sphinx errors (#13252) -* fix the flag (#13293) +* Fix the cpp example compiler flag (#13293) * Made fixes to sparse.py and sparse.md (#13305) * [Example] Gradcam- Fixing a link (#13307) * Manually track num_max_thread (#12380) @@ -308,7 +306,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Undefined name: load_model() --> utils.load_model() (#12867) * Change the way NDArrayIter handle the last batch (#12545) * Add embedding to print_summary (#12796) -* allow foreach on input with 0 length (#12471) +* Allow foreach on input with 0 length (#12471) * [MXNET-360]auto convert str to bytes in img.imdecode when py3 (#10697) ### Licensing updates @@ -322,10 +320,10 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Add a tutorial for control flow operators. (#12340) * Add tutorial Gotchas using NumPy (#12007) * Updated Symbol tutorial with Gluon (#12190) -* improve tutorial redirection (#12607) +* Improve tutorial redirection (#12607) * Include missing import in TensorRT tutorial (#12609) * Update Operator Implementation Tutorial (#12230) -* add a tutorial for the subgraph API. (#12698) +* Add a tutorial for the subgraph API. (#12698) * Improve clojure tutorial (#12974) * Update scala intellij tutorial (#12827) * [Example] Gradcam consolidation in tutorial (#13255) @@ -333,7 +331,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * [MXNET-703] Add a TensorRT walkthrough (#12548) #### Example -* update C++ example so it is easier to run (#12397) +* Update C++ example so it is easier to run (#12397) * [MXNET-580] Add SN-GAN example (#12419) * [MXNET-637] Multidimensional LSTM example for MXNetR (#12664) * [MXNET-982] Provide example to illustrate usage of CSVIter in C++ API (#12636) @@ -361,7 +359,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Update ONNX API docs references (#12317) * Documentation update related to sparse support (#12367) * Edit shape.array doc and some style improvements (#12162) -* fixed docs/website build checkout bug (#12413) +* Fixed docs/website build checkout bug (#12413) * Add Python API docs for test_utils and visualization (#12455) * Fix the installation doc for MKL-DNN backend (#12534) * Added comment to docs regarding ToTensor transform (#12186) From ead82f12c6e1de55b14705718f7b7f6216a01016 Mon Sep 17 00:00:00 2001 From: Steffen Rochel Date: Wed, 19 Dec 2018 07:26:53 -0800 Subject: [PATCH 25/28] more updates on review feedback Address all of Aaron's feedback, except question regarding Jira and Test section. 
--- NEWS.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/NEWS.md b/NEWS.md index 8f4ef34edf8d..cd4b5feca48a 100644 --- a/NEWS.md +++ b/NEWS.md @@ -141,8 +141,9 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * ONNX export: Fully connected operator w/o bias, ReduceSum, Square (#12646) * ONNX export/import: Selu (#12785) * ONNX export: Cleanup (#12878) -* Added operators: Selu, DepthToSpace, SpaceToDepth, HardSigmoid, Logical operators +* [MXNET-892] ONNX export/import: DepthToSpace, SpaceToDepth operators (#12731) * ONNX export: Scalar, Reshape - Set appropriate tensor type (#13067) +* [MXNET-886] ONNX export: HardSigmoid, Less, Greater, Equal (#12812) #### MKLDNN @@ -221,9 +222,11 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN #### Julia * Import Julia binding -### Performance improvements +### Performance benchmarks and improvements * Update mshadow for omp acceleration when nvcc is not present (#12674) * [MXNET-860] Avoid implicit double conversions (#12361) +* Add more models to benchmark_score (#12780) +* Add resnet50-v1 to benchmark_score (#12595) ### Bug fixes * Fix for #10920 - increase tolerance for sparse dot (#12527) @@ -289,8 +292,6 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Fix a typo in operator guide (#13115) * Fix variational autoencoder example (#12880) * Fix problem with some OSX not handling the cast on imDecode (#13207) -* Sphinx failure fixes (#13213) -* Revert Sphinx failure fixes (#13230) * [MXNET-953] Fix oob memory read (#12631) * Fix Sphinx error in ONNX file (#13251) * [Example] Fixing Gradcam implementation (#13196) @@ -354,6 +355,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * [Example]update NER example readme on module prediction (#13184) * Update proposal_target.py (#12709) * Removing the re-size for validation data, which breaking the validation accuracy of CIFAR training (#12362) +* Update the README with instruction to redirect the user to gluon-cv (#13186) #### Documentation * Update ONNX API docs references (#12317) @@ -409,10 +411,8 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Sphinx errors in Gluon (#13275) * Update env_var.md (#12702) * Updated the Instructions for use of the label bot (#13192) -* update the README (#13186) * Added/changed file_name, brief description comments in some files (#13033) -* Add more models to benchmark_score (#12780) -* Add resnet50-v1 to benchmark_score (#12595) + #### Test * Add cloverage codecov report to CI for clojure (#12335) From 50e2637297340ef32faf879ade732e3b3bf41fdd Mon Sep 17 00:00:00 2001 From: Aaron Markham Date: Wed, 19 Dec 2018 11:23:03 -0800 Subject: [PATCH 26/28] Apply suggestions from code review Co-Authored-By: srochel --- NEWS.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/NEWS.md b/NEWS.md index cd4b5feca48a..89ff30ed4eb8 100644 --- a/NEWS.md +++ b/NEWS.md @@ -89,7 +89,7 @@ Note: This is an experimental feature and has known problems - see [13341](https #### MKLDNN backend: Graph optimization and Quantization (experimental) Two advanced features, graph optimization (operator fusion) and reduced-precision (INT8) computation, are introduced to MKLDNN backend in this release ([#12530](https://github.com/apache/incubator-mxnet/pull/12530), [#13297](https://github.com/apache/incubator-mxnet/pull/13297), 
[#13260](https://github.com/apache/incubator-mxnet/pull/13260)). -These features significantly boost the inference performance on CPU (up to 4X) for a broad range of deep learning topologies. Currently, this feature is only available for the inference on the CPU platforms. +These features significantly boost the inference performance on CPU (up to 4X) for a broad range of deep learning topologies. Currently, this feature is only available for inference on platforms with [supported Intel CPUs](https://github.com/intel/mkl-dnn#system-requirements). ##### Graph Optimization MKLDNN backend takes advantage of MXNet subgraph to implement the most of possible operator fusions for inference, such as Convolution + ReLU, Batch Normalization folding, etc. When using mxnet-mkl package, users can easily enable this feature by setting export MXNET_SUBGRAPH_BACKEND=MKLDNN. From fa5c48cdf1e3b10b1bbebb28a66097417e55bdcb Mon Sep 17 00:00:00 2001 From: Aaron Markham Date: Wed, 19 Dec 2018 11:23:28 -0800 Subject: [PATCH 27/28] Apply suggestions from code review Co-Authored-By: srochel --- NEWS.md | 51 +++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 51 insertions(+) diff --git a/NEWS.md b/NEWS.md index 89ff30ed4eb8..b17a0570e514 100644 --- a/NEWS.md +++ b/NEWS.md @@ -2,6 +2,57 @@ Apache MXNet (incubating) Change Log ================ ## 1.4.0 + +- [New Features](#new-features) + * [Java Inference API](#java-inference-api) + * [Julia API](#julia-api) + * [Control Flow Operators (experimental)](#control-flow-operators--experimental-) + * [SVRG Optimization](#svrg-optimization) + * [Subgraph API (experimental)](#subgraph-api--experimental-) + * [JVM Memory Management](#jvm-memory-management) + * [Topology-aware AllReduce (experimental)](#topology-aware-allreduce--experimental-) + * [MKLDNN backend: Graph optimization and Quantization (experimental)](#mkldnn-backend--graph-optimization-and-quantization--experimental-) + + [Graph Optimization](#graph-optimization) + + [Quantization](#quantization) +- [New Operators](#new-operators) +- [Feature improvements](#feature-improvements) + * [Operator](#operator) + * [Optimizer](#optimizer) + * [Sparse](#sparse) + * [ONNX](#onnx) + * [MKLDNN](#mkldnn) + * [Inference](#inference) + * [Other](#other) +- [Frontend API updates](#frontend-api-updates) + * [Gluon](#gluon) + * [Symbol](#symbol) +- [Language API updates](#language-api-updates) + * [Java](#java) + * [R](#r) + * [Scala](#scala) + * [Clojure](#clojure) + * [Perl](#perl) + * [Julia](#julia) +- [Performance benchmarks and improvements](#performance-benchmarks-and-improvements) +- [Bug fixes](#bug-fixes) +- [Licensing updates](#licensing-updates) +- [Improvements](#improvements) + * [Tutorial](#tutorial) + * [Example](#example) + * [Documentation](#documentation) + * [Test](#test) + * [Website](#website) + * [MXNet Distributions](#mxnet-distributions) + * [Installation](#installation) + * [Build and CI](#build-and-ci) + * [3rd party](#3rd-party) + + [TVM:](#tvm-) + + [CUDNN:](#cudnn-) + + [Horovod:](#horovod-) +- [Deprications](#deprications) +- [Other](#other-1) +- [How to build MXNet](#how-to-build-mxnet) +- [List of submodules used by Apache MXNet (Incubating) and when they were updated last](#list-of-submodules-used-by-apache-mxnet--incubating--and-when-they-were-updated-last) ### New Features #### Java Inference API From 5bd59ccebd546a1a4e0c867210e224a9ac9286a6 Mon Sep 17 00:00:00 2001 From: Steffen Rochel Date: Wed, 19 Dec 2018 11:28:26 -0800 Subject: [PATCH 
28/28] final updates from review feedback --- NEWS.md | 72 +++------------------------------------------------------ 1 file changed, 3 insertions(+), 69 deletions(-) diff --git a/NEWS.md b/NEWS.md index b17a0570e514..c324e8db5dc7 100644 --- a/NEWS.md +++ b/NEWS.md @@ -40,7 +40,6 @@ Apache MXNet (incubating) Change Log * [Tutorial](#tutorial) * [Example](#example) * [Documentation](#documentation) - * [Test](#test) * [Website](#website) * [MXNet Distributions](#mxnet-distributions) * [Installation](#installation) * [Build and CI](#build-and-ci) * [3rd party](#3rd-party) @@ -251,14 +250,12 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN #### Scala * Explain the details for Scala Experimental (#12348) -* MXNET-873 - Bring Clojure Package Inline with New DataDesc and Layout in Scala Package (#12387) * [MXNET-716] Adding Scala Inference Benchmarks (#12721) * [MXNET-716][MIRROR #12723] Scala Benchmark Extension pack (#12758) * NativeResource Management in Scala (#12647) * Ignore generated Scala files (#12928) * Use ResourceScope in Model/Trainer/FeedForward.scala (#12882) * [MXNET-1180] Scala Image API (#12995) -* Port of Scala Image API to Clojure (#13107) * Update log4j version of Scala package (#13131) * Review require() usages to add meaningful messages (#12570) * Fix Scala readme (#13082) @@ -266,12 +263,14 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN #### Clojure * Introduction to Clojure-MXNet video link (#12754) * Improve the Clojure Package README to Make it Easier to Get Started (#12881) +* MXNET-873 - Bring Clojure Package Inline with New DataDesc and Layout in Scala Package (#12387) +* Port of Scala Image API to Clojure (#13107) #### Perl * [MXNET-1026] [Perl] Sync with recent changes in Python's API (#12739) #### Julia -* Import Julia binding +* Import Julia binding (#10149); usage instructions are available at https://github.com/apache/incubator-mxnet/tree/master/julia ### Performance benchmarks and improvements * Update mshadow for omp acceleration when nvcc is not present (#12674) * [MXNET-860] Avoid implicit double conversions (#12361) * Add more models to benchmark_score (#12780) * Add resnet50-v1 to benchmark_score (#12595) ### Bug fixes * Fix for #10920 - increase tolerance for sparse dot (#12527) @@ -464,71 +463,6 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN * Updated the Instructions for use of the label bot (#13192) * Added/changed file_name, brief description comments in some files (#13033) - -#### Test -* Add cloverage codecov report to CI for clojure (#12335) -* Disable flaky test test_operator.test_dropout (#12330) -* add initializer test (#12196) -* [MXAPPS-581] Disable a long test in the SD nightly. (#12326) -* [MXAPPS-581] Disable a long test in the SD nightly.
(#12326) (#12339) -* Disable flaky test test_ndarray.test_order (#12311) -* fix flaky test: test_broadcast_binary_op (#11875) -* [MXAPPS-581] Disable an additional long test in the SD nightly (#12343) -* [MXNET-690] Add tests for initializers in R (#12360) -* Disabled flaky test: test_mkldnn.test_activation (#12378) -* support softmin operator with unit test (#12306) -* Revert Revert Disable kvstore test (#11798) (#12279) (#12379) -* fixed flaky test issue for test_operator_gpu.test_convolution_grouping (#12385) -* fixed flaky test issue for test_operator_gpu.test_depthwise_convolution (#12402) -* adjust tolerance levels of test_l2_normalization (#12429) -* Revert fixed flaky test issue for test_operator_gpu.test_depthwise_convolution (#12402) (#12441) -* remove flaky test and add consistency test for stable testing (#12427) -* Fix flaky test test_operator_gpu.test_batchnorm_with_type (#11873) -* [MXNET-909] Disable tvm_bridge test (#12476) -* Fix flaky test: test_mkldnn.test_activation #12377 (#12418) -* Temporarily disable flaky tests (#12513) -* Temporarily disable flaky tests (#12520) -* Revert Fix flaky test: test_mkldnn.test_activation #12377 (#12418) (#12516) -* [MXNET-851] Test coverage metrics for R-package (#12391) -* redirecting navigation items to latest info (#12540) -* Disable installation nightly test (#12571) -* [MXNET-952] Check for correlation kernel size along with unittest (#12558) -* [MXNET-968] Fix MacOS python tests (#12590) -* [MXNET-908] Enable python tests in Travis (#12550) -* fix test_activation by lowering threshold + validate eps for check_numeric_gradient (#12560) -* Enable gluon multi worker data loader test (#12315) -* Remove fixed seed for test_ctc_loss (#12686) -* fix for test order (#12358) -* [MXNET-500]Test cases improvement for MKLDNN on Gluon (#10921) -* Enable test_gluon.test_export (#12688) -* Disable test batchnorm slice (#12716) -* Reenable test_gluon.test_conv (#12718) -* Update packages and tests in the straight dope nightly (#12744) -* Fix failing GPU test on single GPU host (kvstore) (#12726) -* [MXNET-915] Java Inference API core wrappers and tests (#12757) -* Disabled flaky test: test_mkldnn.test_Deconvolution (#12770) -* Re-enables test_dropout (#12717) -* [MXNET-707] Add unit test for mxnet to coreml converter (#11952) -* Added context object to run TestCharRnn example (#12841) -* [MXNET-703] Show perf info for TensorRT during tests (#12656) -* [MXNET-793] ★ Virtualized testing in CI with QEMU ★ (#12094) -* Disabled flaky test: test_gluon_gpu.test_slice_batchnorm_reshape_batchnorm (#12768) -* Refactor mkldnn test files (#12410) -* enable batchnorm unit tests (#12986) -* Disable flaky test test_operator.test_dropout (#13057) -* Disable flaky test test_prelu (#13060) -* [MXNET-793] Virtual testing with Qemu, refinement and extract test results to root MXNet folder (#13065) -* Disable travis tests (#13137) -* [MXNET-1194] Reenable nightly tutorials tests for Python2 and Python3 (#13099) -* Refactor kvstore test (#13140) -* Disable Flaky test test_operator.test_clip (#12902) -* Tool to ease compilation and reproduction of test results (#13202) -* Implemented a regression unit test for #11793 (#12975) -* Fix test failure due to hybridize call in test_gluon_rnn.test_layer_fill_shape (#13043) -* adding unit test for MKLDNN FullyConnected operator (#12985) -* enabling test_dropout after fixing flaky issue (#13276) -* set proper atol for check_with_uniform (#12313) - #### Website * adding apache conf promo to home page (#12347) * Consistent 
website theme and custom 404 (#12426)