Merged
fix runtime_context_cache bug when a GPU model has an op that runs only on CPU
add checkpoint functions for graph. test=develop
* implement distributed transpiler with fleet
implement dygraph.parallel.DataParallel to hook reduce op.
* Init mixed precision training interface * Add fp16 test script test=develop * All initializers support float16 test=develop * Code cleanup & add more code annotations test=develop * Update API spec test=develop * Add usage example in doc test=develop
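The mixed precision interface above revolves around computing in fp16 while keeping master state and gradient updates in fp32. A minimal NumPy sketch of the loss-scaling idea behind such interfaces (the function name and scale value are illustrative, not Paddle's actual API):

```python
import numpy as np

def unscale_gradients(grads_fp16, loss_scale=1024.0):
    """Hypothetical sketch of static loss scaling for fp16 training:
    gradients computed on a scaled loss are cast up to fp32 and divided
    back by the scale before the optimizer update."""
    return [g.astype(np.float32) / loss_scale for g in grads_fp16]

# toy gradients as they might come out of an fp16 backward pass
g = np.array([0.5, -2.0], dtype=np.float16) * 1024.0
unscaled = unscale_gradients([g])[0]  # back in fp32, original magnitude
```

Scaling the loss keeps small gradient values representable in fp16's narrow range; unscaling in fp32 restores their true magnitude.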
test=develop
Speedup roi_perspective_transform op by caching the information of linear interpolation in forward (#17090) * Cache the information of linear interpolation in forward and use it in backward. test=develop * Fix cuda kernel. test=develop
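The speedup comes from computing the interpolation corner indices and weights once in the forward pass and reusing them in backward instead of recomputing. A hedged NumPy sketch of that caching pattern (names are illustrative; the real op works on ROI grids in a CUDA kernel):

```python
import numpy as np

def bilinear_forward(img, x, y):
    """Sample img bilinearly at (x, y) and cache the four corner
    indices and weights so backward can reuse them."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    # standard bilinear corner weights
    w = [(1 - dx) * (1 - dy), dx * (1 - dy), (1 - dx) * dy, dx * dy]
    corners = [(y0, x0), (y0, x0 + 1), (y0 + 1, x0), (y0 + 1, x0 + 1)]
    val = sum(wi * img[r, c] for wi, (r, c) in zip(w, corners))
    cache = (corners, w)  # saved for backward; avoids recompute
    return val, cache

def bilinear_backward(grad_out, cache, img_shape):
    """Scatter the output gradient back using the cached weights."""
    grad_img = np.zeros(img_shape)
    corners, w = cache
    for wi, (r, c) in zip(w, corners):
        grad_img[r, c] += wi * grad_out
    return grad_img

img = np.arange(16, dtype=np.float64).reshape(4, 4)
val, cache = bilinear_forward(img, 1.5, 2.5)
grad = bilinear_backward(1.0, cache, img.shape)
```

Backward is a pure scatter with the cached weights, so no floor/weight arithmetic is repeated.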
test=develop
backward of backward: leaky_relu
test=develop
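Double backward of leaky_relu is simple because its first gradient is linear in the upstream gradient: dx = dout * m(x), where the mask m is 1 for x > 0 and alpha elsewhere. The gradient of that grad op therefore reuses the same mask. A NumPy sketch, assuming alpha = 0.02 (Paddle's default for leaky_relu; the exact formulation here is illustrative):

```python
import numpy as np

alpha = 0.02

def leaky_relu_grad(x, dout):
    # dy/dx is 1 where x > 0, alpha elsewhere
    m = np.where(x > 0, 1.0, alpha)
    return dout * m

def leaky_relu_grad_grad(x, ddx):
    # the grad op is linear in dout, so backward-of-backward
    # applies the very same mask to the incoming double grad
    m = np.where(x > 0, 1.0, alpha)
    return ddx * m

x = np.array([-2.0, 3.0])
dx = leaky_relu_grad(x, np.ones_like(x))        # mask applied once
ddout = leaky_relu_grad_grad(x, np.ones_like(x))  # same mask again
```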
* Detailed coordinate description for yolov3 loss test=develop * modified api.spec test=develop * modified loss name * fix api.spec test=develop * polish description test=develop * modified api.spec test=develop
test=develop
* fix python/paddle/fluid/__init__.py detecting problems
1. Use CudnnWorkspaceHandle in exhaustive search of conv_cudnn. 2. For Ops using CudnnWorkspaceHandle in exhaustive search, release their GPU memory after exhaustive search. test=develop
* refine_dropout_mem, test=develop
* This is a combination of 14 commits:
  1. remove ut test_dist_word2vec in mac ci, will fix it in private, test=develop (#17066)
  2. Fleet unify distributed training (#16791): implement distributed transpiler with fleet
  3. ParallelDyGraph with GPU collective mode (#16827): implement dygraph.parallel.DataParallel to hook reduce op
  4. Init mixed precision training interface (#16856): add fp16 test script; all initializers support float16; code cleanup and more code annotations; update API spec; add usage example in doc
  5. fix reference_count_pass, test=develop (#17060)
  6. Speedup roi_perspective_transform op by caching the information of linear interpolation in forward (#17090): cache the information in forward and use it in backward; fix cuda kernel
  7. remove unnecessary prepare_data (#17080)
  8. fix interpolate cu, test=develop (#17101)
  9. double backward leaky_relu (#17067): backward of backward for leaky_relu
  10. fix fuse optimizer ops (#17102)
  11. truncated_gaussian_random supported in distributed training, test=develop (#17091)
  12. Detailed coordinate description for yolov3 loss (#17007): modified api.spec; modified loss name; polish description
  13. fix test_weight_decay (#17109)
  14. Path flag (#17105): fix python/paddle/fluid/__init__.py detecting problems; move the API check into the CPU process; adjust the check order
cvm without LoD.
fix RuntimeError: dictionary changed size during iteration when calling uniform_random in Python 3+
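This is the classic Python 3 pitfall: `dict.keys()` returns a live view, so mutating the dict while iterating it raises the error. The standard fix is to snapshot the keys first; a minimal sketch of the pattern (the dict contents are illustrative):

```python
# In Python 3, mutating a dict while iterating its live key view raises
# "RuntimeError: dictionary changed size during iteration".
d = {'a': 1, 'b': 2, '_tmp': 3}

for k in list(d.keys()):   # list() takes a snapshot; safe to mutate d
    if k.startswith('_'):
        del d[k]
```

In Python 2 `dict.keys()` already returned a list, which is why code like this only starts failing after a Python 3 migration.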
resolve #17147 test=develop
* polish the label_smooth test=develop * polish code test=develop
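For context, label smoothing mixes the one-hot target with a uniform prior over the K classes. A NumPy sketch of the usual formula (a generic illustration, not Paddle's exact `label_smooth` signature):

```python
import numpy as np

def label_smooth(one_hot, epsilon=0.1):
    """Uniform label smoothing: (1 - epsilon) * label + epsilon / K."""
    k = one_hot.shape[-1]
    return (1.0 - epsilon) * one_hot + epsilon / k

y = np.array([0.0, 0.0, 1.0, 0.0])
smoothed = label_smooth(y)  # target mass 0.925, others 0.025 each
```

The smoothed target keeps the model from becoming overconfident on the true class.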
fix Python 3 RuntimeError in layers.ops caused by locals()
* remove async executor python api test=develop * remove test_async_executor.py add executor train_from_dataset demo test=develop * fix import bug test=develop
* remove unnecessary set_devices
* test=develop * test=develop
* enhance_concat, test=develop
test=develop
* add use_cuda to inplace pass,test=develop * add test softmax_with_xe_inplace test,test=develop
* fix tensor_py,test=develop * change class name,test=develop
test_distillation_strategy always failed on a machine with only 4 GPUs; disable it temporarily, then figure out the root cause and add it back later
* fix profiler and name_scope API examples test=develop * update API.spec test=develop
* fix distribute fpn proposals, test=develop
* fix unexecutable API comments, test=develop * add API.spec,test=develop
* refine api comment, test=develop
test=develop
* cherry-pick commit from 8877054
* cherry-pick commit from 3f0b97d
* cherry-pick from 16691: Anakin subgraph support yolo_v3 and faster-rcnn (cherry picked from commit 8643dbc)
* cherry-pick from 16662: Anakin subgraph cpu support (cherry picked from commit 7ad182e)
* cherry-pick from 1662, 16797..: add anakin int8 support (cherry picked from commit e14ab18)
* cherry-pick from 16813: change singleton to graph RegistBlock, test=release/1.4 (cherry picked from commit 4b9fa42)
* cherry-pick 16837: Support ShuffleNet and MobileNet-v2, test=release/1.4 (cherry picked from commit a6fb066)
* cherry-pick: anakin subgraph add opt config layout argument (#16846), test=release/1.4 (cherry picked from commit 8121b3e)
* add shuffle_channel_detect (cherry picked from commit 6efdea8)
* update shuffle_channel op convert, test=release/1.4 (cherry picked from commit e4726a0)
* Modify symbol export rules, test=develop
* optimize sum op: fuse multiple eigen kernel calls into one cuda kernel; refine code, test=develop (Signed-off-by: zhaoyuchen <[email protected]>)
* refine code according to comments, test=develop
* refine code: delete sum_op_gpu.h, test=develop
* fix test error, test=develop (Signed-off-by: zhaoyuchen <[email protected]>)
* refine code in format, test=develop
* Add MovingAverageAbsMaxScale operator, which is only used for calculating the quantization scale
* change the output into inplace, test=develop
* Revert "test=develop" (reverts commit 696cf62)
* Revert "change the output into inplace. test=develop" (reverts commit a19acd2)
* update the MovingAverageAbsMaxScaleOp test, test=develop
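The operator's idea is to track a quantization scale as an exponential moving average of each batch's absolute maximum. A NumPy sketch of that statistic (function and parameter names are illustrative, not the operator's exact attributes):

```python
import numpy as np

def moving_average_abs_max_scale(x, state, momentum=0.9):
    """Update a running quantization scale with the current batch's
    abs-max: state <- momentum * state + (1 - momentum) * max(|x|)."""
    batch_abs_max = np.abs(x).max()
    return momentum * state + (1.0 - momentum) * batch_abs_max

scale = 0.0
for batch in (np.array([0.5, -1.0]), np.array([2.0, -0.25])):
    scale = moving_average_abs_max_scale(batch, scale)
# after the two batches: 0.9 * (0.1 * 1.0) + 0.1 * 2.0 = 0.29
```

Averaging over batches smooths out outlier activations, which makes the derived quantization scale more stable than a raw per-batch abs-max.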
integer', test=develop
* add attr axis infershape, test=develop
* add CUDA kernel, test=develop
* fix unittests, including soft_label and fp16, test=develop
* remove commented-out code, test=develop
* refine test for axis, test=develop
* add python api, test=develop
* fix doc, test=develop
* fix ngraph test; fix ENFORCE for test_imperative_transformer, test=develop
* fix after rebase develop, test=develop
* fix API.spec and test_layers, test=develop
* fix format, test=develop
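The `axis` attribute lets softmax-with-cross-entropy reduce along an arbitrary dimension instead of only the last one. A NumPy sketch of the semantics with hard labels (a generic illustration, not the operator's exact signature):

```python
import numpy as np

def softmax_cross_entropy(logits, labels, axis=-1):
    """Numerically stable softmax cross-entropy along `axis`, with
    hard labels given as an index array."""
    # subtract the max for numerical stability, then take log-softmax
    shifted = logits - logits.max(axis=axis, keepdims=True)
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))
    # pick out the log-probability of each true class
    picked = np.take_along_axis(log_softmax, np.expand_dims(labels, axis), axis)
    return -picked.squeeze(axis)

logits = np.array([[1.0, 2.0, 3.0], [1.0, 1.0, 1.0]])
labels = np.array([2, 0])
loss = softmax_cross_entropy(logits, labels, axis=1)
# uniform logits in row 1 give loss log(3)
```

Fusing the softmax into the loss this way avoids materializing probabilities and is the reason the op needs its own CUDA kernel per axis.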
* remove unused FLAGS_warpctc_dir test=develop * remove FLAGS_warpctc_dir test=develop
test=develop
…tivations (#17235)
* fix api doc of hash, relu, concat, argmin, argmax, argsort and all activation funcs with no attrs, test=develop
* refine doc example code, test=develop
* remove >>> in doc examples, test=develop
* refine python code block, test=develop
* update API spec, test=develop
test=develop
…p inplace (#17225)
* add use_cuda to inplace pass, test=develop
* add softmax_with_xe_inplace test, test=develop
* fix potential inplace bug, test=develop
* add more skip vars in mem opt pass, test=develop
* follow comments; move duplicate out arg check to program->graph, test=develop