Merged

72 commits
1f76a2f
Elastic as module (#34572)
kuizhiqing Aug 4, 2021
d9e63a8
Add gradient with optimizer API (#34395)
MingMingShangTian Aug 4, 2021
420570c
paddle/nn/functional docs' bug fix (#34580)
sunzhongkai588 Aug 4, 2021
0989211
Revert pull request 34212 (#34558)
youth123 Aug 4, 2021
090c863
add CuddEvent destructor function (#34610)
MingMingShangTian Aug 5, 2021
4cc3d9a
[HybridParallel]Fix bug of p2p for partial_send/recv (#34615)
ForFishes Aug 5, 2021
8144a73
[NPU] Support npu op flatten2 (#34579)
YuanRisheng Aug 5, 2021
7e707ce
add not_equal NPU op (#34560)
baoachun Aug 5, 2021
4d6f8f2
optimize ClipGradByGlobalNorm (#34586)
wangxicoding Aug 5, 2021
e47d8a5
[pass_enhance]fix the mkldnn model performance drop problem. test=dev…
winter-wang Aug 5, 2021
7a38b76
[NPU] Support npu op index_select (#34611)
limin2021 Aug 5, 2021
a68709d
add NPU support for zero_copy_tensor. (#34629)
houj04 Aug 5, 2021
1d7b75d
Support Ternary ops in elmentwise and broadcast (#33976)
JamesLim-sy Aug 5, 2021
911c859
optimize pipeline performance with recompute and amp, test=allcase (#…
wangxicoding Aug 5, 2021
bb7b4c0
remove boost::algorithm::ends_with ,boost macro and boost::lexical_ca…
MingMingShangTian Aug 5, 2021
a842828
[Dy2Stat]Support Mixed Precision training in @to_static (#34562)
Aurelius84 Aug 5, 2021
ff062a4
fix output dtype for paddle.sum (#34313)
GuoxiaWang Aug 5, 2021
a9ee383
[Dy2st]Integrated gast library to fix compatibility problem permanent…
0x45f Aug 5, 2021
012d12b
New executor dev (#34407)
phlrain Aug 5, 2021
6839994
[NPU] Add relu6 and relu6_grad npu op (#34596)
wjj19950828 Aug 5, 2021
6151ccd
[NPU] Support npu op: (1) cos (2) cos_grad (#34573)
veyron95 Aug 5, 2021
6c8a10a
rm detach (#34644)
ForFishes Aug 5, 2021
4a52c0c
[paddle-trt] fix_teller_reshape (#34583)
Wangzheee Aug 5, 2021
68377b4
fix dygraph has_grad (#34649)
sneaxiy Aug 5, 2021
06651c4
Fix ut test_pe_fix_op_run_order by using smaller model and batch size…
sneaxiy Aug 6, 2021
436a9f1
fix log_softmax if any dimension is 0-d (#34635)
juncaipeng Aug 6, 2021
c91b1e0
[NPU]Use another method to void c_allreduce_sum core! (#34619)
gongweibao Aug 6, 2021
ce73349
del wait in sharding for npu (#34637)
Baibaifan Aug 6, 2021
c16421c
fix npu compile error, test=develop (#34656)
qili93 Aug 6, 2021
6e442e6
Support npu kernel for eye op (#34543)
yeliang2258 Aug 6, 2021
0f19ac7
paddle/nn fix formula bugs (#34643)
sunzhongkai588 Aug 6, 2021
47d81b0
[NPU]add reduce_prod (#34182)
windstamp Aug 6, 2021
cabfb4a
[NPU] Support npu kernel for atan and atan_grad op, test=develop (#34…
Liu-xiandong Aug 6, 2021
8a9dc5d
add get xpu version api (#34594)
tangzhiyi11 Aug 6, 2021
fa16c21
fix bug of inplace (#34665)
ForFishes Aug 6, 2021
21beef9
support kunlun black list and add kl1 op (#34605)
QingshuChen Aug 6, 2021
4caf60d
fix simple_rnn_cell, gru_cell and lstm_cell zero_div_error (#34627)
joey12300 Aug 6, 2021
52e38a0
zero_copy_tensor unittest: support XPU. (#34670)
houj04 Aug 6, 2021
46808af
fix no value for parameter (#34091)
Jiangxinz Aug 7, 2021
338f9e0
add sequence_mask_op_npu and tests (#34455)
ronny1996 Aug 8, 2021
b7355d8
[NPU] add broadcast supporting for elementwise_add_op_npu (#34057)
ronny1996 Aug 9, 2021
0dff82c
Recompute: fix bug with transformer attention mask (#34664)
JZ-LIANG Aug 9, 2021
898acb1
fix split on empty tensor (#34356)
zhiqiu Aug 9, 2021
a3cc2d0
bugfix remove fluid (#34680)
JZ-LIANG Aug 9, 2021
56759ff
optimization batch_norm 2D and NCHW format on CPU (#34585)
Zjq9409 Aug 9, 2021
4c1ba73
[NPU] add one_hot_op_npu and tests (#34258)
ronny1996 Aug 9, 2021
aab4d6e
Increase the speed of incremental compilation (#34616)
zhwesky2010 Aug 9, 2021
3380778
limit chunk.axis (#34630)
wangzhen38 Aug 9, 2021
7afd31b
[NPU] Support npu op flatten2_grad (#34669)
YuanRisheng Aug 9, 2021
8009257
fix_trt_int8 (#34704)
Wangzheee Aug 9, 2021
e285258
[NPU] add lock for npu_pinned_allocator (#34700)
zhiqiu Aug 9, 2021
bf54534
Revert "add CuddEvent destructor function (#34610)" (#34720)
MingMingShangTian Aug 9, 2021
3f32b73
Fix error of HSigmoidLoss (#34719)
linjieccc Aug 10, 2021
202c240
Support npu kernel for expand_as_v2 op (#34620)
rainyfly Aug 10, 2021
8a6aa59
Support npu kernel for tile op (#34606)
rainyfly Aug 10, 2021
84eb675
kill all procs on exiting (#34741)
kuizhiqing Aug 10, 2021
d86c26d
fix for div zero (#34724)
zh794390558 Aug 10, 2021
1289292
copy boost/any.hpp to utils and replace boost::any with self defined …
MingMingShangTian Aug 10, 2021
f30a5c4
add cudaEvent destructor function (#34734)
MingMingShangTian Aug 10, 2021
a160379
[hybrid] refine sharding code (#34678)
wangxicoding Aug 10, 2021
4f4662b
[bug fix] fix unfold fpe bug (#34673)
ghostxsl Aug 10, 2021
cfd49ac
fix a quantization bug (#34647)
XGZhang11 Aug 10, 2021
ed2641c
[NPU] Support op kernel for Fill constant batch size like op (#34721)
andyjiang1116 Aug 10, 2021
e8df322
Support npu op fill_any_like (#34518)
zyfncg Aug 10, 2021
b64312f
[NPU] add squared_l2_norm squared_l2_norm_grad and tests (#34708)
Aganlengzi Aug 10, 2021
8b9bd16
fix format_string_append test cast,test=develop (#34753)
MingMingShangTian Aug 10, 2021
8f9d573
Kernel primitives api (#34672)
AnnaTrainingG Aug 10, 2021
79be842
[NPU] Support npu kernel for flatten_contiguous_range op, test=develo…
Liu-xiandong Aug 10, 2021
17c1dae
Add no need output to gc check list (#34754)
phlrain Aug 11, 2021
bb01b12
[NPU] Support NPU kernel for TopKV2 op (#34599)
From00 Aug 11, 2021
6a9fac1
modified reduce_sum_op and reduce_mean_op for higher_performance (#32…
AnnaTrainingG Aug 11, 2021
4d2994c
Optimize fused allreduce in raw program (#34509)
FeixLiu Aug 11, 2021
2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -130,7 +130,7 @@ if(WIN32)
# NOTE(zhouwei25): GPU compilation has very high memory utilization when compiling in parallel.
# For Visual Studio generators, /MP should be added.
# For other generators like Ninja, there is no need to add /MP.
if("${CMAKE_GENERATOR}" STREQUAL "Visual Studio" AND NOT WITH_GPU)
if(CMAKE_GENERATOR MATCHES "Visual Studio" AND NOT WITH_GPU)
math(EXPR PROCESS_MAX "${CPU_CORES} * 2 / 3")
set(${flag_var} "${${flag_var}} /MP${PROCESS_MAX}")
endif()
1 change: 1 addition & 0 deletions cmake/external/gflags.cmake
@@ -41,6 +41,7 @@ ExternalProject_Add(
${SHALLOW_CLONE}
"${GFLAGS_DOWNLOAD_CMD}"
PREFIX ${GFLAGS_PREFIX_DIR}
UPDATE_COMMAND ""
SOURCE_DIR ${GFLAGS_SOURCE_DIR}
BUILD_COMMAND ${BUILD_COMMAND}
INSTALL_COMMAND ${INSTALL_COMMAND}
1 change: 1 addition & 0 deletions cmake/external/glog.cmake
@@ -45,6 +45,7 @@ ExternalProject_Add(
DEPENDS gflags
PREFIX ${GLOG_PREFIX_DIR}
SOURCE_DIR ${GLOG_SOURCE_DIR}
UPDATE_COMMAND ""
CMAKE_ARGS -DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-DCMAKE_CXX_FLAGS=${GLOG_CMAKE_CXX_FLAGS}
55 changes: 25 additions & 30 deletions cmake/external/mkldnn.cmake
@@ -79,49 +79,44 @@ ExternalProject_Add(
-DCMAKE_CXX_FLAGS=${MKLDNN_CXXFLAG}
-DDNNL_BUILD_TESTS=OFF -DDNNL_BUILD_EXAMPLES=OFF
CMAKE_CACHE_ARGS -DCMAKE_INSTALL_PREFIX:PATH=${MKLDNN_INSTALL_DIR}
BUILD_BYPRODUCTS ${MKLDNN_LIB}
)

ADD_LIBRARY(shared_mkldnn SHARED IMPORTED GLOBAL)
SET_PROPERTY(TARGET shared_mkldnn PROPERTY IMPORTED_LOCATION ${MKLDNN_LIB})
ADD_DEPENDENCIES(shared_mkldnn ${MKLDNN_PROJECT})
MESSAGE(STATUS "MKLDNN library: ${MKLDNN_LIB}")
add_definitions(-DPADDLE_WITH_MKLDNN)

# generate a static dummy target to track mkldnn dependencies
# for cc_library(xxx SRCS xxx.c DEPS mkldnn)
generate_dummy_static_lib(LIB_NAME "mkldnn" GENERATOR "mkldnn.cmake")

TARGET_LINK_LIBRARIES(mkldnn ${MKLDNN_LIB} ${MKLML_IOMP_LIB})
ADD_DEPENDENCIES(mkldnn ${MKLDNN_PROJECT})

# copy the real so.0 lib to install dir
# it can be directly contained in wheel or capi
if(WIN32)
SET(MKLDNN_SHARED_LIB ${MKLDNN_INSTALL_DIR}/bin/mkldnn.dll)

file(TO_NATIVE_PATH ${MKLDNN_INSTALL_DIR} NATIVE_MKLDNN_INSTALL_DIR)
file(TO_NATIVE_PATH ${MKLDNN_SHARED_LIB} NATIVE_MKLDNN_SHARED_LIB)
ADD_CUSTOM_COMMAND(TARGET ${MKLDNN_PROJECT} POST_BUILD
COMMAND (copy ${NATIVE_MKLDNN_INSTALL_DIR}\\bin\\dnnl.dll ${NATIVE_MKLDNN_SHARED_LIB} /Y))
add_custom_command(TARGET ${MKLDNN_PROJECT} POST_BUILD VERBATIM
COMMAND dumpbin /exports ${MKLDNN_INSTALL_DIR}/bin/mkldnn.dll > ${MKLDNN_INSTALL_DIR}/bin/exports.txt)
add_custom_command(TARGET ${MKLDNN_PROJECT} POST_BUILD VERBATIM
COMMAND echo LIBRARY mkldnn > ${MKLDNN_INSTALL_DIR}/bin/mkldnn.def)
add_custom_command(TARGET ${MKLDNN_PROJECT} POST_BUILD VERBATIM
COMMAND echo EXPORTS >> ${MKLDNN_INSTALL_DIR}/bin/mkldnn.def)
add_custom_command(TARGET ${MKLDNN_PROJECT} POST_BUILD VERBATIM
COMMAND echo off && (for /f "skip=19 tokens=4" %A in (${MKLDNN_INSTALL_DIR}/bin/exports.txt) do echo %A >> ${MKLDNN_INSTALL_DIR}/bin/mkldnn.def) && echo on)
add_custom_command(TARGET ${MKLDNN_PROJECT} POST_BUILD VERBATIM
COMMAND lib /def:${MKLDNN_INSTALL_DIR}/bin/mkldnn.def /out:${MKLDNN_INSTALL_DIR}/bin/mkldnn.lib /machine:x64)

ADD_CUSTOM_COMMAND(OUTPUT ${MKLDNN_LIB}
COMMAND (copy ${NATIVE_MKLDNN_INSTALL_DIR}\\bin\\dnnl.dll ${NATIVE_MKLDNN_SHARED_LIB} /Y)
COMMAND dumpbin /exports ${MKLDNN_INSTALL_DIR}/bin/mkldnn.dll > ${MKLDNN_INSTALL_DIR}/bin/exports.txt
COMMAND echo LIBRARY mkldnn > ${MKLDNN_INSTALL_DIR}/bin/mkldnn.def
COMMAND echo EXPORTS >> ${MKLDNN_INSTALL_DIR}/bin/mkldnn.def
COMMAND echo off && (for /f "skip=19 tokens=4" %A in (${MKLDNN_INSTALL_DIR}/bin/exports.txt) do echo %A >> ${MKLDNN_INSTALL_DIR}/bin/mkldnn.def) && echo on
COMMAND lib /def:${MKLDNN_INSTALL_DIR}/bin/mkldnn.def /out:${MKLDNN_LIB} /machine:x64
COMMENT "Generate mkldnn.lib manually--->"
DEPENDS ${MKLDNN_PROJECT}
VERBATIM)
ADD_CUSTOM_TARGET(mkldnn_cmd ALL DEPENDS ${MKLDNN_LIB})
else(WIN32)
SET(MKLDNN_SHARED_LIB ${MKLDNN_INSTALL_DIR}/libmkldnn.so.0)
SET(MKLDNN_SHARED_LIB_1 ${MKLDNN_INSTALL_DIR}/libdnnl.so.1)
SET(MKLDNN_SHARED_LIB_2 ${MKLDNN_INSTALL_DIR}/libdnnl.so.2)
ADD_CUSTOM_COMMAND(TARGET ${MKLDNN_PROJECT} POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy ${MKLDNN_LIB} ${MKLDNN_SHARED_LIB})
ADD_CUSTOM_COMMAND(TARGET ${MKLDNN_PROJECT} POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy ${MKLDNN_LIB} ${MKLDNN_SHARED_LIB_1})
ADD_CUSTOM_COMMAND(TARGET ${MKLDNN_PROJECT} POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy ${MKLDNN_LIB} ${MKLDNN_SHARED_LIB_2})
ADD_CUSTOM_COMMAND(OUTPUT ${MKLDNN_SHARED_LIB_2}
COMMAND ${CMAKE_COMMAND} -E copy ${MKLDNN_LIB} ${MKLDNN_SHARED_LIB}
COMMAND ${CMAKE_COMMAND} -E copy ${MKLDNN_LIB} ${MKLDNN_SHARED_LIB_1}
COMMAND ${CMAKE_COMMAND} -E copy ${MKLDNN_LIB} ${MKLDNN_SHARED_LIB_2}
DEPENDS ${MKLDNN_PROJECT})
ADD_CUSTOM_TARGET(mkldnn_cmd ALL DEPENDS ${MKLDNN_SHARED_LIB_2})
endif(WIN32)

# generate a static dummy target to track mkldnn dependencies
# for cc_library(xxx SRCS xxx.c DEPS mkldnn)
generate_dummy_static_lib(LIB_NAME "mkldnn" GENERATOR "mkldnn.cmake")

TARGET_LINK_LIBRARIES(mkldnn ${MKLDNN_LIB} ${MKLML_IOMP_LIB})
ADD_DEPENDENCIES(mkldnn ${MKLDNN_PROJECT} mkldnn_cmd)
20 changes: 10 additions & 10 deletions cmake/external/protobuf.cmake
@@ -198,16 +198,16 @@ FUNCTION(build_protobuf TARGET_NAME BUILD_FOR_HOST)
"-Dprotobuf_MSVC_STATIC_RUNTIME=${MSVC_STATIC_CRT}")
ENDIF()

if(WITH_ASCEND AND NOT WITH_ASCEND_CXX11)
SET(PROTOBUF_REPOSITORY https://gitee.com/tianjianhe/protobuf.git)
SET(PROTOBUF_TAG v3.8.0)
elseif(WITH_ASCEND_CL AND NOT WITH_ASCEND_CXX11)
SET(PROTOBUF_REPOSITORY https://gitee.com/tianjianhe/protobuf.git)
SET(PROTOBUF_TAG v3.8.0)
else()
SET(PROTOBUF_REPOSITORY ${GIT_URL}/protocolbuffers/protobuf.git)
SET(PROTOBUF_TAG 9f75c5aa851cd877fb0d93ccc31b8567a6706546)
endif()
if(WITH_ASCEND AND NOT WITH_ASCEND_CXX11)
SET(PROTOBUF_REPOSITORY https://gitee.com/tianjianhe/protobuf.git)
SET(PROTOBUF_TAG v3.8.0)
elseif(WITH_ASCEND_CL AND NOT WITH_ASCEND_CXX11)
SET(PROTOBUF_REPOSITORY https://gitee.com/tianjianhe/protobuf.git)
SET(PROTOBUF_TAG v3.8.0)
else()
SET(PROTOBUF_REPOSITORY ${GIT_URL}/protocolbuffers/protobuf.git)
SET(PROTOBUF_TAG 9f75c5aa851cd877fb0d93ccc31b8567a6706546)
endif()

cache_third_party(${TARGET_NAME}
REPOSITORY ${PROTOBUF_REPOSITORY}
1 change: 1 addition & 0 deletions cmake/external/pybind11.cmake
@@ -39,6 +39,7 @@ ExternalProject_Add(
# to be modified without triggering incremental compilation, and the
# third-party library version changes cannot be incorporated.
# reference: https://cmake.org/cmake/help/latest/module/ExternalProject.html
UPDATE_COMMAND ""
CONFIGURE_COMMAND ""
BUILD_COMMAND ""
INSTALL_COMMAND ""
2 changes: 1 addition & 1 deletion cmake/external/xpu.cmake
@@ -35,7 +35,7 @@ ELSE ()
ENDIF()

SET(XPU_BASE_URL_WITHOUT_DATE "https://baidu-kunlun-product.cdn.bcebos.com/KL-SDK/klsdk-dev")
SET(XPU_BASE_URL "${XPU_BASE_URL_WITHOUT_DATE}/20210729")
SET(XPU_BASE_URL "${XPU_BASE_URL_WITHOUT_DATE}/20210804")
SET(XPU_XRE_URL "${XPU_BASE_URL}/${XPU_XRE_DIR_NAME}.tar.gz" CACHE STRING "" FORCE)
SET(XPU_XDNN_URL "${XPU_BASE_URL}/${XPU_XDNN_DIR_NAME}.tar.gz" CACHE STRING "" FORCE)
SET(XPU_XCCL_URL "${XPU_BASE_URL_WITHOUT_DATE}/20210623/${XPU_XCCL_DIR_NAME}.tar.gz" CACHE STRING "" FORCE)
3 changes: 3 additions & 0 deletions cmake/inference_lib.cmake
@@ -205,6 +205,9 @@ copy(inference_lib_dist
copy(inference_lib_dist
SRCS ${PADDLE_SOURCE_DIR}/paddle/fluid/platform/float16.h
DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/experimental/)
copy(inference_lib_dist
SRCS ${PADDLE_SOURCE_DIR}/paddle/utils/any.h
DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/experimental/)

# CAPI inference library for only inference
set(PADDLE_INFERENCE_C_INSTALL_DIR "${CMAKE_BINARY_DIR}/paddle_inference_c_install_dir" CACHE STRING
9 changes: 4 additions & 5 deletions paddle/fluid/distributed/common/sparse_sharding_merge.h
@@ -21,7 +21,6 @@
#include <vector>

#include <ThreadPool.h>
#include "boost/lexical_cast.hpp"
#include "glog/logging.h"
#include "paddle/fluid/distributed/common/utils.h"
#include "paddle/fluid/framework/blocking_queue.h"
@@ -36,8 +35,6 @@ constexpr int Q_SIZE = 10000;
constexpr int BUCKET = 10;
constexpr char XEOF[] = "EOF";

using boost::lexical_cast;

inline double GetCurrentUS() {
struct timeval time;
gettimeofday(&time, NULL);
@@ -208,8 +205,10 @@ class ShardingMerge {
for (int x = 0; x < embedding_dim; ++x) {
float v = 0.0;
try {
v = lexical_cast<float>(values_str[x]);
} catch (boost::bad_lexical_cast &e) {
v = std::stof(values_str[x]);
} catch (std::invalid_argument &e) {
VLOG(0) << " get unexpected line: " << line;
} catch (std::out_of_range &e) {
VLOG(0) << " get unexpected line: " << line;
}
out->push_back(v);
4 changes: 1 addition & 3 deletions paddle/fluid/distributed/index_dataset/index_wrapper.cc
@@ -17,8 +17,6 @@ limitations under the License. */
#include <vector>
#include "paddle/fluid/framework/io/fs.h"

#include <boost/algorithm/string.hpp>
#include <boost/lexical_cast.hpp>
#include "paddle/fluid/distributed/index_dataset/index_wrapper.h"

namespace paddle {
@@ -65,7 +63,7 @@ int TreeIndex::Load(const std::string filename) {
if (item.key() == ".tree_meta") {
meta_.ParseFromString(item.value());
} else {
auto code = boost::lexical_cast<uint64_t>(item.key());
auto code = std::stoull(item.key());
IndexNode node;
node.ParseFromString(item.value());
PADDLE_ENFORCE_NE(node.id(), 0,
17 changes: 9 additions & 8 deletions paddle/fluid/distributed/table/common_sparse_table.cc
@@ -15,7 +15,6 @@
#include "paddle/fluid/distributed/table/common_sparse_table.h"
#include <sstream>

#include "boost/lexical_cast.hpp"
#include "glog/logging.h"
#include "paddle/fluid/platform/enforce.h"

@@ -50,8 +49,11 @@ void CommonSparseTable::ProcessALine(const std::vector<std::string>& columns,
float v = 0.0;

try {
v = lexical_cast<float>(va);
} catch (boost::bad_lexical_cast& e) {
v = std::stof(va);
} catch (std::invalid_argument& e) {
VLOG(0) << "id: " << id << " get unexpected value: " << va
<< " and be reset to: 0.0";
} catch (std::out_of_range& e) {
VLOG(0) << "id: " << id << " get unexpected value: " << va
<< " and be reset to: 0.0";
}
@@ -131,7 +133,7 @@ int64_t CommonSparseTable::LoadFromText(

while (std::getline(file, line)) {
auto values = paddle::string::split_string<std::string>(line, "\t");
auto id = lexical_cast<uint64_t>(values[0]);
auto id = std::stoull(values[0]);

if (id % pserver_num != pserver_id) {
VLOG(3) << "will not load " << values[0] << " from " << valuepath
@@ -150,10 +152,9 @@
VALUE* value_instant = block->GetValue(id);

if (values.size() == 5) {
value_instant->count_ = lexical_cast<int>(values[1]);
value_instant->unseen_days_ = lexical_cast<int>(values[2]);
value_instant->is_entry_ =
static_cast<bool>(lexical_cast<int>(values[3]));
value_instant->count_ = std::stoi(values[1]);
value_instant->unseen_days_ = std::stoi(values[2]);
value_instant->is_entry_ = static_cast<bool>(std::stoi(values[3]));
}

std::vector<float*> block_values = block->Get(id, meta.names, meta.dims);
1 change: 0 additions & 1 deletion paddle/fluid/distributed/table/common_sparse_table.h
@@ -33,7 +33,6 @@
#include "paddle/fluid/string/string_helper.h"

#define PSERVER_SAVE_SUFFIX ".shard"
using boost::lexical_cast;

namespace paddle {
namespace distributed {
9 changes: 4 additions & 5 deletions paddle/fluid/distributed/table/ssd_sparse_table.cc
@@ -310,7 +310,7 @@ int64_t SSDSparseTable::LoadFromText(

while (std::getline(file, line)) {
auto values = paddle::string::split_string<std::string>(line, "\t");
auto id = lexical_cast<uint64_t>(values[0]);
auto id = std::stoull(values[0]);

if (id % pserver_num != pserver_id) {
VLOG(3) << "will not load " << values[0] << " from " << valuepath
@@ -329,10 +329,9 @@
VALUE* value_instant = block->GetValue(id);

if (values.size() == 5) {
value_instant->count_ = lexical_cast<int>(values[1]);
value_instant->unseen_days_ = lexical_cast<int>(values[2]);
value_instant->is_entry_ =
static_cast<bool>(lexical_cast<int>(values[3]));
value_instant->count_ = std::stoi(values[1]);
value_instant->unseen_days_ = std::stoi(values[2]);
value_instant->is_entry_ = static_cast<bool>(std::stoi(values[3]));
}

std::vector<float*> block_values = block->Get(id, meta.names, meta.dims);