12 changes: 6 additions & 6 deletions test/collective/README.md
@@ -6,30 +6,30 @@
and specify the properties for the new unit test
the properties are the following:
* `name`: the test's name
* `os`: The supported operator system, ignoring case. If the test run in multiple operator systems, use ";" to split systems, for example, `apple;linux` means the test runs on both Apple and Linux. The supported values are `linux`,`win32` and `apple`. If the value is empty, this means the test runs on all opertaor systems.
* `arch`: the device's architecture. similar to `os`, multiple valuse ars splited by ";" and ignoring case. The supported architectures are `gpu`, `xpu` and `rocm`.
* `os`: The supported operator system, ignoring case. If the test run in multiple operator systems, use ";" to split systems, for example, `apple;linux` means the test runs on both Apple and Linux. The supported values are `linux`,`win32` and `apple`. If the value is empty, this means the test runs on all operator systems.
* `arch`: the device's architecture. similar to `os`, multiple values are splited by ";" and ignoring case. The supported architectures are `gpu`, `xpu` and `rocm`.
* `timeout`: timeout of a unittest, whose unit is second. Blank means default.
* `run_type`: run_type of a unittest. Supported values are `NIGHTLY`, `EXCLUSIVE`, `CINN`, `DIST`, `GPUPS`, `INFER`, `EXCLUSIVE:NIGHTLY`, `DIST:NIGHTLY`,which are case-insensitive.
* `launcher`: the test launcher.Supported values are test_runner.py, dist_test.sh and custom scripts' name. Blank means test_runner.py.
* `num_port`: the number of port used in a distributed unit test. Blank means automatically distributed port.
* `run_serial`: whether in serial mode. the value can be 1 or 0.Default (empty) is 0. Blank means default.
* `ENVS`: required environments. multiple envirenmonts are splited by ";".
* `ENVS`: required environments. multiple environments are splited by ";".
* `conditions`: extra required conditions for some tests. The value is a list of boolean expression in cmake programmer, splited with ";". For example, the value can be `WITH_DGC;NOT WITH_NCCL` or `WITH_NCCL;${NCCL_VERSION} VERSION_GREATER_EQUAL 2212`,The relationship between these expressions is a conjunction.
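
For illustration, a single row of testslist.csv combining these properties could look like the following. The test name and values are hypothetical, and the column order and header casing are assumed to follow the property list above:
```csv
name,os,arch,timeout,run_type,launcher,num_port,run_serial,ENVS,conditions
test_collective_demo,linux,gpu,120,DIST,test_runner.py,,1,http_proxy=;https_proxy=,WITH_NCCL
```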

### step 3. Generate CmakeLists.txt
### step 3. Generate CMakeLists.txt
Run the cmd:
```bash
python3 ${PADDLE_ROOT}/tools/gen_ut_cmakelists.py -f ${PADDLE_ROOT}/python/paddle/fluid/tests/unittests/collective/testslist.csv
```
Then the cmd generates a file named CMakeLists.txt in the same directory with the testslist.csv.
* usgae:
* usage:
The command accepts --files/-f or --dirpaths/-d options, both of which accepts multiple values.
Option -f accepts a list of testslist.csv.
Option -d accepts a list of directory path including files named testslist.csv.
Type `python3 ${PADDLE_ROOT}/tools/gen_ut_cmakelists.py --help` for details.

* note:
When commiting the codes, you should commit both the testslist.csv and the generated CMakeLists.txt. Once you pulled the repo, you don't need to run this command until you modify the testslists.csv file.
When committing the codes, you should commit both the testslist.csv and the generated CMakeLists.txt. Once you pulled the repo, you don't need to run this command until you modify the testslists.csv file.
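
As a sketch of the `-d` form described above, regenerating CMakeLists.txt for directories that contain a testslist.csv might look like this; the directory paths are examples only, and passing multiple values as space-separated arguments is an assumption:
```bash
# Hypothetical example: generate CMakeLists.txt for each listed directory
# that contains a testslist.csv (paths are illustrative).
python3 ${PADDLE_ROOT}/tools/gen_ut_cmakelists.py -d ${PADDLE_ROOT}/test/collective ${PADDLE_ROOT}/test/collective/fleet
```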

### step 4. Build and test
Build paddle and run ctest for the new unit test
6 changes: 3 additions & 3 deletions test/contrib/test_bf16_utils.py
@@ -158,7 +158,7 @@ def test_find_true_post_op_with_search_all(self):

var1 = block.create_var(name="X", shape=[3], dtype='float32')
var2 = block.create_var(name="Y", shape=[3], dtype='float32')
inititializer_op = startup_block._prepend_op(
initializer_op = startup_block._prepend_op(
type="fill_constant",
outputs={"Out": var1},
attrs={"shape": var1.shape, "dtype": var1.dtype, "value": 1.0},
@@ -168,11 +168,11 @@ def test_find_true_post_op_with_search_all(self):
type="abs", inputs={"X": [var1]}, outputs={"Out": [var2]}
)
result = amp.bf16.amp_utils.find_true_post_op(
block.ops, inititializer_op, "X", search_all=False
block.ops, initializer_op, "X", search_all=False
)
assert len(result) == 0
result = amp.bf16.amp_utils.find_true_post_op(
block.ops, inititializer_op, "X", search_all=True
block.ops, initializer_op, "X", search_all=True
)
assert result == [op1]

4 changes: 2 additions & 2 deletions test/contrib/test_image_classification_fp16.py
@@ -181,7 +181,7 @@ def train_loop(main_program):
fetch_list=[scaled_loss, avg_cost],
)
print(
'PassID {:1}, BatchID {:04}, train loss {:2.4}, scaled train closs {:2.4}'.format(
'PassID {:1}, BatchID {:04}, train loss {:2.4}, scaled train loss {:2.4}'.format(
pass_id,
batch_id + 1,
float(loss),
@@ -272,7 +272,7 @@ def infer(use_cuda, save_dirname=None):
] = paddle.static.io.load_inference_model(save_dirname, exe)

# The input's dimension of conv should be 4-D or 5-D.
# Use normilized image pixels as input data, which should be in the range [0, 1.0].
# Use normalized image pixels as input data, which should be in the range [0, 1.0].
batch_size = 1
tensor_img = numpy.random.rand(batch_size, 3, 32, 32).astype("float32")

2 changes: 1 addition & 1 deletion test/cpp/auto_parallel/custom_op_spmd_rule_test.cc
@@ -56,7 +56,7 @@ TEST(CustomOp, Ctor) {
std::vector<CustomSpmdInferAttrArg> attrs = {axis};

auto infered_dist_attrs = forward_spmd_func(infer_inputs, attrs);
// list of tensor => sigle tensor
// list of tensor => single tensor
EXPECT_EQ(infered_dist_attrs.first.size(), static_cast<size_t>(1));
EXPECT_EQ(infered_dist_attrs.second.size(), static_cast<size_t>(1));
EXPECT_TRUE(
22 changes: 11 additions & 11 deletions test/cpp/auto_parallel/spmd_rule_test.cc
@@ -182,7 +182,7 @@ TEST(MatmulSPMDRule, Ctor) {

check_dim_mapping(infered_dist_attrs.first[0], {-1, -1, 0, 1});
check_dim_mapping(infered_dist_attrs.first[1],
{-1, 0}); // confilct and should be changed to [-1, 0]
{-1, 0}); // conflict and should be changed to [-1, 0]
check_dim_mapping(infered_dist_attrs.second[0], {-1, -1, 1, -1});
check_partial_dims(infered_dist_attrs.second[0], {0});

@@ -374,7 +374,7 @@ TEST(MatmulSPMDRuleInferBackward, Ctor) {
auto matmul_spmd_rule =
phi::distributed::SpmdRuleFactory::Instance().GetSpmdRule("matmul");

// TODO(zyc) update in future: propogate the partial in inferbackward
// TODO(zyc) update in future: propagate the partial in inferbackward
// abmn[-1, -1, 1, -1] + partial[0] --> abmk[-1, -1, 1, -1], a1kn[-1, -1, -1,
// -1]
phi::distributed::InferSpmdContext ctx(
@@ -546,8 +546,8 @@ TEST(DefaultDataParallelSPMDRule, Ctor) {
phi::distributed::DistMetaTensor out2(common::make_ddim(out2_shape),
out2_dist_attr);

// 2 inputs 2 outputs, batch axis sharding is propagatd while other axes are
// replicatd call in vector arguments format
// 2 inputs 2 outputs, batch axis sharding is propagated while other axes are
// replicated call in vector arguments format
auto infered_dist_attrs_st =
phi::distributed::DefaultDataParallelInferSpmd({&x, &y}, {&out1, &out2});
// call in variadic arguments format
@@ -677,7 +677,7 @@ TEST(ConcatRule, Ctor) {
// test 1, inputs are aligned according to cost, and partial status is cleared
auto inputs = build_inputs();
auto infered_dist_attrs = phi::distributed::ConcatInferSpmd(inputs, 0);
// list of tensor => sigle tensor
// list of tensor => single tensor
EXPECT_EQ(infered_dist_attrs.first.size(), static_cast<size_t>(1));
EXPECT_EQ(infered_dist_attrs.second.size(), static_cast<size_t>(1));
EXPECT_TRUE(
@@ -734,7 +734,7 @@ TEST(ConcatRule, Ctor) {
// test 2,force replicate along concat axis
inputs = build_inputs();
infered_dist_attrs = phi::distributed::ConcatInferSpmd(inputs, 1);
// list of tensor => sigle tensor
// list of tensor => single tensor
EXPECT_EQ(infered_dist_attrs.first.size(), static_cast<size_t>(1));
EXPECT_EQ(infered_dist_attrs.second.size(), static_cast<size_t>(1));
EXPECT_TRUE(
@@ -792,10 +792,10 @@ TEST(StackRule, Ctor) {
t_dist_attr);
};

// test 1, inputs are aligned according to cost,
// test 1, inputs are aligned according to cost.
auto inputs = build_inputs();
auto infered_dist_attrs = phi::distributed::StackInferSpmd(inputs, 0);
// list of tensor => sigle tensor
// list of tensor => single tensor
EXPECT_EQ(infered_dist_attrs.first.size(), static_cast<size_t>(1));
EXPECT_EQ(infered_dist_attrs.second.size(), static_cast<size_t>(1));
EXPECT_TRUE(
@@ -840,7 +840,7 @@ TEST(StackRule, Ctor) {
// test 2,force replicate along concat axis
inputs = build_inputs();
infered_dist_attrs = phi::distributed::StackInferSpmd(inputs, 1);
// list of tensor => sigle tensor
// list of tensor => single tensor
EXPECT_EQ(infered_dist_attrs.first.size(), static_cast<size_t>(1));
EXPECT_EQ(infered_dist_attrs.second.size(), static_cast<size_t>(1));
EXPECT_TRUE(
@@ -1432,7 +1432,7 @@ TEST(EmbeddingGradInferSpmd, Ctor) {
<< std::endl
<< std::endl;

// indices'rank is greater than 1, x and weight is replicated, out_grad is
// Indices' rank is greater than 1, x and weight is replicated, out_grad is
// sharded along axis 1
x_dist_attr.set_dims_mapping({-1, -1});
w_dist_attr.set_dims_mapping({-1, 1});
@@ -1464,7 +1464,7 @@ TEST(EmbeddingGradInferSpmd, Ctor) {
<< std::endl
<< std::endl;

// Indices's rank equals 1, indices and out_grad is sharded.
// Indices' rank equals 1, indices and out_grad is sharded.
x_shape = {5};
w_shape = {10, 3};
out_grad_shape = {5, 3};
6 changes: 3 additions & 3 deletions test/cpp/cinn/benchmark/test_all_ops_default.cc
@@ -29,7 +29,7 @@ namespace tests {
using cinn::hlir::framework::AttrType;

#define TEST_DEFAULT(op_name__, shape_name__, input_types_, output_types_) \
TEST(op_defualt, shape_name__) { \
TEST(op_default, shape_name__) { \
std::vector<std::vector<int>> input_shapes = shapes_##shape_name__; \
std::string op_name = #op_name__; \
hlir::framework::NodeAttr attrs; \
@@ -44,7 +44,7 @@ using cinn::hlir::framework::AttrType;

#define TEST_DEFAULT1( \
op_name__, shape_name__, input_types_, output_types_, attr_store__) \
TEST(op_defualt1, shape_name__) { \
TEST(op_default1, shape_name__) { \
std::vector<std::vector<int>> input_shapes = shapes_##shape_name__; \
std::string op_name = #op_name__; \
OpBenchmarkTester tester(op_name, input_shapes); \
@@ -60,7 +60,7 @@ using cinn::hlir::framework::AttrType;
}

#define TEST_DEFAULT_INT(op_name__, shape_name__, input_types_, output_types_) \
TEST(op_defualt, shape_name__) { \
TEST(op_default, shape_name__) { \
std::vector<std::vector<int>> input_shapes = shapes_##shape_name__; \
std::string op_name = #op_name__; \
hlir::framework::NodeAttr attrs; \
16 changes: 8 additions & 8 deletions test/cpp/cinn/benchmark/test_matmul.cc
@@ -73,9 +73,9 @@ std::vector<ir::Tensor> MatmulBlockTester::CreateSpecificStrategy(
auto k1 = Var(input_shapes_[0][1], "k1");
CHECK_EQ(input_shapes_.size(), 2U) << "matmul's input shape should be 2.\n";
CHECK_EQ(input_shapes_[0].size(), 2U)
<< "matmul's input teosor's shape should be 2.\n";
<< "matmul's input tensor's shape should be 2.\n";
CHECK_EQ(input_shapes_[1].size(), 2U)
<< "matmul's input teosor's shape should be 2.\n";
<< "matmul's input tensor's shape should be 2.\n";
CHECK_EQ(input_shapes_[0][1], input_shapes_[1][0])
<< "matmul's reduce axis shape should be same\n";
auto C = Compute(
@@ -109,9 +109,9 @@ std::vector<ir::Tensor> MatmulVectorizeTester::CreateSpecificStrategy(
auto k1 = Var(input_shapes_[0][1], "k1");
CHECK_EQ(input_shapes_.size(), 2U) << "matmul's input shape should be 2.\n";
CHECK_EQ(input_shapes_[0].size(), 2U)
<< "matmul's input teosor's shape should be 2.\n";
<< "matmul's input tensor's shape should be 2.\n";
CHECK_EQ(input_shapes_[1].size(), 2U)
<< "matmul's input teosor's shape should be 2.\n";
<< "matmul's input tensor's shape should be 2.\n";
CHECK_EQ(input_shapes_[0][1], input_shapes_[1][0])
<< "matmul's reduce axis shape should be same\n";
auto C = Compute(
@@ -146,9 +146,9 @@ std::vector<ir::Tensor> MatmulLoopPermutationTester::CreateSpecificStrategy(
auto k1 = Var(input_shapes_[0][1], "k1");
CHECK_EQ(input_shapes_.size(), 2U) << "matmul's input shape should be 2.\n";
CHECK_EQ(input_shapes_[0].size(), 2U)
<< "matmul's input teosor's shape should be 2.\n";
<< "matmul's input tensor's shape should be 2.\n";
CHECK_EQ(input_shapes_[1].size(), 2U)
<< "matmul's input teosor's shape should be 2.\n";
<< "matmul's input tensor's shape should be 2.\n";
CHECK_EQ(input_shapes_[0][1], input_shapes_[1][0])
<< "matmul's reduce axis shape should be same\n";
auto C = Compute(
@@ -183,9 +183,9 @@ std::vector<ir::Tensor> MatmulArrayPackingTester::CreateSpecificStrategy(
std::vector<ir::Tensor> outs;
CHECK_EQ(input_shapes_.size(), 2U) << "matmul's input shape should be 2.\n";
CHECK_EQ(input_shapes_[0].size(), 2U)
<< "matmul's input teosor's shape should be 2.\n";
<< "matmul's input tensor's shape should be 2.\n";
CHECK_EQ(input_shapes_[1].size(), 2U)
<< "matmul's input teosor's shape should be 2.\n";
<< "matmul's input tensor's shape should be 2.\n";
CHECK_EQ(input_shapes_[0][1], input_shapes_[1][0])
<< "matmul's reduce axis shape should be same\n";

8 changes: 4 additions & 4 deletions test/cpp/cinn/benchmark/test_utils.cc
@@ -39,9 +39,9 @@ void OpBenchmarkTester::TestOp(const std::string& test_name,
const hlir::framework::NodeAttr& attrs,
const std::vector<Type>& input_types,
const std::vector<Type>& out_types,
bool use_default_stragegy) {
bool use_default_strategy) {
auto module =
CreateCinnModule(input_tensors, attrs, out_types, use_default_stragegy);
CreateCinnModule(input_tensors, attrs, out_types, use_default_strategy);
auto engine = CreateExecutionEngine(module);
auto test_func_ptr =
reinterpret_cast<void (*)(void**, int32_t)>(engine->Lookup(op_name_));
@@ -68,15 +68,15 @@ Module OpBenchmarkTester::CreateCinnModule(
const std::vector<Tensor>& input_tensors,
const hlir::framework::NodeAttr& attrs,
const std::vector<Type>& out_types,
bool use_default_stragegy) {
bool use_default_strategy) {
std::vector<Tensor> outs;
std::vector<Tensor> rets;
poly::StageMap stages;
CHECK(!out_types.empty());
rets = input_tensors;
Module::Builder builder("module_" + op_name_, target_);

if (use_default_stragegy) {
if (use_default_strategy) {
auto strategy =
hlir::framework::Operator::GetAttrs<hlir::framework::StrategyFunction>(
"CINNStrategy");
6 changes: 3 additions & 3 deletions test/cpp/cinn/benchmark/test_utils.h
@@ -47,14 +47,14 @@ class OpBenchmarkTester {
const hlir::framework::NodeAttr &attrs,
const std::vector<Type> &input_types,
const std::vector<Type> &out_types,
bool use_default_stragegy = true);
bool use_default_strategy = true);

virtual Module CreateCinnModule(const std::vector<ir::Tensor> &input_tensors,
const hlir::framework::NodeAttr &attrs,
const std::vector<Type> &out_types,
bool use_default_stragegy = true);
bool use_default_strategy = true);

// should define specific stragey if not use default schedule
// should define specific strategy if not use default schedule
virtual std::vector<ir::Tensor> CreateSpecificStrategy(
const std::vector<ir::Tensor> &inputs, poly::StageMap *stages) {
CINN_NOT_IMPLEMENTED
2 changes: 1 addition & 1 deletion test/cpp/cinn/test02_matmul_case.cc
@@ -172,7 +172,7 @@ TEST(test02, basic) {

// Currently, the execution of a LoweredFunc is scheduled by the outer
// framework, so no need to Call inside another LoweredFunc.
// TODO(Superjomn) Fixit latter.
// TODO(Superjomn) Fix it later.
// TEST_FUNC(matmul_main);

#define TEST_LLVM_MATMUL(test_name, TARGET) \
2 changes: 1 addition & 1 deletion test/cpp/eager/task_tests/generated_test.cc
@@ -58,7 +58,7 @@ TEST(Generated, Sigmoid) {
eager_test::CompareTensorWithValue<float>(output_tensor, 0.5);

std::vector<paddle::Tensor> target_tensors = {output_tensor};
VLOG(6) << "Runing Backward";
VLOG(6) << "Running Backward";
Backward(target_tensors, {});

VLOG(6) << "Finish Backward";
4 changes: 2 additions & 2 deletions test/cpp/eager/task_tests/grad_test.cc
@@ -65,7 +65,7 @@ TEST(Grad, SingleNodeEmptyGrad) {
// Set grad in/out meta
node0_ptr->SetDefaultGradInOutMeta();

// Output_tensor set GradNode、OutRank、StopGradient propertis
// Output_tensor set GradNode、OutRank、StopGradient properties
AutogradMeta* auto_grad_meta = EagerUtils::autograd_meta(&output_tensor);
auto_grad_meta->SetGradNode(
std::dynamic_pointer_cast<GradNodeBase>(node0_ptr));
@@ -80,7 +80,7 @@ TEST(Grad, SingleNodeEmptyGrad) {
auto acc_node_ptr =
std::make_shared<egr::GradNodeAccumulation>(auto_grad_meta1);

// input tensor set GradNode、OutRank、StopGradient propertis
// input tensor set GradNode、OutRank、StopGradient properties
auto_grad_meta1->SetGradNode(
std::dynamic_pointer_cast<GradNodeBase>(acc_node_ptr));
auto_grad_meta1->SetSingleOutRankWithSlot(0, 0);
8 changes: 4 additions & 4 deletions test/cpp/eager/task_tests/hook_test_intermidiate.cc
@@ -95,7 +95,7 @@ void test_sigmoid(bool is_remove_gradient_hook) {
VLOG(6) << "Register ReduceHook for Tensor";
egr_utils_api::RegisterReduceHookForTensor(tensor, reduce_hook);

VLOG(6) << "Runing Forward";
VLOG(6) << "Running Forward";
auto output_tensor = sigmoid_dygraph_function(tensor, {});
VLOG(6) << "Finish Forward";

@@ -108,7 +108,7 @@ void test_sigmoid(bool is_remove_gradient_hook) {
grad_node_tmp->RemoveGradientHook(hook_id);
}

VLOG(6) << "Runing Backward";
VLOG(6) << "Running Backward";
Backward(target_tensors, {});
VLOG(6) << "Finish Backward";

@@ -290,7 +290,7 @@ void test_backward_final_hooks() {
VLOG(6) << "Register Backward Final Hook";
egr_utils_api::RegisterBackwardFinalHook(backward_final_hook);

VLOG(6) << "Runing Forward";
VLOG(6) << "Running Forward";
auto output_tensor = matmul_v2_dygraph_function(
X, Y, {{"trans_x", false}, {"trans_y", false}});
auto res = sigmoid_dygraph_function(output_tensor, {});
@@ -300,7 +300,7 @@ void test_backward_final_hooks() {

std::vector<paddle::Tensor> target_tensors = {output_tensor};

VLOG(6) << "Runing Backward";
VLOG(6) << "Running Backward";
Backward(target_tensors, {});
VLOG(6) << "Finish Backward";
eager_test::CompareTensorWithValue<float>(X, 100.0);
2 changes: 1 addition & 1 deletion test/cpp/fluid/benchmark/op_tester.cc
@@ -181,7 +181,7 @@ void OpTester::CreateInputVarDesc() {
PADDLE_ENFORCE_NOT_NULL(
input,
platform::errors::NotFound(
"The input %s of operator %s is not correctlly provided.",
"The input %s of operator %s is not correctly provided.",
name,
config_.op_type));
