Commits
42 commits
1962a9a
Merge some of the previous interfaces into a branch
Le-soleile Feb 23, 2026
74f284f
Some interfaces that were not committed before
Le-soleile Feb 23, 2026
21d33fc
merge
Le-soleile Feb 23, 2026
1629589
merge
Le-soleile Feb 23, 2026
7d264c0
Merge branch '217' of https://github.com/Le-soleile/Paddle into 217
Le-soleile Feb 23, 2026
0c79448
Add include for squeeze operation
Le-soleile Feb 23, 2026
948edb7
fix
Le-soleile Feb 24, 2026
7679832
fix and add test
Le-soleile Feb 24, 2026
4ba11d2
solve conflicts
Le-soleile Feb 24, 2026
f0d5ab1
fix
Le-soleile Feb 24, 2026
867baff
fix
Le-soleile Feb 24, 2026
a82e739
fix
Le-soleile Feb 24, 2026
27068e2
improve coverage
Le-soleile Feb 25, 2026
ba924ad
fix test
Le-soleile Feb 25, 2026
bf84ee3
fix test
Le-soleile Feb 25, 2026
dbe28a6
fix
Le-soleile Feb 25, 2026
ec0e598
fix index
Le-soleile Feb 25, 2026
ea804dd
fix test
Le-soleile Feb 26, 2026
13a3bf7
fix test
Le-soleile Feb 26, 2026
51f60e2
fix test
Le-soleile Feb 26, 2026
fb366ca
Update paddle/phi/api/include/compat/c10/core/List.h
Le-soleile Feb 27, 2026
4dddbf7
fix
Le-soleile Feb 27, 2026
b5def57
Merge branch '217' of https://github.com/Le-soleile/Paddle into 217
Le-soleile Feb 27, 2026
675e095
fix
Le-soleile Feb 27, 2026
81d1eba
fix error
Le-soleile Feb 28, 2026
75c049f
Add and fix as_strided test
Le-soleile Mar 1, 2026
af2f91f
fix
Le-soleile Mar 1, 2026
92151c3
fix
Le-soleile Mar 1, 2026
7a0665f
fix
Le-soleile Mar 2, 2026
834ef07
Merge branch 'develop' into 217
Le-soleile Mar 2, 2026
fedd418
codestyle after conflict
Le-soleile Mar 2, 2026
3bfcd87
codestyle after conflict
Le-soleile Mar 2, 2026
4a7ed29
fix
Le-soleile Mar 2, 2026
1713cf3
fix
Le-soleile Mar 2, 2026
eec6fc2
Merge branch 'PaddlePaddle:develop' into 217
Le-soleile Mar 2, 2026
6af70c7
fix
Le-soleile Mar 2, 2026
f93f1c5
Merge branch '217' of https://github.com/Le-soleile/Paddle into 217
Le-soleile Mar 2, 2026
7d30522
Merge branch 'develop' into 217
Le-soleile Mar 3, 2026
f6f463e
Improve test coverage
Le-soleile Mar 3, 2026
e7de3b0
Merge branch '217' of https://github.com/Le-soleile/Paddle into 217
Le-soleile Mar 3, 2026
1a3217c
Resolve conflict
Le-soleile Mar 3, 2026
0f33719
Merge branch 'develop' into 217
Le-soleile Mar 3, 2026
3 changes: 3 additions & 0 deletions paddle/phi/api/include/compat/ATen/Functions.h
@@ -17,12 +17,14 @@
#include <ATen/ops/abs.h>
#include <ATen/ops/arange.h>
#include <ATen/ops/cat.h>
#include <ATen/ops/clamp.h>
#include <ATen/ops/empty.h>
#include <ATen/ops/empty_like.h>
#include <ATen/ops/flatten.h>
#include <ATen/ops/from_blob.h>
#include <ATen/ops/full.h>
#include <ATen/ops/index.h>
#include <ATen/ops/index_put.h>
#include <ATen/ops/narrow.h>
#include <ATen/ops/narrow_copy.h>
#include <ATen/ops/ones.h>
@@ -32,6 +34,7 @@
#include <ATen/ops/sparse_coo_tensor.h>
#include <ATen/ops/sparse_csr_tensor.h>
#include <ATen/ops/squeeze.h>
#include <ATen/ops/std.h>
#include <ATen/ops/transpose.h>
#include <ATen/ops/unflatten.h>
#include <ATen/ops/unsqueeze.h>
192 changes: 183 additions & 9 deletions paddle/phi/api/include/compat/ATen/core/TensorBody.h
@@ -31,10 +31,32 @@
#endif

#include <c10/core/Device.h>
#include <c10/core/List.h>
#include <c10/core/ScalarType.h>
#include <c10/core/SymIntArrayRef.h>
#include <limits>
#include <optional>
#include <utility>
#include <vector>
#include "paddle/common/ddim.h"
#include "paddle/phi/common/place.h"

#ifdef PADDLE_WITH_CUDA
#include "paddle/phi/backends/gpu/forwards.h"
Member:
Which function is this actually used by?

Contributor Author:
(screenshot) This function uses cudaStream_t&. forwards.h, line 27: using cudaStream_t = struct CUstream_st *;

Member:
But record_stream wasn't added by this PR, was it?

Contributor Author:
Right, it was not added here.

Member:
Then why does this PR add the include? If it were a problem, things should have broken back when record_stream was first added.

Contributor Author:
Correct, that interface isn't mine; I'll remove forwards.h for now. Since nothing broke without it, it should be fine.

#endif
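The forwards.h pattern quoted in the thread above can be sketched on its own: an opaque forward declaration lets headers mention cudaStream_t without pulling in cuda_runtime.h. The alias below mirrors the definition cited from forwards.h line 27; is_default_stream is a hypothetical helper added only to show a use of the handle.

```cpp
// Opaque handle pattern used by forwards.h (as quoted in the review
// thread): forward-declare the incomplete CUDA stream struct and alias
// the pointer type, so cuda_runtime.h is never needed in this header.
struct CUstream_st;                   // incomplete type; layout never needed
using cudaStream_t = CUstream_st*;    // same alias CUDA's own headers define

// Any function can now accept the handle by value or reference while the
// full CUDA headers stay out of the translation unit.
inline bool is_default_stream(cudaStream_t s) { return s == nullptr; }
```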

namespace at {
class Tensor;

// Type aliases for ATen compatibility
using Scalar = c10::Scalar;
using TensorOptions = c10::TensorOptions;
using MemoryFormat = c10::MemoryFormat;
using IntArrayRef = c10::IntArrayRef;
using OptionalIntArrayRef = c10::OptionalIntArrayRef;
using ScalarType = c10::ScalarType;
} // namespace at

namespace at { // NOLINT(build/namespaces)
using PaddleTensor = paddle::Tensor;
using PaddlePlace = phi::Place;
@@ -236,6 +258,163 @@ class Tensor : public TensorBase {
return *(cpu_tensor.data<T>());
}

// Clamp functions
at::Tensor clamp(
const ::std::optional<at::Scalar>& min,
const ::std::optional<at::Scalar>& max = ::std::nullopt) const;

at::Tensor clamp(const ::std::optional<at::Tensor>& min = {},
const ::std::optional<at::Tensor>& max = {}) const;

at::Tensor& clamp_(
const ::std::optional<at::Scalar>& min,
const ::std::optional<at::Scalar>& max = ::std::nullopt) const;

at::Tensor& clamp_(const ::std::optional<at::Tensor>& min = {},
const ::std::optional<at::Tensor>& max = {}) const;

at::Tensor clamp_max(const at::Scalar& max) const;
at::Tensor clamp_max(const at::Tensor& max) const;
at::Tensor& clamp_max_(const at::Scalar& max) const;
at::Tensor& clamp_max_(const at::Tensor& max) const;

at::Tensor clamp_min(const at::Scalar& min) const;
at::Tensor clamp_min(const at::Tensor& min) const;
at::Tensor& clamp_min_(const at::Scalar& min) const;
at::Tensor& clamp_min_(const at::Tensor& min) const;
Comment on lines +240 to +253
Copilot AI Feb 27, 2026

The inplace methods clamp_, clamp_max_, and clamp_min_ are declared as const methods but modify the internal tensor state. These methods should not be const since they perform in-place modifications. The const_cast usage indicates an incorrect API design. Either remove the const qualifier from these methods or implement them correctly as non-const methods.

Suggested change
const ::std::optional<at::Scalar>& max = ::std::nullopt) const;
at::Tensor& clamp_(const ::std::optional<at::Tensor>& min = {},
const ::std::optional<at::Tensor>& max = {}) const;
at::Tensor clamp_max(const at::Scalar& max) const;
at::Tensor clamp_max(const at::Tensor& max) const;
at::Tensor& clamp_max_(const at::Scalar& max) const;
at::Tensor& clamp_max_(const at::Tensor& max) const;
at::Tensor clamp_min(const at::Scalar& min) const;
at::Tensor clamp_min(const at::Tensor& min) const;
at::Tensor& clamp_min_(const at::Scalar& min) const;
at::Tensor& clamp_min_(const at::Tensor& min) const;
const ::std::optional<at::Scalar>& max = ::std::nullopt);
at::Tensor& clamp_(const ::std::optional<at::Tensor>& min = {},
const ::std::optional<at::Tensor>& max = {});
at::Tensor clamp_max(const at::Scalar& max) const;
at::Tensor clamp_max(const at::Tensor& max) const;
at::Tensor& clamp_max_(const at::Scalar& max);
at::Tensor& clamp_max_(const at::Tensor& max);
at::Tensor clamp_min(const at::Scalar& min) const;
at::Tensor clamp_min(const at::Tensor& min) const;
at::Tensor& clamp_min_(const at::Scalar& min);
at::Tensor& clamp_min_(const at::Tensor& min);

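For reference, the scalar semantics the clamp overloads above mirror, with either bound optional, can be sketched independently of the tensor types. clamp_ref is an illustrative name, not part of the PR:

```cpp
#include <optional>

// Scalar reference for the clamp family: values below `min` are raised
// to `min`, values above `max` are lowered to `max`; an absent bound is
// simply not applied. The min bound is applied first, so when
// min > max, max wins.
inline double clamp_ref(double x,
                        std::optional<double> min,
                        std::optional<double> max) {
  if (min && x < *min) x = *min;
  if (max && x > *max) x = *max;
  return x;
}
```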

// as_strided: Create a tensor view with custom size, stride, and
// storage_offset
at::Tensor as_strided(
at::IntArrayRef size,
at::IntArrayRef stride,
::std::optional<int64_t> storage_offset = ::std::nullopt) const {
auto src_impl = tensor_.impl();
auto* src_tensor =
std::dynamic_pointer_cast<phi::DenseTensor>(src_impl).get();
if (!src_tensor) {
PD_THROW("as_strided: tensor must be a DenseTensor");
}
auto new_tensor = std::make_shared<phi::DenseTensor>();
new_tensor->ShareDataWith(*src_tensor);
std::vector<int64_t> size_vec(size.begin(), size.end());
std::vector<int64_t> stride_vec(stride.begin(), stride.end());
new_tensor->Resize(common::make_ddim(size_vec));
new_tensor->set_strides(common::make_ddim(stride_vec));
int64_t offset = storage_offset.has_value() ? storage_offset.value() : 0;
if (offset != 0) {
auto meta = phi::DenseTensorMeta(new_tensor->meta());
meta.offset = static_cast<size_t>(offset);
new_tensor->set_meta(meta);
}
PaddleTensor result;
result.set_impl(new_tensor);
return Tensor(result);
}
Copilot AI Feb 27, 2026

The as_strided method doesn't validate that the provided size and stride parameters are valid for the tensor's underlying storage. Invalid stride/size combinations could lead to out-of-bounds memory access. Add validation to ensure the strided view doesn't exceed the available storage bounds.


// as_strided_: Inplace version
const at::Tensor& as_strided_(
at::IntArrayRef size,
at::IntArrayRef stride,
::std::optional<int64_t> storage_offset = ::std::nullopt) const {
auto src_impl = tensor_.impl();
auto* src_tensor =
std::dynamic_pointer_cast<phi::DenseTensor>(src_impl).get();
if (!src_tensor) {
PD_THROW("as_strided_: tensor must be a DenseTensor");
}
Copilot AI Feb 27, 2026

The as_strided and as_strided_ methods use PD_THROW for error handling without checking if the tensor is initialized first. If an uninitialized tensor is passed, calling tensor_.impl() could lead to undefined behavior before the check. Add an initialization check before accessing the impl.

std::vector<int64_t> size_vec(size.begin(), size.end());
std::vector<int64_t> stride_vec(stride.begin(), stride.end());
src_tensor->Resize(common::make_ddim(size_vec));
src_tensor->set_strides(common::make_ddim(stride_vec));
int64_t offset = storage_offset.has_value() ? storage_offset.value() : 0;
if (offset != 0) {
auto meta = phi::DenseTensorMeta(src_tensor->meta());
meta.offset = static_cast<size_t>(offset);
src_tensor->set_meta(meta);
}
return *this;
}
Copilot AI Feb 27, 2026

The as_strided_ method is declared as const but modifies the tensor's internal state (size, stride, and metadata). This violates const correctness. The method should either be non-const or should not modify the internal state.

// as_strided_scatter: Scatter src into a strided view
at::Tensor as_strided_scatter(
const at::Tensor& src,
at::IntArrayRef size,
at::IntArrayRef stride,
::std::optional<int64_t> storage_offset = ::std::nullopt) const {
at::Tensor strided_view = as_strided(size, stride, storage_offset);
strided_view.copy_(src);
return strided_view;
Copilot AI Feb 27, 2026

The as_strided_scatter method calls copy_ on a view, but the result is the view itself, not the original tensor. This means the changes are only reflected in the strided view and won't affect the original tensor that the method was called on. The return value should be a modified copy of the original tensor with the scattered values, not just the view.

Suggested change
return strided_view;
// Return the original tensor (now containing the scattered values),
// rather than the strided view.
return *this;

}
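The semantics Copilot flags (as_strided_scatter should return a modified copy of the base storage, leaving the base itself untouched) can be illustrated on plain 1-D storage. as_strided_scatter_1d is a hypothetical stand-in, not the PR's implementation:

```cpp
#include <cstdint>
#include <vector>

// 1-D illustration of scatter-into-copy semantics: duplicate the base
// storage, write src through the strided index map, and return the
// copy. The base argument is never modified.
inline std::vector<float> as_strided_scatter_1d(
    const std::vector<float>& base, const std::vector<float>& src,
    int64_t size, int64_t stride, int64_t storage_offset) {
  std::vector<float> out = base;  // full copy of the underlying storage
  for (int64_t i = 0; i < size; ++i) {
    out[storage_offset + i * stride] = src[i];
  }
  return out;
}
```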
Member:
Shouldn't these as_strided implementations also be moved into the ops directory?

Contributor Author:
Done.


// Standard deviation functions
Tensor std(int dim) const;
Tensor std(bool unbiased = true) const;
Tensor std(at::OptionalIntArrayRef dim,
bool unbiased = true,
bool keepdim = false) const;
Tensor std(at::OptionalIntArrayRef dim,
const ::std::optional<at::Scalar>& correction,
bool keepdim = false) const;
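The unbiased/correction parameters above follow the usual estimator formula, std = sqrt(sum((x_i - mean)^2) / (N - correction)), where correction = 1 gives the Bessel-corrected (unbiased) estimator and correction = 0 the biased one. A plain-vector reference; std_dev is an illustrative helper, not the Paddle kernel:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Reference standard deviation: correction = 1 is the unbiased
// (Bessel-corrected) estimator, correction = 0 the biased one.
inline double std_dev(const std::vector<double>& x, int64_t correction) {
  double mean = 0.0;
  for (double v : x) mean += v;
  mean /= static_cast<double>(x.size());
  double ss = 0.0;  // sum of squared deviations from the mean
  for (double v : x) ss += (v - mean) * (v - mean);
  return std::sqrt(ss / static_cast<double>(x.size() - correction));
}
```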

Tensor tensor_data() const {
PaddleTensor result;
if (tensor_.initialized()) {
auto src_impl = tensor_.impl();
auto* src_tensor =
std::dynamic_pointer_cast<phi::DenseTensor>(src_impl).get();
if (src_tensor && src_tensor->meta().is_contiguous()) {
result.set_impl(std::make_shared<phi::DenseTensor>());
auto* dst_tensor =
std::dynamic_pointer_cast<phi::DenseTensor>(result.impl()).get();
dst_tensor->ShareDataWith(*src_tensor);
} else {
result = paddle::experimental::assign(tensor_);
}
} else {
result = paddle::experimental::assign(tensor_);
}
return Tensor(result);
}

Tensor variable_data() const {
PaddleTensor result;
if (tensor_.initialized()) {
auto src_impl = tensor_.impl();
auto* src_tensor =
std::dynamic_pointer_cast<phi::DenseTensor>(src_impl).get();
if (src_tensor && src_tensor->meta().is_contiguous()) {
result.set_impl(std::make_shared<phi::DenseTensor>());
auto* dst_tensor =
std::dynamic_pointer_cast<phi::DenseTensor>(result.impl()).get();
dst_tensor->ShareDataWith(*src_tensor);
} else {
result = paddle::experimental::assign(tensor_);
}
} else {
result = paddle::experimental::assign(tensor_);
}
return Tensor(result);
}

// index: Get values at specified tensor indices
at::Tensor index(const c10::List<::std::optional<at::Tensor>>& indices) const;

// index_put_: Set values at specified indices in-place
at::Tensor& index_put_(const c10::List<::std::optional<at::Tensor>>& indices,
const at::Tensor& values,
bool accumulate = false) const;

// index_put_: Set scalar value at specified indices in-place
at::Tensor& index_put_(const c10::List<::std::optional<at::Tensor>>& indices,
const at::Scalar& v,
bool accumulate = false) const;

// index_put: Non-inplace version of index_put_
at::Tensor index_put(const c10::List<::std::optional<at::Tensor>>& indices,
const at::Tensor& values,
bool accumulate = false) const;
Comment on lines +329 to +341
Copilot AI Feb 27, 2026

The index_put_ methods are declared as const but modify the tensor state in-place. These methods should not be const since they perform in-place modifications. The const_cast pattern used throughout indicates a fundamental const-correctness issue.


at::Tensor to(
at::ScalarType dtype,
bool non_blocking = false,
@@ -393,7 +572,7 @@
at::Tensor& floor_divide_(const at::Scalar& other) const {
paddle::experimental::floor_divide_(
const_cast<PaddleTensor&>(tensor_),
paddle::experimental::full({}, other, other.dtype()));
paddle::experimental::full({}, other, tensor_.dtype()));
Contributor:
This should be other.dtype.

return const_cast<at::Tensor&>(*this);
}

@@ -432,14 +611,7 @@

at::Tensor& absolute_() const { return abs_(); }

Tensor operator[](int64_t index) const {
return paddle::experimental::slice(tensor_,
/*axes=*/{0},
/*starts=*/{index},
/*ends=*/{index + 1},
/*infer_flags=*/{1},
/*decrease_axis=*/{0});
}
Tensor operator[](int64_t index) const;

#if defined(PADDLE_WITH_CUDA)
void record_stream(const cudaStream_t& stream) const {
@@ -577,7 +749,9 @@
PaddleTensor _PD_GetInner() const { return tensor_; }
PaddleTensor& _PD_GetInner() { return tensor_; }
}; // NOLINT(readability/braces)

} // namespace at

namespace torch {
using at::Tensor;
} // namespace torch