
Feature/orc adapter 2 review #26

Open
cbb330 wants to merge 9 commits into main-original from feature/orc-adapter-2-review

Conversation

@cbb330 (Owner) commented Mar 30, 2026


Rationale for this change

What changes are included in this PR?

Are these changes tested?

Are there any user-facing changes?

This PR includes breaking changes to public APIs. (If there are any breaking changes to public APIs, please explain which changes are breaking. If not, you can remove this.)

This PR contains a "Critical Fix". (If the changes fix either (a) a security vulnerability, (b) a bug that caused incorrect or invalid data to be produced, or (c) a bug that causes a crash (even when the API contract is upheld), please provide explanation. If not, you can remove this.)

cbb330 and others added 9 commits February 25, 2026 17:42
Add ConvertColumnStatistics, BuildSchemaManifest, and related helpers to the ORC adapter, along with comprehensive tests.

- Prevent int64 overflow when converting timestamp millis to nanoseconds
  by checking bounds before multiplication
- Return empty table instead of error when ReadStripes gets empty indices
- Use RecordBatch vector + FromRecordBatches instead of Table + ConcatenateTables
- Inline dynamic_cast in ConvertColumnStatistics for cleaner code
- Fix Date32Scalar constructor to use default type
- Add decimal statistics test and nested schema manifest test
- Add precise timestamp statistics value assertions

Co-Authored-By: Claude Opus 4.6 <[email protected]>
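The overflow guard in the first bullet can be sketched as follows. `MillisToNanos` and its `std::optional` return are illustrative stand-ins, not the adapter's actual API (which presumably reports a `Status` on overflow); the point is checking the bound before multiplying rather than after:

```cpp
#include <cstdint>
#include <limits>
#include <optional>

// Hypothetical helper: convert a millisecond timestamp to nanoseconds,
// returning nullopt instead of silently overflowing int64.
std::optional<int64_t> MillisToNanos(int64_t millis) {
  constexpr int64_t kFactor = 1000000;  // nanoseconds per millisecond
  // Precompute the largest/smallest inputs whose product still fits in int64.
  constexpr int64_t kMax = std::numeric_limits<int64_t>::max() / kFactor;
  constexpr int64_t kMin = std::numeric_limits<int64_t>::min() / kFactor;
  if (millis > kMax || millis < kMin) return std::nullopt;  // would overflow
  return millis * kFactor;  // safe: bounds were checked before multiplying
}
```

Multiplying first and checking the sign afterward is undefined behavior in C++ for signed integers, which is why the bound must be tested up front.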
Add an ORC statistics wrapper that owns liborc statistics lifetimes and centralizes type downcasting behind a stable API, then move scalar materialization into OrcStatisticsAsScalars. Update ORC reader statistics APIs and tests to consume the wrapper boundary.

Made-with: Cursor
Align the ORC reader surface with a container/view style by introducing file and stripe
statistics views and routing column access through those views. This keeps scalar
conversion boundaries intact while removing direct convenience getters from the public API.

Made-with: Cursor
Align ORC adapter statistics surfaces with Parquet-style metadata naming by
hard-replacing file/stripe stats containers and reader entrypoints while
preserving semantics and scalar conversion behavior.

Made-with: Cursor
Introduce a publish-safe ORC metadata hierarchy with FileMetaData, StripeMetaData,
and ColumnMetaData views, and route reader access through a canonical file metadata
entrypoint while preserving statistics conversion semantics.

Made-with: Cursor
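The FileMetaData / StripeMetaData / ColumnMetaData hierarchy described above might look roughly like this in a simplified, dependency-free form. The members and accessors here are illustrative only; the real classes wrap liborc objects and carry statistics:

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Leaf view: per-column metadata (real version exposes Statistics).
class ColumnMetaData {
 public:
  explicit ColumnMetaData(std::int64_t null_count) : null_count_(null_count) {}
  std::int64_t null_count() const { return null_count_; }
 private:
  std::int64_t null_count_;
};

// Mid-level view: one stripe's columns, accessed by index.
class StripeMetaData {
 public:
  explicit StripeMetaData(std::vector<ColumnMetaData> columns)
      : columns_(std::move(columns)) {}
  int num_columns() const { return static_cast<int>(columns_.size()); }
  const ColumnMetaData& column(int i) const {
    return columns_[static_cast<std::size_t>(i)];
  }
 private:
  std::vector<ColumnMetaData> columns_;
};

// Root view: the canonical file-level entrypoint over all stripes.
class FileMetaData {
 public:
  explicit FileMetaData(std::vector<StripeMetaData> stripes)
      : stripes_(std::move(stripes)) {}
  int num_stripes() const { return static_cast<int>(stripes_.size()); }
  const StripeMetaData& stripe(int i) const {
    return stripes_[static_cast<std::size_t>(i)];
  }
 private:
  std::vector<StripeMetaData> stripes_;
};
```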
Restore a single BuildSchemaManifest wrapper and add the missing Impl::GetORCType
helper after restacking to fix adapter build breakages.

Made-with: Cursor
Hard-replace ORC statistics null-count naming, switch scalar conversion to a
Status + out-params signature, and align ORC GetRecordBatchReader ownership
with parquet::arrow::FileReader by returning unique_ptr readers.

Made-with: Cursor
Switch ORC metadata traversal APIs from Result-wrapped values to pointer-style
returns, make Statistics::null_count() non-optional with HasNullCount
precondition semantics, and update adapter tests to match the new contracts.

Made-with: Cursor
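The "non-optional null_count() with HasNullCount precondition semantics" contract can be sketched as a dependency-free stand-in; the class body here is hypothetical and only mirrors the described semantics:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical mirror of the contract: null_count() is only meaningful when
// HasNullCount() is true, so callers must check first rather than unwrapping
// an optional.
class Statistics {
 public:
  Statistics(bool has_null_count, std::int64_t null_count)
      : has_null_count_(has_null_count), null_count_(null_count) {}

  bool HasNullCount() const { return has_null_count_; }

  // Precondition: HasNullCount() == true; enforced here with an assert.
  std::int64_t null_count() const {
    assert(has_null_count_ && "call HasNullCount() before null_count()");
    return null_count_;
  }

 private:
  bool has_null_count_;
  std::int64_t null_count_;
};
```

This trades `std::optional`'s compile-time nudge for a plainer call site, matching the precondition style of comparable metadata accessors.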
@github-actions

Thanks for opening a pull request!

If this is not a minor PR, could you open an issue for this pull request on GitHub? https://github.com/apache/arrow/issues/new/choose

Opening GitHub issues ahead of time contributes to the Openness of the Apache Arrow project.

Then could you also rename the pull request title in the following format?

GH-${GITHUB_ISSUE_ID}: [${COMPONENT}] ${SUMMARY}

or

MINOR: [${COMPONENT}] ${SUMMARY}

See also:

#include "arrow/type.h"
#include "arrow/util/bit_util.h"
#include "arrow/util/checked_cast.h"
#include "arrow/util/decimal.h"
Owner Author

why is decimal.h excluded?

ORC_CATCH_NOT_OK(liborc_reader = createReader(std::move(io_wrapper), options));
pool_ = pool;
reader_ = std::move(liborc_reader);
reader_ = std::shared_ptr<liborc::Reader>(liborc_reader.release());
Owner Author

is this a backwards-incompatible change? what is the implication of this change? i also see unique_ptr was used
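On the ownership question above: a `std::shared_ptr` can be constructed directly from a moved `std::unique_ptr`, so the `release()` form in the diff is equivalent in effect but less idiomatic (the converting constructor also preserves a custom deleter, which `release()` discards). A minimal sketch, with `Reader` as a hypothetical stand-in for `liborc::Reader`:

```cpp
#include <memory>
#include <utility>

struct Reader { int id = 7; };  // hypothetical stand-in for liborc::Reader

// Idiomatic conversion: the shared_ptr takes ownership from the moved
// unique_ptr in one step; no call to release() is needed.
std::shared_ptr<Reader> ToShared(std::unique_ptr<Reader> owned) {
  return std::shared_ptr<Reader>(std::move(owned));
}
```

Whether storing a `shared_ptr` member is backwards compatible depends on the public signatures: internally it only changes ownership semantics (multiple holders may now extend the reader's lifetime), but any API that hands the pointer out changes its contract.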

return ReadBatch(opts, schema, stripes_[static_cast<size_t>(stripe)].num_rows);
}

Result<std::shared_ptr<Table>> ReadStripes(
Owner Author

is this the same signature as parquet's equivalent api?

}

Result<std::shared_ptr<Table>> ReadStripes(
const std::vector<int64_t>& stripe_indices) {
Owner Author

just curious about c++: is there a more convenient / syntactic-sugar carrying type we should be using in modern c++? look at the broader arrow repo to be sure.

batches.push_back(std::move(batch));
}
ARROW_ASSIGN_OR_RAISE(auto schema, ReadSchema());
return Table::FromRecordBatches(schema, std::move(batches));
Owner Author

is this efficient? build batches, then move. hoping that we minimize mem allocations
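On the efficiency question: pushing moved `shared_ptr`s copies pointers, not batch data, and reserving up front bounds the vector's reallocations; building the table from the batch list then references the batch arrays rather than concatenating buffers. A dependency-free sketch with `Batch` standing in for `arrow::RecordBatch`:

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Stand-in for arrow::RecordBatch; the real type holds immutable buffers.
struct Batch { std::string payload; };

// Sketch of the accumulate-then-move pattern: reserve() performs a single
// allocation for the pointer array, and push_back(std::move(...)) transfers
// each shared_ptr without touching the underlying batch data.
std::vector<std::shared_ptr<Batch>> CollectBatches(int n) {
  std::vector<std::shared_ptr<Batch>> batches;
  batches.reserve(static_cast<std::size_t>(n));
  for (int i = 0; i < n; ++i) {
    auto batch = std::make_shared<Batch>(Batch{"stripe data"});
    batches.push_back(std::move(batch));  // moves the pointer, not the payload
  }
  return batches;
}
```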

Comment on lines +104 to +107
Statistics(std::shared_ptr<const ::orc::Statistics> owner,
const ::orc::ColumnStatistics* column_statistics)
: owner_(std::move(owner)), column_statistics_(column_statistics) {}
explicit Statistics(const ::orc::ColumnStatistics* column_statistics)
Owner Author

same as parquet?

explicit Statistics(const ::orc::ColumnStatistics* column_statistics)
: column_statistics_(column_statistics) {}

bool valid() const { return column_statistics_ != nullptr; }
Owner Author

are the next 5 lines the same signature as parquet?

int64_t num_values() const;
bool HasMinMax() const;

const ::orc::ColumnStatistics* raw() const { return column_statistics_; }
Owner Author

i don't understand these entirely. i also don't know much about cpp. explain them to me.
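An annotated, dependency-free stand-in for the quoted declarations, to unpack what each piece does; `ColumnStats` and its contents are hypothetical, standing in for `::orc::ColumnStatistics`:

```cpp
#include <cstdint>

struct ColumnStats { std::int64_t values = 42; };  // hypothetical liborc stand-in

class Statistics {
 public:
  // "explicit" forbids silently converting a raw pointer into a Statistics.
  explicit Statistics(const ColumnStats* column_statistics)
      : column_statistics_(column_statistics) {}

  // Trailing "const" promises the call does not modify this object.
  bool valid() const { return column_statistics_ != nullptr; }
  std::int64_t num_values() const { return column_statistics_->values; }

  // Escape hatch to the underlying library-level object. Returns the raw
  // pointer as-is: no ownership transfers, and the caller must not delete it.
  const ColumnStats* raw() const { return column_statistics_; }

 private:
  const ColumnStats* column_statistics_;  // non-owning; lifetime managed elsewhere
};
```

So `num_values()` and friends are read-only accessors, and `raw()` exists for callers that need the liborc type directly while the wrapper keeps managing lifetime.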

};

/// \brief Column-level metadata container exposing statistics.
class ARROW_EXPORT ColumnMetaData {
Owner Author

is this implemented in the same way as parquet (signature, name, implementation, attributes, types, etc.)?

Statistics statistics_;
};

ARROW_EXPORT Status StatisticsAsScalars(const Statistics& statistics,
Owner Author

same as parquet?
