Conversation

@andygrove
Member

@andygrove andygrove commented Oct 2, 2024

Which issue does this PR close?

Part of #949

Rationale for this change

Provide better documentation for tuning memory usage.

Rendered version of this PR

What changes are included in this PR?

How are these changes tested?

Comment on lines +60 to +61
--conf spark.memory.offHeap.enabled=true \
--conf spark.memory.offHeap.size=10g \
Member Author

enable unified memory management

@andygrove andygrove marked this pull request as ready for review October 3, 2024 14:44
@andygrove andygrove requested review from comphead and viirya October 3, 2024 14:49
@andygrove
Member Author

@Kontinuation Could you review?

Member

@Kontinuation Kontinuation left a comment


LGTM

Comment on lines +50 to +51
For example, if the executor can execute 4 plans concurrently, then the total amount of memory allocated will be
`4 * spark.comet.memory.overhead.factor * spark.executor.memory`.
Member

@Kontinuation Kontinuation Oct 3, 2024


AFAIK this is a simplified, sometimes inaccurate model for estimating the amount of memory allocated, since each stage may create multiple Comet native plans (see point 3 and the example DAG in #949), but I think it is good enough for most of the cases.
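The simplified model above can be sketched in a few lines. The 16 GB executor size, 0.2 overhead factor, and 4 concurrent plans below are illustrative values for the sketch, not recommendations:

```python
# Sketch of the simplified memory estimation model discussed above:
# total native memory ~= concurrent plans * overhead factor * executor memory.
# All input values here are hypothetical.

def comet_native_memory_bytes(executor_memory_gb, overhead_factor, concurrent_plans):
    """Estimate total native memory allocated across concurrent plans."""
    executor_memory = executor_memory_gb * 1024**3
    return int(concurrent_plans * overhead_factor * executor_memory)

# e.g. 4 concurrent plans, overhead factor 0.2, 16 GB executors
estimate = comet_native_memory_bytes(16, 0.2, 4)
print(round(estimate / 1024**3, 1))  # 12.8 (GB)
```

As the comment notes, a stage may create multiple native plans, so the real allocation can exceed this estimate.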


Setting `spark.comet.exec.shuffle.mode` to `auto` will let Comet choose the best shuffle mode based on the query plan.
`CometScanExec` uses nanoseconds for total scan time. Spark also measures scan time in nanoseconds but converts to
milliseconds _per batch_ which can result in a large loss of precision. In one case we saw total scan time
Contributor


We can probably get away without exact numbers, and just highlight that the loss of precision can be as much as 2x?

Member Author


I made this change, but it looks like I failed to push it before merging the PR. I will address it in my next PR.
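The precision loss under discussion can be illustrated with a quick sketch (not Comet's or Spark's actual code): truncating each batch's scan time to whole milliseconds before summing can discard nearly all of the total when individual batches take less than a millisecond. The batch count and per-batch time below are made up:

```python
# Hypothetical workload: 1000 batches, 0.4 ms (400,000 ns) of scan time each.
batch_times_ns = [400_000] * 1000

# Summing in nanoseconds preserves the total.
total_ns = sum(batch_times_ns)

# Converting to milliseconds per batch truncates each 0.4 ms batch to 0 ms.
total_ms_per_batch = sum(ns // 1_000_000 for ns in batch_times_ns)

print(total_ns / 1_000_000)  # 400.0 (ms, accurate)
print(total_ms_per_batch)    # 0 (ms, all precision lost)
```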

Contributor

@comphead comphead left a comment


It mostly looks good to me.
I don't remember exactly why the unified manager was not enabled by default. There was a tricky edge case: if Spark decides to abort native plans, it needs more JVM memory than native memory, and if that happens Spark jobs will fail with OOM because pod memory is occupied by native buffers that are no longer in use.

@sunchao if you could chime in on this matter?

@andygrove
Member Author

Thanks for the reviews @Kontinuation @comphead @viirya.

@andygrove andygrove merged commit 3413397 into apache:main Oct 3, 2024
@andygrove andygrove deleted the update-tuning-guide branch October 3, 2024 17:07
@sunchao
Member

sunchao commented Oct 3, 2024

I don't remember any issue related to off-heap memory mode itself, just that all the memory-related configurations need to be carefully tuned. For instance, we may still need to reserve some JVM memory for certain operations (like broadcast?).

One thing I was trying to do is to hide all these configuration changes behind the Comet driver plugin, so that when a user enables Comet, the existing job configurations, including spark.executor.memory, spark.executor.memoryOverhead, etc., would be converted to off-heap memory transparently. This would require some Spark-side changes, such as https://issues.apache.org/jira/browse/SPARK-46947. There is one more change I made internally that uses Java reflection to overwrite certain configs in the Spark memory manager, because of early initialization in Spark.
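As a rough illustration of the transparent conversion described above, here is a sketch of mapping a job's existing on-heap settings to off-heap ones. The helper name and the 75/25 split are assumptions for the sketch, not Comet's or Spark's actual behavior:

```python
# Hypothetical sketch: move a fraction of executor memory off-heap for
# native execution and keep the remainder for the JVM. The split ratio
# is an arbitrary illustrative choice.

def to_offheap_conf(executor_memory_gb, memory_overhead_gb, offheap_fraction=0.75):
    """Derive off-heap settings from a job's existing memory configuration."""
    offheap_gb = int(executor_memory_gb * offheap_fraction)
    onheap_gb = executor_memory_gb - offheap_gb
    return {
        "spark.memory.offHeap.enabled": "true",
        "spark.memory.offHeap.size": f"{offheap_gb}g",
        "spark.executor.memory": f"{onheap_gb}g",
        "spark.executor.memoryOverhead": f"{memory_overhead_gb}g",
    }

# A job originally configured with 16 GB executor memory and 4 GB overhead
print(to_offheap_conf(16, 4))
```

A real implementation would also need the Spark-side hooks mentioned above, since some of these settings are read early in executor startup.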

coderfender pushed a commit to coderfender/datafusion-comet that referenced this pull request Dec 13, 2025
## Which issue does this PR close?

<!--
We generally require a GitHub issue to be filed for all bug fixes and
enhancements and this helps us generate change logs for our releases.
You can link an issue to this PR using the GitHub syntax. For example
`Closes apache#123` indicates that this PR will close issue apache#123.
-->

Closes #.

## Rationale for this change

<!--
Why are you proposing this change? If this is already explained clearly
in the issue then this section is not needed.
Explaining clearly why changes are proposed helps reviewers understand
your changes and offer better suggestions for fixes.
-->

## What changes are included in this PR?

<!--
There is no need to duplicate the description in the issue here but it
is sometimes worth providing a summary of the individual changes in this
PR.
-->

```
cb3e977 perf: Add experimental feature to replace SortMergeJoin with ShuffledHashJoin (apache#1007)
3df9d5c fix: Make comet-git-info.properties optional (apache#1027)
4033687 chore: Reserve memory for native shuffle writer per partition (apache#1022)
bd541d6 (public/main) remove hard-coded version number from Dockerfile (apache#1025)
e3ac6cf feat: Implement bloom_filter_agg (apache#987)
8d097d5 (origin/main) chore: Revert "chore: Reserve memory for native shuffle writer per partition (apache#988)" (apache#1020)
591f45a chore: Bump arrow-rs to 53.1.0 and datafusion (apache#1001)
e146cfa chore: Reserve memory for native shuffle writer per partition (apache#988)
abd9f85 fix: Fallback to Spark if named_struct contains duplicate field names (apache#1016)
22613e9 remove legacy comet-spark-shell (apache#1013)
d40c802 clarify that Maven central only has jars for Linux (apache#1009)
837c256 docs: Various documentation improvements (apache#1005)
0667c60 chore: Make parquet reader options Comet options instead of Hadoop options (apache#968)
0028f1e fix: Fallback to Spark if scan has meta columns (apache#997)
b131cc3 feat: Support `GetArrayStructFields` expression (apache#993)
3413397 docs: Update tuning guide (apache#995)
afd28b9 Quality of life fixes for easier hacking (apache#982)
18150fb chore: Don't transform the HashAggregate to CometHashAggregate if Comet shuffle is disabled (apache#991)
a1599e2 chore: Update for 0.3.0 release, prepare for 0.4.0 development (apache#970)
```

## How are these changes tested?

<!--
We typically require tests for all PRs in order to:
1. Prevent the code from being accidentally broken by subsequent changes
2. Serve as another way to document the expected behavior of the code

If tests are not included in your PR, please explain why (for example,
are they covered by existing tests)?
-->