docs: Update tuning guide #995
Conversation
> `--conf spark.memory.offHeap.enabled=true \`
> `--conf spark.memory.offHeap.size=10g \`
enable unified memory management
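For readers following along, here is a minimal sketch of what enabling the quoted settings on a Comet job could look like. The jar path, memory sizes, and application jar are illustrative placeholders, not values from this PR:

```shell
# Illustrative sketch only: enable Spark off-heap memory (the basis for the
# "unified memory management" suggestion above) for a Comet-enabled job.
spark-submit \
  --jars /path/to/comet-spark.jar \
  --conf spark.plugins=org.apache.spark.CometPlugin \
  --conf spark.comet.enabled=true \
  --conf spark.comet.exec.enabled=true \
  --conf spark.executor.memory=8g \
  --conf spark.memory.offHeap.enabled=true \
  --conf spark.memory.offHeap.size=10g \
  your-spark-app.jar
```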
@Kontinuation Could you review?
Kontinuation left a comment
LGTM
> For example, if the executor can execute 4 plans concurrently, then the total amount of memory allocated will be `4 * spark.comet.memory.overhead.factor * spark.executor.memory`.
AFAIK this is a simplified, sometimes inaccurate model for estimating the amount of memory allocated, since each stage may create multiple Comet native plans (see point 3 and the example DAG in #949), but I think it is good enough for most of the cases.
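To make the quoted formula concrete with purely illustrative numbers (not taken from this PR): with `spark.executor.memory=16g`, `spark.comet.memory.overhead.factor=0.2`, and 4 concurrently executing plans, the model estimates `4 * 0.2 * 16 GB = 12.8 GB` of additional memory per executor. As noted above, the real figure can differ when a stage creates more than one Comet native plan.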
> `spark.comet.exec.shuffle.mode` to `auto` will let Comet choose the best shuffle mode based on the query plan.
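As a small illustration of that setting, in the same style as the guide's configuration snippets (this fragment assumes Comet shuffle is otherwise set up as described in the Comet installation docs):

```shell
# Illustrative fragment: ask Comet to choose native or JVM shuffle per query plan.
  --conf spark.comet.exec.shuffle.enabled=true \
  --conf spark.comet.exec.shuffle.mode=auto \
```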
> `CometScanExec` uses nanoseconds for total scan time. Spark also measures scan time in nanoseconds but converts to milliseconds _per batch_ which can result in a large loss of precision. In one case we saw total scan time …
We can probably get away without exact numbers and just highlight that the loss of precision can be as much as 2x?
I made this change but it looks like I failed to push it before merging the PR. I will address it in my next PR.
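To illustrate how large the truncation can be, with hypothetical numbers: if each of 1,000 batches takes 1.9 ms to scan, nanosecond accounting reports about 1.9 s of total scan time, while a per-batch conversion that rounds down to whole milliseconds reports 1,000 × 1 ms = 1.0 s, roughly half the true value, and batches that finish in under 1 ms would contribute nothing at all.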
It mostly looks good to me.
I don't remember exactly why the unified memory manager was not enabled by default. There was a tricky edge case: if Spark decides to abort native plans, it then needs more JVM memory than native memory, and when that happens Spark jobs can fail with OOM because pod memory is occupied by native buffers that are no longer in use.
@sunchao could you chime in on this matter?
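For context on the sizing involved, with purely illustrative numbers: on Kubernetes or YARN the executor pod/container request is roughly `spark.executor.memory + spark.executor.memoryOverhead + spark.memory.offHeap.size`, so with `8g + 2g + 10g` the pod gets about 20 GB. If execution falls back to the JVM, only the 8 GB heap is available while the 10 GB off-heap reservation sits largely idle, which is the OOM scenario described above.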
Thanks for the reviews @Kontinuation @comphead @viirya.
I don't remember any issue related to off-heap memory mode itself, just that all the memory-related configurations need to be carefully tuned. For instance we may still need to reserve some JVM memory for certain operations (like broadcast?). One thing I was trying to do is to hide all these configuration changes behind the Comet driver plugin, so that when a user enables Comet, the existing job configurations, including …
Which issue does this PR close?
Closes #.
Rationale for this change
What changes are included in this PR?
```
cb3e977 perf: Add experimental feature to replace SortMergeJoin with ShuffledHashJoin (apache#1007)
3df9d5c fix: Make comet-git-info.properties optional (apache#1027)
4033687 chore: Reserve memory for native shuffle writer per partition (apache#1022)
bd541d6 (public/main) remove hard-coded version number from Dockerfile (apache#1025)
e3ac6cf feat: Implement bloom_filter_agg (apache#987)
8d097d5 (origin/main) chore: Revert "chore: Reserve memory for native shuffle writer per partition (apache#988)" (apache#1020)
591f45a chore: Bump arrow-rs to 53.1.0 and datafusion (apache#1001)
e146cfa chore: Reserve memory for native shuffle writer per partition (apache#988)
abd9f85 fix: Fallback to Spark if named_struct contains duplicate field names (apache#1016)
22613e9 remove legacy comet-spark-shell (apache#1013)
d40c802 clarify that Maven central only has jars for Linux (apache#1009)
837c256 docs: Various documentation improvements (apache#1005)
0667c60 chore: Make parquet reader options Comet options instead of Hadoop options (apache#968)
0028f1e fix: Fallback to Spark if scan has meta columns (apache#997)
b131cc3 feat: Support `GetArrayStructFields` expression (apache#993)
3413397 docs: Update tuning guide (apache#995)
afd28b9 Quality of life fixes for easier hacking (apache#982)
18150fb chore: Don't transform the HashAggregate to CometHashAggregate if Comet shuffle is disabled (apache#991)
a1599e2 chore: Update for 0.3.0 release, prepare for 0.4.0 development (apache#970)
```
How are these changes tested?
Which issue does this PR close?
Part of #949
Rationale for this change
Provide better documentation for tuning memory usage.
Rendered version of this PR
What changes are included in this PR?
How are these changes tested?