chore: Reserve memory for native shuffle writer per partition #1022
Conversation
…r per partition (apache#988)" (apache#1020)" This reverts commit 8d097d5.
I am testing this PR out now with benchmarks.
I am testing with TPC-H sf=100. I usually test with one executor and 8 cores, but with this PR I can only run with a single core. I tried with 2 cores with this config: The job fails with:

I will try it with sf=100.
/// The difference in memory usage after appending rows
MemDiff(Result<isize>),
Will this always be an increase in memory? Should this use usize?
It could be a decrease too, if a flush happens.
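To illustrate why the delta is a signed `isize` rather than `usize`: appending rows normally grows the builders, but a flush releases their buffers, so the reported difference can be negative. A minimal sketch (the `mem_diff` helper is hypothetical, not Comet's actual code):

```rust
// Hypothetical sketch: the memory delta after appending rows is signed,
// because a flush can shrink the builders' footprint.

fn mem_diff(before: usize, after: usize) -> isize {
    after as isize - before as isize
}

fn main() {
    // Appending rows grows the builders: positive delta.
    assert_eq!(mem_diff(1024, 4096), 3072);
    // A flush releases builder memory: negative delta.
    assert_eq!(mem_diff(4096, 0), -4096);
}
```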
// Cannot allocate enough memory for the array builders in the partition,
// spill partitions and retry.
self.spill().await?;
self.reservation.free();
I forgot to free the memory reservation in the previous commit.
@andygrove Could you try running the benchmarks again? Thanks.
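The spill-and-retry pattern in the quoted diff can be sketched as follows. Note this uses a toy `Reservation` type standing in for DataFusion's memory reservation; the method names model `try_grow`/`free` but this is an illustration, not the real API:

```rust
// Hypothetical sketch of spill-and-retry with an explicit free.
// Without the free() after spilling, the memory released by the spill
// would stay counted against the pool and the retry could still fail.

struct Reservation {
    used: usize,
    limit: usize,
}

impl Reservation {
    fn try_grow(&mut self, bytes: usize) -> Result<(), String> {
        if self.used + bytes > self.limit {
            Err(format!("cannot reserve {bytes} bytes"))
        } else {
            self.used += bytes;
            Ok(())
        }
    }

    // Release everything this writer reserved.
    fn free(&mut self) {
        self.used = 0;
    }
}

fn reserve_with_spill(res: &mut Reservation, needed: usize) -> Result<(), String> {
    if res.try_grow(needed).is_err() {
        // Spill partitions to disk (omitted here), then free the
        // reservation and retry the allocation.
        res.free();
        res.try_grow(needed)?;
    }
    Ok(())
}

fn main() {
    let mut res = Reservation { used: 900, limit: 1000 };
    // First attempt fails (900 + 200 > 1000); after spill + free it succeeds.
    assert!(reserve_with_spill(&mut res, 200).is_ok());
    assert_eq!(res.used, 200);
}
```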
I no longer see the memory error, but there seems to be a significant performance regression. TPC-H q2 used to take 12 seconds and is now taking many minutes. I do not see spilling happening in the Spark UI. I am going to add some debug logging to try to understand what is happening.
In native metrics, I do see excessive spilling:
spill_count=8, spilled_bytes=19441254400, data_size=877436
Okay. I will look into it further.
I guess that is because we previously used some memory silently without reporting it to the reservation, such as the memory held by the array builders; now we account for it. So under the same memory settings, you are more likely to hit the memory pool limit. Have you tried increasing the Comet memory, e.g. spark.comet.memoryOverhead?
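To give a feel for how much "silent" builder memory this accounts for, here is a back-of-the-envelope sketch. The sizing formula and numbers are illustrative assumptions, not Comet's actual accounting:

```rust
// Hypothetical estimate of memory held by a pre-allocated array builder:
// a fixed-width data buffer plus a validity bitmap (1 bit per slot).

fn builder_memory(capacity: usize, value_width: usize) -> usize {
    capacity * value_width + (capacity + 7) / 8
}

fn main() {
    // An 8192-slot Int64 builder: 64 KiB of data plus 1 KiB of validity bits.
    let per_builder = builder_memory(8192, 8);
    assert_eq!(per_builder, 66_560);

    // With 200 shuffle partitions each holding a builder, the previously
    // unreported usage exceeds 13 MB per column, which is why reporting it
    // makes the memory pool limit easier to hit under the same settings.
    let total = per_builder * 200;
    assert!(total > 13_000_000);
}
```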
I tried setting the overhead:
--conf spark.executor.instances=1 \
--conf spark.executor.memory=16G \
--conf spark.executor.cores=8 \
--conf spark.cores.max=8 \
--conf spark.memory.offHeap.enabled=true \
--conf spark.memory.offHeap.size=20g \
--conf spark.comet.memoryOverhead=16g \
This did not help with performance:
Query 2 took 352.05119466781616 seconds
With the latest commit, I ran TPC-H sf=100 locally and didn't see the regression anymore. Can you verify it?
Looks good!
Query 2 took 12.09787917137146 seconds
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@            Coverage Diff             @@
##              main    #1022      +/-  ##
============================================
+ Coverage     34.30%   34.43%   +0.13%
- Complexity      887      898      +11
============================================
  Files           112      112
  Lines         43429    43538     +109
  Branches       9623     9660      +37
============================================
+ Hits          14897    14994      +97
- Misses        25473    25479       +6
- Partials       3059     3065       +6
```

View full report in Codecov by Sentry.
andygrove left a comment
Thanks @viirya
Thanks @andygrove
## Which issue does this PR close?

Closes #.

## Rationale for this change

## What changes are included in this PR?

```
cb3e977 perf: Add experimental feature to replace SortMergeJoin with ShuffledHashJoin (apache#1007)
3df9d5c fix: Make comet-git-info.properties optional (apache#1027)
4033687 chore: Reserve memory for native shuffle writer per partition (apache#1022)
bd541d6 (public/main) remove hard-coded version number from Dockerfile (apache#1025)
e3ac6cf feat: Implement bloom_filter_agg (apache#987)
8d097d5 (origin/main) chore: Revert "chore: Reserve memory for native shuffle writer per partition (apache#988)" (apache#1020)
591f45a chore: Bump arrow-rs to 53.1.0 and datafusion (apache#1001)
e146cfa chore: Reserve memory for native shuffle writer per partition (apache#988)
abd9f85 fix: Fallback to Spark if named_struct contains duplicate field names (apache#1016)
22613e9 remove legacy comet-spark-shell (apache#1013)
d40c802 clarify that Maven central only has jars for Linux (apache#1009)
837c256 docs: Various documentation improvements (apache#1005)
0667c60 chore: Make parquet reader options Comet options instead of Hadoop options (apache#968)
0028f1e fix: Fallback to Spark if scan has meta columns (apache#997)
b131cc3 feat: Support `GetArrayStructFields` expression (apache#993)
3413397 docs: Update tuning guide (apache#995)
afd28b9 Quality of life fixes for easier hacking (apache#982)
18150fb chore: Don't transform the HashAggregate to CometHashAggregate if Comet shuffle is disabled (apache#991)
a1599e2 chore: Update for 0.3.0 release, prepare for 0.4.0 development (apache#970)
```

## How are these changes tested?
Which issue does this PR close?
Closes #1019.
Rationale for this change
This restores the patch merged in #988. That patch caused issue #1019; this PR includes a fix for it.
What changes are included in this PR?
How are these changes tested?
Manually ran the TPC-H benchmark locally.