chore: Reserve memory for native shuffle writer per partition #988
Conversation
I've copied the tests from my branch to this PR and the test hangs. It is possibly caused by deadlocking on:
Thanks. I know the cause of the deadlocks. I'm going to revamp some of the code.
Codecov Report: All modified and coverable lines are covered by tests ✅

@@             Coverage Diff              @@
##              main     #988      +/-   ##
============================================
- Coverage     34.03%   33.97%    -0.07%
+ Complexity      875      857       -18
============================================
  Files           112      112
  Lines         43289    43426      +137
  Branches       9572     9622       +50
============================================
+ Hits          14734    14752       +18
- Misses        25521    25630      +109
- Partials       3034     3044       +10

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Hmm, these large-partition-number shuffle tests fail on MacOS runners only, and there is no stack trace. I cannot reproduce it locally.
Okay, it is the error I expected before. But I had increased it by
#[test]
#[cfg_attr(miri, ignore)] // miri can't call foreign function `ZSTD_createCCtx`
#[cfg(not(target_os = "macos"))] // Github MacOS runner fails with "Too many open files".
These tests fail on MacOS runners with a "Too many open files" error, and raising ulimit does not help either.
I skip them on MacOS runners; we still have Ubuntu runners to cover them.
The test shuffle_write_test(10000, 10, 200, Some(10 * 1024 * 1024)) spilled 1700 times; that is far too frequent for data of this size. It seems the excessive spilling is inevitable if we reserve the full batch capacity for the Arrow builders.
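To put the numbers in perspective, here is a rough back-of-the-envelope sketch; the 8192-row batch size and ~32 bytes per row are assumptions for illustration, not figures taken from the test:

```rust
fn main() {
    // Why reserving full batch capacity per partition overwhelms a small pool.
    // The 8192-row batch and ~32 bytes/row below are assumed for illustration.
    let partitions: u64 = 200; // output partitions in shuffle_write_test
    let full_batch_bytes: u64 = 8192 * 32; // ~256 KiB reserved per partition builder
    let pool_bytes: u64 = 10 * 1024 * 1024; // the 10 MiB limit passed to the test

    let total_requested = partitions * full_batch_bytes; // ~50 MiB
    let partitions_that_fit = pool_bytes / full_batch_bytes; // ~40

    println!(
        "requested ~{} MiB against a {} MiB pool; only ~{} of {} partitions fit before spilling",
        total_requested / (1024 * 1024),
        pool_bytes / (1024 * 1024),
        partitions_that_fit,
        partitions
    );
}
```

Under these assumptions the builders ask for roughly five times the pool size, so most partitions hit the limit and spill.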
This PR seems like an important improvement because it now uses the memory pool features. Perhaps we can follow up with optimizations to reduce spilling. wdyt @Kontinuation?
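For context, a minimal sketch of the DataFusion memory-pool pattern being referred to, assuming a per-partition consumer that spills when `try_grow` is refused; the `PartitionWriter` type, its `spill` behavior, and the sizes are illustrative, not the actual Comet code:

```rust
use std::sync::Arc;

use datafusion::common::Result;
use datafusion::execution::memory_pool::{
    GreedyMemoryPool, MemoryConsumer, MemoryPool, MemoryReservation,
};

/// Illustrative writer state: one reservation tracking the bytes buffered for a partition.
struct PartitionWriter {
    reservation: MemoryReservation,
    buffered_bytes: usize,
}

impl PartitionWriter {
    fn new(pool: &Arc<dyn MemoryPool>, partition: usize) -> Self {
        // Register a named consumer with the shared pool; this is the memory-pool
        // API mentioned in the comment above.
        let reservation =
            MemoryConsumer::new(format!("shuffle-writer-partition-{partition}")).register(pool);
        Self { reservation, buffered_bytes: 0 }
    }

    /// Account for `bytes` of newly buffered rows; spill when the pool refuses to grow.
    fn buffer_rows(&mut self, bytes: usize) -> Result<()> {
        if self.reservation.try_grow(bytes).is_err() {
            self.spill(); // hypothetical: flush buffered rows to a spill file, then retry
            self.reservation.try_grow(bytes)?;
        }
        self.buffered_bytes += bytes;
        Ok(())
    }

    fn spill(&mut self) {
        // A real implementation would write the buffered batches to disk here.
        self.buffered_bytes = 0;
        self.reservation.free();
    }
}

fn main() -> Result<()> {
    let pool: Arc<dyn MemoryPool> = Arc::new(GreedyMemoryPool::new(1024 * 1024)); // 1 MiB pool
    let mut writer = PartitionWriter::new(&pool, 0);
    for _ in 0..64 {
        writer.buffer_rows(64 * 1024)?; // 64 KiB increments force a few spills
    }
    println!("buffered {} bytes since the last spill", writer.buffered_bytes);
    Ok(())
}
```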
Sure. Let's merge this.
I'm also considering adding a native sort-based shuffle writer that works better with constrained resources.
We discussed supporting sort-based shuffle in the native shuffle writer, similar to Spark's shuffle, early in development. So I think it is on our roadmap, though it is not urgent at the moment.
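For the record, a purely illustrative sketch of the sort-based idea (the types here are hypothetical, not a design for Comet): keep one shared buffer of (partition id, row) pairs instead of a builder per partition, sort it by partition id when it fills up, and write or spill the contiguous per-partition runs, so memory is bounded by the buffer size rather than by the partition count.

```rust
/// Hypothetical sort-based shuffle buffer: rows are tagged with their partition id,
/// and a full buffer is sorted so each partition's rows become one contiguous run.
struct SortShuffleBuffer<T> {
    rows: Vec<(u32, T)>, // (partition id, row payload)
    capacity: usize,     // max rows to hold before flushing a sorted run
}

impl<T> SortShuffleBuffer<T> {
    fn new(capacity: usize) -> Self {
        Self { rows: Vec::with_capacity(capacity), capacity }
    }

    /// Insert a row; returns a sorted run to flush when the buffer is full.
    fn insert(&mut self, partition: u32, row: T) -> Option<Vec<(u32, T)>> {
        self.rows.push((partition, row));
        if self.rows.len() >= self.capacity {
            Some(self.flush())
        } else {
            None
        }
    }

    /// Sort buffered rows by partition id; spilled runs can later be merged per partition.
    fn flush(&mut self) -> Vec<(u32, T)> {
        let mut run = std::mem::take(&mut self.rows);
        run.sort_by_key(|(p, _)| *p);
        run
    }
}

fn main() {
    let mut buffer = SortShuffleBuffer::new(4);
    for (partition, value) in [(2u32, "a"), (0, "b"), (2, "c"), (1, "d")] {
        if let Some(run) = buffer.insert(partition, value) {
            // A real writer would append each contiguous partition slice to its output.
            println!("sorted run: {run:?}");
        }
    }
}
```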
I'm testing this PR out now, in conjunction with some other PRs, because I currently have a reproducible deadlock that, as far as I can tell, is caused by memory pool issues.
andygrove left a comment:
Thanks @viirya
Thanks @andygrove @Kontinuation
Which issue does this PR close?
Closes #887.
Rationale for this change
What changes are included in this PR?
How are these changes tested?