[SPARK-26024][SQL]: Update documentation for repartitionByRange #23025
Conversation
 * When no explicit sort order is specified, "ascending nulls first" is assumed.
 * Note, the rows are not sorted in each partition of the resulting Dataset.
 *
 * [SPARK-26024] Note that due to performance reasons this method uses sampling to
We can drop [SPARK-26024] here.
Thanks. Done.
 * [SPARK-26024] Note that due to performance reasons this method uses sampling to
 * estimate the ranges. Hence, the output may not be consistent, since sampling can return
 * different values. The sample size can be controlled by setting the value of the parameter
 * {{spark.sql.execution.rangeExchange.sampleSizePerPartition}}.
`spark.sql.execution.rangeExchange.sampleSizePerPartition`.
Thanks. Done.
python/pyspark/sql/dataframe.py
Outdated
At least one partition-by expression must be specified.
When no explicit sort order is specified, "ascending nulls first" is assumed.
[SPARK-26024] Note that due to performance reasons this method uses sampling to
"[SPARK-26024]" can be removed too.
 * Note, the rows are not sorted in each partition of the resulting Dataset.
 *
 *
 * [SPARK-26024] Note that due to performance reasons this method uses sampling to
ditto.
cc @cloud-fan

@viirya OK all references to SPARK-26024 removed from the doc.

ok to test
 * Note that due to performance reasons this method uses sampling to estimate the ranges.
 * Hence, the output may not be consistent, since sampling can return different values.
 * The sample size can be controlled by setting the value of the parameter
 * `spark.sql.execution.rangeExchange.sampleSizePerPartition`.
It's not a parameter but a config. So I'd like to propose
The sample size can be controlled by the config `xxx`
@cloud-fan the sentence has been changed according to your suggestion (in both Spark & PySpark).
Test build #98987 has finished for PR 23025 at commit
At least one partition-by expression must be specified.
When no explicit sort order is specified, "ascending nulls first" is assumed.
Note that due to performance reasons this method uses sampling to estimate the ranges.
Besides Python, we also have repartitionByRange API in R. Can you also update it?
Oh right, I missed it! Pushed.
Test build #98992 has finished for PR 23025 at commit

Test build #98991 has finished for PR 23025 at commit
retest this please.

Test build #98995 has finished for PR 23025 at commit

thanks, merging to master!

Thanks all for the reviews!
#' using \code{spark.sql.shuffle.partitions} as number of partitions.}
#'}
#'
#' At least one partition-by expression must be specified.
this won't be formatted correctly in R doc due to the fact that "empty line" is significant. L769 should be removed to ensure it is in description
I see. What about on 761? I see several docs around here with empty lines (829, 831 below). Are those different? These comments are secondary, but I guess they belong in the public docs as much as anything.
761 is significant also, but correct.
essentially:
- first line of the blob is the title (L760)
- the second block of text, after an "empty line", is the description (L762)
- the third, after another "empty line", is the "detail note", which is stashed all the way at the bottom of the doc page
so generally you want the "important" part of the description on top and not in the "detail" section, because it is easily missed.
this is the most common pattern in this code base. there's another, where multiple functions are documented together as a group, e.g. the collection SQL functions (in functions.R). other, finer control is possible as well but not used today in this code base.
similarly L829 is good, L831 is a bit fuzzy - I'd personally prefer without L831 to keep the whole text in the description section of the doc. for me, generally if the doc text starts with "Note that" I'm ok with it in the "detail" section.
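(For readers unfamiliar with roxygen, here is a minimal sketch of the title / description / details layout described above. The function `exampleDoc` and its signature are made up purely for illustration and are not part of this PR.)

```r
#' Title: the first line of the roxygen block becomes the doc title.
#'
#' Description: the paragraph after the first empty line; keep the important
#' notes here, e.g. "At least one partition-by expression must be specified."
#'
#' Details: any later paragraph, after another empty line, is rendered in the
#' "Details" section at the bottom of the generated doc page and is easily missed.
#'
#' @param x a SparkDataFrame (hypothetical parameter, for illustration only).
#' @rdname exampleDoc
exampleDoc <- function(x) {
  x  # placeholder body for the sketch
}
```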
@felixcheung have a look at #23167
@felixcheung Thanks, I did not know about this strict doc formatting rule in R.
@srowen Thanks for taking care of the fix!
Following [SPARK-26024](https://issues.apache.org/jira/browse/SPARK-26024), I noticed the number of elements in each partition after repartitioning using `df.repartitionByRange` can vary for the same setup:

```scala
// Shuffle numbers from 0 to 1000, and make a DataFrame
val df = Random.shuffle(0.to(1000)).toDF("val")

// Repartition it using 3 partitions
// Sum up number of elements in each partition, and collect it.
// And do it several times
for (i <- 0 to 9) {
  var counts = df.repartitionByRange(3, col("val"))
    .mapPartitions{part => Iterator(part.size)}
    .collect()
  println(counts.toList)
}
// -> the number of elements in each partition varies
```

This is expected as for performance reasons this method uses sampling to estimate the ranges (with default size of 100). Hence, the output may not be consistent, since sampling can return different values. But documentation was not mentioning it at all, leading to misunderstanding.

## What changes were proposed in this pull request?

Update the documentation (Spark & PySpark) to mention the impact of `spark.sql.execution.rangeExchange.sampleSizePerPartition` on the resulting partitioned DataFrame.

Closes apache#23025 from JulienPeloton/SPARK-26024.

Authored-by: Julien <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
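(As an illustration of the behavior this PR documents, not code from the PR itself: the sketch below assumes a local SparkSession and the same toy `val` column as the snippet above, and raises `spark.sql.execution.rangeExchange.sampleSizePerPartition` before repartitioning. A larger sample should make the per-partition counts more stable across runs, though they are still not guaranteed to be identical.)

```scala
import scala.util.Random

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("repartitionByRange-sampling")
  .getOrCreate()
import spark.implicits._

// Raise the per-partition sample size used to estimate the range boundaries
// (the default is 100); a larger sample should give more stable boundaries.
spark.conf.set("spark.sql.execution.rangeExchange.sampleSizePerPartition", "1000")

val df = Random.shuffle(0.to(1000)).toDF("val")

// Count the elements per partition after a range repartition.
val counts = df.repartitionByRange(3, col("val"))
  .mapPartitions(part => Iterator(part.size))
  .collect()
println(counts.toList)
```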