Cherry pick SPARK-27291 and SPARK-27990 #631
Merged
Conversation
… on listing files

In apache#23130, all empty files are excluded from target file splits in `FileSourceScanExec`. In File source V2, we should keep the same behavior. This PR filters out empty files on listing files in `PartitioningAwareFileIndex`, so that the upper level doesn't need to handle them.

How was this patch tested? Unit test.

Closes apache#24227 from gengliangwang/ignoreEmptyFile.
Authored-by: Gengliang Wang <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
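A minimal sketch of the listing-time filtering described above, assuming files arrive as Hadoop `FileStatus` entries; the helper name is hypothetical, not the actual Spark internals:

```
import org.apache.hadoop.fs.FileStatus

// Hypothetical sketch: keep directories (needed for partition discovery)
// and drop zero-length files, so downstream split planning never sees them.
def dropEmptyFiles(statuses: Seq[FileStatus]): Seq[FileStatus] =
  statuses.filter(s => s.isDirectory || s.getLen > 0)
```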
…tasource

Provide a way to recursively load data from a datasource. This adds a "recursiveFileLookup" option. When the "recursiveFileLookup" option is turned on, partition inference is turned off and all files under the directory are loaded recursively. If a datasource explicitly specifies a partitionSpec and the user turns on the "recursiveFileLookup" option, an exception is thrown.

How was this patch tested? Unit tests.

Closes apache#24830 from WeichenXu123/recursive_ds.
Authored-by: WeichenXu <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
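An illustrative usage sketch of the new option; the format and path are hypothetical, and `spark` is an existing SparkSession:

```
// Load all files under the directory recursively; partition inference
// is disabled while this option is on.
val df = spark.read
  .format("csv")
  .option("recursiveFileLookup", "true")
  .load("/data/landing")

// Per the description above, combining recursiveFileLookup with an
// explicitly specified partitionSpec raises an exception.
```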
LGTM, we just have to merge manually so the commits aren't squashed together.
## What changes were proposed in this pull request?

Implement a binary file data source in Spark.

Format name: "binaryFile" (case-insensitive)

Schema:
- content: BinaryType
- status: StructType
  - path: StringType
  - modificationTime: TimestampType
  - length: LongType

Options:
* pathGlobFilter (instead of pathFilterRegex), to rely on GlobFilter behavior
* maxBytesPerPartition is not implemented, since it is controlled by two SQL confs: maxPartitionBytes and openCostInBytes.

## How was this patch tested?

Unit test added.

Closes apache#24354 from WeichenXu123/binary_file_datasource.
Lead-authored-by: WeichenXu <[email protected]>
Co-authored-by: Xiangrui Meng <[email protected]>
Signed-off-by: Xiangrui Meng <[email protected]>
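An illustrative usage of the new format; the path is hypothetical:

```
// Read only .png files (via GlobFilter semantics) as binary records.
val images = spark.read
  .format("binaryFile")
  .option("pathGlobFilter", "*.png")
  .load("/data/images")
```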
…ry file data source
## What changes were proposed in this pull request?
Support 4 kinds of filters:
- LessThan
- LessThanOrEqual
- GreaterThan
- GreaterThanOrEqual
Support filters applied on 2 columns:
- modificationTime
- length
Note:
In order to support datasource filter push-down, I flattened the schema to be:
```
val schema = StructType(
  StructField("path", StringType, false) ::
    StructField("modificationTime", TimestampType, false) ::
    StructField("length", LongType, false) ::
    StructField("content", BinaryType, true) :: Nil)
```
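As a hedged illustration (not from the original PR), here is a query whose predicates target the push-down-eligible columns above; the path is hypothetical and `spark` is an existing SparkSession:

```
import spark.implicits._

// Both predicates hit push-down-eligible columns (length, modificationTime),
// so files can be skipped before their content is ever read.
val recent = spark.read
  .format("binaryFile")
  .load("/data/blobs")
  .filter($"length" < 1000000 && $"modificationTime" > "2019-01-01")
```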
## How was this patch tested?
To be added.
Closes apache#24387 from WeichenXu123/binary_ds_filter.
Lead-authored-by: WeichenXu <[email protected]>
Co-authored-by: Xiangrui Meng <[email protected]>
Signed-off-by: Xiangrui Meng <[email protected]>
… if it is not selected
## What changes were proposed in this pull request?
A follow-up task from SPARK-25348. To save I/O cost, Spark shouldn't attempt to read the file if users didn't request the `content` column. For example:
```
spark.read.format("binaryFile").load(path).filter($"length" < 1000000).count()
```
## How was this patch tested?
Unit test added.
Closes apache#24473 from WeichenXu123/SPARK-27534.
Lead-authored-by: Xiangrui Meng <[email protected]>
Co-authored-by: WeichenXu <[email protected]>
Signed-off-by: Xiangrui Meng <[email protected]>
…to read very large files

If a file is too big (>2GB), we should fail fast and not try to read the file.

Closes apache#24483 from mengxr/SPARK-27588.
Authored-by: Xiangrui Meng <[email protected]>
Signed-off-by: Xiangrui Meng <[email protected]>
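A minimal sketch of the fail-fast check this commit describes, assuming the file content is materialized as a single byte array; the helper and constant names are illustrative:

```
import org.apache.hadoop.fs.FileStatus
import org.apache.spark.SparkException

// A single JVM byte array holds at most Int.MaxValue bytes, so a file whose
// content must be loaded as one array cannot exceed ~2 GB.
val maxLength: Long = Int.MaxValue

def assertReadable(status: FileStatus): Unit = {
  if (status.getLen > maxLength) {
    throw new SparkException(
      s"File ${status.getPath} is too big (${status.getLen} bytes) to read " +
        s"as a single binary record; the maximum is $maxLength bytes.")
  }
}
```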
…or all file sources

The data source option `pathGlobFilter` was introduced for the binary file format (apache#24354) and can be used to filter file names, e.g. reading only `.png` files when `.json` files sit in the same directory. This makes `pathGlobFilter` a general option for all file sources. Path filtering should happen during path globbing on the driver; filtering file path names inside file scan tasks on executors is ugly. Benefits:
1. The splitting of file partitions will be more balanced.
2. The metrics of file scans will be more accurate.
3. Users can use the option when reading other file sources.

How was this patch tested? Unit tests.

Closes apache#24518 from gengliangwang/globFilter.
Authored-by: Gengliang Wang <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
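With the option generalized, any file source can use it. A hypothetical example: read only the `.json` files from a directory that also holds other file types:

```
// Files not matching the glob are excluded during path globbing on the
// driver, before partitions are planned.
val df = spark.read
  .format("json")
  .option("pathGlobFilter", "*.json")
  .load("/data/mixed")
```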
## What changes were proposed in this pull request?

Convert `PartitionedFile.filePath` to a URI first in the binary file data source. Otherwise Spark throws a FileNotFoundException, because it would create the `Path` from a URL-encoded string instead of wrapping it in a URI.

## How was this patch tested?

Unit test.

Closes apache#24855 from mengxr/SPARK-28030.
Authored-by: Xiangrui Meng <[email protected]>
Signed-off-by: Xiangrui Meng <[email protected]>
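A sketch of the distinction the fix relies on, with a hypothetical URL-encoded path of the kind `PartitionedFile.filePath` would carry:

```
import java.net.URI
import org.apache.hadoop.fs.Path

// Hypothetical URL-encoded file path.
val filePath = "file:/tmp/dir%20with%20space/part-00000.bin"

// Building a Path straight from the encoded string leaves "%20" in the
// path, which leads to a FileNotFoundException on lookup.
val broken = new Path(filePath)

// The fix: decode through URI first, then wrap it in a Path.
val fixed = new Path(new URI(filePath))
```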
Upstream SPARK-XXXXX ticket and PR link (if not applicable, explain)
apache#24227
apache#24830
What changes were proposed in this pull request?
Filter empty files on listing files and (more importantly) provide a way to recursively load data from a datasource. The former is included to reduce conflicts (and because it is a useful improvement in its own right).
How was this patch tested?
Covered by the unit tests added in the cherry-picked upstream commits (apache#24227 and apache#24830).