@lwwmanning

Upstream SPARK-XXXXX ticket and PR link (if not applicable, explain)

apache#24227
apache#24830

What changes were proposed in this pull request?

Filter out empty files when listing files and (more importantly) provide a way to recursively load data from a data source. The former is included to reduce merge conflicts (and because it's a nice improvement).

How was this patch tested?

Covered by the unit tests described in the individual commit messages below.

gengliangwang and others added 2 commits January 9, 2020 13:13
… on listing files

In apache#23130, all empty files are excluded from target file splits in `FileSourceScanExec`.
In File source V2, we should keep the same behavior.

This PR filters out empty files when listing files in `PartitioningAwareFileIndex`, so that the upper layers don't need to handle them.
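
A minimal sketch of the listing-time filter (the helper name is illustrative, not the exact Spark internals):

```
import org.apache.hadoop.fs.FileStatus

// Drop zero-length files while listing, so split planning and the
// scan exec never see them. `nonEmptyDataFiles` is a hypothetical helper.
def nonEmptyDataFiles(allFiles: Seq[FileStatus]): Seq[FileStatus] =
  allFiles.filter(_.getLen > 0)
```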

Unit test

Closes apache#24227 from gengliangwang/ignoreEmptyFile.

Authored-by: Gengliang Wang <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
…tasource

Provide a way to recursively load data from a data source by adding a `recursiveFileLookup` option.

When "recursiveFileLookup" option turn on, then partition inferring is turned off and all files from the directory will be loaded recursively.

If a data source explicitly specifies a partitionSpec and the user turns on the `recursiveFileLookup` option, an exception is thrown.
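
A usage sketch, assuming an in-scope `SparkSession` named `spark` and an illustrative directory of CSV files:

```
// Loads every CSV file under /data, including nested subdirectories.
// Setting recursiveFileLookup disables partition inference.
val df = spark.read
  .format("csv")
  .option("recursiveFileLookup", "true")
  .load("/data")
```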

Unit tests.


Closes apache#24830 from WeichenXu123/recursive_ds.

Authored-by: WeichenXu <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
@lwwmanning lwwmanning requested a review from robert3005 January 9, 2020 13:29
@robert3005

LGTM, just have to merge manually so as not to squash the commits together.

WeichenXu123 and others added 6 commits January 9, 2020 14:38
## What changes were proposed in this pull request?

Implement a binary file data source in Spark.

Format name: "binaryFile" (case-insensitive)

Schema:
- content: BinaryType
- status: StructType
  - path: StringType
  - modificationTime: TimestampType
  - length: LongType

Options:
* `pathGlobFilter` (instead of `pathFilterRegex`), to rely on `GlobFilter` behavior.
* `maxBytesPerPartition` is not implemented, since it is controlled by two SQL confs: `maxPartitionBytes` and `openCostInBytes`.
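
A usage sketch covering the format and options above (the path and glob pattern are illustrative):

```
// Each row carries the file content plus its status metadata
// (path, modificationTime, length), per the schema above.
val images = spark.read
  .format("binaryFile")
  .option("pathGlobFilter", "*.png")
  .load("/images")
```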

## How was this patch tested?

Unit test added.


Closes apache#24354 from WeichenXu123/binary_file_datasource.

Lead-authored-by: WeichenXu <[email protected]>
Co-authored-by: Xiangrui Meng <[email protected]>
Signed-off-by: Xiangrui Meng <[email protected]>
…ry file data source

## What changes were proposed in this pull request?

Support 4 kinds of filters:
- LessThan
- LessThanOrEqual
- GreaterThan
- GreaterThanOrEqual

Support filters applied on 2 columns:
- modificationTime
- length

Note:
In order to support data source filter push-down, I flatten the schema to:
```
import org.apache.spark.sql.types._

val schema = StructType(
    StructField("path", StringType, false) ::
    StructField("modificationTime", TimestampType, false) ::
    StructField("length", LongType, false) ::
    StructField("content", BinaryType, true) :: Nil)
```
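
A usage sketch of a query whose predicates can be pushed down to the source (the path and thresholds are illustrative):

```
import java.sql.Timestamp
import spark.implicits._ // assumes an in-scope SparkSession named `spark`

// Both predicates reference the flattened top-level columns length and
// modificationTime, so the source can skip non-matching files entirely.
val recentSmall = spark.read
  .format("binaryFile")
  .load("/data/blobs")
  .filter($"length" <= 1024L * 1024L)
  .filter($"modificationTime" >= Timestamp.valueOf("2020-01-01 00:00:00"))
```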

## How was this patch tested?

To be added.


Closes apache#24387 from WeichenXu123/binary_ds_filter.

Lead-authored-by: WeichenXu <[email protected]>
Co-authored-by: Xiangrui Meng <[email protected]>
Signed-off-by: Xiangrui Meng <[email protected]>
… if it is not selected

## What changes were proposed in this pull request?

A follow-up task from SPARK-25348. To save I/O cost, Spark shouldn't attempt to read the file if users didn't request the `content` column. For example:
```
spark.read.format("binaryFile").load(path).filter($"length" < 1000000).count()
```
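
A minimal sketch of the reader-side guard (names are illustrative, not the exact Spark internals):

```
import org.apache.spark.sql.types.StructType

// Only open and read the file when the projected schema actually
// includes the content column; otherwise serve metadata-only rows.
def shouldReadContent(requiredSchema: StructType): Boolean =
  requiredSchema.fieldNames.contains("content")
```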

## How was this patch tested?

Unit test added.


Closes apache#24473 from WeichenXu123/SPARK-27534.

Lead-authored-by: Xiangrui Meng <[email protected]>
Co-authored-by: WeichenXu <[email protected]>
Signed-off-by: Xiangrui Meng <[email protected]>
…to read very large files

If a file is too big (>2GB), we should fail fast and not try to read the file: the content column is loaded as a single byte array, and JVM arrays are capped at `Int.MaxValue` bytes.
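
A minimal sketch of the fail-fast check (the exception type and message are illustrative):

```
import org.apache.hadoop.fs.FileStatus

// Refuse up front rather than failing mid-read: the content column is a
// single byte array, so anything above Int.MaxValue bytes cannot fit.
def checkFileLength(status: FileStatus): Unit = {
  if (status.getLen > Int.MaxValue) {
    throw new IllegalArgumentException(
      s"File ${status.getPath} is ${status.getLen} bytes, " +
        s"exceeding the ${Int.MaxValue}-byte limit for binaryFile.")
  }
}
```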


Closes apache#24483 from mengxr/SPARK-27588.

Authored-by: Xiangrui Meng <[email protected]>
Signed-off-by: Xiangrui Meng <[email protected]>
…or all file sources

The data source option `pathGlobFilter` was introduced for the binary file format in apache#24354. It can be used to filter file names, e.g. to read only `.png` files while `.json` files sit in the same directory.

Make `pathGlobFilter` a general option for all file sources. The path filtering should happen during path globbing on the driver.

Filtering the file path names in file scan tasks on executors would be ugly. With this change:

1. The splitting of file partitions will be more balanced.
2. The metrics of file scan will be more accurate.
3. Users can use the option for reading other file sources.
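
A usage sketch with a non-binary source (the path is illustrative):

```
// Read only the JSON files from a directory that also contains images;
// the glob is applied on the driver during path globbing.
val logs = spark.read
  .format("json")
  .option("pathGlobFilter", "*.json")
  .load("/data/mixed")
```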

Unit tests

Closes apache#24518 from gengliangwang/globFilter.

Authored-by: Gengliang Wang <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
## What changes were proposed in this pull request?

Convert `PartitionedFile.filePath` to a URI first in the binary file data source. Otherwise Spark throws a FileNotFoundException, because `Path` is created from a URL-encoded string instead of being wrapped in a URI.
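
A minimal sketch of the fix (the encoded path is an illustrative stand-in for `PartitionedFile.filePath`):

```
import java.net.URI
import org.apache.hadoop.fs.Path

// filePath arrives URL-encoded (e.g. a space as "%20"). Wrapping it in a
// URI decodes it; building Path straight from the encoded string makes
// Hadoop look up a file name containing a literal "%20".
val encoded = "/data/my%20file.bin"
val path = new Path(new URI(encoded))
```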

## How was this patch tested?

Unit test.

Closes apache#24855 from mengxr/SPARK-28030.

Authored-by: Xiangrui Meng <[email protected]>
Signed-off-by: Xiangrui Meng <[email protected]>
@robert3005 robert3005 merged commit d50bdf7 into master Jan 10, 2020
@robert3005 robert3005 deleted the wm/spark-27990 branch January 10, 2020 11:28