@@ -497,7 +497,11 @@ object DataSourceStrategy {
        Some(sources.IsNotNull(a.name))

      case expressions.And(left, right) =>
-       (translateFilter(left) ++ translateFilter(right)).reduceOption(sources.And)
+       // See SPARK-12218 and PR 10362 for detailed discussion

Member: In the comment, you need to give an example to explain why.

Contributor Author: Sure. I have added more comments there, with an example. Thanks, Sean!

Member: Usually we don't list the PR number; the JIRA number is enough.

Contributor Author: @viirya I see. Thanks, Simon! I've removed the PR number from the comment.

+       for {

Member: Let's add a small comment like the one in the PR you pointed out.

Contributor Author: Sure. Will do. Thanks.

Contributor Author: Thanks. Just did that as you suggested.

+         leftFilter <- translateFilter(left)
+         rightFilter <- translateFilter(right)
+       } yield sources.And(leftFilter, rightFilter)

Contributor: Do we still need SPARK-12218 after this?

Contributor Author (@jliwork, Nov 20, 2017): I would think so. SPARK-12218 put fixes into ParquetFilters.createFilter and OrcFilters.createFilter. Those are similar to DataSourceStrategy.translateFilter but have different signatures, customized for Parquet and ORC. For all data sources, including JDBC, Parquet, etc., translateFilter is called to determine whether a predicate Expression can be pushed down as a Filter. Then, for Parquet and ORC, the Filters get mapped to Parquet- or ORC-specific filters by their own createFilter methods.

So this PR helps all data sources get the correct set of push-down predicates. Without it we simply got lucky with Parquet and ORC in terms of result correctness, because 1) it looks like we always apply a Filter on top of the scan, and 2) with one leg missing from an AND we end up with the same number of rows or more.

The JDBC data source does not always come with a Filter on top of the scan, and therefore exposed the bug.
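
To make the failure mode concrete, here is a minimal, self-contained sketch (plain Scala with stand-in filter classes, not the PR's actual code) of why the old reduceOption translation is unsound for an AND nested under a NOT:

    object AndTranslationSketch extends App {
      // Stand-ins for translated source filters; names are illustrative only.
      sealed trait Filter
      case class EqualTo(attribute: String, value: Any) extends Filter
      case class Not(child: Filter) extends Filter
      case class And(left: Filter, right: Filter) extends Filter

      // Pretend THEID != 2 translates fine, but NAME != 'mary' does not
      // (say, because an untranslatable function wraps NAME).
      val leftLeg: Option[Filter] = Some(Not(EqualTo("THEID", 2)))
      val rightLeg: Option[Filter] = None

      // Old translation: silently drops the untranslatable leg.
      val old = (leftLeg ++ rightLeg).reduceOption(And) // Some(Not(EqualTo("THEID", 2)))

      // Fixed translation: requires both legs, otherwise gives up on the whole AND.
      val fixed = for (l <- leftLeg; r <- rightLeg) yield And(l, r) // None

      println(s"old = $old, fixed = $fixed")
      // Under an enclosing NOT, pushing down only NOT (THEID != 2), i.e. THEID = 2,
      // wrongly drops rows where THEID != 2 but NAME = 'mary': exactly the rows
      // that NOT (THEID != 2 AND NAME != 'mary') must keep.
    }

The for-comprehension is just Option's both-or-nothing semantics: the AND is pushed down only when both legs translate, which is what the fixed translateFilter does with the real sources.And.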

Member: We do not need to clean up the code in this PR. Let's minimize the code changes; that will simplify the backport.

Member: Although Catalyst predicate expressions are all converted to sources.Filter when we try to push them down, not all convertible filters can be handled by Parquet and ORC. So I think we can still face the case where only one sub-filter of an AND can be pushed down by the file format.
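
For instance (an illustrative sketch of a hypothetical format's createFilter, not the real ParquetFilters or OrcFilters API), a format can decline a source Filter that translateFilter happily produced, so the both-legs rule matters at this layer too:

    import org.apache.spark.sql.sources._

    object FormatFilterSketch extends App {
      // A made-up file format that can push down equality and conjunctions only.
      // Real formats (Parquet, ORC) have richer, but still incomplete, coverage.
      def createFilter(filter: Filter): Option[String] = filter match {
        case EqualTo(attr, value) => Some(s"$attr = $value")
        case And(left, right) =>
          // The same both-or-nothing rule as the fixed translateFilter: this AND
          // is evaluated by the format itself, so dropping a leg would be unsound
          // wherever the format's output is trusted as exact.
          for (l <- createFilter(left); r <- createFilter(right)) yield s"($l AND $r)"
        case _ => None // e.g. StringContains: a valid source Filter, just not
                       // supported by this made-up format
      }

      // Both legs were convertible to source Filters, yet the conjunction is
      // declined because this format cannot handle StringContains:
      println(createFilter(And(EqualTo("THEID", 2), StringContains("NAME", "ma"))))
      // -> None
    }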


      case expressions.Or(left, right) =>
        for {

25 changes: 25 additions & 0 deletions in sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
@@ -296,8 +296,33 @@ class JDBCSuite extends SparkFunSuite
    // Older versions of Spark had this kind of bug in the Parquet data source.
    val df1 = sql("SELECT * FROM foobar WHERE NOT (THEID != 2 AND NAME != 'mary')")

Member (@viirya, Nov 21, 2017): As I noted in #10468 (comment), the above test doesn't actually exercise the SPARK-12218 issue. Maybe we can simply drop it.

Contributor Author: Fixed.

    val df2 = sql("SELECT * FROM foobar WHERE NOT (THEID != 2) OR NOT (NAME != 'mary')")
    val df3 = sql("SELECT * FROM foobar WHERE (THEID > 0 AND NAME = 'mary') OR (NAME = 'fred')")
    val df4 = sql("SELECT * FROM foobar " +
      "WHERE (THEID > 0 AND TRIM(NAME) = 'mary') OR (NAME = 'fred')")
    val df5 = sql("SELECT * FROM foobar " +
      "WHERE THEID > 0 AND TRIM(NAME) = 'mary' AND LENGTH(NAME) > 3")
    val df6 = sql("SELECT * FROM foobar " +
      "WHERE THEID < 0 OR NAME = 'mary' OR NAME = 'fred'")
    val df7 = sql("SELECT * FROM foobar " +
      "WHERE THEID < 0 OR TRIM(NAME) = 'mary' OR NAME = 'fred'")
    val df8 = sql("SELECT * FROM foobar " +
      "WHERE NOT((THEID < 0 OR NAME != 'mary') AND (THEID != 1 OR NAME != 'fred'))")
    val df9 = sql("SELECT * FROM foobar " +
      "WHERE NOT((THEID < 0 OR NAME != 'mary') AND (THEID != 1 OR TRIM(NAME) != 'fred'))")
    val df10 = sql("SELECT * FROM foobar " +

Contributor: Why do we need to test so many cases? As an end-to-end test, I think we only need a typical case.

"WHERE (NOT(THEID < 0 OR TRIM(NAME) != 'mary')) OR (THEID = 1 AND NAME = 'fred')")

assert(df1.collect.toSet === Set(Row("mary", 2)))
assert(df2.collect.toSet === Set(Row("mary", 2)))
assert(df3.collect.toSet === Set(Row("fred", 1), Row("mary", 2)))
assert(df4.collect.toSet === Set(Row("fred", 1), Row("mary", 2)))
assert(df5.collect.toSet === Set(Row("mary", 2)))
assert(df6.collect.toSet === Set(Row("fred", 1), Row("mary", 2)))
assert(df7.collect.toSet === Set(Row("fred", 1), Row("mary", 2)))
assert(df8.collect.toSet === Set(Row("fred", 1), Row("mary", 2)))
assert(df9.collect.toSet === Set(Row("fred", 1), Row("mary", 2)))
assert(df10.collect.toSet === Set(Row("fred", 1), Row("mary", 2)))

Contributor: I'd like to create a new DataSourceStrategySuite to test translateFilter.

Contributor Author: Sure. I can help.

Member: They are end-to-end test cases.

If you can, we should also add such a unit test suite. In the future, we can add more unit tests to verify more complex cases.

Contributor Author: I went ahead and added a new DataSourceStrategySuite to test translateFilter. Please feel free to let me know of any further comments. Thanks!
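
A unit test along these lines could exercise translateFilter directly (an illustrative sketch, not the suite's exact contents; the catalyst DSL attribute and the testTranslateFilter helper are assumptions):

    import org.apache.spark.SparkFunSuite
    import org.apache.spark.sql.catalyst.dsl.expressions._
    import org.apache.spark.sql.catalyst.expressions._
    import org.apache.spark.sql.execution.datasources.DataSourceStrategy
    import org.apache.spark.sql.sources

    class DataSourceStrategySuite extends SparkFunSuite {
      val attrInt = 'cint.int // AttributeReference("cint", IntegerType) via the DSL

      test("AND with one untranslatable leg is not pushed down") {
        // Abs(cint) = 1 has no source-Filter equivalent, so the whole AND must
        // translate to None rather than to just the translatable leg.
        testTranslateFilter(And(
          EqualTo(attrInt, Literal(1)),
          EqualTo(Abs(attrInt), Literal(1))), None)

        // Both legs translatable: the AND survives intact.
        testTranslateFilter(And(
          EqualTo(attrInt, Literal(1)),
          GreaterThan(attrInt, Literal(0))),
          Some(sources.And(sources.EqualTo("cint", 1), sources.GreaterThan("cint", 0))))
      }

      private def testTranslateFilter(catalystFilter: Expression, result: Option[sources.Filter]): Unit = {
        assertResult(result) {
          DataSourceStrategy.translateFilter(catalystFilter)
        }
      }
    }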



    def checkNotPushdown(df: DataFrame): DataFrame = {
      val parentPlan = df.queryExecution.executedPlan