
Conversation

@skonto
Contributor

@skonto skonto commented Jul 10, 2019

What changes were proposed in this pull request?

This PR adds some tests converted from group-by.sql to test UDFs. Please see the contribution guide in the umbrella ticket SPARK-27921.

Diff comparing to 'group-by.sql'

diff --git a/sql/core/src/test/resources/sql-tests/results/udf/udf-group-by.sql.out b/sql/core/src/test/resources/sql-tests/results/udf/udf-group-by.sql.out
index 3a5df254f2..0118c05b1d 100644
--- a/sql/core/src/test/resources/sql-tests/results/udf/udf-group-by.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/udf/udf-group-by.sql.out
@@ -13,26 +13,26 @@ struct<>
 
 
 -- !query 1
-SELECT a, COUNT(b) FROM testData
+SELECT udf(a), udf(COUNT(b)) FROM testData
 -- !query 1 schema
 struct<>
 -- !query 1 output
 org.apache.spark.sql.AnalysisException
-grouping expressions sequence is empty, and 'testdata.`a`' is not an aggregate function. Wrap '(count(testdata.`b`) AS `count(b)`)' in windowing function(s) or wrap 'testdata.`a`' in first() (or first_value) if you don't care which value you get.;
+grouping expressions sequence is empty, and 'testdata.`a`' is not an aggregate function. Wrap '(CAST(udf(cast(count(b) as string)) AS BIGINT) AS `CAST(udf(cast(count(b) as string)) AS BIGINT)`)' in windowing function(s) or wrap 'testdata.`a`' in first() (or first_value) if you don't care which value you get.;
 
 
 -- !query 2
-SELECT COUNT(a), COUNT(b) FROM testData
+SELECT COUNT(udf(a)), udf(COUNT(b)) FROM testData
 -- !query 2 schema
-struct<count(a):bigint,count(b):bigint>
+struct<count(CAST(udf(cast(a as string)) AS INT)):bigint,CAST(udf(cast(count(b) as string)) AS BIGINT):bigint>
 -- !query 2 output
 7	7
 
 
 -- !query 3
-SELECT a, COUNT(b) FROM testData GROUP BY a
+SELECT udf(a), COUNT(udf(b)) FROM testData GROUP BY a
 -- !query 3 schema
-struct<a:int,count(b):bigint>
+struct<CAST(udf(cast(a as string)) AS INT):int,count(CAST(udf(cast(b as string)) AS INT)):bigint>
 -- !query 3 output
 1	2
 2	2
@@ -41,7 +41,7 @@ NULL	1
 
 
 -- !query 4
-SELECT a, COUNT(b) FROM testData GROUP BY b
+SELECT udf(a), udf(COUNT(udf(b))) FROM testData GROUP BY b
 -- !query 4 schema
 struct<>
 -- !query 4 output
@@ -50,9 +50,9 @@ expression 'testdata.`a`' is neither present in the group by, nor is it an aggre
 
 
 -- !query 5
-SELECT COUNT(a), COUNT(b) FROM testData GROUP BY a
+SELECT COUNT(udf(a)), COUNT(udf(b)) FROM testData GROUP BY udf(a)
 -- !query 5 schema
-struct<count(a):bigint,count(b):bigint>
+struct<count(CAST(udf(cast(a as string)) AS INT)):bigint,count(CAST(udf(cast(b as string)) AS INT)):bigint>
 -- !query 5 output
 0	1
 2	2
@@ -61,15 +61,15 @@ struct<count(a):bigint,count(b):bigint>
 
 
 -- !query 6
-SELECT 'foo', COUNT(a) FROM testData GROUP BY 1
+SELECT 'foo', COUNT(udf(a)) FROM testData GROUP BY 1
 -- !query 6 schema
-struct<foo:string,count(a):bigint>
+struct<foo:string,count(CAST(udf(cast(a as string)) AS INT)):bigint>
 -- !query 6 output
 foo	7
 
 
 -- !query 7
-SELECT 'foo' FROM testData WHERE a = 0 GROUP BY 1
+SELECT 'foo' FROM testData WHERE a = 0 GROUP BY udf(1)
 -- !query 7 schema
 struct<foo:string>
 -- !query 7 output
@@ -77,25 +77,25 @@ struct<foo:string>
 
 
 -- !query 8
-SELECT 'foo', APPROX_COUNT_DISTINCT(a) FROM testData WHERE a = 0 GROUP BY 1
+SELECT 'foo', udf(APPROX_COUNT_DISTINCT(udf(a))) FROM testData WHERE a = 0 GROUP BY 1
 -- !query 8 schema
-struct<foo:string,approx_count_distinct(a):bigint>
+struct<foo:string,CAST(udf(cast(approx_count_distinct(cast(udf(cast(a as string)) as int), 0.05, 0, 0) as string)) AS BIGINT):bigint>
 -- !query 8 output
 
 
 
 -- !query 9
-SELECT 'foo', MAX(STRUCT(a)) FROM testData WHERE a = 0 GROUP BY 1
+SELECT 'foo', MAX(STRUCT(udf(a))) FROM testData WHERE a = 0 GROUP BY 1
 -- !query 9 schema
-struct<foo:string,max(named_struct(a, a)):struct<a:int>>
+struct<foo:string,max(named_struct(col1, CAST(udf(cast(a as string)) AS INT))):struct<col1:int>>
 -- !query 9 output
 
 
 
 -- !query 10
-SELECT a + b, COUNT(b) FROM testData GROUP BY a + b
+SELECT udf(a + b), udf(COUNT(b)) FROM testData GROUP BY a + b
 -- !query 10 schema
-struct<(a + b):int,count(b):bigint>
+struct<CAST(udf(cast((a + b) as string)) AS INT):int,CAST(udf(cast(count(b) as string)) AS BIGINT):bigint>
 -- !query 10 output
 2	1
 3	2
@@ -105,7 +105,7 @@ NULL	1
 
 
 -- !query 11
-SELECT a + 2, COUNT(b) FROM testData GROUP BY a + 1
+SELECT udf(a + 2), udf(COUNT(b)) FROM testData GROUP BY a + 1
 -- !query 11 schema
 struct<>
 -- !query 11 output
@@ -114,37 +114,35 @@ expression 'testdata.`a`' is neither present in the group by, nor is it an aggre
 
 
 -- !query 12
-SELECT a + 1 + 1, COUNT(b) FROM testData GROUP BY a + 1
+SELECT udf(a + 1 + 1), udf(COUNT(b)) FROM testData GROUP BY udf(a + 1)
 -- !query 12 schema
-struct<((a + 1) + 1):int,count(b):bigint>
+struct<>
 -- !query 12 output
-3	2
-4	2
-5	2
-NULL	1
+org.apache.spark.sql.AnalysisException
+expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;
 
 
 -- !query 13
-SELECT SKEWNESS(a), KURTOSIS(a), MIN(a), MAX(a), AVG(a), VARIANCE(a), STDDEV(a), SUM(a), COUNT(a)
+SELECT SKEWNESS(udf(a)), udf(KURTOSIS(a)), udf(MIN(a)), MAX(udf(a)), udf(AVG(udf(a))), udf(VARIANCE(a)), STDDEV(udf(a)), udf(SUM(a)), udf(COUNT(a))
 FROM testData
 -- !query 13 schema
-struct<skewness(CAST(a AS DOUBLE)):double,kurtosis(CAST(a AS DOUBLE)):double,min(a):int,max(a):int,avg(a):double,var_samp(CAST(a AS DOUBLE)):double,stddev_samp(CAST(a AS DOUBLE)):double,sum(a):bigint,count(a):bigint>
+struct<skewness(CAST(CAST(udf(cast(a as string)) AS INT) AS DOUBLE)):double,CAST(udf(cast(kurtosis(cast(a as double)) as string)) AS DOUBLE):double,CAST(udf(cast(min(a) as string)) AS INT):int,max(CAST(udf(cast(a as string)) AS INT)):int,CAST(udf(cast(avg(cast(cast(udf(cast(a as string)) as int) as bigint)) as string)) AS DOUBLE):double,CAST(udf(cast(var_samp(cast(a as double)) as string)) AS DOUBLE):double,stddev_samp(CAST(CAST(udf(cast(a as string)) AS INT) AS DOUBLE)):double,CAST(udf(cast(sum(cast(a as bigint)) as string)) AS BIGINT):bigint,CAST(udf(cast(count(a) as string)) AS BIGINT):bigint>
 -- !query 13 output
 -0.2723801058145729	-1.5069204152249134	1	3	2.142857142857143	0.8095238095238094	0.8997354108424372	15	7
 
 
 -- !query 14
-SELECT COUNT(DISTINCT b), COUNT(DISTINCT b, c) FROM (SELECT 1 AS a, 2 AS b, 3 AS c) GROUP BY a
+SELECT COUNT(DISTINCT udf(b)), udf(COUNT(DISTINCT b, c)) FROM (SELECT 1 AS a, 2 AS b, 3 AS c) GROUP BY a
 -- !query 14 schema
-struct<count(DISTINCT b):bigint,count(DISTINCT b, c):bigint>
+struct<count(DISTINCT CAST(udf(cast(b as string)) AS INT)):bigint,CAST(udf(cast(count(distinct b, c) as string)) AS BIGINT):bigint>
 -- !query 14 output
 1	1
 
 
 -- !query 15
-SELECT a AS k, COUNT(b) FROM testData GROUP BY k
+SELECT a AS k, COUNT(udf(b)) FROM testData GROUP BY k
 -- !query 15 schema
-struct<k:int,count(b):bigint>
+struct<k:int,count(CAST(udf(cast(b as string)) AS INT)):bigint>
 -- !query 15 output
 1	2
 2	2
@@ -153,21 +151,21 @@ NULL	1
 
 
 -- !query 16
-SELECT a AS k, COUNT(b) FROM testData GROUP BY k HAVING k > 1
+SELECT a AS k, udf(COUNT(b)) FROM testData GROUP BY k HAVING k > 1
 -- !query 16 schema
-struct<k:int,count(b):bigint>
+struct<k:int,CAST(udf(cast(count(b) as string)) AS BIGINT):bigint>
 -- !query 16 output
 2	2
 3	2
 
 
 -- !query 17
-SELECT COUNT(b) AS k FROM testData GROUP BY k
+SELECT udf(COUNT(b)) AS k FROM testData GROUP BY k
 -- !query 17 schema
 struct<>
 -- !query 17 output
 org.apache.spark.sql.AnalysisException
-aggregate functions are not allowed in GROUP BY, but found count(testdata.`b`);
+aggregate functions are not allowed in GROUP BY, but found CAST(udf(cast(count(b) as string)) AS BIGINT);
 
 
 -- !query 18
@@ -180,7 +178,7 @@ struct<>
 
 
 -- !query 19
-SELECT k AS a, COUNT(v) FROM testDataHasSameNameWithAlias GROUP BY a
+SELECT k AS a, udf(COUNT(udf(v))) FROM testDataHasSameNameWithAlias GROUP BY a
 -- !query 19 schema
 struct<>
 -- !query 19 output
@@ -197,32 +195,32 @@ spark.sql.groupByAliases	false
 
 
 -- !query 21
-SELECT a AS k, COUNT(b) FROM testData GROUP BY k
+SELECT a AS k, udf(COUNT(udf(b))) FROM testData GROUP BY k
 -- !query 21 schema
 struct<>
 -- !query 21 output
 org.apache.spark.sql.AnalysisException
-cannot resolve '`k`' given input columns: [testdata.a, testdata.b]; line 1 pos 47
+cannot resolve '`k`' given input columns: [testdata.a, testdata.b]; line 1 pos 57
 
 
 -- !query 22
-SELECT a, COUNT(1) FROM testData WHERE false GROUP BY a
+SELECT a, COUNT(udf(1)) FROM testData WHERE false GROUP BY a
 -- !query 22 schema
-struct<a:int,count(1):bigint>
+struct<a:int,count(CAST(udf(cast(1 as string)) AS INT)):bigint>
 -- !query 22 output
 
 
 
 -- !query 23
-SELECT COUNT(1) FROM testData WHERE false
+SELECT udf(COUNT(1)) FROM testData WHERE false
 -- !query 23 schema
-struct<count(1):bigint>
+struct<CAST(udf(cast(count(1) as string)) AS BIGINT):bigint>
 -- !query 23 output
 0
 
 
 -- !query 24
-SELECT 1 FROM (SELECT COUNT(1) FROM testData WHERE false) t
+SELECT 1 FROM (SELECT udf(COUNT(1)) FROM testData WHERE false) t
 -- !query 24 schema
 struct<1:int>
 -- !query 24 output
@@ -232,7 +230,7 @@ struct<1:int>
 -- !query 25
 SELECT 1 from (
   SELECT 1 AS z,
-  MIN(a.x)
+  udf(MIN(a.x))
   FROM (select 1 as x) a
   WHERE false
 ) b
@@ -244,32 +242,32 @@ struct<1:int>
 
 
 -- !query 26
-SELECT corr(DISTINCT x, y), corr(DISTINCT y, x), count(*)
+SELECT corr(DISTINCT x, y), udf(corr(DISTINCT y, x)), count(*)
   FROM (VALUES (1, 1), (2, 2), (2, 2)) t(x, y)
 -- !query 26 schema
-struct<corr(DISTINCT CAST(x AS DOUBLE), CAST(y AS DOUBLE)):double,corr(DISTINCT CAST(y AS DOUBLE), CAST(x AS DOUBLE)):double,count(1):bigint>
+struct<corr(DISTINCT CAST(x AS DOUBLE), CAST(y AS DOUBLE)):double,CAST(udf(cast(corr(distinct cast(y as double), cast(x as double)) as string)) AS DOUBLE):double,count(1):bigint>
 -- !query 26 output
 1.0	1.0	3
 
 
 -- !query 27
-SELECT 1 FROM range(10) HAVING true
+SELECT udf(1) FROM range(10) HAVING true
 -- !query 27 schema
-struct<1:int>
+struct<CAST(udf(cast(1 as string)) AS INT):int>
 -- !query 27 output
 1
 
 
 -- !query 28
-SELECT 1 FROM range(10) HAVING MAX(id) > 0
+SELECT udf(udf(1)) FROM range(10) HAVING MAX(id) > 0
 -- !query 28 schema
-struct<1:int>
+struct<CAST(udf(cast(cast(udf(cast(1 as string)) as int) as string)) AS INT):int>
 -- !query 28 output
 1
 
 
 -- !query 29
-SELECT id FROM range(10) HAVING id > 0
+SELECT udf(id) FROM range(10) HAVING id > 0
 -- !query 29 schema
 struct<>
 -- !query 29 output
@@ -291,33 +289,33 @@ struct<>
 
 
 -- !query 31
-SELECT every(v), some(v), any(v) FROM test_agg WHERE 1 = 0
+SELECT udf(every(v)), udf(some(v)), any(v) FROM test_agg WHERE 1 = 0
 -- !query 31 schema
-struct<every(v):boolean,some(v):boolean,any(v):boolean>
+struct<CAST(udf(cast(every(v) as string)) AS BOOLEAN):boolean,CAST(udf(cast(some(v) as string)) AS BOOLEAN):boolean,any(v):boolean>
 -- !query 31 output
 NULL	NULL	NULL
 
 
 -- !query 32
-SELECT every(v), some(v), any(v) FROM test_agg WHERE k = 4
+SELECT udf(every(udf(v))), some(v), any(v) FROM test_agg WHERE k = 4
 -- !query 32 schema
-struct<every(v):boolean,some(v):boolean,any(v):boolean>
+struct<CAST(udf(cast(every(cast(udf(cast(v as string)) as boolean)) as string)) AS BOOLEAN):boolean,some(v):boolean,any(v):boolean>
 -- !query 32 output
 NULL	NULL	NULL
 
 
 -- !query 33
-SELECT every(v), some(v), any(v) FROM test_agg WHERE k = 5
+SELECT every(v), udf(some(v)), any(v) FROM test_agg WHERE k = 5
 -- !query 33 schema
-struct<every(v):boolean,some(v):boolean,any(v):boolean>
+struct<every(v):boolean,CAST(udf(cast(some(v) as string)) AS BOOLEAN):boolean,any(v):boolean>
 -- !query 33 output
 false	true	true
 
 
 -- !query 34
-SELECT k, every(v), some(v), any(v) FROM test_agg GROUP BY k
+SELECT k, every(v), udf(some(v)), any(v) FROM test_agg GROUP BY k
 -- !query 34 schema
-struct<k:int,every(v):boolean,some(v):boolean,any(v):boolean>
+struct<k:int,every(v):boolean,CAST(udf(cast(some(v) as string)) AS BOOLEAN):boolean,any(v):boolean>
 -- !query 34 output
 1	false	true	true
 2	true	true	true
@@ -327,9 +325,9 @@ struct<k:int,every(v):boolean,some(v):boolean,any(v):boolean>
 
 
 -- !query 35
-SELECT k, every(v) FROM test_agg GROUP BY k HAVING every(v) = false
+SELECT udf(k), every(v) FROM test_agg GROUP BY k HAVING every(v) = false
 -- !query 35 schema
-struct<k:int,every(v):boolean>
+struct<CAST(udf(cast(k as string)) AS INT):int,every(v):boolean>
 -- !query 35 output
 1	false
 3	false
@@ -337,16 +335,16 @@ struct<k:int,every(v):boolean>
 
 
 -- !query 36
-SELECT k, every(v) FROM test_agg GROUP BY k HAVING every(v) IS NULL
+SELECT k, udf(every(v)) FROM test_agg GROUP BY k HAVING every(v) IS NULL
 -- !query 36 schema
-struct<k:int,every(v):boolean>
+struct<k:int,CAST(udf(cast(every(v) as string)) AS BOOLEAN):boolean>
 -- !query 36 output
 4	NULL
 
 
 -- !query 37
 SELECT k,
-       Every(v) AS every
+       udf(Every(v)) AS every
 FROM   test_agg
 WHERE  k = 2
        AND v IN (SELECT Any(v)
@@ -360,7 +358,7 @@ struct<k:int,every:boolean>
 
 
 -- !query 38
-SELECT k,
+SELECT udf(udf(k)),
        Every(v) AS every
 FROM   test_agg
 WHERE  k = 2
@@ -369,45 +367,45 @@ WHERE  k = 2
                  WHERE  k = 1)
 GROUP  BY k
 -- !query 38 schema
-struct<k:int,every:boolean>
+struct<CAST(udf(cast(cast(udf(cast(k as string)) as int) as string)) AS INT):int,every:boolean>
 -- !query 38 output
 
 
 
 -- !query 39
-SELECT every(1)
+SELECT every(udf(1))
 -- !query 39 schema
 struct<>
 -- !query 39 output
 org.apache.spark.sql.AnalysisException
-cannot resolve 'every(1)' due to data type mismatch: Input to function 'every' should have been boolean, but it's [int].; line 1 pos 7
+cannot resolve 'every(CAST(udf(cast(1 as string)) AS INT))' due to data type mismatch: Input to function 'every' should have been boolean, but it's [int].; line 1 pos 7
 
 
 -- !query 40
-SELECT some(1S)
+SELECT some(udf(1S))
 -- !query 40 schema
 struct<>
 -- !query 40 output
 org.apache.spark.sql.AnalysisException
-cannot resolve 'some(1S)' due to data type mismatch: Input to function 'some' should have been boolean, but it's [smallint].; line 1 pos 7
+cannot resolve 'some(CAST(udf(cast(1 as string)) AS SMALLINT))' due to data type mismatch: Input to function 'some' should have been boolean, but it's [smallint].; line 1 pos 7
 
 
 -- !query 41
-SELECT any(1L)
+SELECT any(udf(1L))
 -- !query 41 schema
 struct<>
 -- !query 41 output
 org.apache.spark.sql.AnalysisException
-cannot resolve 'any(1L)' due to data type mismatch: Input to function 'any' should have been boolean, but it's [bigint].; line 1 pos 7
+cannot resolve 'any(CAST(udf(cast(1 as string)) AS BIGINT))' due to data type mismatch: Input to function 'any' should have been boolean, but it's [bigint].; line 1 pos 7
 
 
 -- !query 42
-SELECT every("true")
+SELECT udf(every("true"))
 -- !query 42 schema
 struct<>
 -- !query 42 output
 org.apache.spark.sql.AnalysisException
-cannot resolve 'every('true')' due to data type mismatch: Input to function 'every' should have been boolean, but it's [string].; line 1 pos 7
+cannot resolve 'every('true')' due to data type mismatch: Input to function 'every' should have been boolean, but it's [string].; line 1 pos 11
 
 
 -- !query 43
@@ -428,9 +426,9 @@ struct<k:int,v:boolean,every(v) OVER (PARTITION BY k ORDER BY v ASC NULLS FIRST
 
 
 -- !query 44
-SELECT k, v, some(v) OVER (PARTITION BY k ORDER BY v) FROM test_agg
+SELECT k, udf(udf(v)), some(v) OVER (PARTITION BY k ORDER BY v) FROM test_agg
 -- !query 44 schema
-struct<k:int,v:boolean,some(v) OVER (PARTITION BY k ORDER BY v ASC NULLS FIRST RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW):boolean>
+struct<k:int,CAST(udf(cast(cast(udf(cast(v as string)) as boolean) as string)) AS BOOLEAN):boolean,some(v) OVER (PARTITION BY k ORDER BY v ASC NULLS FIRST RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW):boolean>
 -- !query 44 output
 1	false	false
 1	true	true
@@ -445,9 +443,9 @@ struct<k:int,v:boolean,some(v) OVER (PARTITION BY k ORDER BY v ASC NULLS FIRST R
 
 
 -- !query 45
-SELECT k, v, any(v) OVER (PARTITION BY k ORDER BY v) FROM test_agg
+SELECT udf(udf(k)), v, any(v) OVER (PARTITION BY k ORDER BY v) FROM test_agg
 -- !query 45 schema
-struct<k:int,v:boolean,any(v) OVER (PARTITION BY k ORDER BY v ASC NULLS FIRST RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW):boolean>
+struct<CAST(udf(cast(cast(udf(cast(k as string)) as int) as string)) AS INT):int,v:boolean,any(v) OVER (PARTITION BY k ORDER BY v ASC NULLS FIRST RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW):boolean>
 -- !query 45 output
 1	false	false
 1	true	true
@@ -462,17 +460,17 @@ struct<k:int,v:boolean,any(v) OVER (PARTITION BY k ORDER BY v ASC NULLS FIRST RA
 
 
 -- !query 46
-SELECT count(*) FROM test_agg HAVING count(*) > 1L
+SELECT udf(count(*)) FROM test_agg HAVING count(*) > 1L
 -- !query 46 schema
-struct<count(1):bigint>
+struct<CAST(udf(cast(count(1) as string)) AS BIGINT):bigint>
 -- !query 46 output
 10
 
 
 -- !query 47
-SELECT k, max(v) FROM test_agg GROUP BY k HAVING max(v) = true
+SELECT k, udf(max(v)) FROM test_agg GROUP BY k HAVING max(v) = true
 -- !query 47 schema
-struct<k:int,max(v):boolean>
+struct<k:int,CAST(udf(cast(max(v) as string)) AS BOOLEAN):boolean>
 -- !query 47 output
 1	true
 2	true
@@ -480,7 +478,7 @@ struct<k:int,max(v):boolean>
 
 
 -- !query 48
-SELECT * FROM (SELECT COUNT(*) AS cnt FROM test_agg) WHERE cnt > 1L
+SELECT * FROM (SELECT udf(COUNT(*)) AS cnt FROM test_agg) WHERE cnt > 1L
 -- !query 48 schema
 struct<cnt:bigint>
 -- !query 48 output
@@ -488,7 +486,7 @@ struct<cnt:bigint>
 
 
 -- !query 49
-SELECT count(*) FROM test_agg WHERE count(*) > 1L
+SELECT udf(count(*)) FROM test_agg WHERE count(*) > 1L
 -- !query 49 schema
 struct<>
 -- !query 49 output
@@ -500,7 +498,7 @@ Invalid expressions: [count(1)];
 
 
 -- !query 50
-SELECT count(*) FROM test_agg WHERE count(*) + 1L > 1L
+SELECT udf(count(*)) FROM test_agg WHERE count(*) + 1L > 1L
 -- !query 50 schema
 struct<>
 -- !query 50 output
@@ -512,7 +510,7 @@ Invalid expressions: [count(1)];
 
 
 -- !query 51
-SELECT count(*) FROM test_agg WHERE k = 1 or k = 2 or count(*) + 1L > 1L or max(k) > 1
+SELECT udf(count(*)) FROM test_agg WHERE k = 1 or k = 2 or count(*) + 1L > 1L or max(k) > 1
 -- !query 51 schema
 struct<>
 -- !query 51 output

How was this patch tested?

Tested as guided in SPARK-27921.
Verified pandas & pyarrow versions:

Python 3.6.8 (default, Jan 14 2019, 11:02:34) 
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas
>>> import pyarrow
>>> pyarrow.__version__
'0.14.0'
>>> pandas.__version__
'0.24.2'

From the SQL output, the statements appear to be evaluated correctly. Note that the UDF returns a string, which may change results: NULL is returned as None and is counted in the returned values.
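The counting difference can be sketched without Spark. This is a plain-Python stand-in for SQL COUNT semantics; `count_non_null` and `str_udf` are hypothetical helpers, not Spark APIs:

```python
def count_non_null(values):
    # Mimics SQL COUNT(col): NULL (None) values are skipped.
    return sum(1 for v in values if v is not None)

def str_udf(v):
    # Mimics a string-returning test UDF: every input is stringified,
    # so None becomes the countable string "None".
    return str(v)

b = [1, 2, None, 3]
print(count_non_null(b))                        # 3 - the NULL is skipped
print(count_non_null([str_udf(v) for v in b]))  # 4 - "None" is counted
```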

@skonto skonto changed the title [SPARK-28280][SQL][PYTHON][TESTS]. Convert and port 'group-by.sql' into UDF test base [SPARK-28280][SQL][PYTHON][TESTS] Convert and port 'group-by.sql' into UDF test base Jul 10, 2019
@SparkQA

This comment has been minimized.

@skonto
Contributor Author

skonto commented Jul 10, 2019

@HyukjinKwon build/sbt "sql/test-only *SQLQueryTestSuite -- -z udf/udf-group-by.sql" fails, but the tests pass when SPARK_GENERATE_GOLDEN_FILES=1 is set. I checked the test suite code; what is the difference, beyond the configs that are applied when not generating the golden files? Why do things fail there? I have committed the generated file, so it should be used by default, right?
In the docs it says:

 * Each case is loaded from a file in "spark/sql/core/src/test/resources/sql-tests/inputs".
 * Each case has a golden result file in "spark/sql/core/src/test/resources/sql-tests/results".

@skonto
Contributor Author

skonto commented Jul 10, 2019

If I set:

  case class TestScalaUDF(name: String) extends TestUDF {
    private def mapToString(input: Any): Any = {
      val ret = String.valueOf(input)
      if (input == null) {
        input
      } else {
        ret
      }
    }
    private[IntegratedUDFTestUtils] lazy val udf = SparkUserDefinedFunction(
      (input: Any) => mapToString(input),
      StringType,
      inputSchemas = Seq.fill(1)(None),
      name = Some(name))

    def apply(exprs: Column*): Column = udf(exprs: _*)

    val prettyName: String = "Scala UDF"
  }

it validates against the golden file's first query, which means that when the golden file is not being generated, the test suite picks up the "--set" configs and the results change... So there is a bug here in the expected behavior.

@skonto
Contributor Author

skonto commented Jul 10, 2019

OK, I found out what is wrong. The current golden file is the output of the pandas UDF test cases, which run last. Each type of UDF overwrites the file.
Compared to pandas, here is the diff for the Scala UDF:

diff --git a/sql/core/src/test/resources/sql-tests/results/udf/udf-group-by.sql.out b/sql/core/src/test/resources/sql-tests/results/udf/udf-group-by.sql.out
index 97c831aec4..58ed37fd56 100644
--- a/sql/core/src/test/resources/sql-tests/results/udf/udf-group-by.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/udf/udf-group-by.sql.out
@@ -37,7 +37,7 @@ struct<udf(a):string,count(udf(b)):bigint>
 1	2
 2	2
 3	3
-nan	2
+null	2
 
 
 -- !query 4
@@ -101,7 +101,7 @@ struct<udf((a + b)):string,udf(count(b)):string>
 3	2
 4	2
 5	1
-nan	1
+null	1
 
 
 -- !query 11
@@ -121,7 +121,7 @@ struct<udf(((a + 1) + 1)):string,udf(count(b)):string>
 3	2
 4	2
 5	2
-nan	1
+null	1
 
 
 -- !query 13
@@ -130,7 +130,7 @@ FROM testData
 -- !query 13 schema
 struct<skewness(CAST(udf(a) AS DOUBLE)):double,udf(kurtosis(cast(a as double))):string,udf(min(a)):string,max(udf(a)):string,udf(avg(cast(udf(a) as double))):string,udf(var_samp(cast(a as double))):string,stddev_samp(CAST(udf(a) AS DOUBLE)):double,udf(sum(cast(a as bigint))):string,udf(count(a)):string>
 -- !query 13 output
--0.2723801058145729	-1.5069204152249134	1	nan	2.142857142857143	0.8095238095238094	0.8997354108424372	15	7
+-0.2723801058145729	-1.5069204152249134	1	null	2.142857142857143	0.8095238095238094	0.8997354108424372	15	7
 
 
 -- !query 14
@@ -295,7 +295,7 @@ SELECT udf(every(v)), udf(some(v)), any(v) FROM test_agg WHERE 1 = 0
 -- !query 31 schema
 struct<udf(every(v)):string,udf(some(v)):string,any(v):boolean>
 -- !query 31 output
-None	None	NULL
+null	null	NULL
 
 
 -- !query 32
@@ -303,7 +303,7 @@ SELECT udf(every(v)), some(v), any(v) FROM test_agg WHERE k = 4
 -- !query 32 schema
 struct<udf(every(v)):string,some(v):boolean,any(v):boolean>
 -- !query 32 output
-None	NULL	NULL
+null	NULL	NULL
 
 
 -- !query 33
@@ -311,7 +311,7 @@ SELECT every(v), udf(some(v)), any(v) FROM test_agg WHERE k = 5
 -- !query 33 schema
 struct<every(v):boolean,udf(some(v)):string,any(v):boolean>
 -- !query 33 output
-false	True	true
+false	true	true
 
 
 -- !query 34
@@ -319,11 +319,11 @@ SELECT k, every(v), udf(some(v)), any(v) FROM test_agg GROUP BY k
 -- !query 34 schema
 struct<k:int,every(v):boolean,udf(some(v)):string,any(v):boolean>
 -- !query 34 output
-1	false	True	true
-2	true	True	true
-3	false	False	false
-4	NULL	None	NULL
-5	false	True	true
+1	false	true	true
+2	true	true	true
+3	false	false	false
+4	NULL	null	NULL
+5	false	true	true
 
 
 -- !query 35
@@ -356,7 +356,7 @@ GROUP  BY k
 -- !query 37 schema
 struct<k:int,every:string>
 -- !query 37 output
-2	True
+2	true
 
 
 -- !query 38
@@ -432,16 +432,16 @@ SELECT k, udf(udf(v)), some(v) OVER (PARTITION BY k ORDER BY v) FROM test_agg
 -- !query 44 schema
 struct<k:int,udf(udf(v)):string,some(v) OVER (PARTITION BY k ORDER BY v ASC NULLS FIRST RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW):boolean>
 -- !query 44 output
-1	False	false
-1	True	true
-2	True	true
-3	False	false
-3	None	NULL
-4	None	NULL
-4	None	NULL
-5	False	false
-5	None	NULL
-5	True	true
+1	false	false
+1	true	true
+2	true	true
+3	false	false
+3	null	NULL
+4	null	NULL
+4	null	NULL
+5	false	false
+5	null	NULL
+5	true	true
 
 
 -- !query 45
@@ -474,9 +474,9 @@ SELECT k, udf(max(udf(v))) FROM test_agg GROUP BY k HAVING max(v) = true
 -- !query 47 schema
 struct<k:int,udf(max(udf(v))):string>
 -- !query 47 output
-1	True
-2	True
-5	True
+1	true
+2	true
+5	true
 
 
 -- !query 48

and here is the diff with the Python UDF:

diff --git a/sql/core/src/test/resources/sql-tests/results/udf/udf-group-by.sql.out b/sql/core/src/test/resources/sql-tests/results/udf/udf-group-by.sql.out
index 97c831aec4..487fbc86f7 100644
--- a/sql/core/src/test/resources/sql-tests/results/udf/udf-group-by.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/udf/udf-group-by.sql.out
@@ -37,7 +37,7 @@ struct<udf(a):string,count(udf(b)):bigint>
 1	2
 2	2
 3	3
-nan	2
+None	2
 
 
 -- !query 4
@@ -101,7 +101,7 @@ struct<udf((a + b)):string,udf(count(b)):string>
 3	2
 4	2
 5	1
-nan	1
+None	1
 
 
 -- !query 11
@@ -121,7 +121,7 @@ struct<udf(((a + 1) + 1)):string,udf(count(b)):string>
 3	2
 4	2
 5	2
-nan	1
+None	1
 
 
 -- !query 13
@@ -130,7 +130,7 @@ FROM testData
 -- !query 13 schema
 struct<skewness(CAST(udf(a) AS DOUBLE)):double,udf(kurtosis(cast(a as double))):string,udf(min(a)):string,max(udf(a)):string,udf(avg(cast(udf(a) as double))):string,udf(var_samp(cast(a as double))):string,stddev_samp(CAST(udf(a) AS DOUBLE)):double,udf(sum(cast(a as bigint))):string,udf(count(a)):string>
 -- !query 13 output
--0.2723801058145729	-1.5069204152249134	1	nan	2.142857142857143	0.8095238095238094	0.8997354108424372	15	7
+-0.2723801058145729	-1.5069204152249134	1	None	2.142857142857143	0.8095238095238094	0.8997354108424372	15	7
 
 
 -- !query 14
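
The three null/boolean renderings in the diffs above (null vs None vs nan, true vs True) come straight from each language's default string conversion; a minimal illustration in plain Python, without Spark:

```python
# Python UDF path: str() of a missing value yields the string "None".
print(str(None))               # None

# pandas UDF path: missing numeric values surface as float NaN,
# which stringifies to "nan".
print(str(float("nan")))       # nan

# Python booleans stringify capitalized ("True"/"False"),
# while Scala/SQL render "true"/"false".
print(str(True), str(False))   # True False
```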

@skonto
Contributor Author

skonto commented Jul 10, 2019

@HyukjinKwon why does the UDF test suite assume that, for example, the str methods for Python and Scala should return the same value? Should I just make the Scala UDF's toString logic compatible and avoid applying the UDF to columns with nulls?

@HyukjinKwon
Member

Yes, there is a difference there. I noted it here - #25069 (comment)

Making it compatible might be one solution, because we're not testing Scala's toString vs Python's str conversion. Optionally, I was also thinking about adding other UDFs of other types that just pass values through as they are. I am not sure yet which way is better.

Another way is to explicitly wrap it with CAST, if it's possible to work around it that way. I used this approach for now, but if that's impossible, yes, we should fix it.

@HyukjinKwon
Member

As a workaround, maybe we can also wrap the udf(...) with, for instance, upper - upper(udf(...)) for now.

Member

@HyukjinKwon HyukjinKwon left a comment

Thanks for taking this tricky one and keeping focused on each diff.

@skonto
Contributor Author

skonto commented Jul 12, 2019

@HyukjinKwon ready for review. I used CASTs, which at the end of the day work for all UDFs.
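The CAST workaround can be sketched in plain Python. The `udf` and `cast_int` helpers below are hypothetical stand-ins for the string-returning test UDF and SQL `CAST(... AS INT)`:

```python
def udf(v):
    # String-returning test UDF; NULL stays NULL.
    return None if v is None else str(v)

def cast_int(s):
    # CAST(... AS INT): NULL-preserving conversion back to int.
    return None if s is None else int(s)

vals = [3, None, 7]
# Round-tripping through the UDF plus a CAST restores the original
# values and types, so aggregates over the result behave as without the UDF.
print([cast_int(udf(v)) for v in vals])   # [3, None, 7]
```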

@SparkQA

This comment has been minimized.

@HyukjinKwon
Member

Yeah... I actually tried to avoid those workarounds entirely... sorry for the back and forth. I made this PR: #25130

@skonto
Contributor Author

skonto commented Jul 12, 2019

@HyukjinKwon once that PR is merged, I will refactor this one.

@HyukjinKwon
Member

@skonto, #25130 is merged. Can you sync this PR to master and rebase please?

@HyukjinKwon
Member

Looks fine in general but let's focus on testing GROUP BY clause with UDFs.

@SparkQA

This comment has been minimized.

@skonto
Contributor Author

skonto commented Jul 18, 2019

@HyukjinKwon sure, I'm on it.

@skonto
Contributor Author

skonto commented Jul 18, 2019

jenkins test this please

@HyukjinKwon
Member

@skonto, do you know why we have such diff?

 -- !query 2
-SELECT COUNT(udf(a)), udf(COUNT(b)) FROM testData
+SELECT COUNT(a), COUNT(b) FROM testData
 -- !query 2 schema
-struct<count(udf(a)):bigint,udf(count(b)):string>
+struct<count(a):bigint,count(b):bigint>
 -- !query 2 output
-9	7
+7	7
 -- !query 3
-SELECT CAST(udf(a) as int), COUNT(udf(b)) FROM testData GROUP BY a
+SELECT a, COUNT(b) FROM testData GROUP BY a
 -- !query 3 schema
-struct<CAST(udf(a) AS INT):int,count(udf(b)):bigint>
+struct<a:int,count(b):bigint>
 -- !query 3 output
 1	2
 2	2
-3	3
-NULL	2
+3	2
+NULL	1
 -- !query 5
-SELECT COUNT(udf(a)), COUNT(udf(b)) FROM testData GROUP BY udf(a)
+SELECT COUNT(a), COUNT(b) FROM testData GROUP BY a
 -- !query 5 schema
-struct<count(udf(a)):bigint,count(udf(b)):bigint>
+struct<count(a):bigint,count(b):bigint>
 -- !query 5 output
+0	1
 2	2
 2	2
-2	2
-3	3
+3	2
 -- !query 6
-SELECT 'foo', COUNT(udf(a)) FROM testData GROUP BY 1
+SELECT 'foo', COUNT(a) FROM testData GROUP BY 1
 -- !query 6 schema
-struct<foo:string,count(udf(a)):bigint>
+struct<foo:string,count(a):bigint>
 -- !query 6 output
-foo	9
+foo	7
 -- !query 13
-SELECT SKEWNESS(udf(a)), udf(KURTOSIS(a)), udf(MIN(a)), CAST(MAX(udf(a)) as int), udf(AVG(udf(a))), udf(VARIANCE(a)), STDDEV(udf(a)), udf(SUM(a)), udf(COUNT(a))
+SELECT SKEWNESS(a), KURTOSIS(a), MIN(a), MAX(a), AVG(a), VARIANCE(a), STDDEV(a), SUM(a), COUNT(a)
 FROM testData
 -- !query 13 schema
-struct<skewness(CAST(udf(a) AS DOUBLE)):double,udf(kurtosis(cast(a as double))):string,udf(min(a)):string,CAST(max(udf(a)) AS INT):int,udf(avg(cast(udf(a) as double))):string,udf(var_samp(cast(a as double))):string,stddev_samp(CAST(udf(a) AS DOUBLE)):double,udf(sum(cast(a as bigint))):string,udf(count(a)):string>
+struct<skewness(CAST(a AS DOUBLE)):double,kurtosis(CAST(a AS DOUBLE)):double,min(a):int,max(a):int,avg(a):double,var_samp(CAST(a AS DOUBLE)):double,stddev_samp(CAST(a AS DOUBLE)):double,sum(a):bigint,count(a):bigint>
 -- !query 13 output
--0.2723801058145729	-1.5069204152249134	1	NULL	2.142857142857143	0.8095238095238094	0.8997354108424372	15	7
+-0.2723801058145729	-1.5069204152249134	1	3	2.142857142857143	0.8095238095238094	0.8997354108424372	15	7
 -- !query 15
-SELECT a AS k, COUNT(udf(b)) FROM testData GROUP BY k
+SELECT a AS k, COUNT(b) FROM testData GROUP BY k
 -- !query 15 schema
-struct<k:int,count(udf(b)):bigint>
+struct<k:int,count(b):bigint>
 -- !query 15 output
 1	2
 2	2
-3	3
-NULL	2
+3	2
+NULL	1

@skonto
Contributor Author

skonto commented Jul 18, 2019

@HyukjinKwon I will update the diff, because this one is from before my update... However, what I noticed before your PR is that UDF-created strings changed the count results, e.g. NULL strings are counted as separate results and are not skipped.

@skonto skonto force-pushed the group-by.sql branch 2 times, most recently from 05c1b9e to 6a5da53 Compare July 18, 2019 13:02
@skonto
Contributor Author

skonto commented Jul 18, 2019

@HyukjinKwon I updated the diff. The issue I see is:

 -- !query 12
-SELECT a + 1 + 1, COUNT(b) FROM testData GROUP BY a + 1
+SELECT udf(a + 1 + 1), udf(COUNT(b)) FROM testData GROUP BY udf(a + 1)
 -- !query 12 schema
-struct<((a + 1) + 1):int,count(b):bigint>
+struct<>
 -- !query 12 output
-3	2
-4	2
-5	2
-NULL	1
+org.apache.spark.sql.AnalysisException
+expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;

which is expected when you add a udf in the GROUP BY part this way, right (comparing with the other tests)?
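The rule behind that AnalysisException can be modeled crudely. This is a toy textual check, not the real analyzer; `resolvable` is a hypothetical helper:

```python
def resolvable(expr: str, grouping: list) -> bool:
    # Toy model: a non-aggregate SELECT expression resolves only if every
    # reference to the column `a` goes through a GROUP BY expression.
    for g in grouping:
        expr = expr.replace(g, "<g>")
    return "a" not in expr

grouping = ["udf(a + 1)"]
print(resolvable("udf(a + 1) + 1", grouping))   # True  - built from the key
print(resolvable("udf(a + 1 + 1)", grouping))   # False - a raw `a` leaks out
```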

@SparkQA

SparkQA commented Jul 18, 2019

Test build #107845 has finished for PR 25098 at commit 4029048.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

Member

Following original group-by.sql, I think this should be SELECT udf(a + 1) + 1, udf(COUNT(b)) FROM testData GROUP BY udf(a + 1)?

Contributor Author

@skonto skonto Jul 18, 2019

Yeah it could be, both should work though.

Member

Since this query is grouping by udf(a + 1), udf(a + 1 + 1) is an expression the analyzer will complain about (with an AnalysisException).
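The rule behind that error can be sketched with a toy expression check. This is an illustration only, not Spark's analyzer: a grouped query is valid only when every SELECT expression is built from grouping expressions, aggregates, or literals. Expressions here are nested tuples of the form ("op", child, ...).

```python
# Toy resolvability check for SELECT expressions under GROUP BY.
# An expression is resolvable if it matches a grouping expression
# exactly, or is built entirely from resolvable children.
def resolvable(expr, grouping):
    if expr in grouping:
        return True
    if isinstance(expr, tuple):          # ("op", child1, child2, ...)
        return all(resolvable(c, grouping) for c in expr[1:])
    return not isinstance(expr, str)     # bare column refs fail; literals pass

a = "a"
group = {("udf", ("+", a, 1))}           # GROUP BY udf(a + 1)

# udf(a + 1) + 1 is built on top of the grouping expression: OK
ok = resolvable(("+", ("udf", ("+", a, 1)), 1), group)

# udf(a + 1 + 1) reaches the bare column `a` outside any grouping
# expression: analysis error
bad = resolvable(("udf", ("+", ("+", a, 1), 1)), group)

print(ok)   # True
print(bad)  # False
```

Structurally, this is why udf(a + 1) + 1 is accepted while udf(a + 1 + 1) raises the AnalysisException quoted above.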

Contributor Author

Yes, you are right: it's based on the grouping value. Will change it.

Contributor Author

@skonto skonto Jul 18, 2019

@viirya @HyukjinKwon If I do that, the Scala UDF will generate:

 SELECT udf(a + 1) + 1, udf(COUNT(b)) FROM testData GROUP BY udf(a + 1)
 -- !query 12 schema
-struct<>
+struct<(CAST(udf(cast((a + 1) as string)) AS INT) + 1):int,CAST(udf(cast(count(b) as string)) AS BIGINT):bigint>
 -- !query 12 output
-org.apache.spark.sql.AnalysisException
-expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;
+3      2
+4      2
+5      2
+NULL   1

but then the Python and Pandas UDFs will fail with that exception and overwrite the generated file. So the next time I run without SPARK_GENERATE_GOLDEN_FILES=1, the Scala tests will fail. This does not look good at first glance. I will try testing it outside the test suite in the Spark shell.

Member

Here's one example where I met such a case.

Contributor Author

@skonto skonto Jul 19, 2019

Cool, I will create the JIRA, thanks, and will update the comment to point to it.

Member

Contributor Author

@HyukjinKwon Jira, also added a pointer to it in the comment.

Member

Submitted #25215 to fix it.

@SparkQA

SparkQA commented Jul 18, 2019

Test build #107846 has finished for PR 25098 at commit 05c1b9e.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Jul 18, 2019

Test build #107854 has finished for PR 25098 at commit 6a5da53.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

Member

Don't we want to comment out this query until the issue is fixed?

Member

And, let's change it to

SELECT udf(a + 1), udf(COUNT(b)) FROM testData GROUP BY udf(a + 1);

as @viirya pointed out.

Member

If the problem is that the git diff --no-index output isn't pretty:

  1. run without commenting this
  2. save the diff by git diff --no-index
  3. manually remove the diff related to this query
  4. update PR description
  5. rerun the test after commenting this query back
  6. push it to this PR.
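Step 2 above relies on git diff --no-index, which compares two arbitrary files without either being tracked in a repository. A minimal sketch of that step, with illustrative file names and contents standing in for the golden-file outputs:

```shell
# Create two stand-in golden files that differ in one row.
printf '3\t2\n4\t2\n' > expected.out
printf '3\t2\n5\t2\n' > actual.out

# git diff --no-index works on plain files; it exits non-zero when the
# files differ, so tolerate that here.
git diff --no-index expected.out actual.out > pr-diff.txt || true

cat pr-diff.txt
```

The saved pr-diff.txt can then be trimmed by hand (step 3) before pasting into the PR description.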

Contributor Author

OK, I will do the above.

@SparkQA

SparkQA commented Jul 19, 2019

Test build #107902 has finished for PR 25098 at commit d63717e.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Jul 19, 2019

Test build #107904 has finished for PR 25098 at commit 67c381e.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@HyukjinKwon
Member

I will merge this one first if @skonto comments out the test here. We can re-enable it at #25215.

@HyukjinKwon
Member

@skonto, can you address the comment at #25098 (comment)? Then I will get this in.

@skonto
Contributor Author

skonto commented Jul 22, 2019

@HyukjinKwon done.

@SparkQA

SparkQA commented Jul 22, 2019

Test build #108006 has finished for PR 25098 at commit 9dc5aa1.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@HyukjinKwon
Member

Merged to master.
