forked from apache/spark
merge #4
Merged
Conversation
## What changes were proposed in this pull request?
Today all registered metric sources are reported to GraphiteSink with no filtering mechanism, although the codahale project does support it. GraphiteReporter (ScheduledReporter) from the codahale project requires you to implement and supply the MetricFilter interface (by default the codahale project ships only a single implementation, MetricFilter.ALL). This PR proposes adding an additional `regex` config to match and filter the metrics reported to GraphiteSink.
## How was this patch tested?
Included a GraphiteSinkSuite that tests:
1. Absence of the regex filter (existing default behavior is maintained)
2. Presence of `regex=<regexexpr>` correctly filters metric keys
Closes #25232 from nkarpov/graphite_regex.
Authored-by: Nick Karpov <[email protected]>
Signed-off-by: jerryshao <[email protected]>
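A minimal sketch, assuming the Dropwizard (codahale) metrics library, of how a regex-based `MetricFilter` could be built and handed to a reporter; the `RegexMetricFilter` name and the `find()` matching semantics are illustrative, not the Spark implementation.
```scala
import java.util.regex.Pattern
import com.codahale.metrics.{Metric, MetricFilter}

object RegexMetricFilter {
  // `regex` stands in for the value of the proposed `regex=<regexexpr>` sink option.
  def apply(regex: String): MetricFilter = {
    val pattern = Pattern.compile(regex)
    new MetricFilter {
      // Keep only metrics whose name matches the configured pattern.
      override def matches(name: String, metric: Metric): Boolean =
        pattern.matcher(name).find()
    }
  }
}
```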
…roup by and aggregate expression ## What changes were proposed in this pull request? When PythonUDF is used in group by, and it is also in aggregate expression, like ``` SELECT pyUDF(a + 1), COUNT(b) FROM testData GROUP BY pyUDF(a + 1) ``` It causes analysis exception in `CheckAnalysis`, like ``` org.apache.spark.sql.AnalysisException: expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. ``` First, `CheckAnalysis` can't check semantic equality between PythonUDFs. Second, even we make it possible, runtime exception will be thrown ``` org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree: pythonUDF1#8615 ... Cause: java.lang.RuntimeException: Couldn't find pythonUDF1#8615 in [cast(pythonUDF0#8614 as int)#8617,count(b#8599)#8607L] ``` The cause is, `ExtractPythonUDFs` extracts both PythonUDFs in group by and aggregate expression. The PythonUDFs are two different aliases now in the logical aggregate. In runtime, we can't bind the resulting expression in aggregate to its grouping and aggregate attributes. This patch proposes a rule `ExtractGroupingPythonUDFFromAggregate` to extract PythonUDFs in group by and evaluate them before aggregate. We replace the group by PythonUDF in aggregate expression with aliased result. The query plan of query `SELECT pyUDF(a + 1), pyUDF(COUNT(b)) FROM testData GROUP BY pyUDF(a + 1)`, like ``` == Optimized Logical Plan == Project [CAST(pyUDF(cast((a + 1) as string)) AS INT)#8608, cast(pythonUDF0#8616 as bigint) AS CAST(pyUDF(cast(count(b) as string)) AS BIGINT)#8610L] +- BatchEvalPython [pyUDF(cast(agg#8613L as string))], [pythonUDF0#8616] +- Aggregate [cast(groupingPythonUDF#8614 as int)], [cast(groupingPythonUDF#8614 as int) AS CAST(pyUDF(cast((a + 1) as string)) AS INT)#8608, count(b#8599) AS agg#8613L] +- Project [pythonUDF0#8615 AS groupingPythonUDF#8614, b#8599] +- BatchEvalPython [pyUDF(cast((a#8598 + 1) as string))], [pythonUDF0#8615] +- LocalRelation [a#8598, b#8599] == Physical Plan == *(3) Project [CAST(pyUDF(cast((a + 1) as string)) AS INT)#8608, cast(pythonUDF0#8616 as bigint) AS CAST(pyUDF(cast(count(b) as string)) AS BIGINT)#8610L] +- BatchEvalPython [pyUDF(cast(agg#8613L as string))], [pythonUDF0#8616] +- *(2) HashAggregate(keys=[cast(groupingPythonUDF#8614 as int)#8617], functions=[count(b#8599)], output=[CAST(pyUDF(cast((a + 1) as string)) AS INT)#8608, agg#8613L]) +- Exchange hashpartitioning(cast(groupingPythonUDF#8614 as int)#8617, 5), true +- *(1) HashAggregate(keys=[cast(groupingPythonUDF#8614 as int) AS cast(groupingPythonUDF#8614 as int)#8617], functions=[partial_count(b#8599)], output=[cast(groupingPythonUDF#8614 as int)#8617, count#8619L]) +- *(1) Project [pythonUDF0#8615 AS groupingPythonUDF#8614, b#8599] +- BatchEvalPython [pyUDF(cast((a#8598 + 1) as string))], [pythonUDF0#8615] +- LocalTableScan [a#8598, b#8599] ``` ## How was this patch tested? Added tests. Closes #25215 from viirya/SPARK-28445. Authored-by: Liang-Chi Hsieh <[email protected]> Signed-off-by: HyukjinKwon <[email protected]>
… which fails on Python 3.7
## What changes were proposed in this pull request?
Fix the flaky test DaemonTests.do_termination_test, which fails on Python 3.7. I added a sleep after the test connection to the daemon.
## How was this patch tested?
Run the test
```
python/run-tests --python-executables=python3.7 --testname "pyspark.tests.test_daemon DaemonTests"
```
**Before** The "test_termination_sigterm" test fails and the daemon process does not exit.
**After** The test passes.
Closes #25315 from WeichenXu123/fix_py37_daemon.
Authored-by: WeichenXu <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
## What changes were proposed in this pull request?
Avoid the `.ml.vector => .breeze.vector` conversion in `MaxAbsScaler`, and reuse the transformation method from `StandardScalerModel`, which handles dense and sparse vectors separately.
## How was this patch tested?
Existing suites.
Closes #25311 from zhengruifeng/maxabs_opt.
Authored-by: zhengruifeng <[email protected]>
Signed-off-by: Sean Owen <[email protected]>
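A minimal sketch, not the actual `MaxAbsScalerModel` code, of handling dense and sparse vectors separately when scaling by per-feature max-abs values; the helper name and the zero-handling are assumptions.
```scala
import org.apache.spark.ml.linalg.{DenseVector, SparseVector, Vector, Vectors}

def scaleByMaxAbs(v: Vector, maxAbs: Array[Double]): Vector = v match {
  case d: DenseVector =>
    // Dense: rescale every slot.
    Vectors.dense(d.values.zipWithIndex.map { case (x, i) =>
      if (maxAbs(i) != 0.0) x / maxAbs(i) else 0.0
    })
  case s: SparseVector =>
    // Sparse: only touch the explicitly stored entries, preserving sparsity.
    val scaled = s.indices.zip(s.values).map { case (i, x) =>
      if (maxAbs(i) != 0.0) x / maxAbs(i) else 0.0
    }
    Vectors.sparse(s.size, s.indices, scaled)
}
```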
## What changes were proposed in this pull request? This PR implements Spark's own GetFunctionsOperation which mitigates the differences between Spark SQL and Hive UDFs. But our implementation is different from Hive's implementation: - Our implementation always returns results. Hive only returns results when [(null == catalogName || "".equals(catalogName)) && (null == schemaName || "".equals(schemaName))](https://github.com/apache/hive/blob/rel/release-3.1.1/service/src/java/org/apache/hive/service/cli/operation/GetFunctionsOperation.java#L101-L119). - Our implementation pads the `REMARKS` field with the function usage - Hive returns an empty string. - Our implementation does not support `FUNCTION_TYPE`, but Hive does. ## How was this patch tested? unit tests Closes #25252 from wangyum/SPARK-28510. Authored-by: Yuming Wang <[email protected]> Signed-off-by: gatorsmile <[email protected]>
…tation
## What changes were proposed in this pull request?
Update HashingTF to use the new implementation of MurmurHash3, and make HashingTF fall back to the old MurmurHash3 when a model saved before 3.0 is loaded.
## How was this patch tested?
Changed existing unit tests. Also added one unit test to make sure HashingTF uses the old MurmurHash3 when a pre-3.0 model is loaded.
Closes #25303 from huaxingao/spark-23469.
Authored-by: Huaxin Gao <[email protected]>
Signed-off-by: Sean Owen <[email protected]>
## What changes were proposed in this pull request?
Imputer currently requires the input column to be Double or Float, but the logic should work on any numeric data type. Many practical problems have integer data, and it gets very tedious to manually cast the columns to Double before calling Imputer. This transformer could be extended to handle all numeric types.
## How was this patch tested?
New test.
Closes #17864 from actuaryzhang/imputer.
Lead-authored-by: actuaryzhang <[email protected]>
Co-authored-by: Wayne Zhang <[email protected]>
Signed-off-by: Sean Owen <[email protected]>
…ependence
## What changes were proposed in this pull request?
See the discussion on the JIRA (and the dev list). At heart, we find that math.log and math.pow can actually return slightly different results across platforms because of hardware optimizations. For the actual SQL log and pow functions, I propose that we should use StrictMath instead to ensure the answers are always the same. (This should have the benefit of helping tests pass on aarch64.) Further, the atanh function (which is not part of java.lang.Math) can be implemented in a slightly different and more accurate way.
## How was this patch tested?
Existing tests (which will need to be changed). Some manual testing locally to understand the numeric issues.
Closes #25279 from srowen/SPARK-28519.
Authored-by: Sean Owen <[email protected]>
Signed-off-by: Sean Owen <[email protected]>
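For illustration (an assumption about the approach, not the patch itself): `StrictMath` trades a little speed for bit-for-bit reproducible results, and `atanh` can be written in terms of `log1p` for better accuracy near zero.
```scala
// StrictMath is reproducible across platforms; math.pow may use hardware intrinsics
// whose last bits can differ between architectures (e.g. x86 vs aarch64).
val strict = StrictMath.pow(2.1, 7.3)
val fast   = math.pow(2.1, 7.3)

// A common, more accurate formulation of atanh; the exact expression used by the
// patch may differ.
def atanh(x: Double): Double = 0.5 * (math.log1p(x) - math.log1p(-x))
```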
## What changes were proposed in this pull request?
`minPartitions` has been used as a hint, and the relevant method (KafkaOffsetRangeCalculator.getRanges) doesn't guarantee that the number of partitions will be equal to or greater than the given value.
https://github.com/apache/spark/blob/d67b98ea016e9b714bef68feaac108edd08159c9/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetRangeCalculator.scala#L32-L46
This patch makes it clear that the configuration is a hint and that the actual number of partitions could be fewer or more.
## How was this patch tested?
Just a documentation change.
Closes #25332 from HeartSaVioR/MINOR-correct-kafka-structured-streaming-doc-minpartition.
Authored-by: Jungtaek Lim (HeartSaVioR) <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
…ion_test which fail on Python 3.7" This reverts commit fbeee0c.
…) batches "Subquery" and "Join Reorder" ## What changes were proposed in this pull request? Explained why "Subquery" and "Join Reorder" optimization batches should be `FixedPoint(1)`, which was introduced in SPARK-28532 and SPARK-28530. ## How was this patch tested? Existing UTs. Closes #25320 from yeshengm/SPARK-28530-followup. Lead-authored-by: Xiao Li <[email protected]> Co-authored-by: Yesheng Ma <[email protected]> Signed-off-by: gatorsmile <[email protected]>
## What changes were proposed in this pull request?
Add the configuration spark.scheduler.listenerbus.eventqueue.${name}.capacity to allow configuring a different capacity for each event queue.
## How was this patch tested?
Unit test in core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala
Closes #25307 from yunzoud/SPARK-28574.
Authored-by: yunzoud <[email protected]>
Signed-off-by: Shixiong Zhu <[email protected]>
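A hedged usage sketch: with this change, a busy queue (here the built-in "appStatus" queue, chosen as an example) can be sized independently of the global default.
```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  // Global default for all listener-bus event queues.
  .set("spark.scheduler.listenerbus.eventqueue.capacity", "10000")
  // Per-queue override introduced by this change.
  .set("spark.scheduler.listenerbus.eventqueue.appStatus.capacity", "30000")
```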
## What changes were proposed in this pull request? CRAN repo changed the key and it causes our release script failure. This is a release blocker for Apache Spark 2.4.4 and 3.0.0. - https://cran.r-project.org/bin/linux/ubuntu/README.html ``` Err:1 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/ InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 51716619E084DAB9 ... W: GPG error: https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/ InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 51716619E084DAB9 E: The repository 'https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/ InRelease' is not signed. ``` Note that they are reusing `cran35` for R 3.6 although they changed the key. ``` Even though R has moved to version 3.6, for compatibility the sources.list entry still uses the cran3.5 designation. ``` This PR aims to recover the docker image generation first. We will verify the R doc generation in a separate JIRA and PR. ## How was this patch tested? Manual. After `docker-build.log`, it should continue to the next stage, `Building v3.0.0-rc1`. ``` $ dev/create-release/do-release-docker.sh -d /tmp/spark-3.0.0 -n -s docs ... Log file: docker-build.log Building v3.0.0-rc1; output will be at /tmp/spark-3.0.0/output ``` Closes #25339 from dongjoon-hyun/SPARK-28606. Authored-by: Dongjoon Hyun <[email protected]> Signed-off-by: DB Tsai <[email protected]>
…which fails on Python 3.7
## What changes were proposed in this pull request?
This PR brings #25315 back after removing the `Popen.wait` usage, which exists in Python 3 only. I misread the last test results and thought they had passed.
Fix the flaky test DaemonTests.do_termination_test, which fails on Python 3.7. I added a sleep after the test connection to the daemon.
## How was this patch tested?
Run the test
```
python/run-tests --python-executables=python3.7 --testname "pyspark.tests.test_daemon DaemonTests"
```
**Before** The "test_termination_sigterm" test fails and the daemon process does not exit.
**After** The test passes.
Closes #25343 from HyukjinKwon/SPARK-28582.
Authored-by: WeichenXu <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
## What changes were proposed in this pull request? This PR aims to fix the broken styles/links and make the doc up-to-date for Apache Spark 2.4.4 and 3.0.0 release. - `building-spark.md`  - `configuration.md`  - `sql-pyspark-pandas-with-arrow.md`  - `streaming-programming-guide.md`  - `structured-streaming-programming-guide.md` (1/2)  - `structured-streaming-programming-guide.md` (2/2)  - `submitting-applications.md`  ## How was this patch tested? Manual. Build the doc. ``` SKIP_API=1 jekyll build ``` Closes #25345 from dongjoon-hyun/SPARK-28609. Authored-by: Dongjoon Hyun <[email protected]> Signed-off-by: Dongjoon Hyun <[email protected]>
…-1 for accuracy ## What changes were proposed in this pull request? Use `log1p(x)` over `log(1+x)` and `expm1(x)` over `exp(x)-1` for accuracy, where possible. This should improve accuracy a tiny bit in ML-related calculations, and shouldn't hurt in any event. ## How was this patch tested? Existing tests. Closes #25337 from srowen/SPARK-28604. Authored-by: Sean Owen <[email protected]> Signed-off-by: Sean Owen <[email protected]>
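A small illustration of why this matters numerically (the values in the comments are standard double-precision behavior, not outputs taken from Spark):
```scala
val x = 1e-16
math.log(1 + x)   // 0.0, because 1 + 1e-16 rounds to exactly 1.0 in doubles
math.log1p(x)     // ~1.0e-16, the small argument is preserved
math.exp(x) - 1   // 0.0 for the same rounding reason
math.expm1(x)     // ~1.0e-16
```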
…den result file
## What changes were proposed in this pull request?
It's hard to know whether a query needs to be sorted, as in [`SQLQueryTestSuite.isSorted`](https://github.com/apache/spark/blob/2ecc39c8d346437811fa991dc6471dc6862eb1f2/sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala#L375-L380), when building a test framework for the Thrift server, so we sort both the `outputs` and the `expectedOutputs`. However, we removed leading whitespace in the golden result file, which can lead to inconsistent results. This PR stops removing leading whitespace in the golden result file; trailing whitespace still needs to be removed.
## How was this patch tested?
N/A
Closes #25351 from wangyum/SPARK-28614.
Authored-by: Yuming Wang <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
…d strip trailing dots
## What changes were proposed in this pull request?
This PR aims to improve the `merge-spark-pr` script in the following two ways.
1. `[WIP]` is useful for showing that a PR is not ready to merge. Apache Spark allows merging `WIP` PRs; however, sometimes we accidentally forget to clean up the title of a completed PR. We had better warn once more during the merge stage and get a confirmation from the committer.
2. We have two kinds of PR titles in terms of the ending period. This PR aims to remove the trailing dot, since shorter is better in a commit title. Also, PR titles without a trailing dot are dominant in the Apache Spark commit logs; a small sketch of this normalization follows the counts below.
```
$ git log --oneline | grep '[.]$' | wc -l
4090
$ git log --oneline | grep '[^.]$' | wc -l
20747
```
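For illustration only (the actual `merge_spark_pr.py` script is written in Python), the trailing-dot normalization in item 2 amounts to something like this:
```scala
// Hypothetical helper, not the script's code: strip any trailing dots from a PR title.
def stripTrailingDots(title: String): String = title.replaceAll("\\.+$", "")

stripTrailingDots("[SPARK-28570][CORE][SHUFFLE] Make UnsafeShuffleWriter use the new API.")
// => "[SPARK-28570][CORE][SHUFFLE] Make UnsafeShuffleWriter use the new API"
```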
## How was this patch tested?
Manual.
```
$ dev/merge_spark_pr.py
git rev-parse --abbrev-ref HEAD
Which pull request would you like to merge? (e.g. 34): 25157
The PR title has `[WIP]`:
[WIP][SPARK-28396][SQL] Add PathCatalog for data source V2
Continue? (y/n):
```
```
$ dev/merge_spark_pr.py
git rev-parse --abbrev-ref HEAD
Which pull request would you like to merge? (e.g. 34): 25304
I've re-written the title as follows to match the standard format:
Original: [SPARK-28570][CORE][SHUFFLE] Make UnsafeShuffleWriter use the new API.
Modified: [SPARK-28570][CORE][SHUFFLE] Make UnsafeShuffleWriter use the new API
Would you like to use the modified title? (y/n):
```
Closes #25356 from dongjoon-hyun/SPARK-28616.
Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
## What changes were proposed in this pull request?
This adds an interface for catalog plugins that exposes namespace operations:
* `listNamespaces`
* `namespaceExists`
* `loadNamespaceMetadata`
* `createNamespace`
* `alterNamespace`
* `dropNamespace`
## How was this patch tested?
API only. Existing tests for regressions.
Closes #24560 from rdblue/SPARK-27661-add-catalog-namespace-api.
Authored-by: Ryan Blue <[email protected]>
Signed-off-by: Burak Yavuz <[email protected]>
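A hedged Scala sketch of the shape of such an interface; the method names come from the list above, but the argument and return types are assumptions for illustration, not the actual catalog API.
```scala
import java.util

// Hypothetical trait mirroring the namespace operations listed above.
trait NamespaceOperations {
  def listNamespaces(): Array[Array[String]]
  def namespaceExists(namespace: Array[String]): Boolean
  def loadNamespaceMetadata(namespace: Array[String]): util.Map[String, String]
  def createNamespace(namespace: Array[String], metadata: util.Map[String, String]): Unit
  def alterNamespace(namespace: Array[String], metadata: util.Map[String, String]): Unit
  def dropNamespace(namespace: Array[String]): Boolean
}
```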
…re close to its usage
## What changes were proposed in this pull request?
After SPARK-27677, the shuffle client not only handles shuffle blocks but is also responsible for locally persisted RDD blocks. For better code scalability and more precise semantics (as the [discussion](#24892 (comment)) shows), we made several changes:
- Rename ShuffleClient to BlockStoreClient.
- Correspondingly rename ExternalShuffleClient to ExternalBlockStoreClient, and change the server-side class from ExternalShuffleBlockHandler to ExternalBlockHandler.
- Move MesosExternalBlockStoreClient to the Mesos package.
Note that we keep the name BlockTransferService, because the `Service` contains both client and server, and the name BlockTransferService does not refer only to the shuffle client.
## How was this patch tested?
Existing UT.
Closes #25327 from xuanyuanking/SPARK-28593.
Lead-authored-by: Yuanjian Li <[email protected]>
Co-authored-by: Yuanjian Li <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
## What changes were proposed in this pull request? - DataFrameWriter.insertInto should match column names by position. - Clean up test cases. ## How was this patch tested? New tests: - insertInto: append by position - insertInto: overwrite partitioned table in static mode by position - insertInto: overwrite partitioned table in dynamic mode by position Closes #25353 from jzhuge/SPARK-28178-bypos. Authored-by: John Zhuge <[email protected]> Signed-off-by: Wenchen Fan <[email protected]>
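An illustration of the by-position semantics (the table and column names here are hypothetical):
```scala
// insertInto matches the existing table's columns by position, not by name.
spark.sql("CREATE TABLE target (a INT, b STRING) USING parquet")

import spark.implicits._
Seq((1, "x")).toDF("c1", "c2")   // column names differ from the target table
  .write
  .insertInto("target")          // c1 -> a, c2 -> b, purely by position
```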
…dcastBlock to avoid delete by GC
## What changes were proposed in this pull request?
Currently, PythonBroadcast may delete its data file while a Python worker still needs it. This happens because PythonBroadcast overrides the `finalize()` method to delete its data file; when GC happens and there are no more references to the broadcast variable, it may trigger `finalize()` and delete the data file. This also means the data under a Python broadcast variable cannot be deleted when `unpersist()`/`destroy()` is called, but instead relies on GC.
In this PR, we removed the `finalize()` method and map the PythonBroadcast data file to a BroadcastBlock (which has the same broadcast id as the broadcast variable that wrapped this PythonBroadcast) when PythonBroadcast is deserialized. As a result, the data file can be deleted just like other pieces of the broadcast variable when `unpersist()`/`destroy()` is called, and no longer relies on GC.
## How was this patch tested?
Added a Python test, and tested manually (verified creating/deleting the broadcast block).
Closes #25262 from Ngone51/SPARK-28486.
Authored-by: wuyi <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
…ed queries
DebugExec does not implement doExecuteBroadcast and doExecuteColumnar, so we can't debug broadcast- or columnar-related queries.
One example for broadcast is here.
```
val df1 = Seq(1, 2, 3).toDF
val df2 = Seq(1, 2, 3).toDF
val joined = df1.join(df2, df1("value") === df2("value"))
joined.debug()
java.lang.UnsupportedOperationException: Debug does not implement doExecuteBroadcast
...
```
Another for columnar is here.
```
val df = Seq(1, 2, 3).toDF
df.persist
df.debug()
java.lang.IllegalStateException: Internal Error class org.apache.spark.sql.execution.debug.package$DebugExec has column support mismatch:
...
```
## How was this patch tested?
Additional test cases in DebuggingSuite.
Closes #25274 from sarutak/fix-debugexec.
Authored-by: Kousuke Saruta <[email protected]>
Signed-off-by: Takeshi Yamamuro <[email protected]>
## What changes were proposed in this pull request? This is an alternative solution of #24442 . It fails the query if ambiguous self join is detected, instead of trying to disambiguate it. The problem is that, it's hard to come up with a reasonable rule to disambiguate, the rule proposed by #24442 is mostly a heuristic. ### background of the self-join problem: This is a long-standing bug and I've seen many people complaining about it in JIRA/dev list. A typical example: ``` val df1 = … val df2 = df1.filter(...) df1.join(df2, df1("a") > df2("a")) // returns empty result ``` The root cause is, `Dataset.apply` is so powerful that users think it returns a column reference which can point to the column of the Dataset at anywhere. This is not true in many cases. `Dataset.apply` returns an `AttributeReference` . Different Datasets may share the same `AttributeReference`. In the example above, `df2` adds a Filter operator above the logical plan of `df1`, and the Filter operator reserves the output `AttributeReference` of its child. This means, `df1("a")` is exactly the same as `df2("a")`, and `df1("a") > df2("a")` always evaluates to false. ### The rule to detect ambiguous column reference caused by self join: We can reuse the infra in #24442 : 1. each Dataset has a globally unique id. 2. the `AttributeReference` returned by `Dataset.apply` carries the ID and column position(e.g. 3rd column of the Dataset) via metadata. 3. the logical plan of a `Dataset` carries the ID via `TreeNodeTag` When self-join happens, the analyzer asks the right side plan of join to re-generate output attributes with new exprIds. Based on it, a simple rule to detect ambiguous self join is: 1. find all column references (i.e. `AttributeReference`s with Dataset ID and col position) in the root node of a query plan. 2. for each column reference, traverse the query plan tree, find a sub-plan that carries Dataset ID and the ID is the same as the one in the column reference. 3. get the corresponding output attribute of the sub-plan by the col position in the column reference. 4. if the corresponding output attribute has a different exprID than the column reference, then it means this sub-plan is on the right side of a self-join and has regenerated its output attributes. This is an ambiguous self join because the column reference points to a table being self-joined. ## How was this patch tested? existing tests and new test cases Closes #25107 from cloud-fan/new-self-join. Authored-by: Wenchen Fan <[email protected]> Signed-off-by: Wenchen Fan <[email protected]>
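A concrete, hypothetical reproduction of the trap described above (the DataFrame contents are made up; the shape matches the example in the description):
```scala
import spark.implicits._

val df1 = spark.range(10).toDF("a")
val df2 = df1.filter($"a" > 3)
// df1("a") and df2("a") resolve to the same AttributeReference, so the condition
// is effectively a > a and the join used to silently return an empty result;
// with this change the ambiguous self join is detected and the query fails instead.
df1.join(df2, df1("a") > df2("a"))
```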
…ecution framework ## What changes were proposed in this pull request? I did a post-hoc review of #25008 , and would like to propose some cleanups/fixes/improvements: 1. Do not track the scanTime metrics in `ColumnarToRowExec`. This metrics is specific to file scan, and doesn't make sense for a general batch-to-row operator. 2. Because of 2, we need to track scanTime when building RDDs in the file scan node. 3. use `RDD#mapPartitionsInternal` instead of `flatMap` in several places, as `mapPartitionsInternal` is created for Spark SQL and we use it in almost all the SQL operators. 4. Add `limitNotReachedCond` in `ColumnarToRowExec`. This was in the `ColumnarBatchScan` before and is critical for performance. 5. Clear the relationship between codegen stage and columnar stage. The whole-stage-codegen framework is completely row-based, so these 2 kinds of stages can NEVER overlap. When they are adjacent, it's either a `RowToColumnarExec` above `WholeStageExec`, or a `ColumnarToRowExec` above the `InputAdapter`. 6. Reuse the `ColumnarBatch` in `RowToColumnarExec`. We don't need to create a new one every time, just need to reset it. 7. Do not skip testing full scan node in `LogicalPlanTagInSparkPlanSuite` 8. Add back the removed tests in `WholeStageCodegenSuite`. ## How was this patch tested? existing tests Closes #25264 from cloud-fan/minor. Authored-by: Wenchen Fan <[email protected]> Signed-off-by: Wenchen Fan <[email protected]>
…" string representation, and get rid of UnsupportedEncodingException ## What changes were proposed in this pull request? This patch tries to keep consistency whenever UTF-8 charset is needed, as using `StandardCharsets.UTF_8` instead of using "UTF-8". If the String type is needed, `StandardCharsets.UTF_8.name()` is used. This change also brings the benefit of getting rid of `UnsupportedEncodingException`, as we're providing `Charset` instead of `String` whenever possible. This also changes some private Catalyst helper methods to operate on encodings as `Charset` objects rather than strings. ## How was this patch tested? Existing unit tests. Closes #25335 from HeartSaVioR/SPARK-28601. Authored-by: Jungtaek Lim (HeartSaVioR) <[email protected]> Signed-off-by: Dongjoon Hyun <[email protected]>
… by clause in 'udf-group-analytics.sql' ## What changes were proposed in this pull request? This PR is a followup of a fix as described in here: #25215 (comment) <details><summary>Diff comparing to 'group-analytics.sql'</summary> <p> ```diff diff --git a/sql/core/src/test/resources/sql-tests/results/udf/udf-group-analytics.sql.out b/sql/core/src/test/resources/sql-tests/results/udf/udf-group-analytics.sql.out index 3439a05..de297ab 100644 --- a/sql/core/src/test/resources/sql-tests/results/udf/udf-group-analytics.sql.out +++ b/sql/core/src/test/resources/sql-tests/results/udf/udf-group-analytics.sql.out -13,9 +13,9 struct<> -- !query 1 -SELECT a + b, b, SUM(a - b) FROM testData GROUP BY a + b, b WITH CUBE +SELECT udf(a + b), b, udf(SUM(a - b)) FROM testData GROUP BY udf(a + b), b WITH CUBE -- !query 1 schema -struct<(a + b):int,b:int,sum((a - b)):bigint> +struct<CAST(udf(cast((a + b) as string)) AS INT):int,b:int,CAST(udf(cast(sum(cast((a - b) as bigint)) as string)) AS BIGINT):bigint> -- !query 1 output 2 1 0 2 NULL 0 -33,9 +33,9 NULL NULL 3 -- !query 2 -SELECT a, b, SUM(b) FROM testData GROUP BY a, b WITH CUBE +SELECT udf(a), udf(b), SUM(b) FROM testData GROUP BY udf(a), b WITH CUBE -- !query 2 schema -struct<a:int,b:int,sum(b):bigint> +struct<CAST(udf(cast(a as string)) AS INT):int,CAST(udf(cast(b as string)) AS INT):int,sum(b):bigint> -- !query 2 output 1 1 1 1 2 2 -52,9 +52,9 NULL NULL 9 -- !query 3 -SELECT a + b, b, SUM(a - b) FROM testData GROUP BY a + b, b WITH ROLLUP +SELECT udf(a + b), b, SUM(a - b) FROM testData GROUP BY a + b, b WITH ROLLUP -- !query 3 schema -struct<(a + b):int,b:int,sum((a - b)):bigint> +struct<CAST(udf(cast((a + b) as string)) AS INT):int,b:int,sum((a - b)):bigint> -- !query 3 output 2 1 0 2 NULL 0 -70,9 +70,9 NULL NULL 3 -- !query 4 -SELECT a, b, SUM(b) FROM testData GROUP BY a, b WITH ROLLUP +SELECT udf(a), b, udf(SUM(b)) FROM testData GROUP BY udf(a), b WITH ROLLUP -- !query 4 schema -struct<a:int,b:int,sum(b):bigint> +struct<CAST(udf(cast(a as string)) AS INT):int,b:int,CAST(udf(cast(sum(cast(b as bigint)) as string)) AS BIGINT):bigint> -- !query 4 output 1 1 1 1 2 2 -97,7 +97,7 struct<> -- !query 6 -SELECT course, year, SUM(earnings) FROM courseSales GROUP BY ROLLUP(course, year) ORDER BY course, year +SELECT course, year, SUM(earnings) FROM courseSales GROUP BY ROLLUP(course, year) ORDER BY udf(course), year -- !query 6 schema struct<course:string,year:int,sum(earnings):bigint> -- !query 6 output -111,7 +111,7 dotNET 2013 48000 -- !query 7 -SELECT course, year, SUM(earnings) FROM courseSales GROUP BY CUBE(course, year) ORDER BY course, year +SELECT course, year, SUM(earnings) FROM courseSales GROUP BY CUBE(course, year) ORDER BY course, udf(year) -- !query 7 schema struct<course:string,year:int,sum(earnings):bigint> -- !query 7 output -127,9 +127,9 dotNET 2013 48000 -- !query 8 -SELECT course, year, SUM(earnings) FROM courseSales GROUP BY course, year GROUPING SETS(course, year) +SELECT course, udf(year), SUM(earnings) FROM courseSales GROUP BY course, year GROUPING SETS(course, year) -- !query 8 schema -struct<course:string,year:int,sum(earnings):bigint> +struct<course:string,CAST(udf(cast(year as string)) AS INT):int,sum(earnings):bigint> -- !query 8 output Java NULL 50000 NULL 2012 35000 -138,26 +138,26 dotNET NULL 63000 -- !query 9 -SELECT course, year, SUM(earnings) FROM courseSales GROUP BY course, year GROUPING SETS(course) +SELECT course, year, udf(SUM(earnings)) FROM courseSales GROUP BY course, year GROUPING SETS(course) -- !query 9 schema 
-struct<course:string,year:int,sum(earnings):bigint> +struct<course:string,year:int,CAST(udf(cast(sum(cast(earnings as bigint)) as string)) AS BIGINT):bigint> -- !query 9 output Java NULL 50000 dotNET NULL 63000 -- !query 10 -SELECT course, year, SUM(earnings) FROM courseSales GROUP BY course, year GROUPING SETS(year) +SELECT udf(course), year, SUM(earnings) FROM courseSales GROUP BY course, year GROUPING SETS(year) -- !query 10 schema -struct<course:string,year:int,sum(earnings):bigint> +struct<CAST(udf(cast(course as string)) AS STRING):string,year:int,sum(earnings):bigint> -- !query 10 output NULL 2012 35000 NULL 2013 78000 -- !query 11 -SELECT course, SUM(earnings) AS sum FROM courseSales -GROUP BY course, earnings GROUPING SETS((), (course), (course, earnings)) ORDER BY course, sum +SELECT course, udf(SUM(earnings)) AS sum FROM courseSales +GROUP BY course, earnings GROUPING SETS((), (course), (course, earnings)) ORDER BY course, udf(sum) -- !query 11 schema struct<course:string,sum:bigint> -- !query 11 output -173,7 +173,7 dotNET 63000 -- !query 12 SELECT course, SUM(earnings) AS sum, GROUPING_ID(course, earnings) FROM courseSales -GROUP BY course, earnings GROUPING SETS((), (course), (course, earnings)) ORDER BY course, sum +GROUP BY course, earnings GROUPING SETS((), (course), (course, earnings)) ORDER BY udf(course), sum -- !query 12 schema struct<course:string,sum:bigint,grouping_id(course, earnings):int> -- !query 12 output -188,10 +188,10 dotNET 63000 1 -- !query 13 -SELECT course, year, GROUPING(course), GROUPING(year), GROUPING_ID(course, year) FROM courseSales +SELECT udf(course), udf(year), GROUPING(course), GROUPING(year), GROUPING_ID(course, year) FROM courseSales GROUP BY CUBE(course, year) -- !query 13 schema -struct<course:string,year:int,grouping(course):tinyint,grouping(year):tinyint,grouping_id(course, year):int> +struct<CAST(udf(cast(course as string)) AS STRING):string,CAST(udf(cast(year as string)) AS INT):int,grouping(course):tinyint,grouping(year):tinyint,grouping_id(course, year):int> -- !query 13 output Java 2012 0 0 0 Java 2013 0 0 0 -205,7 +205,7 dotNET NULL 0 1 1 -- !query 14 -SELECT course, year, GROUPING(course) FROM courseSales GROUP BY course, year +SELECT course, udf(year), GROUPING(course) FROM courseSales GROUP BY course, udf(year) -- !query 14 schema struct<> -- !query 14 output -214,7 +214,7 grouping() can only be used with GroupingSets/Cube/Rollup; -- !query 15 -SELECT course, year, GROUPING_ID(course, year) FROM courseSales GROUP BY course, year +SELECT course, udf(year), GROUPING_ID(course, year) FROM courseSales GROUP BY udf(course), year -- !query 15 schema struct<> -- !query 15 output -223,7 +223,7 grouping_id() can only be used with GroupingSets/Cube/Rollup; -- !query 16 -SELECT course, year, grouping__id FROM courseSales GROUP BY CUBE(course, year) ORDER BY grouping__id, course, year +SELECT course, year, grouping__id FROM courseSales GROUP BY CUBE(course, year) ORDER BY grouping__id, course, udf(year) -- !query 16 schema struct<course:string,year:int,grouping__id:int> -- !query 16 output -240,7 +240,7 NULL NULL 3 -- !query 17 SELECT course, year FROM courseSales GROUP BY CUBE(course, year) -HAVING GROUPING(year) = 1 AND GROUPING_ID(course, year) > 0 ORDER BY course, year +HAVING GROUPING(year) = 1 AND GROUPING_ID(course, year) > 0 ORDER BY course, udf(year) -- !query 17 schema struct<course:string,year:int> -- !query 17 output -250,7 +250,7 dotNET NULL -- !query 18 -SELECT course, year FROM courseSales GROUP BY course, year HAVING 
GROUPING(course) > 0 +SELECT course, udf(year) FROM courseSales GROUP BY udf(course), year HAVING GROUPING(course) > 0 -- !query 18 schema struct<> -- !query 18 output -259,7 +259,7 grouping()/grouping_id() can only be used with GroupingSets/Cube/Rollup; -- !query 19 -SELECT course, year FROM courseSales GROUP BY course, year HAVING GROUPING_ID(course) > 0 +SELECT course, udf(udf(year)) FROM courseSales GROUP BY course, year HAVING GROUPING_ID(course) > 0 -- !query 19 schema struct<> -- !query 19 output -268,9 +268,9 grouping()/grouping_id() can only be used with GroupingSets/Cube/Rollup; -- !query 20 -SELECT course, year FROM courseSales GROUP BY CUBE(course, year) HAVING grouping__id > 0 +SELECT udf(course), year FROM courseSales GROUP BY CUBE(course, year) HAVING grouping__id > 0 -- !query 20 schema -struct<course:string,year:int> +struct<CAST(udf(cast(course as string)) AS STRING):string,year:int> -- !query 20 output Java NULL NULL 2012 -281,7 +281,7 dotNET NULL -- !query 21 SELECT course, year, GROUPING(course), GROUPING(year) FROM courseSales GROUP BY CUBE(course, year) -ORDER BY GROUPING(course), GROUPING(year), course, year +ORDER BY GROUPING(course), GROUPING(year), course, udf(year) -- !query 21 schema struct<course:string,year:int,grouping(course):tinyint,grouping(year):tinyint> -- !query 21 output -298,7 +298,7 NULL NULL 1 1 -- !query 22 SELECT course, year, GROUPING_ID(course, year) FROM courseSales GROUP BY CUBE(course, year) -ORDER BY GROUPING(course), GROUPING(year), course, year +ORDER BY GROUPING(course), GROUPING(year), course, udf(year) -- !query 22 schema struct<course:string,year:int,grouping_id(course, year):int> -- !query 22 output -314,7 +314,7 NULL NULL 3 -- !query 23 -SELECT course, year FROM courseSales GROUP BY course, year ORDER BY GROUPING(course) +SELECT course, udf(year) FROM courseSales GROUP BY course, udf(year) ORDER BY GROUPING(course) -- !query 23 schema struct<> -- !query 23 output -323,7 +323,7 grouping()/grouping_id() can only be used with GroupingSets/Cube/Rollup; -- !query 24 -SELECT course, year FROM courseSales GROUP BY course, year ORDER BY GROUPING_ID(course) +SELECT course, udf(year) FROM courseSales GROUP BY course, udf(year) ORDER BY GROUPING_ID(course) -- !query 24 schema struct<> -- !query 24 output -332,7 +332,7 grouping()/grouping_id() can only be used with GroupingSets/Cube/Rollup; -- !query 25 -SELECT course, year FROM courseSales GROUP BY CUBE(course, year) ORDER BY grouping__id, course, year +SELECT course, year FROM courseSales GROUP BY CUBE(course, year) ORDER BY grouping__id, udf(course), year -- !query 25 schema struct<course:string,year:int> -- !query 25 output -348,7 +348,7 NULL NULL -- !query 26 -SELECT a + b AS k1, b AS k2, SUM(a - b) FROM testData GROUP BY CUBE(k1, k2) +SELECT udf(a + b) AS k1, udf(b) AS k2, SUM(a - b) FROM testData GROUP BY CUBE(k1, k2) -- !query 26 schema struct<k1:int,k2:int,sum((a - b)):bigint> -- !query 26 output -368,7 +368,7 NULL NULL 3 -- !query 27 -SELECT a + b AS k, b, SUM(a - b) FROM testData GROUP BY ROLLUP(k, b) +SELECT udf(udf(a + b)) AS k, b, SUM(a - b) FROM testData GROUP BY ROLLUP(k, b) -- !query 27 schema struct<k:int,b:int,sum((a - b)):bigint> -- !query 27 output -386,9 +386,9 NULL NULL 3 -- !query 28 -SELECT a + b, b AS k, SUM(a - b) FROM testData GROUP BY a + b, k GROUPING SETS(k) +SELECT udf(a + b), udf(udf(b)) AS k, SUM(a - b) FROM testData GROUP BY a + b, k GROUPING SETS(k) -- !query 28 schema -struct<(a + b):int,k:int,sum((a - b)):bigint> +struct<CAST(udf(cast((a + b) as string)) AS 
INT):int,k:int,sum((a - b)):bigint> -- !query 28 output NULL 1 3 NULL 2 0 ``` </p> </details> ## How was this patch tested? Tested as instructed in SPARK-27921. Closes #25362 from skonto/group-analytics-followup. Authored-by: Stavros Kontopoulos <[email protected]> Signed-off-by: HyukjinKwon <[email protected]>
## What changes were proposed in this pull request? This PR add supportColumnar in DebugExec. Seems there was a conflict between #25274 and #25264 Currently tests are broken in Jenkins: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/108687/ https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/108688/ https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/108693/ ``` org.apache.spark.sql.catalyst.errors.package$TreeNodeException: makeCopy, tree: ColumnarToRow +- InMemoryTableScan [id#356956L] +- InMemoryRelation [id#356956L], StorageLevel(disk, memory, deserialized, 1 replicas) +- *(1) Range (0, 5, step=1, splits=2) Stacktrace sbt.ForkMain$ForkError: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: makeCopy, tree: ColumnarToRow +- InMemoryTableScan [id#356956L] +- InMemoryRelation [id#356956L], StorageLevel(disk, memory, deserialized, 1 replicas) +- *(1) Range (0, 5, step=1, splits=2) at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56) at org.apache.spark.sql.catalyst.trees.TreeNode.makeCopy(TreeNode.scala:431) at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:404) at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:323) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:287) ``` ## How was this patch tested? Manually tested the failed test. Closes #25365 from HyukjinKwon/SPARK-28537. Authored-by: HyukjinKwon <[email protected]> Signed-off-by: HyukjinKwon <[email protected]>
…by clause in 'pgSQL/select_implicit.sql' ## What changes were proposed in this pull request? This PR adds UDF cases into group by clause in 'pgSQL/select_implicit.sql' <details><summary>Diff comparing to 'pgSQL/select_implicit.sql'</summary> <p> ```diff diff --git a/home/root1/src/spark/sql/core/src/test/resources/sql-tests/results/udf/pgSQL/udf-select_implicit.sql.out b/home/root1/src/spark/sql/core/src/test/resources/sql-tests/results/pgSQL/select_implicit.sql.out index 17303b2..0675820 100755 --- a/home/root1/src/spark/sql/core/src/test/resources/sql-tests/results/udf/pgSQL/udf-select_implicit.sql.out +++ b/home/root1/src/spark/sql/core/src/test/resources/sql-tests/results/pgSQL/select_implicit.sql.out -91,11 +91,9 struct<> -- !query 11 -SELECT udf(c), udf(count(*)) FROM test_missing_target GROUP BY -udf(test_missing_target.c) -ORDER BY udf(c) +SELECT c, count(*) FROM test_missing_target GROUP BY test_missing_target.c ORDER BY c -- !query 11 schema -struct<CAST(udf(cast(c as string)) AS STRING):string,CAST(udf(cast(count(1) as string)) AS BIGINT):bigint> +struct<c:string,count(1):bigint> -- !query 11 output ABAB 2 BBBB 2 -106,10 +104,9 cccc 2 -- !query 12 -SELECT udf(count(*)) FROM test_missing_target GROUP BY udf(test_missing_target.c) -ORDER BY udf(c) +SELECT count(*) FROM test_missing_target GROUP BY test_missing_target.c ORDER BY c -- !query 12 schema -struct<CAST(udf(cast(count(1) as string)) AS BIGINT):bigint> +struct<count(1):bigint> -- !query 12 output 2 2 -120,18 +117,18 struct<CAST(udf(cast(count(1) as string)) AS BIGINT):bigint> -- !query 13 -SELECT udf(count(*)) FROM test_missing_target GROUP BY udf(a) ORDER BY udf(b) +SELECT count(*) FROM test_missing_target GROUP BY a ORDER BY b -- !query 13 schema struct<> -- !query 13 output org.apache.spark.sql.AnalysisException -cannot resolve '`b`' given input columns: [CAST(udf(cast(count(1) as string)) AS BIGINT)]; line 1 pos 75 +cannot resolve '`b`' given input columns: [count(1)]; line 1 pos 61 -- !query 14 -SELECT udf(count(*)) FROM test_missing_target GROUP BY udf(b) ORDER BY udf(b) +SELECT count(*) FROM test_missing_target GROUP BY b ORDER BY b -- !query 14 schema -struct<CAST(udf(cast(count(1) as string)) AS BIGINT):bigint> +struct<count(1):bigint> -- !query 14 output 1 2 -140,10 +137,10 struct<CAST(udf(cast(count(1) as string)) AS BIGINT):bigint> -- !query 15 -SELECT udf(test_missing_target.b), udf(count(*)) - FROM test_missing_target GROUP BY udf(b) ORDER BY udf(b) +SELECT test_missing_target.b, count(*) + FROM test_missing_target GROUP BY b ORDER BY b -- !query 15 schema -struct<CAST(udf(cast(b as string)) AS INT):int,CAST(udf(cast(count(1) as string)) AS BIGINT):bigint> +struct<b:int,count(1):bigint> -- !query 15 output 1 1 2 2 -152,9 +149,9 struct<CAST(udf(cast(b as string)) AS INT):int,CAST(udf(cast(count(1) as string) -- !query 16 -SELECT udf(c) FROM test_missing_target ORDER BY udf(a) +SELECT c FROM test_missing_target ORDER BY a -- !query 16 schema -struct<CAST(udf(cast(c as string)) AS STRING):string> +struct<c:string> -- !query 16 output XXXX ABAB -169,10 +166,9 CCCC -- !query 17 -SELECT udf(count(*)) FROM test_missing_target GROUP BY udf(b) ORDER BY udf(b) -desc +SELECT count(*) FROM test_missing_target GROUP BY b ORDER BY b desc -- !query 17 schema -struct<CAST(udf(cast(count(1) as string)) AS BIGINT):bigint> +struct<count(1):bigint> -- !query 17 output 4 3 -181,17 +177,17 struct<CAST(udf(cast(count(1) as string)) AS BIGINT):bigint> -- !query 18 -SELECT udf(count(*)) FROM test_missing_target ORDER BY udf(1) desc 
+SELECT count(*) FROM test_missing_target ORDER BY 1 desc -- !query 18 schema -struct<CAST(udf(cast(count(1) as string)) AS BIGINT):bigint> +struct<count(1):bigint> -- !query 18 output 10 -- !query 19 -SELECT udf(c), udf(count(*)) FROM test_missing_target GROUP BY 1 ORDER BY 1 +SELECT c, count(*) FROM test_missing_target GROUP BY 1 ORDER BY 1 -- !query 19 schema -struct<CAST(udf(cast(c as string)) AS STRING):string,CAST(udf(cast(count(1) as string)) AS BIGINT):bigint> +struct<c:string,count(1):bigint> -- !query 19 output ABAB 2 BBBB 2 -202,30 +198,30 cccc 2 -- !query 20 -SELECT udf(c), udf(count(*)) FROM test_missing_target GROUP BY 3 +SELECT c, count(*) FROM test_missing_target GROUP BY 3 -- !query 20 schema struct<> -- !query 20 output org.apache.spark.sql.AnalysisException -GROUP BY position 3 is not in select list (valid range is [1, 2]); line 1 pos 63 +GROUP BY position 3 is not in select list (valid range is [1, 2]); line 1 pos 53 -- !query 21 -SELECT udf(count(*)) FROM test_missing_target x, test_missing_target y - WHERE udf(x.a) = udf(y.a) - GROUP BY udf(b) ORDER BY udf(b) +SELECT count(*) FROM test_missing_target x, test_missing_target y + WHERE x.a = y.a + GROUP BY b ORDER BY b -- !query 21 schema struct<> -- !query 21 output org.apache.spark.sql.AnalysisException -Reference 'b' is ambiguous, could be: x.b, y.b.; line 3 pos 14 +Reference 'b' is ambiguous, could be: x.b, y.b.; line 3 pos 10 -- !query 22 -SELECT udf(a), udf(a) FROM test_missing_target - ORDER BY udf(a) +SELECT a, a FROM test_missing_target + ORDER BY a -- !query 22 schema -struct<CAST(udf(cast(a as string)) AS INT):int,CAST(udf(cast(a as string)) AS INT):int> +struct<a:int,a:int> -- !query 22 output 0 0 1 1 -240,10 +236,10 struct<CAST(udf(cast(a as string)) AS INT):int,CAST(udf(cast(a as string)) AS IN -- !query 23 -SELECT udf(udf(a)/2), udf(udf(a)/2) FROM test_missing_target - ORDER BY udf(udf(a)/2) +SELECT a/2, a/2 FROM test_missing_target + ORDER BY a/2 -- !query 23 schema -struct<CAST(udf(cast((cast(udf(cast(a as string)) as int) div 2) as string)) AS INT):int,CAST(udf(cast((cast(udf(cast(a as string)) as int) div 2) as string)) AS INT):int> +struct<(a div 2):int,(a div 2):int> -- !query 23 output 0 0 0 0 -258,10 +254,10 struct<CAST(udf(cast((cast(udf(cast(a as string)) as int) div 2) as string)) AS -- !query 24 -SELECT udf(a/2), udf(a/2) FROM test_missing_target - GROUP BY udf(a/2) ORDER BY udf(a/2) +SELECT a/2, a/2 FROM test_missing_target + GROUP BY a/2 ORDER BY a/2 -- !query 24 schema -struct<CAST(udf(cast((a div 2) as string)) AS INT):int,CAST(udf(cast((a div 2) as string)) AS INT):int> +struct<(a div 2):int,(a div 2):int> -- !query 24 output 0 0 1 1 -271,11 +267,11 struct<CAST(udf(cast((a div 2) as string)) AS INT):int,CAST(udf(cast((a div 2) a -- !query 25 -SELECT udf(x.b), udf(count(*)) FROM test_missing_target x, test_missing_target y - WHERE udf(x.a) = udf(y.a) - GROUP BY udf(x.b) ORDER BY udf(x.b) +SELECT x.b, count(*) FROM test_missing_target x, test_missing_target y + WHERE x.a = y.a + GROUP BY x.b ORDER BY x.b -- !query 25 schema -struct<CAST(udf(cast(b as string)) AS INT):int,CAST(udf(cast(count(1) as string)) AS BIGINT):bigint> +struct<b:int,count(1):bigint> -- !query 25 output 1 1 2 2 -284,11 +280,11 struct<CAST(udf(cast(b as string)) AS INT):int,CAST(udf(cast(count(1) as string) -- !query 26 -SELECT udf(count(*)) FROM test_missing_target x, test_missing_target y - WHERE udf(x.a) = udf(y.a) - GROUP BY udf(x.b) ORDER BY udf(x.b) +SELECT count(*) FROM test_missing_target x, test_missing_target y 
+ WHERE x.a = y.a + GROUP BY x.b ORDER BY x.b -- !query 26 schema -struct<CAST(udf(cast(count(1) as string)) AS BIGINT):bigint> +struct<count(1):bigint> -- !query 26 output 1 2 -297,22 +293,22 struct<CAST(udf(cast(count(1) as string)) AS BIGINT):bigint> -- !query 27 -SELECT udf(a%2), udf(count(udf(b))) FROM test_missing_target -GROUP BY udf(test_missing_target.a%2) -ORDER BY udf(test_missing_target.a%2) +SELECT a%2, count(b) FROM test_missing_target +GROUP BY test_missing_target.a%2 +ORDER BY test_missing_target.a%2 -- !query 27 schema -struct<CAST(udf(cast((a % 2) as string)) AS INT):int,CAST(udf(cast(count(cast(udf(cast(b as string)) as int)) as string)) AS BIGINT):bigint> +struct<(a % 2):int,count(b):bigint> -- !query 27 output 0 5 1 5 -- !query 28 -SELECT udf(count(c)) FROM test_missing_target -GROUP BY udf(lower(test_missing_target.c)) -ORDER BY udf(lower(test_missing_target.c)) +SELECT count(c) FROM test_missing_target +GROUP BY lower(test_missing_target.c) +ORDER BY lower(test_missing_target.c) -- !query 28 schema -struct<CAST(udf(cast(count(c) as string)) AS BIGINT):bigint> +struct<count(c):bigint> -- !query 28 output 2 3 -321,18 +317,18 struct<CAST(udf(cast(count(c) as string)) AS BIGINT):bigint> -- !query 29 -SELECT udf(count(udf(a))) FROM test_missing_target GROUP BY udf(a) ORDER BY udf(b) +SELECT count(a) FROM test_missing_target GROUP BY a ORDER BY b -- !query 29 schema struct<> -- !query 29 output org.apache.spark.sql.AnalysisException -cannot resolve '`b`' given input columns: [CAST(udf(cast(count(cast(udf(cast(a as string)) as int)) as string)) AS BIGINT)]; line 1 pos 80 +cannot resolve '`b`' given input columns: [count(a)]; line 1 pos 61 -- !query 30 -SELECT udf(count(b)) FROM test_missing_target GROUP BY udf(b/2) ORDER BY udf(b/2) +SELECT count(b) FROM test_missing_target GROUP BY b/2 ORDER BY b/2 -- !query 30 schema -struct<CAST(udf(cast(count(b) as string)) AS BIGINT):bigint> +struct<count(b):bigint> -- !query 30 output 1 5 -340,10 +336,10 struct<CAST(udf(cast(count(b) as string)) AS BIGINT):bigint> -- !query 31 -SELECT udf(lower(test_missing_target.c)), udf(count(udf(c))) - FROM test_missing_target GROUP BY udf(lower(c)) ORDER BY udf(lower(c)) +SELECT lower(test_missing_target.c), count(c) + FROM test_missing_target GROUP BY lower(c) ORDER BY lower(c) -- !query 31 schema -struct<CAST(udf(cast(lower(c) as string)) AS STRING):string,CAST(udf(cast(count(cast(udf(cast(c as string)) as string)) as string)) AS BIGINT):bigint> +struct<lower(c):string,count(c):bigint> -- !query 31 output abab 2 bbbb 3 -352,9 +348,9 xxxx 1 -- !query 32 -SELECT udf(a) FROM test_missing_target ORDER BY udf(upper(udf(d))) +SELECT a FROM test_missing_target ORDER BY upper(d) -- !query 32 schema -struct<CAST(udf(cast(a as string)) AS INT):int> +struct<a:int> -- !query 32 output 0 1 -369,33 +365,32 struct<CAST(udf(cast(a as string)) AS INT):int> -- !query 33 -SELECT udf(count(b)) FROM test_missing_target - GROUP BY udf((b + 1) / 2) ORDER BY udf((b + 1) / 2) desc +SELECT count(b) FROM test_missing_target + GROUP BY (b + 1) / 2 ORDER BY (b + 1) / 2 desc -- !query 33 schema -struct<CAST(udf(cast(count(b) as string)) AS BIGINT):bigint> +struct<count(b):bigint> -- !query 33 output 7 3 -- !query 34 -SELECT udf(count(udf(x.a))) FROM test_missing_target x, test_missing_target y - WHERE udf(x.a) = udf(y.a) - GROUP BY udf(b/2) ORDER BY udf(b/2) +SELECT count(x.a) FROM test_missing_target x, test_missing_target y + WHERE x.a = y.a + GROUP BY b/2 ORDER BY b/2 -- !query 34 schema struct<> -- !query 34 output 
org.apache.spark.sql.AnalysisException -Reference 'b' is ambiguous, could be: x.b, y.b.; line 3 pos 14 +Reference 'b' is ambiguous, could be: x.b, y.b.; line 3 pos 10 -- !query 35 -SELECT udf(x.b/2), udf(count(udf(x.b))) FROM test_missing_target x, -test_missing_target y - WHERE udf(x.a) = udf(y.a) - GROUP BY udf(x.b/2) ORDER BY udf(x.b/2) +SELECT x.b/2, count(x.b) FROM test_missing_target x, test_missing_target y + WHERE x.a = y.a + GROUP BY x.b/2 ORDER BY x.b/2 -- !query 35 schema -struct<CAST(udf(cast((b div 2) as string)) AS INT):int,CAST(udf(cast(count(cast(udf(cast(b as string)) as int)) as string)) AS BIGINT):bigint> +struct<(b div 2):int,count(b):bigint> -- !query 35 output 0 1 1 5 -403,14 +398,14 struct<CAST(udf(cast((b div 2) as string)) AS INT):int,CAST(udf(cast(count(cast( -- !query 36 -SELECT udf(count(udf(b))) FROM test_missing_target x, test_missing_target y - WHERE udf(x.a) = udf(y.a) - GROUP BY udf(x.b/2) +SELECT count(b) FROM test_missing_target x, test_missing_target y + WHERE x.a = y.a + GROUP BY x.b/2 -- !query 36 schema struct<> -- !query 36 output org.apache.spark.sql.AnalysisException -Reference 'b' is ambiguous, could be: x.b, y.b.; line 1 pos 21 +Reference 'b' is ambiguous, could be: x.b, y.b.; line 1 pos 13 -- !query 37 ``` </p> </details> ## How was this patch tested? Tested as Guided in SPARK-27921 Closes #25350 from Udbhav30/master. Authored-by: Udbhav30 <[email protected]> Signed-off-by: HyukjinKwon <[email protected]>
…ExtractPythonUDFFromJoinCondition and move to 'Extract Python UDFs' ## What changes were proposed in this pull request? This PR targets to rename `PullOutPythonUDFInJoinCondition` to `ExtractPythonUDFFromJoinCondition` and move to 'Extract Python UDFs' together with other Python UDF related rules. Currently `PullOutPythonUDFInJoinCondition` rule is alone outside of other 'Extract Python UDFs' rules together. and the name `ExtractPythonUDFFromJoinCondition` is matched to existing Python UDF extraction rules. ## How was this patch tested? Existing tests should cover. Closes #25358 from HyukjinKwon/move-python-join-rule. Authored-by: HyukjinKwon <[email protected]> Signed-off-by: gatorsmile <[email protected]>
## What changes were proposed in this pull request?
#25242 proposed to disallow upcasting complex data types to string type; however, upcasting from null type to any type should still be safe.
## How was this patch tested?
Added a corresponding case in `CastSuite`.
Closes #25425 from jiangxb1987/nullToString.
Authored-by: Xingbo Jiang <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
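For illustration, a case that relies on the null-type upcast staying legal (a hedged sketch, not a test from the PR):
```scala
import spark.implicits._

// Encoding a NULL column as String needs an upcast from null type to string type,
// which remains allowed by this change.
val ds = spark.sql("SELECT null AS s").as[String]
ds.collect()   // Array(null)
```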
## What changes were proposed in this pull request? The mapping of Spark schema to Avro schema is many-to-many. (See https://spark.apache.org/docs/latest/sql-data-sources-avro.html#supported-types-for-spark-sql---avro-conversion) The default schema mapping might not be exactly what users want. For example, by default, a "string" column is always written as "string" Avro type, but users might want to output the column as "enum" Avro type. With PR #21847, Spark supports user-specified schema in the batch writer. For the function `to_avro`, we should support user-specified output schema as well. ## How was this patch tested? Unit test. Closes #25419 from gengliangwang/to_avro. Authored-by: Gengliang Wang <[email protected]> Signed-off-by: Wenchen Fan <[email protected]>
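A hedged usage sketch of the new overload, assuming the Spark 3.0 `org.apache.spark.sql.avro.functions` object; the enum schema and column names are made up for illustration.
```scala
import org.apache.spark.sql.avro.functions.to_avro
import spark.implicits._

// Force a string column to be encoded as an Avro "enum" instead of the default "string".
val enumSchema =
  """{"type": "enum", "name": "Suit", "symbols": ["SPADES", "HEARTS", "DIAMONDS", "CLUBS"]}"""

val df = Seq("SPADES", "HEARTS").toDF("suit")
df.select(to_avro($"suit", enumSchema).as("avro_bytes"))
```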
… group by clause
## What changes were proposed in this pull request?
A GROUPED_AGG pandas Python UDF doesn't work without a group by clause, e.g. `select udf(id) from table`. This is inconsistent with aggregate functions like sum and count, and with the Dataset API, e.g. `df.agg(udf(df['id']))`.
When we parse such a UDF (or an aggregate function) from SQL syntax, it shows up as a function in a project. The `GlobalAggregates` rule in analysis turns such a project into an aggregate by looking for aggregate expressions. In that step, we should also look for GROUPED_AGG pandas Python UDFs.
## How was this patch tested?
Added tests.
Closes #25352 from viirya/SPARK-28422.
Authored-by: Liang-Chi Hsieh <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
…shell
## What changes were proposed in this pull request?
`Utilities.addToClassPath` has changed since [HIVE-22096](https://issues.apache.org/jira/browse/HIVE-22096), but we use it to add plugin jars:
https://github.com/apache/spark/blob/128ea37bda3dbc1d6ba2af762c7453ab7605d430/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala#L144-L147
This PR adds a test for `spark-sql` adding plugin jars.
## How was this patch tested?
N/A
Closes #25435 from wangyum/SPARK-28714.
Authored-by: Yuming Wang <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
## What changes were proposed in this pull request? Fixes a vulnerability from the GitHub Security Advisory Database: _Moderate severity vulnerability that affects com.puppycrawl.tools:checkstyle_ Checkstyle prior to 8.18 loads external DTDs by default, which can potentially lead to denial of service attacks or the leaking of confidential information. checkstyle/checkstyle#6474 Affected versions: < 8.18 ## How was this patch tested? Ran checkstyle locally. Closes #25432 from Fokko/SPARK-28713. Authored-by: Fokko Driesprong <[email protected]> Signed-off-by: Dongjoon Hyun <[email protected]>
…om application jars on JDK9+ ## What changes were proposed in this pull request? We have 8 test cases in `HiveSparkSubmitSuite` still fail with `java.lang.ClassNotFoundException` when running on JDK9+: ``` [info] - SPARK-18989: DESC TABLE should not fail with format class not found *** FAILED *** (9 seconds, 927 milliseconds) [info] spark-submit returned with exit code 1. [info] Command line: './bin/spark-submit' '--class' 'org.apache.spark.sql.hive.SPARK_18989_CREATE_TABLE' '--name' 'SPARK-18947' '--master' 'local-cluster[2,1,1024]' '--conf' 'spark.ui.enabled=false' '--conf' 'spark.master.rest.enabled=false' '--jars' '/root/.m2/repository/org/apache/hive/hive-contrib/2.3.6-SNAPSHOT/hive-contrib-2.3.6-SNAPSHOT.jar' 'file:/root/opensource/spark/target/tmp/spark-36d27542-7b82-4962-a362-bb51ef3e457d/testJar-1565682620744.jar' [info] [info] 2019-08-13 00:50:22.073 - stderr> WARNING: An illegal reflective access operation has occurred [info] 2019-08-13 00:50:22.073 - stderr> WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/root/opensource/spark/common/unsafe/target/scala-2.12/classes/) to constructor java.nio.DirectByteBuffer(long,int) [info] 2019-08-13 00:50:22.073 - stderr> WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform [info] 2019-08-13 00:50:22.073 - stderr> WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations [info] 2019-08-13 00:50:22.073 - stderr> WARNING: All illegal access operations will be denied in a future release [info] 2019-08-13 00:50:28.31 - stderr> Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/metadata/HiveException [info] 2019-08-13 00:50:28.31 - stderr> at java.base/java.lang.Class.getDeclaredConstructors0(Native Method) [info] 2019-08-13 00:50:28.31 - stderr> at java.base/java.lang.Class.privateGetDeclaredConstructors(Class.java:3138) [info] 2019-08-13 00:50:28.31 - stderr> at java.base/java.lang.Class.getConstructors(Class.java:1944) [info] 2019-08-13 00:50:28.31 - stderr> at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:294) [info] 2019-08-13 00:50:28.31 - stderr> at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:410) [info] 2019-08-13 00:50:28.31 - stderr> at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:305) [info] 2019-08-13 00:50:28.31 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:68) [info] 2019-08-13 00:50:28.31 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:67) [info] 2019-08-13 00:50:28.31 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$databaseExists$1(HiveExternalCatalog.scala:221) [info] 2019-08-13 00:50:28.31 - stderr> at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23) [info] 2019-08-13 00:50:28.31 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99) [info] 2019-08-13 00:50:28.31 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:221) [info] 2019-08-13 00:50:28.31 - stderr> at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:139) [info] 2019-08-13 00:50:28.31 - stderr> at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:129) [info] 2019-08-13 00:50:28.31 - stderr> at 
org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:42) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.hive.HiveSessionStateBuilder.$anonfun$catalog$1(HiveSessionStateBuilder.scala:57) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog$lzycompute(SessionCatalog.scala:91) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog(SessionCatalog.scala:91) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.databaseExists(SessionCatalog.scala:244) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.requireDbExists(SessionCatalog.scala:178) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:317) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.execution.command.CreateTableCommand.run(tables.scala:132) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:213) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3431) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$4(SQLExecution.scala:100) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:87) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3427) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:213) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:95) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:653) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.hive.SPARK_18989_CREATE_TABLE$.main(HiveSparkSubmitSuite.scala:829) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.hive.SPARK_18989_CREATE_TABLE.main(HiveSparkSubmitSuite.scala) [info] 2019-08-13 00:50:28.311 - stderr> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [info] 2019-08-13 00:50:28.311 - stderr> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [info] 2019-08-13 00:50:28.311 - stderr> at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [info] 2019-08-13 00:50:28.311 - stderr> at java.base/java.lang.reflect.Method.invoke(Method.java:566) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) [info] 2019-08-13 00:50:28.311 - stderr> at 
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:920) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:179) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:202) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:89) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:999) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1008) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) [info] 2019-08-13 00:50:28.311 - stderr> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.metadata.HiveException [info] 2019-08-13 00:50:28.311 - stderr> at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471) [info] 2019-08-13 00:50:28.311 - stderr> at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:250) [info] 2019-08-13 00:50:28.311 - stderr> at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:239) [info] 2019-08-13 00:50:28.311 - stderr> at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521) [info] 2019-08-13 00:50:28.311 - stderr> ... 48 more ``` Note that this pr fixes `java.lang.ClassNotFoundException`, but the test will fail again with a different reason, the Hive-side `java.lang.ClassCastException` which will be resolved in the official Hive 2.3.6 release. ``` [info] - SPARK-18989: DESC TABLE should not fail with format class not found *** FAILED *** (7 seconds, 649 milliseconds) [info] spark-submit returned with exit code 1. 
[info] Command line: './bin/spark-submit' '--class' 'org.apache.spark.sql.hive.SPARK_18989_CREATE_TABLE' '--name' 'SPARK-18947' '--master' 'local-cluster[2,1,1024]' '--conf' 'spark.ui.enabled=false' '--conf' 'spark.master.rest.enabled=false' '--jars' '/Users/dongjoon/.ivy2/cache/org.apache.hive/hive-contrib/jars/hive-contrib-2.3.5.jar' 'file:/Users/dongjoon/PRS/PR-25429/target/tmp/spark-48b7c936-0ec2-4311-9fb5-0de4bf86a0eb/testJar-1565710418275.jar' [info] [info] 2019-08-13 08:33:39.221 - stderr> WARNING: An illegal reflective access operation has occurred [info] 2019-08-13 08:33:39.221 - stderr> WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/dongjoon/PRS/PR-25429/common/unsafe/target/scala-2.12/classes/) to constructor java.nio.DirectByteBuffer(long,int) [info] 2019-08-13 08:33:39.221 - stderr> WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform [info] 2019-08-13 08:33:39.221 - stderr> WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations [info] 2019-08-13 08:33:39.221 - stderr> WARNING: All illegal access operations will be denied in a future release [info] 2019-08-13 08:33:43.59 - stderr> Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.ClassCastException: class jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and java.net.URLClassLoader are in module java.base of loader 'bootstrap'); [info] 2019-08-13 08:33:43.59 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:109) ``` ## How was this patch tested? manual tests: 1. Install [Hive 2.3.6-SNAPSHOT](https://github.com/wangyum/hive/tree/HIVE-21584-branch-2.3) to local maven repository: ``` mvn clean install -DskipTests=true ``` 2. Upgrade our built-in Hive to 2.3.6-SNAPSHOT, you can checkout [this branch](https://github.com/wangyum/spark/tree/SPARK-28708-Hive-2.3.6) to test. 3. Test with hadoop-3.2: ``` build/sbt "hive/test-only *. HiveSparkSubmitSuite" -Phive -Phadoop-3.2 -Phive-thriftserver ... [info] Run completed in 3 minutes, 8 seconds. [info] Total number of tests run: 11 [info] Suites: completed 1, aborted 0 [info] Tests: succeeded 11, failed 0, canceled 3, ignored 0, pending 0 [info] All tests passed. ``` Closes #25429 from wangyum/SPARK-28708. Authored-by: Yuming Wang <[email protected]> Signed-off-by: Dongjoon Hyun <[email protected]>
## What changes were proposed in this pull request? In this PR, I propose additional synonyms, as supported by PostgreSQL, for the `field` argument of `extract`. The `extract.sql` test is updated to check all supported values of the `field` argument. The list of synonyms was taken from https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/datetime.c. ## How was this patch tested? By running `extract.sql` via: ``` $ build/sbt "sql/test-only *SQLQueryTestSuite -- -z extract.sql" ``` Closes #25438 from MaxGekk/extract-field-synonyms. Authored-by: Maxim Gekk <[email protected]> Signed-off-by: Dongjoon Hyun <[email protected]>
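For illustration, here is the kind of query `extract.sql` exercises with the new synonyms. The abbreviated field names below (`YR`, `MON`) are assumptions drawn from the PostgreSQL list linked above, not an authoritative enumeration of what this change adds:
```sql
-- Canonical field names continue to work as before.
SELECT EXTRACT(YEAR FROM DATE '2019-08-12');   -- 2019
-- Hedged examples of PostgreSQL-style synonyms assumed to be among those added.
SELECT EXTRACT(YR FROM DATE '2019-08-12');     -- expected to also return 2019
SELECT EXTRACT(MON FROM DATE '2019-08-12');    -- expected to return 8
```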
## What changes were proposed in this pull request? Changed the type of the `sec` argument in the `make_timestamp()` function from `DOUBLE` to `DECIMAL(8, 6)`. The scale is set to 6 to cover microsecond fractions, and the precision is 2 digits for seconds plus 6 digits for the microsecond fraction. The new type prevents losing precision in some cases, for example: Before: ```sql spark-sql> select make_timestamp(2019, 8, 12, 0, 0, 58.000001); 2019-08-12 00:00:58 ``` After: ```sql spark-sql> select make_timestamp(2019, 8, 12, 0, 0, 58.000001); 2019-08-12 00:00:58.000001 ``` Switching to `DECIMAL` also fixes the rounding of `sec`: it now rounds towards the "nearest neighbor" unless both neighbors are equidistant, in which case it rounds up. For example: Before: ```sql spark-sql> select make_timestamp(2019, 8, 12, 0, 0, 0.1234567); 2019-08-12 00:00:00.123456 ``` After: ```sql spark-sql> select make_timestamp(2019, 8, 12, 0, 0, 0.1234567); 2019-08-12 00:00:00.123457 ``` ## How was this patch tested? This was tested by `DateExpressionsSuite` and `pgSQL/timestamp.sql`. Closes #25421 from MaxGekk/make_timestamp-decimal. Authored-by: Maxim Gekk <[email protected]> Signed-off-by: Dongjoon Hyun <[email protected]>
## What changes were proposed in this pull request? GitHub now provides free CI/CD for build, test, and deploy. This PR enables a simple GitHub Actions workflow to build master with JDK 8 on the latest Ubuntu. We can extend it with different JDK versions, and even build Spark with Docker images in the future. Closes #25440 from dbtsai/actions. Authored-by: DB Tsai <[email protected]> Signed-off-by: DB Tsai <[email protected]>
## What changes were proposed in this pull request? R version 3.6.1 (Action of the Toes) was released on 2019-07-05. This PR aims to upgrade the R installation in the AppVeyor CI environment. ## How was this patch tested? Pass the AppVeyor CI. Closes #25441 from dongjoon-hyun/SPARK-28720. Authored-by: Dongjoon Hyun <[email protected]> Signed-off-by: DB Tsai <[email protected]>
… case-insensitively
## What changes were proposed in this pull request?
Here is the problem description from the JIRA.
```
When the inputs contain the constant 'infinity', Spark SQL does not generate the expected results.
SELECT avg(CAST(x AS DOUBLE)), var_pop(CAST(x AS DOUBLE))
FROM (VALUES ('1'), (CAST('infinity' AS DOUBLE))) v(x);
SELECT avg(CAST(x AS DOUBLE)), var_pop(CAST(x AS DOUBLE))
FROM (VALUES ('infinity'), ('1')) v(x);
SELECT avg(CAST(x AS DOUBLE)), var_pop(CAST(x AS DOUBLE))
FROM (VALUES ('infinity'), ('infinity')) v(x);
SELECT avg(CAST(x AS DOUBLE)), var_pop(CAST(x AS DOUBLE))
FROM (VALUES ('-infinity'), ('infinity')) v(x);
The root cause: Spark SQL does not recognize the special constants in a case insensitive way. In PostgreSQL, they are recognized in a case insensitive way.
Link: https://www.postgresql.org/docs/9.3/datatype-numeric.html
```
In this PR, the casting code is enhanced to handle these `special` string literals in a case-insensitive manner.
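A minimal sketch of the intended behavior after this change (output rendering is approximate):
```sql
-- Special floating-point literals are now matched case-insensitively when cast from string.
SELECT CAST('infinity' AS DOUBLE), CAST('-INFINITY' AS DOUBLE), CAST('NaN' AS DOUBLE);
-- expected: Infinity    -Infinity    NaN
```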
## How was this patch tested?
Added tests in CastSuite and modified existing test suites.
Closes #25331 from dilipbiswal/double_infinity.
Authored-by: Dilip Biswal <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
This change makes a few improvements to the k8s pod allocator so that it behaves better when dynamic allocation is on. (i) Allow the application to ramp up immediately when there's a change in the target number of executors. Without this change, scaling would only trigger when a change happened in the state of the cluster, e.g. an executor going down, or when the periodic snapshot was taken (default every 30s). (ii) Get rid of pending pod requests, both acknowledged (i.e. Spark knows that a pod is pending resource allocation) and unacknowledged (i.e. Spark has requested the pod but the API server hasn't created it yet), when they're not needed anymore. This avoids starting those executors just to remove them after the idle timeout, wasting resources in the meantime. (iii) Re-work some of the code to avoid unnecessary logging. While not bad without dynamic allocation, the existing logging was very chatty when dynamic allocation was on. With the changes, all the useful information is still there, but only when interesting changes happen. (iv) Gracefully shut down executors when they become idle. Just deleting the pod causes a lot of ugly logs to show up, so it's better to ask pods to exit nicely. That also allows Spark to respect the "don't delete pods" option when dynamic allocation is on. Tested on a small k8s cluster running different TPC-DS workloads. Closes #25236 from vanzin/SPARK-28487. Authored-by: Marcelo Vanzin <[email protected]> Signed-off-by: Marcelo Vanzin <[email protected]>
## What changes were proposed in this pull request? Add JDK 11 to the GitHub Actions build. Closes #25444 from dbtsai/jdk11. Authored-by: DB Tsai <[email protected]> Signed-off-by: DB Tsai <[email protected]>
## What changes were proposed in this pull request? This PR adds the `renameTable` call to the `TableCatalog` API, as described in the [Table Metadata API SPIP](https://docs.google.com/document/d/1zLFiA1VuaWeVxeTDXNg8bL6GP3BVoOZBkewFtEnjEoo/edit#heading=h.m45webtwxf2d). This PR is related to: #24246 ## How was this patch tested? Added unit tests and contract tests. Closes #25206 from edgarRd/SPARK-28265-add-rename-table-catalog-api. Authored-by: Edgar Rodriguez <[email protected]> Signed-off-by: Wenchen Fan <[email protected]>
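For orientation only: this PR adds the catalog API method itself, not necessarily any SQL surface. Assuming a v2 catalog registered as `testcat`, a rename that would eventually route through `TableCatalog.renameTable` could look like the following (catalog, namespace, and table names are hypothetical):
```sql
-- Hypothetical illustration; the SQL wiring to renameTable may land in a separate change.
ALTER TABLE testcat.ns.old_events RENAME TO testcat.ns.events_archive;
```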
## What changes were proposed in this pull request? CacheManager.cacheQuery saves the stats from the optimized plan to the cache. ## How was this patch tested? Existing tests. Closes #24623 from jzhuge/SPARK-27739. Authored-by: John Zhuge <[email protected]> Signed-off-by: Wenchen Fan <[email protected]>
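A minimal sketch of where the optimized-plan statistics matter, assuming plain tables `t` and `big_table` (both hypothetical):
```sql
-- Cache a derived relation; per the change above, the statistics recorded for it
-- come from the optimized plan.
CACHE TABLE small_cached AS SELECT * FROM t WHERE id < 100;

-- Planning a join against the cached relation can then use those statistics,
-- e.g. to decide whether a broadcast join is appropriate.
SELECT * FROM big_table b JOIN small_cached s ON b.id = s.id;
```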
## What changes were proposed in this pull request? This PR adds DELETE support for V2 data sources. As a first step, this PR only supports delete by source filters: ```scala void delete(Filter[] filters); ``` which cannot deal with complicated cases like subqueries. Since it is awkward to embed the implementation of DELETE in the current V2 APIs, a new data source mix-in is added, called `SupportsMaintenance`, similar to `SupportsRead` and `SupportsWrite`. A data source that can be maintained means we can perform DELETE/UPDATE/MERGE/OPTIMIZE on it, as long as it implements the necessary mix-ins. ## How was this patch tested? New test case. Closes #25115 from xianyinxin/SPARK-28351. Authored-by: xy_xin <[email protected]> Signed-off-by: Wenchen Fan <[email protected]>
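A hedged sketch of the user-facing shape of this feature; the table name is hypothetical, and only conditions expressible as source filters are supported at this stage, per the description above:
```sql
-- Delete rows whose condition can be pushed down as source filters.
DELETE FROM testcat.ns.events WHERE ts < DATE '2019-01-01';

-- Not supported at this stage: conditions that need a subquery, e.g.
-- DELETE FROM testcat.ns.events WHERE id IN (SELECT id FROM quarantine);
```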
…croseconds` at `extract()` ## What changes were proposed in this pull request? In the PR, I propose new expressions `Epoch`, `IsoYear`, `Milliseconds` and `Microseconds`, and support additional parameters of `extract()` for feature parity with PostgreSQL (https://www.postgresql.org/docs/11/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT): 1. `epoch` - the number of seconds since 1970-01-01 00:00:00 local time in microsecond precision. 2. `isoyear` - the ISO 8601 week-numbering year that the date falls in. Each ISO 8601 week-numbering year begins with the Monday of the week containing the 4th of January. 3. `milliseconds` - the seconds field including fractional parts multiplied by 1,000. 4. `microseconds` - the seconds field including fractional parts multiplied by 1,000,000. Here are examples: ```sql spark-sql> SELECT EXTRACT(EPOCH FROM TIMESTAMP '2019-08-11 19:07:30.123456'); 1565550450.123456 spark-sql> SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-01'); 2005 spark-sql> SELECT EXTRACT(MILLISECONDS FROM TIMESTAMP '2019-08-11 19:07:30.123456'); 30123.456 spark-sql> SELECT EXTRACT(MICROSECONDS FROM TIMESTAMP '2019-08-11 19:07:30.123456'); 30123456 ``` ## How was this patch tested? Added new tests to `DateExpressionsSuite`, and uncommented existing tests in `extract.sql` and `pgSQL/date.sql`. Closes #25408 from MaxGekk/extract-ext3. Authored-by: Maxim Gekk <[email protected]> Signed-off-by: Dongjoon Hyun <[email protected]>
…adoop configuration ## What changes were proposed in this pull request? 1. PythonHadoopUtil.mapToConf generates a Configuration with loadDefaults disabled. 2. Merging of the Hadoop conf in several places of PythonRDD is made consistent. ## How was this patch tested? Added a new test and ran existing tests. Closes #25002 from advancedxy/SPARK-28203. Authored-by: Xianjin YE <[email protected]> Signed-off-by: HyukjinKwon <[email protected]>
## What changes were proposed in this pull request? We add support for the V2SessionCatalog for saveAsTable, such that V2 tables can plug in and leverage existing DataFrameWriter.saveAsTable APIs to write and create tables through the session catalog. ## How was this patch tested? Unit tests. A lot of tests broke under hive when things were not working properly under `ResolveTables`, therefore I believe the current set of tests should be sufficient in testing the table resolution and read code paths. Closes #25402 from brkyvz/saveAsV2. Lead-authored-by: Burak Yavuz <[email protected]> Co-authored-by: Burak Yavuz <[email protected]> Signed-off-by: Wenchen Fan <[email protected]>
…ke source param handling more robust ## What changes were proposed in this pull request? [SPARK-28163](https://issues.apache.org/jira/browse/SPARK-28163) fixed a bug, and during the analysis we concluded it would be more robust to use `CaseInsensitiveMap` inside the Kafka source. This way, fewer lower/upper case problems will arise in the future. Please note this PR doesn't intend to solve any actual problem but to finish the concept added in [SPARK-28163](https://issues.apache.org/jira/browse/SPARK-28163) (in a fix PR I didn't want to add too invasive changes). In this PR I've changed `Map[String, String]` to `CaseInsensitiveMap[String]` to enforce the usage. These are the main use cases: * `contains` => `CaseInsensitiveMap` solves it * `get...` => `CaseInsensitiveMap` solves it * `filter` => keys must be converted to lowercase because there is no guarantee that the incoming map has such a key set * `find` => keys must be converted to lowercase because there is no guarantee that the incoming map has such a key set * passing parameters to the Kafka consumer/producer => keys must be converted to lowercase because there is no guarantee that the incoming map has such a key set ## How was this patch tested? Existing unit tests. Closes #25418 from gaborgsomogyi/SPARK-28695. Authored-by: Gabor Somogyi <[email protected]> Signed-off-by: Wenchen Fan <[email protected]>
## What changes were proposed in this pull request?
Hive uses an incorrect **InputFormat** (`org.apache.hadoop.mapred.SequenceFileInputFormat`) to read Spark's **Parquet** bucketed data source tables.
Spark side:
```sql
spark-sql> CREATE TABLE t (c1 INT, c2 INT) USING parquet CLUSTERED BY (c1) SORTED BY (c1) INTO 2 BUCKETS;
2019-04-29 17:52:05 WARN HiveExternalCatalog:66 - Persisting bucketed data source table `default`.`t` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
spark-sql> DESC FORMATTED t;
c1 int NULL
c2 int NULL
# Detailed Table Information
Database default
Table t
Owner yumwang
Created Time Mon Apr 29 17:52:05 CST 2019
Last Access Thu Jan 01 08:00:00 CST 1970
Created By Spark 2.4.0
Type MANAGED
Provider parquet
Num Buckets 2
Bucket Columns [`c1`]
Sort Columns [`c1`]
Table Properties [transient_lastDdlTime=1556531525]
Location file:/user/hive/warehouse/t
Serde Library org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat org.apache.hadoop.mapred.SequenceFileInputFormat
OutputFormat org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
Storage Properties [serialization.format=1]
```
Hive side:
```sql
hive> DESC FORMATTED t;
OK
# col_name data_type comment
c1 int
c2 int
# Detailed Table Information
Database: default
Owner: root
CreateTime: Wed May 08 03:38:46 GMT-07:00 2019
LastAccessTime: UNKNOWN
Retention: 0
Location: file:/user/hive/warehouse/t
Table Type: MANAGED_TABLE
Table Parameters:
bucketing_version spark
spark.sql.create.version 3.0.0-SNAPSHOT
spark.sql.sources.provider parquet
spark.sql.sources.schema.bucketCol.0 c1
spark.sql.sources.schema.numBucketCols 1
spark.sql.sources.schema.numBuckets 2
spark.sql.sources.schema.numParts 1
spark.sql.sources.schema.numSortCols 1
spark.sql.sources.schema.part.0 {\"type\":\"struct\",\"fields\":[{\"name\":\"c1\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"c2\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}}]}
spark.sql.sources.schema.sortCol.0 c1
transient_lastDdlTime 1557311926
# Storage Information
SerDe Library: org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
InputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
Compressed: No
Num Buckets: -1
Bucket Columns: []
Sort Columns: []
Storage Desc Params:
path file:/user/hive/warehouse/t
serialization.format 1
```
So it is a non-bucketed table on the Hive side. This PR sets the `SerDe` correctly so that Hive can read these tables.
Related code:
https://github.com/apache/spark/blob/33f3c48cac087e079b9c7e342c2e58b16eaaa681/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala#L976-L990
https://github.com/apache/spark/blob/f9776e389215255dc61efaa2eddd92a1fa754b48/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala#L444-L459
## How was this patch tested?
unit tests
Closes #24486 from wangyum/SPARK-27592.
Authored-by: Yuming Wang <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
## What changes were proposed in this pull request? New documentation to explain the Web UI Jobs page in detail and link it to the monitoring page. New images are included for better explanation. ## How was this patch tested? This pull request contains only documentation. I have generated it using "jekyll build" to ensure that it's OK. Closes #25424 from planga82/feature/SPARK-28543_ImproveWebUIDocs. Lead-authored-by: Unknown <[email protected]> Co-authored-by: Pablo <[email protected]> Signed-off-by: Sean Owen <[email protected]>
ulysses-you pushed a commit that referenced this pull request on Oct 25, 2019
…ver)QueryTestSuite ### What changes were proposed in this pull request? This PR adds 2 changes regarding exception handling in `SQLQueryTestSuite` and `ThriftServerQueryTestSuite` - fixes an expected output sorting issue in `ThriftServerQueryTestSuite` as if there is an exception then there is no need for sort - introduces common exception handling in those 2 suites with a new `handleExceptions` method ### Why are the changes needed? Currently `ThriftServerQueryTestSuite` passes on master, but it fails on one of my PRs (apache#23531) with this error (https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111651/testReport/org.apache.spark.sql.hive.thriftserver/ThriftServerQueryTestSuite/sql_3/): ``` org.scalatest.exceptions.TestFailedException: Expected " [Recursion level limit 100 reached but query has not exhausted, try increasing spark.sql.cte.recursion.level.limit org.apache.spark.SparkException] ", but got " [org.apache.spark.SparkException Recursion level limit 100 reached but query has not exhausted, try increasing spark.sql.cte.recursion.level.limit] " Result did not match for query #4 WITH RECURSIVE r(level) AS ( VALUES (0) UNION ALL SELECT level + 1 FROM r ) SELECT * FROM r ``` The unexpected reversed order of expected output (error message comes first, then the exception class) is due to this line: https://github.com/apache/spark/pull/26028/files#diff-b3ea3021602a88056e52bf83d8782de8L146. It should not sort the expected output if there was an error during execution. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Existing UTs. Closes apache#26028 from peter-toth/SPARK-29359-better-exception-handling. Authored-by: Peter Toth <[email protected]> Signed-off-by: Yuming Wang <[email protected]>
ulysses-you pushed a commit that referenced this pull request on Jan 17, 2023
### What changes were proposed in this pull request? This PR introduces sasl retry count in RetryingBlockTransferor. ### Why are the changes needed? Previously a boolean variable, saslTimeoutSeen, was used. However, the boolean variable wouldn't cover the following scenario: 1. SaslTimeoutException 2. IOException 3. SaslTimeoutException 4. IOException Even though IOException at #2 is retried (resulting in increment of retryCount), the retryCount would be cleared at step #4. Since the intention of saslTimeoutSeen is to undo the increment due to retrying SaslTimeoutException, we should keep a counter for SaslTimeoutException retries and subtract the value of this counter from retryCount. ### Does this PR introduce _any_ user-facing change? No ### How was this patch tested? New test is added, courtesy of Mridul. Closes apache#39611 from tedyu/sasl-cnt. Authored-by: Ted Yu <[email protected]> Signed-off-by: Mridul Muralidharan <mridul<at>gmail.com>
ulysses-you pushed a commit that referenced this pull request on Mar 22, 2023
…edExpression() ### What changes were proposed in this pull request? In `EquivalentExpressions.addExpr()`, add a guard `supportedExpression()` to make it consistent with `addExprTree()` and `getExprState()`. ### Why are the changes needed? This fixes a regression caused by apache#39010 which added the `supportedExpression()` to `addExprTree()` and `getExprState()` but not `addExpr()`. One example of a use case affected by the inconsistency is the `PhysicalAggregation` pattern in physical planning. There, it calls `addExpr()` to deduplicate the aggregate expressions, and then calls `getExprState()` to deduplicate the result expressions. Guarding inconsistently will cause the aggregate and result expressions go out of sync, eventually resulting in query execution error (or whole-stage codegen error). ### Does this PR introduce _any_ user-facing change? This fixes a regression affecting Spark 3.3.2+, where it may manifest as an error running aggregate operators with higher-order functions. Example running the SQL command: ```sql select max(transform(array(id), x -> x)), max(transform(array(id), x -> x)) from range(2) ``` example error message before the fix: ``` java.lang.IllegalStateException: Couldn't find max(transform(array(id#0L), lambdafunction(lambda x#2L, lambda x#2L, false)))#4 in [max(transform(array(id#0L), lambdafunction(lambda x#1L, lambda x#1L, false)))#3] ``` after the fix this error is gone. ### How was this patch tested? Added new test cases to `SubexpressionEliminationSuite` for the immediate issue, and to `DataFrameAggregateSuite` for an example of user-visible symptom. Closes apache#40473 from rednaxelafx/spark-42851. Authored-by: Kris Mok <[email protected]> Signed-off-by: Wenchen Fan <[email protected]>
ulysses-you pushed a commit that referenced this pull request on Nov 14, 2023
…edExpression() ### What changes were proposed in this pull request? In `EquivalentExpressions.addExpr()`, add a guard `supportedExpression()` to make it consistent with `addExprTree()` and `getExprState()`. ### Why are the changes needed? This fixes a regression caused by apache#39010 which added the `supportedExpression()` to `addExprTree()` and `getExprState()` but not `addExpr()`. One example of a use case affected by the inconsistency is the `PhysicalAggregation` pattern in physical planning. There, it calls `addExpr()` to deduplicate the aggregate expressions, and then calls `getExprState()` to deduplicate the result expressions. Guarding inconsistently will cause the aggregate and result expressions go out of sync, eventually resulting in query execution error (or whole-stage codegen error). ### Does this PR introduce _any_ user-facing change? This fixes a regression affecting Spark 3.3.2+, where it may manifest as an error running aggregate operators with higher-order functions. Example running the SQL command: ```sql select max(transform(array(id), x -> x)), max(transform(array(id), x -> x)) from range(2) ``` example error message before the fix: ``` java.lang.IllegalStateException: Couldn't find max(transform(array(id#0L), lambdafunction(lambda x#2L, lambda x#2L, false)))#4 in [max(transform(array(id#0L), lambdafunction(lambda x#1L, lambda x#1L, false)))#3] ``` after the fix this error is gone. ### How was this patch tested? Added new test cases to `SubexpressionEliminationSuite` for the immediate issue, and to `DataFrameAggregateSuite` for an example of user-visible symptom. Closes apache#40473 from rednaxelafx/spark-42851. Authored-by: Kris Mok <[email protected]> Signed-off-by: Wenchen Fan <[email protected]> (cherry picked from commit ef0a76e) Signed-off-by: Wenchen Fan <[email protected]>
ulysses-you pushed a commit that referenced this pull request on Apr 26, 2024
### What changes were proposed in this pull request? In the `Window` node, both `partitionSpec` and `orderSpec` must be orderable, but the current type check only verifies `orderSpec` is orderable. This can cause an error in later optimizing phases. Given a query: ``` with t as (select id, map(id, id) as m from range(0, 10)) select rank() over (partition by m order by id) from t ``` Before the PR, it fails with an `INTERNAL_ERROR`: ``` org.apache.spark.SparkException: [INTERNAL_ERROR] grouping/join/window partition keys cannot be map type. SQLSTATE: XX000 at org.apache.spark.SparkException$.internalError(SparkException.scala:92) at org.apache.spark.SparkException$.internalError(SparkException.scala:96) at org.apache.spark.sql.catalyst.optimizer.NormalizeFloatingNumbers$.needNormalize(NormalizeFloatingNumbers.scala:103) at org.apache.spark.sql.catalyst.optimizer.NormalizeFloatingNumbers$.org$apache$spark$sql$catalyst$optimizer$NormalizeFloatingNumbers$$needNormalize(NormalizeFloatingNumbers.scala:94) ... ``` After the PR, it fails with a `EXPRESSION_TYPE_IS_NOT_ORDERABLE`, which is expected: ``` org.apache.spark.sql.catalyst.ExtendedAnalysisException: [EXPRESSION_TYPE_IS_NOT_ORDERABLE] Column expression "m" cannot be sorted because its type "MAP<BIGINT, BIGINT>" is not orderable. SQLSTATE: 42822; line 2 pos 53; Project [RANK() OVER (PARTITION BY m ORDER BY id ASC NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)#4] +- Project [id#1L, m#0, RANK() OVER (PARTITION BY m ORDER BY id ASC NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)#4, RANK() OVER (PARTITION BY m ORDER BY id ASC NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)#4] +- Window [rank(id#1L) windowspecdefinition(m#0, id#1L ASC NULLS FIRST, specifiedwindowframe(RowFrame, unboundedpreceding$(), currentrow$())) AS RANK() OVER (PARTITION BY m ORDER BY id ASC NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)#4], [m#0], [id#1L ASC NULLS FIRST] +- Project [id#1L, m#0] +- SubqueryAlias t +- SubqueryAlias t +- Project [id#1L, map(id#1L, id#1L) AS m#0] +- Range (0, 10, step=1, splits=None) at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:52) ... ``` ### How was this patch tested? Unit test. Closes apache#45730 from chenhao-db/SPARK-47572. Authored-by: Chenhao Li <[email protected]> Signed-off-by: Wenchen Fan <[email protected]>