
Commit 4ccaa81

Fix nits
1 parent b5bea82 commit 4ccaa81

1 file changed: 2 additions & 2 deletions

docs/sql-programming-guide.md
```diff
@@ -1690,7 +1690,7 @@ using the call `toPandas()` and when creating a Spark DataFrame from a Pandas Da
 the Spark configuration 'spark.sql.execution.arrow.enabled' to 'true'. This is disabled by default.
 
 In addition, optimizations enabled by 'spark.sql.execution.arrow.enabled' could fallback automatically
-to non-optimized implementations if an error occurs before the actual computation within Spark.
+to non-Arrow optimization implementation if an error occurs before the actual computation within Spark.
 This can be controlled by 'spark.sql.execution.arrow.fallback.enabled'.
 
 <div class="codetabs">
```
```diff
@@ -1804,7 +1804,7 @@ working with timestamps in `pandas_udf`s to get the best performance, see
 ## Upgrading From Spark SQL 2.3 to 2.4
 
 - Since Spark 2.4, Spark maximizes the usage of a vectorized ORC reader for ORC files by default. To do that, `spark.sql.orc.impl` and `spark.sql.orc.filterPushdown` change their default values to `native` and `true` respectively.
-- In PySpark, when Arrow optimization is enabled, previously `toPandas` just failed when Arrow optimization is unabled to be used whereas `createDataFrame` from Pandas DataFrame allowed the fallback to non-optimization. Now, both `toPandas` and `createDataFrame` from Pandas DataFrame allow the fallback by default, which can be switched by `spark.sql.execution.arrow.fallback.enabled`.
+- In PySpark, when Arrow optimization is enabled, previously `toPandas` just failed when Arrow optimization is unabled to be used whereas `createDataFrame` from Pandas DataFrame allowed the fallback to non-optimization. Now, both `toPandas` and `createDataFrame` from Pandas DataFrame allow the fallback by default, which can be switched off by `spark.sql.execution.arrow.fallback.enabled`.
 
 ## Upgrading From Spark SQL 2.2 to 2.3
 
```
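The fallback behavior these hunks document can be sketched in plain Python. This is a hypothetical illustration of the pattern only, not Spark's actual implementation: `arrow_to_pandas`, `plain_to_pandas`, and the plain-dict `conf` are made-up stand-ins for illustration.

```python
# Hypothetical sketch of the documented behavior: when the Arrow-optimized
# path fails before the actual computation starts, fall back to the
# non-Arrow path unless fallback has been switched off. The helper names
# and the plain-dict conf are illustrative stand-ins, not Spark APIs.

def arrow_to_pandas(df):
    # Stand-in for the Arrow-optimized conversion; pretend it fails early.
    raise RuntimeError("Arrow optimization could not be used")

def plain_to_pandas(df):
    # Stand-in for the non-Arrow conversion.
    return list(df)

def to_pandas(df, conf):
    arrow = conf.get("spark.sql.execution.arrow.enabled", "false") == "true"
    fallback = conf.get("spark.sql.execution.arrow.fallback.enabled", "true") == "true"
    if arrow:
        try:
            return arrow_to_pandas(df)
        except Exception:
            if not fallback:
                # Mirrors the pre-2.4 toPandas behavior: fail outright.
                raise
    return plain_to_pandas(df)

conf = {"spark.sql.execution.arrow.enabled": "true"}
print(to_pandas([1, 2, 3], conf))  # Arrow path fails, falls back: [1, 2, 3]
```

With `spark.sql.execution.arrow.fallback.enabled` set to `"false"`, the sketch re-raises instead of falling back, which is the switch-off behavior the second hunk describes.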
0 commit comments
