|
6 | 6 | "source": [ |
7 | 7 | "# Quickstart\n", |
8 | 8 | "\n", |
9 | | - "This is a short introduction and quickstart for PySpark DataFrame. PySpark DataFrame is lazily evaludated and implemented on thetop of [RDD](https://spark.apache.org/docs/latest/rdd-programming-guide.html#overview). When the data is [transformed](https://spark.apache.org/docs/latest/rdd-programming-guide.html#transformations), it does not actually compute but plans how to compute later. When the [actions](https://spark.apache.org/docs/latest/rdd-programming-guide.html#actions) such as `collect()` are explicitly called, the computation starts.\n", |
| 9 | + "This is a short introduction and quickstart for the PySpark DataFrame API. PySpark DataFrames are lazily evaluated. They are implemented on top of [RDD](https://spark.apache.org/docs/latest/rdd-programming-guide.html#overview)s. When Spark [transforms](https://spark.apache.org/docs/latest/rdd-programming-guide.html#transformations) data, it does not immediately compute the transformation but plans how to compute later. When [actions](https://spark.apache.org/docs/latest/rdd-programming-guide.html#actions) such as `collect()` are explicitly called, the computation starts.\n", |
10 | 10 | "This notebook shows the basic usages of the DataFrame, geared mainly for new users. You can run the latest version of these examples by yourself on a live notebook [here](https://mybinder.org/v2/gh/databricks/apache/master?filepath=python%2Fdocs%2Fsource%2Fgetting_started%2Fquickstart.ipynb).\n", |
11 | 11 | "\n", |
12 | 12 | "There are also other useful information in Apache Spark documentation site, see the latest version of [Spark SQL and DataFrames](https://spark.apache.org/docs/latest/sql-programming-guide.html), [RDD Programming Guide](https://spark.apache.org/docs/latest/rdd-programming-guide.html), [Structured Streaming Programming Guide](https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html), [Spark Streaming Programming Guide](https://spark.apache.org/docs/latest/streaming-programming-guide.html) and [Machine Learning Library (MLlib) Guide](https://spark.apache.org/docs/latest/ml-guide.html).\n", |
|
242 | 242 | "cell_type": "markdown", |
243 | 243 | "metadata": {}, |
244 | 244 | "source": [ |
245 | | - "Alternatively, you can enable `spark.sql.repl.eagerEval.enabled` configuration to enable the eager evaluation of PySpark DataFrame in notebooks such as Jupyter." |
| 245 | + "Alternatively, you can enable `spark.sql.repl.eagerEval.enabled` configuration for the eager evaluation of PySpark DataFrame in notebooks such as Jupyter. The number of rows to show can be controled via `spark.sql.repl.eagerEval.maxNumRows` configuration." |
246 | 246 | ] |
247 | 247 | }, |
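| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "For example, a rough sketch (assuming the `spark` session and the `df` from the cells above) sets both configurations at runtime:" |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": {}, |
| | + "outputs": [], |
| | + "source": [ |
| | + "# Sketch: turn on eager evaluation and cap the number of rendered rows.\n", |
| | + "spark.conf.set(\"spark.sql.repl.eagerEval.enabled\", True)\n", |
| | + "spark.conf.set(\"spark.sql.repl.eagerEval.maxNumRows\", 20)\n", |
| | + "df  # now rendered as an HTML table without an explicit show()" |
| | + ] |
| | + }, |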
248 | 248 | { |
|
309 | 309 | "cell_type": "markdown", |
310 | 310 | "metadata": {}, |
311 | 311 | "source": [ |
312 | | - "Its schema and column names can be shown as below:" |
| 312 | + "You can see the DataFrame's schema and column names as follows:" |
313 | 313 | ] |
314 | 314 | }, |
315 | 315 | { |
|
392 | 392 | "cell_type": "markdown", |
393 | 393 | "metadata": {}, |
394 | 394 | "source": [ |
395 | | - "`DataFrame.collect()` collects the distributed data to the driver side as Python premitive representation. Note that this can throw out-of-memory error when the dataset is too larget to fit in the driver side because it collects all the data from executors to the driver side." |
| 395 | + "`DataFrame.collect()` collects the distributed data to the driver side as local data in Python. Note that this can throw an out-of-memory error when the dataset is too large to fit on the driver side, because it collects all the data from the executors to the driver." |
396 | 396 | ] |
397 | 397 | }, |
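| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "To keep the driver-side footprint small, a rough sketch (assuming the `df` above, and Spark 3.0+ for `DataFrame.tail`) fetches only a few rows instead:" |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": {}, |
| | + "outputs": [], |
| | + "source": [ |
| | + "# Sketch: bring only a handful of rows to the driver instead of the full dataset.\n", |
| | + "df.take(2)   # first two rows as a list of Row objects\n", |
| | + "df.tail(2)   # last two rows, also returned to the driver side" |
| | + ] |
| | + }, |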
398 | 398 | { |
|
448 | 448 | "cell_type": "markdown", |
449 | 449 | "metadata": {}, |
450 | 450 | "source": [ |
451 | | - "PySpark DataFrame also provides the conversion back to a [pandas DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) in order to leverage pandas APIs." |
| 451 | + "PySpark DataFrame also provides the conversion back to a [pandas DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) in order to leverage pandas APIs. Note that `toPandas` also collects all data to the driver side, which can easily cause an out-of-memory error when the data is too large to fit on the driver side." |
452 | 452 | ] |
453 | 453 | }, |
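| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "A rough sketch of bounding what is collected (assuming the `df` above; the row limit of 100 is only illustrative):" |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": {}, |
| | + "outputs": [], |
| | + "source": [ |
| | + "# Sketch: limit the rows before converting, so only a bounded amount of data\n", |
| | + "# reaches the driver as a pandas DataFrame.\n", |
| | + "pdf = df.limit(100).toPandas()\n", |
| | + "pdf.head()" |
| | + ] |
| | + }, |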
454 | 454 | { |
|
562 | 562 | "cell_type": "markdown", |
563 | 563 | "metadata": {}, |
564 | 564 | "source": [ |
565 | | - "In fact, most of column-weise operations return `Column`s." |
| 565 | + "In fact, most column-wise operations return `Column`s." |
566 | 566 | ] |
567 | 567 | }, |
568 | 568 | { |
|
685 | 685 | "source": [ |
686 | 686 | "## Applying a Function\n", |
687 | 687 | "\n", |
688 | | - "PySpark supports various UDFs and APIs to allow users to execute Python native functions. See also Pandas UDFs and Pandas Function APIs in User Guide. For instance, the example below allows users to directly use the APIs in [a pandas Series](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html) within Python native function." |
| 688 | + "PySpark supports various UDFs and APIs to allow users to execute native Python functions. See also the latest [Pandas UDFs](https://spark.apache.org/docs/latest/sql-pyspark-pandas-with-arrow.html#pandas-udfs-aka-vectorized-udfs) and [Pandas Function APIs](https://spark.apache.org/docs/latest/sql-pyspark-pandas-with-arrow.html#pandas-function-apis). For instance, the example below allows users to directly use the APIs of [a pandas Series](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html) within a native Python function." |
689 | 689 | ] |
690 | 690 | }, |
691 | 691 | { |
|
918 | 918 | "\n", |
919 | 919 | "CSV is straightforward and easy to use. Parquet and ORC are efficient and compact file formats to read and write faster.\n", |
920 | 920 | "\n", |
921 | | - "There are many other data sources available in PySpark such as JDBC, text, binaryFile, Avro, etc. See also \"Spark SQL, DataFrames and Datasets Guide\" in Apache Spark documentation." |
| 921 | + "There are many other data sources available in PySpark such as JDBC, text, binaryFile, Avro, etc. See also the latest [Spark SQL, DataFrames and Datasets Guide](https://spark.apache.org/docs/latest/sql-programming-guide.html) in the Apache Spark documentation." |
922 | 922 | ] |
923 | 923 | }, |
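| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "As a rough sketch, the built-in and third-party sources can also be addressed through the generic `format(...)` reader/writer API (the `example.orc` path below is only illustrative):" |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": {}, |
| | + "outputs": [], |
| | + "source": [ |
| | + "# Sketch: name the data source format explicitly via the generic reader/writer API.\n", |
| | + "df.write.format(\"orc\").mode(\"overwrite\").save(\"example.orc\")\n", |
| | + "spark.read.format(\"orc\").load(\"example.orc\").show()" |
| | + ] |
| | + }, |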
924 | 924 | { |
|
1063 | 1063 | "df.createOrReplaceTempView(\"tableA\")\n", |
1064 | 1064 | "spark.sql(\"SELECT count(*) from tableA\").show()" |
1065 | 1065 | ] |
| 1066 | + }, |
| 1067 | + { |
| 1068 | + "cell_type": "markdown", |
| 1069 | + "metadata": {}, |
| 1070 | + "source": [ |
| 1071 | + "In addition, UDFs can be registered and invoked in SQL out of the box:" |
| 1072 | + ] |
| 1073 | + }, |
| 1074 | + { |
| 1075 | + "cell_type": "code", |
| 1076 | + "execution_count": 31, |
| 1077 | + "metadata": {}, |
| 1078 | + "outputs": [ |
| 1079 | + { |
| 1080 | + "name": "stdout", |
| 1081 | + "output_type": "stream", |
| 1082 | + "text": [ |
| 1083 | + "+-----------+\n", |
| 1084 | + "|add_one(v1)|\n", |
| 1085 | + "+-----------+\n", |
| 1086 | + "| 2|\n", |
| 1087 | + "| 3|\n", |
| 1088 | + "| 4|\n", |
| 1089 | + "| 5|\n", |
| 1090 | + "| 6|\n", |
| 1091 | + "| 7|\n", |
| 1092 | + "| 8|\n", |
| 1093 | + "| 9|\n", |
| 1094 | + "+-----------+\n", |
| 1095 | + "\n" |
| 1096 | + ] |
| 1097 | + } |
| 1098 | + ], |
| 1099 | + "source": [ |
| 1100 | + "@pandas_udf(\"integer\")\n", |
| 1101 | + "def add_one(s: pd.Series) -> pd.Series:\n", |
| 1102 | + " return s + 1\n", |
| 1103 | + "\n", |
| 1104 | + "spark.udf.register(\"add_one\", add_one)\n", |
| 1105 | + "spark.sql(\"SELECT add_one(v1) FROM tableA\").show()" |
| 1106 | + ] |
| 1107 | + }, |
| 1108 | + { |
| 1109 | + "cell_type": "markdown", |
| 1110 | + "metadata": {}, |
| 1111 | + "source": [ |
| 1112 | + "These SQL expressions can be directly mixed with and used as PySpark columns." |
| 1113 | + ] |
| 1114 | + }, |
| 1115 | + { |
| 1116 | + "cell_type": "code", |
| 1117 | + "execution_count": 32, |
| 1118 | + "metadata": {}, |
| 1119 | + "outputs": [ |
| 1120 | + { |
| 1121 | + "name": "stdout", |
| 1122 | + "output_type": "stream", |
| 1123 | + "text": [ |
| 1124 | + "+-----------+\n", |
| 1125 | + "|add_one(v1)|\n", |
| 1126 | + "+-----------+\n", |
| 1127 | + "| 2|\n", |
| 1128 | + "| 3|\n", |
| 1129 | + "| 4|\n", |
| 1130 | + "| 5|\n", |
| 1131 | + "| 6|\n", |
| 1132 | + "| 7|\n", |
| 1133 | + "| 8|\n", |
| 1134 | + "| 9|\n", |
| 1135 | + "+-----------+\n", |
| 1136 | + "\n", |
| 1137 | + "+--------------+\n", |
| 1138 | + "|(count(1) > 0)|\n", |
| 1139 | + "+--------------+\n", |
| 1140 | + "| true|\n", |
| 1141 | + "+--------------+\n", |
| 1142 | + "\n" |
| 1143 | + ] |
| 1144 | + } |
| 1145 | + ], |
| 1146 | + "source": [ |
| 1147 | + "from pyspark.sql.functions import expr\n", |
| 1148 | + "\n", |
| 1149 | + "df.selectExpr('add_one(v1)').show()\n", |
| 1150 | + "df.select(expr('count(*)') > 0).show()" |
| 1151 | + ] |
1066 | 1152 | } |
1067 | 1153 | ], |
1068 | 1154 | "metadata": { |
|