2 changes: 1 addition & 1 deletion sql/core/src/main/scala/org/apache/spark/sql/Column.scala
@@ -84,7 +84,7 @@ class TypedColumn[-T, U](
* col("`a.column.with.dots`") // Escape `.` in column names.
* $"columnName" // Scala short hand for a named column.
* expr("a + 1") // A column that is constructed from a parsed SQL Expression.
* lit("1") // A column that produces a literal (constant) value.
* lit("abc") // A column that produces a literal (constant) value.
Contributor Author:
cc @marmbrus I changed it to a string literal. Otherwise some users might think that the way to specify a numeric literal is also to wrap it in a string.

Contributor:
Good point.

* }}}
*
* [[Column]] objects can be composed to form complex expressions:
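As an aside on the review discussion above, here is a minimal sketch (not part of the patch) of how `lit` distinguishes numeric and string literals; it assumes only the standard functions import:

```scala
import org.apache.spark.sql.functions.lit

// Numeric literals are passed unquoted; wrapping a number in quotes would
// produce a string literal instead.
val numericLiteral = lit(1)     // Column holding the integer constant 1
val stringLiteral  = lit("abc") // Column holding the string constant "abc"
```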
13 changes: 13 additions & 0 deletions sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala
@@ -1421,6 +1421,19 @@ class DataFrame private[sql](
*/
def first(): Row = head()

+ /**
+  * Concise syntax for chaining custom transformations.
+  * {{{
+  *   def featurize(ds: DataFrame) = ...
+  *
+  *   df
+  *     .transform(featurize)
+  *     .transform(...)
+  * }}}
+  * @since 1.6.0
+  */
+ def transform[U](t: DataFrame => DataFrame): DataFrame = t(this)

/**
* Returns a new RDD by applying a function to all rows of this DataFrame.
* @group rdd
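Below is a minimal usage sketch of the new transform method. The helper functions and the sqlContext entry point are illustrative assumptions, not part of the patch:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, lit}

// Hypothetical custom transformations; any DataFrame => DataFrame function fits.
def withOne(df: DataFrame): DataFrame = df.withColumn("one", lit(1))
def withDoubled(df: DataFrame): DataFrame = df.withColumn("doubled", col("id") * 2)

val df = sqlContext.range(5) // assumes an existing SQLContext (Spark 1.6)

// transform(t) simply returns t(this), so user-defined transformations chain
// in the same fluent style as built-in DataFrame methods.
val result = df.transform(withOne).transform(withDoubled)
```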