3 changes: 3 additions & 0 deletions docs/sql-programming-guide.md
@@ -297,6 +297,9 @@ reflection and become the names of the columns. Case classes can also be nested
types such as `Seq`s or `Array`s. This RDD can be implicitly converted to a DataFrame and then be
registered as a table. Tables can be used in subsequent SQL statements.

Spark encoders convert JVM objects to and from Spark SQL's internal representation. To create a Dataset, Spark requires an encoder of the form `Encoder[T]`, where `T` is the type to be encoded. Creating a Dataset from a custom object type for which no encoder can be derived fails with `java.lang.UnsupportedOperationException: No Encoder found for <class name>`.
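
As a minimal sketch of how the `Encoder[T]` requirement is normally met (the `Point` class and session setup here are illustrative assumptions, not part of the guide's bundled example), encoders for case classes are derived implicitly:

{% highlight scala %}
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}

val spark = SparkSession.builder().appName("EncoderSketch").getOrCreate()
// The implicits bring Encoder[T] instances for primitives,
// products (case classes), and other supported types into scope.
import spark.implicits._

case class Point(x: Double, y: Double)

// An Encoder[Point] is derived implicitly because Point is a case class:
val ds = Seq(Point(1.0, 2.0), Point(3.0, 4.0)).toDS()

// The same encoder can also be obtained explicitly:
val pointEncoder: Encoder[Point] = Encoders.product[Point]
{% endhighlight %}
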
It is trivial, but maybe spark -> Spark? I am not an expert in grammar, but to my knowledge, capitalizing a proper noun is correct.

Yes, @HarshSharma8, this still doesn't address the comments. Also, use back-ticks for code, not bold. What is Object-Name?

To work around this, use the Kryo encoder (`Encoders.kryo[T]`). It tells Spark SQL to serialize the custom object with Kryo into a single binary field, so the object can be stored in and retrieved from a Dataset.
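
For illustration, a minimal sketch of this workaround (the `Person` class and session setup are assumptions for the example, not part of the bundled `SparkSQLExample.scala`):

{% highlight scala %}
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}

// An arbitrary class (not a case class), for which Spark SQL
// cannot derive an encoder on its own.
class Person(val name: String, val age: Int)

val spark = SparkSession.builder().appName("KryoEncoderSketch").getOrCreate()

// Encoders.kryo serializes the whole object with Kryo into a single
// binary column, avoiding the "No Encoder found for Person" failure.
implicit val personEncoder: Encoder[Person] = Encoders.kryo[Person]

val people = spark.createDataset(Seq(new Person("Alice", 29)))
// The resulting schema is a single binary column named "value".
people.printSchema()
{% endhighlight %}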

{% include_example schema_inferring scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
</div>
