
Commit 0ab4fb0

marmbrus authored and markhamstra committed
[SQL][DOCS] Improve table caching section
Author: Michael Armbrust <michael@databricks.com>

Closes apache#2434 from marmbrus/patch-1 and squashes the following commits:

67215be [Michael Armbrust] [SQL][DOCS] Improve table caching section

(cherry picked from commit cbf983b)

Signed-off-by: Michael Armbrust <michael@databricks.com>
1 parent b7d95bc · commit 0ab4fb0

1 file changed: 4 additions & 4 deletions

docs/sql-programming-guide.md
@@ -801,12 +801,12 @@ turning on some experimental options.
 
 ## Caching Data In Memory
 
-Spark SQL can cache tables using an in-memory columnar format by calling `cacheTable("tableName")`.
+Spark SQL can cache tables using an in-memory columnar format by calling `sqlContext.cacheTable("tableName")`.
 Then Spark SQL will scan only required columns and will automatically tune compression to minimize
-memory usage and GC pressure. You can call `uncacheTable("tableName")` to remove the table from memory.
+memory usage and GC pressure. You can call `sqlContext.uncacheTable("tableName")` to remove the table from memory.
 
-Note that if you call `cache` rather than `cacheTable`, tables will _not_ be cached using
-the in-memory columnar format, and therefore `cacheTable` is strongly recommended for this use case.
+Note that if you call `schemaRDD.cache()` rather than `sqlContext.cacheTable(...)`, tables will _not_ be cached using
+the in-memory columnar format, and therefore `sqlContext.cacheTable(...)` is strongly recommended for this use case.
 
 Configuration of in-memory caching can be done using the `setConf` method on SQLContext or by running
 `SET key=value` commands using SQL.
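For context, here is a minimal Scala sketch of the workflow the updated section recommends, assuming a Spark 1.1-era `SQLContext`. It is not part of this commit; the `Record` case class, the `records` table name, and the `spark.sql.inMemoryColumnarStorage.compressed` config key are illustrative assumptions taken from the surrounding guide, not from this patch.

```scala
// Sketch only: cache a registered table in the in-memory columnar format,
// query it, then evict it. Assumes an existing SparkContext `sc` and a
// Spark 1.1-era API.
import org.apache.spark.sql.SQLContext

case class Record(key: Int, value: String)  // illustrative schema

val sqlContext = new SQLContext(sc)
import sqlContext.createSchemaRDD  // implicit RDD -> SchemaRDD conversion

val records = sc.parallelize(1 to 100).map(i => Record(i, s"val_$i"))
records.registerTempTable("records")

// Preferred: cache through the SQLContext so the in-memory columnar
// format is used (plain schemaRDD.cache() would not use it).
sqlContext.cacheTable("records")

// Queries against the cached table scan only the required columns.
val smallKeys = sqlContext.sql("SELECT key FROM records WHERE key < 10").collect()

// Optional tuning via setConf; the key name is assumed from the guide's
// configuration table, not from this commit.
sqlContext.setConf("spark.sql.inMemoryColumnarStorage.compressed", "true")

// Remove the table from memory when it is no longer needed.
sqlContext.uncacheTable("records")
```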

0 commit comments
