@@ -195,16 +195,21 @@ case class AlterTableRenameCommand(
DDLUtils.verifyAlterTableType(catalog, table, isView)
// If an exception is thrown here we can just assume the table is uncached;
// this can happen with Hive tables when the underlying catalog is in-memory.
-    val wasCached = Try(sparkSession.catalog.isCached(oldName.unquotedString)).getOrElse(false)
Contributor Author:

The existing implementation uses Catalog APIs (isCached), whereas this PR uses CacheManager directly. If this approach is not desired, we can update Catalog API to expose StorageLevel.

-    if (wasCached) {
+    // If `optStorageLevel` is defined, the old table was cached.
+    val optStorageLevel = Try {
Member:

Will lookupCachedData actually throw an exception? If it's not cached, shouldn't None be returned?

Contributor Author:

lookupCachedData doesn't throw an exception. I believe the existing code is wrapped in Try because isCached calls spark.table, which could be what the comment "this can happen with Hive tables when the underlying catalog is in-memory" refers to. So I was keeping the same behavior.

Do you think the Try can be removed?
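To make the Try question concrete, here is a small Spark-free Scala sketch; the `lookup` helper is hypothetical, standing in for any Option-returning call like lookupCachedData:

```scala
import scala.util.Try

// Hypothetical stand-in for an Option-returning call such as lookupCachedData.
def lookup(key: String): Option[String] =
  if (key == "cached") Some("MEMORY_ONLY") else None

// For a call that never throws, Try(...).getOrElse(None) is equivalent to the
// call itself; the wrapper only changes behavior if something inside the block
// (in the PR's code, that would be spark.table) can actually throw.
assert(Try(lookup("cached")).getOrElse(None) == Some("MEMORY_ONLY"))
assert(Try(lookup("missing")).getOrElse(None) == None)
```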

Contributor:

Yea I think we can remove it

Member:

+1 to remove it. We use lookupCachedData in many places and I think we don't need to add Try here.

+      val optCachedData = sparkSession.sharedState.cacheManager.lookupCachedData(
+        sparkSession.table(oldName.unquotedString))
+      optCachedData.map(_.cachedRepresentation.cacheBuilder.storageLevel)
+    }.getOrElse(None)
+    optStorageLevel.foreach { _ =>
Contributor @cloud-fan (Dec 15, 2020):

nit: `if (optStorageLevel.isDefined)`. It's clearer as a check to see whether the table is cached or not.

Member:

+1

Contributor Author:

updated.

CommandUtils.uncacheTableOrView(sparkSession, oldName.unquotedString)
}
// Invalidate the table last, otherwise uncaching the table would load the logical plan
// back into the hive metastore cache
catalog.refreshTable(oldName)
catalog.renameTable(oldName, newName)
-    if (wasCached) {
-      sparkSession.catalog.cacheTable(newName.unquotedString)
+    optStorageLevel.foreach { storageLevel =>
+      sparkSession.catalog.cacheTable(newName.unquotedString, storageLevel)
Member:

Does this miss the tableName if there is one in the original cache?

Contributor Author:

Sorry, I didn't get this question. This is creating a new cache with a new table name.

Member:

Hm, you can check a change like #30769, especially how it recaches the table. There is a cacheName parameter; if the table was cached with a cache name, I think we should keep it when recaching.

    val cache = session.sharedState.cacheManager.lookupCachedData(v2Relation)
    session.sharedState.cacheManager.uncacheQuery(session, v2Relation, cascade = true)
    if (recacheTable && cache.isDefined) {
      // save the cache name and cache level for recreation
      val cacheName = cache.get.cachedRepresentation.cacheBuilder.tableName
      val cacheLevel = cache.get.cachedRepresentation.cacheBuilder.storageLevel

      // recache with the same name and cache level.
      val ds = Dataset.ofRows(session, v2Relation)
      session.sharedState.cacheManager.cacheQuery(ds, cacheName, cacheLevel)
    }

Contributor:

The previous code also seems to recache with the new name?

Member:

No, the refresh table command for v2 doesn't recache the table before #30769.

Contributor:

I mean the previous code in AlterTableRenameCommand. We shouldn't change its behavior regarding cache name in this bug fix PR.

Member:

Hmm okay, actually it also sounds like a bug if the alter table command changes the cache name. I'm fine to leave it unchanged here.

Contributor Author:

I believe the cache name is used for debugging purposes only (for the RDD name and InMemoryTableScanExec). So if the cache name, which is tied to the table name, doesn't change when the table is renamed, wouldn't it cause confusion, since it would still refer to the old table name? I can do a follow-up PR if this seems like a bug.
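For reference, the name discussed here is the tableName argument of CacheManager.cacheQuery; a rough sketch of how a caller sets it (assumes an active SparkSession named `spark` and an existing table `old`; not code from this PR):

```scala
import org.apache.spark.storage.StorageLevel

// Cache the table under an explicit name and storage level. The name ends up
// in cacheBuilder.tableName and is surfaced for display (e.g. the in-memory
// RDD's name in the UI) rather than used to look the cache entry up.
val df = spark.table("old")
spark.sharedState.cacheManager.cacheQuery(df, Some("old"), StorageLevel.MEMORY_ONLY)
```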

}
}
Seq.empty[Row]
@@ -1285,4 +1285,24 @@ class CachedTableSuite extends QueryTest with SQLTestUtils
assert(spark.sharedState.cacheManager.lookupCachedData(sql("select 1, 2")).isDefined)
}
}

+  test("SPARK-33786: Cache's storage level should be respected when a table name is altered.") {
+    withTable("old", "new") {
+      withTempPath { path =>
+        def getStorageLevel(tableName: String): StorageLevel = {
+          val table = spark.table(tableName)
+          val cachedData = spark.sharedState.cacheManager.lookupCachedData(table).get
+          cachedData.cachedRepresentation.cacheBuilder.storageLevel
+        }
+        Seq(1 -> "a").toDF("i", "j").write.parquet(path.getCanonicalPath)
+        sql(s"CREATE TABLE old USING parquet LOCATION '${path.toURI}'")
+        sql("CACHE TABLE old OPTIONS('storageLevel' 'MEMORY_ONLY')")
+        val oldStorageLevel = getStorageLevel("old")
+
+        sql("ALTER TABLE old RENAME TO new")
+        val newStorageLevel = getStorageLevel("new")
+        assert(oldStorageLevel === newStorageLevel)
+      }
+    }
+  }
}