@@ -31,6 +31,7 @@ import org.apache.spark.sql.catalyst.expressions.{Expression, Literal}
 import org.apache.spark.sql.catalyst.parser.ParseException
 import org.apache.spark.sql.catalyst.plans.logical.{ColumnDefinition, CreateTable, LocalRelation, LogicalPlan, OptionList, RecoverPartitions, ShowFunctions, ShowNamespaces, ShowTables, UnresolvedTableSpec, View}
 import org.apache.spark.sql.catalyst.types.DataTypeUtils
+import org.apache.spark.sql.catalyst.util.CaseInsensitiveMap
 import org.apache.spark.sql.connector.catalog.{CatalogManager, SupportsNamespaces, TableCatalog}
 import org.apache.spark.sql.connector.catalog.CatalogV2Implicits.{CatalogHelper, MultipartIdentifierHelper, NamespaceHelper, TransformHelper}
 import org.apache.spark.sql.errors.QueryCompilationErrors
@@ -671,12 +672,9 @@ class CatalogImpl(sparkSession: SparkSession) extends Catalog {
     } else {
       CatalogTableType.MANAGED
     }
-    val location = if (storage.locationUri.isDefined) {
-      val locationStr = storage.locationUri.get.toString
-      Some(locationStr)
-    } else {
-      None
-    }
+
+    // The location in UnresolvedTableSpec should be the original user-provided path string.
Contributor: Just in case, shall we add a legacy config to restore the old behavior?

Contributor (Author): I thought about this. I did not add a config because the change in behavior only affects new tables; existing tables should continue working. I also suspect there is no real usage of this API for tables with special characters in the path. But I don't have a strong opinion. WDYT?

Contributor: OK, let's keep it simple. Double escaping is definitely a bug and no one should rely on it (existing tables won't be affected).
+    val location = CaseInsensitiveMap(options).get("path")
 
     val newOptions = OptionList(options.map { case (key, value) =>
       (key, Literal(value).asInstanceOf[Expression])
@@ -700,7 +700,8 @@ class CatalogSuite extends SharedSparkSession with AnalysisTest with BeforeAndAf
     val description = "this is a test table"
 
     withTable("t") {
-      withTempDir { dir =>
+      withTempDir { baseDir =>
+        val dir = new File(baseDir, "test%prefix")
        spark.catalog.createTable(
          tableName = "t",
          source = "json",
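The test directory name deliberately contains a `%`, the character most affected by the old behavior. A minimal sketch (plain `java.net.URI`, not the PR's code) of the double escaping the removed branch could cause: converting a raw path to a URI percent-encodes `%` once, and pushing the already-encoded string through the same conversion again encodes it a second time:

```scala
import java.net.URI

object DoubleEscapeDemo {
  def main(args: Array[String]): Unit = {
    val userPath = "/tmp/test%prefix" // raw path as the user typed it
    // The multi-argument URI constructor quotes '%' once: % -> %25
    val once = new URI("file", null, userPath, null)
    println(once.toString) // file:/tmp/test%25prefix
    // Re-converting the already-encoded string quotes it again: %25 -> %2525
    val twice = new URI("file", null, once.toString.stripPrefix("file:"), null)
    println(twice.getRawPath) // /tmp/test%2525prefix
  }
}
```

Keeping the original string (as the PR does) avoids the second conversion, so the table location round-trips unchanged.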