[SPARK-10181][SQL] Do kerberos login for credentials during hive client initialization #9272
Changes from 13 commits
```diff
@@ -21,6 +21,8 @@ import java.io.{File, PrintStream}
 import java.util.{Map => JMap}
 import javax.annotation.concurrent.GuardedBy
 
+import org.apache.hadoop.security.UserGroupInformation
+
 import scala.collection.JavaConverters._
 import scala.language.reflectiveCalls
 
@@ -35,7 +37,7 @@ import org.apache.hadoop.hive.ql.{Driver, metadata}
 import org.apache.hadoop.hive.shims.{HadoopShims, ShimLoader}
 import org.apache.hadoop.util.VersionInfo
 
-import org.apache.spark.Logging
+import org.apache.spark.{SparkConf, SparkException, Logging}
 import org.apache.spark.sql.catalyst.expressions.Expression
 import org.apache.spark.sql.execution.QueryExecutionException
 import org.apache.spark.util.{CircularBuffer, Utils}
@@ -150,6 +152,21 @@ private[hive] class ClientWrapper(
     val original = Thread.currentThread().getContextClassLoader
     // Switch to the initClassLoader.
     Thread.currentThread().setContextClassLoader(initClassLoader)
+
+    val sparkConf = new SparkConf
```
Contributor (yhuai): Instead of creating a new SparkConf, can we use SparkEnv.get.conf?

Contributor, Author (yolandagao): @yhuai Sorry for the late response. I tested after changing to SparkEnv.get.conf, but it didn't work. The reason is that YARN's Client.scala resets the spark.yarn.keytab property during setupCredentials by appending a random string to the keytab file name, and that value is then used as the link name in the distributed cache. I think the link name should be kept separate from the original keytab setting, e.g. by using different property names. (A sketch of this behavior follows this thread.)

Contributor (yhuai): @yolandagao Thanks for the explanation. Can you add comments to your code (including why we need to put those confs into sysProps and why we need to create a new SparkConf here)? Basically, we need to document the flow of how these confs get propagated; otherwise it is not obvious why this change is needed. Thanks!

Contributor, Author (yolandagao): @yhuai Sure, I should have done this earlier to make everything clearer :) Added some comments; please help review. Thank you!
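To make the failure mode concrete, here is a minimal sketch of the rewrite described above; the object and method names are hypothetical, not the actual Client.scala code.

```scala
import java.io.File
import java.util.UUID

import org.apache.spark.SparkConf

// Hypothetical sketch of the behavior described above: during
// setupCredentials, YARN's Client replaces spark.yarn.keytab with a
// randomized link name for the distributed cache, so the conf no longer
// points at a keytab file that exists on the local filesystem.
object KeytabRenameSketch {
  def rewriteKeytabConf(conf: SparkConf): Unit = {
    val localKeytab = conf.get("spark.yarn.keytab") // e.g. /etc/security/app.keytab
    val linkName = new File(localKeytab).getName + "-" + UUID.randomUUID().toString
    conf.set("spark.yarn.keytab", linkName) // now a cache link name, not a local path
  }
}
```

This is presumably why the patch builds a fresh SparkConf from system properties, which still hold the original keytab path, instead of reading the rewritten value from SparkEnv.get.conf.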
```diff
+    if (sparkConf.contains("spark.yarn.principal") && sparkConf.contains("spark.yarn.keytab")) {
```
Contributor (yhuai): Let's make it clear that we set these two settings in SparkSubmit.
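A hedged sketch of the propagation flow being requested here, assuming SparkSubmit exports the two options as JVM system properties (the values below are placeholders): a freshly constructed SparkConf picks up every spark.* system property by default.

```scala
import org.apache.spark.SparkConf

// Sketch of the conf propagation: SparkSubmit places the principal and
// keytab options into JVM system properties, and a new SparkConf reads
// all "spark."-prefixed system properties in its default constructor.
object ConfPropagationSketch {
  def main(args: Array[String]): Unit = {
    sys.props("spark.yarn.principal") = "user@EXAMPLE.COM" // placeholder
    sys.props("spark.yarn.keytab") = "/tmp/user.keytab"    // placeholder

    val conf = new SparkConf() // loads spark.* system properties
    assert(conf.contains("spark.yarn.principal"))
    assert(conf.contains("spark.yarn.keytab"))
  }
}
```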
```diff
+      val principalName = sparkConf.get("spark.yarn.principal")
+      val keytabFileName = sparkConf.get("spark.yarn.keytab")
+      if (!new File(keytabFileName).exists()) {
+        throw new SparkException(s"Keytab file: ${keytabFileName}" +
+          " specified in spark.yarn.keytab does not exist")
+      } else {
+        logInfo("Attempting to login to Kerberos" +
+          s" using principal: ${principalName} and keytab: ${keytabFileName}")
+        UserGroupInformation.loginUserFromKeytab(principalName, keytabFileName)
+      }
+    }
+
     val ret = try {
       val initialConf = new HiveConf(classOf[SessionState])
       // HiveConf is a Hadoop Configuration, which has a field of classLoader and
```
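For context, a minimal, self-contained usage sketch of the Hadoop UGI call the patch relies on; the principal and keytab path are placeholder values, not taken from this PR.

```scala
import org.apache.hadoop.security.UserGroupInformation

// Minimal usage sketch of UserGroupInformation.loginUserFromKeytab:
// it performs a keytab-based Kerberos login and installs the result
// as the static login user for subsequent Hadoop client calls.
object KerberosLoginExample {
  def main(args: Array[String]): Unit = {
    UserGroupInformation.loginUserFromKeytab(
      "spark/host.example.com@EXAMPLE.COM", // placeholder principal
      "/etc/security/keytabs/spark.keytab") // placeholder keytab path
    println(s"Logged in as: ${UserGroupInformation.getLoginUser}")
  }
}
```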
Contributor (yhuai), on the UserGroupInformation import above: Let's move this import down to the place where we have the other Hadoop-related imports. See https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide#SparkCodeStyleGuide-Imports for the import-ordering rules.
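For illustration, here is how the top of the file might look after the suggested move, following the style guide's grouping (java/javax, then scala, then third-party including Hadoop, then org.apache.spark, alphabetized within each group); this is a sketch, not the committed change.

```scala
import java.io.{File, PrintStream}
import java.util.{Map => JMap}
import javax.annotation.concurrent.GuardedBy

import scala.collection.JavaConverters._
import scala.language.reflectiveCalls

// Third-party (Hadoop) imports grouped together, per the style guide.
import org.apache.hadoop.security.UserGroupInformation

import org.apache.spark.{Logging, SparkConf, SparkException}
```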