[SPARK-10181][SQL] Do kerberos login for credentials during hive client initialization #9272
Changes from 8 commits
```diff
@@ -21,6 +21,8 @@ import java.io.{File, PrintStream}
 import java.util.{Map => JMap}
 import javax.annotation.concurrent.GuardedBy
 
+import org.apache.hadoop.security.UserGroupInformation
```
**Contributor:** Let's move this import down to the place where we have the other Hadoop-related imports. https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide#SparkCodeStyleGuide-Imports is the doc about import ordering.
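Per that guide, imports are grouped as java/javax, then scala, then third-party (where Hadoop belongs), then org.apache.spark. A sketch of the reordered header, assuming only the imports visible in this diff:

```scala
// Grouped per the Spark style guide: java/javax, scala, third-party, spark.
import java.io.{File, PrintStream}
import java.util.{Map => JMap}
import javax.annotation.concurrent.GuardedBy

import scala.collection.JavaConverters._
import scala.language.reflectiveCalls

import org.apache.hadoop.security.UserGroupInformation

import org.apache.spark.{Logging, SparkConf}
```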
```diff
+
 import scala.collection.JavaConverters._
 import scala.language.reflectiveCalls
```

```diff
@@ -35,7 +37,7 @@ import org.apache.hadoop.hive.ql.{Driver, metadata}
 import org.apache.hadoop.hive.shims.{HadoopShims, ShimLoader}
 import org.apache.hadoop.util.VersionInfo
 
-import org.apache.spark.Logging
+import org.apache.spark.{SparkConf, Logging}
 import org.apache.spark.sql.catalyst.expressions.Expression
 import org.apache.spark.sql.execution.QueryExecutionException
 import org.apache.spark.util.{CircularBuffer, Utils}
```

```diff
@@ -150,6 +152,14 @@ private[hive] class ClientWrapper(
     val original = Thread.currentThread().getContextClassLoader
     // Switch to the initClassLoader.
     Thread.currentThread().setContextClassLoader(initClassLoader)
+
+    val sparkConf = new SparkConf
```
**Contributor (yhuai):** Instead of creating a new SparkConf, can we use `SparkEnv.get.conf`?

**Contributor (author, yolandagao):** @yhuai Sorry for the late response. I did the testing after changing to `SparkEnv.get.conf`, but it didn't work. The reason is that YARN's Client.scala resets the property `spark.yarn.keytab` during setupCredentials(), appending a random string to the keytab file name; that value is then used as the link name in the distributed cache. The value used for the link name should really be kept separate from the original keytab setting, e.g. under a different property name.

**Contributor (yhuai):** @yolandagao Thanks for the explanation. Can you add comments to your code explaining why we need to put those confs into sysProps and why we need to create a new SparkConf here? Basically, we need to document how these confs get propagated; otherwise it is not obvious why this change is needed. Thanks!

**Contributor (author, yolandagao):** @yhuai Sure, I should have done this earlier to make everything clearer :) Added some comments there; please help review. Thank you!
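To make the flow concrete, here is a sketch of the propagation the thread describes. The property names come from the patch; how `SparkConf` loads defaults from system properties is stated here as an assumption:

```scala
import org.apache.spark.SparkConf

// SparkSubmit publishes --principal / --keytab as the system properties
// spark.yarn.principal and spark.yarn.keytab. A SparkConf constructed with
// loadDefaults = true (the default) re-reads all "spark.*" system properties,
// so it sees the original keytab path. SparkEnv.get.conf, by contrast, may
// already hold the randomized distributed-cache link name that
// yarn/Client.scala's setupCredentials() writes back into spark.yarn.keytab.
val sparkConf = new SparkConf
val principal = sparkConf.getOption("spark.yarn.principal")
val keytab = sparkConf.getOption("spark.yarn.keytab")
```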
```diff
+    if (sparkConf.contains("spark.yarn.principal") && sparkConf.contains("spark.yarn.keytab")) {
```
**Contributor:** Let's make it clear that we set these two settings in SparkSubmit.
```diff
+      UserGroupInformation.loginUserFromKeytab(
```
**Contributor (steveloughran):** Before calling this, actually verify that the keytab file exists, and fail with a message that includes the property name, to help people debug the problem. UGI's internal exceptions are rarely informative enough.
```diff
+        sparkConf.get("spark.yarn.principal"),
```
**Contributor (steveloughran):** Actually, you should call `SparkHadoopUtil.get.loginUserFromKeytab` instead.

**Contributor (author, yolandagao):** @steveloughran Good point. It is better to check for the keytab file before making the login call: if the keytab doesn't exist, the UGI call will definitely fail, but with an indirect message like "login failed... no keys found...". Added the check. However, calling `SparkHadoopUtil.get.loginUserFromKeytab` instead of `UserGroupInformation.loginUserFromKeytab` in ClientWrapper does not solve the problem: SparkHadoopUtil is shared, and the UserGroupInformation class it references is not the same one used by `SessionState.start` in ClientWrapper (they are loaded by different classloaders), so the program still fails with a "no TGT" exception when connecting to the metastore. Nor can the UGI call be replaced in SparkSubmit, as the wrong type of SparkHadoopUtil instance might get created, because yarn mode isn't set in the system until the flow reaches YARN's Client.scala.

**Contributor (steveloughran):** OK, just include the check. (Actually, UGI itself should do that check, shouldn't it? Lazy.)
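A minimal sketch of the check being agreed on here, assuming the `sparkConf` from the patch above; the exact error-message wording is illustrative:

```scala
import java.io.File

import org.apache.hadoop.security.UserGroupInformation

val principal = sparkConf.get("spark.yarn.principal")
val keytabPath = sparkConf.get("spark.yarn.keytab")
// Fail fast with the offending property name in the message; the exception
// UGI raises for a missing keytab ("login failed... no keys found...") is
// much harder to diagnose.
require(new File(keytabPath).exists(),
  s"Keytab file $keytabPath specified in spark.yarn.keytab does not exist")
// Per the discussion, this must be the UserGroupInformation class loaded by
// ClientWrapper's own classloader, not the one held by the shared SparkHadoopUtil.
UserGroupInformation.loginUserFromKeytab(principal, keytabPath)
```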
```diff
+        sparkConf.get("spark.yarn.keytab"))
+    }
+
     val ret = try {
       val initialConf = new HiveConf(classOf[SessionState])
       // HiveConf is a Hadoop Configuration, which has a field of classLoader and
```
**Contributor:** @harishreedharan I see you changed this part of the code last time. If we want to pass these two arguments to Spark SQL, what is the recommended way?

**Contributor (author, yolandagao):** @harishreedharan Hi Hari, could you let us know the preferred way to pass the principal and keytab parameters from spark-submit to Spark SQL? Waiting for your response to proceed. Thank you!

**Contributor:** We might want to look at yarn Client.scala's setupCredentials(), since it looks like it's doing something pretty similar.
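For context, here is a rough sketch of the setupCredentials() behavior described earlier in the thread. This is an approximation reconstructed from the discussion, not the actual yarn/Client.scala source:

```scala
import java.io.File
import java.util.UUID

import org.apache.hadoop.security.UserGroupInformation

import org.apache.spark.SparkConf

// Approximates what the thread says yarn/Client.scala's setupCredentials()
// does: log in from the keytab, then overwrite spark.yarn.keytab with a
// randomized link name used for the keytab in the distributed cache. That
// overwrite is why ClientWrapper cannot read the keytab path back out of
// SparkEnv.get.conf.
def setupCredentials(principal: String, keytab: String, sparkConf: SparkConf): Unit = {
  UserGroupInformation.loginUserFromKeytab(principal, keytab)
  val linkName = new File(keytab).getName + "-" + UUID.randomUUID().toString
  sparkConf.set("spark.yarn.keytab", linkName)
  sparkConf.set("spark.yarn.principal", principal)
}
```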