[SPARK-20646][core] Port executors page to new UI backend. #19678
Conversation
The executors page is built on top of the REST API, so the page itself was easy to hook up to the new code.

Some other pages depend on the `ExecutorListener` class that is being removed, though, so they needed to be modified to use data from the new store. Fortunately, all they seemed to need is the map of executor logs, so that was somewhat easy too.

The executor timeline graph required adding some properties to the ExecutorSummary API type. Instead of following the previous code, which stored all the listener events in memory, the timeline is now created based on the data available from the API.

I had to change some of the test golden files because the old code would return executors in "random" order (since it used a mutable Map instead of something that returns a sorted list), and the new code returns executors in id order.

Tested with existing unit tests.
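For readers less familiar with the new backend, here is a minimal sketch (not part of the actual patch) of the data flow described above: a page pulls the executor-id to log-URL map from the status store instead of from ExecutorListener. It assumes AppStatusStore exposes an executorList(activeOnly) method returning the same v1.ExecutorSummary objects that back the REST API; the helper name is illustrative.

```scala
import org.apache.spark.status.AppStatusStore
import org.apache.spark.status.api.v1.ExecutorSummary

// Sketch only: build the executor-id -> log-URL map that the other UI pages
// need, reading from the new store rather than the removed listener.
def executorLogsFromStore(store: AppStatusStore): Map[String, Map[String, String]] = {
  // executorList(true) is assumed to return active executors as
  // v1.ExecutorSummary, which already carries an executorLogs map.
  store.executorList(true)
    .map((exec: ExecutorSummary) => exec.id -> exec.executorLogs)
    .toMap
}
```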
For context:
Test build #83505 has finished for PR 19678 at commit
squito
left a comment
lgtm
a couple of nits about unused imports -- there are some more in AppStatusStore and SparkUI as well (probably from before); it would be nice to clean those up.
import org.apache.spark.{Resubmitted, SparkConf, SparkContext}
import org.apache.spark.annotation.DeveloperApi
import org.apache.spark.scheduler._
nit: I think more of these imports can be deleted
import org.apache.spark.executor.TaskMetrics
import org.apache.spark.scheduler._
import org.apache.spark.status.AppStatusStore
import org.apache.spark.storage.StorageStatusListener
nit: this import is no longer used
Also going to copy this comment from here for others: vanzin#45
agree, fine to do that later.
Test build #83556 has finished for PR 19678 at commit
lgtm
Test build #83568 has started for PR 19678 at commit
retest this please
Test build #83572 has finished for PR 19678 at commit
merged to master