[SPARK-4583] [mllib] LogLoss for GradientBoostedTrees fix + doc updates #3439
Conversation
Test build #23811 has started for PR 3439 at commit
Test build #23811 has finished for PR 3439 at commit
Test PASSed.
MSE is not usually defined with the multiplier 1/2. Shall we use a different name here, for example, mean squared loss or average loss?
I'll remove the 1/2. It's probably better to have an odd loss (which only experts need to know about) than to have an odd name (which everyone needs to recognize).
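To make the tradeoff concrete, here is a minimal sketch (illustrative only, not the exact MLlib code) contrasting the two squared-error conventions; dropping the 1/2 effectively doubles both the gradient and the loss, which is why the test suite needed updating:

```scala
object SquaredErrorSketch {
  // Convention with the 1/2 multiplier: loss = (label - prediction)^2 / 2,
  // so the gradient w.r.t. the prediction is -(label - prediction).
  def halvedLoss(label: Double, prediction: Double): Double = {
    val err = label - prediction
    err * err / 2.0
  }

  // Plain MSE convention: loss = (label - prediction)^2,
  // so the gradient w.r.t. the prediction is -2.0 * (label - prediction).
  def plainLoss(label: Double, prediction: Double): Double = {
    val err = label - prediction
    err * err
  }
}
```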
Fixed LogLoss for GradientBoostedTrees, and updated doc for losses, forests, and boosting
* Removed the 1/2 from SquaredError. This also required updating the test suite since it effectively doubles the gradient and loss. * Added doc for developers within RandomForest. * Small cleanup in test suite (generating data only once)
Force-pushed from 7c38962 to 5e52bff
I just pushed an update which includes:
* Removed the 1/2 from SquaredError (this also required updating the test suite since it effectively doubles the gradient and loss).
* Added doc for developers within RandomForest.
* Small cleanup in the test suite (generating data only once).
Test build #23849 has started for PR 3439 at commit
Test build #23849 has finished for PR 3439 at commit
Test PASSed.
@jkbradley I am trying to find my reference for the LogLoss calculations.
There is an issue with numerical stability. Maybe we can fix it in this PR. The problem appears when `w = -2.0 * point.label * prediction` is large: `math.exp(w)` would overflow, while `math.log(1 + math.exp(w))` should be close to `w`. When `w < 0`, we can use `math.log1p(math.exp(w))`; otherwise, we should use `w + math.log1p(math.exp(-w))`.
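A minimal sketch of the stable computation described above (the helper name `log1pExp` is illustrative, not necessarily the exact MLlib API):

```scala
object StableLogLoss {
  // Computes math.log(1 + math.exp(x)) without overflow for large x.
  def log1pExp(x: Double): Double = {
    if (x > 0) {
      // For large positive x, math.exp(x) overflows, but
      // log(1 + e^x) = x + log(1 + e^(-x)), and e^(-x) is safe to compute.
      x + math.log1p(math.exp(-x))
    } else {
      // For x <= 0, math.exp(x) <= 1, so log1p is both safe and accurate.
      math.log1p(math.exp(x))
    }
  }
}
```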
Will do!
@jkbradley LGTM. Thanks for the documentation too -- it is really helpful.
Test build #23854 has started for PR 3439 at commit
Updated LogLoss.
Test build #23856 has started for PR 3439 at commit
Test build #23856 has finished for PR 3439 at commit
Test FAILed.
Test build #23862 has started for PR 3439 at commit
Test build #23854 has finished for PR 3439 at commit
Test PASSed.
Test build #23862 has finished for PR 3439 at commit
Test PASSed.
LGTM. Merged into master and branch-1.2. Thanks!
Currently, the LogLoss used by GradientBoostedTrees has 2 issues:
* the gradient (and therefore loss) does not match that used by Friedman (1999)
* the error computation uses 0/1 accuracy, not log loss

This PR updates LogLoss. It also adds some doc for boosting and forests. I tested it on sample data and made sure the log loss is monotonically decreasing with each boosting iteration.

CC: mengxr manishamde codedeft

Author: Joseph K. Bradley <[email protected]>

Closes apache#3439 from jkbradley/gbt-loss-fix and squashes the following commits:
cfec17e [Joseph K. Bradley] removed forgotten temp comments
a27eb6d [Joseph K. Bradley] corrections to last log loss commit
ed5da2c [Joseph K. Bradley] updated LogLoss (boosting) for numerical stability
5e52bff [Joseph K. Bradley] * Removed the 1/2 from SquaredError. This also required updating the test suite since it effectively doubles the gradient and loss. * Added doc for developers within RandomForest. * Small cleanup in test suite (generating data only once)
e57897a [Joseph K. Bradley] Fixed LogLoss for GradientBoostedTrees, and updated doc for losses, forests, and boosting

(cherry picked from commit c251fd7)
Signed-off-by: Xiangrui Meng <[email protected]>
[SPARK-4583] [mllib] LogLoss for GradientBoostedTrees fix + doc updates

We reverted #3439 in branch-1.2 due to missing `import o.a.s.SparkContext._`, which is no longer needed in master (#3262). This PR adds #3439 back to branch-1.2 with correct imports. Github is out-of-sync now. The real changes are the last two commits.

Author: Joseph K. Bradley <[email protected]>
Author: Xiangrui Meng <[email protected]>

Closes #3474 from mengxr/SPARK-4583-1.2 and squashes the following commits:
aca2abb [Xiangrui Meng] add import o.a.s.SparkContext._ for v1.2
6b5564a [Joseph K. Bradley] [SPARK-4583] [mllib] LogLoss for GradientBoostedTrees fix + doc updates
Currently, the LogLoss used by GradientBoostedTrees has 2 issues:
* the gradient (and therefore loss) does not match that used by Friedman (1999)
* the error computation uses 0/1 accuracy, not log loss
This PR updates LogLoss.
It also adds some doc for boosting and forests.
I tested it on sample data and made sure the log loss is monotonically decreasing with each boosting iteration.
CC: @mengxr @manishamde @codedeft
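For reference, a hedged sketch of the per-point log loss and its gradient in Friedman's (1999) formulation that this PR adopts, assuming labels in {-1, +1}; the names here are illustrative, not the exact MLlib API:

```scala
object FriedmanLogLoss {
  // Numerically stable log(1 + exp(x)); see the stability discussion above.
  private def log1pExp(x: Double): Double =
    if (x > 0) x + math.log1p(math.exp(-x)) else math.log1p(math.exp(x))

  // Per-point loss: 2 * log(1 + exp(-2 * y * F)), with y in {-1, +1}
  // and F the current boosted-model prediction.
  def loss(label: Double, prediction: Double): Double =
    2.0 * log1pExp(-2.0 * label * prediction)

  // Gradient of the loss w.r.t. F:
  // d/dF [2 * log(1 + exp(-2yF))] = -4y / (1 + exp(2yF)).
  def gradient(label: Double, prediction: Double): Double =
    -4.0 * label / (1.0 + math.exp(2.0 * label * prediction))
}
```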