Print reason for failure during assess run #2182
Conversation
I still need to convert this into a table format, but I think having the information is worthwhile.
tedinski left a comment
Seems reasonable. Notes:
- We're adding more to the scan test script, but should maybe be adopting Adrian's testing mode. Not for this PR, though.
- Besides a table, I think we should print the path to the log file for that crate that scan produces, so more information can be easily found beyond just what's printed.
- Have you tried this on "real" failures? What does it look like, e.g. for top-100 (which you can run with that script)?
I'd consider adding the log file to the output, but beyond that lgtm
Is it OK if I add the path only when we run with verbose on? What do you think? For the top 100 crates, I'm worried this is going to get way too verbose. |
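A minimal sketch of gating the log path behind a verbose flag (hypothetical types and field names, not the actual assess/scan code):

```rust
use std::path::PathBuf;

// Hypothetical sketch, not Kani's actual code: a per-crate failure record
// and a printer that only surfaces the log path in verbose mode.
struct FailureSummary {
    package: String,
    reason: String,
    log_path: PathBuf,
}

fn print_failure(failure: &FailureSummary, verbose: bool) {
    // Always print the short reason for the failure.
    println!("{}: {}", failure.package, failure.reason);
    if verbose {
        // Only point at the detailed scan log when the user asked for it,
        // so a top-100 run does not get too noisy by default.
        println!("  see log: {}", failure.log_path.display());
    }
}

fn main() {
    // Made-up example values for illustration only.
    let failure = FailureSummary {
        package: "example-crate".to_string(),
        reason: "Compilation error".to_string(),
        log_path: PathBuf::from("target/scan/example-crate.log"),
    };
    print_failure(&failure, true);
}
```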
FYI, this is what the top 100 looks like: I need to figure out what is up with the 0 compilation errors. The ones that worry me the most, though, are the ICE ones. |
Maybe I should try to sort the output by either the crate name or the error message. @tedinski, what do you think? |
Ah, the 0 compilation errors seem related to project configuration issues. E.g.: I'll create an issue to track those. |
adpaco-aws left a comment
Thanks, @celinval ! This is good information to have as well.
Also, what do you think about printing the failures after the table? IMO it would look cleaner.
Would you prefer that I print it after the table about the tests, or before? I was initially printing the error information after the unsupported constructs table, but I thought it looked a bit odd since it sat in between the two tables. |
Oh, I didn't realize there were more tables after the first one. Then I'm OK with the PR as is. |
That was my initial thought too, but I thought it looked a bit odd when we run the unit tests as well.
Yeah, it depends on whether we pass
FYI, the output is now sorted by the name of the package. |
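For illustration, a minimal sketch of sorting the failure summary by package name before printing (hypothetical function and made-up entries, not the actual implementation):

```rust
// Hypothetical sketch, not the actual implementation: sort the collected
// (package, reason) pairs by package name so the summary is deterministic
// and easy to scan across runs.
fn print_failures(mut failures: Vec<(String, String)>) {
    failures.sort_by(|a, b| a.0.cmp(&b.0));
    for (package, reason) in &failures {
        println!("{}: {}", package, reason);
    }
}

fn main() {
    // Made-up package names and reasons for illustration only.
    print_failures(vec![
        ("serde".to_string(), "Unexpected ICE".to_string()),
        ("rand".to_string(), "Compilation error".to_string()),
    ]);
}
```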
Description of changes:
We should probably convert this into a table format, but I think having the information is worthwhile. For this PR, the output of scan for the scan regression script looks like:
Resolved issues:
Fixes #2165
Related RFC:
Optional #ISSUE-NUMBER.
Call-outs:
There is no change to the output when there are no failures.
Testing:
How is this change tested?
Is this a refactor change?
Checklist
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 and MIT licenses.