refactor pipeline validation unit tests v1beta1 #2581
Conversation
The following is the coverage report on the affected files.
(force-pushed d6fec06 to 1e6e8a4)
@bobcatfish I have split one huge unit test into individual validations; if these changes look good, I will apply similar changes to …
@pritidesai #2537 made the v1beta1 builder to ease the port. Also, this is gonna conflict hard with #2577 (as I am porting all the tests to v1beta1, starting with the builders to limit the size).
@vdemeester as much as I feel I am starting to understand the whole structuring of unit tests, I don't think I do 😭 v1beta1 pipeline validation unit tests do not use …
Long term goal is to deprecate them (mainly because the benefit they bring might not be worth the cost), see #2537 (comment). But this is the long-term view, and a PR like #2577 would be even bigger (and would take way more time) if I hadn't ported the builder to …
thanks @vdemeester, I am refactoring …
Looks great to me!! The coverage has gone up and you improved the naming of so many existing test cases 🙏
I'm so excited to see all the focused test cases for each individual function, thanks for extracting them all out, this must have been a ton of work!!
My one request is that I'd love to see these with separate tests for success and failure cases, i.e. instead of a boolean "failureExpected"-type flag, two totally different tests. I find this makes it much easier to tell what is covered, b/c you can look at the success and failure cases separately, and because you don't have to worry about bugs in the logic that checks that flag.
/approve
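The split-by-outcome layout suggested above could look roughly like this. This is only a sketch: `validateTaskName` is a trivial hypothetical stand-in for the real unexported validation logic, not Tekton's actual implementation.

```go
package main

import "fmt"

// validateTaskName is a hypothetical stand-in for one of the real
// validate* helpers: here, names must be non-empty and start with a
// lowercase letter.
func validateTaskName(name string) error {
	if name == "" || name[0] < 'a' || name[0] > 'z' {
		return fmt.Errorf("invalid task name %q", name)
	}
	return nil
}

func main() {
	failures := 0

	// Success table: every case must validate cleanly, so there is
	// no failureExpected flag to get wrong.
	for _, tc := range []struct{ name, task string }{
		{"simple name", "foo"},
		{"name with dashes", "foo-task"},
	} {
		if err := validateTaskName(tc.task); err != nil {
			fmt.Printf("FAIL %s: %v\n", tc.name, err)
			failures++
		}
	}

	// Failure table: every case must be rejected.
	for _, tc := range []struct{ name, task string }{
		{"underscore prefix", "_foo"},
		{"empty name", ""},
	} {
		if err := validateTaskName(tc.task); err == nil {
			fmt.Printf("FAIL %s: expected an error\n", tc.name)
			failures++
		}
	}

	fmt.Println("failures:", failures)
}
```

With two tables, a reader can scan what is covered on each side without mentally evaluating a flag per case.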
Tasks: []v1beta1.PipelineTask{
	{Name: "foo"},
},
Description: "this is a valid pipeline with all possible fields initialized",
nice!!
i wonder if it makes sense to have a similar failure case pipelinespec, e.g. one with a lot of different failures (i think they'll mostly all be surfaced? not sure if the logic just bails early in that case 🤔 )
yup it does 👍
interesting!! totally outside the scope of what you're doing, but i feel like it'd be nice to be able to build up as many errors as possible and give them all at once
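The "give them all at once" idea can be sketched with the standard library alone (Go 1.20+); `validateAll` and the field names here are hypothetical stand-ins for the real validate* helpers, not Tekton's actual error-aggregation mechanism:

```go
package main

import (
	"errors"
	"fmt"
)

// validateAll collects every validation failure instead of bailing on
// the first one (a sketch; the fields checked are hypothetical).
func validateAll(spec map[string]string) error {
	var errs []error
	if spec["name"] == "" {
		errs = append(errs, errors.New("missing name"))
	}
	if spec["task"] == "" {
		errs = append(errs, errors.New("missing task"))
	}
	// errors.Join returns nil when errs is empty, so a valid spec
	// yields no error.
	return errors.Join(errs...)
}

func main() {
	// Both problems are reported in a single returned error.
	fmt.Println(validateAll(map[string]string{}))
}
```

Surfacing all failures in one pass means a user fixes everything in one edit cycle rather than rediscovering errors one at a time.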
tests := []struct {
	name            string
	ps              *v1beta1.PipelineSpec
	failureExpected bool
this is definitely outside the scope of your changes b/c the tests were already here but I really recommend having completely different tests for failure/non-failure cases in general - makes them much easier to read and maintain!
e.g. if you have a test where all the pipelines are invalid, and one where they are all valid, it makes it much easier to understand what's covered and what's not
in fact it looks at a glance like all the cases here might be invalid, so maybe the flag isn't needed?
yup, will create separate functions for failure tests vs non-failure tests.
tasks: []v1beta1.PipelineTask{{Name: "_foo", TaskRef: &v1beta1.TaskRef{Name: "foo-task"}}},
failureExpected: true,
}, {
name: "pipeline task with invalid task name (camel case)",
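The underscore and camel-case cases fail because Kubernetes-style object names are generally required to be RFC 1123 DNS labels (lowercase alphanumerics and `-`, starting and ending with an alphanumeric). A minimal sketch of that check; whether Tekton calls exactly this rule via this regex is an assumption here:

```go
package main

import (
	"fmt"
	"regexp"
)

// dns1123Label mirrors the RFC 1123 label rule Kubernetes applies to
// object names (an assumption standing in for the real helper).
var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

func main() {
	for _, name := range []string{"foo-task", "_foo", "fooTask"} {
		// "_foo" fails on the leading underscore; "fooTask" fails
		// on the uppercase letter.
		fmt.Printf("%-8s valid=%v\n", name, dns1123Label.MatchString(name))
	}
}
```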
nice - much more descriptive than "invalid task name 2"!
}},
}},
name: "invalid condition only resource -" +
	" pipeline task condition referring to a resource which is missing from resource declarations",
thanks for adding the details 🙏
thanks @bobcatfish, I will create separate functions for failure tests vs non-failure tests and create a separate PR with refactoring …
Are we exporting the validate* functions just for the tests?
If so, I really don't think this is a good idea, as it makes them part of the "tekton go API".
Maybe we want to make those functions part of our "go API" and are fine with external projects depending on them, but this has to be discussed.
If we want to test those functions, we can always use the same package name instead of v1beta1_test to be able to see them. But then we are testing the implementation detail and not the go API itself; which means, if I refactor some validate* function without changing the behaviour of the main pipeline.Validate() function, I will need to update/rewrite tests, whereas I didn't have to before (because we were testing the contract/API, not the implementation detail).
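For reference, the two test-package conventions being weighed look roughly like this in Go; the file names and contents are only a sketch, not the repository's actual files:

```go
// File: pipeline_validation_test.go (black-box sketch)
package v1beta1_test

// Only exported identifiers of v1beta1 are visible from an external
// test package, so these tests survive any refactor that preserves
// the behaviour of the exported Validate entry point.

// File: pipeline_validation_internal_test.go (white-box sketch)
//
//   package v1beta1
//
// Declaring the same package name makes unexported validate* helpers
// visible to the tests, at the cost of coupling them to
// implementation details.
```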
(force-pushed 1e6e8a4 to 09af651)
@bobcatfish @vdemeester thanks for the discussion this morning, I have updated the PR. @bobcatfish I have split positive and negative cases into separate functions, naming them …
This change is part of the larger refactoring effort to simplify unit tests by splitting one huge validate test function into its own unit tests. The end goal is to move the individual validation functions into their own packages, or have them available as attributes of the fields they are validating, instead of defining them locally under each API such as `v1beta1`.
(force-pushed 09af651 to 40a2a25)
/meow
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: bobcatfish, vdemeester. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing …
For posterity: we went back and forth a bunch on the best way to test these. In the end we all agreed (correct me if I'm wrong!) that it did make sense to have these focused tests, but that this also indicated that in the long run these functions belong in a different package (we all agree that testing unexported functions isn't a good idea - it usually indicates some potential refactoring!). In the short term, though, we're testing the unexported functions here so that we get the improved tests without having to fix everything at once. Thanks again @pritidesai ! /lgtm
thanks @bobcatfish for the details, yup that's accurate 👍


Changes
This change is part of the larger refactoring effort to simplify unit tests
by splitting one huge validate test function into its own unit tests.
The end goal is to move the individual validation functions into their own
packages, or have them available as attributes of the fields they are validating,
instead of defining them locally under each API such as v1beta1.
Most changes are in the test file: one huge test function was split into small tests, and more test cases were added.
Submitter Checklist
These are the criteria that every PR should meet, please check them off as you
review them:
See the contribution guide for more details.
Double check this list of stuff that's easy to miss:
cmd dir, please update the release Task to build and release this image.
Reviewer Notes
If API changes are included, additive changes must be approved by at least two OWNERS and backwards incompatible changes must be approved by more than 50% of the OWNERS, and they must first be added in a backwards compatible way.