Bug: Deployment status fixes #2407

@kartik-579

Description

📜 Description

Deployment status reporting breaks when:

  1. Multiple CD pipelines in the same app are triggered together.
  2. A deployment becomes healthy instantly.
  3. A deployment is done without any change.

👟 Reproduction steps

Follow the conditions listed in the description above.

👍 Expected behavior

In most cases, the pipeline status should be updated on its own (from events received from kubewatch), without depending on the cron.

👎 Actual Behavior

Most of the time, the cron that fetches CD pipeline status kicks in and updates the pipeline status later, which confuses users because the app status has already become healthy.

💻 Device

Desktop/Laptop

💻 Operating system

MacOS

🌍 Browser

Chrome

🧱 Your Environment

No response

✅ Proposed Solution

To fix the bugs above:

  1. Currently we set the status time in the orchestrator, which causes some logic to fail because some deployments complete instantly. To fix this, set the status time in kubewatch itself.
  2. Fetch the git commits of a repo and check whether the commit in the event object from kubewatch is newer than the latest commit deployed for a CD pipeline. With this, we can identify whether a commit has been revised and set statuses accordingly.

👀 Have you spent some time to check if this issue has been raised before?

  • I checked and didn't find any similar issue

🏢 Have you read the Code of Conduct?

AB#586

Labels: bug (Something isn't working)