feat: add support & docs for adding Hetzner Robot servers as nodes #1792
Conversation
Summary of Changes
Hello @pat-s, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces the capability to add Hetzner Robot (dedicated) servers as nodes to a Kubernetes cluster managed by this module. It includes comprehensive documentation on the required setup steps and adds configuration options to pass custom values to the Hetzner Cloud Controller Manager (HCCM) Helm chart, which is essential for enabling Robot server support. The author notes that this configuration hasn't been tested yet due to local setup conflicts.
Highlights
- Robot Server Support: Adds documentation and configuration support for integrating Hetzner Robot (dedicated) servers as Kubernetes nodes using HCCM v1.19+.
- New Documentation: A detailed guide (`docs/add-robot-server.md`) is added, covering prerequisites, networking, HCCM configuration, k3s agent setup, storage considerations, and caveats for adding Robot nodes.
- HCCM Configuration Flexibility: Introduces a new `hetzner_ccm_values` variable to allow users to provide custom Helm values for the Hetzner Cloud Controller Manager chart.
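As a rough, untested sketch of how the new variable might be used: the `robot.enabled` key and the `ROBOT_USER`/`ROBOT_PASSWORD` environment variables follow the upstream hcloud-cloud-controller-manager Helm chart (v1.19+), but the exact key names and the secret wiring shown here are assumptions that should be verified against the chart version you deploy.

```hcl
# Hypothetical usage of the new hetzner_ccm_values variable in kube.tf.
module "kube-hetzner" {
  source = "kube-hetzner/kube-hetzner/hcloud"

  # ... other module configuration ...

  # YAML string of custom Helm values merged into the HCCM chart,
  # enabling Robot server support and pointing the Robot credentials
  # at an (assumed) pre-created secret named "hcloud".
  hetzner_ccm_values = <<-EOT
    robot:
      enabled: true
    env:
      ROBOT_USER:
        valueFrom:
          secretKeyRef:
            name: hcloud
            key: robot-user
      ROBOT_PASSWORD:
        valueFrom:
          secretKeyRef:
            name: hcloud
            key: robot-password
  EOT
}
```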
Code Review
This pull request introduces support for Hetzner Robot servers, including documentation and a new Terraform variable hetzner_ccm_values for custom Helm chart configuration of the Hetzner Cloud Controller Manager (HCCM). Key feedback includes addressing the placement of the hetzner_ccm_values example in kube.tf.example and reviewing the export condition in values-export.tf.
@pat-s Excellent, will review ASAP
I also got it working using Cilium in "tunnel" mode, but I have issues with MTU. I'm pretty new to Kubernetes, and to networking in general, but as I understand it, all nodes and pods should use MTU 1400, since that is the limit for the vSwitch. When I use MTU 1450, my pods that are not using host networking aren't stable. And when I switch to MTU 1400 manually, all new nodepool servers still start with 1450. So do we also need an option to set the MTU for new nodes? When everything is at MTU 1400 except the new nodes, those nodes then have trouble connecting to the kube API, for example. My Alloy pods, for instance, can't fetch creds. Once I get my setup working I will add here what I changed and some configs I used. I did not use this fork, but I used Kustomize to get the same result (I think).
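For reference, pinning the pod network MTU as described above could look like the following Cilium Helm values fragment. This is a hedged sketch: the `MTU`, `routingMode`, and `tunnelProtocol` keys follow the upstream Cilium Helm chart, but should be checked against the chart version in use.

```yaml
# Cilium Helm values: force the tunnel MTU to 1400,
# the limit mentioned above for Hetzner vSwitch traffic.
routingMode: tunnel
tunnelProtocol: vxlan
MTU: 1400
```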
@alanaasmaa I have had this problem as well. I had to make the following change: olexiyb@f8903ae
mysticaltech left a comment
Thanks @pat-s, LGTM, that's a very important functionality being added.
Could you please just address the comments above? Maybe some tweaking of the instructions is needed.
I never had MTU issues in relation to that, so I can't comment on it. I don't have the bandwidth to test this thoroughly with all possible networking options and combinations one could use. In the cluster where I am using Robot servers, I am using
You are correct @pat-s, I will later add the reference to the different MTUs and Cilium and we're good.
/gemini review
Code Review
This pull request adds support for integrating Hetzner Robot servers as Kubernetes nodes by allowing custom Helm values for the Hetzner Cloud Controller Manager (HCCM). It also includes comprehensive documentation for this new feature. My review has identified a critical issue where the custom Helm values are not being applied due to a missing change in a template file, which will prevent the feature from working. I've also pointed out opportunities to improve the new documentation for clarity and to enhance the maintainability of the Terraform configuration by making it more consistent with the rest of the module. Given the author's note that this is untested, addressing the critical issue is essential before this can be merged.
Hi @pat-s, thanks a lot for documenting that! Regarding
Now that HCCM is integrated via Helm, the only thing missing is the ability to configure its values.
I've extracted the instructions from #1311 (comment), which I have been using since its creation.
As this config conflicts with my custom HCCM install right now, I haven't tested it yet - maybe somebody else will? ;)
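As a rough sketch of what this PR adds (the variable name comes from the PR summary; the description text and default shown here are assumptions, not the actual module code), the new input could be declared like this:

```hcl
# Hypothetical declaration of the variable introduced by this PR.
variable "hetzner_ccm_values" {
  description = "Custom Helm values (YAML string) passed to the Hetzner Cloud Controller Manager chart."
  type        = string
  default     = ""
}
```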
Closes #1405, #1311, #283