OSDOCS-640: Adding docs for configuring proxy during installation #16635
@@ -32,6 +32,8 @@ include::modules/installation-configuration-parameters.adoc[leveloffset=+2]

 include::modules/installation-azure-config-yaml.adoc[leveloffset=+2]

+include::modules/installation-configure-proxy.adoc[leveloffset=+2]
+
 include::modules/installation-launching-installer.adoc[leveloffset=+1]

 include::modules/cli-install.adoc[leveloffset=+1]

Review comment on the added include: Same here, we only have IPI-on-Azure in 4.2.
@@ -32,6 +32,8 @@ include::modules/installation-configuration-parameters.adoc[leveloffset=+2]

 include::modules/installation-gcp-config-yaml.adoc[leveloffset=+2]

+include::modules/installation-configure-proxy.adoc[leveloffset=+2]
+
Review comments on the added include:

- Same here, this section is for IPI-on-GCP.
- And for proxy support on GCP, we still have a test blocker bug, https://bugzilla.redhat.com/show_bug.cgi?id=1753930, which was targeted to 4.3. We might need to add it to the known issues in the release notes.
- (Member) openshift/installer#2405 is in the merge queue, which would unblock a 4.2 backport. But yeah, restricting proxy GCP to UPI in the meantime would be a good, conservative choice.
- (Member) The 4.3/master changes all landed. Hopefully we'll have the 4.2 backports soon.
- (Contributor) PRs for https://bugzilla.redhat.com/show_bug.cgi?id=1753930 have merged.
- https://bugzilla.redhat.com/show_bug.cgi?id=1753930 was already verified with a 4.3 build.
- (Author) FYI - with PR #17080 we're limiting support to UPI installs only, so support is only for UPI for GCP now. @danehans Were the PRs for https://bugzilla.redhat.com/show_bug.cgi?id=1753930 backported to 4.2, or is this still a problem in 4.2 for GCP UPI? If not, you mentioned there was a workaround of adding 'metadata.google.internal.' to noProxy - let me know if this is something that we need to document or not.
- @bergerhoffer The cloned bug for 4.2, https://bugzilla.redhat.com/show_bug.cgi?id=1759245, has been verified now; it's not an issue for 4.2.0 GCP UPI anymore. Thanks!
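If that workaround did need documenting, it would presumably be an `install-config.yaml` fragment along the following lines (a hypothetical sketch; the proxy URL is illustrative only):

[source,yaml]
----
proxy:
  httpProxy: http://proxy.example.com:3128
  # Hypothetical workaround: exclude the GCP metadata endpoint from
  # proxying so that nodes can still reach it directly.
  noProxy: metadata.google.internal.
----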
 include::modules/installation-launching-installer.adoc[leveloffset=+1]

 //include::modules/installing-aws-customizations.adoc[leveloffset=+1]
New file, modules/installation-configure-proxy.adoc (@@ -0,0 +1,90 @@):
// Module included in the following assemblies:
//
// * installing/installing_aws/installing-aws-customizations.adoc
// * installing/installing_aws_user_infra/installing-aws-user-infra.adoc
// * installing/installing_bare_metal/installing-bare-metal.adoc
// * installing/installing_vsphere/installing-vsphere.adoc
// * installing/installing_gcp/installing-gcp-customizations.adoc
// * installing/installing_azure/installing-azure-customizations.adoc
// * installing/installing_restricted_networks/installing-restricted-networks-aws.adoc
// * installing/installing_restricted_networks/installing-restricted-networks-bare-metal.adoc
// * installing/installing_restricted_networks/installing-restricted-networks-vsphere.adoc

ifeval::["{context}" == "installing-bare-metal"]
:bare-metal:
endif::[]

[id="installation-configure-proxy_{context}"]
= Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have
an HTTP or HTTPS proxy available. You can configure a new {product-title}
cluster to use a proxy by configuring the proxy settings in the
`install-config.yaml` file.

ifdef::bare-metal[]
[NOTE]
====
For bare metal installations, if you do not assign node IP addresses from the
range that is specified in the `networking.machineCIDR` field in the
`install-config.yaml` file, you must include them in the `proxy.noProxy` field.
====
endif::bare-metal[]
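ifdef::bare-metal[]
For example, a hypothetical bare metal configuration in which one node address falls outside of `machineCIDR` might look like the following sketch (the CIDR, proxy URL, and IP address are illustrative only):

[source,yaml]
----
networking:
  machineCIDR: 10.0.0.0/16
proxy:
  httpProxy: http://proxy.example.com:3128
  # 192.168.5.10 is a hypothetical node IP outside of 10.0.0.0/16, so it
  # must be listed in noProxy explicitly.
  noProxy: 192.168.5.10
----
endif::bare-metal[]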
.Prerequisites

* An existing `install-config.yaml` file.

.Procedure

. Edit your `install-config.yaml` file and add the proxy settings. For example:
+
[source,yaml]
----
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> <1>
  httpsProxy: http://<username>:<pswd>@<ip>:<port> <2>
  noProxy: example.com <3>
additionalTrustBundle: | <4>
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
...
----
<1> A proxy URL to use for creating HTTP connections outside the cluster. The
URL scheme must be `http`.
<2> A proxy URL to use for creating HTTPS connections outside the cluster. If
this field is not specified, then `httpProxy` is used for both HTTP and HTTPS
connections. The URL scheme must be `http`; `https` is currently not
supported.
<3> A comma-separated list of destination domain names, domains, IP addresses, or
other network CIDRs to exclude from proxying. Preface a domain with `.` to include
all subdomains of that domain. Use `*` to bypass the proxy for all destinations.
<4> If provided, the installation program generates a ConfigMap that is named
`user-ca-bundle` in the `openshift-config` namespace that contains one or more
additional CA certificates that are required for proxying HTTPS connections.
The Cluster Network Operator then creates a `trusted-ca-bundle` ConfigMap that
merges these contents with the {op-system-first} trust bundle, and this
ConfigMap is referenced in the Proxy object's `trustedCA` field. The
`additionalTrustBundle` field is required unless the proxy's identity
certificate is signed by an authority from the {op-system} trust bundle.
+
[NOTE]
====
The installation program does not support the proxy `readinessEndpoints` field.
====
. Save the file and reference it when installing {product-title}.

The installation program creates a cluster-wide proxy that is named `cluster`
and that uses the proxy settings in the provided `install-config.yaml` file.
If no proxy settings are provided, a `cluster` Proxy object is still created,
but it will have a nil `spec`.
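For reference, the resulting Proxy object would look roughly like the following sketch (field values mirror the example above; the exact contents depend on your configuration):

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: http://<username>:<pswd>@<ip>:<port>
  noProxy: example.com
  # trustedCA references the user-ca-bundle ConfigMap only when
  # additionalTrustBundle was provided in install-config.yaml.
  trustedCA:
    name: user-ca-bundle
----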
[NOTE]
====
Only the Proxy object named `cluster` is supported, and no additional
proxies can be created.
====
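After installation, one way to inspect what was created is with `oc` (a sketch; assumes you are logged in to the cluster as a cluster administrator):

----
$ oc get proxy/cluster -o yaml
$ oc get configmap user-ca-bundle -n openshift-config
----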
New file (@@ -0,0 +1,81 @@):
// Module included in the following assemblies:
//
// * installing/installing_aws_user_infra/installing-aws-user-infra.adoc

[id="installation-generate-aws-user-infra-ignition_{context}"]
= Creating the Kubernetes manifest and Ignition config files

Because you must manually start the cluster machines, you must generate the
Kubernetes manifest and Ignition config files that the cluster needs to make its
machines.

[IMPORTANT]
====
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours. You must complete your cluster
installation and keep the cluster running for 24 hours in a non-degraded state
to ensure that the first certificate rotation has finished.
====

.Prerequisites

* Obtain the {product-title} installation program.
ifdef::restricted[]
For a restricted network installation, these files are on your bastion host.
endif::restricted[]
* Create the `install-config.yaml` installation configuration file.

.Procedure

. Remove the Kubernetes manifest files for the control plane machines. By
removing these files, you prevent the cluster from automatically generating
control plane machines.
.. Generate the Kubernetes manifests for the cluster:
+
----
$ ./openshift-install create manifests --dir=<installation_directory> <1>

WARNING There are no compute nodes specified. The cluster will not fully initialize without compute nodes.
INFO Consuming "Install Config" from target directory
----
<1> For `<installation_directory>`, specify the installation directory that
contains the `install-config.yaml` file you created.
+
Because you create your own compute machines later in the installation process,
you can safely ignore this warning.
.. Remove the files that define the control plane machines:
+
----
$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml
----

ifeval::["{context}" == "installing-aws-user-infra"]
. Remove the Kubernetes manifest files that define the worker machines:
+
----
$ rm -f openshift/99_openshift-cluster-api_worker-machineset-*
----
+
Because you create and manage the worker machines yourself, you do not need
to initialize these machines.
endif::[]

. Obtain the Ignition config files:
+
----
$ ./openshift-install create ignition-configs --dir=<installation_directory> <1>
----
<1> For `<installation_directory>`, specify the same installation directory.
+
The following files are generated in the directory:
+
----
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
----
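As an optional sanity check before you use these files, you can confirm that an Ignition config parses as JSON (a sketch; assumes `jq` is available on your workstation):

----
$ jq . <installation_directory>/bootstrap.ign > /dev/null && echo "bootstrap.ign parses as valid JSON"
----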
Review comments:

- I doubt whether we should add it here. As far as I know, only the UPI path supports proxy settings until IPI supports using a preconfigured VPC. This section is for IPI-on-AWS.
- You can use a proxy on AWS IPI; the proxy just has to live outside the VPC and still be reachable by the cluster. This is how CI works now. Not as nice as what's in flight with openshift/release#4719, but still something.
- @gpei Are you suggesting that this should not be included here for AWS IPI (and Azure and GCP)? I had confirmed the appropriate supported list with @katherinedube here: https://jira.coreos.com/browse/OSDOCS-640?focusedCommentId=130922&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-130922
- As Trevor mentioned, until we support deploying to a pre-existing VPC (planned for the next release), the proxy must live outside the VPC and be reachable by the nodes. While I agree it's not ideal, I'm not sure we should be saying it doesn't work unless it really doesn't work.
- Yeah, I understand that we can use a proxy for an IPI install. But QE was told that UPI must be used for proxy testing in 4.2, so all proxy-related testing was scheduled with UPI installs in QE's 4.2 test matrix.
- @katherinedube IPI was being used for testing as a stop-gap until UPI CI was complete. It was identified early on by @derekwaynecarr that UPI is required due to the lack of existing-VPC support for IPI. As I mention in my comment above, IPI can technically work, but I don't see the requirements for IPI proxy meeting real-world use cases.
- @danehans Just to be clear, even if we're going to say UPI is required for proxy support, this should only affect public cloud provider IPI deployments. I don't know that this is really a generic IPI problem across all of our IPI providers, like OSP. As it is today in the test matrix, it's all IPI providers, so nothing is being tested with OSP (and I'm not sure it should have been excluded). Otherwise, why couldn't a public cloud provider use a proxy with IPI? Could you provide some additional details as to why it wouldn't be practical? Thanks!
- I totally agree with @danehans's statement. IPI's prerequisite is having internet connectivity. That means the cluster itself already has an internet connection, so why would it still need a proxy? And even with the proxy enabled, how do we validate that all components and Operators are really getting outside via the proxy and not via their own internet connectivity? In QE's testing, we dropped internet connectivity and enabled the proxy, and we found bugs such as the following: Bug 1753467 - [proxy] no proxy is set for kube-controller-manager. That indicates that if we do not drop internet connectivity for proxy testing, our test results are not convincing; otherwise, we would have missed the bug above. But in the whole IPI install process, we have no way to drop internet connectivity.
- We definitely want to support proxy on blackholed UPI; I don't think anyone opposes that. Once we have bring-your-own-VPC, we all agree we want to support proxy there too. And testing in those environments makes me reasonably confident that at least OpenShift components will respect a voluntary proxy. The only contention seems to be whether we want to support proxy for installer-provided-everything, and the pushback seems to be mostly around "who wants this?". But I don't see anything about that case that would be a support burden, so I'd rather not add confusion by walling it off as an unsupported niche. Maybe customers want a proxy for MitM snooping on our egress (and they are also convinced from our blackhole support that we are playing nice). Or maybe they want a proxy for some other reason. But I much prefer the optics of "you may not care about this on IPI" to "we don't support this (suggesting it does not work reliably) on IPI". From a QE angle, we don't test all possible config permutations on all platforms. I'm fine leaving IPI+proxy covered just in CI, and relying on QE in the blackholed cases to turn up bugs like they're already doing.
- Yeah, agreed. QE did not run proxy testing in a blackholed IPI (because in 4.2 IPI we have no way to create a blackholed environment), only proxy testing in a blackholed UPI. I like "we don't support this (suggesting it does not work reliably) on IPI".