Merged
2 changes: 0 additions & 2 deletions _topic_map.yml
@@ -161,8 +161,6 @@ Topics:
Distros: openshift-enterprise,openshift-origin
- Name: Configuring your firewall
File: configuring-firewall
# - Name: Configuring a custom certificate authority
# File: configuring-custom-ca
---
Name: Updating clusters
Dir: updating
2 changes: 2 additions & 0 deletions installing/installing_aws/installing-aws-customizations.adoc
@@ -44,6 +44,8 @@ include::modules/installation-configuration-parameters.adoc[leveloffset=+2]

include::modules/installation-aws-config-yaml.adoc[leveloffset=+2]

include::modules/installation-configure-proxy.adoc[leveloffset=+2]

Review comment:
I have doubts about whether we should add it here. As far as I know, only the UPI path supports proxy settings until IPI supports using a preconfigured VPC. This section is for IPI-on-AWS.

Member:

You can use a proxy on AWS IPI; the proxy just has to live outside the VPC and still be reachable by the cluster. This is how CI works now. Not as nice as what's in flight with openshift/release#4719, but still something.

Contributor Author:

@gpei Are you suggesting that this should not be included here for AWS IPI (and Azure and GCP)? I had confirmed with @katherinedube the appropriate supported list here: https://jira.coreos.com/browse/OSDOCS-640?focusedCommentId=130922&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-130922

Review comment:

As Trevor mentioned, until we support deploying to a pre-existing VPC (planned for the next release), the proxy must live outside the VPC and be reachable by the nodes. While I agree it's not ideal, I'm not sure we should be saying it doesn't work unless it really doesn't work.

Review comment:

Yeah, I understand that we can use a proxy for IPI installs. But QE was told that UPI must be used for proxy testing in 4.2, so all proxy-related testing was scheduled with UPI installs in QE's 4.2 test matrix.

Contributor:

I traced it back to a discussion Daneyon had with Xiaoli about removing the testing of IPI installs (at the time, that might have been the belief), but after IPI was tested, that wasn't the case.

@katherinedube IPI was being used for testing as a stop-gap until UPI CI was complete. It was identified early on by @derekwaynecarr that UPI is required due to the lack of existing VPC support for IPI. As I mention in my above comment, IPI can technically work, but I don't see the requirements for IPI proxy meeting real world use cases.

@katherinedube (Sep 27, 2019):

@danehans Just to be clear, even if we're going to say UPI support is required for proxy, this should only affect public cloud provider IPI deployments. Meaning, I don't know that this is really a generic IPI problem across all of our IPI providers, like OSP. As it stands today in the test matrix, it's all IPI providers, so nothing is being tested with OSP (I'm not sure this should have been excluded). Otherwise, why couldn't a public cloud provider use a proxy with IPI? Could you provide some additional details as to why it wouldn't be practical? Thanks!

Review comment:

I completely agree with @danehans's statement.
IPI's prerequisite is internet connectivity. That means the cluster itself already has an internet connection, so why would it still need a proxy? Even if a proxy is enabled, how do we validate that all components and operators are really getting outside via the proxy and not via their own internet connectivity?
In QE's testing, we dropped internet connectivity and enabled the proxy, and we found the following bug:
Bug 1753467 - [proxy] no proxy is set for kube-controller-manager

That indicates that if we do not drop internet connectivity for proxy testing, our test results are not convincing; otherwise, we would have missed the above bug. But in the whole IPI install process, we have no way to drop internet connectivity.

Member:

> In QE's testing, we dropped internet connectivity, and enable proxy, we found...

We definitely want to support proxy on blackholed UPI; I don't think anyone opposes that. Once we have bring-your-own-VPC, we all agree we want to support proxy there too. And testing in those environments makes me reasonably confident that at least OpenShift components will respect a voluntary proxy. The only contention seems to be whether we want to support proxy for installer-provided-everything. And the pushback seems to be mostly around "who wants this?". But I don't see anything about that case that would be a support burden, so I'd rather not add confusion by walling it off as an unsupported niche. Maybe customers want a proxy for MitM snooping on our egress (and they are also convinced from our blackhole support that we are playing nice). Or maybe they want a proxy for some other reason. But I much prefer the optics of "you may not care about this on IPI" to "we don't support this (suggesting it does not work reliably) on IPI". From a QE angle, we don't test all possible config permutations on all platforms. I'm fine leaving IPI+proxy covered just in CI, and relying on QE in the blackholed cases to turn up bugs like they're already doing.

@jianlinliu (Sep 29, 2019):

> From a QE angle, we don't test all possible config permutations on all platforms.

Yeah, agreed. QE did not run proxy testing in a blackholed IPI environment (in 4.2 IPI, we have no way to create a blackholed environment), only proxy testing in a blackholed UPI environment.

I like "we don't support this (suggesting it does not work reliably) on IPI".


include::modules/installation-launching-installer.adoc[leveloffset=+1]

include::modules/installing-aws-customizations.adoc[leveloffset=+1]
@@ -50,6 +50,12 @@ include::modules/ssh-agent-using.adoc[leveloffset=+1]

include::modules/installation-generate-aws-user-infra.adoc[leveloffset=+1]

include::modules/installation-generate-aws-user-infra-install-config.adoc[leveloffset=+2]

include::modules/installation-configure-proxy.adoc[leveloffset=+2]

include::modules/installation-generate-aws-user-infra-ignition.adoc[leveloffset=+2]

include::modules/installation-extracting-infraid.adoc[leveloffset=+1]

include::modules/installation-creating-aws-vpc.adoc[leveloffset=+1]
@@ -32,6 +32,8 @@ include::modules/installation-configuration-parameters.adoc[leveloffset=+2]

include::modules/installation-azure-config-yaml.adoc[leveloffset=+2]

include::modules/installation-configure-proxy.adoc[leveloffset=+2]

Review comment:

Same here, we only have IPI-on-Azure in 4.2


include::modules/installation-launching-installer.adoc[leveloffset=+1]

include::modules/cli-install.adoc[leveloffset=+1]
2 changes: 2 additions & 0 deletions installing/installing_bare_metal/installing-bare-metal.adoc
@@ -49,6 +49,8 @@ include::modules/installation-initializing-manual.adoc[leveloffset=+1]

include::modules/installation-bare-metal-config-yaml.adoc[leveloffset=+2]

include::modules/installation-configure-proxy.adoc[leveloffset=+2]

include::modules/installation-generate-ignition-configs.adoc[leveloffset=+1]

[id="creating-machines-bare-metal"]
2 changes: 2 additions & 0 deletions installing/installing_gcp/installing-gcp-customizations.adoc
@@ -32,6 +32,8 @@ include::modules/installation-configuration-parameters.adoc[leveloffset=+2]

include::modules/installation-gcp-config-yaml.adoc[leveloffset=+2]

include::modules/installation-configure-proxy.adoc[leveloffset=+2]

Review comment:

Same here, this section is for IPI-on-GCP

@gpei (Sep 27, 2019):

And for proxy support on GCP, we still have a test-blocker bug, https://bugzilla.redhat.com/show_bug.cgi?id=1753930, which is targeted to 4.3. We might need to add it as a known issue in the release notes.

Member:

openshift/installer#2405 is in the merge queue, which would unblock a 4.2 backport. But yeah, restricting proxy GCP to UPI in the meantime would be a good, conservative choice.

Member:

The 4.3/master changes all landed. Hopefully we'll have the 4.2 backports soon.

Review comment:

https://bugzilla.redhat.com/show_bug.cgi?id=1753930 was already verified with a 4.3 build.
But if the 4.2 backports are not merged into 4.2.0, customers will still have the problem when 4.2.0 ships.

Contributor Author:

FYI: with PR #17080, we're limiting proxy support for GCP to UPI installs only.

@danehans Were the PRs for https://bugzilla.redhat.com/show_bug.cgi?id=1753930 backported to 4.2, or is this still a problem in 4.2 for GCP UPI?

If not, you mentioned there was a workaround of adding 'metadata.google.internal.' to noProxy; let me know whether this is something that we need to document.

Review comment:

@bergerhoffer The cloned bug for 4.2, https://bugzilla.redhat.com/show_bug.cgi?id=1759245, has been verified now; it's no longer an issue for 4.2.0 GCP UPI. Thanks!

include::modules/installation-launching-installer.adoc[leveloffset=+1]

//include::modules/installing-aws-customizations.adoc[leveloffset=+1]
@@ -32,6 +32,9 @@ include::modules/installation-configuration-parameters.adoc[leveloffset=+2]

include::modules/installation-azure-config-yaml.adoc[leveloffset=+2]

// TODO: If this assembly is going to be for restricted network Azure install, whenever it is included, need to confirm whether proxy is supported before including the following file:
// include::modules/installation-configure-proxy.adoc[leveloffset=+2]

include::modules/installation-launching-installer.adoc[leveloffset=+1]

include::modules/cli-install.adoc[leveloffset=+1]
@@ -67,8 +67,11 @@ include::modules/ssh-agent-using.adoc[leveloffset=+1]

include::modules/installation-generate-aws-user-infra.adoc[leveloffset=+1]

// After the proxy change merges, I need to put it in and emphasize that you
// must configure a proxy for the AWS mirrored content story.
include::modules/installation-generate-aws-user-infra-install-config.adoc[leveloffset=+2]

include::modules/installation-configure-proxy.adoc[leveloffset=+2]

include::modules/installation-generate-aws-user-infra-ignition.adoc[leveloffset=+2]

include::modules/installation-extracting-infraid.adoc[leveloffset=+1]

@@ -59,6 +59,8 @@ include::modules/installation-initializing-manual.adoc[leveloffset=+1]

include::modules/installation-bare-metal-config-yaml.adoc[leveloffset=+2]

include::modules/installation-configure-proxy.adoc[leveloffset=+2]

include::modules/installation-generate-ignition-configs.adoc[leveloffset=+1]

[id="creating-machines-bare-metal-restricted-network"]
@@ -52,6 +52,8 @@ include::modules/installation-initializing-manual.adoc[leveloffset=+1]

include::modules/installation-vsphere-config-yaml.adoc[leveloffset=+2]

include::modules/installation-configure-proxy.adoc[leveloffset=+2]

include::modules/installation-generate-ignition-configs.adoc[leveloffset=+1]

include::modules/installation-vsphere-machines.adoc[leveloffset=+1]
2 changes: 2 additions & 0 deletions installing/installing_vsphere/installing-vsphere.adoc
@@ -40,6 +40,8 @@ include::modules/installation-initializing-manual.adoc[leveloffset=+1]

include::modules/installation-vsphere-config-yaml.adoc[leveloffset=+2]

include::modules/installation-configure-proxy.adoc[leveloffset=+2]

include::modules/installation-generate-ignition-configs.adoc[leveloffset=+1]

include::modules/installation-vsphere-machines.adoc[leveloffset=+1]
90 changes: 90 additions & 0 deletions modules/installation-configure-proxy.adoc
@@ -0,0 +1,90 @@
// Module included in the following assemblies:
//
// * installing/installing_aws/installing-aws-customizations.adoc
// * installing/installing_aws_user_infra/installing-aws-user-infra.adoc
// * installing/installing_bare_metal/installing-bare-metal.adoc
// * installing/installing_vsphere/installing-vsphere.adoc
// * installing/installing_gcp/installing-gcp-customizations.adoc
// * installing/installing_azure/installing-azure-customizations.adoc
// * installing/installing_restricted_networks/installing-restricted-networks-aws.adoc
// * installing/installing_restricted_networks/installing-restricted-networks-bare-metal.adoc
// * installing/installing_restricted_networks/installing-restricted-networks-vsphere.adoc

ifeval::["{context}" == "installing-bare-metal"]
:bare-metal:
endif::[]

[id="installation-configure-proxy_{context}"]
= Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have
an HTTP or HTTPS proxy available. You can configure a new {product-title}
cluster to use a proxy by configuring the proxy settings in the
`install-config.yaml` file.

ifdef::bare-metal[]
[NOTE]
====
For bare metal installations, if you do not assign node IP addresses from the
range that is specified in the `networking.machineCIDR` field in the
`install-config.yaml` file, you must include them in the `proxy.noProxy` field.
====
endif::bare-metal[]

.Prerequisites

* An existing `install-config.yaml` file.

.Procedure

. Edit your `install-config.yaml` file and add the proxy settings. For example:
+
[source,yaml]
----
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> <1>
  httpsProxy: http://<username>:<pswd>@<ip>:<port> <2>
  noProxy: example.com <3>
additionalTrustBundle: | <4>
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
...
----
<1> A proxy URL to use for creating HTTP connections outside the cluster. The
URL scheme must be `http`.

Member:

I see @danehans asking for this restriction above, but I'm curious about why. My understanding is that you can use an HTTPS URI for httpProxy (e.g. to avoid leaking your proxy-connection creds), although you might bump into containers/image#699 if the proxy doesn't support any of the encryption options currently compiled into containers/image (and that would go for httpsProxy too).

@gpei (Sep 26, 2019):

I once tried using an https URI for httpProxy, and the network operator complained about it.

Member:

What error message did you see?

Review comment:

It was quite a long time ago, so I just spun up a new cluster with that setting.
In the network-operator pod log, I could see the following message, which is a clear prompt:

2019/09/27 02:42:21 Updated ClusterOperator with conditions:
- lastTransitionTime: "2019-09-27T02:42:21Z"
  message: The configuration is invalid for proxy 'cluster' (httpProxy requires a
    'http' URI scheme). Use 'oc edit proxy.config.openshift.io cluster' to fix.
  reason: InvalidProxyConfig

Member:

Followed up here, where that check landed.

<2> A proxy URL to use for creating HTTPS connections outside the cluster. If
this field is not specified, then `httpProxy` is used for both HTTP and HTTPS
connections. The URL scheme must be `http`; `https` is currently not
supported.
<3> A comma-separated list of destination domain names, domains, IP addresses, or
other network CIDRs to exclude from proxying. Preface a domain with `.` to include
all subdomains of that domain. Use `*` to bypass the proxy for all destinations.
<4> If provided, the installation program generates a ConfigMap that is named `user-ca-bundle` in
the `openshift-config` namespace that contains one or more additional CA
certificates that are required for proxying HTTPS connections. The Cluster Network
Operator then creates a `trusted-ca-bundle` ConfigMap that merges these contents
with the {op-system-first} trust bundle, and this ConfigMap is referenced in the Proxy
object's `trustedCA` field. The `additionalTrustBundle` field is required unless
the proxy's identity certificate is signed by an authority from the {op-system} trust
bundle.
+
[NOTE]
====
The installation program does not support the proxy `readinessEndpoints` field.
====

. Save the file and reference it when installing {product-title}.

The installation program creates a cluster-wide proxy named `cluster` that uses
the proxy settings in the provided `install-config.yaml` file. If no proxy
settings are provided, a `cluster` Proxy object is still created, but it has a
nil `spec`.
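
For illustration, assuming the example values above, the resulting Proxy object would look roughly like the following; you can inspect it after installation with `oc get proxy/cluster -o yaml`:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: http://<username>:<pswd>@<ip>:<port>
  noProxy: example.com
  trustedCA:
    name: user-ca-bundle
----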

[NOTE]
====
Only the Proxy object named `cluster` is supported, and no additional
proxies can be created.
====
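
As a further illustration of the `noProxy` field, a hypothetical value that exempts an internal registry host, all subdomains of a domain, and a network CIDR might look like the following (all values are made up for this sketch):

[source,yaml]
----
proxy:
  httpProxy: http://proxy.example.com:3128
  noProxy: registry.internal.example.com,.apps.example.com,10.0.0.0/16
----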
81 changes: 81 additions & 0 deletions modules/installation-generate-aws-user-infra-ignition.adoc
@@ -0,0 +1,81 @@
// Module included in the following assemblies:
//
// * installing/installing_aws_user_infra/installing-aws-user-infra.adoc

[id="installation-generate-aws-user-infra-ignition_{context}"]
= Creating the Kubernetes manifest and Ignition config files

Because you must manually start the cluster machines, you must generate the
Kubernetes manifests and Ignition config files that the cluster needs to
configure its machines.

[IMPORTANT]
====
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours. You must complete your cluster
installation and keep the cluster running for 24 hours in a non-degraded state
to ensure that the first certificate rotation has finished.
====

.Prerequisites

* Obtain the {product-title} installation program.
ifdef::restricted[]
For a restricted network installation, these files are on your bastion host.

Contributor:

If restricted network is not going to be supported in 4.2, you should consider removing this.

Contributor Author:

@kalexand-rh can confirm for me, but the docs for these restricted network installs are in for 4.2, so they should be supported. Also, this sentence will only appear in the restricted docs, so if those don't go out for some reason, this won't appear.

endif::restricted[]
* Create the `install-config.yaml` installation configuration file.

.Procedure

. Remove the Kubernetes manifest files for the control plane machines. By
removing these files, you prevent the cluster from automatically generating
control plane machines.
.. Generate the Kubernetes manifests for the cluster:

Contributor:

I'm a bit confused by this paragraph. First it talks about removing manifests, then it states to generate the manifests.

Member:

> First it talks about removing manifests, then it states to generate the manifests.

You: "Installer, make your usual manifests." Installer: "Here you go" You: "Forget about these. The rest look good. Carry on."

However we choose to explain that workflow works for me.

Contributor Author:

This was pulled from the original module that I broke up, but I agree that it is confusing.

@kalexand-rh - I don't have the background to know why the steps are the way that they are. Do you have any reword suggestions that would make this clearer, or know who might be good to ask on this?

Contributor Author:

Sorry I missed your comment @wking before I added mine - thanks for the explanation!

@kalexand-rh let's look at this together tomorrow.

+
----
$ ./openshift-install create manifests --dir=<installation_directory> <1>

WARNING There are no compute nodes specified. The cluster will not fully initialize without compute nodes.
INFO Consuming "Install Config" from target directory
----
<1> For `<installation_directory>`, specify the installation directory that
contains the `install-config.yaml` file you created.
+
Because you create your own compute machines later in the installation process,
you can safely ignore this warning.
.. Remove the files that define the control plane machines:
+
----
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml
----

ifeval::["{context}" == "installing-aws-user-infra"]
. Remove the Kubernetes manifest files that define the worker machines:
+
----
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*
----
+
Because you create and manage the worker machines yourself, you do not need
to initialize these machines.
endif::[]

. Obtain the Ignition config files:
+
----
$ ./openshift-install create ignition-configs --dir=<installation_directory> <1>
----
<1> For `<installation_directory>`, specify the same installation directory.
+
The following files are generated in the directory:
+
----
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
----
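
Putting the procedure together, the end-to-end sequence can be sketched as follows; `<installation_directory>` is a placeholder for your installation directory, and the worker machineset removal applies only to the AWS user-infra flow:

----
$ ./openshift-install create manifests --dir=<installation_directory>
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*
$ ./openshift-install create ignition-configs --dir=<installation_directory>
----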