
EXPERIMENT: Switch container networking from host to named bridge networks#403

Draft
pablomh wants to merge 5 commits into theforeman:master from pablomh:foremanctl_networks

Conversation


@pablomh pablomh commented Mar 9, 2026

Currently all containers run with network: host, which means they
share the host network namespace and communicate via localhost. This
works but provides no network isolation between services.

This PR replaces host networking with named Podman bridge networks:

  • foreman-db (internal, isolated) — PostgreSQL; accessible only to
    services that need database access
  • foreman-cache (internal, isolated) — Redis; accessible only to
    services that need cache access
  • foreman-app — Foreman, Pulp, Candlepin, httpd; has a gateway for
    outbound traffic
  • foreman-proxy-net — Foreman and the smart proxy; isolated from
    the database/cache networks

Services communicate by container DNS name (e.g. postgresql,
redis, candlepin) rather than localhost. A dedicated TLS
certificate with SAN=candlepin is issued for the Candlepin
container, since Foreman now connects to it via the bridge DNS name
and validates the hostname.

Candlepin does not publish any ports to the host — Foreman reaches
it directly over the bridge. Port publishing is minimised throughout:
PostgreSQL and Redis never publish ports; Pulp API and content
endpoints publish to 127.0.0.1 for the httpd proxy; the smart
proxy publishes to 0.0.0.0 for external clients.
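As a rough sketch of what these publishing decisions could look like as `containers.podman.podman_container` tasks (container names, images, and port numbers below are illustrative, not taken from this repo):

```yaml
# Sketch only: image names, ports, and task layout are illustrative,
# not foremanctl's actual role code.
- name: Run PostgreSQL with no published ports
  containers.podman.podman_container:
    name: postgresql
    image: docker.io/library/postgres:16
    network:
      - foreman-db              # internal network; nothing reaches the host

- name: Run the Pulp API published to loopback only
  containers.podman.podman_container:
    name: pulp-api
    image: quay.io/pulp/pulp-api
    network:
      - foreman-app
      - foreman-db
    publish:
      - "127.0.0.1:24817:24817" # reachable only by the httpd proxy on the host

- name: Run the smart proxy published on all interfaces
  containers.podman.podman_container:
    name: foreman-proxy
    image: quay.io/foreman/smart-proxy
    network:
      - foreman-proxy-net
    publish:
      - "0.0.0.0:8443:8443"     # external clients connect here
```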

The development environment keeps network: host to avoid
reconfiguring every service URL for a Rails development workflow where
the Foreman process runs directly on the host.

pablomh and others added 5 commits March 9, 2026 10:35
Introduces the deploy_network role, which wraps
containers.podman.podman_network to create a named bridge network with
configurable properties: internal (no external routing), isolate (no
cross-network packet forwarding), IPv6, subnet, gateway, and DNS.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
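For reference, a role wrapping `containers.podman.podman_network` as described above could look roughly like this (a sketch: the variable names are hypothetical, not the role's actual interface, and the isolate setting is a bridge driver option that requires a recent Podman/netavark):

```yaml
# Sketch: create one named bridge network from role variables.
# Variable names are hypothetical, not the deploy_network role's real interface.
- name: Create named bridge network
  containers.podman.podman_network:
    name: "{{ deploy_network_name }}"                          # e.g. foreman-db
    internal: "{{ deploy_network_internal | default(false) }}" # no default gateway
    ipv6: "{{ deploy_network_ipv6 | default(false) }}"
    subnet: "{{ deploy_network_subnet | default(omit) }}"
    gateway: "{{ deploy_network_gateway | default(omit) }}"
    opt:
      isolate: "{{ deploy_network_isolate | default(false) | string }}"
```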
Replace host networking with four named Podman bridge networks:

- foreman-db (internal, isolated): PostgreSQL and its clients
- foreman-cache (internal, isolated): Redis and its clients
- foreman-app: application layer — Foreman, Candlepin, Pulp, httpd
- foreman-proxy-net: Foreman and the smart proxy

Services reach each other by container DNS name instead of localhost.
Per-service notes:

- postgresql: also accessible via a Unix socket bind-mounted at
  /var/run/postgresql, used by Ansible's community.postgresql modules
  during deployment so no TCP port needs to be published to the host.
- redis: publishes no ports; clients reach it by DNS name on
  foreman-cache.
- candlepin: Tomcat binds to 0.0.0.0 so that other containers on the
  bridge network can reach it via its DNS name.
- migration containers (foreman db:migrate, pulp migrations): attached
  to foreman-db when the database is internal; switched to foreman-app
  when the database is external, because foreman-db uses internal: true
  which removes the default gateway, making it impossible for containers
  on that network to reach an external database host.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
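To illustrate what "reach each other by container DNS name" means in practice, settings that previously pointed at localhost would now carry the peer container's name (the excerpts below are hypothetical, not the repo's actual settings files):

```yaml
# Hypothetical configuration excerpts, not foremanctl's actual files.
# Foreman database.yml: host is the PostgreSQL container's DNS name
production:
  adapter: postgresql
  host: postgresql          # resolved by Podman's DNS on foreman-db
  database: foreman

# Redis cache URL: DNS name on foreman-cache instead of 127.0.0.1
redis_url: "redis://redis:6379/0"

# Candlepin URL: bridge DNS name; its TLS cert must carry SAN=candlepin
candlepin_url: "https://candlepin:23443/candlepin"
```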
The development setup runs the Foreman Rails process directly on the
host rather than in a container, so it cannot resolve container bridge
DNS names such as postgresql or candlepin. Keeping host networking for
all services in the devel environment means every service remains
reachable on localhost as before, without requiring per-service URL
overrides for the Rails process.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
foreman-certs generates certificates only for localhost and the server
FQDN. Now that Foreman connects to Candlepin via the bridge DNS name
"candlepin", TLS hostname validation fails against those certificates.
A dedicated certificate with SAN=candlepin, signed by the installer
CA, is generated using the openssl CLI (consistent with the rest of
the codebase).

The certificate validity is set to 7300 days (20 years), matching the
default used by puppet-certs (theforeman/puppet-certs manifests/init.pp
$expiration parameter). The signing step runs on every deployment to
ensure the certificate is always freshly dated, consistent with how
foreman-certs handles the other installer certificates.

The localhost certificate previously used by Candlepin's Tomcat is now
unused and removed:

- "localhost" dropped from certificates_hostnames — the certificates
  role no longer generates the cert.
- localhost_key / localhost_certificate removed from
  default_certificates.yml and installer_certificates.yml.

The healthcheck is updated to validate the new certificate instead of
skipping verification with --insecure. --resolve candlepin:23443:127.0.0.1
forces the connection to the loopback so it works in both bridge
networking (where other containers reach Candlepin by DNS name) and
host networking (where "candlepin" would not resolve via container DNS).

Tests are updated to route connectivity checks through the foreman
container (which shares the same bridge network as candlepin), to verify
against the new certificate, and to check candlepin_certificate expiry
instead of the now-removed localhost_certificate.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
Adds docs/deployment.md with a description of the four bridge
networks introduced by this series, their properties (internal,
isolate), which containers are attached to each, and the rationale
for the port publishing decisions.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
@pablomh pablomh force-pushed the foremanctl_networks branch from 6ab3c71 to 7800f9d on March 9, 2026 09:36
@pablomh pablomh marked this pull request as draft March 9, 2026 17:08
@pablomh pablomh changed the title Switch container networking from host to named bridge networks EXPERIMENT: Switch container networking from host to named bridge networks Mar 9, 2026

ehelms commented Mar 9, 2026

I like where you are headed with security in mind and isolation where possible. I worry about a few things:

  1. Complexity for the developer at the start having to understand this network topology and debugging.
  2. Service complexity for services that might need to be connected to multiple networks due to how many services they use.
  3. Is there an impact to the end user due to network requirements?

I have been wondering and exploring how to use podman networking to our advantage to try to avoid TLS between services. Based on your experiments, what do you think about that?

Do you think you could craft a version of this that starts with just a single network as our deployment model?

The addition of iop also adds a network (https://github.com/theforeman/foremanctl/pull/280/changes#diff-024ced7a24ad97c3e89dceb9ebae521f47807987db81039cd4973f04b79e7e9c) since it's based on puppet-iop and the current model. The goal is similar to what I think you are trying to achieve, and it'd be nice to bring this all together into a single design.

