
= Administration

[role="_abstract"]
The *Administration* menu provides access to the administrative options of {ControllerName}.
From here, you can create, view, and edit:


= Benefits of fact caching

[role="_abstract"]
Fact caching saves you time over running fact gathering.
If you have a playbook in a job that runs against a thousand hosts and forks, it can take 10 minutes to gather facts across all of those hosts.

If you run a job on a regular basis, the first run of it caches these facts and the next run pulls them from the database.
This reduces the runtime of jobs against large inventories.
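As an illustration, fact caching behaves like a per-host cache: the first lookup pays the gathering cost, and later lookups do not. The sketch below is illustrative only; the controller persists facts in its database rather than in an in-memory dictionary.

```python
import time

# Hypothetical in-memory fact cache, keyed by host.
fact_cache = {}

def gather_facts(host):
    """Simulate a slow fact-gathering pass (a stand-in for Ansible's setup module)."""
    time.sleep(0.01)  # stand-in for per-host network round trips
    return {"hostname": host, "os_family": "RedHat"}

def get_facts(host):
    """Return cached facts when present, gathering only on a cache miss."""
    if host not in fact_cache:
        fact_cache[host] = gather_facts(host)
    return fact_cache[host]

first = get_facts("node1")   # slow path: facts are gathered and cached
second = get_facts("node1")  # fast path: facts come from the cache
```

Across a large inventory, only the first run pays the gathering cost; every later run reads the stored facts.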

[NOTE]

= Capacity job impacts

[role="_abstract"]
When configuring {ControllerName} capacity, it is important to understand how different job types impact the system capacity.

{ControllerName} uses Ansible to run jobs. Each job can have a different impact on system resources depending on the number of forks used for the job.

The default forks value for Ansible is five.
This means that, by default, each job can run tasks on up to five systems concurrently.

However, if you set up {ControllerName} to run against fewer systems than that, then the actual concurrency value is lower.

When a job is run in {ControllerName}, the number of forks selected is incremented by 1, to compensate for the Ansible parent process.

For example, if you run a playbook against five systems with forks value of 5, then the actual forks value from the Job Impact perspective is 6.
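The arithmetic above can be sketched as a small function. The name `job_impact` and the default-forks handling are illustrative, not the controller's actual implementation:

```python
def job_impact(forks, num_hosts, default_forks=5):
    """Approximate job capacity impact: the effective concurrency is
    capped by the number of hosts, plus 1 for the Ansible parent process."""
    effective = forks if forks else default_forks  # Ansible's default forks value is 5
    return min(effective, num_hosts) + 1

print(job_impact(forks=5, num_hosts=5))  # 6: five forks plus the parent process
print(job_impact(forks=5, num_hosts=3))  # 4: only three hosts, so actual concurrency is lower
```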

= Cleanup Expired Sessions

[role="_abstract"]
You can remove expired user sessions from {ControllerName} to help maintain system performance.

To remove expired sessions, click the launch image:rightrocket.png[Launch,15,15] icon beside *Cleanup Expired Sessions*.

You can review or set a schedule for cleaning up expired sessions by performing the same procedure described for activity stream management jobs.

= Configure the host hostname for notifications

[role="_abstract"]
By default, {ControllerName} uses the system hostname for notifications.

In link:{URLControllerAdminGuide}/controller-config#controller-configure-system[System settings], you can replace the default value in the *Base URL of the service* field with your preferred hostname to change the notification hostname.

//image::ug-system-misc-baseurl.png[System Base URL]

= Execution environment mount options

[role="_abstract"]
{ExecEnvShort} isolation provides a secure environment for running jobs, but it can limit access to system resources such as certificates.

Rebuilding an {ExecEnvShort} is one way to add certificates, but inheriting certificates from the host provides a more convenient solution.
For VM-based installations, {ControllerName} automatically mounts the system truststore in the {ExecEnvShort} when jobs run.

You can customize {ExecEnvShort} mount options and mount paths in the *Paths to expose to isolated jobs* field of the *Job Settings* page, where Podman-style volume mount syntax is supported.
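For example, entries in the *Paths to expose to isolated jobs* field use Podman-style `host-dir:container-dir:options` syntax. The entries below mirror common CA-trust paths but are illustrative; adjust them to your environment:

```
/etc/pki/ca-trust:/etc/pki/ca-trust:O
/usr/share/pki:/usr/share/pki:O
```

The `O` option mounts the host directory as an overlay, so jobs can read the certificates without modifying the host's copy.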

[role="_additional-resources"]
.Additional resources

* link:https://docs.podman.io/en/latest/markdown/podman-run.1.html#volume-v-source-volume-host-dir-container-dir-options[Podman documentation]

= Granting access

[role="_abstract"]
Granting access to certain parts of the system exposes security risks.
Apply the following practices to help secure access:


= Impact of job types in {ControllerName}

[role="_abstract"]
Jobs and ad hoc jobs follow the preceding model of forks + 1.
If you set a fork value on your job template, your job capacity value is the minimum of the forks value supplied and the number of hosts that you have, plus one.
The +1 is to account for the parent Ansible process.
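The rule above can be sketched as follows (the function name is illustrative):

```python
def job_capacity(template_forks, host_count):
    """Job capacity value: the smaller of the forks value and the
    number of hosts, plus one for the parent Ansible process."""
    return min(template_forks, host_count) + 1

print(job_capacity(10, 3))  # 4: capacity is limited by the three hosts
```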

= Inventory sync jobs

[role="_abstract"]
Inventory synchronization jobs update the inventory data in {ControllerName} by pulling the latest information from the configured inventory sources.
This ensures that the inventory reflects the current state of the managed infrastructure.

Inventory sync jobs can be scheduled to run at regular intervals or triggered manually by users.
These jobs gather data such as host details, group memberships, and variables from various sources such as cloud providers, dynamic inventory scripts, or static files.
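A minimal sketch of what a sync does, assuming hypothetical record and inventory shapes (the controller's real data model differs):

```python
inventory = {"hosts": {}, "groups": {}}

def sync_inventory(inventory, source_records):
    """Apply records pulled from an inventory source (cloud API, dynamic
    script, or static file) so the inventory reflects current infrastructure."""
    for rec in source_records:
        host = rec["name"]
        inventory["hosts"][host] = rec.get("variables", {})
        for group in rec.get("groups", []):
            inventory["groups"].setdefault(group, []).append(host)
    return inventory

records = [
    {"name": "web1", "groups": ["webservers"], "variables": {"region": "us-east-1"}},
    {"name": "db1", "groups": ["databases"], "variables": {"region": "us-east-1"}},
]
sync_inventory(inventory, records)
print(inventory["groups"]["webservers"])  # ['web1']
```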

When an inventory synchronization is executed, the results display in the *Output* tab.

For more information about inventory synchronization, see link:{URLControllerUserGuide}/controller-inventories#ref-controller-constructed-inventories[Constructed inventories].

= Keeping your subscription in compliance

[role="_abstract"]
{ControllerNameStart} provides a way to monitor your subscription status and usage so that your installation remains compliant with your Red Hat subscription.

Your subscription has two possible statuses:

* *Compliant*: Indicates that your subscription is appropriate for the number of hosts that you have automated within your subscription count.

= Metrics to monitor {ControllerName}

[role="_abstract"]
Monitor your {ControllerName} hosts at the system and application levels.

System level monitoring includes the following information:
* CPU use
* Network traffic

Application level metrics supply data that the application knows about the system. This data includes the following information:

* How many jobs are running in a given instance
* Capacity information about instances in the cluster

= Configuring `metrics-utility` on a manual containerized installation of {PlatformNameShort}

[role="_abstract"]
The `metrics-utility` tool generates performance metrics and reports for {PlatformNameShort} installations.

`metrics-utility` is included in the {OCPShort} image beginning with version 4.12, 4.512, and 4.6.
If your system does not have `metrics-utility` installed, update your OpenShift image to the latest version.

Use the following steps to configure `metrics-utility` on a manual containerized installation of {PlatformNameShort}:
. Make scripts executable.
. Configure cron jobs.
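As an illustration of the cron step, entries like the following could schedule data gathering and report building. The subcommands follow the `metrics-utility` documentation, but the schedule and options shown are assumptions to adapt to your environment:

```
# Gather billing data every hour; build the report monthly.
@hourly metrics-utility gather_automation_controller_billing_data --ship --until=10m
0 4 1 * * metrics-utility build_report
```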

[NOTE]
====
You must have an active {PlatformNameShort} subscription.
====

*Minimum resource requirements*

Using the `metrics-utility` tool on a containerized installation of {PlatformNameShort} requires the following resources:

** Minimum: 256 MB RAM (supports up to ~10,000 job host summaries)
** Recommended: 512 MB RAM (standard deployments)
** Large-scale: 1 GB RAM (supports up to ~100,000 job host summaries)
+
[NOTE]
====
Memory requirements scale with the number of hosts and jobs being processed.
Execution time: report generation typically completes within 10-30 seconds, depending on data volume.
====

= Configuring `metrics-utility` on {OCPShort} from the {PlatformNameShort} operator

[role="_abstract"]
`metrics-utility` is a command-line tool that collects and reports metrics from your {OCPShort} cluster to your {ControllerName} instance.

`metrics-utility` is included in the {OCPShort} image beginning with version 4.12, 4.512, and 4.6.
If your system does not have `metrics-utility` installed, update your OpenShift image to the latest version.


= Minimize local system access

[role="_abstract"]
When you use {ControllerName} with best practices, it does not require local user access except for administrative purposes.
Non-administrator users do not have access to the {ControllerName} system.

= Notification hierarchy

[role="_abstract"]
{ControllerName} uses a hierarchical notification system where notification templates can be defined at various levels, and lower-level objects can inherit templates from their parent objects.

Notification templates inherit templates defined on parent objects, such as the following:

* Job templates use notification templates defined for them.

[id="con-controller-overview-automation_{context}"]

= "Push button" automation

[role="_abstract"]
{ControllerName} provides "push button" automation capabilities that allow users to run predefined jobs and workflows with minimal input.

Use {ControllerName} to access your favorite projects and re-trigger execution from the web interface.
{ControllerNameStart} asks for input variables, prompts for your credentials, starts and monitors jobs, and displays results and host history.

= Real-time playbook output and exploration

[role="_abstract"]
With {ControllerName} you can watch playbooks run in real time, seeing each host as it checks in.
You can go back and explore the results for specific tasks and hosts in great detail, search for specific plays or hosts and see just those results, or locate errors that need to be corrected.

[id="con-controller-overview-galaxy_{context}"]

= {Galaxy} integration

[role="_abstract"]
{ControllerName} integrates with {Galaxy} to simplify the process of using and managing Ansible roles in your automation projects.

By including an {Galaxy} `requirements.yml` file in your project directory, {ControllerName} automatically fetches the roles your playbook needs from Galaxy, GitHub, or your local source control.
For more information, see link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.6/html/using_automation_execution/controller-projects#ref-projects-galaxy-support[Ansible Galaxy Support].

= Inventory plugins

[role="_abstract"]
{ControllerName} uses inventory plugins to gather information about the hosts in your infrastructure.

Use the following inventory plugins from upstream collections:

* `amazon.aws.aws_ec2`
* `community.vmware.vmware_vm_inventory`

= Integrated notifications

[role="_abstract"]
Keep track of the status of your automation.

You can configure the following notifications:

= Remote command execution

[role="_abstract"]
Remote command execution allows you to run ad hoc tasks on one or more managed nodes directly from the {ControllerName} interface without creating a job template.

Use remote command execution to perform a simple task, such as adding a single user, updating a single security vulnerability, or restarting a failing service.
Any task that you can describe as a single Ansible play can be run on a host or group of hosts in your inventory.
You can manage your systems quickly and easily.

= Playbook access and information sharing

[role="_abstract"]
{ControllerNameStart}'s use of automation {ExecEnvShort}s and Linux containers prevents playbooks from reading files outside of their project directory.

By default, the only data exposed to the ansible-playbook process inside the container is the current project being used.

= Playbook run jobs

[role="_abstract"]
You can use playbook run jobs to run Ansible Playbooks on one or more managed nodes directly from the {ControllerName} interface without creating a job template.

Use playbook run jobs to perform tasks that are more complex than those that can be accomplished with remote command execution.
Any task that you can describe as an Ansible Playbook can be run on a host or group of hosts in your inventory.
You can manage your systems quickly and easily.
Because of an RBAC engine and detailed audit logging, you know which user has completed a specific task.

When a playbook is run, the results display in the *Output* tab.
If you use the Ansible CLI, it displays the same information, which can be useful for debugging.

//image::ug-results-for-example-job.png[Results for example job]

[id="controller-provisioning-callbacks"]

= Provisioning callbacks

[role="_abstract"]
Provisioning callbacks are a feature of {ControllerName} that enable a host to start a playbook run against itself, rather than waiting for a user to launch a job to manage the host from the {ControllerName} console.

Provisioning callbacks are only used to run playbooks on the calling host and are meant for cloud bursting.
Cloud bursting is a cloud computing configuration that enables a private cloud to access public cloud resources by "bursting" into a public cloud when computing demand spikes.
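As a sketch, a booting host can trigger its own configuration by POSTing its `host_config_key` to the job template's callback endpoint. The hostname, template ID, and key below are placeholders:

```python
# Build (but do not send) a provisioning callback request.
controller = "https://controller.example.com"   # placeholder hostname
job_template_id = 42                            # placeholder template ID
callback_url = f"{controller}/api/v2/job_templates/{job_template_id}/callback/"
payload = {"host_config_key": "EXAMPLE-KEY"}    # placeholder key

# A host could then issue, for example:
#   curl -X POST -d "host_config_key=EXAMPLE-KEY" <callback_url>
print(callback_url)
```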

.Example
*Example*

New instances require client-to-server communication for configuration, such as transmitting an authorization key, rather than running a job against another host.
This enables the automatic configuration of the following:

= Relaunch a job template

[role="_abstract"]
Relaunching a job template creates a new job based on a previous job.

A relaunched job is denoted by its `launch_type` being set to `relaunch`, rather than being treated as a manual launch.
The relaunch behavior deviates from the launch behavior in that it does not inherit `extra_vars`.

Job relaunching does not go through the inherit logic.
It uses the same `extra_vars` that were calculated for the job being relaunched.

*Example*

You launch a job template with no `extra_vars` which results in the creation of a job called *j1*.
Then you edit the job template and add `extra_vars` (such as `{"hello": "world"}`).
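The scenario above can be sketched as follows (the `relaunch` helper is illustrative, not controller code):

```python
def relaunch(job):
    """A relaunch reuses the extra_vars already calculated for the job,
    ignoring any extra_vars added to the template afterwards."""
    return {"launch_type": "relaunch", "extra_vars": job["extra_vars"]}

j1 = {"launch_type": "manual", "extra_vars": {}}  # launched before the template edit
# The template later gains extra_vars {"hello": "world"}, but:
j2 = relaunch(j1)
print(j2["extra_vars"])  # {} - the relaunch does not inherit the new template vars
```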

= Secret handling

[role="_abstract"]
{ControllerNameStart} manages three sets of secrets:

* User passwords for local {ControllerName} users.

== User passwords for local users

{ControllerNameStart} hashes local {ControllerName} user passwords with the PBKDF2 algorithm by using a SHA256 hash.
Users who authenticate by external account mechanisms, such as LDAP, SAML, and OAuth, do not have any password or secret stored.
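A minimal sketch of PBKDF2 with a SHA256 hash using Python's standard library. The controller relies on Django's password hasher, so the iteration count and storage format here are illustrative:

```python
import hashlib
import os

def hash_password(password, salt=None, iterations=600_000):
    """Derive a PBKDF2-HMAC-SHA256 digest; iteration count is illustrative."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

salt, digest = hash_password("s3cret")
# Verification recomputes the digest with the stored salt:
assert hash_password("s3cret", salt)[1] == digest
```

Because only the salt and digest are stored, the plaintext password is never recoverable from the database.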

= Source tree copy behavior

[role="_abstract"]
When {ControllerName} runs a job, it creates a private copy of the project's source tree for that job run.

Every job run has its own private data directory.
This directory contains a copy of the project source tree for the given `scm_branch` that the job is running.
Jobs are free to make changes to the project folder and make use of those changes while the job is still running.
This folder is temporary and is removed at the end of the job run.

If you check the *Clean* option, modified files are removed in {ControllerName}'s local copy of the repository.
This is done through use of the `force` parameter in the corresponding Ansible modules for Git or Subversion.
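The private-copy behavior can be sketched with a temporary directory (illustrative only; the controller's actual job isolation is more involved):

```python
import shutil
import tempfile
from pathlib import Path

def run_job(project_dir):
    """Each job run works in a private, temporary copy of the project
    source tree; the copy is removed when the run ends."""
    with tempfile.TemporaryDirectory() as private_dir:
        job_copy = Path(private_dir) / "project"
        shutil.copytree(project_dir, job_copy)
        (job_copy / "scratch.txt").write_text("job-local change")  # safe: private copy
        return job_copy  # this path no longer exists once the with-block exits

# Demonstrate with a throwaway project directory:
src = Path(tempfile.mkdtemp()) / "proj"
src.mkdir()
(src / "site.yml").write_text("- hosts: all")
copy_path = run_job(src)
print(copy_path.exists())              # False: the private copy was cleaned up
print((src / "scratch.txt").exists())  # False: the original tree is untouched
```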

[role="_additional-resources"]
.Additional resources
* link:https://docs.ansible.com/ansible/latest/modules/git_module.html#parameters[Parameters]

= Understand the architecture of {PlatformNameShort} and {ControllerName}

[role="_abstract"]
{PlatformNameShort} and {ControllerName} comprise a general-purpose, declarative automation platform.
When an Ansible Playbook is launched (by {ControllerName}, or directly on the command line), the playbook, inventory, and credentials provided to Ansible are considered to be the source of truth.

If you want policies around external verification of specific playbook content, job definition, or inventory contents, you must complete these processes before the automation is launched, either by the {ControllerName} web UI, or the {ControllerName} API.

The use of source control, branching, and mandatory code review is best practice for Ansible automation.

= View completed jobs

[role="_abstract"]
You can view completed jobs in {ControllerName} to see the status and details of job templates that have run.

The *Jobs* tab provides the list of job templates that have run.
Click the expand icon next to each job to view the following details:


= View completed workflow jobs

[role="_abstract"]
You can view completed workflow jobs in {ControllerName} to see the details of each job run.

The *Jobs* tab provides the list of job templates that have run.
Click the expand image:arrow.png[Expand,15,15] icon next to each job to view the details of each job.
