Code in this library is loaded at runtime by Jenkins. Jenkins is already configured to point to this repository. See Jenkins Shared Libraries.
To get an understanding of the directory structure within this repository, please refer to Directory Structure
To use this pipeline in your repo, you must import it in a Jenkinsfile
```groovy
@Library('Infrastructure')
```

This library contains a complete, opinionated pipeline that can build, test and deploy Java and NodeJS applications. The pipeline contains the following stages:
- Checkout
- Build
- Unit Test
- Security Checks
- Lint (nodejs only)
- Sonar Scan
- Docker build (for AKS deployments, optional ACR steps)
- Contract testing
- Deploy Dev
- High Level Data Setup - Dev
- Smoke Tests - Dev
- (Optional) API (gateway) Tests - Dev
- Deploy Prod
- High Level Data Setup - Production
- Smoke Tests - Production
- (Optional) API (gateway) Tests - Production
In this version, Java apps must use Gradle for builds and contain the gradlew wrapper
script and dependencies in source control. NodeJS apps must use Yarn.
The opinionated app pipeline supports Slack notifications when the build fails or is fixed - your team build channel should be provided.
Example Jenkinsfile to use the opinionated pipeline:
```groovy
#!groovy
@Library("Infrastructure")

def type = "java" // supports "java", "nodejs" and "angular"
def product = "rhubarb"
def component = "recipe-backend" // must match infrastructure module name

withPipeline(type, product, component) {
  enableSlackNotifications('#my-team-builds')
}
```

The opinionated pipeline uses the following branch mapping to deploy applications to different environments.
| Branch | Environment |
|---|---|
| master | aat then prod |
| demo | demo |
| perftest | perftest |
| PR branch | preview |
By default, Terraform plans against production are executed on pull requests that contain any Terraform changes. Application teams can opt out of this:

- For all PRs: manually add the topic `not-plan-on-prod` to the repo.
- For a specific PR: manually add the label `not-plan-on-prod` to that PR.
If the pull request is being merged into the demo, perftest or ithc branches, Terraform plan will run against the corresponding environment, NOT production.
Plans will only run against production on the Production Jenkins. They will NOT work on the Sandbox Jenkins, as its "production" environment is sandbox.
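If you use the GitHub CLI (`gh`), the topic or label can be added from the command line. This is a sketch only: the repository name and PR number below are placeholders.

```shell
# Opt out for all PRs: add the topic to the repository (hypothetical repo name)
gh repo edit hmcts/my-service --add-topic not-plan-on-prod

# Opt out for a single PR: add the label to that PR (hypothetical PR number)
gh pr edit 123 --add-label not-plan-on-prod
```

You can also add the topic and label through the GitHub web UI.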
If you want tests in AAT / Stg environments to run via Azure Front Door, you must add configuration for your application to front door. Have a look at the HMCTS Way.
Add a CNAME for your application that points to front door to azure-private-dns and ensure it ends with -staging. See example.
If a CNAME is not created in private DNS, Jenkins will create an A record and connect to your application on its private IP instead.
For the Dev and Preview environments, you will also need to prevent External DNS from creating an A record in their respective DNS zones. To do this, update your helm values to add an annotation telling external-dns to ignore your ingress:
```yaml
java:
  ingressAnnotations:
    external-dns.alpha.kubernetes.io/exclude: "true"
```
If your tests need secrets to run (e.g. a smoke test user for production), then:
`${env}` will be replaced by the pipeline with the environment that it is being run in. To use this feature you must use single quotes around your string to prevent Groovy from resolving the variable immediately.
```groovy
def secrets = [
  'your-app-${env}': [
    secret('idam-client-secret', 'IDAM_CLIENT_SECRET')
  ],
  's2s-${env}': [
    secret('microservicekey-your-app', 'S2S_SECRET')
  ]
]

static LinkedHashMap<String, Object> secret(String secretName, String envVar) {
  [ $class: 'AzureKeyVaultSecret',
    secretType: 'Secret',
    name: secretName,
    version: '',
    envVariable: envVar
  ]
}

withPipeline(type, product, component) {
  ...
  loadVaultSecrets(secrets)
}
```

In some instances vaults from a different environment could be needed. This is, for example, the case when deploying to preview environments, which should use aat vaults.
When enabled, ${env} will be replaced by the overridden vault environment.
```groovy
def vaultOverrides = [
  'preview': 'aat',
  'spreview': 'saat'
]

def secrets = [
  'your-app-${env}': [
    secret('idam-client-secret', 'IDAM_CLIENT_SECRET')
  ],
  's2s-${env}': [
    secret('microservicekey-your-app', 'S2S_SECRET')
  ]
]

static LinkedHashMap<String, Object> secret(String secretName, String envVar) {
  [ $class: 'AzureKeyVaultSecret',
    secretType: 'Secret',
    name: secretName,
    version: '',
    envVariable: envVar
  ]
}

withPipeline(type, product, component) {
  ...
  overrideVaultEnvironments(vaultOverrides)
  loadVaultSecrets(secrets)
}
```

Any outputs you add to output.tf are available as environment variables which can be used in smoke and functional tests.
If your functional tests require an environment variable such as S2S_URL, you can pass it in by adding it as an output in output.tf:

```hcl
output "s2s_url" {
  value = "http://${var.s2s_url}-${local.local_env}.service.core-compute-${local.local_env}.internal"
}
```

This output will be transposed to uppercase (s2s_url => S2S_URL) and can then be used by functional and smoke tests.
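As an illustrative sketch (the class and method names here are hypothetical, not part of the library), the transposition simply uppercases the output name:

```java
// Sketch only: how a terraform output name maps to its environment variable.
public class OutputTranspose {
    static String toEnvVar(String outputName) {
        return outputName.toUpperCase();
    }

    public static void main(String[] args) {
        // s2s_url becomes S2S_URL
        System.out.println(toEnvVar("s2s_url"));
    }
}
```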
Calls `yarn test:nsp`, so this command must be implemented in package.json.
To check that the app is working as intended you should implement smoke tests which call your app and check that the appropriate response is received.
This should, ideally, check the entire happy path of the application. Currently, the pipeline only supports Yarn to run smoke tests and will call `yarn test:smoke`,
so this must be implemented as a command in package.json. The pipeline exposes the appropriate application URL in the
TEST_URL environment variable and this should be used by the smoke tests you implement. The smoke test stage is
called after each deployment to each environment.
The smoke tests are to be non-destructive (i.e. have no data impact, such as not creating accounts) and a subset of component level functional tests.
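A minimal package.json scripts block might look like the following. The actual commands are placeholders (assumptions, not prescribed by the library); use whichever runner your project uses, reading `TEST_URL` from the environment:

```json
{
  "scripts": {
    "test:nsp": "nsp check",
    "test:smoke": "jest --config jest.smoke.config.js"
  }
}
```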
An application can configure running continuous smoke/functional tests on java app deployments managed through flux.
https://github.com/hmcts/chart-java/#smoke-and-functional-tests
To build Docker images for this, add enableDockerTestBuild() to Jenkinsfile_CNP. The Static Checks/Container Build stage in the pipeline will then execute, including a test Docker image.
A Docker test build was previously built by default, but it has been made optional for pipeline speed and reliability.
This can be used to import data required for the application. The most common example is importing a CCD definition, but data requirements of a similar nature can be included using the same functionality. Smoke and functional tests in non-production environments will run after the import allowing automated regression testing of the change.
By adding enableHighLevelDataSetup() to the Jenkinsfile, High Level Data Setup stages will be added to the pipeline.
```groovy
#!groovy
@Library("Infrastructure")

def type = "java"
def product = "rhubarb"
def component = "recipe-backend"

withPipeline(type, product, component) {
  enableHighLevelDataSetup()
}
```
The opinionated pipeline uses the following branch mapping to import definition files to different environments.
| Branch | High Level Data Setup stage |
|---|---|
| master | aat then prod |
| PR | aat |
| perftest | perftest |
| demo | demo |
| ithc | ithc |
If your service is not yet built on prod, you can disable the prod High Level Data Setup stage by setting the skipHighLevelDataSetupProd flag to true:

```groovy
enableHighLevelDataSetup("", true)
```
It is not possible to remove stages from the pipeline but it is possible to add extra steps to the existing stages.
You can use `before(stage)` and `after<Condition>(stage)` within the `withPipeline` block to add extra steps at the beginning or end of a named stage.
Conditions are:
- Success
- Failure
- Always
Valid values for the stage variable are as follows, where ENV must be replaced by the short environment name:
- checkout
- build
- test
- securitychecks
- sonarscan
- deploy:ENV
- smoketest:ENV
- functionalTest:ENV
- buildinfra:ENV
E.g.

```groovy
withPipeline(type, product, component) {
  ...
  afterSuccess('checkout') {
    echo 'Checked out'
  }
  afterSuccess('build') {
    sh 'yarn setup'
  }
}
```

If your service contains an API (in Azure API Management Service), you need to implement tests for that API. For the pipeline to run those tests, do the following:
- Define an `apiGateway` task (gradle/yarn) in your application.
- From your Jenkinsfile_CNP/Jenkinsfile_parameterized, instruct the pipeline to run that task:
```groovy
withPipeline(type, product, component) {
  ...
  enableApiGatewayTest()
  ...
}
```
The API tests run after smoke tests.
E2E tests can be enabled to run on the master branch after deployment to AKS environments (AAT and Production). To enable E2E tests, add enableE2eTest() to your withPipeline block:
```groovy
withPipeline(type, product, component) {
  ...
  enableE2eTest()
  ...
}
```
E2E tests require the appropriate task to be defined in your application's build configuration:
- For Java: an `e2eTest` task in build.gradle
The tests run after deployment to each environment (AAT and Production) on the master branch, after smoke tests and API gateway tests (if enabled).
- By default your Helm resources are uninstalled to free up resources on the cluster.
- You can keep these resources by adding the enable_keep_helm label on your PR.
- If you want to keep the resources for master build, you can add the below flag to Jenkinsfile_CNP
```groovy
withPipeline(type, product, component) {
  ...
  disableCleanupOfHelmReleaseOnFailure()
  ...
}
```
Please note that Pod logs are saved as artefacts in Jenkins before the Helm release is cleared.
For infrastructure-only repositories e.g. "shared infrastructure" the library provides an opinionated infrastructure pipeline which will build Terraform files in the root of the repository.
The opinionated infrastructure pipeline supports Slack notifications when the build fails or is fixed - your team build channel should be provided.
It uses a similar branch --> environment strategy as the app pipeline but with some differences for PRs
| Branch | Environment |
|---|---|
| master | aat then prod |
| demo | demo |
| perftest | perftest |
| PR branch | aat (plan only) |
Example Jenkinsfile to use the opinionated infrastructure pipeline:
```groovy
#!groovy
@Library("Infrastructure") _

def product = "rhubarb"

withInfraPipeline(product) {
  enableSlackNotifications('#my-team-builds')
}
```

You have the ability to pass extra parameters to withInfraPipeline.
These parameters include:
| parameter name | description |
|---|---|
| component | https://hmcts.github.io/glossary/#component |
| expires | https://github.com/hmcts/terraform-module-common-tags#expiresafter |
Example Jenkinsfile to use the opinionated infrastructure pipeline:
```groovy
#!groovy
@Library("Infrastructure") _

def product = "rhubarb"

//Optional
def component = "extra-detail"
def expiresAfter = "YYYY-MM-DD"

withInfraPipeline(product, component) {
  enableSlackNotifications('#my-team-builds')
  expires(expiresAfter)
}
```

The expiresAfter parameter is used in the Sandbox environment to tag resources with an end date after which they are no longer needed. They will then be automatically deleted after this date.
By default the tag value will be now() + 14 days.
If you want your resources to remain for longer than 14 days, you can override the parameter manually in your Jenkinsfile by specifying the expiresAfter parameter as a date in the format shown above.
For resources that must remain permanently, specify a value of "3000-01-01"
```groovy
def expiresAfter = "3000-01-01"
```
It is not possible to remove stages from the pipeline but it is possible to add extra steps to the existing stages.
You can use `before(stage)` and `after<Condition>(stage)` within the `withInfraPipeline` block to add extra steps at the beginning or end of a named stage.
Conditions are:
- Success
- Failure
- Always
Valid values for the stage variable are as follows where ENV should be replaced by the short environment name:
- checkout
- buildinfra:ENV
E.g.
```groovy
withInfraPipeline(product) {
  ...
  afterSuccess('checkout') {
    echo 'Checked out'
  }
  before('buildinfra:aat') {
    echo 'About to build infra in AAT'
  }
}
```

It is possible for applications to build their specific infrastructure elements by providing an infrastructure folder in the application home directory containing the Terraform scripts to build it.
If your infrastructure includes database creation, there is a Flyway migration step available. It is triggered only if it is enabled inside the withPipeline block via the enableDbMigration() function; by default this step is disabled.
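A minimal sketch of enabling the migration step. This assumes enableDbMigration() takes no arguments; check the library source for the exact signature, as some versions require a vault name parameter:

```groovy
withPipeline(type, product, component) {
  ...
  enableDbMigration()
}
```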
The intent of the Nightly Pipeline is to run dependency checks on a nightly basis against the AAT environment as well as some optional tests.
Example block to enable tests:
```groovy
withNightlyPipeline(type, product, component) {
  // add this!
  enableCrossBrowserTest()
  enableFortifyScan()
}
```
Dependency checks are mandatory and will be included in all pipelines. The test stages are all 'opt-in' and can be added or removed based on your needs.
You can also call enableFortifyScan() inside a withPipeline block. When enabled there, the Fortify scan runs in parallel with the other static checks in the Static checks / Container build stage of the regular pipeline and, by default, does not fail the pipeline.
When Fortify completes, the pipeline archives:
- `Fortify Scan/FortifyScanReport.html` (summary produced by the Fortify client)
- `Fortify Scan/FortifyVulnerabilities.html` and `Fortify Scan/FortifyVulnerabilities.json` (per-issue details fetched from FoD)
If your repo does not already provide a fortifyScan script/task, the library falls back to a built-in FoD API runner (zips the workspace, starts a static scan, and writes Fortify Scan/FortifyScanReport.html). The scan runner resolves releaseId from FORTIFY_RELEASE_ID, config/fortify-client.properties, or by looking up a FoD release whose name matches the repository name (derived from GIT_URL).
To force the built-in scan runner (even if the repo has its own fortifyScan hook), set FORTIFY_SCAN_RUNNER=library. Default is auto (use repo hook if present, otherwise library runner).
Authentication for the per-issue vulnerability fetch uses FoD OAuth client_credentials with FORTIFY_OAUTH_CLIENT_ID/FORTIFY_OAUTH_CLIENT_SECRET (client id/secret).
withFortifySecrets(...) loads Fortify scan credentials from Azure Key Vault secrets fortify-on-demand-username/fortify-on-demand-password and exports them as FORTIFY_USER_NAME/FORTIFY_PASSWORD.
withFortifyOAuthSecrets(...) binds the Jenkins usernamePassword credential fortify-on-demand-oauth (override via FORTIFY_OAUTH_CREDENTIALS_ID) and exports it as FORTIFY_OAUTH_CLIENT_ID/FORTIFY_OAUTH_CLIENT_SECRET.
All available test stages are detailed in the table below:
| TestName | How to enable | Example |
|---|---|---|
| CrossBrowser | Add package.json file with "test:crossbrowser" : "Your script to run browser tests" and call enableCrossBrowserTest() | CrossBrowser example |
| FortifyScan | Call enableFortifyScan() | Java example Node example |
| Performance* | Add Gatling config and call enablePerformancetest() | Example Gatling config |
| SecurityScan | Call enableSecurityScan() | Web Application example API example |
| Mutation | Add package.json file with "test:mutation": "Your script to run mutation tests" and call enableMutationTest() | Mutation example |
| FullFunctional | Call enableFullFunctionalTest() | FullFunctional example |
| E2eTest | Call enableE2eTest() | E2eTest Example |
*Performance tests use Gatling. You can find more information about the tool on their website https://gatling.io/.
You can customise the zap proxy scans of your application by passing through options to the security scanning scripts using the urlExclusions parameter in your Jenkinsfile.
Pass this parameter to the enableSecurityScan block to customise the zap proxy scans.
```groovy
properties([
  parameters([
    string(name: 'ZAP_URL_EXCLUSIONS', defaultValue: "-config globalexcludeurl.url_list.url\\(1\\).regex=\\'.*jquery-3.5.1.min.js${'$'}\\' -config globalexcludeurl.url_list.url\\(2\\).regex=\\'.*/assets/images.*\\' -config globalexcludeurl.url_list.url\\(3\\).regex=\\'.*/assets/stylesheets.*\\' -config globalexcludeurl.url_list.url\\(4\\).regex=\\'.*/assets/javascripts.*\\' -config globalexcludeurl.url_list.url\\(5\\).regex=\\'.*/ruxitagentjs_.*\\' -config globalexcludeurl.url_list.url\\(6\\).regex=\\'.*/terms-and-conditions.*\\' -config globalexcludeurl.url_list.url\\(7\\).regex=\\'.*/privacy-policy.*\\' -config globalexcludeurl.url_list.url\\(8\\).regex=\\'.*/contact-us.*\\' -config globalexcludeurl.url_list.url\\(9\\).regex=\\'.*/login.*\\' -config globalexcludeurl.url_list.url\\(10\\).regex=\\'.*/cookies.*\\' -config globalexcludeurl.url_list.url\\(11\\).regex=\\'.*/cookie-preferences.*\\' -config globalexcludeurl.url_list.url\\(12\\).regex=\\'.*jquery-3.4.1.min.js${'$'}\\'")
  ])
])

def urlExclusions = params.ZAP_URL_EXCLUSIONS

withNightlyPipeline(type, product, component) {
  // add this!
  enableSecurityScan(
    urlExclusions: urlExclusions
  )
}
```
You can find an example in idam-web-public
The current state of the Nightly Pipeline is geared towards testing both frontend and backend applications served by NodeJS, AngularJS and Java APIs.
The pipeline will automatically detect whether your application is node based or gradle based and run the appropriate security tests based on that.
Gradle-based applications are more commonly used in the backend, but if your frontend application is Gradle based, you can pass scanType: "frontend" to indicate this and run the frontend-specific security script instead of the default backend-specific script.
```groovy
withNightlyPipeline(type, product, component) {
  enableSecurityScan(
    scanType: "frontend"
  )
}
```
If you have a requirement to customise the security script, you can place your own script in a folder called ci in your repo. Make sure to call the script security.sh.
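A minimal `ci/security.sh` stub might look like the following. The actual scan command is a placeholder (an assumption, not part of the library); substitute your own ZAP invocation:

```shell
#!/usr/bin/env bash
# ci/security.sh - custom security scan entry point picked up by the pipeline
set -euo pipefail

# TEST_URL is exposed by the pipeline; default shown for local runs
echo "Running custom ZAP scan against ${TEST_URL:-<not set>}"

# Placeholder: replace with your actual ZAP scan command
```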
The pipeline contains stages for application checkout, build and a list of testing types. Jenkins triggers the build based on the Jenkinsfile configuration. To enable the Jenkins Nightly Pipeline, a file named Jenkinsfile_nightly must be included in the repository.
Create the Jenkinsfile_nightly, import the Infrastructure library and use the withNightlyPipeline block.
When initially setting up the nightly pipeline for use in your repo, you should make use of the nightly-dev branch. You should also utilise this branch when debugging any issues that arise in the nightly pipeline.
You can use the before(stage) and after<Condition>(stage) within the withNightlyPipeline block to add extra steps at the beginning or end of a named stage.
Conditions are:
- Success
- Failure
- Always
```groovy
withNightlyPipeline(type, product, component) {
  enableCrossBrowserTest()
  enableFullFunctionalTest()
  loadVaultSecrets(secrets)
  before('crossBrowserTest') {
    yarnBuilder.smokeTest()
  }
  afterAlways('crossBrowserTest') {
    steps.archiveArtifacts allowEmptyArchive: true, artifacts: 'functional-output/crossbrowser/reports/**/*'
  }
  afterAlways('fullFunctionalTest') {
    steps.archiveArtifacts allowEmptyArchive: true, artifacts: 'functional-output/functional/reports/**/*'
  }
}
```
It is possible to trigger optional full functional tests, performance tests, fortify scans and security scans on your PRs. To trigger a test, add the appropriate label(s) to your pull request in GitHub:
- `enable_full_functional_tests`
- `enable_performance_test`
- `enable_e2e_test`
- `enable_fortify_scan`
- `enable_security_scan`
Some tests may require additional configuration - copy this from your Jenkinsfile_nightly to your Jenkinsfile_CNP.
The fortify scan will be triggered in parallel as part of the Tests/Checks/Container Build stage.
The pipeline supports running performance tests as part of your deployment to AAT or perftest environments. There are two types of performance tests available:
Dynatrace synthetic tests are browser-based tests that monitor your application from the user's perspective. These run in Dynatrace and check response times, availability, and functional correctness.
Gatling tests run load tests against your application from a separate Git repository. This lets teams maintain centralised performance test suites that can be reused across multiple services.
Add the following to your Jenkinsfile_CNP:
```groovy
withPipeline(type, product, component) {
  // Enable Dynatrace synthetic tests
  enablePerformanceTestStages(
    timeout: 15,
    configPath: 'src/test/performance/config/config.groovy'
  )

  // Enable Gatling load tests from external repo
  enableGatlingLoadTests(
    repo: 'https://github.com/hmcts/your-performance-tests.git',
    branch: 'main',
    simulation: 'uk.gov.hmcts.yourapp.simulation.LoadTest',
    timeout: 10
  )

  // Optionally enable Site Reliability Guardian evaluation
  enableSrgEvaluation(
    serviceName: 'your-service-name',
    failureBehavior: 'warn' // 'fail', 'warn', or 'ignore'
  )

  // Optionally create IDAM test user for performance tests
  enableIdamTestUser(
    email: '[email protected]',
    forename: 'Performance',
    surname: 'Tester',
    password: 'PerfTest1!',
    roles: ['caseworker', 'caseworker-employment', 'caseworker-employment-englandwales']
  )
}
```

For Dynatrace tests, create a config file in your service repo (default location: src/test/performance/config/config.groovy):
```groovy
// config.groovy
// ====== Set up default values ======
this.dynatraceMetricType = 'service-application'
this.dynatraceMetricTag = 'namespace:'

//Preview Config
this.dynatraceSyntheticTestPreview = "SYNTHETIC_TEST-ABC123"
this.dynatraceDashboardIdPreview = "DASHBOARD-123"
this.dynatraceDashboardURLPreview = "https://your-preview-dashboard-url"
this.dynatraceEntitySelectorPreview = 'type(service),tag(\\"[Kubernetes]namespace:\\"),tag(\\"Environment:PREVIEW\\"),entityId(\\"\\")'

//AAT Config
this.dynatraceSyntheticTestAAT = "SYNTHETIC_TEST-ABC123"
this.dynatraceDashboardIdAAT = "DASHBOARD-123"
this.dynatraceDashboardURLAAT = "https://your-aat-dashboard-url"
this.dynatraceEntitySelectorAAT = 'type(service),tag(\\"[Kubernetes]namespace:\\"),tag(\\"Environment:AAT\\"),entityId(\\"\\")'

//Perftest Config
this.dynatraceSyntheticTestPerfTest = "SYNTHETIC_TEST-ABC123"
this.dynatraceDashboardIdPerfTest = "DASHBOARD-123"
this.dynatraceDashboardURLPerfTest = "https://your-perftest-dashboard-url"
this.dynatraceEntitySelectorPerfTest = 'type(service),tag(\\"[Kubernetes]namespace:\\"),tag(\\"Environment:PERF\\"),entityId(\\"\\")'
```

Your external Gatling repository must:
- Use Gradle with the `gatling-gradle-plugin` or `gradle-gatling-plugin`
- Read `TEST_URL` from the environment to target the correct deployment
- Define test parameters (users, duration, ramp) in `build.gradle` rather than the pipeline
Example Gatling build.gradle:
```groovy
gatling {
  simulations = {
    include 'uk/gov/hmcts/**/*.scala'
  }
}
```

Performance tests require Dynatrace API tokens stored in Azure KeyVault (et-perftest or your product-specific vault):
- `perf-synthetic-monitor-token` - for triggering and monitoring synthetic tests
- `perf-metrics-token` - for sending release metrics
- `perf-event-token` - for posting build events
- `perf-synthetic-update-token` - for enabling/disabling synthetic tests
These are automatically loaded by the pipeline from the shared perftest vault.
Performance test stages run:
- Preview environment: After deployment (for PR branches)
- AAT environment: After deployment (for master branch)
- Perftest environment: After deployment (for perftest branch)
Tests can run in parallel (Dynatrace and Gatling at the same time) or individually depending on which you've enabled.
SRG evaluates performance test results against quality gates you define in Dynatrace. You can configure what happens when tests fail:
- `fail` - pipeline fails if SRG thresholds are breached
- `warn` - pipeline marked as unstable (default)
- `ignore` - results logged but build continues
Note: SRG evaluation is not yet fully implemented - the structure is in place for future use.
Gatling test reports are automatically uploaded to Azure Blob Storage and CosmosDB. You can view them at:
https://buildlog-storage-account.blob.core.windows.net/performance/{product}-{component}/{environment}
For external Gatling tests, reports are uploaded to:
https://buildlog-storage-account.blob.core.windows.net/performance/perfInBuildPipeline/{product}-{component}/{environment}
For performance testing, you can create test users in IDAM using the createIdamTestUsers pipeline function.
Important: This function only runs in AAT and Preview environments (both use AAT IDAM). In other environments, it logs a message and returns null without creating users.
- Store the full IDAM testing-support URL in KeyVault as `idam-test-support-url`
- This is already added to the shared rpe-shared-perftest KV, which the performance stages load secrets from
```groovy
def secrets = [
  secret('idam-test-support-url', 'IDAM_TEST_SUPPORT_URL')
]

withAzureKeyvault(azureKeyVaultSecrets: secrets, keyVaultURLOverride: 'https://your-vault-aat.vault.azure.net/') {
  def testUser = createIdamTestUsers(
    email: '[email protected]',
    forename: 'Case',
    surname: 'Worker',
    password: 'Caseworker1!',
    roles: ['caseworker', 'caseworker-employment', 'caseworker-employment-englandwales']
  )

  // Store credentials for use in tests
  if (testUser) {
    env.TEST_USER_EMAIL = testUser.email
  }
}
```

IDAM requires passwords to have:
- Minimum 8 characters
- At least one uppercase letter
- At least one lowercase letter
- At least one number
- At least one special character
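The rules above can be sketched as a simple check. This is illustrative only (the class and method names are hypothetical), and IDAM's actual validation may be stricter:

```java
import java.util.regex.Pattern;

// Sketch only: mirrors the password rules listed above
public class IdamPasswordCheck {
    static boolean meetsRules(String p) {
        return p.length() >= 8
            && Pattern.compile("[A-Z]").matcher(p).find()      // uppercase
            && Pattern.compile("[a-z]").matcher(p).find()      // lowercase
            && Pattern.compile("[0-9]").matcher(p).find()      // number
            && Pattern.compile("[^A-Za-z0-9]").matcher(p).find(); // special char
    }

    public static void main(String[] args) {
        System.out.println(meetsRules("PerfTest1!")); // meets all rules
        System.out.println(meetsRules("weakpass"));   // missing upper/number/special
    }
}
```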
If a user already exists (409 Conflict), the function will continue the build and use the existing user. This makes user creation idempotent - you can safely call it multiple times with the same email without failing the build.
The response will include existed: true to indicate the user already existed:
```groovy
def user = createIdamTestUsers(email: '[email protected]', ...)
if (user.existed) {
  echo "Using existing user: ${user.email}"
} else {
  echo "Created new user: ${user.email}"
}
```

The function automatically checks the environment:
- AAT/Preview: User creation proceeds (both use AAT IDAM)
- Other environments: returns `null` and logs a message - the build continues normally
This means you can safely call it in any environment without conditional logic:
```groovy
def testUser = createIdamTestUsers(email: '[email protected]', ...)
if (testUser) {
  echo "Test user available: ${testUser.email}"
} else {
  echo "Test user creation skipped (not AAT/Preview)"
}
```

The recommended approach is to use enableIdamTestUser() in your pipeline configuration (see the Performance Testing section above). This automatically creates the test user during the Dynatrace Performance Setup stage and makes credentials available via environment variables:
```groovy
withPipeline(type, product, component) {
  enablePerformanceTestStages(
    configPath: 'src/test/performance/config/config.groovy'
  )
  enableIdamTestUser(
    email: '[email protected]',
    forename: 'Performance',
    surname: 'Tester',
    password: 'PerfTest1!',
    roles: ['caseworker', 'caseworker-employment']
  )
}
```

The test user is created automatically, and credentials are stored in:
- `TEST_USER_EMAIL` - the user's email address
- `TEST_USER_PASSWORD` - the user's password
Your Gatling tests or functional tests can access these environment variables directly.
You need to add the nonServiceApp() method in the withPipeline block to skip service-specific steps in the pipeline.
```groovy
#!groovy
@Library("Infrastructure")

withPipeline(type, product, component) {
  nonServiceApp()
}
```

This is a Groovy project, and Gradle is used to build and test.
Run:

```shell
$ ./gradlew build
$ ./gradlew test
```

Alternatively, you can use the Gradle tasks from within a container using the following script:

```shell
$ ./start-docker-groovy-env
```

Then you can run the build and test tasks as described above.
If you use AKS deployments, a Docker image is built and pushed remotely to ACR.
You can optionally make this build faster by using explicit ACR tasks, in an acb.tpl.yaml file located at the root of your project (watch out: the extension is .yaml, not .yml).
This is particularly effective for NodeJS projects pulling many npm packages.
Here is a sample file, assuming you use a Docker multi-stage build:
```yaml
# ./acb.tpl.yaml
version: 1.0-preview-1
steps:
  # Pull previous build images
  # This is used to leverage layer re-use for the next steps
  - id: pull-base
    cmd: docker pull {{.Run.Registry}}/product/component/base:latest || true
    when: ["-"]
    keep: true
  # (Re)create base image
  - id: base
    build: >
      -t {{.Run.Registry}}/product/component/base
      --cache-from {{.Run.Registry}}/product/component/base:latest
      --target base
      .
    when:
      - pull-base
    keep: true
  # Create runtime image
  - id: runtime
    build: >
      -t {{.Run.Registry}}/{{CI_IMAGE_TAG}}
      --cache-from {{.Run.Registry}}/product/component/base:latest
      --target runtime
      .
    when:
      - base
    keep: true
  # Push to registry
  - id: push-images
    push:
      - "{{.Run.Registry}}/product/component/base:latest"
      - "{{.Run.Registry}}/{{CI_IMAGE_TAG}}"
    when:
      - runtime
```

Properties expanded by Jenkins:
| Property matcher | Description |
|---|---|
| {{CI_IMAGE_TAG}} | the standard name of the runtime image |
| {{REGISTRY_NAME}} | the registry name, e.g. hmcts or hmctssandbox. Useful if you want to pass it as a --build-arg parameter |
If you want to learn more about ACR tasks, here is the documentation.
Some basic versions of tools are installed on the Jenkins agent VM images but we try to use version managers where possible, so that applications can update independently and aren't stuck using old versions forever.
Java 11 is installed on the Jenkins agent.
nvm is used; place a .nvmrc file at the root of your repo containing the version you want. If it isn't present, we fall back to whatever is on the Jenkins agent, currently the latest 8.x version.
tfenv is used; place a .terraform-version file in your infrastructure folder for app pipelines, and at the root of your repo for infra pipelines. If this file isn't present, we fall back to v0.11.7.
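For example (the version numbers shown here are illustrative, not recommendations):

```shell
# Pin the Node version for nvm
echo "8.12.0" > .nvmrc

# Pin the Terraform version for tfenv; app pipelines read it from the
# infrastructure folder (created here for the example)
mkdir -p infrastructure
echo "0.11.7" > infrastructure/.terraform-version
```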
You can activate the testing and deployment of Camunda files using the withCamundaOnlyPipeline() method.
This particular method is designed to be used with a separate Camunda repo, as opposed to Camunda files in the app repo.
It has been configured to find BPMN and DMN files in the repo, and create the deployment in Camunda if there are changes.
It will run unit and security tests on PRs, and will upload these DMN/BPMN files to Camunda once merged.
Example of usage:

```groovy
/* … */
def s2sServiceName = "wa_task_configuration_api"

withCamundaOnlyPipeline(type, product, component, s2sServiceName, tenantId) {
  /* … */
}
```

These s2s service names can be found in the camunda-bpm repo: https://github.com/hmcts/camunda-bpm/blob/d9024d0fe21592b39cd77fd6dbd5c2e585e56c59/src/main/resources/application.yaml#L58, e.g. unspec-service, wa_task_configuration_api etc.
Tenant ID can also be checked from the camunda-bpm repo: https://github.com/hmcts/camunda-bpm/blob/master/src/main/resources/application.yaml#L47 eg. wa, ia, civil-unspecified etc.
You can activate contract testing lifecycle hooks in the CI using the enablePactAs() method.
The different hooks are based on roles that you can assign to your project: CONSUMER and/or PROVIDER and/or CONSUMER_DEPLOY_CHECK (to be used in conjunction with the CONSUMER role). A common broker will be used, as well as the naming and tagging conventions.
Here is an example of a project which acts as a consumer and provider (for example a backend-for-frontend):

```groovy
import uk.gov.hmcts.contino.AppPipelineDsl
/* … */
withPipeline(product) {
  /* … */
  enablePactAs([
    AppPipelineDsl.PactRoles.CONSUMER,
    AppPipelineDsl.PactRoles.PROVIDER,
    AppPipelineDsl.PactRoles.CONSUMER_DEPLOY_CHECK
  ])
}
```

The following hooks will then be run before the deployment:
| Role | Order | Yarn | Gradle | Active on branch |
|---|---|---|---|---|
| CONSUMER | 1 | test:pact:run-and-publish | runAndPublishConsumerPactTests | Any branch |
| PROVIDER | 2 | test:pact:verify-and-publish | runProviderPactVerification publish true | master only |
| PROVIDER | 2 | test:pact:verify | runProviderPactVerification publish false | Any branch |
| CONSUMER_DEPLOY_CHECK | 3 | test:can-i-deploy:consumer | canideploy | Any branch |
The Pact broker URL and other parameters are passed to these hooks as follows:
- yarn: `PACT_BROKER_URL`, `PACT_CONSUMER_VERSION`/`PACT_PROVIDER_VERSION`
- gradlew: `-Ppactbroker.url`, `-Ppact.consumer.version`/`-Ppact.provider.version`; `-Ppact.verifier.publishResults=${onMaster}` is passed by default for providers
🛎️ `onMaster` is a boolean that is true if the current branch is master
🛎️ It is expected that the scripts are responsible for figuring out which tag or branch is currently tested.
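To make the Gradle side concrete, here is a minimal sketch of a task that reads these properties. The task body is an assumption for illustration only; real projects wire these properties into their Pact verification plugin configuration rather than printing them:

```groovy
// build.gradle — illustrative sketch only, not the library's actual build logic
task runProviderPactVerification {
    doLast {
        // Properties supplied by the pipeline via -P flags
        def brokerUrl = project.findProperty('pactbroker.url') ?: 'http://localhost:9292'
        def providerVersion = project.findProperty('pact.provider.version') ?: 'local'
        def publish = project.findProperty('pact.verifier.publishResults') ?: 'false'
        println "Verifying pacts from ${brokerUrl} as version ${providerVersion} (publish=${publish})"
    }
}
```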
The environment-specific branches such as demo, ithc and perftest are synced with the master branch automatically by default. If a branch doesn't exist it will not be synced.
By using the `syncBranchesWithMaster()` method in Application, Infrastructure and Camunda pipelines, you can manually override which environment branches are synced. Setting the value to `[]` disables the sync for all branches. This method is invoked in the master build and executes as the last stage of the build.
Example of overriding branches:

```groovy
def branchesToSync = ['demo', 'perftest']

withPipeline(type, product, component) {
  syncBranchesWithMaster(branchesToSync)
}
```

Example of disabling sync:

```groovy
def branchesToSync = []

withPipeline(type, product, component) {
  syncBranchesWithMaster(branchesToSync)
}
```
The Terraform AzureRM provider now supports resource types that were previously created using Azure Template Deployment.
Currently, resources created using the following modules can be imported:
- Service Bus Namespace (https://github.com/hmcts/terraform-module-servicebus-namespace)
- Service Bus Topic (https://github.com/hmcts/terraform-module-servicebus-topic)
- Service Bus Queue (https://github.com/hmcts/terraform-module-servicebus-queue)
- Service Bus Subscription (https://github.com/hmcts/terraform-module-servicebus-subscription)
Platops have released new versions of these modules, in which native Terraform resource types are used. The new version is available in a separate branch in each of the respective repositories.
To consume the new modules, existing resources must be imported into the new module structure. The import is performed automatically in the background when there are modules that need to be imported; users will notice a new stage, "Import Terraform Modules", in the pipeline.
NOTE: the module's local name must NOT be changed for the import to work as expected. For example, given `module "servicebus-namespace" { ... }`, the local name `servicebus-namespace` should not be changed.
Example:
Build Console: https://sandbox-build.platform.hmcts.net/job/HMCTS_Sandbox_RD/job/rd-shared-infrastructure/job/sandbox/170/consoleFull
When writing or testing pipeline code:
- Use `JsonSlurperClassic` instead of `JsonSlurper` when parsing loosely structured or untyped JSON, to avoid errors (especially in unit tests).
- Add null checks and other defensive checks when working with Jenkins-provided objects or properties, so that code works across different environments.
- Include appropriate stubbing and mocking in tests for anything that is pipeline-specific, to improve reliability.
These practices help prevent issues caused by recent updates to certain plugins.
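As a minimal illustration of the `JsonSlurperClassic` and null-check advice (the JSON payload below is made up for the example):

```groovy
import groovy.json.JsonSlurperClassic

// JsonSlurperClassic returns plain, serialisable Maps and Lists, which
// behave predictably in Jenkins pipeline (CPS) code and in unit tests,
// unlike JsonSlurper's lazy parse types.
def json = new JsonSlurperClassic().parseText('{"name": "rhubarb", "tags": ["java"]}')
assert json.name == 'rhubarb'
assert json.tags[0] == 'java'

// Defensive default for a property that may be absent
def owner = json.owner ?: 'unknown'
assert owner == 'unknown'
```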
Any steps that you see in your Jenkins pipeline can be found within this repository.
If you search this repository for the command being run when a failure occurs, you can see where the command and its associated variables are defined.
For example, pipelines may only create Terraform resources that have been pre-approved.
If your pipeline fails with an error message saying "this repo is using a terraform resource that is not allowed", you can search the repo for this message to see where the steps that throw this error are defined.
Searching for this message will lead you to /vars/approvedTerraformInfrastructure.groovy.
This file calls a class named `TerraformInfraApprovals`, which points to the repository that defines, in JSON syntax, which infrastructure resources and modules are approved for use at the global and project level.
- Use GitHub pull requests to make changes
- Test a change by pointing a repository at the branch containing it; edit its `Jenkinsfile` like so:

```groovy
@Library('Infrastructure@<your-branch-name>') _
```