18 changes: 18 additions & 0 deletions .ecrc.json
@@ -0,0 +1,18 @@
{
"Verbose": false,
"Debug": false,
"IgnoreDefaults": false,
"SpacesAftertabs": false,
"NoColor": false,
"Exclude": ["LICENSE.txt"],
"AllowedContentTypes": [],
"PassedFiles": [],
"Disable": {
"EndOfLine": false,
"Indentation": false,
"IndentSize": false,
"InsertFinalNewline": false,
"TrimTrailingWhitespace": false,
"MaxLineLength": true
}
}
2 changes: 1 addition & 1 deletion .github/workflows/adf.yml
@@ -25,4 +25,4 @@ jobs:
# Run tox using the version of Python in `PATH`
run: |
tox --version
tox
tox
5 changes: 3 additions & 2 deletions .mega-linter.yml
@@ -12,6 +12,7 @@ ENABLE_LINTERS:
- BASH_EXEC
- CLOUDFORMATION_CFN_LINT
- DOCKERFILE_HADOLINT
- EDITORCONFIG_EDITORCONFIG_CHECKER
- JSON_JSONLINT
- JSON_PRETTIER
- JSON_V8R
@@ -35,5 +36,5 @@ JSON_PRETTIER_PRE_COMMANDS:
cwd: "workspace"

CLOUDFORMATION_CFN_LINT_FILE_EXTENSIONS: [".yml", ".yaml"]

MARKDOWN_MARKDOWN_LINK_CHECK_ARGUMENTS: '-q'
EDITORCONFIG_EDITORCONFIG_CHECKER_CONFIG_FILE: '.ecrc.json'
MARKDOWN_MARKDOWN_LINK_CHECK_ARGUMENTS: '-q'
4 changes: 2 additions & 2 deletions .pylintrc
@@ -222,7 +222,7 @@ notes=FIXME,XXX

# Minimum lines number of a similarity.
# Temp 500 until we merge initial_commit into shared codebase.
min-similarity-lines=500
min-similarity-lines=500

# Ignore comments when computing similarities.
ignore-comments=yes
@@ -361,4 +361,4 @@ int-import-graph=

# Exceptions that will emit a warning when being caught. Defaults to
# "Exception"
overgeneral-exceptions=Exception
overgeneral-exceptions=Exception
2 changes: 1 addition & 1 deletion NOTICE.md
@@ -1,3 +1,3 @@
AWS Deployment Framework

Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
16 changes: 8 additions & 8 deletions docs/admin-guide.md
@@ -170,21 +170,21 @@ adf-bootstrap <-- This folder lives in the bootstrap repo on master account
│ ------│ global.yml
│ │ ───test
│ ------│ regional.yml
│ ------│ global.yml
│ ------│ global.yml
│ │ ───prod
│ ------│ regional.yml
│ ------│ global.yml

│───insurance
│ │
│───insurance
│ │
│ │ ───dev
│ ------│ regional.yml
│ ------│ global.yml
│ │ ───test
│ ------│ regional.yml
│ ------│ global.yml
│ ------│ global.yml
│ │ ───prod
│ ------│ regional.yml
│ ------│ global.yml
@@ -206,7 +206,7 @@ adf-bootstrap <-- This folder lives in the bootstrap repo on master account
│ │ ───prod
│ -------- │ regional.yml
│ -------- │ global.yml
│───insurance
│───insurance
│ │ ───prod
│ -------- │ regional.yml
│ -------- │ global.yml
@@ -420,17 +420,17 @@ pipelines:
```

### Integrating with Slack using AWS ChatBot
The ADF also supports integrating pipeline notifications with Slack via AWS ChatBot. This allows pipeline notifications to scale and provides consistent Slack notifications across different AWS services.
The ADF also supports integrating pipeline notifications with Slack via AWS ChatBot. This allows pipeline notifications to scale and provides consistent Slack notifications across different AWS services.

In order to use AWS ChatBot, you must first configure an [AWS ChatBot Client](https://us-east-2.console.aws.amazon.com/chatbot/home?region=eu-west-1#/chat-clients) for your desired Slack workspace. Once the client has been created, you will need to manually create a channel configuration that will be used by the ADF.
In order to use AWS ChatBot, you must first configure an [AWS ChatBot Client](https://us-east-2.console.aws.amazon.com/chatbot/home?region=eu-west-1#/chat-clients) for your desired Slack workspace. Once the client has been created, you will need to manually create a channel configuration that will be used by the ADF.

Currently, dynamically creating channel configurations is not supported. In the deployment map, you can configure a unique channel for each pipeline separately via the notification endpoint parameter. Add a `params` section if it is missing and add the following configuration to the pipeline:
```
pipelines:
- name: some-pipeline
# ...
params:
notification_endpoint:
notification_endpoint:
type: chat_bot
target: my_channel_config
```
6 changes: 3 additions & 3 deletions docs/providers-guide.md
@@ -104,8 +104,8 @@ Provider type: `codecommit`.
> CodeCommit when an update to the repository took place.
- *output_artifact_format* - *(String)* default: `CODE_ZIP`
> The output artifact format. Values can be either CODEBUILD_CLONE_REF or CODE_ZIP. If unspecified, the default is CODE_ZIP.
> If you are using CODEBUILD_CLONE_REF, you need to ensure that the IAM role passed in via the *role* property has the CodeCommit:GitPull permission.
> NB: The CODEBUILD_CLONE_REF value can only be used by CodeBuild downstream actions.
> If you are using CODEBUILD_CLONE_REF, you need to ensure that the IAM role passed in via the *role* property has the CodeCommit:GitPull permission.
> NB: The CODEBUILD_CLONE_REF value can only be used by CodeBuild downstream actions.

### GitHub

@@ -250,7 +250,7 @@ Provider type: `codebuild`.
> Along with `repository_arn`, we also support a `tag` key which can be used
> to define which image should be used (defaults to `latest`).
> An example of this setup is provided [here](user-guide.md#custom-build-images).
>
>
> Image can also take an object that contains a reference to a
> public docker hub image with a prefix of `docker-hub://`, such as
> `docker-hub://bitnami/mongodb`. This allows your pipeline
18 changes: 9 additions & 9 deletions docs/technical-guide.md
@@ -1,24 +1,24 @@
# Technical Guide
## Introduction
This document is intended to give insight into how the AWS Deployment Framework works under the hood.
This document is intended to give insight into how the AWS Deployment Framework works under the hood.

## High Level Overview - AWS Deployment Framework Bootstrap Repository
The AWS Deployment Framework Bootstrap Repository, aka the "Bootstrap Repo", is where the source code used by ADF lives. The bootstrap repo is also where your accounts, OU layout, and base templates are defined.
The flow below is a high-level overview of what happens when a change is committed to this repository.
The AWS Deployment Framework Bootstrap Repository, aka the "Bootstrap Repo", is where the source code used by ADF lives. The bootstrap repo is also where your accounts, OU layout, and base templates are defined.
The flow below is a high-level overview of what happens when a change is committed to this repository.
![bootstrap-repo-overview](images/TechnicalGuide-BootstrapRepo.drawio.png)

### Account Management State Machine
The Account Management State Machine is triggered by S3 PUT events to the ADF Accounts bucket.
### Account Management State Machine
The Account Management State Machine is triggered by S3 PUT events to the ADF Accounts bucket.
Below is a diagram detailing the components of the standard state machine. This state machine is defined in `src/account_processing.yml` and the Lambda function code is located in `src/lambda_codebase/account_processing`.
![account-management-state-machine](images/TechnicalGuide-AccountManagementStateMachine.drawio.png)
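
The precise wiring between the S3 PUT event and the state machine is internal to ADF, but a minimal Python sketch of the idea is shown below: a handler receives the S3 notification for an account file and starts a Step Functions execution with the object details as input. The environment variable, bucket handling, and input field names are illustrative assumptions, not ADF's actual implementation.

```
# Illustrative sketch only -- not ADF's actual trigger code.
# Assumes an S3 PUT notification invokes this handler and that the state
# machine ARN is provided through an environment variable.
import json
import os

import boto3

SFN = boto3.client("stepfunctions")


def lambda_handler(event, _context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]  # ADF Accounts bucket
        key = record["s3"]["object"]["key"]      # account definition file
        SFN.start_execution(
            stateMachineArn=os.environ["ACCOUNT_MANAGEMENT_STATE_MACHINE_ARN"],
            input=json.dumps({"bucket": bucket, "key": key}),
        )
```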


## High Level Overview - AWS Deployment Framework Pipeline Repository
The AWS Deployment Framework Pipeline Repository, aka the "Pipeline Repo", is where the deployment map definitions live. It typically exists in CodeCommit within your Deployment Account(s).
The diagram below details what happens when a commit is pushed to this repository.
The AWS Deployment Framework Pipeline Repository, aka the "Pipeline Repo", is where the deployment map definitions live. It typically exists in CodeCommit within your Deployment Account(s).
The diagram below details what happens when a commit is pushed to this repository.
![pipeline-repo-overview](images/adf-pipeline-high-level.png)

### Pipeline Management State Machine
The Pipeline Management State Machine is triggered by S3 PUT events to the ADF Pipelines bucket. This state machine is responsible for expanding the deployment map, resolving the targets, creating pipeline definitions (JSON objects that detail the source(s), stages, and targets involved), and then generating CDK stacks from those definitions.
The Pipeline Management State Machine is triggered by S3 PUT events to the ADF Pipelines bucket. This state machine is responsible for expanding the deployment map, resolving the targets, creating pipeline definitions (JSON objects that detail the source(s), stages, and targets involved), and then generating CDK stacks from those definitions.

It additionally covers the deletion of stale pipelines. A stale pipeline is any pipeline that has a definition but does not exist in a deployment map.
It additionally covers the deletion of stale pipelines. A stale pipeline is any pipeline that has a definition but does not exist in a deployment map.
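
To make the pipeline definition idea concrete, the sketch below shows the rough shape such a JSON object could take, written as a Python dictionary. The key names are assumptions chosen for illustration; the actual schema is produced by the state machine from the deployment map.

```
# Hypothetical shape of a generated pipeline definition.
# Key names are illustrative assumptions, not ADF's actual schema.
pipeline_definition = {
    "name": "sample-app",
    "source": {"provider": "codecommit", "account_id": "111111111111"},
    "stages": [
        {"name": "build", "provider": "codebuild"},
        {"name": "deploy", "provider": "cloudformation"},
    ],
    "targets": [
        {"account_id": "222222222222", "regions": ["eu-west-1"]},
    ],
}
```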
@@ -1 +1 @@
distributionUrl=https://repo1.maven.org/maven2/org/apache/maven/apache-maven/3.3.9/apache-maven-3.3.9-bin.zip
distributionUrl=https://repo1.maven.org/maven2/org/apache/maven/apache-maven/3.3.9/apache-maven-3.3.9-bin.zip
2 changes: 1 addition & 1 deletion samples/sample-ec2-java-app-codedeploy/appspec.yml
@@ -23,4 +23,4 @@ hooks:
ValidateService:
- location: validate.sh
timeout: 120
runas: ec2-user
runas: ec2-user
2 changes: 1 addition & 1 deletion samples/sample-ec2-java-app-codedeploy/pom.xml
@@ -63,4 +63,4 @@
</dependency>
</dependencies>

</project>
</project>
2 changes: 1 addition & 1 deletion samples/sample-ec2-java-app-codedeploy/scripts/stop.sh
@@ -1,4 +1,4 @@
#!/usr/bin/env bash

sudo killall java
exit 0
exit 0
@@ -14,4 +14,4 @@ endpoints:
restart:
enabled: true
shutdown:
enabled: true
enabled: true
2 changes: 1 addition & 1 deletion samples/sample-etl-pipeline/big_data.txt
@@ -1 +1 @@
Lots of data...
Lots of data...
2 changes: 1 addition & 1 deletion samples/sample-fargate-node-app/Dockerfile
@@ -3,4 +3,4 @@ WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
ENTRYPOINT ["npm", "start"]
ENTRYPOINT ["npm", "start"]
14 changes: 7 additions & 7 deletions samples/sample-rdk-rules/README.md
@@ -1,11 +1,11 @@
# Sample RDK Rules pipeline
This setup allows you to deploy custom AWS Config rules created by the RDK via an ADF pipeline.
This setup allows you to deploy custom AWS Config rules created by the RDK via an ADF pipeline.

## Architecture
![Architecture](./meta/custom-configs.png)
* As a first step, a source code repository is required to store the code. In this pattern we use a CodeCommit repository, which is created as part of the pipeline definition in ADF's deployment_map.yml. An example of the pipeline definition is in the ADF setup section.
* The ADF pipeline definition creates a pipeline that deploys Lambda function(s) into the compliance account and custom Config rule(s) to the target accounts.
* When a custom Config rule gets pushed into the CodeCommit repository:
* When a custom Config rule gets pushed into the CodeCommit repository:
- CodeBuild finds the RDK rule(s) recursively in the `config-rules` directory, zips each rule one by one, and uploads the archives into the ADF bucket. The buildspec uses a helper script called lambda_helper.py for this task. ADF populates the bucket names into SSM Parameter Store during installation, and lambda_helper.py fetches the bucket name from the SSM Parameter Store. The parameter name looks like /cross_region/s3_regional_bucket/{region}. A simplified sketch of this step follows this list.
- CodeBuild then generates two CloudFormation templates: one for the Lambda function(s) deployment and another for the custom Config rule(s) deployment.
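
A minimal Python sketch of the zip-and-upload step described in the list above is shown below. The function name, directory layout, and S3 key are assumptions for illustration; only the SSM parameter name comes from the text above, and this is not the actual lambda_helper.py code.

```
# Simplified sketch of the zip-and-upload step -- not the actual
# lambda_helper.py; names and layout are assumptions.
import zipfile
from pathlib import Path

import boto3


def upload_rule(rule_dir, region):
    ssm = boto3.client("ssm", region_name=region)
    s3 = boto3.client("s3", region_name=region)

    # ADF stores the regional bucket name in SSM Parameter Store.
    bucket = ssm.get_parameter(
        Name=f"/cross_region/s3_regional_bucket/{region}"
    )["Parameter"]["Value"]

    # Zip a single RDK rule directory.
    rule_path = Path(rule_dir)
    archive = rule_path.with_suffix(".zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for item in rule_path.rglob("*"):
            zf.write(item, item.relative_to(rule_path))

    # Upload the archive so the generated CloudFormation template can reference it.
    key = f"rdk-rules/{archive.name}"
    s3.upload_file(str(archive), bucket, key)
    return bucket, key
```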

@@ -27,7 +27,7 @@ Sample pipeline definition looks like below:
image: "STANDARD_5_0"
deploy:
provider: cloudformation
targets:
targets:
- name: LambdaDeployment
regions: <regions>
target: <compliance-account-id>
@@ -36,7 +36,7 @@ Sample pipeline definition looks like below:
- name: ConfigRulesDeployment
regions: <regions>
target:
- <target-accounts-to-deploy-custom-config-rules>
- <target-accounts-to-deploy-custom-config-rules>
properties:
template_filename: "template-config-rules.json"
```
@@ -54,10 +54,10 @@ After you clone the repo, the following file/folder structure will be present:
| requirements.txt | Requirements for the lambda_helper.py script. |

## Lambda function implementation requirements
In your Lambda functions, when you want to use a boto3 client or resource, make sure to:
In your Lambda functions, when you want to use a boto3 client or resource, make sure to:
- Set the `ASSUME_ROLE_MODE` constant to `True`
- Use the `get_client` method for clients.
- Duplicate `get_client` and create a `get_resource` method.
- Duplicate `get_client` and create a `get_resource` method.

```
def get_resource(service, event, region=None):
@@ -85,6 +85,6 @@ These methods use STS and config payload to assume the IAM role in the target ac
## Prerequisites / Important bits
- This solution does not set up Config or a Config recorder.
- When this solution deploys the config rule to a target account, it expects Config to be enabled in that account.
- Each target account's Config role must be assumable by `<account-that-has-the-lambda-function>` so that evaluations can be put into each target account's Config. In other words, the Config role in the target account (2222222222) should have the Lambda function account ID (1111111111) as a trusted entity, as shown below. A simplified code sketch of this cross-account call follows the screenshot.
- Each target account's Config role must be assumable by `<account-that-has-the-lambda-function>` so that evaluations can be put into each target account's Config. In other words, the Config role in the target account (2222222222) should have the Lambda function account ID (1111111111) as a trusted entity, as shown below. A simplified code sketch of this cross-account call follows the screenshot.

![Trusted entity](./meta/lambda-account-id-trusted-entiry.png)
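
The sketch below illustrates, in Python, how a `get_client`-style helper could assume the target account's Config role (taken from the Config event's `executionRoleArn`) and put evaluations into that account's Config. The field names follow the standard AWS Config custom-rule event; the function bodies are assumptions for illustration, not the sample's actual code.

```
# Hedged sketch -- not the sample's actual code. Demonstrates assuming the
# target account's Config role and calling put_evaluations cross-account.
from datetime import datetime, timezone

import boto3

ASSUME_ROLE_MODE = True


def get_client(service, event, region=None):
    """Return a boto3 client, assuming the role passed in the Config event."""
    if not ASSUME_ROLE_MODE:
        return boto3.client(service, region_name=region)
    sts = boto3.client("sts", region_name=region)
    creds = sts.assume_role(
        RoleArn=event["executionRoleArn"],  # target account's Config role
        RoleSessionName="custom-config-rule",
    )["Credentials"]
    return boto3.client(
        service,
        region_name=region,
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )


def report_compliant(event, resource_id):
    """Write a COMPLIANT evaluation into the target account's Config."""
    config = get_client("config", event)
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": "AWS::::Account",
            "ComplianceResourceId": resource_id,
            "ComplianceType": "COMPLIANT",
            "OrderingTimestamp": datetime.now(timezone.utc),
        }],
        ResultToken=event["resultToken"],
    )
```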