soumiknandi/terraform-tutorial
Terraform Tutorial

What is Terraform

HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share.

Terraform Components

Terraform

The terraform block allows you to configure Terraform behavior, including the Terraform version, backend, integration with HCP Terraform, and required providers.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "5.92.0"
    }
  }
}

Providers

Defines which cloud or service Terraform interacts with.

provider "aws" {
  region = "us-east-1"
}

Resources

The most important part of Terraform, used to create, modify, and delete infrastructure components.

Every resource type is associated with a specific provider (e.g., AWS, Azure, GCP).

resource "aws_instance" "example" {
  ami           = "ami-123456"
  instance_type = var.instance_type
}

Data Sources

Used to fetch or read information from external sources without managing them.

Commonly used to retrieve existing infrastructure details (e.g., getting an AWS AMI ID).

data "aws_ami" "latest_ubuntu" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-20.04-amd64-server-*"]
  }
}

resource "aws_instance" "example" {
  ami           = data.aws_ami.latest_ubuntu.id
  instance_type = "t2.micro"
}

Variables

Used to define dynamic values that can be passed into Terraform configurations.

Helps make configurations reusable and flexible.

variable "instance_type" {
  description = "Type of EC2 instance"
  type        = string
  default     = "t2.micro"
}
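Besides defaults, variable values can come from TF_VAR_ environment variables, a terraform.tfvars file, or -var on the command line (listed lowest to highest precedence). A minimal terraform.tfvars sketch, with a hypothetical value:

```hcl
# terraform.tfvars -- loaded automatically by Terraform
instance_type = "t3.micro"
```

Running terraform apply -var="instance_type=t3.small" would override both the default and the tfvars value.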

Outputs

Used to display values after terraform apply.

output "instance_ip" {
  value = aws_instance.example.public_ip
}

Modules

Used to organize and reuse Terraform code.

module "vpc" {
  source = "./modules/vpc"
  vpc_cidr = "10.0.0.0/16"
}
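The module itself is just another Terraform configuration. A sketch of what ./modules/vpc might contain (hypothetical file contents, not from this repository):

```hcl
# ./modules/vpc/variables.tf
variable "vpc_cidr" {
  type = string
}

# ./modules/vpc/main.tf
resource "aws_vpc" "this" {
  cidr_block = var.vpc_cidr
}

# ./modules/vpc/outputs.tf
output "vpc_id" {
  value = aws_vpc.this.id
}
```

The root module can then reference the module's outputs, e.g. module.vpc.vpc_id.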

Basic Commands

Init

terraform init

When you create a new configuration, or check out an existing configuration from version control, you need to initialize the directory with terraform init.

Initializing a configuration directory downloads and installs the providers defined in the configuration.

Terraform downloads the required provider and installs it in a hidden subdirectory of your current working directory, named .terraform.

The terraform init command prints out which version of the provider was installed.

Terraform also creates a lock file named .terraform.lock.hcl which specifies the exact provider versions used, so that you can control when you want to update the providers used for your project.
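Rather than pinning an exact version, you can use a version constraint and let the lock file record the exact version that init resolved; a sketch:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.92" # any 5.x release at or above 5.92
    }
  }
}
```

Running terraform init -upgrade re-resolves the constraint and updates .terraform.lock.hcl.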

Format

terraform fmt

The terraform fmt command formats Terraform configuration file contents so that it matches the canonical format and style.

Validate

terraform validate

The terraform validate command validates the configuration files in a directory. Validate runs checks that verify whether a configuration is syntactically valid and internally consistent, regardless of any provided variables or existing state.

It does not validate remote services, such as remote state or provider APIs.

Plan

terraform plan

The terraform plan command creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.

The plan command alone does not carry out the proposed changes. You can use this command to check whether the proposed changes match what you expect before you apply them, or share your changes with your team for broader review.

If Terraform detects that no changes are needed to resource instances or to root module output values, terraform plan will report that no actions need to be taken.
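The plan can also be saved to a file and applied later, which guarantees that apply performs exactly what was reviewed (tfplan is an arbitrary file name):

```shell
terraform plan -out=tfplan   # save the plan to a file
terraform show tfplan        # inspect the saved plan
terraform apply tfplan       # apply exactly what was planned, with no re-prompt
```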

Apply

terraform apply

To skip the confirmation prompt

terraform apply -auto-approve

The terraform apply command executes the actions proposed in a Terraform plan.

When you run terraform apply without passing a saved plan file, Terraform automatically creates a new execution plan as if you had run terraform plan, prompts you to approve that plan, and takes the indicated actions. You can use all of the planning modes and planning options to customize how Terraform will create the plan.


TL;DR: Apply changes by creating, updating, or deleting resources.

Destroy

terraform destroy

The terraform destroy command deprovisions all objects managed by a Terraform configuration.

Output

terraform output
terraform output instance_ip

The terraform output command extracts the value of an output variable from the state file.
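For scripting, the output command supports machine-friendly formats:

```shell
terraform output -raw instance_ip   # bare value, without quotes
terraform output -json              # all outputs as JSON
```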

State Commands

List

The terraform state list command lists resources within a terraform state.

terraform state list
  • Filtering using resource name
terraform state list aws_instance.bar

Show

The terraform state show command shows the attributes of a single resource in the terraform state.

terraform state show aws_instance.myec2

Remove

The terraform state rm command removes the binding to an existing remote object without first destroying it. The remote object continues to exist but is no longer managed by Terraform.

terraform state rm aws_instance.myec2

Move

The terraform state mv command changes bindings in Terraform state so that existing remote objects bind to new resource instances.

terraform state mv aws_instance.old_name aws_instance.new_name
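Since Terraform 1.1, the same rename can be expressed declaratively with a moved block in the configuration, so collaborators pick up the move automatically on their next plan:

```hcl
moved {
  from = aws_instance.old_name
  to   = aws_instance.new_name
}
```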

Credentials Setup

Basic - Parameters in the provider configuration (not recommended: hard-coded credentials can end up in version control)

provider "aws" {
  region = "ap-south-1"
  access_key = "my-access-key"
  secret_key = "my-secret-key"
}

Environment variables

provider "aws" {}
% export AWS_ACCESS_KEY_ID="anaccesskey"
% export AWS_SECRET_ACCESS_KEY="asecretkey"
% export AWS_REGION="us-west-2"
% terraform plan

Shared Configuration and Credentials Files

You can place or generate the credentials files under the default path, or under a custom path that you then reference in the provider block.

  • The default path for the credentials and config files is $HOME/.aws. No path needs to be specified if the files are in the default location.
provider "aws" {}
  • For a custom path, provide it explicitly.
provider "aws" {
  shared_config_files = ["/Users/tf_user/.aws/conf"]
  shared_credentials_files = ["/Users/tf_user/.aws/creds"]
  profile = "customprofile"
}

Container credentials

TODO

IAM Role

TODO

HashiCorp Vault

TODO

Code

Provider Setup

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "5.92.0"
    }
  }
}

Provider AWS Connection

provider "aws" {
  access_key = "ACCESS_KEY"
  secret_key = "SECRET_KEY"
  region = "AWS_REGION"
}

Terraform Resource Structure

resource "provider_resource_name" "internal_name_used_for_reference" {
  ...
  details
  ...  
}

resource "provider_resource_name" "another_internal_name" {
  ...
  details
  ...
  reference=provider_resource_name.internal_name_used_for_reference.id
}


Simple S3 Bucket

Create a simple S3 bucket without versioning

resource "aws_s3_bucket" "mik_test_149" {
  bucket = "mik-test-149"
  tags = {
    "env":"test"
  }
}
  • To delete an S3 bucket that still contains objects, add the following argument
force_destroy = true
  • Tags are optional
tags = {
  env = "test"
}

S3 Bucket With Versioning

Create a simple S3 bucket with versioning enabled

Example Code

resource "aws_s3_bucket" "mik_test_149" {
  bucket = "mik-test-149"
}

resource "aws_s3_bucket_versioning" "mik_test_149_version" {
  bucket = aws_s3_bucket.mik_test_149.id
  versioning_configuration {
    status = "Enabled"
  }
}

Simple EC2

Create a simple EC2 instance on the default VPC using an AMI ID

resource "aws_instance" "aws_simple_ec2_test" {
  ami = "ami-05c179eced2eb9b5b"
  instance_type = "t2.micro"
}

Simple EC2 With AMI Filtering

Create a simple EC2 instance on the default VPC, selecting the AMI with filters

Resources Used

  • aws_ami
  • aws_instance
Example Code
data "aws_ami" "latest_ubuntu" {
  most_recent = true
  owners      = ["aws-marketplace"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-*/ubuntu-*-24.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  filter {
    name   = "root-device-type"
    values = ["ebs"]
  }
}

resource "aws_instance" "ec2_ubuntu" {
  ami = data.aws_ami.latest_ubuntu.id
  instance_type = "t2.micro"
}

Key Pair

We can specify the public key directly or pass the public key file.

resource "aws_key_pair" "key_aws" {
  key_name   = "aws"
  public_key = "ssh-rsa xxxxxx [email protected]"
}
resource "aws_key_pair" "key_aws" {
  key_name   = "aws"
  public_key = file("/home/totoro/.ssh/aws.pub")
}

EC2 With Security Group & Key Pair

Create an EC2 instance with a security group that allows SSH, so users can connect to the instance.

Create the EC2 instance on the default VPC and connect to it.
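Once applied, connecting looks like this (assuming the key pair and Ubuntu AMI from the example code; the IP placeholder is whatever public IP the instance receives):

```shell
ssh -i ~/.ssh/aws ubuntu@<public_ip>
```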

Resources Used

  • aws_key_pair
  • aws_security_group
  • aws_vpc_security_group_ingress_rule
  • aws_vpc_security_group_egress_rule
  • aws_instance
Example Code
resource "aws_key_pair" "key_aws" {
  key_name   = "aws"
  public_key = file("/home/totoro/.ssh/aws.pub")
}

resource "aws_security_group" "sg_allow_ssh" {
  name = "sg_allow_ssh"

  tags = {
    Name = "sg_allow_ssh"
  }
}

resource "aws_vpc_security_group_ingress_rule" "sg_ingress_allow_ssh" {
  security_group_id = aws_security_group.sg_allow_ssh.id

  cidr_ipv4 = "0.0.0.0/0"
  from_port = 22
  ip_protocol = "tcp"
  to_port = 22
}

resource "aws_vpc_security_group_egress_rule" "sq_egress_allow_ephemeral_tcp" {
  security_group_id = aws_security_group.sg_allow_ssh.id

  cidr_ipv4 = "0.0.0.0/0"
  from_port = 1024
  ip_protocol = "tcp"
  to_port = 65535

}

resource "aws_vpc_security_group_egress_rule" "sq_egress_allow_ephemeral_udp" {
  security_group_id = aws_security_group.sg_allow_ssh.id

  cidr_ipv4 = "0.0.0.0/0"
  from_port = 1024
  ip_protocol = "udp"
  to_port = 65535

}

data "aws_ami" "latest_ubuntu" {
  most_recent = true
  owners      = ["aws-marketplace"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-*/ubuntu-*-24.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  filter {
    name   = "root-device-type"
    values = ["ebs"]
  }
}

resource "aws_instance" "ec2_ubuntu" {
  ami = data.aws_ami.latest_ubuntu.id
  instance_type = "t2.micro"
  key_name = aws_key_pair.key_aws.key_name
  vpc_security_group_ids = [aws_security_group.sg_allow_ssh.id]
}

EC2 With VPC, Security Group & Key Pair, using for_each to allow multiple ports and protocols in the SG

Create VPC with Subnet, Security Group, Route Table & Internet Gateway

Create EC2 instance on your custom VPC

Resources Used

  • aws_key_pair
  • aws_vpc
  • aws_internet_gateway
  • aws_subnet
  • aws_route_table
  • aws_route_table_association
  • aws_security_group
  • aws_vpc_security_group_ingress_rule
  • aws_vpc_security_group_egress_rule
  • aws_instance
  • output
Example Code
resource "aws_key_pair" "key_aws" {
  key_name   = "aws"
  public_key = file("/home/totoro/.ssh/aws.pub")
}

resource "aws_vpc" "main_vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "main_vpc"
  }
}

resource "aws_subnet" "main_vpc_subnet_pub" {
  vpc_id     = aws_vpc.main_vpc.id
  cidr_block = "10.0.1.0/24"

  tags = {
    Name = "main_vpc_subnet_pub"
  }
}

resource "aws_subnet" "main_vpc_subnet_pvt" {
  vpc_id     = aws_vpc.main_vpc.id
  cidr_block = "10.0.2.0/24"

  tags = {
    Name = "main_vpc_subnet_pvt"
  }
}

resource "aws_internet_gateway" "main_vpc_gw" {
  vpc_id = aws_vpc.main_vpc.id

  tags = {
    Name = "main_vpc_gw"
  }
}

resource "aws_route_table" "main_vpc_subnet_pub_rt" {
  vpc_id = aws_vpc.main_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main_vpc_gw.id
  }

  tags = {
    Name = "main_vpc_subnet_pub_rt"
  }
}

resource "aws_route_table" "main_vpc_subnet_pvt_rt" {
  vpc_id = aws_vpc.main_vpc.id

  route {
    cidr_block = "10.0.0.0/16"
    gateway_id = "local"
  }

  tags = {
    Name = "main_vpc_subnet_pvt_rt"
  }
}

resource "aws_route_table_association" "main_vpc_subnet_pvt_rt_association" {
  subnet_id = aws_subnet.main_vpc_subnet_pvt.id
  route_table_id = aws_route_table.main_vpc_subnet_pvt_rt.id
}

resource "aws_route_table_association" "main_vpc_subnet_pub_rt_association" {
  subnet_id = aws_subnet.main_vpc_subnet_pub.id
  route_table_id = aws_route_table.main_vpc_subnet_pub_rt.id
}

resource "aws_security_group" "sg_allow_ssh_http_tls" {
  name = "allow_ssh_https_tls"
  vpc_id = aws_vpc.main_vpc.id

  tags = {
    Name = "allow_ssh_https_tls"
  }
}

resource "aws_vpc_security_group_ingress_rule" "sg_ingress_allow_ssh_http_tls" {
  security_group_id = aws_security_group.sg_allow_ssh_http_tls.id

  for_each = toset(["22", "80", "443"])
  from_port = each.value
  to_port = each.value

  cidr_ipv4 = "0.0.0.0/0"
  ip_protocol = "tcp"
}

resource "aws_vpc_security_group_egress_rule" "sq_egress_allow_ephemeral" {
  security_group_id = aws_security_group.sg_allow_ssh_http_tls.id

  for_each = toset(["tcp","udp"])
  ip_protocol = each.value

  cidr_ipv4 = "0.0.0.0/0"
  from_port = 1024
  to_port = 65535
}

resource "aws_vpc_security_group_egress_rule" "sq_egress_allow_http_tls" {
  security_group_id = aws_security_group.sg_allow_ssh_http_tls.id

  for_each = toset(["80","443"])
  from_port = each.value
  to_port = each.value

  cidr_ipv4 = "0.0.0.0/0"
  ip_protocol = "tcp"
}

data "aws_ami" "latest_ubuntu" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-*/ubuntu-*-24.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  filter {
    name   = "root-device-type"
    values = ["ebs"]
  }
}

resource "aws_instance" "ec2_ubuntu" {
  ami = data.aws_ami.latest_ubuntu.id
  associate_public_ip_address = true
  instance_type = "t2.micro"
  key_name = aws_key_pair.key_aws.key_name
  subnet_id = aws_subnet.main_vpc_subnet_pub.id
  vpc_security_group_ids = [aws_security_group.sg_allow_ssh_http_tls.id]
  user_data = file("${path.module}/user_data.sh")
}

output "ec2_ubuntu_ip" {
  value = aws_instance.ec2_ubuntu.public_ip
}

EC2 With VPC, Security Group, Key Pair, Internet Gateway, EIP & NAT Gateway

Create a VPC with public and private subnets, a security group, route tables, an Internet Gateway, and a NAT Gateway, with one EC2 instance in each subnet (public and private).

Users can reach the public EC2 instance over SSH via its public IP. To reach the private EC2 instance, connect through the public (jump) instance.
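With OpenSSH, the hop through the jump host can be done in one command via ProxyJump (assuming the key pair from the example; the IP placeholders come from the outputs below):

```shell
# connect to the private instance through the public jump host
ssh -i ~/.ssh/aws -J ubuntu@<jump_host_public_ip> ubuntu@<private_host_ip>
```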

Resources Used

  • aws_key_pair
  • aws_vpc
  • aws_eip
  • aws_nat_gateway
  • aws_internet_gateway
  • aws_subnet
  • aws_route_table
  • aws_route_table_association
  • aws_security_group
  • aws_vpc_security_group_ingress_rule
  • aws_vpc_security_group_egress_rule
  • aws_instance
  • output
Example Code
# create key
resource "aws_key_pair" "key_aws" {
  key_name   = "aws"
  public_key = file("/home/totoro/.ssh/aws.pub")
}

# VPC
resource "aws_vpc" "main_vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "main_vpc"
  }
}

# Public Subnet
resource "aws_subnet" "main_vpc_subnet_pub" {
  vpc_id     = aws_vpc.main_vpc.id
  cidr_block = "10.0.1.0/24"

  tags = {
    Name = "main_vpc_subnet_pub"
  }
}

# Private Subnet
resource "aws_subnet" "main_vpc_subnet_pvt" {
  vpc_id     = aws_vpc.main_vpc.id
  cidr_block = "10.0.2.0/24"

  tags = {
    Name = "main_vpc_subnet_pvt"
  }
}

# Internet Gateway
resource "aws_internet_gateway" "main_vpc_gw" {
  vpc_id = aws_vpc.main_vpc.id

  tags = {
    Name = "main_vpc_gw"
  }
}

# NAT Gateway EIP
resource "aws_eip" "nat_gateway_eip" {
  depends_on = [aws_internet_gateway.main_vpc_gw]
}

# NAT Gateway
resource "aws_nat_gateway" "main_vpc_nat_gw" {
  allocation_id = aws_eip.nat_gateway_eip.id
  subnet_id     = aws_subnet.main_vpc_subnet_pub.id

  tags = {
    Name = "main_vpc_nat_gw"
  }

  depends_on = [aws_internet_gateway.main_vpc_gw]
}

# Routing all traffic to Internet Gateway - Public route table
resource "aws_route_table" "main_vpc_subnet_pub_rt" {
  vpc_id = aws_vpc.main_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main_vpc_gw.id
  }

  tags = {
    Name = "main_vpc_subnet_pub_rt"
  }
}

# Routing internal traffic & NAT - Private route table
resource "aws_route_table" "main_vpc_subnet_pvt_rt" {
  vpc_id = aws_vpc.main_vpc.id

  route {
    cidr_block = "10.0.0.0/16"
    gateway_id = "local"
  }

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_nat_gateway.main_vpc_nat_gw.id
  }

  tags = {
    Name = "main_vpc_subnet_pvt_rt"
  }
}

# Associating private route table with private subnet
resource "aws_route_table_association" "main_vpc_subnet_pvt_rt_association" {
  subnet_id      = aws_subnet.main_vpc_subnet_pvt.id
  route_table_id = aws_route_table.main_vpc_subnet_pvt_rt.id
}

# Associating public route table with public subnet
resource "aws_route_table_association" "main_vpc_subnet_pub_rt_association" {
  subnet_id      = aws_subnet.main_vpc_subnet_pub.id
  route_table_id = aws_route_table.main_vpc_subnet_pub_rt.id
}

# Security group
resource "aws_security_group" "sg_allow_ssh_http_tls" {
  name   = "allow_ssh_https_tls"
  vpc_id = aws_vpc.main_vpc.id

  tags = {
    Name = "allow_ssh_https_tls"
  }
}

# Adding ingress rule to security group
resource "aws_vpc_security_group_ingress_rule" "sg_ingress_allow_ssh_http_tls" {
  security_group_id = aws_security_group.sg_allow_ssh_http_tls.id

  for_each  = toset(["22", "80", "443"])
  from_port = each.value
  to_port   = each.value

  cidr_ipv4   = "0.0.0.0/0"
  ip_protocol = "tcp"
}

# Adding egress rule to security group
resource "aws_vpc_security_group_egress_rule" "sq_egress_allow_ephemeral" {
  security_group_id = aws_security_group.sg_allow_ssh_http_tls.id

  for_each    = toset(["tcp", "udp"])
  ip_protocol = each.value

  cidr_ipv4 = "0.0.0.0/0"
  from_port = 1024
  to_port   = 65535
}

# Adding egress rule to security group
resource "aws_vpc_security_group_egress_rule" "sq_egress_allow_ssh_http_tls" {
  security_group_id = aws_security_group.sg_allow_ssh_http_tls.id

  for_each  = toset(["22", "80", "443"])
  from_port = each.value
  to_port   = each.value

  cidr_ipv4   = "0.0.0.0/0"
  ip_protocol = "tcp"
}

# Filtering AMI using name
data "aws_ami" "latest_ubuntu" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-*/ubuntu-*-24.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  filter {
    name   = "root-device-type"
    values = ["ebs"]
  }
}

# Private host 
resource "aws_instance" "ec2_pvt_host" {
  ami                         = data.aws_ami.latest_ubuntu.id
  associate_public_ip_address = false
  instance_type               = "t2.micro"
  key_name                    = aws_key_pair.key_aws.key_name
  subnet_id                   = aws_subnet.main_vpc_subnet_pvt.id
  vpc_security_group_ids      = [aws_security_group.sg_allow_ssh_http_tls.id]
}

# Jump Host
resource "aws_instance" "ec2_jump_host" {
  ami                         = data.aws_ami.latest_ubuntu.id
  associate_public_ip_address = true
  instance_type               = "t2.micro"
  key_name                    = aws_key_pair.key_aws.key_name
  subnet_id                   = aws_subnet.main_vpc_subnet_pub.id
  vpc_security_group_ids      = [aws_security_group.sg_allow_ssh_http_tls.id]
  user_data                   = file("${path.module}/user_data.sh")
}

output "ec2_jump_host_ip" {
  value = aws_instance.ec2_jump_host.public_ip
}

output "ec2_pvt_ip" {
  value = aws_instance.ec2_pvt_host.private_ip
}

Advanced Topics

Terraform State File In S3

terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}

The S3 backend stores state data in an S3 object at the path set by the key parameter in the S3 bucket indicated by the bucket parameter. Using the example shown above, the state would be stored at the path path/to/my/key in the bucket mybucket.
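Backend arguments can also be supplied at init time (partial configuration), which keeps bucket names out of the committed code. A sketch, assuming a backend.hcl file; the backend "s3" block in the configuration can then be left empty:

```hcl
# backend.hcl -- passed via: terraform init -backend-config=backend.hcl
bucket = "mybucket"
key    = "path/to/my/key"
region = "us-east-1"
```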


Lock Terraform State File

State locking prevents multiple users from making changes to the infrastructure at the same time.

Prevents race conditions and state corruption.

With S3 backend + DynamoDB table, Terraform locks the state when one user runs terraform apply.

Example Code
  • Create S3 bucket that will store state file
aws s3api create-bucket \
  --bucket my-terraform-state-bucket-soumik \
  --region ap-south-1 \
  --create-bucket-configuration LocationConstraint=ap-south-1
  • Create a DynamoDB table for state locking
aws dynamodb create-table \
  --table-name terraform-locks-soumik \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
  • Configure backend in terraform
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket-soumik"
    key            = "dev/terraform.tfstate" # folder-like path
    region         = "ap-south-1"
    dynamodb_table = "terraform-locks-soumik"  # for state locking
    encrypt        = true               # enable SSE encryption
  }
}
  • Initialize terraform
terraform init

Terraform will ask: Do you want to copy existing state to the new backend? Enter "yes" to proceed.


Import Existing Resources

resource "aws_vpc" "main_vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "main_vpc"
  }
}

Run terraform import to attach an existing instance to the resource configuration:

$ terraform import <resource_type>.<name> <resource_id>

Example

$ terraform import aws_vpc.main_vpc vpc-0a50dc6b331c407ee
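Terraform 1.5 added a declarative import block, which lets terraform plan perform the import as part of a normal run instead of a one-off CLI command:

```hcl
import {
  to = aws_vpc.main_vpc
  id = "vpc-0a50dc6b331c407ee"
}
```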

Terraform Destroy Specific Resources

$ terraform destroy -target=<resource_address>

OR

$ terraform apply -destroy -target=<resource_address>

Example

terraform destroy -target aws_instance.ec2_ubuntu

Terraform Taint, Untaint and Replace

TODO
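A minimal sketch of the relevant commands (the resource address is illustrative):

```shell
terraform taint aws_instance.ec2_ubuntu     # mark for recreation on next apply (deprecated)
terraform untaint aws_instance.ec2_ubuntu   # clear the taint mark
terraform apply -replace="aws_instance.ec2_ubuntu"   # preferred since Terraform v0.15.2
```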

Terraformer

TODO

Terragrunt

TODO

About

🚀 Terraform Tutorial: A beginner-friendly guide to Infrastructure as Code (IaC) using Terraform. This repository covers the basics, best practices, and hands-on examples to help you deploy and manage cloud infrastructure efficiently.
