HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share.
The terraform block allows you to configure Terraform behavior, including the Terraform version, backend, integration with HCP Terraform, and required providers.
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "5.92.0"
}
}
}
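The version argument also accepts constraint operators instead of an exact pin; for example, a pessimistic constraint allows newer patch/minor releases within the same major version (the constraint shown here is just an illustration):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.92" # any 5.x release >= 5.92.0, but not 6.0
    }
  }
}
```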
The provider block defines which cloud or service Terraform interacts with.
provider "aws" {
region = "us-east-1"
}
The resource block is the most important part of Terraform; it is used to create, modify, and delete infrastructure components.
Every resource type is associated with a specific provider (e.g., AWS, Azure, GCP).
resource "aws_instance" "example" {
ami = "ami-123456"
instance_type = var.instance_type
}
The data block is used to fetch or read information from external sources without managing them.
It is commonly used to retrieve existing infrastructure details (e.g., getting an AWS AMI ID).
data "aws_ami" "latest_ubuntu" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-20.04-amd64-server-*"]
}
}
resource "aws_instance" "example" {
ami = data.aws_ami.latest_ubuntu.id
instance_type = "t2.micro"
}
The variable block defines dynamic values that can be passed into Terraform configurations, making them reusable and flexible.
variable "instance_type" {
description = "Type of EC2 instance"
type = string
default = "t2.micro"
}
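A value for this variable can be supplied at plan/apply time in several standard ways; the t3.micro value below is just an illustration:

```hcl
# terraform.tfvars (loaded automatically from the working directory)
instance_type = "t3.micro"
```

Alternatively, pass it on the command line with `terraform apply -var="instance_type=t3.micro"`, or via the environment variable `TF_VAR_instance_type`. If no value is supplied, the declared default is used.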
The output block is used to display values after terraform apply.
output "instance_ip" {
value = aws_instance.example.public_ip
}
The module block is used to organize and reuse Terraform code.
module "vpc" {
source = "./modules/vpc"
vpc_cidr = "10.0.0.0/16"
}
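For the module call above to work, the module directory must declare a matching input variable. A minimal sketch of what ./modules/vpc could contain (file layout and names are illustrative, not prescribed):

```hcl
# ./modules/vpc/variables.tf
variable "vpc_cidr" {
  type = string
}

# ./modules/vpc/main.tf
resource "aws_vpc" "this" {
  cidr_block = var.vpc_cidr
}

# ./modules/vpc/outputs.tf
output "vpc_id" {
  value = aws_vpc.this.id
}
```

The caller can then reference the module's outputs as module.vpc.vpc_id.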
terraform init
When you create a new configuration (or check out an existing configuration from version control) you need to initialize the directory with terraform init.
Initializing a configuration directory downloads and installs the providers defined in the configuration.
Terraform downloads the required provider and installs it in a hidden subdirectory of your current working directory, named .terraform.
The terraform init command prints out which version of the provider was installed.
Terraform also creates a lock file named .terraform.lock.hcl which specifies the exact provider versions used, so that you can control when you want to update the providers used for your project.
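An entry in .terraform.lock.hcl looks roughly like this (the version and recorded checksums will vary per project):

```hcl
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.92.0"
  constraints = "5.92.0"
  hashes = [
    # checksums recorded by terraform init, omitted here
  ]
}
```

Commit this file to version control so every collaborator installs the same provider builds.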
terraform fmt
The terraform fmt command formats Terraform configuration file contents so that they match the canonical format and style.
terraform validate
The terraform validate command validates the configuration files in a directory. Validate runs checks that verify whether a configuration is syntactically valid and internally consistent, regardless of any provided variables or existing state.
It does not validate remote services, such as remote state or provider APIs.
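For a valid configuration, the command exits successfully with a short confirmation:

```shell
$ terraform validate
Success! The configuration is valid.
```

If there are syntax or consistency errors, it instead prints each error with the file name and line number.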
terraform plan
The terraform plan command creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.
The plan command alone does not actually carry out the proposed changes. You can use this command to check whether the proposed changes match what you expect before you apply them, or to share the changes with your team for broader review.
If Terraform detects that no changes are needed to resource instances or to root module output values, terraform plan will report that no actions need to be taken.
terraform apply
terraform apply -auto-approve
The terraform apply command executes the actions proposed in a Terraform plan.
When you run terraform apply without passing a saved plan file, Terraform automatically creates a new execution plan as if you had run terraform plan, prompts you to approve that plan, and takes the indicated actions. You can use all of the planning modes and planning options to customize how Terraform will create the plan.
TL;DR: Apply changes by creating, updating, or deleting resources.
terraform destroy
The terraform destroy command deprovisions all objects managed by a Terraform configuration.
terraform output
terraform output instance_ip
The terraform output command extracts the value of an output variable from the state file.
The terraform state list command lists resources within the Terraform state.
terraform state list
- Filtering using resource name
terraform state list aws_instance.bar
The terraform state show command shows the attributes of a single resource in the Terraform state.
terraform state show aws_instance.myec2
The terraform state rm command removes the binding to an existing remote object without first destroying it. The remote object continues to exist but is no longer managed by Terraform.
terraform state rm aws_instance.myec2
The terraform state mv command changes bindings in Terraform state so that existing remote objects bind to new resource instances.
terraform state mv aws_instance.old_name aws_instance.new_name
provider "aws" {
region = "ap-south-1"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
provider "aws" {}
% export AWS_ACCESS_KEY_ID="anaccesskey"
% export AWS_SECRET_ACCESS_KEY="asecretkey"
% export AWS_REGION="us-west-2"
% terraform plan
We can place or generate the credential files under the default path (or any custom path) and provide the path in the terraform provider block.
- The default path for the credential and config files is $HOME/.aws. There is no need to specify a path if the files are present under the default path.
provider "aws" {}
- For a custom path, we need to provide the path explicitly.
provider "aws" {
shared_config_files = ["/Users/tf_user/.aws/conf"]
shared_credentials_files = ["/Users/tf_user/.aws/creds"]
profile = "customprofile"
}
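The credential and config files use the standard AWS INI format. A sketch using the custom paths and profile from the block above (key values and region are placeholders):

```ini
# /Users/tf_user/.aws/creds
[customprofile]
aws_access_key_id     = AKIAEXAMPLEPLACEHOLDER
aws_secret_access_key = example-secret-placeholder

# /Users/tf_user/.aws/conf
[profile customprofile]
region = us-west-2
```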
TODO
TODO
TODO
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "5.92.0"
}
}
}
provider "aws" {
access_key = "ACCESS_KEY"
secret_key = "SECRET_KEY"
region = "AWS_REGION"
}
resource "provider_resource_name" "internal_name_used_for_reference" {
...
details
...
}
resource "provider_resource_name" "another_internal_name" {
...
details
...
reference = provider_resource_name.internal_name_used_for_reference.id
}
Create a simple S3 bucket without versioning
resource "aws_s3_bucket" "mik_test_149" {
bucket = "mik-test-149"
tags = {
env = "test"
}
}
- To delete an S3 bucket that still contains data, set
force_destroy = true
- Tags are optional
tags = {
env = "test"
}
Create a simple S3 bucket with versioning enabled
Example Code
resource "aws_s3_bucket" "mik_test_149" {
bucket = "mik-test-149"
}
resource "aws_s3_bucket_versioning" "mik_test_149_version" {
bucket = aws_s3_bucket.mik_test_149.id
versioning_configuration {
status = "Enabled"
}
}
Create a simple EC2 instance on default VPC using AMI ID
resource "aws_instance" "aws_simple_ec2_test" {
ami = "ami-05c179eced2eb9b5b"
instance_type = "t2.micro"
}
Create a simple EC2 instance on default VPC, filtering AMI using tags
- aws_ami
- aws_instance
Example Code
data "aws_ami" "latest_ubuntu" {
most_recent = true
owners = ["aws-marketplace"]
filter {
name = "name"
values = ["ubuntu/images/hvm-*/ubuntu-*-24.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "root-device-type"
values = ["ebs"]
}
}
resource "aws_instance" "ec2_ubuntu" {
ami = data.aws_ami.latest_ubuntu.id
instance_type = "t2.micro"
}
We can specify the public key directly or pass the public key file.
resource "aws_key_pair" "key_aws" {
key_name = "aws"
public_key = "ssh-rsa xxxxxx [email protected]"
}
resource "aws_key_pair" "key_aws" {
key_name = "aws"
public_key = file("/home/totoro/.ssh/aws.pub")
}
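The key pair file referenced above can be generated locally with ssh-keygen (the path, key type, and empty passphrase here are just one reasonable choice):

```shell
# creates /home/totoro/.ssh/aws (private key) and /home/totoro/.ssh/aws.pub (public key)
ssh-keygen -t rsa -b 4096 -f /home/totoro/.ssh/aws -N ""
```

Only the .pub file is passed to Terraform; the private key stays on your machine and is used when connecting with ssh -i.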
Create EC2 with an SG allowing SSH, so that users can connect to the EC2 instance.
Create EC2 Instance on default VPC and connect.
- aws_key_pair
- aws_security_group
- aws_vpc_security_group_ingress_rule
- aws_vpc_security_group_egress_rule
- aws_instance
Example Code
resource "aws_key_pair" "key_aws" {
key_name = "aws"
public_key = file("/home/totoro/.ssh/aws.pub")
}
resource "aws_security_group" "sg_allow_ssh" {
name = "sg_allow_ssh"
tags = {
Name = "sg_allow_ssh"
}
}
resource "aws_vpc_security_group_ingress_rule" "sg_ingress_allow_ssh" {
security_group_id = aws_security_group.sg_allow_ssh.id
cidr_ipv4 = "0.0.0.0/0"
from_port = 22
ip_protocol = "tcp"
to_port = 22
}
resource "aws_vpc_security_group_egress_rule" "sg_egress_allow_ephemeral_tcp" {
security_group_id = aws_security_group.sg_allow_ssh.id
cidr_ipv4 = "0.0.0.0/0"
from_port = 1024
ip_protocol = "tcp"
to_port = 65535
}
resource "aws_vpc_security_group_egress_rule" "sg_egress_allow_ephemeral_udp" {
security_group_id = aws_security_group.sg_allow_ssh.id
cidr_ipv4 = "0.0.0.0/0"
from_port = 1024
ip_protocol = "udp"
to_port = 65535
}
data "aws_ami" "latest_ubuntu" {
most_recent = true
owners = ["aws-marketplace"]
filter {
name = "name"
values = ["ubuntu/images/hvm-*/ubuntu-*-24.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "root-device-type"
values = ["ebs"]
}
}
resource "aws_instance" "ec2_ubuntu" {
ami = data.aws_ami.latest_ubuntu.id
instance_type = "t2.micro"
key_name = aws_key_pair.key_aws.key_name
vpc_security_group_ids = [aws_security_group.sg_allow_ssh.id]
}
EC2 With VPC, Security Group & Key Pair. Implemented a for_each loop to allow multiple ports and protocols for the SG.
Create VPC with Subnet, Security Group, Route Table & Internet Gateway
Create EC2 instance on your custom VPC
- aws_key_pair
- aws_vpc
- aws_internet_gateway
- aws_subnet
- aws_route_table
- aws_route_table_association
- aws_security_group
- aws_vpc_security_group_ingress_rule
- aws_vpc_security_group_egress_rule
- aws_instance
- output
Example Code
resource "aws_key_pair" "key_aws" {
key_name = "aws"
public_key = file("/home/totoro/.ssh/aws.pub")
}
resource "aws_vpc" "main_vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "main_vpc"
}
}
resource "aws_subnet" "main_vpc_subnet_pub" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.1.0/24"
tags = {
Name = "main_vpc_subnet_pub"
}
}
resource "aws_subnet" "main_vpc_subnet_pvt" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.2.0/24"
tags = {
Name = "main_vpc_subnet_pvt"
}
}
resource "aws_internet_gateway" "main_vpc_gw" {
vpc_id = aws_vpc.main_vpc.id
tags = {
Name = "main_vpc_gw"
}
}
resource "aws_route_table" "main_vpc_subnet_pub_rt" {
vpc_id = aws_vpc.main_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main_vpc_gw.id
}
tags = {
Name = "main_vpc_subnet_pub_rt"
}
}
resource "aws_route_table" "main_vpc_subnet_pvt_rt" {
vpc_id = aws_vpc.main_vpc.id
route {
cidr_block = "10.0.0.0/16"
gateway_id = "local"
}
tags = {
Name = "main_vpc_subnet_pvt_rt"
}
}
resource "aws_route_table_association" "main_vpc_subnet_pvt_rt_association" {
subnet_id = aws_subnet.main_vpc_subnet_pvt.id
route_table_id = aws_route_table.main_vpc_subnet_pvt_rt.id
}
resource "aws_route_table_association" "main_vpc_subnet_pub_rt_association" {
subnet_id = aws_subnet.main_vpc_subnet_pub.id
route_table_id = aws_route_table.main_vpc_subnet_pub_rt.id
}
resource "aws_security_group" "sg_allow_ssh_http_tls" {
name = "allow_ssh_https_tls"
vpc_id = aws_vpc.main_vpc.id
tags = {
Name = "allow_ssh_https_tls"
}
}
resource "aws_vpc_security_group_ingress_rule" "sg_ingress_allow_ssh_http_tls" {
security_group_id = aws_security_group.sg_allow_ssh_http_tls.id
for_each = toset(["22", "80", "443"])
from_port = each.value
to_port = each.value
cidr_ipv4 = "0.0.0.0/0"
ip_protocol = "tcp"
}
resource "aws_vpc_security_group_egress_rule" "sg_egress_allow_ephemeral" {
security_group_id = aws_security_group.sg_allow_ssh_http_tls.id
for_each = toset(["tcp","udp"])
ip_protocol = each.value
cidr_ipv4 = "0.0.0.0/0"
from_port = 1024
to_port = 65535
}
resource "aws_vpc_security_group_egress_rule" "sg_egress_allow_http_tls" {
security_group_id = aws_security_group.sg_allow_ssh_http_tls.id
for_each = toset(["80","443"])
from_port = each.value
to_port = each.value
cidr_ipv4 = "0.0.0.0/0"
ip_protocol = "tcp"
}
data "aws_ami" "latest_ubuntu" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["ubuntu/images/hvm-*/ubuntu-*-24.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "root-device-type"
values = ["ebs"]
}
}
resource "aws_instance" "ec2_ubuntu" {
ami = data.aws_ami.latest_ubuntu.id
associate_public_ip_address = true
instance_type = "t2.micro"
key_name = aws_key_pair.key_aws.key_name
subnet_id = aws_subnet.main_vpc_subnet_pub.id
vpc_security_group_ids = [aws_security_group.sg_allow_ssh_http_tls.id]
user_data = file("${path.module}/user_data.sh")
}
output "ec2_ubuntu_ip" {
value = aws_instance.ec2_ubuntu.public_ip
}
Create VPC with public and private Subnets, Security Group, Route Tables, Internet Gateway, and NAT Gateway, with 1 EC2 instance in each subnet (public & private).
Create 1 EC2 instance in each subnet; users can access the public EC2 instance via SSH using its public IP. To access the private EC2 instance, we must go through the public (jump) EC2 instance.
- aws_key_pair
- aws_vpc
- aws_eip
- aws_nat_gateway
- aws_internet_gateway
- aws_subnet
- aws_route_table
- aws_route_table_association
- aws_security_group
- aws_vpc_security_group_ingress_rule
- aws_vpc_security_group_egress_rule
- aws_instance
- output
Example Code
# create key
resource "aws_key_pair" "key_aws" {
key_name = "aws"
public_key = file("/home/totoro/.ssh/aws.pub")
}
# VPC
resource "aws_vpc" "main_vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "main_vpc"
}
}
# Public Subnet
resource "aws_subnet" "main_vpc_subnet_pub" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.1.0/24"
tags = {
Name = "main_vpc_subnet_pub"
}
}
# Private Subnet
resource "aws_subnet" "main_vpc_subnet_pvt" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.2.0/24"
tags = {
Name = "main_vpc_subnet_pvt"
}
}
# Internet Gateway
resource "aws_internet_gateway" "main_vpc_gw" {
vpc_id = aws_vpc.main_vpc.id
tags = {
Name = "main_vpc_gw"
}
}
# NAT Gateway EIP
resource "aws_eip" "nat_gateway_eip" {
domain = "vpc" # allocate the EIP in the VPC (AWS provider v5 argument)
depends_on = [aws_internet_gateway.main_vpc_gw]
}
# NAT Gateway
resource "aws_nat_gateway" "main_vpc_nat_gw" {
allocation_id = aws_eip.nat_gateway_eip.id
subnet_id = aws_subnet.main_vpc_subnet_pub.id
tags = {
Name = "main_vpc_nat_gw"
}
depends_on = [aws_internet_gateway.main_vpc_gw]
}
# Routing all traffic to Internet Gateway - Public route table
resource "aws_route_table" "main_vpc_subnet_pub_rt" {
vpc_id = aws_vpc.main_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main_vpc_gw.id
}
tags = {
Name = "main_vpc_subnet_pub_rt"
}
}
# Routing internal traffic & NAT - Private route table
resource "aws_route_table" "main_vpc_subnet_pvt_rt" {
vpc_id = aws_vpc.main_vpc.id
route {
cidr_block = "10.0.0.0/16"
gateway_id = "local"
}
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_nat_gateway.main_vpc_nat_gw.id
}
tags = {
Name = "main_vpc_subnet_pvt_rt"
}
}
# Associating private route table with private subnet
resource "aws_route_table_association" "main_vpc_subnet_pvt_rt_association" {
subnet_id = aws_subnet.main_vpc_subnet_pvt.id
route_table_id = aws_route_table.main_vpc_subnet_pvt_rt.id
}
# Associating public route table with public subnet
resource "aws_route_table_association" "main_vpc_subnet_pub_rt_association" {
subnet_id = aws_subnet.main_vpc_subnet_pub.id
route_table_id = aws_route_table.main_vpc_subnet_pub_rt.id
}
# Security group
resource "aws_security_group" "sg_allow_ssh_http_tls" {
name = "allow_ssh_https_tls"
vpc_id = aws_vpc.main_vpc.id
tags = {
Name = "allow_ssh_https_tls"
}
}
# Adding ingress rule to security group
resource "aws_vpc_security_group_ingress_rule" "sg_ingress_allow_ssh_http_tls" {
security_group_id = aws_security_group.sg_allow_ssh_http_tls.id
for_each = toset(["22", "80", "443"])
from_port = each.value
to_port = each.value
cidr_ipv4 = "0.0.0.0/0"
ip_protocol = "tcp"
}
# Adding egress rule to security group
resource "aws_vpc_security_group_egress_rule" "sg_egress_allow_ephemeral" {
security_group_id = aws_security_group.sg_allow_ssh_http_tls.id
for_each = toset(["tcp", "udp"])
ip_protocol = each.value
cidr_ipv4 = "0.0.0.0/0"
from_port = 1024
to_port = 65535
}
# Adding egress rule to security group
resource "aws_vpc_security_group_egress_rule" "sg_egress_allow_ssh_http_tls" {
security_group_id = aws_security_group.sg_allow_ssh_http_tls.id
for_each = toset(["22", "80", "443"])
from_port = each.value
to_port = each.value
cidr_ipv4 = "0.0.0.0/0"
ip_protocol = "tcp"
}
# Filtering AMI using name
data "aws_ami" "latest_ubuntu" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["ubuntu/images/hvm-*/ubuntu-*-24.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "root-device-type"
values = ["ebs"]
}
}
# Private host
resource "aws_instance" "ec2_pvt_host" {
ami = data.aws_ami.latest_ubuntu.id
associate_public_ip_address = false
instance_type = "t2.micro"
key_name = aws_key_pair.key_aws.key_name
subnet_id = aws_subnet.main_vpc_subnet_pvt.id
vpc_security_group_ids = [aws_security_group.sg_allow_ssh_http_tls.id]
}
# Jump Host
resource "aws_instance" "ec2_jump_host" {
ami = data.aws_ami.latest_ubuntu.id
associate_public_ip_address = true
instance_type = "t2.micro"
key_name = aws_key_pair.key_aws.key_name
subnet_id = aws_subnet.main_vpc_subnet_pub.id
vpc_security_group_ids = [aws_security_group.sg_allow_ssh_http_tls.id]
user_data = file("${path.module}/user_data.sh")
}
output "ec2_jump_host_ip" {
value = aws_instance.ec2_jump_host.public_ip
}
output "ec2_pvt_ip" {
value = aws_instance.ec2_pvt_host.private_ip
}
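Once applied, the private instance can be reached through the jump host using SSH's ProxyJump option. The IPs come from the two outputs above, and ubuntu is the default login user on Ubuntu AMIs; the key path matches the key pair defined earlier:

```shell
# replace the placeholders with the ec2_jump_host_ip and ec2_pvt_ip output values
ssh -i ~/.ssh/aws -J ubuntu@<jump_host_public_ip> ubuntu@<private_instance_ip>
```

This avoids copying the private key onto the jump host; the connection is tunneled through it instead.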
terraform {
backend "s3" {
bucket = "mybucket"
key = "path/to/my/key"
region = "us-east-1"
}
}
The S3 backend stores state data in an S3 object at the path set by the key parameter in the S3 bucket indicated by the bucket parameter. Using the example shown above, the state would be stored at the path path/to/my/key in the bucket mybucket.
State locking prevents multiple users from making changes to infra at the same time.
Prevents race conditions and state corruption.
With S3 backend + DynamoDB table, Terraform locks the state when one user runs terraform apply.
Example Code
- Create S3 bucket that will store state file
aws s3api create-bucket \
--bucket my-terraform-state-bucket-soumik \
--region ap-south-1 \
--create-bucket-configuration LocationConstraint=ap-south-1
- Create a DynamoDB table for state locking
aws dynamodb create-table \
--table-name terraform-locks-soumik \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--billing-mode PAY_PER_REQUEST
- Configure backend in terraform
terraform {
backend "s3" {
bucket = "my-terraform-state-bucket-soumik"
key = "dev/terraform.tfstate" # folder-like path
region = "ap-south-1"
dynamodb_table = "terraform-locks-soumik" # for state locking
encrypt = true # enable SSE encryption
}
}
- Initialize terraform
terraform init
Terraform will ask: Do you want to copy existing state to the new backend?
Type yes to proceed.
resource "aws_vpc" "main_vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "main_vpc"
}
}
Run terraform import to attach an existing instance to the resource configuration:
$ terraform import resource.name instance-id
Example
$ terraform import aws_vpc.main_vpc vpc-0a50dc6b331c407ee
$ terraform destroy -target
OR
$ terraform apply -destroy
Example
terraform destroy -target aws_instance.ec2_ubuntu
TODO
TODO
TODO