
How to Create a New Terraform (IaC) File and Deploy an Application on AWS EC2

This guide walks you through creating a Terraform configuration that deploys an application on an Amazon EC2 instance, including an IAM role, a security group, a VPC with two subnets, route tables, an Internet Gateway, and a VPC endpoint for the private subnet. By the end, you will have an EC2 instance running in a secure VPC, with access controlled via IAM and security groups.

Prerequisites

  1. Terraform installed on your local machine.
  2. AWS CLI installed and configured with your credentials.
  3. An AWS account with necessary permissions.

Step 1: Set up your Terraform Directory

Create a new directory for your Terraform configuration files.

```bash
mkdir my-terraform-ec2
cd my-terraform-ec2
```

Inside this directory, you will create a Terraform configuration file (main.tf).

Step 2: Create and Configure a VPC with Two Subnets

  1. VPC: Virtual Private Cloud is a virtual network dedicated to your AWS account.

  2. Subnets: We will create two subnets, one public and one private.

  3. Internet Gateway: The public subnet will be connected to the Internet via an Internet Gateway.

  4. Route Tables: The public subnet will have a route to the Internet Gateway, and the private subnet will route traffic internally.

Create a main.tf file in your directory:

```hcl
provider "aws" {
  region = "us-east-1" # Replace with your preferred region
}

resource "aws_vpc" "main_vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "main-vpc"
  }
}

resource "aws_subnet" "public_subnet" {
  vpc_id                  = aws_vpc.main_vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a" # Adjust accordingly
  map_public_ip_on_launch = true

  tags = {
    Name = "public-subnet"
  }
}

resource "aws_subnet" "private_subnet" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1a" # Adjust accordingly

  tags = {
    Name = "private-subnet"
  }
}

resource "aws_internet_gateway" "main_igw" {
  vpc_id = aws_vpc.main_vpc.id

  tags = {
    Name = "main-igw"
  }
}

resource "aws_route_table" "public_route_table" {
  vpc_id = aws_vpc.main_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main_igw.id
  }

  tags = {
    Name = "public-route-table"
  }
}

resource "aws_route_table_association" "public_subnet_association" {
  subnet_id      = aws_subnet.public_subnet.id
  route_table_id = aws_route_table.public_route_table.id
}

# Route table for the private subnet: no route to the Internet Gateway,
# so traffic stays internal to the VPC.
resource "aws_route_table" "private_route_table" {
  vpc_id = aws_vpc.main_vpc.id

  tags = {
    Name = "private-route-table"
  }
}

resource "aws_route_table_association" "private_subnet_association" {
  subnet_id      = aws_subnet.private_subnet.id
  route_table_id = aws_route_table.private_route_table.id
}

# S3 is a Gateway-type endpoint, so it attaches to route tables, not subnets.
resource "aws_vpc_endpoint" "s3_endpoint" {
  vpc_id          = aws_vpc.main_vpc.id
  service_name    = "com.amazonaws.us-east-1.s3"
  route_table_ids = [aws_route_table.private_route_table.id]
}
```

Step 3: Configure IAM Role for EC2

An IAM Role allows your EC2 instance to interact with other AWS services securely without embedding credentials.

```hcl
resource "aws_iam_role" "ec2_role" {
  name = "ec2_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = "sts:AssumeRole",
        Effect = "Allow",
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "attach_s3_policy" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}

# Instance profile: the wrapper that attaches the role to an EC2 instance.
resource "aws_iam_instance_profile" "ec2_instance_profile" {
  name = "ec2_instance_profile"
  role = aws_iam_role.ec2_role.name
}
```

This grants the EC2 instance read-only access to Amazon S3.
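The managed `AmazonS3ReadOnlyAccess` policy covers every bucket in the account. If you want tighter scoping, a least-privilege inline policy is a common alternative; the sketch below is illustrative, and the bucket name `my-app-bucket` is a placeholder:

```hcl
# Hypothetical least-privilege alternative to the managed policy:
# read-only access to a single bucket ("my-app-bucket" is a placeholder).
resource "aws_iam_role_policy" "s3_single_bucket_read" {
  name = "s3-single-bucket-read"
  role = aws_iam_role.ec2_role.id

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = ["s3:GetObject", "s3:ListBucket"],
        Effect = "Allow",
        Resource = [
          "arn:aws:s3:::my-app-bucket",
          "arn:aws:s3:::my-app-bucket/*"
        ]
      }
    ]
  })
}
```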

Step 4: Configure Security Groups

Security groups act as virtual firewalls to control inbound and outbound traffic.

```hcl
resource "aws_security_group" "ec2_security_group" {
  vpc_id = aws_vpc.main_vpc.id
  name   = "ec2-security-group"

  ingress {
    description = "Allow SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allow HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "ec2-security-group"
  }
}
```

This security group allows inbound traffic for SSH (port 22) and HTTP (port 80).
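Opening SSH to `0.0.0.0/0` is fine for a quick test, but in practice you would usually restrict it to a known address range. One way is to parameterize the source CIDR with a variable (the names and default below are illustrative):

```hcl
# Illustrative: restrict the SSH source range instead of using 0.0.0.0/0.
variable "ssh_allowed_cidr" {
  description = "CIDR block allowed to SSH into the instance"
  type        = string
  default     = "203.0.113.0/24" # Example range; replace with your own
}

# The SSH ingress rule in the security group would then become:
#   ingress {
#     description = "Allow SSH"
#     from_port   = 22
#     to_port     = 22
#     protocol    = "tcp"
#     cidr_blocks = [var.ssh_allowed_cidr]
#   }
```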

Step 5: Deploy EC2 Instance

Now, we can define the EC2 instance using the configured VPC, Subnets, IAM Role, and Security Groups.

```hcl
resource "aws_instance" "web_server" {
  ami           = "ami-0c55b159cbfafe1f0" # Replace with your desired AMI
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.public_subnet.id

  # When launching into a VPC subnet, reference security groups by ID,
  # not by name (security_groups only works in EC2-Classic / default VPC).
  vpc_security_group_ids = [aws_security_group.ec2_security_group.id]
  iam_instance_profile   = aws_iam_instance_profile.ec2_instance_profile.name

  user_data = <<-EOF
    #!/bin/bash
    sudo yum install -y httpd
    sudo systemctl start httpd
    sudo systemctl enable httpd
  EOF

  tags = {
    Name = "web-server"
  }
}
```

In this example:

  • We use the Amazon Linux 2 AMI (replace it with the AMI of your choice).
  • We launch a t2.micro instance in the public subnet.
  • The security group allows HTTP and SSH traffic.
  • The user_data script installs and starts the Apache web server.
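AMI IDs differ per region and go stale over time. Instead of hardcoding one, you can look up the latest Amazon Linux 2 AMI with a data source (a common pattern; adjust the filter if you prefer a different image):

```hcl
# Look up the most recent Amazon Linux 2 AMI published by Amazon.
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# In aws_instance.web_server you could then use:
#   ami = data.aws_ami.amazon_linux_2.id
```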

Step 6: Initialize and Apply Terraform

Now that the configuration is complete, initialize and apply it:

  1. Initialize Terraform: this downloads the required providers and modules.

```bash
terraform init
```

  2. Create the Terraform plan: this shows the changes Terraform will make.

```bash
terraform plan
```

  3. Apply the Terraform plan: this deploys your infrastructure.

```bash
terraform apply
```

Confirm the action by typing yes when prompted.

Step 7: Access Your Application

Once `terraform apply` completes, your EC2 instance will be running with a public IP. You can access the application by visiting that IP in your browser.

```bash
# Show declared output values (requires an output block in your configuration)
terraform output
```

Your web server (Apache) should be running on the instance, and you should see the default Apache test page when you visit the IP.
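Note that `terraform output` only prints values you have declared. To expose the instance's public IP, add an output block to `main.tf` (the name `public_ip` is just a convention):

```hcl
output "public_ip" {
  description = "Public IP address of the web server"
  value       = aws_instance.web_server.public_ip
}
```

After `terraform apply`, `terraform output -raw public_ip` prints the bare IP, which is handy for scripting (e.g. `curl http://$(terraform output -raw public_ip)`).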


Conclusion

You have successfully deployed an EC2 instance running a web server using Terraform. The instance is secured with an IAM role, VPC configuration, and a security group. The public subnet reaches the Internet through an Internet Gateway, while the private subnet can reach S3 privately through the VPC gateway endpoint.

Feel free to modify this configuration for more complex setups, such as adding more instances or deploying applications behind a load balancer.