Welcome to Terraform: Architecting for Scale
You’re tasked with deploying a scalable cloud architecture, and someone suggests, “Just click through the AWS console!” Sure, that might work for a one-off deployment, but maintaining infrastructure like that? A nightmare. Enter Terraform.
This is the first post in the Terraform: Architecting for Scale series. We’ll cover modular Terraform design that scales effortlessly. Today, we’ll build a foundational AWS setup: a VPC, ECS cluster, and RDS instance - all neatly packaged in a Terraform module.
Who Is This For?
- Cloud architects, DevOps engineers, and software developers who want a structured Terraform approach
- Anyone tired of spaghetti-code Terraform projects
- Those looking to build scalable, production-grade infrastructure
- Teams managing multi-environment cloud deployments
- Engineers transitioning from manual infrastructure management to Infrastructure as Code (IaC)
Terraform Fundamentals
Before diving into code, let’s cover the basics:
- Providers define which platform (AWS, Azure, etc.) Terraform interacts with.
- Resources describe infrastructure components (EC2, VPC, RDS, etc.).
- Variables make code reusable and configurable.
- Outputs expose useful data from the module.
- Modules encapsulate logic to keep Terraform projects maintainable.
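To see how these pieces fit together, here is a minimal, self-contained sketch (the S3 bucket resource and all names here are illustrative only, not part of the project we build below):

```hcl
# Provider: tells Terraform we're targeting AWS in a given region.
provider "aws" {
  region = var.region
}

# Variable: makes the region configurable instead of hardcoded.
variable "region" {
  type    = string
  default = "us-east-1"
}

# Resource: a single piece of infrastructure, here an S3 bucket.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"
}

# Output: exposes data that other configurations (or humans) can consume.
output "bucket_arn" {
  value = aws_s3_bucket.example.arn
}
```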
Why Modular Terraform?
Modular Terraform design prevents redundant code, enhances reusability, and simplifies infrastructure changes. Instead of defining VPCs, ECS clusters, and RDS instances repeatedly across environments, we create self-contained modules. This way, when a business need changes, we update a single module rather than hunting through an entire project.
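As a quick sketch of the payoff: two environments consume the same module with different inputs, so a fix to the module's internals lands in both at once (paths and CIDR ranges here are illustrative):

```hcl
# envs/dev/main.tf (illustrative)
module "network" {
  source     = "../../modules/network"
  cidr_block = "10.0.0.0/16"
}

# envs/prod/main.tf (illustrative)
module "network" {
  source     = "../../modules/network"
  cidr_block = "10.1.0.0/16"
}
```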
Project Structure
A clean structure is crucial:
terraform-project/
├── modules/
│ ├── network/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ ├── outputs.tf
│ ├── ecs/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ ├── outputs.tf
│ ├── rds/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ ├── outputs.tf
├── envs/
│ ├── dev/
│ │ ├── main.tf
│ │ ├── terraform.tfvars
│ ├── prod/
│ │ ├── main.tf
│ │ ├── terraform.tfvars
├── provider.tf
├── versions.tf
└── README.md
Setting Up the VPC Module
A VPC provides network isolation for our infrastructure. Here we create the VPC and its public subnets; private subnets follow the same pattern with `map_public_ip_on_launch` left disabled.
modules/network/main.tf:
resource "aws_vpc" "main" {
  cidr_block           = var.cidr_block
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_subnet" "public" {
  count                   = length(var.public_subnets)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnets[count.index]
  map_public_ip_on_launch = true
  availability_zone       = element(var.azs, count.index)
}
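The module's inputs and outputs might look like this, a sketch consistent with the resources above (the `variables.tf` and `outputs.tf` files from the project structure):

```hcl
# modules/network/variables.tf
variable "cidr_block" {
  description = "CIDR range for the VPC"
  type        = string
}

variable "public_subnets" {
  description = "CIDR ranges for the public subnets"
  type        = list(string)
}

variable "azs" {
  description = "Availability zones to spread subnets across"
  type        = list(string)
}

# modules/network/outputs.tf
output "vpc_id" {
  value = aws_vpc.main.id
}

output "public_subnet_ids" {
  value = aws_subnet.public[*].id
}
```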
Setting Up the ECS Module
An ECS cluster hosts our containerized applications. We can later attach services with load balancers and IAM roles.
modules/ecs/main.tf:
resource "aws_ecs_cluster" "main" {
  name = var.cluster_name
}
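Its companion `variables.tf` and `outputs.tf` could be as simple as this sketch; the outputs let other modules (say, a service or load balancer) reference the cluster:

```hcl
# modules/ecs/variables.tf
variable "cluster_name" {
  description = "Name of the ECS cluster"
  type        = string
}

# modules/ecs/outputs.tf
output "cluster_id" {
  value = aws_ecs_cluster.main.id
}

output "cluster_arn" {
  value = aws_ecs_cluster.main.arn
}
```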
Setting Up the RDS Module
Amazon RDS gives us a managed PostgreSQL database with automated backups, patching, and failover options, so we get reliability and security without running the database ourselves.
modules/rds/main.tf:
resource "aws_db_instance" "main" {
  allocated_storage   = 20
  engine              = "postgres"
  instance_class      = "db.t3.micro"
  db_name             = var.db_name
  username            = var.db_user
  password            = var.db_password
  publicly_accessible = false
}
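A sketch of the matching input and output files; marking the password `sensitive` keeps its value out of plan output:

```hcl
# modules/rds/variables.tf
variable "db_name" {
  type = string
}

variable "db_user" {
  type = string
}

variable "db_password" {
  type      = string
  sensitive = true # redacted in plan/apply output
}

# modules/rds/outputs.tf
output "db_endpoint" {
  value = aws_db_instance.main.endpoint
}
```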
Bringing It All Together
envs/dev/main.tf:
module "network" {
  source         = "../../modules/network"
  cidr_block     = "10.0.0.0/16"
  public_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  azs            = ["us-east-1a", "us-east-1b"]
}

module "ecs" {
  source       = "../../modules/ecs"
  cluster_name = "dev-cluster"
}

module "rds" {
  source      = "../../modules/rds"
  db_name     = "devdb"
  db_user     = "admin"
  db_password = "supersecret" # for illustration only; never commit real secrets
}
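In practice you would not hardcode the password at all. One common pattern, sketched here, is to declare a sensitive variable in the environment's root module and supply the value out-of-band, e.g. via the `TF_VAR_db_password` environment variable or `-var` on the command line:

```hcl
# envs/dev/main.tf: declare the variable, then pass it through
variable "db_password" {
  type      = string
  sensitive = true
}

module "rds" {
  source      = "../../modules/rds"
  db_name     = "devdb"
  db_user     = "admin"
  db_password = var.db_password
}
```

Non-secret values like `db_name` can still live in `terraform.tfvars`; only the secret stays out of version control.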
Terraform Remote State & Backends
Why does state matter? Terraform records everything it manages in a state file and compares it against real infrastructure on every plan. By default, state lives on your local disk; for teams, use a remote backend such as S3 (ideally with DynamoDB for state locking). Remote state prevents conflicting applies, keeps everyone working from the same view of the infrastructure, and supports collaboration.
envs/dev/backend.tf:
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "envs/dev/terraform.tfstate"
    region = "us-east-1"
  }
}
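The project structure above also lists provider.tf and versions.tf at the root. A typical sketch of both (the pinned versions are examples; adjust them to your setup):

```hcl
# versions.tf
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# provider.tf
provider "aws" {
  region = "us-east-1"
}
```

Pinning provider versions keeps `terraform init` reproducible across the team, so dev and prod resolve the same provider release.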
Stay tuned for more!