Manually clicking through cloud provider dashboards to create servers, configure DNS records, and set up firewalls does not scale. When your infrastructure grows beyond a few resources, manual management becomes error-prone, inconsistent, and impossible to reproduce. Infrastructure as Code (IaC) solves this by defining your entire infrastructure in version-controlled configuration files. Terraform, created by HashiCorp, is the most widely adopted IaC tool, supporting hundreds of cloud providers with a single workflow.
What Is Infrastructure as Code?
Infrastructure as Code (IaC) is the practice of managing infrastructure — servers, networks, DNS, databases, firewalls — using declarative configuration files instead of manual processes. You describe the desired state of your infrastructure, and the tool figures out how to make it happen.
Key benefits:
- Reproducibility — Deploy identical environments for development, staging, and production
- Version control — Track every infrastructure change in Git with full history
- Collaboration — Review infrastructure changes through pull requests just like application code
- Automation — Integrate with CI/CD pipelines for automated deployments
- Documentation — The code itself documents what your infrastructure looks like
Terraform vs Other IaC Tools
| Feature | Terraform | Ansible | Pulumi | CloudFormation |
|---|---|---|---|---|
| Approach | Declarative | Imperative/Declarative | Declarative (general-purpose languages) | Declarative |
| Language | HCL | YAML | Python, TypeScript, Go | JSON/YAML |
| Multi-cloud | Yes | Yes | Yes | AWS only |
| State management | State file | Stateless | State file | Managed by AWS |
| Best for | Infrastructure provisioning | Configuration management | Developers who prefer code | AWS-only shops |
Terraform excels at provisioning infrastructure across multiple cloud providers. Use Ansible for configuration management (installing software on servers), and Terraform for creating the servers themselves. Many teams use both together.
Installing Terraform
macOS
```shell
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
```
Ubuntu/Debian
```shell
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
```
Windows
```shell
choco install terraform
```
Verify the installation:
```shell
terraform version
```
HCL Syntax Basics
Terraform uses HashiCorp Configuration Language (HCL), a declarative language designed to be human-readable. Here are the core constructs:
Blocks
Everything in Terraform is defined in blocks:
```hcl
# Resource block
resource "type" "name" {
  argument1 = "value1"
  argument2 = "value2"

  nested_block {
    key = "value"
  }
}
```
Data Types
```hcl
# String
name = "web-server"

# Number
count = 3

# Boolean
enable_monitoring = true

# List
availability_zones = ["us-east-1a", "us-east-1b"]

# Map
tags = {
  Environment = "production"
  Team        = "platform"
}
```
References and Interpolation
```hcl
# Reference another resource's attribute
subnet_id = aws_subnet.main.id

# String interpolation
name = "app-${var.environment}-${count.index}"

# Conditional expression
instance_type = var.environment == "production" ? "t3.large" : "t3.micro"
```
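These pieces compose naturally in a single resource. As an illustrative sketch (the variable names and AMI are hypothetical, not part of a real configuration):

```hcl
# Create var.server_count identical servers, sized by environment
# and named with the environment and a per-instance index
resource "aws_instance" "app" {
  count         = var.server_count
  ami           = var.ami_id
  instance_type = var.environment == "production" ? "t3.large" : "t3.micro"

  tags = {
    Name = "app-${var.environment}-${count.index}"
  }
}
```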
The Core Terraform Workflow
Every Terraform project follows four commands:
1. terraform init
Initializes the working directory, downloads provider plugins, and configures the backend:
```shell
terraform init
```
Run this when you first set up a project or add a new provider.
2. terraform plan
Previews the changes Terraform will make without actually applying them:
```shell
terraform plan
```
This is your safety net. Always review the plan before applying. The output shows resources to be created (+), modified (~), or destroyed (-).
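For illustration, an abbreviated (not verbatim) plan for one new DNS record looks roughly like this:

```
Terraform will perform the following actions:

  # cloudflare_record.www will be created
  + resource "cloudflare_record" "www" {
      + name    = "www"
      + type    = "A"
      + content = "203.0.113.50"
      ...
    }

Plan: 1 to add, 0 to change, 0 to destroy.
```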
3. terraform apply
Executes the changes shown in the plan:
```shell
terraform apply
```
Terraform will show the plan again and ask for confirmation. Use -auto-approve only in automated pipelines, never interactively.
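In a pipeline, a common pattern is to save the reviewed plan to a file and apply exactly that file. Applying a saved plan does not prompt for confirmation, so -auto-approve is only needed when applying directly from configuration. A minimal sketch:

```shell
# Non-interactive CI sequence: the apply step runs exactly what the
# plan step produced, with no confirmation prompt
terraform init -input=false
terraform plan -input=false -out=tfplan
terraform apply -input=false tfplan
```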
4. terraform destroy
Removes all resources managed by this configuration:
```shell
terraform destroy
```
Warning: terraform destroy is irreversible. It deletes real infrastructure. Always verify you are targeting the correct workspace and environment before running this command.
Providers
Providers are plugins that interface with cloud platforms, SaaS tools, and other APIs. You must declare which providers your configuration uses:
```hcl
terraform {
  required_version = ">= 1.7.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}
```
The ~> 5.0 version constraint means “any version >= 5.0.0 and < 6.0.0,” allowing patch and minor updates while preventing breaking changes.
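For reference, the other common constraint forms (version numbers here are arbitrary examples):

```hcl
version = "5.31.0"          # exactly 5.31.0
version = ">= 5.0"          # 5.0.0 or newer, including 6.x and beyond
version = "~> 5.31"         # >= 5.31.0 and < 6.0.0
version = "~> 5.31.0"       # >= 5.31.0 and < 5.32.0 (patch releases only)
version = ">= 5.0, < 5.40"  # an explicit range
```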
Your First Resource: Cloudflare DNS Record
Let’s create a practical first resource — a DNS record in Cloudflare:
```hcl
# main.tf
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 4.0"
    }
  }
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

resource "cloudflare_record" "www" {
  zone_id = var.cloudflare_zone_id
  name    = "www"
  content = "203.0.113.50"
  type    = "A"
  ttl     = 1 # must be 1 (automatic) when the record is proxied
  proxied = true
}

resource "cloudflare_record" "mail" {
  zone_id  = var.cloudflare_zone_id
  name     = "@"
  content  = "mail.knowledgexchange.xyz"
  type     = "MX"
  priority = 10
  ttl      = 3600
}
```
Run the workflow:
```shell
terraform init
terraform plan
terraform apply
```
Terraform creates both DNS records and tracks them in its state file.
Variables
Variables make your configurations reusable and environment-specific:
Defining Variables
```hcl
# variables.tf
variable "cloudflare_api_token" {
  description = "Cloudflare API token with DNS edit permissions"
  type        = string
  sensitive   = true
}

variable "cloudflare_zone_id" {
  description = "Cloudflare zone ID for the domain"
  type        = string
}

variable "environment" {
  description = "Deployment environment"
  type        = string
  default     = "staging"

  validation {
    condition     = contains(["development", "staging", "production"], var.environment)
    error_message = "Environment must be development, staging, or production."
  }
}

variable "server_count" {
  description = "Number of servers to create"
  type        = number
  default     = 2
}

variable "allowed_ips" {
  description = "List of IP addresses allowed to access the server"
  type        = list(string)
  default     = []
}

variable "tags" {
  description = "Resource tags"
  type        = map(string)

  default = {
    ManagedBy = "terraform"
  }
}
```
Setting Variable Values
Create a terraform.tfvars file (automatically loaded):
```hcl
# terraform.tfvars
cloudflare_zone_id = "abc123def456"
environment        = "production"
server_count       = 3
allowed_ips        = ["203.0.113.50", "198.51.100.25"]

tags = {
  ManagedBy   = "terraform"
  Environment = "production"
  Team        = "platform"
}
```
Pass sensitive values through environment variables:
```shell
export TF_VAR_cloudflare_api_token="your-api-token-here"
terraform apply
```
Important: Never commit terraform.tfvars files containing secrets to version control. Use environment variables or a secrets manager for sensitive values.
Outputs
Outputs expose values from your configuration, making them available to other configurations or scripts:
```hcl
# outputs.tf
output "dns_record_hostname" {
  description = "The FQDN of the created DNS record"
  value       = cloudflare_record.www.hostname
}

output "server_ip" {
  description = "The public IP address of the server"
  value       = aws_instance.web.public_ip
}

output "database_connection_string" {
  description = "Database connection string"
  value       = aws_db_instance.main.endpoint
  sensitive   = true
}
```
View outputs after applying:
```shell
terraform output
terraform output dns_record_hostname
```
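Outputs are also consumable by scripts. The -json flag prints all outputs as JSON, and -raw prints a single value without quotes, so it can be fed straight into another command (the output names below assume the examples above):

```shell
# Machine-readable output for scripts
terraform output -json

# Bare value, suitable for command substitution
terraform output -raw dns_record_hostname

# e.g. connect using an output value
ssh ubuntu@"$(terraform output -raw server_ip)"
```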
State Management
Terraform tracks every resource it manages in a state file (terraform.tfstate). This file maps your configuration to real-world resources.
Why State Matters
- Terraform uses state to determine what changes need to be made
- State contains sensitive information (resource IDs, IP addresses, sometimes passwords)
- Without state, Terraform cannot manage existing resources
Remote Backends
Never store state files locally for team projects. Use a remote backend:
```hcl
# backend.tf
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "production/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}
```
The S3 backend stores state in an S3 bucket with server-side encryption. The DynamoDB table provides state locking, preventing two people from applying changes simultaneously.
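Note that the bucket and lock table must exist before terraform init can use them, so they are typically created once, in a separate bootstrap configuration. A minimal sketch (resource names are examples):

```hcl
# One-time bootstrap for the remote backend
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-terraform-state" # S3 bucket names must be globally unique
}

# Versioning lets you recover earlier state files
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "tf_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # the key the S3 backend uses for lock entries

  attribute {
    name = "LockID"
    type = "S"
  }
}
```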
Other backend options include:
- Azure Blob Storage — For Azure-centric teams
- Google Cloud Storage — For GCP users
- Terraform Cloud — HashiCorp’s managed service with a free tier
- Consul — For on-premises deployments
Data Sources
Data sources let you read information from existing infrastructure that Terraform does not manage:
```hcl
# Look up an existing VPC
data "aws_vpc" "existing" {
  filter {
    name   = "tag:Name"
    values = ["production-vpc"]
  }
}

# Look up the subnet IDs in that VPC
data "aws_subnets" "existing" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.existing.id]
  }
}

# Look up the latest Ubuntu AMI
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-*-24.04-amd64-server-*"]
  }
}

# Use the data sources in a resource
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"
  subnet_id     = data.aws_subnets.existing.ids[0] # needs a subnet ID, not the VPC ID

  tags = {
    Name = "web-server"
  }
}
```
Modules
Modules are reusable packages of Terraform configuration. They encapsulate related resources into a single unit:
```hcl
# modules/web-server/main.tf
variable "instance_type" {
  type    = string
  default = "t3.micro"
}

variable "server_name" {
  type = string
}

# Modules must declare their own data sources; the parent's are not visible here
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-*-24.04-amd64-server-*"]
  }
}

resource "aws_instance" "server" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.instance_type

  tags = {
    Name = var.server_name
  }
}

output "public_ip" {
  value = aws_instance.server.public_ip
}
```
Use the module:
```hcl
# main.tf
module "web" {
  source        = "./modules/web-server"
  server_name   = "web-production"
  instance_type = "t3.large"
}

module "staging" {
  source        = "./modules/web-server"
  server_name   = "web-staging"
  instance_type = "t3.micro"
}

output "web_ip" {
  value = module.web.public_ip
}
```
The Terraform Registry hosts thousands of community modules. For example, the popular terraform-aws-modules/vpc/aws module creates a complete VPC with subnets, route tables, and NAT gateways in a few lines.
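For illustration, using that module looks roughly like this (check the module's registry page for the full, current set of inputs):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "production-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway = true
}
```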
Practical Example: Complete Web Stack
Here is a real-world example that deploys DNS records, a server, and firewall rules:
```hcl
# main.tf
terraform {
  required_version = ">= 1.7.0"

  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 4.0"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket         = "myapp-terraform-state"
    key            = "web-stack/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}

provider "aws" {
  region = var.aws_region
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

# --- Security Group (Firewall) ---
resource "aws_security_group" "web" {
  name        = "${var.project}-web-sg"
  description = "Allow HTTP, HTTPS, and SSH"
  vpc_id      = var.vpc_id

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH (assumes sshd on the server is configured to listen on 2222)"
    from_port   = 2222
    to_port     = 2222
    protocol    = "tcp"
    cidr_blocks = var.ssh_allowed_ips
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = merge(var.tags, {
    Name = "${var.project}-web-sg"
  })
}

# --- EC2 Instance ---
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-*-24.04-amd64-server-*"]
  }
}

resource "aws_instance" "web" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.instance_type
  key_name               = var.ssh_key_name
  vpc_security_group_ids = [aws_security_group.web.id]
  subnet_id              = var.subnet_id

  root_block_device {
    volume_size = 30
    volume_type = "gp3"
    encrypted   = true
  }

  user_data = <<-EOF
    #!/bin/bash
    apt-get update
    apt-get install -y nginx
    systemctl enable nginx
    systemctl start nginx
  EOF

  tags = merge(var.tags, {
    Name = "${var.project}-web-server"
  })
}

# --- DNS Records ---
resource "cloudflare_record" "root" {
  zone_id = var.cloudflare_zone_id
  name    = "@"
  content = aws_instance.web.public_ip
  type    = "A"
  ttl     = 1 # must be 1 (automatic) when proxied
  proxied = true
}

resource "cloudflare_record" "www" {
  zone_id = var.cloudflare_zone_id
  name    = "www"
  content = var.domain
  type    = "CNAME"
  ttl     = 1 # must be 1 (automatic) when proxied
  proxied = true
}
```
```hcl
# variables.tf
variable "aws_region" {
  type    = string
  default = "us-east-1"
}

variable "cloudflare_api_token" {
  type      = string
  sensitive = true
}

variable "cloudflare_zone_id" {
  type = string
}

variable "domain" {
  type = string
}

variable "project" {
  type    = string
  default = "myapp"
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
}

variable "vpc_id" {
  type = string
}

variable "subnet_id" {
  type = string
}

variable "ssh_key_name" {
  type = string
}

variable "ssh_allowed_ips" {
  type    = list(string)
  default = []
}

variable "tags" {
  type = map(string)

  default = {
    ManagedBy = "terraform"
  }
}
```
```hcl
# outputs.tf
output "server_public_ip" {
  description = "Public IP of the web server"
  value       = aws_instance.web.public_ip
}

output "website_url" {
  description = "URL of the website"
  value       = "https://${var.domain}"
}

output "ssh_command" {
  description = "SSH command to connect to the server"
  value       = "ssh -p 2222 ubuntu@${aws_instance.web.public_ip}"
}
```
Deploy the complete stack:
```shell
terraform init
terraform plan -out=plan.tfplan
terraform apply plan.tfplan
```
Best Practices
Version Pinning
Always pin provider and Terraform versions to avoid unexpected changes:
```hcl
terraform {
  required_version = ">= 1.7.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.30"
    }
  }
}
```
.gitignore for Terraform
Add this to your .gitignore:
```
# Terraform
*.tfstate
*.tfstate.*
*.tfvars
.terraform/
.terraform.lock.hcl
crash.log
override.tf
override.tf.json
*_override.tf
*_override.tf.json
```
Note: HashiCorp recommends committing .terraform.lock.hcl so that everyone on the team (and CI) resolves identical provider versions, which is especially valuable for production configurations. If you follow that recommendation, remove it from the .gitignore above.
Workspaces
Use workspaces to manage multiple environments with the same configuration:
```shell
# Create workspaces
terraform workspace new staging
terraform workspace new production

# Switch workspaces
terraform workspace select staging

# List workspaces
terraform workspace list
```
Reference the workspace in your configuration:
```hcl
resource "aws_instance" "web" {
  instance_type = terraform.workspace == "production" ? "t3.large" : "t3.micro"

  tags = {
    Environment = terraform.workspace
  }
}
```
File Structure
Organize your Terraform project with a clear structure:
```
project/
  main.tf            # Primary resources
  variables.tf       # Variable declarations
  outputs.tf         # Output declarations
  providers.tf       # Provider configuration
  backend.tf         # Backend configuration
  terraform.tfvars   # Variable values (not committed)
  modules/
    web-server/
      main.tf
      variables.tf
      outputs.tf
```
Common Mistakes to Avoid
- Storing state locally — Always use a remote backend for team projects. Local state files get out of sync and cause conflicts.
- Hardcoding values — Use variables for anything that might change between environments. Hardcoded values make configurations brittle.
- Not using terraform plan — Always review the plan before applying. A misconfigured resource can delete production data.
- Committing secrets — Never commit API tokens, passwords, or terraform.tfvars files with sensitive values. Use environment variables or a secrets manager.
- Ignoring state locking — Without state locking (DynamoDB for S3, or Terraform Cloud), two people can modify infrastructure simultaneously, causing corruption.
- Creating resources manually — Once you start using Terraform, create everything through Terraform. Manual changes create drift between your code and actual infrastructure.
- Massive monolithic configurations — Break large configurations into modules and separate state files. A single state file for your entire infrastructure is fragile and slow.
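When a resource was created manually, you can bring it under Terraform's management instead of recreating it. After writing a matching resource block, terraform import maps the real resource onto that address (the resource address and instance ID below are hypothetical):

```shell
# Adopt an existing EC2 instance into state under aws_instance.web;
# run terraform plan afterwards to see any drift between config and reality
terraform import aws_instance.web i-0abc123def4567890
```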
Summary
Terraform transforms infrastructure management from a manual, error-prone process into a repeatable, version-controlled workflow. Start with a simple resource like a DNS record, then gradually build up to managing complete application stacks. The key principles are: define everything as code, always review plans before applying, use remote state with locking, and break large configurations into reusable modules.
For more cloud and DevOps content, explore our Cloud articles and DevOps guides. If you are deploying to specific cloud platforms, check out our guides on cloud hosting providers.