Practical CI/CD Guide to Deploying AWS Infrastructure through Terraform - Multi Environment Deployment - Part 2

Terraform Directory Structure

In my previous blog, I gave an introduction to the tools and services I will be using in my deployment pipeline. In this post, I will go into the Terraform directory structure. Just to recap, I will be using Terraform Cloud and GitHub Actions. One more note: in this blog I will not go into much detail about writing Terraform code; I will be using code from the Terraform Registry. Big thanks to Anton Babenko.

Prerequisites

Scenario

Let's imagine you have joined a new company and your first task is to create VPCs. They would like you to deploy three VPCs (Dev --> Stage --> Prod), and you have chosen Terraform to deploy them.

Terraform Directory Structure

If you were previously using CloudFormation, you did not need to design a directory structure, because you did not have to manage state files or modules. But defining a directory structure is very important when you are using Terraform. First I will give some examples of commonly used directory structures, then I will describe the structure I will be using in this project.

Basic Directory Structure

.
├── main.tf
├── outputs.tf
└── variables.tf

You will have 3 files in this structure.

  • main.tf is your primary file. This is where all the resources are defined.

resource "aws_vpc" "this" {
  cidr_block = var.cidr
}
  • variables.tf is where your input variables are defined.
variable "cidr" {
  description = "The CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}
  • outputs.tf is where output values are defined.
output "vpc_id" {
  description = "The ID of the VPC"
  value       = aws_vpc.this.id
}

This directory structure will work for a small project and a small team, but it won't scale once you start using modules or working on larger projects.

Complex and Scalable Directory Structure

With the basic directory structure, you will not be able to scale as your project and team grow. Larger projects need multiple environments and multiple regions, and a directory structure that lets you deploy infrastructure from development through to production from a CI/CD system. In this structure, you can make use of Terraform modules.

Modules are reusable Terraform configurations that can be called and configured by other configurations.

.
├── environments
│   ├── dev
│   │   ├── compute.tf
│   │   ├── dev.tfvars
│   │   ├── outputs.tf
│   │   ├── rds.tf
│   │   ├── s3.tf
│   │   ├── variables.tf
│   │   └── vpc.tf
│   ├── prod
│   │   ├── compute.tf
│   │   ├── outputs.tf
│   │   ├── prod.tfvars
│   │   ├── rds.tf
│   │   ├── s3.tf
│   │   ├── variables.tf
│   │   └── vpc.tf
│   └── stage
│       ├── compute.tf
│       ├── outputs.tf
│       ├── rds.tf
│       ├── s3.tf
│       ├── stage.tfvars
│       ├── variables.tf
│       └── vpc.tf
└── modules
    ├── compute
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    ├── rds
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    ├── s3
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    ├── security-group
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    └── vpc
        ├── main.tf
        ├── outputs.tf
        └── variables.tf
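A minimal example of how an environment calls one of these shared modules (the relative path, the cidr variable, and the vpc_id output are illustrative and assume the module exposes them):

```hcl
# environments/dev/vpc.tf -- call the shared VPC module with dev values.
module "vpc" {
  source = "../../modules/vpc" # relative path to the local module

  cidr = var.cidr
}

# Re-expose the module's output at the environment level.
output "vpc_id" {
  value = module.vpc.vpc_id
}
```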

With the configuration organized into subdirectories with modules, you can test each piece individually and reuse it. Each environment maintains its own state. Terraform can populate variables using values from a file: any files named terraform.tfvars or matching *.auto.tfvars in the current directory are loaded automatically. When you need to set many variables, it is more convenient to specify their values in a variable definitions file (with a filename ending in either .tfvars or .tfvars.json).
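For example, a dev.tfvars for the structure above might contain values like these (illustrative only):

```hcl
# environments/dev/dev.tfvars
cidr           = "10.10.0.0/16"
instance_type  = "t2.micro"
instance_count = 1
```

Since the filename matches neither terraform.tfvars nor *.auto.tfvars, it has to be passed explicitly, e.g. terraform plan -var-file=dev.tfvars.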

I have seen many people use this structure. Here the contents of each environment will be more or less identical. But in my view the content should be the same for all environments: we should be using the same main.tf file everywhere. If we want to change the number of servers, we can use variables.

variable "instance_count" {
  description = "Number of servers"
  type        = number
}

variable "instance_type" {
  description = "Instance size (t2.micro, t2.large)"
  type        = string
}
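Those variables can then drive a main.tf shared by every environment, with only the .tfvars values differing. A sketch (the aws_instance resource and var.ami_id are illustrative):

```hcl
resource "aws_instance" "app" {
  count         = var.instance_count # e.g. 1 in dev, 3 in prod
  ami           = var.ami_id         # assumed variable holding the AMI ID
  instance_type = var.instance_type  # e.g. t2.micro in dev, t2.large in prod
}
```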

Proposed Directory Structure

As I mentioned above, having a separate folder and separate configuration files per environment doesn't make much sense to me. Feel free to comment if you think there is an advantage to having a separate folder for each environment. So here is my proposed directory structure for the VPC deployment.

├── README.md
├── main.tf
├── outputs.tf
└── variables.tf

This directory is for the VPC deployment. I am a strong believer in decoupling components: loosely coupled resources are easier to manage and deploy. So each resource will have its own repo.

You might be wondering how you are going to reference resources from a different repo. That's where Terraform Cloud workspaces will help you. I will explain this topic in my next blog post.
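As a preview, one common way to read outputs published by another workspace is the terraform_remote_state data source (the organization and workspace names below are placeholders):

```hcl
# Read the state of the vpc-dev workspace in Terraform Cloud.
data "terraform_remote_state" "vpc" {
  backend = "remote"

  config = {
    organization = "xxxxxxxx"
    workspaces = {
      name = "vpc-dev"
    }
  }
}

# The other workspace's outputs are then available, e.g.:
# data.terraform_remote_state.vpc.outputs.vpc_id
```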

Looking at the above directory you might think it looks similar to the "Basic Directory Structure". You might also be wondering where the module directories are. Yes, the directories look similar, but the magic happens within the contents of the configuration files.

terraform {
  required_version = "~> 0.12"
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "xxxxxxxx"
    workspaces { prefix = "vpc-" }
  }
}

provider "aws" {
  region = "ap-south-1"
}


module "vpc" {
  source = "github.com/nitheesh86/terraform-modules//modules/vpc"

  name = var.name
  cidr = "10.0.0.0/16"

  azs             = ["ap-south-1a", "ap-south-1b", "ap-south-1c"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]

  enable_nat_gateway = true
  enable_vpn_gateway = true

  tags = {
    Terraform   = "true"
    Environment = var.env
  }
}
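With the workspaces { prefix = "vpc-" } block above, terraform init asks you to select a workspace (for example vpc-dev, vpc-stage, vpc-prod; the exact names are assumptions), and each workspace supplies its own values for the input variables. The matching declarations might look like this:

```hcl
# Declared once; values are set per Terraform Cloud workspace.
variable "name" {
  description = "Name of the VPC"
  type        = string
}

variable "env" {
  description = "Environment name (dev, stage, prod)"
  type        = string
}
```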

I have kept the modules independent of the configurations. All of my modules are stored in a different repo, and I call each module by its Git repo URL.

The source argument in a module block tells Terraform where to find the source code for the desired child module.

The following module sources are supported by Terraform:

  • Local paths
  • Terraform Registry
  • GitHub
  • Bitbucket
  • Generic Git, Mercurial repositories
  • HTTP URLs
  • S3 buckets
  • GCS buckets
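A few of these source types in practice (the repo paths and version constraint are illustrative):

```hcl
# Local path
module "vpc_local" {
  source = "./modules/vpc"
}

# Terraform Registry (with version pinning)
module "vpc_registry" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.0"
}

# GitHub repo; "//" separates the repo from a subdirectory inside it
module "vpc_git" {
  source = "github.com/nitheesh86/terraform-modules//modules/vpc"
}
```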

Since we are using Terraform Cloud, we can also use the Terraform registry as our module source. But each module then requires a separate Git repo. For example, a published VPC module (terraform-aws-vpc) can contain only code related to VPC resources; for the security group module, you need to create another repo (terraform-aws-sg).

One module per repository. The registry cannot use combined repositories with multiple modules.

But it is worth looking at this structure if you have separate network, security, and compute teams in your company. Each team can manage its modules separately.

Terraform Module Repository Directory Structure

Terraform Module Repo

.
└── modules
    ├── sg
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    └── vpc
        ├── README.md
        ├── main.tf
        ├── outputs.tf
        ├── variables.tf
        ├── vpc-endpoints.tf
        └── vpc-flow-logs.tf

Please add your directory structure in the comment section.
