Julian Alarcon

I created this post for people who plan to start using Terraform on a project, in the hope that sharing some of my lessons learned may save them some time. And yes, the title is true – I wish I had known most of these lessons before starting to work with Terraform. I have split the 11 lessons across two posts – this is part 2; you can find part 1 here.

 

5. Defining outputs

 

Outputs show the information needed after Terraform templates are deployed. They are also used within modules to export information.

 

output "instance_id" {
	value = "${aws_instance.instance_ws.id}"
}

 

When used within modules, two outputs must be defined: one inside the module itself, and a matching one in the root configuration that re-exports the module's value. Both need to be explicitly defined. The output information is stored in the Terraform state file and can be queried by other Terraform templates.
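As a sketch of this pass-through, assuming a module named 'web' that wraps the instance resource above (the module and output names are illustrative):

```hcl
# Inside the module (e.g. modules/web/outputs.tf)
output "instance_id" {
  value = "${aws_instance.instance_ws.id}"
}

# In the root configuration, re-exporting the module's output
output "web_instance_id" {
  value = "${module.web.instance_id}"
}
```

Without the second output, the module's value stays internal and cannot be queried from the root configuration's state.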

 

I recommend defining outputs for resources even if you are not using them at the time. Check the resource and the outputs provided by the resource and choose wisely which information will be useful for your infrastructure when you are using this Terraform resource. By doing so, you will decrease the need to go back and edit your module and your resource because an output will be required by a new resource that you are defining.

 

Also, to keep your files organised, you can save your output definitions in a dedicated file called outputs.tf.

 

6. Define even the smallest components

 

When working with Terraform, there may be a tendency to focus on the larger components first, which means you can miss smaller ones – causing frustration and technical debt later. During the creation of components, Terraform will fall back on your provider’s defaults for anything you haven’t defined. It’s important to acknowledge the default components in use and define them explicitly in Terraform: you might need to manage them in the future, and the provider may change its defaults without notice. This could result in two different component sets, or changes to component properties, with all the associated issues.

 

Examples of these include the route tables, which are sometimes not a focus area at the beginning of a project, or Elastic Container Repositories, which are easy to define but not always top of mind.

 

resource "aws_ecr_repository" "repository" {
	name = "name_of_repo"
}

 

7. Terraform interpolations

 

The interpolation syntax is a powerful feature which allows you to reference variables, resource attributes, call functions etc.

 

String variables -> ${var.foo}
Map variables -> ${var.amis["us-east-1"]}
List variables -> ${var.subnets[idx]}
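As a brief sketch of how these look in context (the variable names, AMI ID and subnet IDs are illustrative placeholders):

```hcl
variable "foo" {
  default = "bar"
}

variable "amis" {
  type = "map"
  default = {
    us-east-1 = "ami-0ff8a91507f77f867"  # placeholder AMI ID
  }
}

variable "subnets" {
  type    = "list"
  default = ["subnet-aaaa", "subnet-bbbb"]
}

# String, map lookup and list index interpolation in one value
output "examples" {
  value = "${var.foo} / ${var.amis["us-east-1"]} / ${var.subnets[0]}"
}
```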

 

When you need to retrieve data from a module output or from the state of particular resources, you can use the module or data syntax to call the desired attributes:

 

# Getting information from a module
output "my_module_bar_value_from_module" {
  value = "${module.my_module.bar}"
}

# Getting information from a data source
resource "aws_instance" "web" {
  ami           = "${data.aws_ami.my_ami.id}"
  instance_type = "t3.micro"
}

 

You can also use arithmetic or logical operations with interpolation. In the following code snippet, if var.something evaluates to true (1, true), the VPN resource will be included:

 

resource "aws_instance" "vpn" {
	count = "${var.something ? 1 : 0}"
}

 

You can find more information about the supported Interpolations in the Terraform documentation.

 

8. Manage your environment

 

Workspaces in Terraform

 

Initially, these were known as ‘environments’ in Terraform version 0.9, but from version 0.10 onwards the feature has been renamed ‘workspaces’.

 

You can define new workspaces, change workspaces or delete workspaces and more using the ‘terraform workspace’ command:

 

$ terraform workspace -h
Usage: terraform workspace

 

These are the available subcommands:

 

delete: Delete a workspace

list: List workspaces

new: Create a new workspace

select: Select a workspace

show: Show the name of the current workspace
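One common pattern is to use the current workspace name inside your configuration, for example to name resources per environment (the resource, data source and tag values here are illustrative):

```hcl
resource "aws_instance" "web" {
  ami           = "${data.aws_ami.my_ami.id}"
  instance_type = "t3.micro"

  tags {
    # terraform.workspace interpolates to the active workspace, e.g. "dev"
    Name = "web-${terraform.workspace}"
  }
}
```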

 

There are several advantages to using workspaces:

 

1. They are defined by Terraform developer HashiCorp, so they may be continuously improved in the future.

2. They reduce the necessary amount of code.

 

However, workspaces also present a couple of challenges:

 

1. They are still in an early implementation stage.

2. They may not be supported by all back ends.

3. It is not obvious at the time of deployment (‘terraform apply’) which workspace is active – you have to check with ‘terraform workspace show’.

 

Folder structure

 

One simple and useful option is to define components inside folders by environment:

 

project-01
├── dev
│   ├── clusters
│   │   └── ecs_cluster
│   │       ├── service01
│   │       │   ├── main.tf
│   │       │   ├── outputs.tf
│   │       │   └── variables.tf
│   │       ├── service02
│   │       ├── service03
│   │       └── service04
│   ├── database
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── elasticsearch
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── vpc
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── global
│   ├── web_login
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── terraform_state
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── prod
│   ├── clusters
│   └── …
├── qa
│   ├── clusters
│   └── …
└── README.md

 

Some advantages of this approach are:

 

1. The folder path gives a clear definition of the environment being deployed.

2. It’s the most commonly used and fail-safe option for public deployments.

3. Terraform states can be defined for each environment folder without issues.

4. You can specify the name of the outputs for each environment.

 

On the other hand, some issues include:

 

1. You'll likely end up with some duplicated code.

2. For larger projects, you will have a very large number of folders.

3. You will always need to copy code across environments and replace the environment-specific values.

 

So, which one should you choose? I think the folder structure by environment might be easier to use and is a recommended way to split the different components of your infrastructure.

 

9. Recommended workflow for commands

 

The ‘terraform’ command has multiple options, so I’d like to share this recommendation for the workflow at the moment of deployment:

 

1. Download the modules and force the update. The ‘terraform init’ command initialises the workspace, downloading the providers and modules and setting up the Terraform state back end. But if a module is already downloaded, Terraform won’t recognise that a new version of that module is available. With ‘terraform get’, it’s possible to download the modules, and I’d recommend using the ‘-update’ option to force an update.

 

terraform get -update

 

2. Once you have the latest modules, initialise the Terraform workspace. This downloads the providers and modules (already completed by the first command) and initialises the Terraform state backend.

 

terraform init

 

You can also pass the ‘-upgrade’ option to ‘terraform init’ to force the update of providers, plug-ins and modules.

 

3. Before deployment, Terraform can define a plan for deployment (what will be created, modified or destroyed). Used in a pipeline, this plan is useful to check the changes before initialising the real deployment.

 

It’s important to note that the plan is not always a 1:1 match with the deployment. If a component is already deployed, or if insufficient permissions are granted, the plan will pass but the deployment could still fail.

 

terraform plan

 

To simplify things, it’s possible to run all in one Bash line:

 

terraform get -update && terraform init && terraform plan

 

4. When you reach the final deployment step, Terraform will create a plan and provide the option to confirm the deployment of the desired architecture.

 

terraform apply

 

Please note that there’s no rollback option in Terraform after deployment. So, if an error appears in the deployment, the issue should be solved at that moment.

 

Also, it’s possible to destroy the deployment (‘terraform destroy’), but that will destroy everything rather than roll back the changes. You can also apply or destroy a specific resource with the ‘-target’ option, e.g. ‘terraform destroy -target=aws_instance.vpn’.

 

10. Calling external data

 

The use of data sources allows a Terraform configuration to build on information defined outside of Terraform, or defined by another separate Terraform configuration, including:

 

Data from a remote state – this is useful to call states from other Terraform deployments:

 

data "terraform_remote_state" "vpc_state" {
  backend = "s3"
  config {
    bucket = "ci-cd-terraform-state"
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}
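The outputs stored in that remote state can then be referenced as attributes. For example, assuming the VPC deployment exported an output named ‘vpc_id’ (the output and resource names are illustrative):

```hcl
# Security group placed in the VPC managed by the other deployment
resource "aws_security_group" "allow_internal" {
  name   = "allow_internal"
  vpc_id = "${data.terraform_remote_state.vpc_state.vpc_id}"
}
```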

 

Data from AWS or external systems:

 

data "aws_ami" "linux" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-2.0.20180810-x86_64-gp2*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["137112412989"]
}

 

11. Testing and keeping up to date

 

To test your code, you can use a great tool called Terratest, which uses Go’s unit testing framework.

 

Also, it’s always good to keep up with the latest developments and Terraform updates that could provide very helpful improvements.

 
