The Grayzone

Managing Infrastructure with Octopus and Terraform

Terraform is an awesome tool. It gives fantastic control over infrastructure and massively helps avoid any issues when migrating apps across environments, as you know exactly what infrastructure will be deployed.

However, as my Terraform config has grown arms and legs I’ve had to restructure (several times). The project I’m currently working on has a few distinct groups of infrastructure that are deployed at varying cadences. These are (I’m using Azure, so the examples use Azure resource names):

  1. Core infrastructure elements that will change very rarely, if at all. e.g. Resource Group, Storage Containers.
  2. Network specific infrastructure that will change infrequently. e.g. Virtual Network Gateway, Network Security Rules.
  3. Application specific infrastructure that will change frequently. e.g. Virtual Machines.

Fairly obvious but worth pointing out that each infrastructure area depends on those with a slower cadence: 3 relies on 2 and 1, 2 relies on 1, etc. (you can’t deploy a VM without setting up the network first).

I’m still cutting my teeth with Terraform and every day brings new challenges so the details I outline here could well be out of date shortly after I publish! I will try to keep this article up to date while maintaining the history.

Final note before I get into the guts of the article is that I’m using Terraform 0.9.3.

State Files

A lot of the “best practice” and “lessons learned” articles I read (e.g. here, here and here) advocate using a separate folder per environment to facilitate having a separate state file per environment. This guards against accidentally running a terraform apply in Production rather than Test (not good!).

As well as having separate state files per environment, I want a separate state file per infrastructure area (1, 2 and 3 above). Having separate state files per area means that I can make use of Terraform’s remote state data sources to share output values between areas - but not between environments. E.g.

data "terraform_remote_state" "network" {
  backend = "consul"
  config {
    path = "state/dev/network.tfstate"
  }
}

This is an important distinction as I do want to be able to query the id of a NIC created in the Network area when creating a VM. However, I definitely don’t want to be able to query the id of a dev NIC from the QA environment.

It is also necessary due to the way I’ve split up the resources. If I run (2) against the same state file as (1), Terraform will want to destroy everything in (1), because it won’t find those resources in (2)’s config files.
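To make this concrete, here is a hedged sketch of both halves (the output and resource names are mine, not from the real config): the Network area exports the NIC id as an output, and the Application area reads it back through the remote state data source.

```hcl
# In the Network area (2): export the NIC id so faster-cadence areas
# can consume it. Resource names here are illustrative.
output "nic_id" {
  value = "${azurerm_network_interface.app.id}"
}

# In the Application area (3): read the id back via remote state and
# wire it into the VM. In Terraform 0.9 outputs are read directly off
# the data source.
resource "azurerm_virtual_machine" "app" {
  name                  = "app-vm"
  vm_size               = "${var.vm_size}"
  location              = "${var.location}"
  resource_group_name   = "${var.resource_group_name}"
  network_interface_ids = ["${data.terraform_remote_state.network.nic_id}"]
  # os profile, disks etc. omitted
}
```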


I played with having a folder per environment (e.g. dev, QA and production) but found it got unwieldy pretty quickly, with a lot of duplication. Ultimately all that differs between environments are Terraform variable values - for example VM size: a ‘Standard A7’ in production but a ‘Standard A2’ in QA.

We already use Octopus Deploy for application deployment and I thought it would be a good fit for deploying Terraform infrastructure changes too, as it is good at managing environment-specific configuration (those Terraform variable values that differ per environment). Hopefully this article gives an overview of how I’ve accomplished this, as I’ve found Octopus and Terraform work quite well together.

The project is contained within a single Git repository. An example of the folder structure is:

 src/
    modules/
        ...
    UK South/
        core/
            - terraform.tfvars
            ...
        network/
            ...
        application/
            ...
    EU West/
        ...


The modules folder contains standard Terraform modules; in general each module contains the same set of files: the resource definitions, input variables and outputs (conventionally main.tf, variables.tf and outputs.tf).

Having consistent files within each module makes it a lot easier to grok what is going on. I don’t have hard and fast rules about when to split a module into smaller modules; I just do it when it feels right.
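As a hedged sketch of what one of these modules might look like (the names are mine, borrowed from the tfvars example later in this article, and whether azurerm_managed_disk is available depends on your Terraform/provider build), with the three conventional files shown together for brevity:

```hcl
# variables.tf - the module's inputs
variable "disk_name" {}
variable "location" {}
variable "disk_size_GB" {}
variable "resource_group_name" {}

# main.tf - the resources themselves (attribute names follow the
# azurerm provider; treat this as illustrative)
resource "azurerm_managed_disk" "data" {
  name                 = "${var.disk_name}"
  location             = "${var.location}"
  resource_group_name  = "${var.resource_group_name}"
  disk_size_gb         = "${var.disk_size_GB}"
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
}

# outputs.tf - values exposed to consumers of the module
output "disk_id" {
  value = "${azurerm_managed_disk.data.id}"
}
```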


As well as modules I have what I call “Entry Points”. These are essentially the glue that stitches together multiple modules to form the actual infrastructure that is required.

Similar to how we structure code repositories, all the juicy stuff is within an ‘src’ folder. ‘src’ then contains region-specific folders, as we have infrastructure deployed to various regions in the world. Regions could be managed in the same way environments are managed, via config, but having distinct entry points per region is more flexible should we need to modify region-specific infrastructure.

Within each region folder are folders that correspond to the entry points, which match the different release cadences documented above.

Within each entry point I have the same general files: the Terraform config that stitches the modules together, a terraform.tfvars file holding default variable values, and the remote state configuration.

Octopus Deploy

Disclaimer: I don’t work for Octopus Deploy and they’re not paying me to write this - I just found it handy for managing configuration files!


A very high level overview of Octopus Deploy, which can be skipped if you’re familiar with it: you give Octopus a NuGet package of deployable artifacts (an ‘Octopack’), define a deployment process as a series of steps, and then promote releases of that package through a series of environments, with variable values scoped per environment.

In the deployment pipeline Octopus sits alongside the CI server, in my case TeamCity. The CI server creates and uploads the Octopack and then creates a release whenever code is pushed to the master branch (and the build is successful, obviously!).

Why Octopus?

As well as the configuration management aspect mentioned above, we are a small team, so using Terraform in conjunction with Octopus allows for much greater transparency in what is happening. As all releases can be traced back to a Git commit/branch, it is easy to see how the infrastructure has changed over time, and exactly what infrastructure is in each environment. If in the future process dictates that only users above level X or in role Y can change the Production infrastructure, e.g. for compliance or financial reasons, then we can have the deployment process defined in Octopus and all that needs to happen is for X or Y to log in and hit the “Promote to Production” button.

Octopus makes managing variables across environments very easy, meaning per environment I can easily alter things like VM sizes, subnet ranges, credentials and remote state paths.

This means that I can have a single folder, per entry point, containing my Terraform config files rather than multiple environment specific folders per entry point.


A convention I’ve followed is for all Octopus variables to be named the same as those they are replacing in Terraform. For example, if I had the following terraform.tfvars file:

disk_name = "vm_data_disk"
location = "North Europe"
disk_size_GB = "128"

Then I would have those same three variables defined in Octopus: disk_name, location and disk_size_GB.

I wrote a simple Octopus step template to iterate through all vars in a tfvars file and replace the value with any found in Octopus. I’ll create a PR to the Octopus community step repository once I’m fully happy with it.
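My actual step template is an Octopus step, but the idea behind it can be sketched in plain shell (this is a hypothetical equivalent, not the template itself; the demo file and the exported variable simulate what Octopus supplies):

```shell
#!/bin/sh
# Sketch of the substitution step: for every `key = "value"` line in
# terraform.tfvars, replace the value with the same-named environment
# variable if one has been supplied (as Octopus would supply it).
set -eu

TFVARS=terraform.tfvars

# Demo input - in Octopus this file comes out of the extracted Octopack.
cat > "$TFVARS" <<'EOF'
disk_name = "vm_data_disk"
location = "North Europe"
disk_size_GB = "128"
EOF

# Simulate one environment-specific override from Octopus.
export location="West Europe"

while IFS='=' read -r key _; do
  key=$(echo "$key" | tr -d ' ')
  [ -z "$key" ] && continue
  # Only substitute when a same-named variable is set in the environment.
  value=$(printenv "$key" || true)
  if [ -n "$value" ]; then
    sed -i "s|^$key *=.*|$key = \"$value\"|" "$TFVARS"
  fi
done < "$TFVARS"

# The location line now reads: location = "West Europe"
cat "$TFVARS"
```

The untouched keys keep their defaults from the file, so an environment only has to define the variables that actually differ.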

Octopus has the concept of sensitive variables which is ideal for storing stuff like Azure credentials.

Storing the variables in Octopus, and outside of Git, means that non-developers can plug values in too, as it is done via a web UI. This can be handy for things like service account credentials or subnet ranges; members of other teams can specify the values themselves.


I’ve created an Octopus project per entry point and each follows the same general flow. When creating the Octopack NuGet package I include all of the configuration files (modules and entry points) from the Git repo. The process is then:

  1. Set credentials. Values are read from Octopus variables and supplied either as environment variables or by populating a provider resource.
  2. Download and extract Octopack. This leaves me with the folder structure sitting on disk which mimics the Git repository.
  3. Download Terraform executable. This is downloaded as a package to allow me to have different projects on different versions of Terraform (not required yet but gives flexibility moving forward).
  4. Update terraform.tfvars with any values from Octopus. At this point I will have variable values specific to the environment I am deploying to.
  5. Update the remote state configuration with any values from Octopus. This ensures that I’m using the correct remote state for the current environment.
  6. terraform init -backend=true -get=true -force-copy -lock=true -input=false. -input=false will cause a failure should any variables be missing (Octopus isn’t interactive!)
  7. terraform plan -out=project-name.plan. I save the plan for use in the later apply step.
  8. Manual review. This is to review what changes will be made. The job is paused until a user chooses to proceed or stop.
  9. terraform apply project-name.plan.
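For step 1, the “populating a provider resource” variant looks roughly like this (the variable names are my own; the values arrive as sensitive Octopus variables). The environment variable alternative is to set ARM_SUBSCRIPTION_ID, ARM_CLIENT_ID, ARM_CLIENT_SECRET and ARM_TENANT_ID instead, which the azurerm provider picks up automatically.

```hcl
# Step 1 sketch: each value is a Terraform variable whose content
# Octopus injects from a sensitive variable of the same name.
variable "subscription_id" {}
variable "client_id" {}
variable "client_secret" {}
variable "tenant_id" {}

provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
}
```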

Still Outstanding

I’m at a stage where this is all new infrastructure and it’s fairly well bedded in on Staging so next step is Production. However, in the future I will need to make updates and things will go wrong so I will need a way to leverage the rest of Terraform, namely taint and destroy.

To be honest this is where the grand plan may come tumbling down, I’ll need to get my thinking cap on…
