Creating a Terraform Custom Provider - Terraform Cloud Project Beginner Bootcamp
The first two weeks of the Terraform Cloud Project Bootcamp from ExamPro were spent getting our development environment in Gitpod up and running, as well as becoming familiar with the basic features of Terraform. We learnt how to create, update and delete resources via Terraform, and how the Terraform state can be managed either locally or remotely in Terraform Cloud.
As Terraform is a cloud-agnostic infrastructure as code (IaC) tool, it can interact with and manage resources in practically any cloud provider. The project created during this boot camp included AWS resources, so we of course used the AWS provider. We also experimented with the 'Random' provider, which can be used to create, for example, random strings. It does this using pure Terraform logic, without interacting with any external services.
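As a small illustration, a random string resource looks like this (a minimal example; the argument values are arbitrary):

```hcl
resource "random_string" "example" {
  length  = 16
  special = false
}

# The generated value is available as random_string.example.result,
# produced entirely by Terraform with no call to an external service.
```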
However, to further experience how Terraform can indeed be used for almost anything, we went ahead and created a provider of our own. Our custom provider is used to create resources on the 'TerraTowns Cloud' platform, which has been created for the purposes of this boot camp. Participants can create 'homes' in TerraTowns by using Terraform to create these resources.
To summarize, we are using three providers in total: the AWS provider, the Random provider, and our own custom TerraTowns provider.
Creating the Custom Provider
The development process of the custom provider can be summarized in five steps. We needed a mock server that could be used locally for testing, bash scripts for building and installing the provider, and Go code for the provider itself. These steps are visualized below and explained in further detail in the following paragraphs:
Step 1 - Mock Server
The mock server was created using Sinatra, which is a lightweight Ruby framework that can be used to build simple web servers. When running the mock server on localhost, we were able to make sure that our bash scripts were working as intended to perform CRUD (create, read, update, delete) operations on the mock server.
Step 2 - Skeleton for the Custom Terraform provider
Terraform providers are typically created using the Go (Golang) programming language so this was the chosen language for our provider as well.
The main.go file was first created with a simple 'hello world' skeleton, just to test that our setup was working and that we were able to create a provider. To test the functionality, we also needed a bash script that builds the custom provider. Go is a compiled language, so the source code is not run directly: it is compiled into a binary, and that binary is then executed. Running the bash script essentially produces the binary file that is then used as a provider.
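The initial skeleton looked roughly like this (a sketch of the 'hello world' stage, before any provider logic existed):

```go
// main.go - the first skeleton: nothing provider-specific yet,
// just proof that we can compile and run a Go binary.
package main

import "fmt"

// greeting returns the message printed by the skeleton.
func greeting() string {
	return "Hello, world!"
}

func main() {
	fmt.Println(greeting())
}
```

The bash script then essentially runs something like `go build -o terraform-provider-terratowns_v1.0.0` and copies the resulting binary into the local plugin directory where Terraform can discover it. Later, the main function is replaced with a call into HashiCorp's Terraform plugin SDK (`plugin.Serve`), which handles the plugin protocol between Terraform and the binary.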
Step 3 - Connecting the Custom Provider and Mock Server
At this point, we knew that we were able to use a bash script to take the Go code, compile it and generate the binary executable that can be used as a Terraform custom provider. We also knew that the mock server was functional and CRUD operations should work. The next step was to connect these two things and make sure that the custom provider was able to call the endpoints on the mock server. To test this, we added Terraform configuration that utilizes a custom Terraform provider named "terratowns":
terraform {
  required_providers {
    terratowns = {
      source  = "local.providers/local/terratowns"
      version = "1.0.0"
    }
  }
}

provider "terratowns" {
  endpoint  = "http://localhost:4567"
  user_uuid = "e328f4ab-b99f-421c-84c9-4ccea042c7d1"
  token     = "9b49b3fb-b8e9-483c-b703-97ba88eef8e0"
}
Running terraform apply now instructs Terraform to utilize the 'terratowns' custom provider. The configuration specifies that the custom provider can be found in a local file, which is the binary compiled from the Go code. This custom provider then interacts with the CRUD endpoints on the mock server.
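For Terraform to resolve the local.providers/local/terratowns source address to that binary, the binary has to sit where Terraform's local plugin discovery looks for it. One way to express this is a CLI configuration file with a filesystem mirror (a sketch; the exact path depends on how the bash script installs the binary):

```hcl
# ~/.terraformrc (sketch - the mirror path is an assumption)
provider_installation {
  filesystem_mirror {
    path    = "/home/gitpod/.terraform.d/plugins"
    include = ["local.providers/local/terratowns"]
  }
  direct {
    exclude = ["local.providers/local/terratowns"]
  }
}
```

Under that mirror path, Terraform expects the layout local.providers/local/terratowns/1.0.0/&lt;os_arch&gt;/terraform-provider-terratowns_v1.0.0, which is the directory structure the bash script has to create.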
Step 4 - Testing the Production Server
Now that everything was working as intended locally, it was time to make sure that we could interact with the production server. In order for this to work, the endpoint in main.tf had to be changed, and we also had to make sure that we had access to TerraTowns Cloud with a valid user_uuid and access token. This test was successful, and our TerraTowns resource was created on the production server.
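A hedged sketch of what the updated configuration can look like, with the endpoint and credentials moved into variables (the variable names and the production URL shown are illustrative assumptions):

```hcl
variable "terratowns_endpoint" {
  type    = string
  default = "https://terratowns.cloud/api" # illustrative production endpoint
}

variable "terratowns_user_uuid" {
  type      = string
  sensitive = true
}

variable "terratowns_access_token" {
  type      = string
  sensitive = true
}

provider "terratowns" {
  endpoint  = var.terratowns_endpoint
  user_uuid = var.terratowns_user_uuid
  token     = var.terratowns_access_token
}
```

Marking the credentials as sensitive keeps them out of plan output, and they can be supplied at runtime via TF_VAR_-prefixed environment variables instead of being hard-coded.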
Step 5 - Creating the whole infrastructure
As a final step, after the custom provider was working, we included the Terraform configuration for the AWS infrastructure as well. We could now run terraform apply, which would use two different providers to create resources on two different cloud platforms. By running just this one command, we created a 'TerraTowns home' resource on TerraTowns Cloud, as well as an S3 bucket and a CloudFront distribution on AWS. Furthermore, the domain URL of the CloudFront distribution was referenced by the 'TerraTowns home' resource, which meant that we could provide a direct link from the deployed resource on TerraTowns Cloud to the static website that was deployed on AWS.
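That cross-provider reference might be sketched like this (the terratowns_home resource type and its attribute names are assumptions based on the description above; the real schema is defined in the Go provider code):

```hcl
# Illustrative only: resource type and attribute names are assumptions.
resource "terratowns_home" "home" {
  name        = "My static website"
  description = "A home pointing at the site hosted on AWS"

  # Reference the CloudFront distribution created by the AWS provider;
  # this is what links the TerraTowns home to the AWS-hosted site.
  domain_name = aws_cloudfront_distribution.s3_distribution.domain_name
}
```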
The code for this project can be found in my repository, which you can access here.