Build your own Cloud Infrastructure using Terraform

Reading Time: 5 minutes

Infrastructure as Code (IaC) is the process of managing and provisioning IT infrastructure through machine-readable definition files. It means writing code to define your infrastructure, in the same manner you would write code to define your application software. Nowadays, most modern IT companies use IaC tools to build and manage their cloud computing infrastructure.

Traditionally, IaC was accomplished with configuration management tools like Chef, Puppet and Ansible. Tools like these offer great customization, but they focus more on application delivery than on infrastructure provisioning.

What is Terraform?

Terraform is an infrastructure provisioning tool. Being a provisioning tool means that Terraform can deploy your entire infrastructure stack, not just new versions of your application (which is what configuration management tools primarily do). By writing configuration files, you can have Terraform deploy to just about any cloud or combination of clouds: private, public, hybrid, and multi-cloud.

Benefits of using Terraform

Terraform isn’t the only Infrastructure as Code technology; there are plenty of other tools that do similar things. Moreover, Terraform was created and initially released in 2014 by a relatively small tech company called HashiCorp, and doesn’t even have a 1.0 version out yet. So, how does Terraform compete in the market? The answer, of course, is that Terraform provides a unique set of advantages. These competitive advantages stem from six key characteristics:

  • Provisioning tool: Deploy infrastructure, not just applications
  • Easy to use: For all Software Engineers, not just DevOps
  • Free and Open Source: Who doesn’t like free?
  • Declarative: Say what you want, not how to do it
  • Cloud agnostic: Deploy to any cloud
  • Extendable: You aren’t limited by the language

The very beginning

Now that we have seen what Terraform is and the advantages of using it, let’s see how simple it is to start using it.

Terraform code is written in HCL (HashiCorp Configuration Language) in files with the “.tf” extension, where your goal is to describe the infrastructure you want.

There are numerous providers supported by Terraform including the most popular: AWS, Google Cloud, Azure, Heroku and many more.

A typical provider configuration would look something like:

provider "aws" {
  region = "eu-central-1"
}

This means that you use AWS as your cloud computing platform, and eu-central-1 defines the region that will host your resources.

The main purpose of the Terraform language is declaring resources, which represent infrastructure objects. Each provider exposes different resource types corresponding to its real infrastructure objects. This is the simplest representation of an AWS EC2 instance (a resource with required attributes only) in Terraform:

resource "aws_instance" "example" {
  ami           = "ami-d74be5b8"
  instance_type = "t2.micro"
}

If you combine the two code blocks above into one main.tf file, you already have a small Terraform project that is ready to launch a single EC2 instance from your local machine. You only need the following prerequisites:

  • Terraform installed
  • An AWS account
  • The AWS CLI installed
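
With those in place, the complete main.tf is simply the two blocks shown earlier combined (the AMI ID is the example one from above; any AMI valid in your region works):

```hcl
# main.tf — a minimal single-instance Terraform project

provider "aws" {
  region = "eu-central-1"
}

resource "aws_instance" "example" {
  ami           = "ami-d74be5b8"
  instance_type = "t2.micro"
}
```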

After meeting the prerequisites, configure the AWS CLI locally by executing the following command:

$ aws configure

Follow the prompts to input your AWS Access Key ID and Secret Access Key, which you can create in the AWS IAM console.

Now you are ready to execute your first Terraform command. First, navigate to the directory where you created your main.tf file and initialize it with:

$ terraform init

During initialization, Terraform downloads the AWS provider and installs it in a hidden .terraform subdirectory of the current working directory. The output shows which version of the plugin was installed.

After successful initialization, you can preview the set of actions that Terraform has prepared to execute for you:

$ terraform plan

To apply the plan and provision the infrastructure:

$ terraform apply

Now you have your EC2 instance up and running. You can see it in the EC2 section of the AWS Management Console.

If you open the newly created instance, you can see that public and private IPv4 addresses have already been assigned for you. These IPs let the instance communicate inside and outside the cloud. You can also see that it is a Red Hat instance by default, along with many other details. You can customize all of these by adding the proper arguments to your aws_instance resource.
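
For example, a lightly customized version of the same resource could look like this; the tag values and the monitoring flag are illustrative assumptions, not required settings:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-d74be5b8"
  instance_type = "t2.micro"

  # enable detailed CloudWatch monitoring (optional)
  monitoring = true

  # tags appear as instance metadata in the AWS console
  tags = {
    Name        = "terraform-example" # hypothetical instance name
    Environment = "dev"               # hypothetical environment label
  }
}
```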

At the end of your work, you can destroy your instance, or your whole infrastructure, by executing one command:

$ terraform destroy

Summary

Even though I explained Terraform in a very simple way, it can become very complex as you go further and try to build more elaborate cloud infrastructure. I recommend going through the documentation before you try to develop anything. Once you learn how to build your infrastructure on one cloud platform, it is much easier to do the same on other platforms as well.

Deploying a Vue.js app on the Google Cloud Platform using GitLab AutoDeploy

Reading Time: 4 minutes

For a few weeks now, we have been working on several internal projects. We are currently developing different products and services, which we want to release soon™️. We started from scratch, so we had the freedom to choose our tools, technologies and frameworks. We decided to deploy our application on a Kubernetes cluster on Google Cloud. Here is a short how-to for automating the deployment process.

Getting started

First, we need an account on Google Cloud. When you register for the first time, you get access to the clusters and $300 in free credit.

  • Google Cloud account is required
  • Node.js (v10.x)
  • npm (v5.6.0)
  • Docker
  • Git & GitLab

We are using the GitLab AutoDeploy, Google Cloud, Vue.js and Docker to build this CI/CD.

Creating The Vue App

# let's create our workspace
mkdir vue-ci-app
cd vue-ci-app/

# install vue
npm install @vue/cli -g

# create the vue-app (select default settings)
vue create vue-app
cd vue-app/

# let's test out the app locally
npm run serve

We first create a folder and enter it, then we use npm to install the Vue command-line interface globally and use it to create a bootstrapped Vue app. The app should be accessible at http://localhost:8080/.

Docker Config

FROM node:lts-alpine

# install simple http server for serving static content
RUN npm install -g http-server

# make the 'app' folder the current working directory
WORKDIR /app

# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./

# install project dependencies
RUN npm install

# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .

# build app for production with minification
RUN npm run build

EXPOSE 5000
CMD [ "http-server", "dist", "-p", "5000" ]

  • FROM pulls the Node.js LTS Alpine image from the public Docker registry (Docker Hub)
  • Then we install http-server, a simple static file server
  • Afterwards, we make a directory where we will place the app
  • We copy package.json (and package-lock.json) from our local machine into the image
  • After installing the dependencies and copying the project files, we run a production build, whose dist output we serve using http-server
  • This is all done in a Docker container
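
With this Dockerfile next to the app, building and testing the image locally would look something like the following; the image name vue-app is our choice, not anything required:

```shell
$ docker build -t vue-app .
$ docker run -p 5000:5000 vue-app
```

The -p 5000:5000 flag maps the container port exposed above to the same port on the host, so the built app should be reachable at http://localhost:5000/.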

GitLab & Kubernetes

The last part of the deployment begins with setting up a Kubernetes cluster and enabling GitLab Autodeploy.

First, we need to go to our Project > Settings > CI/CD > Auto DevOps and enable the default pipeline. This is the “auto” part that removes the need for a .gitlab-ci.yml file.
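
If you later want more control than the toggle offers, GitLab also lets you pull in the same pipeline explicitly by including its template from a .gitlab-ci.yml file; this is a standard GitLab template include, shown here only as an alternative to the UI setting:

```yaml
# .gitlab-ci.yml — equivalent to enabling Auto DevOps in the project settings
include:
  - template: Auto-DevOps.gitlab-ci.yml
```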

Then we need to add a cluster which means going to our GC account and setting up a Kubernetes cluster. We need to specify a name, environment scope, type of project, region, number of nodes, machine type, and whether it’s an RBAC-enabled cluster.

Then we need to go to GitLab, to the CI/CD page, and add a GitLab Runner; it needs to be configured to run Docker.

We need to set a base domain, and finally add our created Kubernetes cluster to the GitLab Autodeploy.

Once everything is set up, we have three jobs: build, where the app is built on the remote Kubernetes cluster; review, where we can add linting and tests; and cleanup, a manual job that deletes the deployment from the pipeline so the commit is ready to be deployed again.