How we deploy with Terraform and BitBucket to Azure Kubernetes

Reading Time: 6 minutes

N47 implemented a set of back-office web applications for Prestige, a real estate management company located in Zurich, Switzerland. One application is a tool for displaying construction projects near properties managed by Prestige; a second is a tool for creating and assigning orders to craftsmen. The following examples, however, aren’t specific to those use cases.

Screenshot of the Construction Project tool.

An Overview

The project entails one frontend application and multiple microservices, where each service has its own database schema.

The application consumes data from Prestige’s main ERP system Abacus and third-party applications.

N47 is responsible for setting up and maintaining the full Kubernetes stack, MySQL Database, Azure Application Gateway and Azure Active Directory applications.

Another company is responsible for the networking and the Abacus part.

Architectural Overview

Involved Technologies

Our application uses the following technologies:

  • Database: MySQL 8
  • Microservices: Java 11, Spring Boot 2.3, Flyway for database schema updates
  • Frontend: Vue.js 2.5 and Vuetify 2.3
  • API Gateway: nginx

The CI/CD technology stack includes:

  • Source code: BitBucket (Git)
  • Pipelines: BitBucket Pipelines
  • Static code analysis: SonarCloud
  • Infrastructure: Terraform
  • Cloud provider: Azure

We’ll focus on the second list of technologies.

Infrastructure as Code (IaC) with Terraform and BitBucket Pipelines

One thing I really like when using IaC is having the definition of the involved services and resources of the whole project in source code. That enables us to track the changes over time in the Git log and of course, it makes it far easier to set up a stage and deploy safely to production.

Please read more about Terraform in our blog post Build your own Cloud Infrastructure using Terraform. The Terraform website is, of course, also a good resource.

Storage of Terraform State

One important thing when dealing with Terraform is storing the state in an appropriate place. We’ve chosen to create an Azure Storage Account and use Azure Blob Storage like this:

terraform {
  backend "azurerm" {
    storage_account_name = "prestigetoolsterraform"
    container_name       = "prestige-tools-dev-tfstate"
    key                  = "prestige-tools-dev"
  }
}

The required access_key is passed as an argument to terraform within the pipeline (more on this later). You can find further details in the official Microsoft tutorial Store Terraform state in Azure Storage.

Another important point is not to run pipelines in parallel, as this could result in conflicts with Terraform state locks.
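The Azure Blob Storage backend locks the state via blob leases during a run. If a cancelled pipeline ever leaves a stale lock behind, it can be released manually with a standard Terraform command (use it with care, and only when no other run is active):

terraform force-unlock <LOCK_ID>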

Used Terraform Resources

We provision the needed resources on Azure via BitBucket + Terraform. A selection of important resources:

  • Azure Kubernetes Service (AKS) cluster
  • Azure Database for MySQL
  • Azure Application Gateway
  • Azure Active Directory applications
  • Azure Application Insights

Structure of Terraform Project

We created an entry point for each stage (local, dev, test and prod). Each entry point is relatively small and mainly aggregates the modules with some environment-specific configuration.

The configurations, credentials and other data are stored as variables in the BitBucket pipelines.

/environments
  /local
  /dev
  /test
  /prod
/modules
  /azure_active_directory
  /azure_application_gateway
  /azure_application_insights
    /_variables.tf
    /_output.tf
    /main.tf
  /azure_mysql
  /azure_kubernetes_cluster
  /...

Each module always consists of the files _variables.tf, main.tf and _output.tf, giving a clean separation of input, logic and output.


Example source code of the azure_application_insights module (please note that some of the text has been shortened in order to have enough space to display it properly):

_variables.tf

variable "name" {
  type = string
}

variable "location" {
  type = string
}

variable "resource_group_name" {
  type = string
}

main.tf

resource "azurerm_application_insights" "ai" {
  name                = var.name
  location            = var.location
  resource_group_name = var.resource_group_name
  application_type    = "web"
}

_output.tf

output "instrumentation_key" {
  value = azurerm_application_insights.ai.instrumentation_key
}
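To show how the pieces fit together, here is a minimal sketch of how such a module could be called from environments/dev/main.tf (the names and values here are illustrative, not our actual configuration):

module "application_insights" {
  source              = "../../modules/azure_application_insights"
  name                = "prestige-tools-dev-ai"
  location            = "switzerlandnorth"
  resource_group_name = "prestige-tools-dev"
}

The value declared in _output.tf is then available as module.application_insights.instrumentation_key and can be passed on to other modules.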

BitBucket Pipeline

The BitBucket pipeline controls Terraform and includes the init, plan and apply steps. We decided to trigger the apply step manually, at least in the beginning.

image: hashicorp/terraform:0.12.26

pipelines:
  default:
    - step:
        name: Plan DEV
        script:
          - cd environments/dev
          - terraform init -backend-config="access_key=$DEV_TF_CONFIG_ACCESS_KEY"
          - terraform plan -out out-overall.plan
        artifacts:
          - environments/dev/out-overall.plan

  branches:
    develop:
      - step:
          name: Plan DEV
          script:
            - cd environments/dev
            - terraform init -backend-config="access_key=$DEV_TF_CONFIG_ACCESS_KEY"
            - terraform plan -out out-overall.plan
          artifacts:
            - environments/dev/out-overall.plan
            - environments/dev/.terraform/**
      - step:
          name: Apply DEV
          trigger: manual
          deployment: dev
          script:
            - cd environments/dev
            - terraform apply out-overall.plan

    master:
      # PRESTIGE TEST
      - step:
          name: Plan TEST
          script:
            - cd environments/test
            - terraform init -backend-config="access_key=$PRESTIGE_TF_CONFIG_ACCESS_KEY"
            - terraform plan -out out-overall.plan
          artifacts:
            - environments/test/out-overall.plan
            - environments/test/.terraform/**
      - step:
          name: Apply TEST
          trigger: manual
          deployment: test
          script:
            - cd environments/test
            - terraform apply out-overall.plan

      # PRESTIGE PROD ...

Needed Steps for Deploying to Production

1. Create a feature branch with some changes

2. Push to Git (the BitBucket pipeline with the step Plan DEV will run). All the details about the changes can be found in the output of the terraform plan command (see the example summary after this list)

3. Create a pull request and merge the feature branch into develop. This starts another pipeline with the two steps (plan + apply)

4. Check the output of the plan step before triggering the deployment on dev

5. Now the dev stage is updated and, if everything works as you wish, create another pull request to merge develop into master, then repeat the same steps for production or other stages
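For illustration, the end of a terraform plan run summarizes the pending changes roughly like this:

Plan: 2 to add, 1 to change, 0 to destroy.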

We have just deployed an infrastructure change to production without logging into any system except BitBucket. Time for celebration.

Symbol picture of N47 production deployment party (from Unsplash)

Is Really Everything That Shiny?

Well, everything is a big word.

We found issues, for example with cross-module dependencies, which can’t simply be solved with a depends_on (module-level depends_on only arrived in Terraform 0.13). Luckily, there are some alternatives:

network module:

output "id" {
  description = "The Azure assigned ID generated after the Virtual Network resource is created and available."
  value = azurerm_virtual_network.virtual_network.id
}

kubernetes cluster module, which depends on network:

variable "subnet_depends_on" {
  description = "Variable to force module to wait for the Virtual Network creation to finish"
}

and the usage of those two modules in environments/dev/main.tf

module "network" {
  source = "../../modules/azure_network"
}

module "kubernetes_cluster" {
  source = "../../modules/azure_kubernetes_cluster"
  subnet_depends_on = module.network.id
}
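The trick works because Terraform derives its dependency graph from value references: as soon as the cluster module uses var.subnet_depends_on in any expression, its resources wait for the network module. A minimal sketch of how the variable could be consumed inside the module (illustrative, with the required arguments shortened):

resource "azurerm_kubernetes_cluster" "cluster" {
  # ... required arguments shortened ...

  tags = {
    # Referencing the variable here forces Terraform to create the
    # Virtual Network before this cluster.
    network_id = var.subnet_depends_on
  }
}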

Once things are set up, it’s a real joy to wipe out a stage and provision everything from scratch just by running a BitBucket pipeline.


Deploying a Vue.js app on the Google Cloud Platform using GitLab AutoDeploy

Reading Time: 4 minutes

For a few weeks now, we have been working on several internal projects. We are currently developing different products and services, which we want to release soon™️. We started from scratch, so we had the freedom to choose our own tools, technologies and frameworks. We decided to deploy our application on a Kubernetes cluster on Google Cloud. Here is a short how-to for automating the deployment process.

Getting started

First, we need an account on Google Cloud. When you register for the first time, you get access to the clusters and $300 in credit.

  • Google Cloud account is required
  • Node.js (v10.x)
  • npm (v5.6.0)
  • Docker
  • Git & GitLab

We are using GitLab AutoDeploy, Google Cloud, Vue.js and Docker to build this CI/CD pipeline.

Creating The Vue App

# let's create our workspace
mkdir vue-ci-app
cd vue-ci-app/

# install vue
npm install @vue/cli -g

# create the vue-app (select default settings)
vue create vue-app
cd vue-app/

# let's test out the app locally
npm run serve

We first create a folder and enter it, then use npm to install the Vue command-line interface globally and use it to create a bootstrapped Vue app. The app should be accessible at http://localhost:8080/

Docker Config

FROM node:lts-alpine

# install simple http server for serving static content
RUN npm install -g http-server

# make the 'app' folder the current working directory
WORKDIR /app

# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./

# install project dependencies
RUN npm install

# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .

# build app for production with minification
RUN npm run build

EXPOSE 5000
CMD [ "http-server", "-p 5000", "dist" ]
  • From pulls the latest node from the public docker registry (Docker Hub)
  • Then we install http-server, a simple static serve
  • Afterwards, we make a directory where we will place the app
  • Copy our package.json local machine to the docker instance
  • After installing the dependencies and copying the dist of the app, we run a build which we serve using http-server
  • This is all done in a docker container
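Building and running the image locally is a quick way to verify the Dockerfile before wiring up the pipeline (the image name vue-ci-app is arbitrary):

# build the image and run it locally
docker build -t vue-ci-app .
docker run --rm -p 5000:5000 vue-ci-app

The app should then be served at http://localhost:5000/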

GitLab & Kubernetes

The last part of the deployment begins with setting up a Kubernetes cluster and enabling GitLab Autodeploy.

First, we need to go to our Project > Settings > CI/CD > Auto DevOps and enable the default pipeline. This is the auto part that removes the need for a .gitlab-ci.yml file.

Then we need to add a cluster, which means going to our Google Cloud account and setting up a Kubernetes cluster. We need to specify a name, environment scope, type of project, region, number of nodes, machine type, and whether it’s an RBAC-enabled cluster.
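The same cluster can also be created from the command line. A sketch with the gcloud CLI, where the cluster name, zone, node count and machine type are all assumptions:

gcloud container clusters create vue-ci-cluster \
  --zone europe-west1-b \
  --num-nodes 2 \
  --machine-type n1-standard-2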

Next, we need to go to the CI/CD page in GitLab and add a GitLab Runner; it needs to be configured to run Docker.

We need to set a base domain and, finally, add our newly created Kubernetes cluster to GitLab AutoDeploy.

Once everything is set up, we have three jobs: build, which builds the app on the remote Kubernetes cluster; review, where we can add linting and tests; and cleanup, a manual job that removes the deployed commit from the pipeline so it is ready to be deployed again.
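If the default pipeline later needs project-specific jobs, Auto DevOps can be extended rather than replaced. A minimal .gitlab-ci.yml sketch that keeps the Auto DevOps behaviour and adds a lint job (the job name and scripts are assumptions):

include:
  - template: Auto-DevOps.gitlab-ci.yml

lint:
  stage: test
  script:
    - npm ci
    - npm run lint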

Deploy Spring Boot Application on Google Cloud with GitLab

Reading Time: 5 minutes

A lot of developers experience a painful process when their code is being deployed to an environment. We, as a company, suffered the same, so we wanted to create something to make our lives easier.

After internal discussions, we decided to set up a fully automated CI/CD process. We investigated and decided to implement GitLab CI/CD with deployment to Google Cloud.

Further in this blog, you can see how we achieved that and how you can achieve the same.

Let’s start with setting up. 🙂

  • First, we create a simple Spring Boot application with a REST controller for testing purposes:
package com.northlabs.gitlabdemo.rest;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RootController {

    @GetMapping(value = "/")
    public String root() {
        return "Hello from Root";
    }

    @GetMapping(value = "/demo")
    public String demo() {
        return "Hello from Demo";
    }

}
  • The next step is to push the application to our GitLab repo:
  1. cd path_to_root_folder
  2. git init
  3. git remote add origin https://gitlab.com/47northlabs/47northlabs/product/playground/gitlab-demo.git
  4. git add .
  5. git commit -m "Initial commit"
  6. git push -u origin master

Now that we have our application in a GitLab repository, we can go on to set up Google Cloud. But before you start, be sure that you have a G Suite account with billing enabled.

  • The first step is to create a new project: in my case it is northlabs-gitlab-demo.

Create project: northlabs-gitlab-demo
  • Now, let’s create our Kubernetes Cluster.

It will take some time for the Kubernetes cluster to initialize before GitLab is able to use it.
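For reference, the console steps roughly correspond to these gcloud commands (the zone and node count are assumptions):

gcloud config set project northlabs-gitlab-demo
gcloud container clusters create gitlab-demo --zone europe-west1-b --num-nodes 3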

We are done with Google Cloud, so it’s time to set up Kubernetes in our GitLab repository.

  • First, we add a Kubernetes cluster.
Add Kubernetes Cluster
Sign in with Google
  • Next, we give a name to the cluster and select a project from our Google Cloud account: in my case it’s gitlab-demo.
  • The base domain name should be set up.
  • Installing Helm Tiller is required, and installing other applications is optional (I installed Ingress, Cert-Manager, Prometheus, and GitLab Runner).

Install Helm Tiller

Installed Ingress, Cert-Manager, Prometheus, and GitLab Runner

After installing the applications, it’s IMPORTANT to update your DNS settings: copy the Ingress IP address and add it to your DNS configuration.
In my case, it looks like this:

Configure DNS
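One way to look up the Ingress IP address to copy is to query the cluster directly (the exact service name depends on the GitLab-managed installation):

# list services and look for the Ingress controller's external IP
kubectl get svc --all-namespaces | grep -i ingress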

We are almost done. 🙂

  • The last thing that should be done is to enable Auto DevOps.
  • And to set up Auto DevOps.

Now take your coffee and watch your pipelines. 🙂
After a couple of minutes, your pipeline will finish and look like this:

Now open the production pipeline and, in the log under the notes section, check the URL of the application. In my case that is:

Application should be accessible at: http://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com

Open the URL in a browser or Postman.

https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com
https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com/demo
  • Let’s edit our code and push it to the GitLab repository:
package com.northlabs.gitlabdemo.rest;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RootController {

    @GetMapping(value = "/")
    public String root() {
        return "Hello from Root v1";
    }

    @GetMapping(value = "/demo")
    public String demo() {
        return "Hello from Demo v1";
    }
}

After the job is finished, if you check the same URL, you will see that the values have changed.


https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com

https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com/demo

And we are done!!!

This is just a basic implementation of the GitLab Auto DevOps. In some of the next blogs, we will show how to customize your pipeline, and how to add, remove or edit jobs.