What is CI? Continuous Integration Explained

Reading Time: 5 minutes

Continuous Integration (CI) is a software development practice that requires team members to integrate their code changes into a central repository (the master branch) frequently, preferably several times a day.

Each merge is then verified by automatically generating a build and running automated tests against that build.

By integrating regularly, you can detect errors quickly and locate and fix them more easily.

Why is Continuous Integration Needed?

In the days before Continuous Integration, developers from a single team might have worked in isolation for long periods of time and merged their code changes only when they finished working on a particular feature or bug fix.

This caused the well-known merge hell (integration hell): a lot of code conflicts, newly introduced bugs, lots of time invested in analysis, as well as frustrated developers and project managers.

All these ingredients made it harder to deliver updates and value to the customers on time.

How does Continuous Integration Work?

Continuous Integration as a software development practice entails two components: a cultural one and an automation one.

The cultural component focuses on the principle of frequently integrating your code changes into the mainline of the central repository, using a version control system such as Git, Mercurial or Subversion.

By applying the cultural component you will drastically lower the frustration and time wasted on merging code, because, in reality, you are merging small changes all the time.

In fact, you can practice Continuous Integration using only this principle, but by adding the automation component to your CI process you can exploit the full potential of Continuous Integration.

Continuous Integration Image

As shown in the picture above, this includes a CI server that will generate builds automatically, run automated tests against those builds and notify (or alert) the team members of the results.

By leveraging the automation component you will immediately be aware of any errors, which allows the team to fix them fast without spending too much time on analysis.

There are plenty of CI tools to choose from, but the most common are Jenkins, CircleCI, GitHub Actions and Bitbucket Pipelines.

Continuous Integration Best Practices and Benefits

Everyone should commit to the mainline daily

By committing and integrating frequently, developers let other developers know about the changes they have made, so a form of passive communication is maintained.

Other benefits that come with developers integrating multiple times a day:

  • integration hell is drastically reduced
  • conflicts are easily resolved as not much has changed in the meantime
  • errors are quickly detected

The builds should be automated and fast

Given that several integrations will be done daily, automating the CI Pipeline is crucial to improving developer productivity, as it leads to less manual work and faster detection of errors and bugs.

Another important aspect of the automated build is optimising its execution speed and making it as fast as possible, as this enables faster feedback and leads to more satisfied developers and customers.
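To make this more concrete, here is a minimal sketch of a Bitbucket Pipelines configuration (one of the CI tools mentioned earlier) that builds and tests a Maven project on every push and caches dependencies to keep the build fast. The Docker image and the build command are assumptions for a generic Java project, not taken from any particular repository.

image: maven:3.8-openjdk-11

pipelines:
  default:
    - step:
        name: Build and test
        caches:
          - maven        # re-use downloaded dependencies between builds to keep the pipeline fast
        script:
          - mvn -B verify   # compile and run unit/integration tests in non-interactive mode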

Everyone should know what’s happening with the system

Given Continuous Integration is all about communication, a good practice is to inform each team member of the status of the repository.

In other words, whenever a merge is made, thus a build is triggered, each team member should be notified of the merge as well as the results of the build itself.

To notify all team members or stakeholders, use your imagination: email is the most common channel, but you can also leverage SMS or integrate your CI server with communication platforms like Slack, Microsoft Teams or Webex.
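As an illustration, a CI job on GitHub Actions could post the outcome of every build to a Slack channel through an incoming webhook. This is only a sketch: the secret name SLACK_WEBHOOK_URL and the message format are assumptions, and the step would sit at the end of an existing build job.

    - name: Notify team on Slack
      if: always()   # send the notification whether the build succeeded or failed
      env:
        SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
      run: |
        curl -X POST -H 'Content-type: application/json' \
          --data "{\"text\":\"Build ${{ github.sha }} finished with status: ${{ job.status }}\"}" \
          "$SLACK_WEBHOOK_URL"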

Test Driven Development

Test Driven Development (TDD) is a software development approach relying on the principle of writing tests before writing the actual code. In general, what TDD offers is improved test coverage and an even better understanding of the system requirements.

But put the two together, Continuous Integration and TDD, and you will get a lot more trust and comfort in the CI Pipelines, as every new feature or bug fix will be shipped with even better test coverage.

Test Driven Development also inspires a cultural change in the team and even the whole organisation, by motivating developers to write even better and more robust test cases.

Pull requests and code review

A large portion of software development teams nowadays practise a pull request and code review workflow.

A pull request is typically created whenever a developer is ready to merge new code changes into the mainline, which makes the pull request a perfect trigger for the CI Pipeline.
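For example, Bitbucket Pipelines can run a dedicated pipeline for every pull request, whatever the source branch. The build command below is an assumption for a Maven project; the snippet is only meant to show where the pull request trigger fits.

pipelines:
  pull-requests:
    '**':              # run for pull requests from any source branch
      - step:
          name: Build and test the pull request
          script:
            - mvn -B verify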

Usually, additional manual approval is required after a successful build: other developers review the new code, make suggestions and approve or reject the pull request. This final step brings additional value such as knowledge sharing and an additional layer of communication between team members.

Summary

Building software solutions in a multi-developer team is as complex as it was five, ten or even twenty years ago if you are not using the right tools and exercising the right practices and principles, and Continuous Integration is definitely one of them.


I hope you enjoyed this article and you are not leaving empty-handed.
Feel free to leave a comment.


How we deploy with Terraform and BitBucket to Azure Kubernetes

Reading Time: 6 minutes

N47 implemented a set of back-office web applications for Prestige, a real estate management company located in Zurich, Switzerland. One application is a tool for displaying construction projects near properties managed by Prestige; a second example is a tool for creating and assigning orders to craftsmen. The following examples, however, aren't specific to those use cases.

Screenshot of the Construction Project tool.

An Overview

The project entails one frontend application with multiple microservices whereby each service has its own database schema.

The application consumes data from Prestige’s main ERP system Abacus and third-party applications.

N47 is responsible for setting up and maintaining the full Kubernetes stack, MySQL Database, Azure Application Gateway and Azure Active Directory applications.

Another company is responsible for the networking and the Abacus part.

Architectural Overview

Involved Technologies

Our application uses the following technologies:

  • Database: MySQL 8
  • Microservices: Java 11, Spring Boot 2.3, Flyway for database schema updates
  • Frontend: Vue.js 2.5 and Vuetify 2.3
  • API Gateway: nginx

The CI/CD technology stack includes:

  • Source code: BitBucket (Git)
  • Pipelines: BitBucket Pipelines
  • Static code analysis: SonarCloud
  • Infrastructure: Terraform
  • Cloud provider: Azure

We’ll focus on the second list of technologies.

Infrastructure as Code (IaC) with Terraform and BitBucket Pipelines

One thing I really like when using IaC is having the definition of the involved services and resources of the whole project in source code. That enables us to track the changes over time in the Git log and of course, it makes it far easier to set up a stage and deploy safely to production.

A blog post about some Terraform basics will follow soon. In the meanwhile, you can find some introduction on the official Terraform website.

Storage of Terraform State

One important thing when dealing with Terraform is storing the state in an appropriate place. We’ve chosen to create an Azure Storage Account and use Azure Blob Storage like this:

terraform {
  backend "azurerm" {
    storage_account_name = "prestigetoolsterraform"
    container_name       = "prestige-tools-dev-tfstate"
    key                  = "prestige-tools-dev"
  }
}

The required access_key is passed as an argument to terraform within the pipeline (more on this later). You can find further details in the official Microsoft tutorial Store Terraform state in Azure Storage.

Another important point is not to run pipelines in parallel, as this could result in conflicts with locks.

Used Terraform Resources

We provide the needed resources on Azure via BitBucket + Terraform. Selection of important resources:

Structure of Terraform Project

We created an entry point for each stage (local, dev, test and prod), which is relatively small and mainly aggregates the modules with some environment-specific configurations.

The configurations, credentials and other data are stored as variables in the BitBucket pipelines.

/environments
  /local
  /dev
  /test
  /prod
/modules
  /azure_active_directory
  /azure_application_gateway
  /azure_aplication_insights
    /_variables.tf
    /_output.tf
    /main.tf
  /azure_mysql
  /azure_kubernetes_cluster
  /...

The modules themselves always have the files _variables.tf, main.tf and _output.tf to keep a clean separation of input, logic and output.


Example source code of the azure_aplication_insights module (please note that some of the text has been shortened in order to display it properly):

_variables.tf

variable "name" {
  type = string
}

variable "location" {
  type = string
}

variable "resource_group_name" {
  type = string
}

main.tf

resource "azurerm_application_insights" "ai" {
  name                = var.name
  location            = var.location
  resource_group_name = var.resource_group_name
  application_type    = "web"
}

_output.tf

output "instrumentation_key" {
  value = azurerm_application_insights.ai.instrumentation_key
}

BitBucket Pipeline

The BitBucket pipeline controls Terraform and includes the init, plan and apply steps. We decided to trigger the apply step for infrastructure changes manually in the beginning.

image: hashicorp/terraform:0.12.26

pipelines:
  default:
    - step:
        name: Plan DEV
        script:
          - cd environments/dev
          - terraform init -backend-config="access_key=$DEV_TF_CONFIG_ACCESS_KEY"
          - terraform plan -out out-overall.plan
        artifacts:
          - environments/dev/out-overall.plan

  branches:
    develop:
      - step:
          name: Plan DEV
          script:
            - cd environments/dev
            - terraform init -backend-config="access_key=$DEV_TF_CONFIG_ACCESS_KEY"
            - terraform plan -out out-overall.plan
          artifacts:
            - environments/dev/out-overall.plan
            - environments/dev/.terraform/**
      - step:
          name: Apply DEV
          trigger: manual
          deployment: dev
          script:
            - cd environments/dev
            - terraform apply out-overall.plan

    master:
      # PRESTIGE TEST
      - step:
          name: Plan TEST
          script:
            - cd environments/test
            - terraform init -backend-config="access_key=$PRESTIGE_TF_CONFIG_ACCESS_KEY"
            - terraform plan -out out-overall.plan
          artifacts:
            - environments/test/out-overall.plan
            - environments/test/.terraform/**
      - step:
          name: Apply TEST
          trigger: manual
          deployment: test
          script:
            - cd environments/test
            - terraform apply out-overall.plan

      # PRESTIGE PROD ...

Needed Steps for Deploying to Production

1. Create feature branch with some changes

2. Push to Git (the BitBucket pipeline with the step Plan DEV will run). All the details about the changes can be found in the output of the terraform plan command

3. Create a pull request and merge the feature branch into develop. This will start another pipeline with the two steps (plan + apply)

4. Check the output of the plan step before triggering the deploy on dev

5. Now the dev stage is updated and, if everything is working as you wish, create another pull request to merge from develop to master. Then repeat the same steps for production and any other stages

We have just deployed an infrastructure change to production without logging into any system except BitBucket. Time for celebration.

Symbol picture of N47 production deployment party (from unsplash)

Is Everything Really That Shiny?

Well, everything is a big word.

We found issues, for example with cross-module dependencies, which aren’t just solvable with a depends_on. Luckily, there are some alternatives:

network module:

output "id" {
  description = "The Azure assigned ID generated after the Virtual Network resource is created and available."
  value = azurerm_virtual_network.virtual_network.id
}

kubernetes cluster module, which depends on network:

variable "subnet_depends_on" {
  description = "Variable to force module to wait for the Virtual Network creation to finish"
}

and the usage of those two modules in environments/dev/main.tf

module "network" {
  source = "../../modules/azure_network"
}

module "kubernetes_cluster" {
  source = "../../modules/azure_kubernetes_cluster"
  subnet_depends_on = module.network.id
}

After having things set up, it is a real joy to wipe out a stage and just provision everything from scratch by running a BitBucket pipeline.


Create a CI/CD pipeline with GitHub Actions

Reading Time: 7 minutes

A CI/CD pipeline helps in automating your software delivery process: it builds code, runs tests and deploys a newer version of the application.

Not long ago, GitHub announced GitHub Actions, meaning that they have built in support for CI/CD. This means that developers can use GitHub Actions to create a CI/CD pipeline.

With Actions, GitHub now allows developers not just to host the code on the platform, but also to run it.

Let's create a CI/CD pipeline using GitHub Actions; the pipeline will deploy a Spring Boot app to AWS Elastic Beanstalk.

First of all, let’s find a project

For this purpose, I will be using this project, which I have forked.
Once forked, we need to open the project. Upon opening, we will see the section for GitHub Actions.

GitHub Actions Tool

Add predefined Java with Maven Workflow

Get started with GitHub Actions

By clicking on Actions we are provided with a set of predefined workflows. Since our project is Maven based we will be using the Java with Maven workflow.

By clicking “Start commit”, GitHub will add a commit with the workflow; the commit can be found here.

Let’s take a look at the predefined workflow:

name: Java CI with Maven

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 1.8
      uses: actions/setup-java@v1
      with:
        java-version: 1.8
    - name: Build with Maven
      run: mvn -B package --file pom.xml

name: Java CI with Maven
This is just specifying the name for the workflow

on: push, pull_request
The on section specifies the events that trigger the workflow. In this case, those events are push and pull_request on the master branch

jobs:
A job is a set of steps that execute on the same runner

runs-on: ubuntu-latest
The runs-on key specifies the underlying OS we want our workflow to run on, for which we are using the latest version of Ubuntu

steps:
A step is an individual task that can run commands (known as actions). Each step in a job executes on the same runner, allowing the actions in that job to share data with each other

actions:
Actions are the smallest portable building block of a workflow which are combined into steps to create a job. We can create our own actions, or use actions created by the GitHub community

Our steps actually set up Java and execute the Maven commands needed to build the project.

Since we added the workflow by creating a commit from the GUI, the pipeline has automatically started and verified the commit, which we can see in the following image:

Pipeline report

Create an application in AWS Elastic Beanstalk

The next thing that we need to do is to create an app on Elastic Beanstalk where our application is going to be deployed. For that purpose, an AWS account is needed.

AWS Elastic Beanstalk service

Upon opening the Elastic Beanstalk service we need to choose the application name:

Application name

For the platform, choose Java 8.

Choose platform

For the application code, choose Sample application and click Create application.
Elastic Beanstalk will create and initialize an environment with a sample application.

Let’s continue working on our pipeline

We are going to use an action created by the GitHub community for deploying an application to Elastic Beanstalk. The action is einaregilsson/beanstalk-deploy.
This action requires additional configuration, which is added using the with keyword:

    - name: Deploy to EB
      uses: einaregilsson/beanstalk-deploy@v13
      with:
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        application_name: {change this with aws application name}
        environment_name: {change this with aws environment name}
        version_label: ${{github.SHA}}
        region: {change this with aws region}
        deployment_package: target/spring-petclinic-rest-2.2.5.jar

Add variables

We need to add values for the AWS Elastic Beanstalk application_name and environment_name, the AWS region and the AWS API keys.

Go to AWS Elastic Beanstalk and copy the previously created Environment name and Application name.
Go to AWS IAM and, under your user in the security credentials section, either create a new AWS access key or use an existing one.
The AWS Access Key and AWS Secret Access Key should be added to the GitHub project settings under the Secrets tab, which looks like this:

GitHub Project Secrets

The complete pipeline should look like this:

name: Java CI with Maven

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 1.8
      uses: actions/setup-java@v1
      with:
        java-version: 1.8
    - name: Build
      run: mvn -B package --file pom.xml
    - name: Deploy to EB
      uses: einaregilsson/beanstalk-deploy@v13
      with:
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        application_name: pet-clinic
        environment_name: PetClinic-env
        version_label: ${{github.SHA}}
        region: us-east-1
        deployment_package: target/spring-petclinic-rest-2.2.5.jar

Lastly, let’s modify the existing app

In order to be considered a healthy instance by Elastic Beanstalk, the deployed application has to return an OK response when accessed by the load balancer standing in front of Elastic Beanstalk. The load balancer accesses the application on the root path. The forked application, when accessed on the root path, forwards the request to swagger-ui.html. For that reason, we need to remove the forwarding.

Change RootController.class:

@RequestMapping(value = "/", method = RequestMethod.GET, produces = "application/json")
    public ResponseEntity<Void> getRoot() {
        return new ResponseEntity<>(HttpStatus.OK);
    }

Change the server port in application.properties to 5000 since, by default, Spring Boot applications listen on port 8080, while Elastic Beanstalk assumes that the application listens on port 5000.

server.port=5000

And remove the server.servlet.context-path=/petclinic/.

The successful commit which deployed our app on AWS Elastic Beanstalk can be seen here:

Pipeline build

And the Elastic Beanstalk with a green environment:

Elastic Beanstalk green environment

Voilà, there we have it: a CI/CD pipeline with GitHub Actions and deployment on AWS Elastic Beanstalk. You can find the forked project here.

Pet Clinic Swagger UI

CloudFormation: Passing values and parameters to nested stacks

Reading Time: 7 minutes

Why CloudFormation?

CloudFormation allows provisioning and managing AWS resources with simple configuration files, which lets us spend less time managing those resources and have more time to focus on the applications that run on AWS instead.

We can simply write a configuration template (YAML/JSON file) that describes the resources we need in our application (like EC2 instances, Dynamo DB tables, or having the entire app monitoring automated in CloudWatch). We do not need to manually create and configure individual AWS resources and figure out what is dependent on what, and more importantly, it is scalable so we can re-use the same template, with a bunch of parameters, and have the entire infrastructure replicated in different stages/environments.

Another important aspect of CloudFormation is that we have our infrastructure as code, which can be version controlled, reviewed and easily maintained.

Nested stacks

CloudFormation nested stacks diagram

As our infrastructure grows, common patterns emerge which can be separated into dedicated templates and re-used later in other templates. Good examples are load balancers and VPC networks. There is another reason that may look unimportant: CloudFormation stacks have a limit of 200 resources per stack, which can easily be reached as our application grows. That is why nested stacks can be really useful.

A nested stack is a simple stack resource of type AWS::CloudFormation::Stack. Nested stacks can themselves contain other nested stacks, resulting in a hierarchy of stacks, as shown in the diagram on the right-hand side. There must be only one root stack, which is called the parent.

Passing parameters to the nested stacks

One of the biggest challenges when having nested stacks is parameters exchange between stacks. Without parameters, it would be impossible to have robust and dynamic stacks, that are scalable and flexible.

The simplest example would be deploying the same CloudFormation stack to multiple stages, like beta, gamma and prod (dev, pre-prod, prod, or any other naming convention you prefer).

Depending on which stage you deploy your application, you may want to set different properties to certain resources. For example, in the development stage, you will not have the same traffic as prod, therefore you can fine-grain the resources for your needs, and prevent spending extra money for unused resources.

Another example is when an application is deployed to various regions, that have different traffic consumption and time spikes. For instance, an application may have 1 million users in Europe, but only 100 000 in Asia. Using stack parameters, allows you to reduce the resources you use in the latter region, which can significantly impact your finances.

Below is a code snippet, showing a simple use case where a DynamoDB table is created in a nested stack, that receives the stage parameter from the parent stack. Depending on which stage, at deploy time, we set different read and write capacity to our table resource.

Root stack

In the parent stack, we define the Stage parameter under the Parameters section. We later pass it to the nested stack under the Properties section of the stack resource, which is created from a template child_stack.yml stored in an S3 bucket.

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Root stack
Parameters:
  Stage:
    Type: String
    Default: beta
    AllowedValues:
        - beta
        - gamma
        - prod
  TestRegion:
    Type: String
Resources:
    DynamoDBTablesStack:
      Type: AWS::CloudFormation::Stack
      Properties:
        TemplateURL: https://n47-cloudformation.s3.eu-central-1.amazonaws.com/child_stack.yml
        Parameters:
            Stage:
                Ref: Stage

Child stack

In the nested stack, we define the Stage parameter, just like we did in the parent. If we do not define it here as well, the creation will fail because the passed parameter (from the parent) is not recognized. Whatever parameters we pass to the nested stack have to be defined in its template parameters.

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Nested stack
Parameters:
  Stage:
    Type: String
    Default: beta
    AllowedValues:
        - beta
        - gamma
        - prod
Mappings:
    UsersDDBReadWriteCapacityPerStage:
        beta:
            ReadCapacityUnits: 10
            WriteCapacityUnits: 10
        gamma:
            ReadCapacityUnits: 50
            WriteCapacityUnits: 50
        prod:
            ReadCapacityUnits: 500
            WriteCapacityUnits: 1000
Resources:
    UserTable:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: user_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: user_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: !FindInMap [UsersDDBReadWriteCapacityPerStage, !Ref Stage, ReadCapacityUnits]
                WriteCapacityUnits: !FindInMap [UsersDDBReadWriteCapacityPerStage, !Ref Stage, WriteCapacityUnits]
            TableName: Users

The Mappings section in the child template is used to fetch the corresponding read/write capacity values at deploy time, when the actual value of the Stage parameter is available. More about Mappings can be found in the official documentation.

Output resources from nested stacks

Having many nested stacks usually implies cross-stack communication. This encourages more template code reuse.

We will do a simple illustration by extracting the DynamoDB table name we created in the nested stack before, and pass it as a parameter to a second nested stack, and also by exporting its value.

In order to expose resources from a stack, we need to define them in the Outputs section of the template. We start by adding an output resource, in the child stack, with logical id UsersDDBTableName, and an export named UsersDDBTableExport.

Outputs:
    UsersDDBTableName:
        # extract the table name from the arn
        Value: !Select [1, !Split ['/', !GetAtt UserTable.Arn]] 
        Export:
            Name: UsersDDBTableExport

Note: For each AWS account, Export names must be unique within a region.

Then we create a second nested stack, which will contain two DynamoDB tables, one named UsersWithParameter and the second one UsersWithImportValue. The former is created by passing the table name from the first child stack as a parameter, and the latter by importing the value that has been exported as UsersDDBTableExport.

(Note, that this is just an example to showcase the two options to access resources between stacks, and is no real-world scenario)

For that, we added this stack definition in the root’s stack resources:

SecondChild:
    Type: AWS::CloudFormation::Stack
    Properties:
        TemplateURL: https://n47-cloudformation.s3.eu-central-1.amazonaws.com/child_stack_2.yml
        Parameters:
            TableName:
                Fn::GetAtt:
                  - DynamoDBTablesStack
                  - Outputs.UsersDDBTableName

Below is the entire content of the second child stack:

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Nested stack
Parameters:
    TableName:
        Type: String
        
Resources:
    UserTableWithParameter:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: customer_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: customer_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: 1
                WriteCapacityUnits: 1
            TableName: !Join ['', [!Ref TableName, 'WithParameter'] ]
    UserTableWithImportValue:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: customer_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: customer_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: 1
                WriteCapacityUnits: 1
            TableName: !Join ['', [!ImportValue UsersDDBTableExport, 'WithImportValue'] ]

Even though we achieved the same thing by using nested stack outputs and by exporting values, there is a difference between them: when you export a value, it is accessible to external stacks within the same region, whereas nested stack outputs can only be passed as parameters to other nested stacks within the same parent.

Notes:

  • Cross-stack references across regions cannot be created. You can use the intrinsic function Fn::ImportValue to import only values that have been exported within the same region
  • You cannot delete a stack if another stack references one of its outputs
  • You cannot modify or remove an output value that is referenced by another stack

Below are some screenshots from the AWS console, illustrating the created stacks, from the code snippets shared above:

Figure 1: root stack containing two nested stacks

Figure 2: first nested stack containing Users DynamoDB table

Figure 3: second nested stack containing UsersWithImportValue and UsersWithParameter DynamoDB tables

You can download the source templates here.


If you have any questions or feedback, feel free to comment here.

JHipster with Google App Engine and Cloud MySQL

Reading Time: 5 minutes

How does it sound to set up a complete Spring application, with front-end and database? With all the models, repositories and controllers? Even with unit and integration tests, with mocked data? All within a few hours? Your solution is JHipster!

JHipster

JHipster or "Java Hipster" is a handy application generator, a development platform, to develop and deploy web applications. JHipster has become popular in a short time and has been featured at many conferences all around the globe – Montreal, Omaha, Taipei, Richmond, Frankfurt, Paris, London. It supports:

  • Spring Boot (Back-end)
  • Angular/React/Vue (Front-end)
  • Spring microservices

JHipster is used for generating complete applications. It will create for you a high-quality Spring Boot and Angular/React/Vue application with most things pre-configured, using Java as the back-end technology and an extensive set of Spring technologies: Spring Security, Spring Boot, Spring MVC (providing a framework for web sockets, REST and MVC), Spring Data, etc., an Angular/React/Vue front-end and a suite of pre-configured development tools like Yeoman, Maven, Gradle, Grunt, Gulp.js and Bower.

JHipster gives you a head start in creating a Spring Boot application with a set of pre-defined screens for user management, monitoring and logging. The generated Spring Boot application is specifically tailored to make working with Angular/React/Vue a smoother experience. On top of all that, JHipster also gives you the tools to update, manage and package the resulting application.

By now you may think it sounds too good to be true… But that is not everything JHipster offers. If you are a web developer, you probably have a lot of questions by now.
One important question we will answer in this blog post: is it supported by today's cloud solutions, is it compatible with all of them? The answer is yes, it is compatible with the popular cloud solutions from Google, Amazon, Microsoft and Heroku. Let's see what it takes to make a complete integration with Google's cloud platform, the App Engine.


Google App Engine

Google App Engine is a cloud solution provided by Google, a platform for developing and hosting web applications in data centres managed by Google; Platform as a Service (PaaS). Applications are sandboxed and run across multiple servers. The App Engine supports Java or Python, uses the Google query language and stores data in Google BigTable.

It is free to use up to a certain amount of resource usage. A user who exceeds the limited usage rates for storage, CPU resources, requests, number of API calls or concurrent requests can pay for more of these resources.

It is fully compatible with JHipster-generated projects. Hosting your application just takes following the official how-to guide from the Google App Engine documentation, as for a normal Spring Boot application; a sketch is shown below. To make things easier, Google offers a database which works closely with the Google App Engine: Cloud SQL.
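As a rough sketch, deploying the generated Spring Boot application to the App Engine standard environment mostly comes down to adding an app.yaml descriptor and deploying the packaged jar with the gcloud CLI or the App Engine Maven plugin, as described in that guide. The runtime, instance class and jar name below are illustrative assumptions, not values taken from a real JHipster project.

# app.yaml (hypothetical values)
runtime: java11
instance_class: F2
entrypoint: java -jar demo-app-0.0.1-SNAPSHOT.jar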

Cloud SQL

Cloud SQL is a fully managed database service offered by Google for their cloud solutions; it makes it easy to configure, manage, maintain and operate your relational databases on the Google Cloud Platform.

It offers three database options to integrate with:

  • MySQL
  • PostgreSQL
  • SQL Server

Let's get into the details of integrating with Cloud SQL for MySQL:

  1. The first step is to create a Cloud SQL instance on the Google Cloud Platform, which requires a few things like the instance ID and password to be set, and it gives you the option to choose the MySQL database version.
  2. The following step is to create the database in the newly created instance. It is possible to have more than one database in one instance.
  3. Now, for our application to be able to communicate with Cloud SQL without any permission blockers, we need to register the application in Cloud SQL and manually configure the service account roles.
  4. The final step is connecting your application to the created Cloud SQL instance. It is done through JDBC. All the required properties can be found in the overview of the Cloud SQL instance: the instance connection name, credentials and so on. A configuration sketch follows below.
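Assuming the Cloud SQL socket factory for MySQL (the com.google.cloud.sql:mysql-socket-factory artifact matching your MySQL driver version) is on the classpath, the production datasource configuration of the generated application could look roughly like this. The project, region, instance, database and user names are placeholders, not real values.

# application-prod.yml (sketch with placeholder values)
spring:
  datasource:
    url: jdbc:mysql://google/demo_db?cloudSqlInstance=my-project:europe-west6:demo-instance&socketFactory=com.google.cloud.sql.mysql.SocketFactory
    username: demo_user
    password: ${DB_PASSWORD}   # injected from the environment, never committed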

So the conclusion: don't be afraid to invest some time in new technologies; be curious, you never know where they may lead you. Thank you for reading.

My opinion on talks from JPoint Moscow 2019

Reading Time: 4 minutes

If you have read my previous parts, this is the last one in which I will give my highlights on the talks that I have visited.

The first stop was the opening talk from Anton Keks on the topic The world needs full-stack craftsmen. An interesting presentation about current problems in software development, like splitting development roles, and what the real result of that is. Another topic was the agile methodology and whether it is really helping development teams build a better product. There were also some words about startup companies and their usual problems. In general, an excellent presentation.

Simon Ritter had, in my opinion, the best talks at JPoint. On the first day his topic was JDK 12: Pitfalls for the unwary. In this session, he covered the impact of application migration from previous versions of Java to the latest one, from aspects like Java language syntax, class libraries and JVM options. Another interesting thing was how to choose which versions of Java to use in production. A well-balanced presentation with real problems and solutions.

Next stop: Kohsuke Kawaguchi, creator of Jenkins, with the topic Pushing a big project forward: the Jenkins story. It was like a story from a management perspective, about new projects that are coming up and what the demands of the business are. To be honest, it was a little bit boring for me, because I was expecting superpowers coming to Jenkins, but he turned the topic into this management story.

Sebastian Daschner from IBM presented the topic Bulletproof Java Enterprise applications. This session covered which non-functional requirements we need to be aware of to build stable and resilient applications, with interesting examples of different resiliency approaches, such as circuit breakers, bulkheads or backpressure, in action. In the end, he showed how to add telemetry to an application and enhance a microservice with monitoring, tracing or logging in a minimalistic way.

Again Simon Ritter, this time with the topic Local variable type inference. His talk was about using var and letting the compiler infer the type of the variable. There were a lot of examples of when it makes sense to use it, but also of when you should not. In my opinion, a very useful presentation.

Rafael Winterhalter talked about Java agents; to be more specific, he covered the Byte Buddy library and how to program Java agents with no knowledge of Java bytecode. He also showed how Java classes can be used as templates for implementing highly performant code changes, avoiding solutions like AspectJ or Javassist while still performing better than agents implemented with low-level libraries.

To summarize, the conference was excellent; any Java developer would be happy to be there, so put JPoint on your roadmap for sure. Stay tuned for my next conference, and thanks for reading. THE END

Project story: Automate AEM deployments for a Swiss bank

Reading Time: 5 minutes

A large bank in St. Gallen, Switzerland had the need to improve the AEM deployment process for its various staging environments. It was one of my first projects for N47 and was set to run for 6 months starting in October 2018. The following blog post gives a short project overview.

Getting started

Starting to work for a new customer is always exciting to me because every company and team has a unique mindset and culture. Usually, it takes a few days or weeks to get to know the new teammates. But this time it was completely different, as I had already worked with each of the three team members and their supervisor in one team at my previous company. It was nice to meet old colleagues and we had a very good start.

Deployment process before automation.
Source: www.dreamstime.com

Technology Stack

The technology stack was already defined and the servers ordered. But it took a while until the infrastructure was ready and for the time being, I worked on my local machine.

Jenkins was set as the central tool for build orchestration, deployments, and various DevOps tasks. All the pipeline source code is stored in GitLab and the main business application we’re dealing with is Adobe Experience Manager (AEM).

A relatively large amount of work was needed for the initial setup like enabling connectivity to the relevant systems, basic shared library, and getting to know the internal processes. Read more about Jenkins behind a corporate proxy as an example for this setup: https://www.north-47.com/knowledge-base/update-jenkins-plugins-behind-a-corporate-proxy/

Implemented Pipelines

The bank has two different AEM projects: one for the corporate website and another for their intranet. They require a slightly different deployment pipeline and both have three environments: development, staging, and production.

Besides the deployment pipelines, there are pipelines for copying content from the production to the development environment and restoring a complete production environment into the staging environment in order to have an exact copy and a good baseline for approvals.

Many auxiliary jobs like starting/stopping AEM + Dispatcher, checking the health of instances, fetching the last backup time and executing Groovy scripts are used in the deployment pipelines as well as in independent executable jobs.

An example of a Jenkins Pipeline
Source: https://jenkins.io
An example of a Gitlab Pipeline
Source: https://docs.gitlab.com/ee/ci/pipelines.html

Advantages of automation

The automation of the various processes brought faster deployments, but more importantly transparency and centralized logs about what exactly happened, as well as higher quality, since repetitive tasks are always executed the same way.

One example is the backup check, which used to need coordination and forced long waiting times. Now an API is used and the automation pipeline has instant feedback about the last backup time and shows a note if a backup is missing. Before, such a step might have been skipped in order to save some time.

With each pipeline built, some more little, reusable helpers were introduced, which made it easier and faster to create the next pipelines. Think of a construction kit.

Deployment process with automation.
Source: www.wikimedia.org

Project finished ➞ client is very happy

After several months of close collaboration, more and more pipelines have been implemented and are used to support the crucial deployment processes countless times.

I enjoyed building-up the AEM automation and believe it’s a very good aid for higher quality and further extensions.

After a warm welcome and six months of working together, it was time to say goodbye as the project had a fixed time span. The client's team was very kind and even gave me some great presents to remember the exciting time in St. Gallen.

Present from client: Swiss beer, chocolate, bratwurst and biber

Deploying a Vue.js app on the Google Cloud Platform using GitLab AutoDeploy

Reading Time: 4 minutes

For a few weeks now, we have been working on several internal projects. We are currently developing different products and services, which we want to release soon™. We started from scratch, so we had the freedom to choose our tools, technologies and frameworks. We decided to deploy our application on a Kubernetes cluster on the Google Cloud. Here is a short how-to for automating the deployment process.

Getting started

First, we need an account on Google Cloud. When you register for the first time, they give you access to the clusters and $300 in credit.

  • Google Cloud account is required
  • Node.js (v10.x)
  • npm (v5.6.0)
  • Docker
  • Git & GitLab

We are using the GitLab AutoDeploy, Google Cloud, Vue.js and Docker to build this CI/CD.

Creating The Vue App

# let's create our workspace
mkdir vue-ci-app
cd vue-ci-app/

# install vue
npm install @vue/cli -g

# create the vue-app (select default settings)
vue create vue-app
cd vue-app/

# let's test out the app locally
npm run serve

We first create a folder and enter it, then we use npm to install the Vue command line interface and use that to create a bootstrapped Vue app. It should be accessible at http://localhost:8080/

Docker Config

FROM node:lts-alpine

# install simple http server for serving static content
RUN npm install -g http-server

# make the 'app' folder the current working directory
WORKDIR /app

# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./

# install project dependencies
RUN npm install

# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .

# build app for production with minification
RUN npm run build

EXPOSE 5000
CMD [ "http-server", "-p 5000", "dist" ]

  • FROM pulls the latest Node image from the public Docker registry (Docker Hub)
  • Then we install http-server, a simple static file server
  • Afterwards, we make a directory where we will place the app
  • We copy package.json from our local machine into the Docker image
  • After installing the dependencies and copying the project files, we run a production build and serve the dist folder using http-server
  • This is all done in a Docker container

GitLab & Kubernetes

The last part of the deployment begins with setting up a Kubernetes cluster and enabling GitLab Autodeploy.

First, we need to go to our Project > Settings > CI/CD > Auto DevOps and enable the default pipeline. This is the auto part that removes the need for a .gitlab-ci.yml file.

Then we need to add a cluster which means going to our GC account and setting up a Kubernetes cluster. We need to specify a name, environment scope, type of project, region, number of nodes, machine type, and whether it’s an RBAC-enabled cluster.

We need to go to GitLab, to the CI/CD page, and add a GitLab runner, this needs to be configured to run docker.

We need to set a base domain, and finally add our created Kubernetes cluster to the GitLab Autodeploy.

Once everything is set up, we have three jobs: build and review, where the app is built on the remote Kubernetes cluster and where we can add linting and tests, and cleanup, a manual job that deletes the commit from the pipeline, ready to be deployed again.

JPoint Java conference in Moscow – 2019

Reading Time: 2 minutes

JPoint is one of the three most popular technical Java conferences (JPoint, Joker and JBreak) for experienced Java developers. There will be 40 talks over two days, separated into 4 parallel tracks. The conference takes place each year, this being the seventh consecutive one.

Organizing the visit to the JPoint conference

Apart from changing flights to reach Moscow, everything else should not be a big issue. Book flights and choose a nearby hotel.

There are a few types of tickets, from which I'll choose the personal ticket; the main reason is the discount of 58%.

What is scheduled by now?

Many interesting subjects are going to be covered during two days of presentations:

  • New projects in Jenkins
  • Java SE 10 variable types
  • More of Java collections
  • Decomposing Java applications
  • AOT Java compilation
  • Java vulnerability
  • Prepare Java Enterprise application for production
  • Application migration JDK 9, 10, 11 and 12
  • Jenkins X

The following topics at the conference will be the most interesting ones for me:

  • Prepare Enterprise application for production (telemetry is crucial).
  • Is Java so vulnerable? What can we do to reduce security issues?
  • What is the right way of splitting application to useful components?
  • It looks like, with Jenkins Essentials, there is now significantly less overhead for managing it, without any user involvement. Let us see what Jenkins has replaced with a few commands.

Just half of the presentations are scheduled by now. Expect many more to be announced.

FrontCon 2019 in Riga, Latvia

Reading Time: 3 minutes

Only a few weeks are left until I go to my first tech conference this year. Travelling, for me, means learning something new. And I like learning. Especially the immersion in a foreign culture and the contact with people from other countries make me happy.

It's always time to grow beyond yourself.

BUT why visit Riga just for a conference? Riga is a beautiful city on the Baltic Sea and the capital of Latvia. Latvia is a small country with the neighbours Russia, Lithuania, Estonia and the sea. AND it's a childhood dream of mine to get to know this city.

The dream was created by an old computer game named “The Patrician”. It's a historical trading simulation game and my brothers and I loved it. We lost a lot of hours playing it instead of finding a way to hack it.
For this dream, I will take some extra private days to visit Riga and the country as well.

Preparation

The most important preparation such as flight, hotel, workshop and conference are completed.

Furthermore, I also plan to visit some of the famous Latvian palaces and the Medieval Castle of Riga. I also need some tips from you for the evenings: restaurants and sights. Feel free to share them in the comments.

Some facts about the conference

There are four workshops available on the first day:

  • Serverless apps development for frontend developers
  • Vue state management with vuex
  • From Zero to App: A React Workshop
  • Advanced React Workshop

I chose the workshop with VueJS, of course, and I'm really happy to see that I can visit most of the talks in the following days. There are some interesting speeches like “Building resilient frontend architecture” and “AAA 3D graphics”, as well as talks about security and serverless frontend development. Click here for the full list of tracks.

My expectations

Above all, I'm open to events to learn new things. Therefore, I have no great expectations in advance. I'm simply looking forward to:

  • the VueJS & React parts
  • meeting the speakers from Wix, N26 and SumUp

I'm particularly curious about the open spaces between the speeches. I will be glad to have some great talks with the guys.

For my private trips:

That’s all for now

to be continued…

My expectations on JPoint Moscow 2019

Reading Time: 3 minutes

PREPARATION

Tickets

Tickets for individuals: 280€ until 1st March.
No possibility to change the participant.

Personal tickets may not be acquired by companies in any way. The companies may not fully or partially reimburse these tickets’ costs to their employees.

Standard tickets: 465€ until 1st March. A possibility to change the participant is given.

Tickets for companies and individuals, no limits. Includes a set of closing documents and amendments to the contract.

Flight

Skopje-Vienna-Moscow. Visa for Russia is needed!

Hotel

I guess a hotel like Crowne Plaza Moscow – World Trade Centre is a good option, because it's in the same place where the conference takes place.

DAY 1

So what are my plans and expectations for the first day of JPoint? I will start with Rafael Winterhalter, who is a Java Champion and will talk about Java agents. It will be interesting to see how Java classes can be used as templates for implementing highly performant code changes.

The next stop will be the creator of Jenkins: Kohsuke Kawaguchi. He has a great headline, Superpowers coming to your Jenkins, and I am excited to see where Jenkins is going next.

Last stop for day one, Simon Ritter from Azul Systems, with focus on local variable type inference. As with many features, there are some unexpected nuances as well as both good and bad use cases that will be covered.

There will be many more talks on day one, but I will focus on these three for now. And at the end, a party at 20:00.

DAY 2

I will start the second day with Simon Ritter again, this time with a focus on JDK 12: Pitfalls for the unwary. It will be interesting to see all the areas of JDK 9, 10, 11 and 12 that may impact application migration. Another topic will be how the new JDK release cadence will impact Java support and the choice of which Java versions to use in production.

Other headliner talks for the second day are still under consideration, so I’m expecting something interesting from Pivotal and JetBrains.

Feel free to share some Moscow hints or interesting talks that I’m missing.

Deploy Spring Boot Application on Google Cloud with GitLab

Reading Time: 5 minutes

A lot of developers experience a painful process when their code is being deployed to an environment. We, as a company, suffered from the same thing, so we wanted to create something to make our life easier.

After internal discussions, we decided to set up a fully automated CI/CD process. We investigated and decided to implement GitLab CI/CD with deployment to Google Cloud for that purpose.

Further in this blog, you can see how we achieved that and how you can achieve the same.

Let's start with setting up.

  • After that, we create a simple REST controller for testing purposes.
package com.northlabs.gitlabdemo.rest;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RootController {

    @GetMapping(value = "/")
    public String root() {
        return "Hello from Root";
    }

    @GetMapping(value = "/demo")
    public String demo() {
        return "Hello from Demo";
    }

}
  • Next step is to push the application to our GitLab repo.
  1. cd path_to_root_folder
  2. git init
  3. git remote add origin https://gitlab.com/47northlabs/47northlabs/product/playground/gitlab-demo.git
  4. git add .
  5. git commit -m "Initial commit"
  6. git push -u origin master

Now that we have our application in a GitLab repository, we can go on to set up Google Cloud. But before you start, be sure that you have a G Suite account with billing enabled.

  • The first step is to create a new project: in my case it is northlabs-gitlab-demo.

Create project: northlabs-gitlab-demo
  • Now, let’s create our Kubernetes Cluster.

It will take some time for the Kubernetes cluster to initialize before GitLab will be able to use it.

We are done with Google Cloud, so it’s time to set up Kubernetes in our GitLab repository.

  • First, we add a Kubernetes cluster.
Add Kubernetes Cluster
Sign in with Google
  • Next, we give a name to the cluster and select a project from our Google Cloud account: in my case it’s gitlab-demo.
  • The base domain name should be set up.
  • Installing Helm Tiller is required, and installing other applications is optional (I installed Ingress, Cert-Manager, Prometheus, and GitLab Runner).

Install Helm Tiller

Installed Ingress, Cert-Manager, Prometheus, and GitLab Runner

After installing the applications, it's IMPORTANT to update your DNS settings: the Ingress IP address should be copied and added to your DNS configuration.
In my case, it looks like this:

Configure DNS

We are almost done.

  • The last thing that should be done is to enable Auto DevOps.
  • And to set up Auto DevOps.

Now take your coffee and watch your pipelines.
After a couple of minutes your pipeline will finish and will look like this:

Now open the production pipeline and, in the log under the notes section, check the URL of the application. In my case that is:

Application should be accessible at: http://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com

Open the URL in browser or postman.

https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com
https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com/demo
  • Let’s edit our code and push it to GitLab repository.
package com.northlabs.gitlabdemo.rest;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RootController {

    @GetMapping(value = "/")
    public String root() {
        return "Hello from Root v1";
    }

    @GetMapping(value = "/demo")
    public String demo() {
        return "Hello from Demo v1";
    }
}

After the job is finished, if you check the same URL, you will see that the values are now changed.


https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com

https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com/demo

And we are done !!!

This is just a basic implementation of the GitLab Auto DevOps. In some of the next blogs, we will show how to customize your pipeline, and how to add, remove or edit jobs.

Webpack: The Good, The Bad and The Ugly

Reading Time: 5 minutes

Introduction

Webpack, a static module bundler: a complex yet simple tool that allows you to spend anywhere between 10 minutes and 10 hours making a simple web application bundlable.

The Good

As a static module bundler, Webpack delivers a bundle of code that's easily parseable by your browser or Node environment. It allows users to use the UMD/AMD system of bundling code and applies it to images, HTML, CSS and much, much more. This allows the developer to create one or more modules covering a segment or multiple segments of a web app, and serve it all, or serve parts.
One of the major selling points of Webpack is the ability to modify it to your liking. A rich ecosystem of plugins exists to enhance the already plentiful options of the bundler itself. This allows front-end libraries and frameworks, the likes of Angular, ReactJS and VueJS, to use it to bundle their code using custom solutions and plugins.

This is made easy by the years of development through which we have reached Webpack 4, which allows for multiple configuration files for development and production environments.

The example in the image is a view into the differences that allow for a good development overview and testing, yet at the same time make it possible to use a single script to produce a production-ready build. All of this together makes Webpack a viable bundler for most JS projects, especially when it's the base of ecosystems like Angular or React.

The Bad

While Webpack is a good JS bundler, it's not the only one available. Issues in Webpack come from the fact that most of the libraries you use need to have been developed with Webpack bundling in mind. Its native module support is limited, requiring you to specify those resources and the way you want them represented in the final build. And most of the time, a random version update of a module can break the entire project, even on tertiary modules.

The learning curve of Webpack is getting higher and higher, it all depends on how complex a project you are working on, and whether you are working with a preconfigured project or building a configuration on your own.

Just for this article I have gone over not only the Webpack docs, but also around 20 articles and a ton of GitHub issues, plus around a year and a half of personal experience setting up and bundling projects with Webpack using Three.js, A-Frame, React, Angular, and a myriad of other niche applications. And in the end it still feels like I've barely scratched the surface.

The entire debugging process is ugly: it depends on source maps, which vary from library to library. You can use the built-in Webpack option or a plugin for your specific tech, but it will never be fun loading up a 160k code bundle and blocking your PC, even with source maps.

The Ugly

All in all, when you give Webpack a chance, your encounter will rarely be a pleasant one. There is no standardized way of using and implementing the core, and the plugins don’t help. Each time you find something that works, something new will magically break. It’s like trying to fix a sinking ship.

My average day using Webpack feels as if my project were the dog and Webpack the fire starter. I am currently using it in combination with VueJS, and it’s the same story: either use the Vue CLI with a preloaded config, or regret not doing that later when you need to optimize your specific code integration that has to run as a bundled part of a larger application.

The worst part of all of this is probably the widespread usage of black-box software like Webpack, which is open source in theory, but is such a bundle of libraries and custom code that studying it properly takes as much time as a doctoral thesis. And for all of this, it is still one of the better options out there.

Conclusion

Webpack as a bundler is excellent for use in a multitude of applications, especially if someone else handles the config for you (the Angular, React and Vue CLIs). It will hopefully get better, but like anything else in JS, its roots and backwards compatibility will always hold it back. Be ready for a steep learning curve and a lot of frustration. If you like to explore new things, reimplement existing solutions or optimize your workflow, give it a try.

Voxxed Days Bucharest & Devoxx Ukraine – HERE WE COME!

Reading Time: 4 minutes

Last year’s conferences

Already in 2018, we had the pleasure of visiting two conferences in Europe, in Amsterdam and Krakow.

We had a great time visiting these two cities 🙌 and we can’t wait to do that again this year 😎.

What do we expect from the two conferences in 2019 👤💬?

Like last year, we are interested in several different topics. I am looking forward to the Methodology & Culture slots, while Shady is most interested in the Java stuff. All in all, we hope that there are several interesting talks about:

  • Architecture & Security
  • Cloud, Containers & Infrastructure
  • Java
  • Big Data & Machine Learning
  • Methodology & Culture
  • Other programming languages
  • Web & UX
  • Mobile & IoT

We โค๏ธ food!

The title speaks for itself: we just love food 🍴! Travelling ✈️ gives a good opportunity to see and taste something new 👅. All over the world, every culture has a unique and special cuisine, each one different because of its own methods of cooking. We try to taste (almost) everything when we arrive in new countries and cities.

We are really looking forward to seeing what Bucharest’s and Kiev’s specialities are 🍽 and to trying them all! Here are some snapshots from our trips to the conferences in Amsterdam, Netherlands and Krakow, Poland in 2018…

What about the costs 💸?

One great thing at N47 is that your personal development 🧠 is important to the company. Besides hosted internal events and workshops, you can also visit international conferences 🛫 and everything is paid 💰. Every employee can choose their desired conferences/workshops, gather the information about the costs and request the visit. One step of the approval process is writing 📝 about the expectations in a blog post. That is exactly what you are reading 🤓📖 at the moment.

Costs breakdown (per person)

Flights: 170 USD
Hotel: 110 USD (3 days, 2 nights)
Conference: 270 USD
Food and public transportation: 150 USD
Knowledge gains: priceless
Explore new country and food: priceless
Spend time with your buddy: priceless
—–
Total: 700 USD
—–

Any recommendations for Bucharest or Kiev?

We have never visited either of the two cities 🙁, so if you have any tips or recommendations, please let us know in the comments 💬!

Hackdayz #18: Git Repo Sync Tool

Reading Time: 3 minutes

Working as a consultant usually involves handling client and local repositories seamlessly, which is often pretty simple when you work as an individual.

When working in a distributed team environment where only a portion of the members have access to certain areas of the project, the situation becomes a bit tricky to handle. We identified this as a minor showstopper in our organization and during our internal hackathon Hackdayz18 we decided to make our lives easier.

Application overview and features

  • Ability to synchronize public and client repositories with a single button click
  • Persist user data and linked repositories
  • Visualized state of repositories
  • Link commits to a different user
  • Modify commit message and squash all commits into one

Our Team

Nikola Gjeorgjiev (Frontend Engineer)
Antonie Zafirov (Software Engineer)
Fatih Korkmaz (Managing Partner)

Challenges and results

Initially, what we had in mind was a tool that would read two repository URLs and, with no further constraints, squash all commits on one of the repositories, change the commit message and push the result to the second repository.

It turned out things weren’t that simple for the problem we were trying to solve.

Through trial and error, we managed to build a working demo of our tool in a short time frame; it only needs small tweaks in order to be used.

Workflow for the GitSync tool

The final result was a small application that is able to persist user data and, using the given user credentials, read the state of the repositories linked to the user. The final step is where all the magic happens: the content of the source repository is transferred into the destination repository.
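
For illustration, here is a minimal Groovy sketch of the squash-and-push flow described above. It is hypothetical (not the code we built at the hackathon); the repository URLs and the commit message are placeholders, and a single root commit is assumed.

import java.nio.file.Files

// Hypothetical repository URLs: replace with your own source and destination.
def sourceUrl      = 'https://gitlab.example.com/team/source.git'
def destinationUrl = 'https://gitlab.example.com/client/destination.git'
def workDir        = Files.createTempDirectory('gitsync').toFile()

// Small helper: run a git command in the working directory and fail fast on errors.
def run = { List<String> cmd ->
    def proc = new ProcessBuilder(cmd).directory(workDir).inheritIO().start()
    assert proc.waitFor() == 0 : "Command failed: ${cmd}"
}

run(['git', 'clone', sourceUrl, '.'])

// Find the root commit, squash the whole history on top of it into one commit
// with a new message, then force-push the result to the destination repository.
def revList = new ProcessBuilder(['git', 'rev-list', '--max-parents=0', 'HEAD'])
        .directory(workDir).start()
def rootCommit = revList.inputStream.text.trim()
revList.waitFor()

run(['git', 'reset', '--soft', rootCommit])
run(['git', 'commit', '--amend', '-m', 'Initial import from the source repository'])
run(['git', 'remote', 'add', 'destination', destinationUrl])
run(['git', 'push', '--force', 'destination', 'HEAD:master'])

The actual tool drives this through the GitLab API with stored user credentials, but the underlying Git operations it automates look roughly like the sketch above.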

There is still work to be done to get the application production-ready and available to the team, but in the given timeframe we did our best, and I am happy with our results.

Technology stack

  • Gitgraph.js – JavaScript library which visually presents Git branching, Git workflow or whatever Git tree you’d have in mind
  • GitLab API – Automating GitLab via a simple and powerful API
  • VueJS – front-end development framework
  • SpringBoot – back-end development framework

Conclusion

Even though we underestimated the problem we were facing, we pulled through and were able to deliver the base of a solution to the problem.

The hackathon was a valuable learning experience for the entire team and we’re looking forward to the next one!

Update Jenkins plugins behind a corporate proxy

Reading Time: 5 minutes

Many teams are running Jenkins in their environment. Most of the time it needs to go through a corporate proxy in order to access external resources. It’s relatively easy to configure a proxy in the advanced plugin configuration, but if each domain needs to be white-listed, trouble almost certainly starts.

But let’s start with possible ways of updating Jenkins plugins.

The offline aka manual scenario

If you aren’t allowed to communicate with the internet from within your productive Jenkins environment at all, you still have some options to choose from:

  1. Don’t update
    Not a good idea. Period.
  2. Manually check and download all your plugins with the plugin index https://plugins.jenkins.io/
    Obviously, you don’t want to do this for a typical Jenkins setup with likely more than 70 plugins.
  3. Run a second Jenkins in a zone where you DO have Internet access
    Check for updates and download the desired plugins. Then copy all the .hpi files to your productive Jenkins.

The corporate proxy scenario

If you’re allowed to communicate with the Internet through a corporate proxy, you’re halfway there. But many corporate proxies force you to white-list all required domains or IPs in order to control access to external resources.

At first, you might think “yeaahhhh… no problem!” and ask your network security colleagues to enable https://updates.jenkins.io and http://updates.jenkins-ci.org.

But then, the Jenkins mirroring feature hits you.

The meta-data JSON https://updates.jenkins.io/update-center.json provides binary URLs like this: http://updates.jenkins-ci.org/download/plugins/slack/2.10/slack.hpi

But if you call this URL, you are first redirected to http://mirrors.jenkins-ci.org/plugins/slack/2.10/slack.hpi, and then another redirect follows depending on your geographic location. In my case, I end up at http://ftp-chi.osuosl.org/pub/jenkins/plugins/slack/2.10/slack.hpi

As the status of the mirrors might change, the set of returned domains changes as well, and you’ll find yourself talking to your network security guy quite often.

Bypass mirroring by rewriting the meta-data JSON

One possible solution to get around this mirroring feature is to download the update-center.json, rewrite all links and then use the rewritten JSON.

Download and rewrite JSON

Downloading and rewriting the official update-center.json could be done with many technologies, but I chose Jenkins for this as well. Therefore I created a simple Jenkins job named “update-center”, which is scheduled to run once every day.

The following declarative pipeline does the job:

pipeline {
  agent any
  stages {
    stage('Download and modify Update-Center Data') {
      steps {
        httpRequest(
                url: "https://updates.jenkins.io/update-center.json?id=default&amp;version=" + Jenkins.instance.version,
                consoleLogResponseBody: false,
                acceptType: 'APPLICATION_JSON',
                httpProxy: 'http://my.corporate.proxy:8080',
                outputFile: 'update-center-original.json'
        )
        script {
          updateCenterJson = readFile file: 'update-center-original.json'
          updateCenterJson = updateCenterJson.replaceAll("http:\\/\\/updates\\.jenkins-ci\\.org\\/download\\/", "http://archives.jenkins-ci.org/")
        }
        writeFile text: updateCenterJson, file: 'update-center.json'
        archiveArtifacts 'update-center.json'
      }
    }
  } 
}

Some notes regarding the above pipeline:

  • You need to replace my.corporate.proxy:8080 with your actual proxy.
  • We read the current installed Jenkins version with Jenkins.instance.version. This needs to be explicitly approved: https://jenkins.io/doc/book/managing/script-approval/. If this isn’t an option, the version has to be hard-coded.
  • The https://plugins.jenkins.io/http_request plugin is used to download the JSON. You could achieve a similar thing with a simple curl if you don’t want this plugin (see the sketch after this list).
  • You still need to white-list those two domains in your corporate proxy:
    • https://updates.jenkins.io
    • http://archives.jenkins-ci.org
  • Instead of using archives.jenkins-ci.org, you should use a mirror, as the official archives server doesn’t provide great performance.
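
For reference, if you would rather not use the HTTP Request plugin, the download-and-rewrite stage could look roughly like the following with plain curl and sed in a shell step. This is only a sketch: the proxy host is still a placeholder and the Jenkins version is hard-coded (replace 2.176.1 with your actual version).

    stage('Download and modify Update-Center Data') {
      steps {
        sh '''
          # Download the official meta-data through the corporate proxy (hard-coded Jenkins version).
          curl -fsSL -x http://my.corporate.proxy:8080 -o update-center-original.json "https://updates.jenkins.io/update-center.json?id=default&version=2.176.1"
          # Rewrite the download links so they point directly at the archives server.
          sed "s|http://updates.jenkins-ci.org/download/|http://archives.jenkins-ci.org/|g" update-center-original.json > update-center.json
        '''
        archiveArtifacts 'update-center.json'
      }
    }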

Use the rewritten JSON

Proxy configuration

Go to the advanced Plugin Manager configuration http://localhost:8080/pluginManager/advanced (or Jenkins > Manage Jenkins > Manage Plugins > Advanced) and configure your corporate proxy.

Update Site configuration

You can configure the URL to the JSON at the bottom of the same page.

In my setup, http://localhost/job/update-center/lastSuccessfulBuild/artifact/update-center.json is used.
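
If you prefer to script this step instead of clicking through the UI, something along these lines in the Jenkins script console should do the trick. It is only a sketch based on the standard Jenkins API; adjust the URL to your own update-center job.

import jenkins.model.Jenkins
import hudson.model.UpdateCenter
import hudson.model.UpdateSite

// Replace the default update site with the one serving the rewritten JSON.
def updateCenter = Jenkins.instance.updateCenter
updateCenter.sites.clear()
updateCenter.sites.add(new UpdateSite(UpdateCenter.ID_DEFAULT,
        'http://localhost/job/update-center/lastSuccessfulBuild/artifact/update-center.json'))
updateCenter.save()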

Now the plugin update check and download will work behind your corporate proxy!

Permission

Note: You need to ensure that anonymous users have read permission on your artifacts. Read more about security configuration here: https://wiki.jenkins-ci.org/display/JENKINS/Standard+Security+Setup

Questions?

Should you have specific questions on this subject, please feel free to comment and share any questions you have. If it helped you, please like ❤️ or share 📌 this story, so others like you can find it.

Follow N47 on Instagram, Twitter, LinkedIn, Facebook to get updated as soon as possible.