What is CI? Continuous Integration Explained

Reading Time: 5 minutes

Continuous Integration (CI) is a software development practice that requires members of a team to frequently integrate their code changes into a central repository (master branch), preferably several times a day.

Each merge is then verified by automatically generating a build, and running automated tests against that build.

By integrating regularly, you can detect errors quickly and locate and fix them more easily.

Why is Continuous Integration Needed?

Back in the days BCI (Before Continuous Integration), developers from a single team might have worked in isolation for long periods of time and merged their code changes only when they finished working on a particular feature or bug fix.

This caused the well-known merge hell (integration hell): a lot of code conflicts, newly introduced bugs, lots of time invested in analysis, as well as frustrated developers and project managers.

All these ingredients made it harder to deliver updates and value to the customers on time.

How does Continuous Integration Work?

Continuous Integration as a software development practice entails two components: an automation component and a cultural one.

The cultural component focuses on the principle of frequent integrations of your code changes to the mainline of the central repository, using a version control system such as Git, Mercurial or Subversion.

By applying the cultural component you will drastically lower the frustration and time wasted on merging code because, in reality, you are merging small changes all the time.

As a matter of fact, you can practice Continuous Integration using only this principle, but by adding the automation component to your CI process you can exploit its full potential.

Continuous Integration Image

As shown in the picture above, this includes a CI server that will generate builds automatically, run automated tests against those builds and notify (or alert) the team members of the results.

By leveraging the automation component you will immediately be aware of any errors, thus allowing the team to fix them fast and without too much time spent analysing.

There are plenty of CI tools out there that you can choose from, but the most common are: Jenkins, CircleCI, GitHub Actions, Bitbucket Pipelines etc.
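As a rough illustration (not tied to any specific project), a minimal pipeline definition for one of these tools could look like the following GitHub Actions workflow; the branch name and the Maven build command are assumptions:

name: ci
on:
  push:
    branches: [ main ]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      # build the project and run the automated tests on every push and pull request
      - name: Build and run tests
        run: mvn -B verify

Every push triggers a build, the automated tests run against that build, and the team sees the result directly on the commit or pull request.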

Continuous Integration Best Practices and Benefits

Everyone should commit to the mainline daily

By doing frequent commits and integrations, developers let other developers know about the changes they’ve made, so passive communication is maintained.

Other benefits that come with developers integrating multiple times a day:

  • integration hell is drastically reduced
  • conflicts are easily resolved as not much has changed in the meantime
  • errors are quickly detected

The builds should be automated and fast

Given the fact several integrations will be done daily, automating the CI Pipeline is crucial to improving developer productivity as it leads to less manual work and faster detection of errors and bugs.

Another important aspect of the automated build is optimising its execution speed and making it as fast as possible, as this enables faster feedback and leads to more satisfied developers and customers.

Everyone should know what’s happening with the system

Given Continuous Integration is all about communication, a good practice is to inform each team member of the status of the repository.

In other words, whenever a merge is made, thus a build is triggered, each team member should be notified of the merge as well as the results of the build itself.

To notify all team members or stakeholders, use your imagination: email is the most common channel, but you can also leverage SMS or integrate your CI server with communication platforms like Slack, Microsoft Teams, Webex etc.

Test Driven Development

Test Driven Development (TDD) is a software development approach relying on the principle of writing tests before writing the actual code. What TDD offers in general is improved test coverage and a better understanding of the system requirements.

But, put those together, Continuous Integration and TDD, and you will get a lot more trust and comfort in the CI Pipelines as every new feature or bug fix will be shipped with even better test coverage.

Test Driven Development also inspires a cultural change into the team and even the whole organisation, by motivating the developers to write even better and more robust test cases.

Pull requests and code review

A big portion of software development teams nowadays practice a pull request and code review workflow.

A pull request is typically created whenever a developer is ready to merge new code changes into the mainline, making the pull request perfect for triggering the CI Pipeline.
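In Bitbucket Pipelines, for example, there is a dedicated pull-requests section for exactly this purpose; below is a minimal sketch (the build command is a placeholder, not taken from this article):

pipelines:
  pull-requests:
    '**':                  # run for pull requests from any source branch
      - step:
          name: Build and test
          script:
            - mvn -B verify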

Usually, additional manual approval is required after a successful build, where other developers review the new code, make suggestions and approve or deny the pull request. This final step brings additional value such as knowledge sharing and an additional layer of communication between the team members.

Summary

Building software solutions in a multi-developer team is as complex as it was five, ten or even twenty years ago if you are not using the right tools and exercising the right practices and principles, and Continuous Integration is definitely one of them.


I hope you enjoyed this article and you are not leaving empty-handed.
Feel free to leave a comment. 😀


How we deploy with Terraform and BitBucket to Azure Kubernetes

Reading Time: 6 minutes

N47 implemented a set of back-office web applications for Prestige, a real estate management company located in Zurich, Switzerland. One application is a tool for displaying construction projects near properties managed by Prestige, and a second example is a tool for creating and assigning orders to craftsmen. The following examples, however, aren’t specific to those use cases.

Screenshot of the Construction Project tool.

An Overview

The project entails one frontend application with multiple microservices whereby each service has its own database schema.

The application consumes data from Prestige’s main ERP system Abacus and third-party applications.

N47 is responsible for setting up and maintaining the full Kubernetes stack, MySQL Database, Azure Application Gateway and Azure Active Directory applications.

Another company is responsible for the networking and the Abacus part.

Architectural Overview

Involved Technologies

Our application uses the following technologies:

  • Database: MySQL 8
  • Microservices: Java 11, Spring Boot 2.3, Flyway for database schema updates
  • Frontend: Vue.js 2.5 and Vuetify 2.3
  • API Gateway: nginx

The CI/CD technology stack includes:

  • Source code: BitBucket (Git)
  • Pipelines: BitBucket Pipelines
  • Static code analysis: SonarCloud
  • Infrastructure: Terraform
  • Cloud provider: Azure

We’ll focus on the second list of technologies.

Infrastructure as Code (IaC) with Terraform and BitBucket Pipelines

One thing I really like when using IaC is having the definition of the involved services and resources of the whole project in source code. That enables us to track the changes over time in the Git log and of course, it makes it far easier to set up a stage and deploy safely to production.

A blog post about some Terraform basics will follow soon. In the meanwhile, you can find some introduction on the official Terraform website.

Storage of Terraform State

One important thing when dealing with Terraform is storing the state in an appropriate place. We’ve chosen to create an Azure Storage Account and use Azure Blob Storage like this:

terraform {
  backend "azurerm" {
    storage_account_name = "prestigetoolsterraform"
    container_name       = "prestige-tools-dev-tfstate"
    key                  = "prestige-tools-dev"
  }
}

The required access_key is passed as an argument to terraform within the pipeline (more details later). You can find more details in the official tutorial Store Terraform state in Azure Storage by Microsoft.

Another important point is not to run pipelines in parallel, as this could result in conflicts with locks.

Used Terraform Resources

We provision the needed resources on Azure via BitBucket + Terraform. The most important ones can be seen in the module structure below.

Structure of Terraform Project

We created an entry point for each stage (local, dev, test and prod), which is relatively small and mainly aggregates the modules with some environment-specific configuration.

The configurations, credentials and other data are stored as variables in the BitBucket pipelines.

/environments
  /local
  /dev
  /test
  /prod
/modules
  /azure_active_directory
  /azure_application_gateway
  /azure_aplication_insights
    /_variables.tf
    /_output.tf
    /main.tf
  /azure_mysql
  /azure_kubernetes_cluster
  /...

Each module always contains the files _variables.tf, main.tf and _output.tf to have a clean separation of input, logic and output.


Example source code of the azure_aplication_insights module (please note that some of the text has been shortened in order to have enough space to display it properly):

_variables.tf

variable "name" {
  type = string
}

variable "location" {
  type = string
}

variable "resource_group_name" {
  type = string
}

main.tf

resource "azurerm_application_insights" "ai" {
  name                = var.name
  location            = var.location
  resource_group_name = var.resource_group_name
  application_type    = "web"
}

_output.tf

output "instrumentation_key" {
  value = azurerm_application_insights.ai.instrumentation_key
}
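To show how the pieces fit together, this is roughly how such a module could be called from an environment entry point and its output reused; the resource names and location below are assumptions, not values from the project:

module "application_insights" {
  source              = "../../modules/azure_aplication_insights"
  name                = "prestige-tools-dev-ai"  # assumed name
  location            = "westeurope"             # assumed location
  resource_group_name = "prestige-tools-dev-rg"  # assumed resource group
}

# the output can then be referenced by other modules or resources, e.g.
# module.application_insights.instrumentation_key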

BitBucket Pipeline

The BitBucket pipeline drives Terraform and includes the init, plan and apply steps. We decided to apply changes to the infrastructure manually in the beginning.

image: hashicorp/terraform:0.12.26

pipelines:
  default:
    - step:
        name: Plan DEV
        script:
          - cd environments/dev
          - terraform init -backend-config="access_key=$DEV_TF_CONFIG_ACCESS_KEY"
          - terraform plan -out out-overall.plan
        artifacts:
          - environments/dev/out-overall.plan

  branches:
    develop:
      - step:
          name: Plan DEV
          script:
            - cd environments/dev
            - terraform init -backend-config="access_key=$DEV_TF_CONFIG_ACCESS_KEY"
            - terraform plan -out out-overall.plan
          artifacts:
            - environments/dev/out-overall.plan
            - environments/dev/.terraform/**
      - step:
          name: Apply DEV
          trigger: manual
          deployment: dev
          script:
            - cd environments/dev
            - terraform apply out-overall.plan

    master:
      # PRESTIGE TEST
      - step:
          name: Plan TEST
          script:
            - cd environments/test
            - terraform init -backend-config="access_key=$PRESTIGE_TF_CONFIG_ACCESS_KEY"
            - terraform plan -out out-overall.plan
          artifacts:
            - environments/test/out-overall.plan
            - environments/test/.terraform/**
      - step:
          name: Apply TEST
          trigger: manual
          deployment: test
          script:
            - cd environments/test
            - terraform apply out-overall.plan

      # PRESTIGE PROD ...

Needed Steps for Deploying to Production

1. Create feature branch with some changes

2. Push to Git (the BitBucket pipeline with the step Plan DEV will run). All the details about the changes can be found in the output of the terraform plan command

3. Create a pull request and merge the feature branch into develop. This will start another pipeline with the two steps (plan + apply)

4. Check the output of the plan step before triggering the deploy on dev

5. Now the dev stage is updated and, if everything works as you wish, create another pull request to merge develop into master. Then re-do the same steps for the test and production stages

We have just deployed an infrastructure change to production without logging into any system except BitBucket. Time for celebration.

Symbol picture of N47 production deployment party (from Unsplash)

Is Everything Really That Shiny?

Well, everything is a big word.

We found issues, for example with cross-module dependencies, which can’t simply be solved with a depends_on. Luckily, there are some alternatives:

network module:

output "id" {
  description = "The Azure assigned ID generated after the Virtual Network resource is created and available."
  value = azurerm_virtual_network.virtual_network.id
}

kubernetes cluster module, which depends on network:

variable "subnet_depends_on" {
  description = "Variable to force module to wait for the Virtual Network creation to finish"
}

and the usage of those two modules in environments/dev/main.tf

module "network" {
  source = "../../modules/azure_network"
}

module "kubernetes_cluster" {
  source = "../../modules/azure_kubernetes_cluster"
  subnet_depends_on = module.network.id
}

After having things set up, it is a real joy to wipe out a stage and just provision everything from scratch by running a BitBucket pipeline.


CloudFormation: Passing values and parameters to nested stacks

Reading Time: 7 minutes

Why CloudFormation?

CloudFormation allows provisioning and managing AWS resources with simple configuration files, which let us spend less time managing those resources and have more time to focus on our applications that run on AWS instead.

We can simply write a configuration template (YAML/JSON file) that describes the resources we need in our application (like EC2 instances, Dynamo DB tables, or having the entire app monitoring automated in CloudWatch). We do not need to manually create and configure individual AWS resources and figure out what is dependent on what, and more importantly, it is scalable so we can re-use the same template, with a bunch of parameters, and have the entire infrastructure replicated in different stages/environments.

Another important aspect of CloudFormation is that we have our infrastructure as code, which can be version controlled, reviewed and easily maintained.

Nested stacks

CloudFormation nested stacks diagram

As our infrastructure grows, common patterns can emerge, which can be separated into dedicated templates and re-used later in other templates. Good examples are load balancers and VPC networks. There is another reason that may look unimportant: CloudFormation stacks have a limit of 200 resources per stack, which can easily be reached as our application grows. That is why nested stacks can be really useful.

A nested stack is a simple stack resource of type AWS::CloudFormation::Stack. Nested stacks can themselves contain other nested stacks, resulting in a hierarchy of stacks, as shown in the diagram on the right-hand side. There can be only one root stack, which is called the parent.

Passing parameters to the nested stacks

One of the biggest challenges when having nested stacks is the parameter exchange between stacks. Without parameters, it would be impossible to have robust and dynamic stacks that are scalable and flexible.

The simplest example would be deploying the same CloudFormation stack to multiple stages, like beta, gamma and prod (dev, pre-prod, prod, or any other naming convention you prefer).

Depending on which stage you deploy your application to, you may want to set different properties on certain resources. For example, in the development stage, you will not have the same traffic as in prod, therefore you can fine-tune the resources to your needs and avoid spending extra money on unused resources.

Another example is when an application is deployed to various regions that have different traffic consumption and time spikes. For instance, an application may have 1 million users in Europe, but only 100 000 in Asia. Using stack parameters allows you to reduce the resources you use in the latter region, which can significantly impact your finances.

Below is a code snippet showing a simple use case where a DynamoDB table is created in a nested stack that receives the Stage parameter from the parent stack. Depending on the stage, at deploy time we set different read and write capacities on our table resource.

Root stack

In the parent stack, we define the Stage parameter in the Parameters section. We later pass it to the nested stack (under the Properties section of the stack resource), which is created from a template child_stack.yml stored in an S3 bucket.

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Root stack
Parameters:
  Stage:
    Type: String
    Default: beta
    AllowedValues:
        - beta
        - gamma
        - prod
  TestRegion:
    Type: String
Resources:
    DynamoDBTablesStack:
      Type: AWS::CloudFormation::Stack
      Properties:
        TemplateURL: https://n47-cloudformation.s3.eu-central-1.amazonaws.com/child_stack.yml
        Parameters:
            Stage:
                Ref: Stage

Child stack

In the nested stack, we define the Stage parameter, just like we did in the parent. If we do not define it here as well, the creation will fail because the passed parameter (from the parent) is not recognized. Whatever parameters we pass to the nested stack have to be defined in its template’s Parameters section.

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Nested stack
Parameters:
  Stage:
    Type: String
    Default: beta
    AllowedValues:
        - beta
        - gamma
        - prod
Mappings:
    UsersDDBReadWriteCapacityPerStage:
        beta:
            ReadCapacityUnits: 10
            WriteCapacityUnits: 10
        gamma:
            ReadCapacityUnits: 50
            WriteCapacityUnits: 50
        prod:
            ReadCapacityUnits: 500
            WriteCapacityUnits: 1000
Resources:
    UserTable:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: user_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: user_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: !FindInMap [UsersDDBReadWriteCapacityPerStage, !Ref Stage, ReadCapacityUnits]
                WriteCapacityUnits: !FindInMap [UsersDDBReadWriteCapacityPerStage, !Ref Stage, WriteCapacityUnits]
            TableName: Users

The Mappings section in the child template is used for fetching the corresponding read/write capacity values at deploy time, when the actual value of the Stage parameter is available. More about Mappings can be found in the official documentation.

Output resources from nested stacks

Having many nested stacks usually implies cross-stack communication. This encourages more template code reuse.

We will do a simple illustration by extracting the name of the DynamoDB table we created in the nested stack before and passing it as a parameter to a second nested stack, and also by exporting its value.

In order to expose resources from a stack, we need to define them in the Outputs section of the template. We start by adding an output resource, in the child stack, with logical id UsersDDBTableName, and an export named UsersDDBTableExport.

Outputs:
    UsersDDBTableName:
        # extract the table name from the arn
        Value: !Select [1, !Split ['/', !GetAtt UserTable.Arn]] 
        Export:
            Name: UsersDDBTableExport

Note: For each AWS account, Export names must be unique within a region.

Then we create a second nested stack, which will contain two DynamoDB tables, one named UsersWithParameter and the second one UsersWithImportValue. The former is created by passing the table name from the first child stack as a parameter, and the latter by importing the value that has been exported UsersDDBTableExport.

(Note that this is just an example to showcase the two options for accessing resources between stacks; it is not a real-world scenario.)

For that, we added this stack definition to the root stack’s resources:

SecondChild:
    Type: AWS::CloudFormation::Stack
    Properties:
        TemplateURL: https://n47-cloudformation.s3.eu-central-1.amazonaws.com/child_stack_2.yml
        Parameters:
            TableName:
                Fn::GetAtt:
                  - DynamoDBTablesStack
                  - Outputs.UsersDDBTableName

Below is the entire content of the second child stack:

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Nested stack
Parameters:
    TableName:
        Type: String
        
Resources:
    UserTableWithParameter:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: customer_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: customer_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: 1
                WriteCapacityUnits: 1
            TableName: !Join ['', [!Ref TableName, 'WithParameter'] ]
    UserTableWithImportValue:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: customer_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: customer_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: 1
                WriteCapacityUnits: 1
            TableName: !Join ['', [!ImportValue UsersDDBTableExport, 'WithImportValue'] ]

Even though we achieved the same thing by using nested stack outputs and by exporting values, there is a difference between them. An exported value is accessible to any external stack within the same region, whereas nested stack outputs can only be passed as parameters to other nested stacks within the same parent.

Notes:

  • Cross-stack references across regions cannot be created. You can use the intrinsic function Fn::ImportValue to import only values that have been exported within the same region
  • You cannot delete a stack if another stack references one of its outputs
  • You cannot modify or remove an output value that is referenced by another stack
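To tie things together, this is one possible way to deploy the whole hierarchy from the command line; the root template file name, stack name and region value are assumptions, and the child templates have to be uploaded to the S3 bucket referenced by the TemplateURL properties first:

# upload the child templates referenced by TemplateURL
aws s3 cp child_stack.yml s3://n47-cloudformation/
aws s3 cp child_stack_2.yml s3://n47-cloudformation/

# create or update the root stack (and, through it, the nested stacks)
aws cloudformation deploy \
  --template-file root_stack.yml \
  --stack-name nested-stacks-demo \
  --parameter-overrides Stage=beta TestRegion=eu-central-1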

Below are some screenshots from the AWS console, illustrating the created stacks, from the code snippets shared above:

Figure 1: root stack containing two nested stacks

Figure 2: first nested stack containing Users DynamoDB table

Figure 3: second nested stack containing UsersWithImportValue and UsersWithParameter DynamoDB tables

You can download the source templates here.


If you have any questions or feedback, feel free to comment here.

Taiko, a useful toy for automation testing

Reading Time: 6 minutes

Every day we are implementing new features and client requirements. On every release, we want those changes to be correct and the previous features to keep working; in other words, we want a stable application. That is why it is necessary for both the backend and the frontend to have tests (unit and integration tests).

The best way is to have regression end-to-end automation tests. But it is not always fun to write them: sometimes it is complex and takes so much time that we avoid writing them at all. If the workload is larger, it may even require a dedicated QA team to cover all of this work, follow all changes and adapt the existing tests.

There are a few tools that make all this work easier: browser robots that record actions on web pages, and frameworks that offer good and easy ways of writing automation tests. But they are either too difficult to learn, sometimes hard to use, or not free.

That is why I chose Taiko, a free and open source browser test automation framework that makes all this work easy to do. A few features that are crucial for writing end to end automated tests in my opinion are:

  • Easy setup
  • Interactive recording
  • Smart selectors
  • Easy integration with Gauge

The best way to present all this is to go through some simple examples. I’ll use http://saucedemo.com/ to write a simple test for adding items in the cart.

I want almost everyone to be able to write tests; it should not take a complex procedure to set it up. Taiko is a free, open-source Node.js library and it works with Chromium-based browsers. Tests are written in JavaScript or any language that compiles to JavaScript (TypeScript).

This means that to start writing tests with Taiko we need Node.js pre-installed. It is a straightforward setup (https://nodejs.org/en/download/).

For the given example I used PowerShell on Windows, but you can use any terminal application you are familiar with. The command to install Taiko is:

npm install -g taiko

After a successful installation of Taiko, we will run the REPL prompt:

npx taiko

Here are two important features:

  • Interactive recording: Taiko will record all the successful commands that we write here
  • The use of Taiko’s API: we can list all available APIs with the command
.api

or

.api <api>

All these API references are online too: https://docs.taiko.dev/api/reference.

Simple example

Let’s write one basic test for http://saucedemo.com/. By writing the following commands in the prompt, we will verify that the saucedemo login, adding a product to the cart, and the basket all work:

await openBrowser();

// opens a new browser; I had Chromium and it opened without any other setup, because Taiko uses the Chrome DevTools Protocol instead of WebDriver

await goto("saucedemo.com");

// navigates to / opens the web page that we want to test

var passwords = await text("_", below("Password for")).text();

In this line of code we have a few key commands:

  • var passwords – we read the demo passwords text from the page and will use it to log in
  • text – selector – identifies an element on the page; it will look for a text element matching the given text, in our example “_”
  • below – proximity selector – performs a relative HTML element search; it will search for elements below “Password for” on the page

var usernames = await text("_", below("usernames")).text();
console.log(usernames);

// since this is JS, we can use this command too; it will be recorded as well. I used it to check the values; it can be removed from the final script

var username = usernames.split("\n")[1];
console.log(username);
var password = passwords.split("\n")[1];
console.log(password);
await write(username, into(textBox({id: "user-name"})));

After the username and password are read from the page, we will log in:

  • write – command that types given text into the given or focused element
  • into  – selector for the element to write text into
  • textBox – selector for a text field for input, selecting it with some attribute. In our case, it will be id, but it can be any attribute too

await write(password, into(textBox({id: "password"})));
await click("LOGIN");

// again a smart selector: it automatically looks for and clicks the LOGIN button

Since there are multiple products and we want to test a specific one, we will use a proximity selector to add that product to the cart. If we don’t add “toRightOf”, it will click the first component with an “ADD TO CART” label on it.

await click("ADD TO CART", toRightOf("$9.99"));
await click("ADD TO CART", toRightOf("$15.99"));

To ensure that the ADD TO CART functionality works, we will check whether the wanted products are in the basket:

await click(link({class: "shopping_cart_link"}));

Assertions are made implicitly by every command that looks for an element. For example, the command

await click("ADD TO CART", toRightOf("$9.998"));

will throw an error

[FAIL] Error: Element with text $9.998 not found, run `.trace` for more info.

but if we want to make some custom checks, we can use regular Node.js assertions (in a standalone script, the assert module has to be required first):

assert.strictEqual(await text("9.99").exists(), true);
assert.strictEqual(await text("15.99").exists(), true);
await click("menu");
await click("Logout");

With all these commands we have created one basic test scenario. All these commands have already been recorded, and we can write them out to a JS file so we can execute the test anytime:

.code testAddCart.js

And exit the recording session:

.exit

Running our previous test with:

npx taiko testAddCart.js
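For reference, the recorded file ends up looking roughly like the following (assembled from the commands above; the exact output of .code may differ slightly):

// testAddCart.js
const { openBrowser, goto, text, below, write, into, textBox, click, toRightOf, link, closeBrowser } = require('taiko');
const assert = require('assert');

(async () => {
    try {
        await openBrowser();
        await goto('saucedemo.com');
        // read the demo credentials from the login page
        const usernames = await text('_', below('usernames')).text();
        const passwords = await text('_', below('Password for')).text();
        const username = usernames.split('\n')[1];
        const password = passwords.split('\n')[1];
        await write(username, into(textBox({ id: 'user-name' })));
        await write(password, into(textBox({ id: 'password' })));
        await click('LOGIN');
        // add two specific products and open the cart
        await click('ADD TO CART', toRightOf('$9.99'));
        await click('ADD TO CART', toRightOf('$15.99'));
        await click(link({ class: 'shopping_cart_link' }));
        // custom checks
        assert.strictEqual(await text('9.99').exists(), true);
        assert.strictEqual(await text('15.99').exists(), true);
        await click('menu');
        await click('Logout');
    } finally {
        await closeBrowser();
    }
})();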

Other possibilities

Tests can be grouped and run with test runners. Three are supported: Gauge, Mocha and Jest. Try it with Gauge; it is an easy, straightforward procedure to set up. By using Gauge, we can integrate these tests into a build pipeline in Jenkins.
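As a rough sketch of what that integration could look like (the step texts, hooks and file name are assumptions, not taken from this article), a Gauge step implementation simply reuses the Taiko commands:

// tests/step_implementation.js (assumed file name in a Gauge JS project)
const { openBrowser, closeBrowser, goto, click, toRightOf } = require('taiko');

beforeSuite(async () => {
    await openBrowser();
});

afterSuite(async () => {
    await closeBrowser();
});

step("Open the shop", async () => {
    await goto('saucedemo.com');
});

step("Add the product priced <price> to the cart", async (price) => {
    await click('ADD TO CART', toRightOf(price));
});

Each step can then be referenced from a Gauge specification file and executed as part of the build pipeline.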

Conclusion

The setup is simple, very easy and fast.

The interactive way of writing tests, seeing the result of every command in real time, is very good for learning the library. You don’t have to go through a write-build-run cycle; you just write it in the REPL and immediately see the results.

But selecting elements on the page was not so satisfying. Smart selectors are not so smart when there are several similar elements: you have to fall back to XPath or class selectors and inspect the page to find the right attributes and values.

Tool Showcase: Node-RED

Reading Time: 5 minutes

Node-RED is a flow-oriented tool to wire together hardware devices, APIs and online services. It mainly targets the IoT market but can be used for a lot of other things as well. Because of its easy-to-use browser-based UI and drag-and-drop programming system, it is really beginner-friendly and quick to learn.

Even though it was developed by IBM in 2013, it’s not really known to most of the IT community. At least none of my colleagues knew it. That makes it worth writing this tool showcase. Enjoy reading!

Getting started

Instead of just reading along, I encourage everybody to just start Node-RED and try it yourself. If docker is installed, this is just a matter of seconds. Use the following command to start a Node-RED instance locally:

docker run -it -p 1880:1880 --name mynodered nodered/node-red

That’s it. You are ready to go! Open your browser and go to localhost:1880 to access the Node-RED UI.

One of the simplest flows is the following one:

Drag an “http in”, “template” and “http out” node into the flow and connect them. After clicking the deploy button you can access localhost:1880/<whateverPathYouConfiguredInYourHttpInNode> to see whatever you’ve configured in your template node. Congratulations, you have just created your first Node-RED flow!
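Since flows are stored as JSON, a trimmed export of this three-node flow could look roughly like the following (the ids, the /hello path and the template text are made up; a real export also contains coordinates and a flow id):

[
    { "id": "in1",   "type": "http in",       "url": "/hello", "method": "get", "wires": [["tmpl1"]] },
    { "id": "tmpl1", "type": "template",      "field": "payload", "template": "Hello from Node-RED!", "wires": [["out1"]] },
    { "id": "out1",  "type": "http response", "wires": [] }
]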

Of course, rendering static content on an endpoint is not the most exciting thing to do, but between the HTTP in and out nodes, you’re free to do whatever you want. Nodes to make HTTP calls to different URLs, reading and writing files and much more are provided by Node-RED by default.

The power of the community

Node-RED uses Node.js for its nodes (yes, the terminology “node” is overused in the Node-RED context 🙂 ). This has a big advantage: new nodes can be added easily from the node package manager (npm). You can find these nodes by searching for “node-red-contrib” in the npm repository. An even simpler option is to install these nodes using the “Manage Palette” option in the UI. There you can install new nodes with a single click.

There are nodes for almost everything. Need support for slack? Yep, there’s a node for that. Tired of pressing light switches in your house to turn off and on your Philips Hue lights? Yep, there’s a node for that as well. Whatever you can imagine, there’s a node for it.

A slightly more advanced flow

To test some Node-RED features, I tried to come up with a slightly more complicated example. I own some Philips Hue lamps and a LaMetric Time. After searching some nodes for these devices, I was really surprised that somebody already built some for these two devices (I was especially surprised about the support for the not so well-known LaMetric Time).

My use case was pretty straight forward. Turn on the lights when it gets dark and display a message on my LaMetric near my TV. At midnight, turn off the lights and display a different message. Additionally, I wanted some web endpoints that I could call to trigger both actions manually.

After only a few minutes, I had the following flow:

And it works! I found a node that sends an event as soon as the sun goes down at my particular location. Very cool. All the other nodes (integration for Philips Hue and LaMetric) can also easily be added with the “Manage Palette” option in the GUI. All in all, the implementation of my example use-case was pretty straight forward and required no programming know-how.

Expandability

Even though there are almost 3000 community-contributed nodes available to use, you might have some hardware or API that does not (yet) have some pre-made nodes. In that case, you can implement your own nodes pretty easily. The only thing required is a text editor and some node.js know-how.

The Node-RED documentation provides a good guide on how to create custom nodes: https://nodered.org/docs/creating-nodes/first-node

It is highly recommended to push your custom nodes to the npm repository to be used by the community.

Additional Resources

There are a whole lot more features that are not described in this blogpost.

  • Flows are just .json files and can easily be imported or exported or added to a git repository
  • Flows can be converted to subflows and used like nodes in other flows
  • Multiple flows can run in parallel and trigger each other
  • There are special nodes for error handling or low-level TCP communication
  • There are keyboard shortcuts for everything
  • … and much more!

Feel free to have a look yourself:

Thanks for reading!

A simple way of using Micrometer, Prometheus and Grafana (Spring Boot 2)

Reading Time: 7 minutes

When we run any Java application, we are running a JVM. That JVM uses resources like memory, processor etc. The same happens when we run any Spring application; it runs and uses our hardware resources. Monitoring and measuring these parameters is crucial when we are in production or when we want to test the performance of our application. With Spring it is easy: we just need to include Spring Actuator and it will give us access to almost all the measurements we need, like:

"jvm.memory.max",
"jvm.threads.states",
"jvm.gc.memory.promoted",
"jvm.memory.used",
"jvm.gc.max.data.size",
"jvm.gc.pause",
"jvm.memory.committed",
"system.cpu.count",
"logback.events",
…

To set up Spring Actuator, add the following dependency to our project:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

and on the following endpoint:

<host/context-path>/actuator

we will have basic links to additional features of the application and monitoring:

{
    "_links": {
        "self": {
            "href": "http://localhost:8080/actuator",
            "templated": false
        },
        "health": {
            "href": "http://localhost:8080/actuator/health",
            "templated": false
        },
        "health-path": {
            "href": "http://localhost:8080/actuator/health/{*path}",
            "templated": true
        },
        "info": {
            "href": "http://localhost:8080/actuator/info",
            "templated": false
        }
    }
}

If this basic information is not enough, we can extend it by adding the following parameter to the application configuration file:

management.endpoints.web.exposure.include=*

By following any of these links, we can access the details. For our use case it will be http://localhost:8080/actuator/metrics, from which we are going to access the metrics of our application.

Now we have almost everything we need to monitor how our application performs: requests, JVM memory, cache, threads etc.

Micrometer

However, if we have more logic in our code and need more precise metrics for it, we will need another way to get them. Spring Boot 2 Actuator enriches all these already existing metrics through the Micrometer data provider.

Micrometer is a dimensional-first metrics collection facade whose aim is to allow you to time, count, and gauge your code with a vendor-neutral API.

Moreover, Micrometer is a vendor-neutral data provider and exposes application metrics to other external monitoring systems like Prometheus, AWS CloudWatch etc.

Micrometer provides a set of Meter primitives, including Timer, Counter, Gauge, DistributionSummary, LongTaskTimer, FunctionCounter, FunctionTimer and TimeGauge. Here we should be aware that each meter type produces a different number of time-series metrics: a gauge has a single metric, while a timer has both a count of timed events and a total time of all timed events.

If we write something like this in our code:

List<Integer> gaugeList = registry.gauge("dummy.gauge.list", Collections.emptyList(), someList, List::size);
List<Integer> gaugeCollectionsSizeList = registry.gaugeCollectionSize("dummy.size.list", Tags.empty(), someList);
Map<Integer, Integer> gaugeMapSize = registry.gaugeMapSize("dummy.gauge.map", Tags.empty(), someMap);

registry.timer("dummy.timer", Tags.empty()).record(() -> {
    slowDummyMethod();
});

We will have three time series for the Timer (dummy_timer_seconds_count, dummy_timer_seconds_max, dummy_timer_seconds_sum) and three gauges (dummy_gauge_list, dummy_size_list, dummy_gauge_map).
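The other meter types follow the same pattern. For example, a Counter could be registered and incremented like this (a sketch; the metric name is made up):

// registers (or reuses) a counter and increments it by one
Counter dummyCounter = registry.counter("dummy.counter", Tags.empty());
dummyCounter.increment();
// Prometheus will expose it as dummy_counter_total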

All this data can be consumed by many monitoring systems like Netflix Atlas, CloudWatch, Datadog, Ganglia etc. Here, in our case, we will use Prometheus.

Prometheus

Including Prometheus support in our project is as easy as adding the Maven dependency:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

This will create a new actuator endpoint, <host/context-path>/actuator/prometheus. If we access this URL, we will get the metrics from Micrometer.

To see this data in a graphical UI we will have to start a Prometheus server. We can do that directly by downloading the Prometheus server and running it:

https://prometheus.io/download/

The configuration is in the prometheus.yml file.

Basic parameters that we should set up here are:

global:
  scrape_interval:     10s # Scrape interval to every 10 seconds. Default value is every 1 minute.

and

scrape_configs:
  - job_name: 'spring_micrometer'

    metrics_path: '/micromexample/actuator/prometheus' # Path to the prometheus end point in our application. “micromexample” is the context and “actuator/prometheus” is default path for prometheus in our application
    static_configs:
    - targets: ['localhost:8080'] # host where our application is deployed

Another way to get a Prometheus server is to run a Docker image that contains Prometheus. We can do that with the following command:

docker run -d -p 9090:9090 -v <yours-prometheus-config-file.yml>:/etc/prometheus/prometheus.yml prom/prometheus

“9090” – the port where our Prometheus will listen, this value is the default port

<yours-prometheus-config-file.yml> – our configuration file for Prometheus

“prom/prometheus” – docker image with Prometheus

After we run the Spring Boot application with Prometheus included and start the Prometheus server, we should be able to see the metrics in a basic view from Prometheus at

http://localhost:9090/graph

This is what we should get from our service:

For this graph, we wrote the following code (to have something that lets us verify everything works):

registry.timer("dummy.timer ", Tags.empty()).record(() -> {
    slowDummyMethod();
});

Grafana

If we want a rich graphical UI that makes it easy to browse through the metrics data, edit dashboards and integrate with cloud monitoring, then it is a good idea to use Grafana.

Setting up Grafana is similar to Prometheus, we will need a Grafana server.

Again, we can download and install it locally, so that it runs as a service in our OS:

https://grafana.com/get

Or run docker image with Grafana in it:

docker run -d -p 3000:3000 grafana/grafana

“3000” – port for grafana

“grafana/grafana” – docker image with grafana

Default user and password are admin/admin. On the first login, you will be asked to add a new password.

After we log in, we should add the source from which Grafana will read the metrics. Go to the left menu: Configuration -> Data Sources, choose the “Data Sources” tab and add a new data source with “Add data source”.

Since we decided to go with Prometheus, we will select the Prometheus source. On the new page (Configuration), because we did not set up any authentication or anything else in Prometheus (everything is default), we just need to set the HTTP -> URL field. In our case it will be “http://localhost:9090”. If everything is OK, clicking “Save and test” should give us a green bar confirming that Grafana is connected to Prometheus and can access its metrics.

Let’s see our first metrics from the timer that we added in our application. For this one we will create our own new dashboard:

Choose “Add Query” and in the new window add the following key in the “Metrics” field: “dummy_timer_seconds_count”. This will add one metric to our graph.

In the same graph, we can add the second one from the timer “dummy_timer_seconds_max”. With this, we will have both metrics in the same graph.
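Since the raw counter only ever grows, it can also be useful to plot how often the timed method is called over time. A PromQL expression like the following can be used in the same query field (a sketch based on the metric from our example):

rate(dummy_timer_seconds_count[5m])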

There are other parameters that you can set, but for basic setup default values are fine.

With this, we have set up everything we need for monitoring our application. Next is to add more graphs for metrics that we want to monitor.

Can Siri finally understand more than the predefined Intents?

Reading Time: 3 minutes

GUI is so 90’s

Lately, I find myself increasingly annoyed at having to use my phone to perform boring and recurring tasks, like looking up the quote of a specific cryptocurrency, especially in certain circumstances, eg when I’m at home. At home, I want to feel comfortable above all. This is hard to achieve when I have to get up to search for my phone, once again. Wouldn’t it be nice to just ask in the room and get the answers?

Wait but there is Siri, right? So what can it do for me and what can developers achieve with it today?

Hey Siri, are we there yet?

It turns out that SiriKit offers a set of predefined intents ready to be used. That’s a start, but those cannot handle my specific requests, and I guess a lot of others as well. To be fully usable, something like custom parametrized intents would be needed. I would like Siri to understand something like:

“Hey Siri, what’s the price of <your cryptocurrency> in <your currency>?”

To be fair, when asking this for Bitcoin and USD you would get an answer. Depending on how the question is formulated, Siri would either start the Stocks app in preview or get something from the web. But when trying to get an answer for other, rather “unknown” cryptocurrencies, Siri struggles. I totally get that this question may seem fairly simple for a human to process, but it is certainly not that simple for Siri to filter out the domain in question and start the “right” app for the job.

Hence, I would also be satisfied with something in the form of a Q&A dialogue for the beginning:

> “Hey Siri, cryptocurrency price”

< “For what cryptocurrency?”

> “Bitcoin”

< “In what currency?”

> “USD”

In that way, developers could assign input dialogues to specific keywords (in this case “cryptocurrency price”) to collect the parameters, process them and render a response, something similar to URL schemes.

After looking a bit deeper I stumbled upon an interesting blog post which clarified it for me:

There are also some hands-on blog posts on how to set up “Custom Intents”:

I’ll just wait here then

Since iOS 12 it has been possible to create a custom Intent in the form of an Intents.intentdefinition file. Here app developers can specify parameters which the app can process. To stick with the cryptocurrency example: when the user is searching for the price of a cryptocurrency inside an app, the app can create an Intent with the parameters already filled out, eg show the price of Bitcoin in USD. Furthermore, the app can then “donate” this specific Intent (already parametrized) to the system. This “donation” would appear on the lock screen and as a shortcut ready to be used.
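As a rough sketch of what such a donation could look like in code (assuming the intent definition file generates a class named ShowCryptoPriceIntent with coin and currency parameters; these names are hypothetical):

import Intents

// donate an already parametrized custom intent so that it shows up
// on the lock screen and can be assigned a Siri voice command
func donateShowPriceIntent(coin: String, currency: String) {
    let intent = ShowCryptoPriceIntent()  // generated from the .intentdefinition file (assumed name)
    intent.coin = coin
    intent.currency = currency
    intent.suggestedInvocationPhrase = "Show the price of \(coin) in \(currency)"

    let interaction = INInteraction(intent: intent, response: nil)
    interaction.donate { error in
        if let error = error {
            print("Intent donation failed: \(error.localizedDescription)")
        }
    }
}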

This means one could assign a custom Siri voice command to trigger this Intent. It also means that if you have 5 favourite cryptocurrencies and 3 favourite currencies you would have to go through this step 15 times inside the app. Afterwards, you would need to assign 15 voice commands to those donations.

Well, honestly this is not the way I would like it to be. But it’s a start and I hope that with iOS 13 we get something like parametrized Intents for the user to trigger.