Spring Boot REST API with OpenAPI (SwaggerUI) Codegen

Reading Time: 5 minutes

When working with microservices architecture, one of the most important aspects is inter-service communication. Usually, each microservice stores data in its own database, and if we follow the MVC design pattern, we probably have model classes that map the relational database to object models, and components that contain methods for performing CRUD operations. These components are exposed by controller endpoints.

For one microservice to call another, the caller needs to know the exact request and response model classes. This article shows a simple example of how to generate such models with SpringDoc OpenAPI.

I will create two services that provide basic CRUD operations. For demonstration purposes, I chose to store data about vehicles:

  • vehicle-manager – the microservice that provides vehicle data to the client
  • vehicle-manager-client – the client microservice that requests vehicle data

For the purpose of this tutorial, I created empty Spring Boot projects via Spring Initializr.

In order to use OpenAPI in our Spring Boot project, we need to add the following Maven dependency to our pom file:

<dependency>
  <groupId>org.springdoc</groupId>
  <artifactId>springdoc-openapi-ui</artifactId>
  <version>1.5.5</version>
</dependency>

In the vehicle-manager microservice I created a Vehicle class that looks like this:

@Data
@Builder
@Schema(name = "Vehicle", description = "Example vehicle schema")
public class Vehicle {
    private VehicleType vehicleType;
    private String registrationPlate;
    private int seatsCount;
    private Category category;
    private double price;
    private Currency currency;
    private boolean available;
}
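The Vehicle class references the VehicleType, Category and Currency enums, which are not shown in the snippet; their values can be read from the generated OpenAPI document further below, so a minimal reconstruction would be:

public enum VehicleType { MOTORBIKE, CAR, VAN, BUS, TRUCK }

public enum Category { A, B, C, D, E }

public enum Currency { EUR, USD, CHF, MKD }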

And a controller:

package com.n47.vehiclemanager.ctrl;

import com.n47.vehiclemanager.model.Vehicle;
import com.n47.vehiclemanager.service.VehicleService;
import io.swagger.v3.oas.annotations.tags.Tag;
import lombok.RequiredArgsConstructor;
import org.springframework.web.bind.annotation.*;

import javax.validation.Valid;

@Tag(name = "vehicle", description = "Vehicle controller API")
@RestController
@RequiredArgsConstructor
@RequestMapping(path = "/vehicle")
public class VehicleCtrl {

    private final VehicleService vehicleService;

    @PostMapping(path = "/add")
    public void addVehicle(@RequestBody @Valid Vehicle vehicle) {
        vehicleService.addVehicle(vehicle);
    }

    @GetMapping(path = "/get")
    public Vehicle getVehicle(@RequestParam String registrationPlate) throws Exception {
        return vehicleService.getVehicle(registrationPlate);
    }
}
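The VehicleService itself is not shown in the original post. Since the vehicle data is hardcoded for this demo, a minimal in-memory sketch (an assumption, not the author's actual implementation) could look like this:

package com.n47.vehiclemanager.service;

import com.n47.vehiclemanager.model.Vehicle;
import org.springframework.stereotype.Service;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

@Service
public class VehicleService {

    // in-memory store keyed by registration plate, enough for the demo
    private final Map<String, Vehicle> vehicles = new ConcurrentHashMap<>();

    public void addVehicle(Vehicle vehicle) {
        vehicles.put(vehicle.getRegistrationPlate(), vehicle);
    }

    public Vehicle getVehicle(String registrationPlate) throws Exception {
        Vehicle vehicle = vehicles.get(registrationPlate);
        if (vehicle == null) {
            throw new Exception("No vehicle found for plate: " + registrationPlate);
        }
        return vehicle;
    }
}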

The important OpenAPI annotations here are @Schema and @Tag. The former defines the actual class that should be included in the API documentation. The latter is used for grouping operations, such as all methods under one controller.

The Swagger documentation interface for the vehicle-manager microservice is shown in Figure 1; with springdoc's defaults it is served at http://localhost:8080/swagger-ui.html once the application is running.

If we open http://localhost:8080/api-docs in our browser (or whichever port we set our Spring Boot app to run on), we get the entire OpenAPI document for the vehicle-manager microservice. The part that matters for model generation is right under components/schemas, while the controller endpoints are under paths.

{
   "openapi":"3.0.1",
   "info":{
      "title":"OpenAPI definition",
      "version":"v0"
   },
   "servers":[
      {
         "url":"http://localhost:8080",
         "description":"Generated server url"
      }
   ],
   "tags":[
      {
         "name":"vehicle",
         "description":"Vehicle controller API"
      }
   ],
   "paths":{
      "/vehicle/add":{
         "post":{
            "tags":[
               "vehicle"
            ],
            "operationId":"addVehicle",
            "requestBody":{
               "content":{
                  "application/json":{
                     "schema":{
                        "$ref":"#/components/schemas/Vehicle"
                     }
                  }
               },
               "required":true
            },
            "responses":{
               "200":{
                  "description":"OK"
               }
            }
         }
      },
      "/vehicle/get":{
         "get":{
            "tags":[
               "vehicle"
            ],
            "operationId":"getVehicle",
            "parameters":[
               {
                  "name":"registrationPlate",
                  "in":"query",
                  "required":true,
                  "schema":{
                     "type":"string"
                  }
               }
            ],
            "responses":{
               "200":{
                  "description":"OK",
                  "content":{
                     "*/*":{
                        "schema":{
                           "$ref":"#/components/schemas/Vehicle"
                        }
                     }
                  }
               }
            }
         }
      }
   },
   "components":{
      "schemas":{
         "Vehicle":{
            "type":"object",
            "properties":{
               "vehicleType":{
                  "type":"string",
                  "enum":[
                     "MOTORBIKE",
                     "CAR",
                     "VAN",
                     "BUS",
                     "TRUCK"
                  ]
               },
               "registrationPlate":{
                  "type":"string"
               },
               "seatsCount":{
                  "type":"integer",
                  "format":"int32"
               },
               "category":{
                  "type":"string",
                  "enum":[
                     "A",
                     "B",
                     "C",
                     "D",
                     "E"
                  ]
               },
               "price":{
                  "type":"number",
                  "format":"double"
               },
               "currency":{
                  "type":"string",
                  "enum":[
                     "EUR",
                     "USD",
                     "CHF",
                     "MKD"
                  ]
               },
               "available":{
                  "type":"boolean"
               }
            },
            "description":"Example vehicle schema"
         }
      }
   }
}

I am going to create the vehicle-manager-client service, running on port 8082, which will get vehicle information for a given registration plate by calling the vehicle-manager microservice. To do so, we need to generate the Vehicle model class defined in the original vehicle-manager microservice. We can generate it by adding the Swagger codegen plugin to the pom's plugins section of the new service, like this:

<profiles>
  <profile>
    <id>generateModels</id>
    <build>
      <plugins>
        <plugin>
          <groupId>io.swagger.codegen.v3</groupId>
          <artifactId>swagger-codegen-maven-plugin</artifactId>
          <version>3.0.11</version>
          <configuration>
            <output>${project.basedir}</output>
            <inputSpec>default-config</inputSpec>
            <language>java</language>
            <generateModels>true</generateModels>
            <generateModelDocumentation>false</generateModelDocumentation>
            <generateApis>false</generateApis>
            <generateApiTests>false</generateApiTests>
            <generateModelTests>false</generateModelTests>
            <generateSupportingFiles>false</generateSupportingFiles>
            <configOptions>
              <sourceFolder>src/main/java</sourceFolder>
              <hideGenerationTimestamp>true</hideGenerationTimestamp>
              <sortParamsByRequiredFlag>true</sortParamsByRequiredFlag>
              <checkDuplicatedModelName>true</checkDuplicatedModelName>
              <useBeanValidation>true</useBeanValidation>
              <library>feign</library>
              <dateLibrary>java8-localdatetime</dateLibrary>
            </configOptions>
          </configuration>
          <executions>
            <execution>
              <id>generate-vehiclemanager-classes</id>
              <goals>
                <goal>generate</goal>
              </goals>
              <configuration>
                <inputSpec>http://localhost:8080/api-docs</inputSpec>
                <language>java</language>
                <modelPackage>com.n47.domain.external.model</modelPackage>
                <modelsToGenerate>Vehicle</modelsToGenerate>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

After running the corresponding maven profile with:

> mvn clean compile -P generateModels

the models listed in the <modelsToGenerate> tag will be created under the package specified in the <modelPackage> tag.

Codegen generates the entire model class for us, together with all classes that are defined inside it.

It is important to note that we can have models generated from several different services. In each <execution> block of the XML snippet above, we can point the <inputSpec> tag to the corresponding service's API documentation link.

To demo the data transfer from vehicle-manager to vehicle-manager-client, we can send a simple request via Postman. The request I am going to use is a GET request that accepts a registrationPlate parameter, which is used to query the vehicles stored in the vehicle-manager microservice. The response, shown in Figure 3, is a JSON containing the vehicle data that I hardcoded in the vehicle-manager microservice.
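The original post shows this call in Postman; as an illustration, a hypothetical endpoint in vehicle-manager-client could forward the request with Spring's RestTemplate, using the generated model (the controller name and path here are assumptions):

package com.n47.vehiclemanagerclient.ctrl;

import com.n47.domain.external.model.Vehicle;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.client.RestTemplate;

@RestController
@RequestMapping("/client/vehicle")
public class VehicleClientCtrl {

    private final RestTemplate restTemplate = new RestTemplate();

    @GetMapping("/get")
    public Vehicle getVehicle(@RequestParam String registrationPlate) {
        // call vehicle-manager and map the response onto the generated model
        return restTemplate.getForObject(
                "http://localhost:8080/vehicle/get?registrationPlate={plate}",
                Vehicle.class, registrationPlate);
    }
}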

Using OpenAPI helps us get rid of copy-pasted and boilerplate code, and more importantly, gives us an automated mechanism that regenerates the latest models from other microservices on every Maven clean compile.

You can find the full code example microservices in the links below:

Feel free to download and run them yourself, and leave a comment or feedback.

Spring Boot password encryption with Jasypt

Reading Time: 5 minutes

Securing sensitive data is extremely important. In the following tutorial, we will go through the process of encrypting sensitive data in a Spring Boot application. We will take an easy approach to this very common procedure found in almost any software project: a simple setup with a few configuration steps, without requiring in-depth knowledge of cryptography, encryption capabilities or encryption algorithms. It is recommended to rely on the secure default configuration, but Jasypt also offers quite some customization if you need it.

Jasypt (Java Simplified Encryption) provides utilities for encrypting sensitive user information, such as DB passwords, server credentials, or other sensitive personal data. This information is key to our users' privacy, so we as developers need to make sure that no one gets access to it: regardless of where it is stored, it always needs to be encrypted. Never store sensitive data in plain text. It's common sense we need to follow, and it's also something we need to honour if we want to gain our users' trust. For this tutorial, we will use a specific library, Jasypt Spring Boot Starter, widely used across the Spring Boot community.

Jasypt setup steps

  1. Add jasypt-spring-boot-starter maven dependency in the pom.xml of the Spring Boot project
  2. Select a secret key to be used for encryption and decryption
  3. Generate Encrypted Key
  4. Add the Encrypted key in the config file
  5. Run the application

Let’s go into details in all of these steps:

Step 1. Adding maven dependency

<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot-starter</artifactId>
    <version>3.0.3</version>
</dependency>

This comes as a regular entry in the pom.xml file list of dependencies.

We should try to use the latest stable version. The version above is the latest at the time of writing, and it offers better support for newer versions of Spring Boot (2.1.x and upwards). It also comes with a more secure encryption algorithm by default, “PBEWITHHMACSHA512ANDAES_256”.

Step 2. Select a secret key to be used for encryption and decryption

This secret key will be used to encrypt and decrypt the data. You can think of it as a safeguard that further improves security: in effect, it adds a random string to the beginning or end of the input text prior to hashing or encrypting the value. This secret key goes in the property file, application.yml/application.properties, in the Spring Boot project itself.

jasypt:
     encryptor:
           password: salting

Step 3. Generate Encrypted Key

Jasypt supplies a number of command-line (CLI) tools. In order to use them, you should download the distribution zip file (named jasypt-$VERSION-dist.zip) and unzip it. There is an appropriate .bat or .sh file for each operation: digest/encrypt/decrypt.

Example for encryption

$ ./encrypt.sh input="This is my message to be encrypted" password=MYPASSWORD

Example for decryption

$ ./decrypt.sh input="8fsdfdsafdsa9ffsad0fdsa0fdsfdsa3231x" password=MYPASSWORD

Another way of using Jasypt for encrypting your data is by using some online tools that provide Jasypt operations.

The simplest and most convenient way is the Maven plugin. Not only can you use it for a single value, it also offers the capability to encrypt all sensitive data with a single command, meaning all placeholders are updated in one step.

<build>
  <plugins>
    <plugin>
      <groupId>com.github.ulisesbocchio</groupId>
      <artifactId>jasypt-maven-plugin</artifactId>
      <version>3.0.3</version>
    </plugin>
  </plugins>
</build>

This jasypt-maven-plugin, by default, will check for configuration files under ./src/main/resources, or the regular Spring Boot resource folders. An environment variable can also be used to supply the master password: instead of exposing the password “salting” inside the project itself, create an environment variable such as ENCRYPTION_MASTER_PASSWORD and reference it in the config file as password: ${ENCRYPTION_MASTER_PASSWORD}.
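For example, the configuration from step 2 would then look like this (ENCRYPTION_MASTER_PASSWORD being whatever variable name you chose):

jasypt:
  encryptor:
    password: ${ENCRYPTION_MASTER_PASSWORD}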

Example for encrypting a single value from a terminal.

This example passes the encryption password as an argument. Important: the terminal session needs to be opened in the directory where the pom.xml file with the Maven plugin is located.

mvn jasypt:encrypt-value -Djasypt.encryptor.password=salting -Djasypt.plugin.value="secureDataWeNeedToEncrypt"

Example for encrypting all strings within the project's property file.

The last argument is optional, since Jasypt will scan that location anyway. What is important is that sensitive placeholders in the application property file MUST be wrapped in DEC(), for example:

Activedirectory:
  password: DEC(supersecret)
OracleDB:
  password: DEC(alsosupersecret)

mvn jasypt:encrypt -Djasypt.encryptor.password=salting -Djasypt.plugin.path="file:src/main/resources/application.yml"

If the previous command completed successfully, all sensitive data should be updated with its encrypted value. The updated properties output should be something like:

Activedirectory:
  password: ENC(sFJDfdsfjjA8saT7YC65bsf71d0)
OracleDB:
  password: ENC(34jjfsdfds+fds/fsd7Hs)

Step 4. Add the encrypted key in the config file

If you used the Maven plugin approach just shown, the application.properties/application.yml files have already been updated with the newly encrypted values: all sensitive data wrapped in DEC() is now encrypted, and all other strings in the configuration remain unchanged. If one of the other approaches was chosen, going one placeholder at a time or using the CLI, then we need to update the configuration file entries one by one. The properties still need to be wrapped in ENC(), since the output of the CLI is only the encrypted value.
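At runtime, jasypt-spring-boot transparently decrypts ENC() properties, so application code consumes them as ordinary values; a minimal sketch (the property name is illustrative):

@Value("${activedirectory.password}")
private String adPassword; // already decrypted when injected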

For the reverse process it's vice versa: use the decrypt goal instead, and all placeholders must be wrapped in ENC() before execution.
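A sketch of the corresponding command, assuming the same master password as above (the plugin prints the decrypted configuration rather than persisting it):

mvn jasypt:decrypt -Djasypt.encryptor.password=salting -Djasypt.plugin.path="file:src/main/resources/application.yml"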

Step 5. Run the application

That’s it. Your Spring Boot project will automatically decrypt all sensitive data when you start the application; no additional configuration is needed. Let me know in the comments section how your experience was. Was it smooth, or are there some ongoing issues?

Deploy microservice on Kubernetes, step by step

Reading Time: 11 minutes

In this tutorial, I will try to explain step by step how you can set up Kubernetes, deploy your microservice on it, and check the result via the Kubernetes dashboard. All other things will be kept “as simple as possible”. Google Cloud (gcloud) will be used as the cloud platform. We will cover the following aspects of the problem:

  1. Create microservice to be deployed
  2. Place the application in a Docker container
  3. What is Kubernetes and how to install it?
  4. Create a new Kubernetes project
  5. Create new Cluster
  6. Allow access from your local machine
  7. Create service account
  8. Activate service account
  9. Connect to cluster
  10. Gcloud initialization
  11. Generate access token
  12. Deploy and start Kubernetes dashboard
  13. Deploy microservice

Step 1: Create microservice to be deployed

Traditionally, in the programming world, everything starts with “Hello World”. So, as mentioned previously, to keep things simple, create a microservice that returns just “Hello World”. You can use https://start.spring.io/ for this. Create a HelloController like this:

package com.example.demojooq.controllers;


import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/v1")
public class HelloController {

    @GetMapping("/say-hello")
    public String sayHello() {
        return "Hello world";
    }
}

Step 2: Place the application in a Docker container

We have a microservice; now we need to put it in a Docker container and upload it to Kubernetes. From that point, Kubernetes will orchestrate the container according to your settings. Let’s create the first image from the microservice. As you might guess, the build file is called Dockerfile (without any extension), and its content is:

Dockerfile

FROM adoptopenjdk/openjdk11:jre-11.0.8_10-debianslim
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","app.jar"]

The next step is to create the docker-compose file, which calls the Dockerfile to build the image. You can build it manually, but the best way is from a docker-compose file, as you then have a permanent track of the setup. This is a .yaml file:

docker-compose.yaml

version: "3"
services:
  hello-world:
    build: .
    ports:
      - "8099:8080"

After starting Docker, go to the folder where docker-compose.yaml is located and execute the command docker-compose up. The expectation is to reach this microservice on port 8099. If everything is OK, you will see the running container in Docker.

Check the microservice’s Docker installation with Postman by calling http://localhost:8099/api/v1/say-hello. The response is “Hello world”.
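The same check can be done from the command line:

curl http://localhost:8099/api/v1/say-hello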

Step 3: What is Kubernetes and how to install it?

What is Kubernetes?

Kubernetes is an open-source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications. What happens when you use Docker and your container fails? Probably the first thing to do is to restart your container, and without Kubernetes you do that manually. Kubernetes observes that the container is down and starts a new one automatically. This is just a basic use case; there is plenty more information about it on the internet.

How to install Kubernetes?

OK, by now you are sure that Kubernetes is needed, but where do you find it, what are the costs, and how do you install it? First of all, try googling “download Kubernetes” and pick the site https://kubernetes.io/docs/tasks/tools/. Options for Windows, Mac and Linux appear, along with different installations like kind, minikube, kubeadm… So, is it worth spending so much time setting up Kubernetes properly? You do not have to ask me, I agree with you: it is too much time. Fortunately, we can skip all of that: go to Google Cloud, where Kubernetes is offered as a service, and just use it. Somebody else takes care of the installation; we can focus on the business logic in our microservice and use the out-of-the-box Kubernetes installation from Google Cloud. Sounds good, doesn’t it?

The last and most important question: money. Is it free? No, it is not. You have to pay for the Google Cloud services, and here is the price list: https://cloud.google.com/kubernetes-engine/pricing. But for ordinary people like you and me, Google Cloud offers a free account for 3 months, up to $300, and it seems fair. It is enough time to learn about deploying microservices on Kubernetes; for any professional use in the future, the company should stand behind this. Here is the link where you can create your free cloud account: https://cloud.google.com/. One more thing: during the creation of a free account, Google will ask for your bank account in order to charge you automatically. But do not worry, you are safe for the first three months and below $300, and you will be asked for permission before any charging. So far my personal experience is positive, as Google keeps its promise when you create an account. But the final decision is up to you.

Step 4: Create new Kubernetes project

Open up your Google account, sign in and go to the console.

Create a new project from the main dashboard; the name of the new project is “hello-world”. From now on, this is your active project.

Step 5: Create new cluster

Create a new cluster (name it cluster2). Accept the default values for the other fields.

Step 6: Allow access from your local machine

Now, we must allow access from our local machine to Kubernetes, via kubectl. For that purpose, we need to follow these steps:

  1. Click on cluster2
  2. Find your local IP address and add it, in CIDR notation (e.g. 203.0.113.5/32), under Edit control plane authorized networks

Step 7: Create service account

Give the new account the role “Owner” and accept the default values for the other fields. After the service account is created, you should have something like this:

Generate keys for this service account with key type JSON. When the key is downloaded, it has a random name like hello-world-315318-ab0c74d58a70.json. Keep this file in a safe place; we will need it later.

Now, install the Google Cloud SDK Shell on your machine according to your OS. Let’s do the configuration so kubectl can reach cluster2. Copy the file hello-world-315318-ab0c74d58a70.json into the Cloud SDK folder. For the Windows environment, it looks like this:

Step 8: Activate service account

The first thing to do is to activate the service account with the command: gcloud auth activate-service-account hello-world-service-account@hello-world-315318.iam.gserviceaccount.com --key-file=hello-world-315318-ab0c74d58a70.json

Step 9: Connect to cluster

Now go to cluster2 again and find the connection string to connect to the new cluster.

Execute this connection string in Google Cloud Shell: gcloud container clusters get-credentials cluster2 --zone us-central1-c --project hello-world-315318

Step 10: Gcloud initialization

The next command to execute is gcloud init, to initialize connection with the new project. Here is the complete code on how to do that from the Gcloud Shell:

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>gcloud init
Welcome! This command will take you through the configuration of gcloud.

Settings from your current configuration [dev] are:
accessibility:
  screen_reader: 'False'
compute:
  region: europe-west3
  zone: europe-west3-a
core:
  account: hello-world-service-account@hello-world-315318.iam.gserviceaccount.com
  disable_usage_reporting: 'True'
  project: dops-containers

Pick configuration to use:
 [1] Re-initialize this configuration [dev] with new settings
 [2] Create a new configuration
 [3] Switch to and re-initialize existing configuration: [database-connection]
 [4] Switch to and re-initialize existing configuration: [default]
Please enter your numeric choice:  2

Enter configuration name. Names start with a lower case letter and
contain only lower case letters a-z, digits 0-9, and hyphens '-':  hello-world
Your current configuration has been set to: [hello-world]

You can skip diagnostics next time by using the following flag:
  gcloud init --skip-diagnostics

Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).

Choose the account you would like to use to perform operations for
this configuration:
 [1] cicd-worker@devops-platform-n47.iam.gserviceaccount.com
 [2] d.trifunov74@gmail.com
 [3] dimche.trifunov@north-47.com
 [4] dtrifunov@lunar-sled-314616.iam.gserviceaccount.com
 [5] hello-world-service-account@hello-world-315318.iam.gserviceaccount.com
 [6] service-account-demo-dime@blissful-epoch-305214.iam.gserviceaccount.com
 [7] Log in with a new account
Please enter your numeric choice:  5

You are logged in as: [hello-world-service-account@hello-world-315318.iam.gserviceaccount.com].

API [cloudresourcemanager.googleapis.com] not enabled on project
[580325979968]. Would you like to enable and retry (this will take a
few minutes)? (y/N)?  y

Enabling service [cloudresourcemanager.googleapis.com] on project [580325979968]...
Operation "operations/acf.p2-580325979968-f1bf2515-deea-49d5-ae35-a0adfef9973e" finished successfully.
Pick cloud project to use:
 [1] hello-world-315318
 [2] Create a new project
Please enter numeric choice or text value (must exactly match list
item):  1

Your current project has been set to: [hello-world-315318].

Do you want to configure a default Compute Region and Zone? (Y/n)?  n

Error creating a default .boto configuration file. Please run [gsutil config -n] if you would like to create this file.
Your Google Cloud SDK is configured and ready to use!

* Commands that require authentication will use hello-world-service-account@hello-world-315318.iam.gserviceaccount.com by default
* Commands will reference project `hello-world-315318` by default
Run `gcloud help config` to learn how to change individual settings

This gcloud configuration is called [hello-world]. You can create additional configurations if you work with multiple accounts and/or projects.
Run `gcloud topic configurations` to learn more.

Some things to try next:

* Run `gcloud --help` to see the Cloud Platform services you can interact with. And run `gcloud help COMMAND` to get help on any gcloud command.
* Run `gcloud topic --help` to learn about advanced features of the SDK like arg files and output formatting

Step 11: Generate access token

Type kubectl get namespace; the access token is generated in the .kube folder (in your home folder), in the config file:

If you open this config file, you will find your access token. You will need this later.

Step 12: Deploy and start Kubernetes dashboard

Now, deploy Kubernetes dashboard with the next command: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>kubectl proxy
Starting to serve on 127.0.0.1:8001

Start the dashboard with kubectl proxy command. Now open the dashboard from the link: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default

In front of you, this screen will appear:

Now, you need the token from the config file that we spoke about a moment ago. Open the config file with Notepad (on Windows), find your access token, copy it and paste it into the Enter token* field. Be careful when you are copying the token from the config file, as there might be several tokens; you must choose yours (image below).

Finally, the stage is prepared to deploy microservice.

Step 13: Deploy microservice

Build the docker image from the Dockerfile with the command: docker build -t docker2222/dimac:latest . (note the trailing dot, which is the build context). docker2222/dimac is my public docker repository.
Push the image to Docker Hub with the command: docker image push docker2222/dimac:latest.
Execute kubectl apply -f k8s.yaml, where k8s.yaml is the file below:

---

apiVersion: v1
kind: Namespace
metadata:
  name: hello

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: hello
  annotations:
    buildNumber: "1.0"
spec:
  selector:
    matchLabels:
      app: hello-world
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
      annotations:
        buildNumber: "1.0"
    spec:
      containers:
        - name: hello-world
          image: docker2222/dimac:latest
          readinessProbe:
            httpGet:
              path: "/actuator/health/readiness"
              port: 8080
            initialDelaySeconds: 5
          ports:
            - containerPort: 8080
          env:
            - name: APPLICATION_VERSION
              value: "1.0"
---


apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: hello
spec:
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
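A note on the readiness probe above: the path /actuator/health/readiness assumes that Spring Boot Actuator is part of the project, which is not shown in the setup in step 1. The dependency would be:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

Spring Boot 2.3+ enables the readiness/liveness health groups automatically when it detects it is running on Kubernetes; elsewhere they can be switched on with management.endpoint.health.probes.enabled=true.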

Last but not least, open the Kubernetes dashboard. If everything is OK, you should see your service.

Spring Boot Internationalization using Resource Bundles

Reading Time: 3 minutes

Implementing Spring Boot internationalization can be easily achieved using Resource Bundles. I will show you a code example of how you can implement it in your projects.

Let’s create a simple Spring Boot application from start.spring.io.

The first step is to create a resource bundle (a set of properties files with the same base name and language suffix) in the resources package.

I will create properties files with the base name texts and only one key, greetings:

  • texts_en.properties
  • texts_de.properties
  • texts_it.properties
  • texts_fr.properties

In all of those files I will add the value “Hello World !!!” and its translations. I was using Google Translate, so please do not judge me if something is wrong :).
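For example, the English and German files could look like this (the translation is illustrative):

# texts_en.properties
greetings=Hello World !!!

# texts_de.properties
greetings=Hallo Welt !!!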

After that, I will add some simple YML configuration in the application.yml file, which we will use later.

server:
  port: 7000

application:
  translation:
    properties:
      baseName: texts
      defaultLocale: de

Now, let’s create the configuration. I will create two beans, LocaleResolver and ResourceBundleMessageSource. Let’s explain both of them.

With the LocaleResolver interface, we define which implementation we are going to use. For this example, I chose the AcceptHeaderLocaleResolver implementation, which means the language value must be provided via the Accept-Language header.

@Bean
public LocaleResolver localeResolver() {
    AcceptHeaderLocaleResolver acceptHeaderLocaleResolver = new AcceptHeaderLocaleResolver();
    acceptHeaderLocaleResolver.setDefaultLocale(new Locale(defaultLocale));
    return acceptHeaderLocaleResolver;
}

With ResourceBundleMessageSource we are defining which bundle we are going to use in the Translator component (I will create it later 🙂 ).

@Bean(name = "textsResourceBundleMessageSource")
public ResourceBundleMessageSource messageSource() {
    ResourceBundleMessageSource rs = new ResourceBundleMessageSource();
    rs.setBasename(propertiesBasename);
    rs.setDefaultEncoding("UTF-8");
    rs.setUseCodeAsDefaultMessage(true);
    return rs;
}
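Both beans reference the propertiesBasename and defaultLocale fields, which are not shown in the snippets; presumably they are injected from the application.yml configuration above, along the lines of:

@Value("${application.translation.properties.baseName}")
private String propertiesBasename;

@Value("${application.translation.properties.defaultLocale}")
private String defaultLocale;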

Now, let’s create the Translator component. In this component, I will create only one method, toLocale. In that method, I fetch the Locale from the LocaleContextHolder and take the translation from the resource bundle.

@Component
public class Translator {

    private static ResourceBundleMessageSource messageSource;

    public Translator(@Qualifier("textsResourceBundleMessageSource") ResourceBundleMessageSource messageSource) {
        this.messageSource = messageSource;
    }

    public static String toLocale(String code) {
        Locale locale = LocaleContextHolder.getLocale();
        return messageSource.getMessage(code, null, locale);
    }
}

That’s all the configuration we need for this feature. Now, let’s create the Controller, Service and TranslatorCode util classes so we can test the APIs.

@RestController
@RequestMapping("/index")
public class IndexController {

    private final TranslationService translationService;

    public IndexController(TranslationService translationService) {
        this.translationService = translationService;
    }

    @GetMapping("/translate")
    public ResponseEntity<String> getTranslation() {
        String translation = translationService.translate();
        return ResponseEntity.ok(translation);
    }
}
@Service
public class TranslationService {

    public String translate() {
        // static imports assumed: Translator.toLocale and TranslatorCode.GREETINGS
        return toLocale(GREETINGS);
    }
}
public class TranslatorCode {
    public static final String GREETINGS = "greetings";
}

Now, you can start the application. After the application is started successfully, you can start making API calls.

Here is an example of API call that you can use as a cURL command.

curl --location --request GET "localhost:7000/index/translate" --header "Accept-Language: en"

Each call returns the greeting translated according to the Accept-Language header.

You can change the default behaviour, add some protection, or use multiple resource bundles; you are not limited to this basic setup.

Download the source code

This project is available on our BitBucket repository. Feel free to fix any mistakes and to leave a comment here if you have any questions or feedback.

https://bitbucket.org/n47/spring-boot-internationalization/src/master/

Improve your performance using JPA Entity Graph

Reading Time: 7 minutes

If you are a web developer, you have probably developed some endpoint that has a slow response time. The issue might be that you are calling some 3rd party API, doing file processing, or it might be how your entities are retrieved from the database.

In this article, we are going to take a look at how the Entity Graph might help us to improve our query performance when using JPA and Spring Boot.

Let’s discuss the following scenario:

We want to build an application where we can keep track of buildings, how many apartments every building has and how many tenants every apartment has. I have already created a simple application that can be downloaded from here.

In order to achieve the previously mentioned scenario, we will need to have the following entities:

package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Building {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String buildingName;

    @OneToMany(mappedBy = "building", cascade = CascadeType.ALL)
    private List<Apartment> apartments;

    public void addApartment(Apartment apartment) {
        if (apartments == null) {
            apartments = new ArrayList<>();
        }
        apartments.add(apartment);
        apartment.setBuilding(this);
    }

}
package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Apartment {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String type;

    @JoinColumn(name = "building_id")
    @ManyToOne
    private Building building;

    @OneToMany(mappedBy = "apartment", cascade = CascadeType.ALL)
    private List<Tenant> tenants;

    public void addTenant(Tenant tenant) {
        if (tenants == null) {
            tenants = new ArrayList<>();
        }
        tenants.add(tenant);
        tenant.setApartment(this);
    }

}
package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Tenant {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;

    private String lastName;

    @JoinColumn(name = "apartment_id")
    @ManyToOne
    private Apartment apartment;
}

We want to observe what happens when we retrieve all the entities. For that purpose, a service method called iterate is created in BuildingService that gets all the buildings and loops through all remaining entities. For this method to be visible to the outside world, a BuildingController is created that exposes a GET endpoint from which we can access the iterate method in BuildingService. In order to have some data in our database, there is an SQL script data.sql that inserts some data and is executed on startup. I would strongly suggest starting your application in debug mode and stepping through the iterate method.

If you have already started your application, enter the following URL: http://localhost:8080/building/iterate in your browser or some API tool (Postman, for example) and execute this GET request. This will execute the iterate method created previously.

Let’s see the content of the iterate service method we are calling with this endpoint and observe the console while executing it:

package com.north47.entitygraphdemo.service;

import com.north47.entitygraphdemo.repository.BuildingRepository;
import com.north47.entitygraphdemo.repository.model.Apartment;
import com.north47.entitygraphdemo.repository.model.Building;
import com.north47.entitygraphdemo.repository.model.Tenant;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

import java.util.List;

@Slf4j
@Service
@RequiredArgsConstructor
public class BuildingService {

    private final BuildingRepository buildingRepository;

    public void iterate() {
        log.debug("Iteration started");
        log.debug("Get all buildings");
        final List<Building> buildings = buildingRepository.findAll();
        buildings.forEach(building -> {
            log.debug("Get all apartments for building with id: {}", building.getId());
            final List<Apartment> apartments = building.getApartments();
            apartments.forEach(apartment -> {
                log.debug("Get all tenants for apartment with id: {}", apartment.getId());
                final List<Tenant> tenants = apartment.getTenants();
                log.debug("Apartment with id : {} has {} tenants", apartment.getId(), tenants.size());
            });
        });
    }
}

If you are in debug mode you may notice that after buildingRepository.findAll() is executed we can see the following log in the console:

Hibernate: select building0_.id as id1_1_, building0_.building_name as building2_1_ from building building0_

Let’s continue with executing the rest of the code. What will appear in the console is the following:

Hibernate: select apartments0_.building_id as building3_0_0_, apartments0_.id as id1_0_0_, apartments0_.id as id1_0_1_, apartments0_.building_id as building3_0_1_, apartments0_.type as type2_0_1_ from apartment apartments0_ where apartments0_.building_id=?
Hibernate: select tenants0_.apartment_id as apartmen4_2_0_, tenants0_.id as id1_2_0_, tenants0_.id as id1_2_1_, tenants0_.apartment_id as apartmen4_2_1_, tenants0_.last_name as last_nam2_2_1_, tenants0_.name as name3_2_1_ from tenant tenants0_ where tenants0_.apartment_id=?
Hibernate: select tenants0_.apartment_id as apartmen4_2_0_, tenants0_.id as id1_2_0_, tenants0_.id as id1_2_1_, tenants0_.apartment_id as apartmen4_2_1_, tenants0_.last_name as last_nam2_2_1_, tenants0_.name as name3_2_1_ from tenant tenants0_ where tenants0_.apartment_id=?

Even though we are not calling any repository methods, SQL queries are executed. This happens because the fetch type is not specified in the entities, and the default one is LAZY for @OneToMany relationships. This means that when we try to access those collections (in our case, call getApartments on Building and getTenants on Apartment), an additional query is executed. Imagine having lots of data and executing similar logic: many additional queries would be executed, causing huge latency. One solution is to switch the fetch type to EAGER, but that means these collections will always be loaded and we won’t be able to change that at runtime.

One of the solutions can be the JPA Entity Graph. Let’s see how it can make our life easier. We will do the following changes in our domain class Building:

package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;
import java.util.HashSet;
import java.util.Set;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
@NamedEntityGraph(
        name = "Building.List",
        attributeNodes = {@NamedAttributeNode(value = "apartments", subgraph = "Building.Apartment")},
        subgraphs = {
                @NamedSubgraph(name = "Building.Apartment",
                        attributeNodes = @NamedAttributeNode(value = "tenants"))
        }
)
public class Building {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String buildingName;

    @OneToMany(mappedBy = "building", cascade = CascadeType.ALL)
    private Set<Apartment> apartments;

    public void addApartment(Apartment apartment) {
        if (apartments == null) {
            apartments = new HashSet<>();
        }
        apartments.add(apartment);
        apartment.setBuilding(this);
    }

}

So what happened here? We have defined an entity graph named Building.List. With the attributeNodes we specify which collections to load. Since we also want to get the tenants, we define a subgraph called Building.Apartment, and in the subgraphs we say to load all the tenants for every apartment. Note also that the apartments collection changed from List to Set (the same applies to tenants in Apartment): fetching multiple List collections in a single query would cause Hibernate’s MultipleBagFetchException. In order for this entity graph to be used, we need to create a method in our BuildingRepository for which we specify this entity graph:

package com.north47.entitygraphdemo.repository;

import com.north47.entitygraphdemo.repository.model.Building;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

import java.util.List;

public interface BuildingRepository extends JpaRepository<Building, Long> {


    @Override
    List<Building> findAll();

    @EntityGraph(value = "Building.List")
    @Query("select b from Building as b")
    List<Building> findAllWithEntityGraph();
}

And of course, we will provide a service method that has the same logic but findAllWithEntityGraph() will be called:

public void iterateWithEntityGraph() {
    log.debug("Iteration with entity started");
    log.debug("Get all buildings");
    final List<Building> buildings = buildingRepository.findAllWithEntityGraph();
    buildings.forEach(building -> {
        log.debug("Get all apartments for building with id: {}", building.getId());
        final Set<Apartment> apartments = building.getApartments();
        apartments.forEach(apartment -> {
            log.debug("Get all tenants for apartment with id: {}", apartment.getId());
            final Set<Tenant> tenants = apartment.getTenants();
            log.debug("Apartment with id : {} has {} tenants", apartment.getId(), tenants.size());
        });
    });
}

And what is remaining is to expose this method using the BuildingController so we can test our new functionality:

@GetMapping(value = "/iterate/entityGraph")
public ResponseEntity<Void> iterateWithEntityGraph() {
    buildingService.iterateWithEntityGraph();
    return new ResponseEntity<>(HttpStatus.OK);
}

Now if we put the URL http://localhost:8080/building/iterate/entityGraph in our browser and observe our console, we can see that only one query is executed:

Hibernate: select building0_.id as id1_1_0_, apartments1_.id as id1_0_1_, tenants2_.id as id1_2_2_, building0_.building_name as building2_1_0_, apartments1_.building_id as building3_0_1_, apartments1_.type as type2_0_1_, apartments1_.building_id as building3_0_0__, apartments1_.id as id1_0_0__, tenants2_.apartment_id as apartmen4_2_2_, tenants2_.last_name as last_nam2_2_2_, tenants2_.name as name3_2_2_, tenants2_.apartment_id as apartmen4_2_1__, tenants2_.id as id1_2_1__ from building building0_ left outer join apartment apartments1_ on building0_.id=apartments1_.building_id left outer join tenant tenants2_ on apartments1_.id=tenants2_.apartment_id

So we managed to reduce the number of queries from 4 to 1, and we still have the possibility to call the findAll() method in the BuildingRepository where we won’t load all the apartments or tenants. In a real case scenario, you can define as many entity graphs as you want and specify which collections should or should not be loaded.
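As a side note, Spring Data JPA can also build an ad-hoc entity graph from attribute paths, without declaring a @NamedEntityGraph on the entity; a minimal sketch of an equivalent repository method:

@EntityGraph(attributePaths = {"apartments.tenants"})
@Query("select b from Building as b")
List<Building> findAllWithAdHocEntityGraph();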

Hope you had fun, you can find the code in our repository.

Multitenancy with Spring Boot

Reading Time: 7 minutes

Why should you consider implementing multitenancy in your project?

  • Cost: Multi-tenant architecture allows the sharing of resources, databases, and the application itself, thus the cost to run the system is fixed.
  • Maintenance: Users do not have to pay a considerable amount of fees to keep the software up to date. This reduces the overall cost of maintenance for each tenant.
  • Performance: Easier to assess and optimize speed, utilization, response time across the entire system, and even update the technology stack when needed.

In this blog we will implement multitenancy in our Spring Boot project.

Let’s create a simple Spring Boot project from start.spring.io, with only basic dependencies (Spring Web, Spring Data JPA, Spring Configuration Processor, MySQL Driver).

The good thing about implementing multitenancy is that we do not need additional dependencies.
We will split this example into two parts. In the first one, we will explain the idea/logic behind it, split the approach into 7 configuration steps and explain every step. In the second part, we will see how it’s implemented in real life, and we will test the solution.

1. Let’s start with creating Tenant Storage. We will use it for keeping the tenant value while the request is executing.

public class TenantStorage {

    private static ThreadLocal<String> currentTenant = new ThreadLocal<>();

    public static void setCurrentTenant(String tenantId) {
        currentTenant.set(tenantId);
    }

    public static String getCurrentTenant() {
        return currentTenant.get();
    }

    public static void clear() {
        currentTenant.remove();
    }
}

2. Next, we will create the Tenant Interceptor. For every request, we will set the tenant value at the beginning and clear it at the end. As you can see in the Tenant Interceptor, for this demo I decided to fetch the value of the tenant from the request header (X-Tenant); this is up to you. Just keep an eye on data security when using this in production; maybe you want to fetch it from a cookie or some other header name.

@Component
public class TenantInterceptor implements WebRequestInterceptor {

    private static final String TENANT_HEADER = "X-Tenant";

    @Override
    public void preHandle(WebRequest request) {
        TenantStorage.setCurrentTenant(request.getHeader(TENANT_HEADER));
    }

    @Override
    public void postHandle(WebRequest webRequest, ModelMap modelMap) {
        TenantStorage.clear();
    }

    @Override
    public void afterCompletion(WebRequest webRequest, Exception e) {

    }
}

3. The next thing is to add the tenant interceptor to the interceptor registry. For that purpose, I will create a WebConfiguration that implements WebMvcConfigurer.

@Configuration
public class WebConfiguration implements WebMvcConfigurer {

    private final TenantInterceptor tenantInterceptor;

    public WebConfiguration(TenantInterceptor tenantInterceptor) {
        this.tenantInterceptor = tenantInterceptor;
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addWebRequestInterceptor(tenantInterceptor);
    }
}

4. Now, let’s update the application.yml file with some properties for the database connections.

tenants:
  datasources:
    n47schema1:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema1?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
    n47schema2:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema2?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
spring:
  jpa:
    database-platform: org.hibernate.dialect.MySQL5Dialect

5. Next, we will wrap the tenants’ values into a map (key = tenant name, value = data source) in DataSourceProperties.

@ConfigurationProperties(prefix = "tenants")
@Component
public class DataSourceProperties {

    private Map<Object, Object> dataSources = new LinkedHashMap<>();

    public Map<Object, Object> getDataSources() {
        return dataSources;
    }

    public void setDataSources(Map<String, Map<String, String>> datasources) {
        datasources.forEach((key, value) -> this.dataSources.put(key, convert(value)));
    }

    public DataSource convert(Map<String, String> source) {
        return DataSourceBuilder.create()
                .url(source.get("jdbcUrl"))
                .driverClassName(source.get("driverClassName"))
                .username(source.get("username"))
                .password(source.get("password"))
                .build();
    }
}

6. Afterwards, we should create the DataSource bean, and for that purpose, I will create DataSourceConfig.

@Configuration
public class DataSourceConfig {

    private final DataSourceProperties dataSourceProperties;

    public DataSourceConfig(DataSourceProperties dataSourceProperties) {
        this.dataSourceProperties = dataSourceProperties;
    }

    @Bean
    public DataSource dataSource() {
        TenantRoutingDataSource customDataSource = new TenantRoutingDataSource();
        customDataSource.setTargetDataSources(dataSourceProperties.getDataSources());
        return customDataSource;
    }
}

7. At last, we will extend the AbstractRoutingDataSource and implement our lookup key.

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return TenantStorage.getCurrentTenant();
    }

}

And we are done with the first part.

Let’s see how it looks in the real world:

For this example, we will use two schemas from the same database instance, we will create a user and get all users. Also, I will show you how you can implement Flyway and test the solution.

First, let’s configure our databases. In my local MySQL server instance, we will create two schemas: n47schema1 and n47schema2.

The next step is to execute this CREATE statement on both schemas:

CREATE TABLE `users` (
	`id` INT(11) NOT NULL AUTO_INCREMENT,
	`name` VARCHAR(64) NOT NULL DEFAULT '' COLLATE 'utf8_general_ci',
	PRIMARY KEY (`id`)
);

Then, we will create two APIs, one for creating a user, and the other one to fetch all users.

@RestController
@RequestMapping("/users")
public class UserController {

    private final UserRepository userRepository;

    public UserController(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @PostMapping
    public UserDomain addUser(@RequestBody UserRequestBody userRequestBody) {
        UserDomain userDomain = new UserDomain(userRequestBody.getName());
        return userRepository.save(userDomain);
    }

    @GetMapping
    public List<UserDomain> getAll() {
        return userRepository.findAll();
    }
}

Also we need to create UserDomain, UserRepository and UserRequestBody.

@Entity
@Table(name = "users")
public class UserDomain {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    public UserDomain() {
    }

    public UserDomain(String name) {
        this.name = name;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
public interface UserRepository extends JpaRepository<UserDomain, Long> {
}
public class UserRequestBody {
    private String name;

    public UserRequestBody() {
    }

    public UserRequestBody(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

And we are done with coding.

We can run our application and start making a request.

First, let’s create some users with a POST request to http://localhost:8080/users. The most important thing not to forget is that we need to provide the header X-Tenant with the value n47schema1 or n47schema2.

We will create two users for tenant n47schema1: Antonie and John. Example:
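For reference, such a request could look like this as a cURL command (the body field matches UserRequestBody):

curl --location --request POST "http://localhost:8080/users" --header "X-Tenant: n47schema1" --header "Content-Type: application/json" --data "{\"name\": \"Antonie\"}"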

After that, we will change the X-Tenant header value to n47schema2 and create two users: William and Joseph.

You will notice that the IDs returned in the response are the same as for the first tenant, since each schema maintains its own auto-increment counter. Now let’s fetch the users via the API.

When you make a GET request to http://localhost:8080/users with header X-Tenant having value n47schema1 you will fetch the users from the n47schema1 schema, and when you make a request with a header value n47schema2 you will fetch from the n47schema2 schema.

You can also check the data in the database to be sure that it is stored correctly.

You can always set a fallback if the X-Tenant header is not provided, or if it has a wrong value.
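One way to do that (an assumption, not part of the original code) is to register a default target data source in DataSourceConfig; AbstractRoutingDataSource then falls back to it whenever the lookup key is missing or unknown:

@Bean
public DataSource dataSource() {
    TenantRoutingDataSource customDataSource = new TenantRoutingDataSource();
    customDataSource.setTargetDataSources(dataSourceProperties.getDataSources());
    // fall back to this schema when no (or an unknown) X-Tenant value is resolved
    customDataSource.setDefaultTargetDataSource(
            dataSourceProperties.getDataSources().get("n47schema1"));
    return customDataSource;
}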

As the last thing, I will show you how you can implement Flyway with multitenancy. First, you need to add Flyway as a dependency and disable it in the application.yml:

spring:
  flyway:
    enabled: false

Then add a @PostConstruct method in the DataSourceConfig configuration:

@PostConstruct
public void migrate() {
    for (Object dataSource : dataSourceProperties.getDataSources().values()) {
        DataSource source = (DataSource) dataSource;
        Flyway flyway = Flyway.configure().dataSource(source).load();
        flyway.migrate();
    }
}

And we are done. I hope this blog helps you to understand what multitenancy is and how it’s implemented in a Spring Boot project.

Download the source code

The project is freely available on our Bitbucket repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.

https://bitbucket.org/n47/spring-boot-multitenancy/src/master/

Spring Cloud OpenFeign

Reading Time: 3 minutes

Choosing the microservice architecture and Spring Boot means that you’ll need to pick the cleanest possible way for your services to communicate between themselves. Feign Client is one of the best solutions for this. It is a declarative Java web service client initially developed by Netflix. It’s an abstraction over REST-based calls that allows your microservices to communicate cleanly without needing to know the REST details happening underneath. The main idea behind Feign Client is to create an interface with method definitions representing your service call. Even if you need some customization on requests or responses, you can do it in a declarative way. In this article, we will learn about integrating Feign in a Spring Boot application for REST-based HTTP calls, with an example in which two microservices communicate with each other to transfer some data. But first, let’s get familiar with Feign.

What is Feign?

Feign is a declarative web service client that makes writing web service clients easier. To use Feign, you create an interface and annotate it. We can use Spring framework annotations such as @RequestMapping and @PathVariable directly in that Java interface. Feign has pluggable annotation support including Feign annotations and JAX-RS annotations, and it also supports pluggable encoders and decoders. Spring Cloud adds support for Spring MVC annotations and for using the same HttpMessageConverters used by default in Spring Web. Spring Cloud integrates Ribbon and Eureka to provide a load-balanced HTTP client when using Feign.

Example Management API simulator

In the following code sections, you can see a Feign client example. The @FeignClient annotation declares that a REST client should be created from the annotated interface.

Setup pom.xml

The following dependency will be added:

    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-openfeign</artifactId>
    </dependency>

Enable Feign Client

Now enable the Feign client by using the @EnableFeignClients annotation in a main Spring Boot application class that is also annotated with the @SpringBootApplication annotation.

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class FeignClientApplication {
  public static void main(String[] args) {
    SpringApplication.run(FeignClientApplication.class, args);
  }
}

Use a Circuit Breaker with a Feign Client

If you want to use the Spring Cloud OpenFeign support for Hystrix circuit breakers, you must set the feign.hystrix.enabled property to true. In our app, this property is configured in application.yml:

feign:
  hystrix:
    enabled: true

With Hystrix enabled, we can declare the Feign client interface for the remote validations service:

@FeignClient(name = "Validations", url = "${validations.host}")
public interface ValidationsClient {

    @GetMapping(value = "/validate-phone")
    InfoMessageResponse<PhoneNumber> validatePhoneNumber(@RequestParam("phoneNumber") String phoneNumber);

}

In the application.yml file, we will store the URL of the microservice with which we need to communicate:

validations:
  host: "http://localhost:9080/validations"
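
Since Hystrix is enabled, we could also register a fallback for the client, so callers get a defined answer when the validations service is unavailable. The following is only a sketch: the fallback class and the empty InfoMessageResponse are assumptions, not part of the original project:

package com.demo;

import org.springframework.stereotype.Component;

// Hypothetical fallback: invoked instead of throwing an exception when the
// Validations service is unreachable or the circuit is open.
@Component
public class ValidationsClientFallback implements ValidationsClient {

    @Override
    public InfoMessageResponse<PhoneNumber> validatePhoneNumber(String phoneNumber) {
        // Assumption: an empty response object signals "validation unavailable".
        return new InfoMessageResponse<>();
    }
}

To activate it, reference the fallback on the annotation: @FeignClient(name = "Validations", url = "${validations.host}", fallback = ValidationsClientFallback.class).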

We will need to add a config for Feign as follows:

package com.demo;

import feign.Contract;
import org.springframework.cloud.openfeign.support.SpringMvcContract;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FeignClientConfiguration {
    @Bean
    public Contract feignContract() {
        return new SpringMvcContract();
    }
}
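
With the client and the configuration in place, consuming the remote service boils down to plain dependency injection. A hypothetical caller could look like this:

package com.demo;

import org.springframework.stereotype.Service;

@Service
public class PhoneValidationService {

    private final ValidationsClient validationsClient;

    public PhoneValidationService(ValidationsClient validationsClient) {
        this.validationsClient = validationsClient;
    }

    public InfoMessageResponse<PhoneNumber> validate(String phoneNumber) {
        // Feign builds and executes the HTTP call behind this interface method.
        return validationsClient.validatePhoneNumber(phoneNumber);
    }
}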

Congrats! You just managed to run your Feign client application, with which you can easily locate and consume REST services.

Summary

In this article, we launched an example of two microservices that communicate with one another. This article should be treated as an introduction to Feign Client and to its integration with other important components of the microservice architecture.

Crypto Trading Bot

Reading Time: 6 minutes

Every year, N47 as a tech family celebrates a tech festival as Hackdays at the end of the year. In December 2019 we were in Budapest, Hungary for Hackdays. There were five different teams and each team had created some cool projects in a short time. I was also part of a team and we implemented a simple Trading Bot for Crypto. In this blog post, I want to share my experiences.

Trading Platform

To create a Trading Bot, you first need to find the right trading platform. We selected Binance DEX, which offers good volume for selected trading pairs, a testnet for test purposes, and is a Decentralized EXchange (DEX). Thanks to the DEX, we can connect the wallet directly and trade with the funds in it.

Binance Chain is a new blockchain and peer-to-peer system developed by Binance and the community. Binance DEX is a secure, native marketplace that is based on the Binance Chain and enables the exchange of digital assets that are issued and listed in the DEX. Reconciliation takes place within the blockchain nodes, and all transactions are recorded in the chain, creating a complete, verifiable activity book. BNB is the native token in the Binance Chain, so users are charged in BNB for sending transactions.

Trading fees are subject to complex logic, which can lead to individual transactions not being charged exactly at the published rates, but somewhere in between. This is due to the block-based matching engine used in the DEX. A difference between Binance Chain and Ethereum is that there is no concept of gas. As a result, the fees for transactions are fixed, and there are no fees for placing a new order.

The testnet is a test environment for the Binance Chain network, run by the Binance Chain development community, and it is open to developers. The validators on the testnet are from the development team. There is also a web wallet that can directly interact with the DEX testnet, and it provides 200 testnet BNB so that you can interact with the Binance DEX testnet.

For developers, Binance DEX also provides a REST API for the testnet and the mainnet, as well as Binance Chain SDKs for different languages like GoLang, JavaScript, Java etc. We used the Java SDK for the Trading Bot, together with Spring Boot.

Trading Strategy

To implement a Trading Bot, you need to know which pair to trade and when to buy and sell Crypto for this pair. We selected a very simple trading strategy for our project. First, we selected the NEXO/BNB trading pair because it had the highest trading volume. Perhaps you can choose a different trading pair based on your own analysis.

For the purchase and sale, we made decisions based on candlestick counts. We considered 15-minute candlesticks. If the last three are all red (price drops), buy Nexo; if the last three are all green (price rises), sell Nexo. Once you have bought or sold, you wait for the next three consecutive red or green candlesticks. The purchase and sale volume is always 20 Nexo. You can also tune this based on your own analysis.

Let’s Code IT

We have implemented the frontend (Vue.Js) and the backend (Spring Boot) for the Trading Bot, but here I will only go into the backend application as it contains the main logic. As already mentioned, the backend application was created with Spring Boot and Binance Chain Java SDK.

We used a ThreadPoolTaskScheduler in the application. This scheduler runs every 2 seconds and checks the candlesticks. It has to be activated once via the frontend app and is then triggered automatically every 2 seconds.

threadPoolTaskScheduler.scheduleAtFixedRate(task, 2000);
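
For context, here is a minimal sketch of how such a scheduler (org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler) could be wired up; the pool size and method reference are assumptions:

// Hypothetical wiring: a single-threaded scheduler that, once activated
// from the frontend, triggers the trading check every 2 seconds.
ThreadPoolTaskScheduler threadPoolTaskScheduler = new ThreadPoolTaskScheduler();
threadPoolTaskScheduler.setPoolSize(1);
threadPoolTaskScheduler.initialize();
threadPoolTaskScheduler.scheduleAtFixedRate(this::execute, 2000);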

Based on the scheduler, the execute() method is triggered every two seconds. This method first collects all previous Candlestick for 15 minutes and calculates the green and red Candlestick. Based on this, it will buy or sell.

private double quantity = 20.0;
private String symbol = "NEXO-A84_BNB";
public void execute() {
        List<Candlestick> candleSticks = binanceDexApiRestClient.getCandleStickBars(this.symbol, CandlestickInterval.FIFTEEN_MINUTES);
        // take the last three completed candlesticks (the final list element is the current, still-open candle)
        List<Candlestick> lastThreeElements = candleSticks.subList(candleSticks.size() - 4, candleSticks.size() - 1);
        // check if last three candlesticks are all red (close - open is negative)
        boolean allRed = lastThreeElements.stream()
                .filter(cs -> Double.parseDouble(cs.getClose()) - Double.parseDouble(cs.getOpen()) < 0.0d).count() == 3;
        // check if last three candlesticks are all green (close - open is positive)
        boolean allGreen = lastThreeElements.stream()
                .filter(cs -> Double.parseDouble(cs.getOpen()) - Double.parseDouble(cs.getClose()) < 0.0d).count() == 3;
        Wallet wallet = new Wallet(privateKey, binanceDexEnvironment);

        // open and closed orders required to check last order creation time
        OrderList closedOrders = binanceDexApiRestClient.getClosedOrders(wallet.getAddress());
        OrderList openOrders = binanceDexApiRestClient.getOpenOrders(wallet.getAddress());

        // order book required for buying and selling price
        OrderBook orderBook = binanceDexApiRestClient.getOrderBook(symbol, 5);
        Account account = binanceDexApiRestClient.getAccount(wallet.getAddress());

        if ((openOrders.getOrder().isEmpty() || openOrders.getOrder().get(0).getOrderCreateTime().plusMinutes(45).isBeforeNow()) && (closedOrders.getOrder().isEmpty() || closedOrders.getOrder().get(0).getOrderCreateTime().plusMinutes(45).isBeforeNow())) {
            if (allRed) {
                if (Double.parseDouble(account.getBalances().stream().filter(b -> b.getSymbol().equals(symbol.split("_")[1])).findFirst().get().getFree()) >= (quantity * Double.parseDouble(orderBook.getBids().get(0).getPrice()))) {
                    order(wallet, symbol, OrderSide.BUY, orderBook.getBids().get(0).getPrice());
                    System.out.println("Buy Order Placed  Quantity:" + quantity + "  Symbol:" + symbol + "  Price:" + orderBook.getAsks().get(0).getPrice());
                    
                } else {
                    System.out.println("do not have enough Token: " + symbol + " in wallet for buy");
                }

            } else if (allGreen) {
                if (Double.parseDouble(account.getBalances().stream().filter(b -> b.getSymbol().equals(symbol.split("_")[0])).findFirst().get().getFree()) >= quantity) {
                    order(wallet, symbol, OrderSide.SELL, orderBook.getAsks().get(0).getPrice());
                    System.out.println("Sell Order Placed  Quantity:" + quantity + "  Symbol:" + symbol + "  Price:" + orderBook.getAsks().get(0).getPrice());
                } else {
                    System.out.println("do not have enough Token:" + symbol + " in wallet for sell");
                }

            } else System.out.println("do nothing");
        } else System.out.println("do nothing");

    }

    private void order(Wallet wallet, String symbol, OrderSide orderSide, String price) {
        NewOrder no = new NewOrder();
        no.setTimeInForce(TimeInForce.GTE);
        no.setOrderType(OrderType.LIMIT);
        no.setSide(orderSide);
        no.setPrice(price);
        no.setQuantity(String.valueOf(quantity));
        no.setSymbol(symbol);

        TransactionOption options = TransactionOption.DEFAULT_INSTANCE;

        try {
            List<TransactionMetadata> resp = binanceDexApiRestClient.newOrder(no, wallet, options, true);
            log.info("TransactionMetadata: {}", resp);
        } catch (Exception e) {
            log.error("Error occurred while order", e);
        }
    }

At first glance, the strategy looks really simple, I agree. After this initial setup, however, it’s easy to add more complex logic with some AI.

Result

Since 12th December 2019, this bot has been running on Google Cloud and has done 1130 transactions (buy/sell) until 14th April 2020. Initially, I started the bot with 2.6 BNB. On 7th February 2020, the balance was 2.1 BNB in the wallet, but while writing this blog on 14th April 2020, it looks like the bot has recovered the loss and the balance is 2.59 BNB. Hopefully, in the future it will make some profit💰🙂.

Let me know your suggestions on this bot in a comment, and I would also be happy to answer your questions if you have any on this topic. Thanks for your time.

JHipster with Google App Engine and Cloud MySQL

Reading Time: 5 minutes

How does it sound to set up a complete Spring application, with front-end and database? With all the models, repositories and controllers? Even with unit and integration tests, with mocked data? All within a few hours? Your solution is JHipster!

JHipster

JHipster or “Java Hipster” is a handy application generator, a development platform, to develop and deploy web applications. JHipster has become popular in a short time, and it has been featured in many conferences all around the globe – Montreal, Omaha, Taipei, Richmond, Frankfurt, Paris, London. It supports:

  • Spring Boot (Back-end)
  • Angular/React/Vue (Front-end)
  • Spring microservices

JHipster is used for generating complete applications. It will create for you a high-quality Spring Boot and Angular/React/Vue application with most things pre-configured, using Java as back-end technology and an extensive set of Spring technologies: Spring Security, Spring Boot, Spring MVC (providing a framework for web-sockets, REST and MVC), Spring Data, etc., plus an Angular/React/Vue front-end and a suite of pre-configured development tools like Yeoman, Maven, Gradle, Grunt, Gulp.js and Bower.

JHipster gives you a head start in creating Spring Boot applications with a set of pre-defined screens for user management, monitoring, and logging. The generated Spring Boot application is specifically tailored to make working with Angular/React/Vue a smoother experience. On top of all that, JHipster also gives you the tools to update, manage and package the resulting application.

By now you may think it sounds too good to be true… But that is not everything JHipster offers. If you are a web developer, you probably have a lot of questions by now. 🙂
One important question we will answer in this blog post: is it supported by today’s cloud solutions, is it compatible with all of them? The answer is yes, it is compatible with the popular cloud solutions from Google, Amazon, Microsoft and Heroku. Let’s see what it takes to make a complete integration with Google’s cloud platform, the App Engine.

Google App Engine

Google App Engine is a cloud solution provided by Google, a platform for developing and hosting web applications in data centres managed by Google; Platform as a Service (PaaS). Applications are sandboxed and run across multiple servers. The App Engine supports Java or Python, uses the Google query language and stores data in Google BigTable.

It is free to use up to a certain amount of consumed resources. After exceeding the limits for storage, CPU, requests, number of API calls or concurrent requests, the user can pay for more of these resources.

It is fully compatible with JHipster-generated projects. All it takes to host your application is to follow the official how-to guide from the Google App Engine documentation, as for a normal Spring Boot application. To make things easier, Google offers a database which works closely with the Google App Engine: Cloud SQL.

Cloud SQL

Cloud SQL is a fully-managed database service offered by Google for their cloud solutions, which makes it easy to configure, manage, maintain, and operate your relational databases on Google Cloud Platform.

It offers three database options to integrate with:

  • MySQL
  • PostgreSQL
  • SQL Server

Let’s get into details of integrating with Cloud SQL for MySQL:

  1. The first step is to create a Cloud SQL instance on the Google Cloud Platform, which requires a few things like instance ID, password etc. to be set, and it gives you the option to choose the MySQL database version.
  2. The following step is to create the database in the newly created instance. It is possible to have more databases in one instance.
  3. Now, for our application to be able to communicate with Cloud SQL without any permission blockers, we need to register the application in Cloud SQL and manually configure the service account roles.
  4. The final step is connecting your application to the created Cloud SQL instance. It is done through JDBC. All the required properties can be found in the overview of the Cloud SQL instance: the instance connection name, credentials etc. (see the sketch below).
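
As an illustration, here is a minimal application.properties sketch for that JDBC connection, assuming the Cloud SQL MySQL socket factory dependency is on the classpath; the project, region, instance and database names are placeholders:

# Hypothetical values: replace project, region, instance and database with your own
spring.datasource.url=jdbc:mysql:///mydb?cloudSqlInstance=my-project:europe-west1:my-instance&socketFactory=com.google.cloud.sql.mysql.SocketFactory
spring.datasource.username=root
spring.datasource.password=secret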

So the conclusion: don’t be afraid to invest some time in new technologies, be curious, you never know where they may lead you. Thank you for reading. 🙂

JHipster, is it worth it?

Reading Time: 7 minutes

JHipster is an open-source platform to generate, develop and deploy Spring Boot + Angular / React / Vue web applications. And with over 15 000 stars on Github, it is the most popular code generation framework for Spring Boot. But is it worth the hype or is the generated code too difficult to maintain and not production-ready?

How does it work?

The first thing to note is that JHipster is not a separate framework by itself. It uses Yeoman and .jdl files in order to generate code in Spring Boot for the backend and Angular, React or Vue for the frontend. After the initial generation of the project, you have the option to use the generated code without ever running JHipster commands again, or to use JHipster to incrementally grow the project and develop new features.

What exactly is JDL?

JDL is a JHipster-specific domain language where you can describe all your applications, deployments, entities and their relationships in a single file (or more than one) with a user-friendly syntax.

You can use the online JDL-Studio or one of the JHipster IDE plugins/extensions, which support working with JDL files.

Example of simple JDL file for Blog application:

entity Blog {
  name String required minlength(3)
  handle String required minlength(2)
}

entity Post {
  title String required
  content TextBlob required
  date Instant required
}

entity Tag {
  name String required minlength(2)
}

relationship ManyToOne {
  Blog{user(login)} to User
  Post{blog(name)} to Blog
}

relationship ManyToMany {
  Post{tag(name)} to Tag{entry}
}

paginate Post, Tag with infinite-scroll

Which technologies are used?

On the backend we have the following technologies:

  • Spring Boot as the primary backend framework
  • Maven or Gradle for configuration
  • Spring Security as a Security framework
  • Spring MVC REST + Jackson for REST communication
  • Spring Data JPA + Bean Validation for Object Relational Mapping
  • Liquibase for Database updates
  • MySQL, PostgreSQL, Oracle, MsSQL or MariaDB as SQL databases
  • MongoDB, Couchbase or Cassandra as NoSQL databases
  • Thymeleaf as a templating engine
  • Optional Elasticsearch support if you want to have search capabilities on top of your database
  • Optional Spring WebSockets for Web Socket communication
  • Optional Kafka support as a publish-subscribe messaging system

On the frontend side these technologies are used:

  • Angular or React or Vue as a primary frontend framework
  • Responsive Web Design with Twitter Bootstrap
  • HTML5 Boilerplate compatible with modern browsers
  • Full internationalization support
  • Installation of new JavaScript libraries with NPM
  • Build, optimization and live reload with Webpack
  • Testing with Jest and Protractor
  • Optional Sass support for CSS design

How to get started?

  1. Pre-requirements: Java, Git and Node.js
  2. Install JHipster: npm install -g generator-jhipster
  3. Create a new directory and go into it: mkdir myApp && cd myApp
  4. Run JHipster and follow the instructions on the screen: jhipster
  5. Model your entities with JDL Studio and download the resulting jhipster-jdl.jh file
  6. Generate your entities with: jhipster import-jdl jhipster-jdl.jh
  7. Run ./mvnw to start the generated backend
  8. Run npm start to start the generated frontend with live reload support

What do the generated code and application look like?

In case you only want to see a sample generated application without starting the whole framework you can check this official Github repo for the latest up-to-date sample code: https://github.com/jhipster/jhipster-sample-app.

Following are some screens from my up-and-running JHipster application:

Welcome screen – the initial screen when you open your JHipster app

Create a user screen – with this form you can create a new user in the app

View all users screen – here you have the option to manage all your existing users

Monitoring screen – monitoring of JVM metrics, as well as HTTP request statistics

What are the pros and cons?

The important thing to remember is that JHipster is not a “magic bullet” that will solve all your problems, and it is not an optimal solution for all new projects. As a good software engineer, you will have to weigh the pros and cons of this platform and decide when it makes sense to use it and when it’s better to go with a different approach. Having used JHipster for production projects, these are some of the pros and cons that I’ve experienced:

Pros

  • Easy bootstrap of a new project with a lot of technologies preconfigured
  • JHipster almost always follows best practices and latest trends in backend and frontend development
  • Login, register, management of users and monitoring comes out-of-the-box
  • Wizard for generating your project, only the technologies that you select are included in the project
  • After defining your own JDL file, all of the model, repository, service and controller classes for your entities are generated, together with integration tests. This saves a lot of time at the beginning of the project when you want to get to feature development as soon as possible

Cons

  • If you are not familiar with the technologies used in the generated project, it can be overwhelming and it’s easy to get lost in this mix of lots of different technologies
  • Using JHipster after the initial project generation is not a smooth experience. Classes and Liquibase scripts are being overwritten and you have to be very careful with changing the initial JDL model. Alternatively, you can decide to continue without JHipster after the initial generation of the project
  • REST responses that are returned from endpoints will not always correspond to business requirements; very often you will have to manually modify the initial JHipster REST responses
  • Not all of the available options are at the same level; some technologies that JHipster uses and configures are more polished than others. This is especially true if you decide to use community modules

What kind of projects are a good fit?

Having said all of this, it’s important to understand that there are projects which can benefit a lot from JHipster and projects that are better without using this platform.

In my experience, a good candidate is a greenfield project where it’s expected to deliver a lot of features fast. JHipster will help a lot to be productive from day one and to cut the boilerplate code that you need to write, so you will be able to begin with feature development really fast. This works well for new projects with tight deadlines, proofs of concept, internal projects, hackathons and startups.

On the other hand, a not so ideal situation is if you have an already started and up-and-running project; there is not much JHipster can do in this case. Another case would be if the application has a lot of specific business logic and it’s not a simple CRUD application, for example an AI project, a chatbot, or a legacy ecosystem where these new technologies are not suitable or supported.

JHipster, is it worth it?

There is only one sure way to decide if JHipster is worth it for your next project or not and that is to try it out yourself and play around with the different features and configuration that JHipster offers.

At best, you will find a new framework for your next project and save a lot of effort next time you have to start a project. At worst, you will get to know the latest trends in both backend and frontend and learn some of the best practices from a very large community.

Reactive Spring with WebFlux and SQL Databases

Reading Time: 6 minutes

Since Spring Boot 2, Spring WebFlux was introduced so we can create reactive web applications. This was great, and it was working fine with NoSQL databases, but when it came to relational databases this was an issue. The JDBC database operations are blocking by nature, and this will stop you from creating a totally non-blocking application. But in order to have an asynchronous and non-blocking application, we need to cover every layer of the application. The hero that solved this was R2DBC – Reactive Relational Database Connectivity, which gives us the possibility to make non-blocking calls to relational databases.

The combination of WebFlux and R2DBC is enough to cover every layer in our application that we are going to build. As a relational database, we are going to use H2. So on to the coding!

Go to the spring initializr page from where we are going to build our application and select the following configuration:

  • Group: com.north47 (or your package name)
  • Artifact: spring-r2dbc
  • Dependencies: Spring Reactive Web, Spring Data R2DBC [Experimental], H2 Database, Lombok

(You won’t be able to see Lombok on this picture, but there it is! If for some reason Lombok is causing you issues, you might need to install a plugin. To do this in IntelliJ go to File -> Settings -> Plugins, search for Lombok, install it and restart your IDE. If you can’t manage to do it, just go the old way: remove the annotations @Data, @AllArgsConstructor, @NoArgsConstructor in the Book.java class and create your own getters, setters and constructors.)

Now click on Generate, unzip the application and open it via your IDE.

Let’s first create a SQL script that will create our table. Go to src -> main -> resources and right-click on it and select New -> File. Name the file: schema.sql and enter there the following code:

CREATE TABLE BOOK (
    ID INTEGER IDENTITY PRIMARY KEY,
    NAME VARCHAR(255) NOT NULL,
    AUTHOR VARCHAR(255) NOT NULL
);

This will create a table with name ‘Book’ and the following columns: ID, NAME and AUTHOR.

We will create an additional script that will put some data in our database. Repeat the procedure from the previous step, this time naming the file data.sql, and add the following code:

INSERT INTO BOOK (ID,NAME,AUTHOR) VALUES (1,'Angels and Demons','Dan Brown');
INSERT INTO BOOK (ID,NAME, AUTHOR) VALUES (2,'The Matarese Circle', 'Robert Ludlum');
INSERT INTO BOOK (ID,NAME,AUTHOR) VALUES (3,'Name of the Rose', 'Umberto Eco');

This will put some data into our database.

In resources, delete the application.properties file and create a new application.yml file, where we are going to add the following:

logging:
  level:
    org.springframework.data.r2dbc: DEBUG
spring:
  r2dbc:
    url: r2dbc:h2:mem:///test?options=DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    name: sa
    password:


Now that we have defined the r2dbc URL and enabled DEBUG logging level for r2dbc let’s go to create our java classes.

Create a new package domain under the ‘com.north47.springr2dbc’ and create a new class Book. This will be our database model:

package com.north47.springr2dbc.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.springframework.data.annotation.Id;
import org.springframework.data.relational.core.mapping.Column;
import org.springframework.data.relational.core.mapping.Table;

@Table("book")
@Data
@AllArgsConstructor
@NoArgsConstructor
public class Book {

    @Id
    private Long id;

    @Column(value = "name")
    private String name;

    @Column(value = "author")
    private String author;

}

Now to create our repository, first create a new package named ‘repository’ under ‘com.north47.springr2dbc’. In there create an interface named BookRepository. This interface will extend R2dbcRepository:

package com.north47.springr2dbc.repository;

import com.north47.springr2dbc.domain.Book;
import org.springframework.data.r2dbc.repository.R2dbcRepository;

public interface BookRepository extends R2dbcRepository<Book, Long> {
}

As you may notice, we are not extending the JpaRepository as usual. The R2dbcRepository will provide us with methods that work with reactive types like Flux and Mono.
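
For example, we could extend the repository with a derived query method; a hypothetical addition (depending on your Spring Data R2DBC version, derived queries may need an explicit @Query annotation instead):

package com.north47.springr2dbc.repository;

import com.north47.springr2dbc.domain.Book;
import org.springframework.data.r2dbc.repository.R2dbcRepository;
import reactor.core.publisher.Flux;

public interface BookRepository extends R2dbcRepository<Book, Long> {

    // Hypothetical derived query: emits every book by the given author
    // as the rows are read from the database, without blocking.
    Flux<Book> findByAuthor(String author);
}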

After this, we will create endpoints through which we can access the previously inserted data, create new data, modify or delete it.

Create a new package ‘resource’ under the ‘com.north47.springr2dbc’ package and in there we will create our BookResource:

package com.north47.springr2dbc.resource;

import com.north47.springr2dbc.domain.Book;
import com.north47.springr2dbc.repository.BookRepository;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
@RequestMapping(value = "/books")
public class BookResource {

    private final BookRepository bookRepository;

    public BookResource(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Book> getAllBooks() {
        return bookRepository.findAll();
    }

    @GetMapping(value = "/{id}")
    public Mono<Book> findById(@PathVariable Long id) {
        return bookRepository.findById(id);
    }

    @PostMapping(consumes = MediaType.APPLICATION_JSON_VALUE)
    public Mono<Book> save(@RequestBody Book book) {
        return bookRepository.save(book);
    }

    @DeleteMapping(value = "/{id}")
    public Mono<Void> delete(@PathVariable Long id) {
        return bookRepository.deleteById(id);
    }
}

And there we have endpoints from where we can access our data and modify it.

On to postman, so we can test our application. But of course, first, let’s start it. When you run the application, you can see in the console that your server has started:

Netty started on port(s): 8080

Also, since we enabled the DEBUG log level, you should be able to see all the SQL queries that are executed from the scripts that we wrote previously.

In postman, set a GET method and the URL: localhost:8080/books. In the Headers tab, add the key ‘Content-Type’ with value ‘application/json’.

Press that send button and there it is, you will get the data:

data:{"id":1,"name":"Angels and Demons","author":"Dan Brown"}

data:{"id":2,"name":"The Matarese Circle","author":"Robert Ludlum"}

data:{"id":3,"name":"Name of the Rose","author":"Umberto Eco"}

You can test also the other endpoints, for example, getting a book by id just by changing the URL to localhost:8080/books/1. The result will be:

{
    "id": 1,
    "name": "Angels and Demons",
    "author": "Dan Brown"
}

Now you can test the other endpoints by creating a new Book by sending a POST request to the localhost:8080/books or delete a book by sending a DELETE to localhost:8080/books/{id}.
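
If you prefer code over postman, here is a hypothetical WebClient snippet exercising the same two endpoints (the book data is illustrative, and Book is the class we defined above):

import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class BookClientExample {
    public static void main(String[] args) {
        WebClient client = WebClient.create("http://localhost:8080");

        // Create a new book (the id is generated by the database).
        Mono<Book> created = client.post()
                .uri("/books")
                .bodyValue(new Book(null, "Dune", "Frank Herbert"))
                .retrieve()
                .bodyToMono(Book.class);

        // Delete the book with id 1.
        Mono<Void> deleted = client.delete()
                .uri("/books/{id}", 1)
                .retrieve()
                .bodyToMono(Void.class);

        // block() is used only for this demo; in a reactive app you would subscribe instead.
        System.out.println(created.block());
        deleted.block();
    }
}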

Here you can find the whole code:

Spring-R2DBC

Hope you enjoyed it!

A simple way of using Micrometer, Prometheus and Grafana (Spring Boot 2)

Reading Time: 7 minutes

When we run any Java application, we are running a JVM. That JVM uses resources like memory, processor etc. The same happens when we run any Spring application; it runs and uses our hardware resources. Monitoring and measuring these parameters is crucial when we are in production or when we want to test the performance of our application. With Spring, it is easy: we just include Spring Actuator and it gives us access to almost all the measurements that we need, like:

"jvm.memory.max",
"jvm.threads.states",
"jvm.gc.memory.promoted",
"jvm.memory.used",
"jvm.gc.max.data.size",
"jvm.gc.pause",
"jvm.memory.committed",
"system.cpu.count",
"logback.events",
…

To set up Spring Actuator, add the following dependency to the project:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

and on the following endpoint:

<host/context-path>/actuator

we will have basic links to additional features of the application and monitoring:

{
    "_links": {
        "self": {
            "href": "http://localhost:8080/actuator",
            "templated": false
        },
        "health": {
            "href": "http://localhost:8080/actuator/health",
            "templated": false
        },
        "health-path": {
            "href": "http://localhost:8080/actuator/health/{*path}",
            "templated": true
        },
        "info": {
            "href": "http://localhost:8080/actuator/info",
            "templated": false
        }
    }
}

If this basic information is not enough, we can extend it by adding the following parameter to the application configuration file:

management.endpoints.web.exposure.include=*

By following any of these links, we can access the details. For our use case it will be http://localhost:8080/actuator/metrics, from which we are going to access the metrics of our application.

Now we have almost everything we need to monitor how our application performs: requests, JVM memory, cache, threads etc.

Micrometer

However, if we have some more logic in our code and we need more precise metrics, for our own code as well, we will need some other way to get them. Spring Boot 2 Actuator enriches all these already existing metrics through the Micrometer data provider.

Micrometer is a dimensional-first metrics collection facade whose aim is to allow you to time, count, and gauge your code with a vendor-neutral API.

Moreover, Micrometer is a vendor-neutral data provider and exposes application metrics to other external monitoring systems like Prometheus, AWS CloudWatch etc.

Micrometer provides a set of Meter primitives, including Timer, Counter, Gauge, DistributionSummary, LongTaskTimer, FunctionCounter, FunctionTimer, and TimeGauge. Here we should be aware that different meter types produce a different number of time-series metrics: a gauge has a single metric, but a timer has both a count of timed events and the total time of all timed events.
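
For instance, a Counter, the simplest meter, could be sketched like this (the metric name is illustrative, and registry is the injected MeterRegistry):

// io.micrometer.core.instrument.Counter: a counter only ever goes up.
Counter orderCounter = Counter.builder("dummy.orders.processed")
        .description("Number of processed orders")
        .register(registry);

orderCounter.increment(); // shows up in Prometheus as dummy_orders_processed_total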

If we write something like this in our code:

List<Integer> gaugeList = registry.gauge("dummy.gauge.list", Collections.emptyList(), someList, List::size);
List<Integer> gaugeCollectionsSizeList = registry.gaugeCollectionSize("dummy.size.list", Tags.empty(), someList);
Map<Integer, Integer> gaugeMapSize = registry.gaugeMapSize("dummy.gauge.map", Tags.empty(), someMap);

registry.timer("dummy.timer", Tags.empty()).record(() -> {
    slowDummyMethod();
});

We will get three time series for the Timer (dummy_timer_seconds_count, dummy_timer_seconds_max, dummy_timer_seconds_sum) and three gauges (dummy_gauge_list, dummy_size_list, dummy_gauge_map).

All this data can be used by many monitoring systems like Netflix Atlas, CloudWatch, Datadog, Ganglia etc. In our case, we will use Prometheus.

Prometheus

Including Prometheus in our project is easy, by adding the Maven dependency:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

This will create a new endpoint in the actuator: http://localhost:8080/<context-path>/actuator/prometheus. If we access this URL, we will get the metrics from Micrometer.

To see this data in some graphical UI, we will have to start a Prometheus server. We can do that directly by downloading the Prometheus server and running it.

https://prometheus.io/download/

The configuration is in the prometheus.yml file.

Basic parameters that we should set up here are:

global:
  scrape_interval:     10s # Scrape interval to every 10 seconds. Default value is every 1 minute.

and

scrape_configs:
  - job_name: 'spring_micrometer'

    metrics_path: '/micromexample/actuator/prometheus' # Path to the Prometheus endpoint in our application. 'micromexample' is the context path and 'actuator/prometheus' is the default Prometheus path
    static_configs:
    - targets: ['localhost:8080'] # host where our application is deployed

Another way to get a Prometheus server is to run a Docker image which contains Prometheus. We can do that with the following command:

docker run -d -p 9090:9090 -v <yours-prometheus-config-file.yml>:/etc/prometheus/prometheus.yml prom/prometheus

“9090” – the port where our Prometheus will listen, this value is the default port

<yours-prometheus-config-file.yml> – our configuration file for Prometheus

“prom/prometheus” – docker image with Prometheus

After we run the Spring Boot application with Prometheus included, and we run the Prometheus server, we should be able to see the metrics in a basic view from Prometheus at:

http://localhost:9090/graph

This is what we should get from our service:

For this graph, we wrote the following code (to have something to be sure that everything works):

registry.timer("dummy.timer", Tags.empty()).record(() -> {
    slowDummyMethod();
});

Grafana

If we want a rich graphical UI, easy browsing through the metrics data, dashboard editing and cloud monitoring compatibility, then it is a good idea to use Grafana.

Setting up Grafana is similar to Prometheus, we will need a Grafana server.

Again, we can download and install it locally; this way, we will have it as a service in our OS:

https://grafana.com/get

Or run docker image with Grafana in it:

docker run -d -p 3000:3000 grafana/grafana

“3000” – port for grafana

“grafana/grafana” – docker image with grafana

Default user and password are admin/admin. On the first login, you will be asked to add a new password.

After we log in, we should add a data source from which Grafana will read the metrics. Go to the left menu: Configuration -> Data Sources, choose the “Data Sources” tab and add a new data source with “Add data source”.

Since we decided to go with Prometheus, we will select a Prometheus source. On the new page (Configuration), because we did not set any authentication or anything else in Prometheus (everything is default), we just need to set the HTTP -> URL field. In our case, it will be “http://localhost:9090”. If everything is OK, by clicking “Save and test” we should get a green bar confirming that Grafana is connected to Prometheus and we can access the metrics from it.

Let’s see our first metrics from the timer that we added in our application. For this one we will create our own new dashboard:

Choose “Add Query” and in the new window add the following key in the “Metrics” field: “dummy_timer_seconds_count”. This will add one metric to our graph.

In the same graph, we can add the second one from the timer “dummy_timer_seconds_max”. With this, we will have both metrics in the same graph.

There are other parameters that you can set, but for basic setup default values are fine.

With this, we have set up everything we need for monitoring our application. Next is to add more graphs for metrics that we want to monitor.

Securing your microservices with OAuth 2.0. Building Authorization and Resource server

Reading Time: 8 minutes

We live in a world of microservices. They give us an easy opportunity to scale our application. But as we scale our application, it becomes more and more vulnerable. We need to think of a way to protect our services and to keep the wrong people from accessing protected resources. One way to do that is by enabling user authorization and authentication. With authorization and authentication, we need a way to manage credentials, check the access of the requester and make sure people are doing what they are supposed to.

When we speak about Spring (Cloud) Security, we are talking about Service authorization powered by OAuth 2.0. This is how it exactly works:

The actors in this OAuth 2.0 scenario that we are going to discuss are:

  • Resource Owner – Entity that grants access to a resource, usually you!
  • Resource Server – Server hosting the protected resource
  • Client – App making protected resource requests on behalf of a resource owner
  • Authorization server – server issuing access tokens to clients

The client will ask the resource owner to authorize itself. When the resource owner provides an authorization grant, the client will send a request to the authorization server. The authorization server replies by sending an access token to the client. Now that the client has the access token, it will put it in the header and ask the resource server for the protected resource. And finally, the client will get the protected data.

Now that everything is clear about how the general OAuth 2.0 flow is working, let’s get our hands dirty and start writing our resource and authorization server!

Building OAuth2.0 Authorization server

Let’s start by creating our authorization server using the Spring Initializr. Create a project with the following configuration:

  • Project: Maven Project
  • Artefact: auth-server
  • Dependencies: Spring Web, Cloud Security, Cloud OAuth2

Download the project, copy it into your workspace and open it via your IDE. Go to your main class and add the @EnableAuthorizationServer annotation.

@SpringBootApplication
@EnableAuthorizationServer
public class AuthServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(AuthServerApplication.class, args);
    }

}

Go to the application.properties file and make the following modification:

  • Change the server port to 8083
  • Set the context path to be “/api/auth”
  • Set the client id to “north47”
  • Set the client secret to “north47secret”
  • Enable all authorized grant types
  • Set the client scope to read and write

server.port=8083

server.servlet.context-path=/api/auth

security.oauth2.client.client-id=north47
security.oauth2.client.client-secret=north47secret
security.oauth2.client.authorized-grant-types=authorization_code,password,refresh_token,client_credentials
security.oauth2.client.scope=read,write

The client id is a public identifier for applications. The way we used it is not good practice for a production environment; it is usually a 32-character hex string so it won’t be easily guessable.

Let’s add some users into our application. We are going to use in-memory users and we will achieve that by creating a new class ServiceConfig. Create a package called “config” with the following path: com.north47.authserver.config and in there create the above-mentioned class:

@Configuration
public class ServiceConfig extends GlobalAuthenticationConfigurerAdapter {

    @Override
    public void init(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
                .withUser("filip")
                .password(passwordEncoder().encode("1234"))
                .roles("ADMIN");
    }

    @Bean
    public BCryptPasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}

With this, we are defining one user with username ‘filip’ and password ‘1234’ with the role ADMIN. We are defining the BCryptPasswordEncoder bean so we can encode our password.

In order to authenticate the users that will arrive from another service, we are going to add another class called UserResource into the newly created package resource (com.north47.authserver.resource):

@RestController
public class UserResource {

    @RequestMapping("/user")
    public Principal user(Principal user) {
        return user;
    }
}

When other services send a token for validation, the user will also be validated with this method.

And that’s it! Now we have our authorization server! The authorization server is providing some default endpoints which we are going to see when we will be testing the resource server.

Building Resource Server

Now let’s build our resource server where we are going to keep our secure data. We will do that with the help of the Spring Initializr. Create a project with the following configuration:

  • Project: Maven Project
  • Artefact: resource-server
  • Dependencies: Spring Web, Cloud Security, Cloud OAuth2

Download the project and copy it in your workspace. First, we are going to create our entity called Train. Create a new package called domain into com.north47.resourceserver and create the class there.

public class Train {

    private int trainId;
    private boolean express;
    private int numOfSeats;

    public Train(int trainId, boolean express, int numOfSeats) {
        this.trainId = trainId;
        this.express = express;
        this.numOfSeats = numOfSeats;
    }

   public int getTrainId() {
        return trainId;
    }

    public void setTrainId(int trainId) {
        this.trainId = trainId;
    }

    public boolean isExpress() {
        return express;
    }

    public void setExpress(boolean express) {
        this.express = express;
    }

    public int getNumOfSeats() {
        return numOfSeats;
    }

    public void setNumOfSeats(int numOfSeats) {
        this.numOfSeats = numOfSeats;
    }

}

Let’s create one resource that will expose an endpoint from where we can get the protected data. Create a new package called resource and there create a class TrainResource. It will have only one method that exposes an endpoint behind which we can get the protected data.

@RestController
@RequestMapping("/train")
public class TrainResource {


    @GetMapping
    public List<Train> getTrainData() {

        return Arrays.asList(new Train(1, true, 100),
                new Train(2, false, 80),
                new Train(3, true, 90));
    }
}

Let’s start the application and send a GET request to http://localhost:8082/api/services/train. You will be asked to enter a username and password. The username is user, and the password you can see in the console where the application was started. Entering these credentials will give you the protected data.

Let’s change the application now to be a resource server by going to the main class ResourceServerApplication and adding the annotation @EnableResourceServer.

@SpringBootApplication
@EnableResourceServer
public class ResourceServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ResourceServerApplication.class, args);
    }

}

Go to the application properties file and do the following changes:

server.port=8082
server.servlet.context-path=/api/services
security.oauth2.resource.user-info-uri=http://localhost:8083/api/auth/user 

What we have done here is:

  • Changed our server port to 8082
  • Set context path: /api/services
  • Gave user info URI where the user will be validated when he will try to pass a token

Now if you try to get the protected data by sending a GET request to http://localhost:8082/api/services/train, the server will return a message that you are unauthorized and that full authentication is required. That means that without a token you won’t be able to access the resource.

That means we need a fresh new token in order to get the data. We will ask the authorization server to give us a token for the user that we previously created. Our client in this scenario will be postman. The authorization server that we previously created exposes some endpoints out of the box. To ask the authorization server for a fresh new token, send a POST request to the following URL: localhost:8083/api/auth/oauth/token.
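
If you want to script this step instead of using postman, a hypothetical RestTemplate call could look like this (it mirrors the postman setup described below):

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.RestTemplate;

public class TokenRequestExample {
    public static void main(String[] args) {
        // The client credentials go into the Authorization header (basic auth).
        HttpHeaders headers = new HttpHeaders();
        headers.setBasicAuth("north47", "north47secret");
        headers.setContentType(MediaType.APPLICATION_FORM_URLENCODED);

        // The user credentials and grant type go into the form body.
        MultiValueMap<String, String> form = new LinkedMultiValueMap<>();
        form.add("grant_type", "password");
        form.add("username", "filip");
        form.add("password", "1234");

        ResponseEntity<String> token = new RestTemplate().postForEntity(
                "http://localhost:8083/api/auth/oauth/token",
                new HttpEntity<>(form, headers), String.class);

        System.out.println(token.getBody()); // JSON containing the access_token
    }
}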

As said previously, postman in this scenario is the client that is accessing the resource, so it will need to send the client credentials to the authorization server: the client id and the client secret. Go to the authorization tab and add the client id (north47) as username and the client secret (north47secret) as password. The picture below shows how to set up the request:

What is left is to supply the username and password of the user. Open the body tab, select x-www-form-urlencoded and add the following values:

  • key: ‘grant_type’, value: ‘password’
  • key: ‘client_id’, value: ‘north47’
  • key: ‘username’, value: ‘filip’
  • key: ‘password’, value: ‘1234’

Press send and you will get a response with the access_token:

{
    "access_token": "ae27c519-b3da-4da8-bacd-2ffc98450b18",
    "token_type": "bearer",
    "refresh_token": "d97c9d2d-31e7-456d-baa2-c2526fc71a5a",
    "expires_in": 43199,
    "scope": "read write"
}

Now that we have the access token we can call our protected resource by inserting the token into the header of the request. Open postman again and send a GET request to localhost:8082/api/services/train. Open the header tab and here is the place where we will insert the access token. For a key add “Authorization” and for value add “Bearer ae27c519-b3da-4da8-bacd-2ffc98450b18”.

And there it is! You have authorized yourself and got a new token which allowed you to get the protected data.

You can find the projects in our repository:

And that’s it! Hope you enjoyed it!

JMeter

Reading Time: 4 minutes

Today, we are gonna take a look at JMeter. You can embed it in your application as a library and make an external integration testing solution. You don’t have to use it for load testing; it could simply send one request, check the return status code, check the return value and move on. There is an argument that JMeter may be overkill for that, but it provides an easy way to verify the response, allows you to set everything up using the JMeter desktop app, and later lets you move into testing latency under load.
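
As a rough sketch of that embedded usage, once you have a .jmx file (we will create one below), running it from Java could look like this; the paths are placeholders and the exact setup depends on your JMeter version:

import java.io.File;

import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jmeter.save.SaveService;
import org.apache.jmeter.util.JMeterUtils;
import org.apache.jorphan.collections.HashTree;

public class EmbeddedJMeterExample {
    public static void main(String[] args) throws Exception {
        // Point JMeter at an installation so it can read its properties.
        JMeterUtils.setJMeterHome("/home/apache-jmeter-5.1.1");
        JMeterUtils.loadJMeterProperties("/home/apache-jmeter-5.1.1/bin/jmeter.properties");
        JMeterUtils.initLocale();

        // Load the .jmx test plan created in the desktop app and run it.
        SaveService.loadProperties();
        HashTree testPlan = SaveService.loadTree(new File("src/test/resources/jmeter/test.jmx"));

        StandardJMeterEngine engine = new StandardJMeterEngine();
        engine.configure(testPlan);
        engine.run();
    }
}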

First, we need to create a test file that will be put later in our spring boot application. The steps for creating the .jmx file are as follows:

1 – Open the JMeter window by clicking on /home/apache-jmeter-5.1.1/bin/jmeter.bat. The next step you want to do with every JMeter Test Plan is to add a thread group element. Set the “Loop Count” parameter equal to 1, as shown below:

2 – The next step is to create a while controller. The purpose of the while controller is to repeat a given set of actions until the condition is broken. While is a basic programming concept for running actions where the number of iterations is unknown or varying.

3 – Create an HTTP request as shown in the figure below:

4 – Afterwards, we are gonna create a CSV Data Set Config. This step refers to the CSV file from which the partner users will be collected and substituted into the HTTP request.

5 – After running our test, we want to see the results, e.g. which calls have been done and which ones have failed. This is done through Listeners, a recording mechanism that shows results, including logging and debugging.

The View Results Tree is the most common Listener.

Right-click – Add->Listener->View Results Tree

6 – At the end, it should be something like the figure below:

Now click ‘Save’. Your test will be saved as a .jmx file. To run the test, click the green arrow on top. After the test completes running, you can view the results on the Listener as in the figure below. In this example, you can see the tests were successful because they’re green. On the right, you can see more detailed results, like load time, connect time, errors, the request data, the response data, etc. You can also save the results if you want to.

JMeter can also be used for Maven testing through a plugin and works quite nicely with variables, prerequisites etc. Integrating performance testing in your projects has many benefits:

  • It provides a constant check of the performances of the application
  • Secures continuous delivery in production
  • Allows early detection of performance problems or performance regressions
  • Automating the process means less manual work, allowing your team to focus on more valuable tasks like performance analysis and optimisation

First of all, you need to add your plugin to the project. So go to Maven project directory (jmeter-testproject in this case) and edit the pom.xml file. Here you must add the plugin. You can find the basic configuration here. You just need to copy the configuration text and paste it in your pom.xml file.

Finally, the plugin section of your pom.xml file looks like this:

<plugin>
   <groupId>com.lazerycode.jmeter</groupId>
   <artifactId>jmeter-maven-plugin</artifactId>
   <version>2.9.0</version>
   <executions>
      <execution>
         <id>jmeter-tests</id>
         <phase>test</phase>
         <goals>
            <goal>jmeter</goal>
         </goals>
      </execution>
      <execution>
         <id>jmeter-check-results</id>
         <phase>test</phase>
         <goals>
            <goal>results</goal>
         </goals>
      </execution>
   </executions>
   <configuration>
      <testFilesDirectory>src/test/resources/jmeter</testFilesDirectory>
      <ignoreResultFailures>false</ignoreResultFailures>
   </configuration>
</plugin>

In the code above we will be running 2 goals:

  • jmeter in phase Test: This goal will run the load test and generate the HTML report
  • results in phase test: this goal runs verification on the error rate and fails the build if there are failed requests (since ignoreResultFailures is set to false)

Testing Spring Boot application with examples

Reading Time: 7 minutes

Why bother writing tests is already a well-discussed topic in software engineering. I won’t go into much detail on this topic, but I will mention some of the main benefits.

In my opinion, testing your software is the only way to achieve confidence that your code will work on the production environment. Another huge benefit is that it allows you to refactor your code without fear that you will break some existing features.

Risk of bugs vs the number of tests

In the Java world, one of the most popular frameworks is Spring Boot, and part of the popularity and success of Spring Boot is exactly the topic of this blog – testing. Spring Boot and the Spring framework offer out-of-the-box support for testing, and new features are being added constantly. When the Spring framework appeared on the Java scene in 2005, one of the reasons for its success was exactly this: ease of writing and maintaining tests, as opposed to JavaEE, where writing integration tests requires additional libraries like Arquillian.

In the following, I will go over different types of tests in Spring Boot, when to use them and give a short example.

Testing pyramid

We can roughly group all automated tests into 3 groups:

  • Unit tests
  • Service (integration) tests
  • UI (end to end) tests

As we go from the bottom of the pyramid to the top, tests become slower to execute: if we measure execution times, unit tests are in the order of a few milliseconds, service tests in hundreds of milliseconds, and UI tests execute in seconds. If we measure scope, unit tests, as the name suggests, test small units of code; service tests cover a whole service or a slice of that service involving multiple units; and UI tests have the largest scope, testing multiple different services. In the following sections, I will go over some examples of how we can unit test and service test a Spring Boot application. UI testing can be achieved using external tools like Selenium and Protractor, but they are not related to Spring Boot.

Unit testing

In my opinion, unit tests make the most sense when you have some kind of validators, algorithms or other code that has lots of different inputs and outputs and executing integration tests would take too much time. Let’s see how we can test validator with Spring Boot.

Validator class for emails

public class Validators {

    private static final String EMAIL_REGEX = "(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|\"(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21\\x23-\\x5b\\x5d-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])*\")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21-\\x5a\\x53-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])+)\\])";

    public static boolean isEmailValid(String email) {
        return email.matches(EMAIL_REGEX);
    }
}

Unit tests for email validator with Spring Boot

@RunWith(MockitoJUnitRunner.class)
public class ValidatorsTest {
    @Test
    public void testEmailValidator() {
        assertThat(isEmailValid("valid@north-47.com")).isTrue();
        assertThat(isEmailValid("invalidnorth-47.com")).isFalse();
        assertThat(isEmailValid("invalid@47")).isFalse();
    }
}

MockitoJUnitRunner enables Mockito in tests and detection of @Mock annotations. In this case, we are testing the email validator as a separate unit from the rest of the application. MockitoJUnitRunner is not a Spring Boot annotation, so this way of writing unit tests can be done in other frameworks as well.

Integration testing of the whole application

If we have to choose only one type of test in Spring Boot, then using an integration test to test the whole application makes the most sense. We will not be able to cover all the scenarios, but we will significantly reduce the risk. In order to do integration testing, we need to start the application context. In Spring Boot 2, this is achieved with the following annotations: @RunWith(SpringRunner.class) and @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT). This will start the application on a random port and we can inject beans into our tests and do REST calls on application endpoints.

The following is example code for testing book endpoints. For making REST API calls we are using Spring’s TestRestTemplate, which is more suitable for integration tests than RestTemplate.

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.test.context.junit4.SpringRunner;

// Book and BookRepository come from the application under test
import static org.assertj.core.api.Assertions.assertThat;

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class SpringBootTestingApplicationTests {

    @Autowired
    private TestRestTemplate restTemplate;

    @Autowired
    private BookRepository bookRepository;

    private Book defaultBook;

    @Before
    public void setup() {
        defaultBook = new Book(null, "Asimov", "Foundation", 350);
    }

    @Test
    public void testShouldReturnCreatedWhenValidBook() {
        ResponseEntity<Book> bookResponseEntity = this.restTemplate.postForEntity("/books", defaultBook, Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.CREATED);
        assertThat(bookResponseEntity.getBody().getId()).isNotNull();
        assertThat(bookRepository.findById(bookResponseEntity.getBody().getId())).isPresent();
    }

    @Test
    public void testShouldFindBooksWhenExists() throws Exception {
        Book savedBook = bookRepository.save(defaultBook);

        ResponseEntity<Book> bookResponseEntity = this.restTemplate.getForEntity("/books/" + savedBook.getId(), Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(bookResponseEntity.getBody().getId()).isEqualTo(savedBook.getId());
    }

    @Test
    public void testShouldReturn404WhenBookMissing() throws Exception {
        Long nonExistingId = 999L;
        ResponseEntity<Book> bookResponseEntity = this.restTemplate.getForEntity("/books/" + nonExistingId, Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.NOT_FOUND);
    }
}

Integration testing of web layer (controllers)

Spring Boot offers the ability to test layers in isolation, starting only the beans required for the test. From Spring Boot v1.4 on there is a very convenient annotation, @WebMvcTest, that starts only the components required for a typical web layer test (controllers, Jackson converters and similar) without starting the full application context, avoiding the startup of components unnecessary for this test, like the database layer. When we use this annotation, we make the REST calls with the MockMvc class.

The following is an example of testing the same endpoints as in the example above, but in this case we are only testing whether the web layer works as expected, mocking the database layer with the @MockBean annotation, which is also available from Spring Boot v1.4. With these annotations, only BookController is in the application context, while the database layer is mocked.

import com.fasterxml.jackson.databind.ObjectMapper;
import org.hamcrest.Matchers;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.http.MediaType;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.result.MockMvcResultMatchers;

import java.util.Optional;

import static org.mockito.Mockito.when;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@RunWith(SpringRunner.class)
@WebMvcTest(BookController.class)
public class BookControllerTest {
    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private BookRepository repository;

    @Autowired
    private ObjectMapper objectMapper;

    private static final Book DEFAULT_BOOK = new Book(null, "Asimov", "Foundation", 350);

    @Test
    public void testShouldReturnCreatedWhenValidBook() throws Exception {
        when(repository.save(Mockito.any())).thenReturn(DEFAULT_BOOK);

        this.mockMvc.perform(post("/books")
                .content(objectMapper.writeValueAsString(DEFAULT_BOOK))
                .contentType(MediaType.APPLICATION_JSON)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isCreated())
                .andExpect(MockMvcResultMatchers.jsonPath("$.name").value(DEFAULT_BOOK.getName()));
    }

    @Test
    public void testShouldFindBooksWhenExists() throws Exception {
        Long id = 1L;
        when(repository.findById(id)).thenReturn(Optional.of(DEFAULT_BOOK));

        this.mockMvc.perform(get("/books/" + id)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isOk())
                .andExpect(MockMvcResultMatchers.content().string(Matchers.is(objectMapper.writeValueAsString(DEFAULT_BOOK))));
    }

    @Test
    public void testShouldReturn404WhenBookMissing() throws Exception {
        Long id = 1L;
        when(repository.findById(id)).thenReturn(Optional.empty());

        this.mockMvc.perform(get("/books/" + id)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isNotFound());
    }
}

Integration testing of database layer (repositories)

Similarly to the way we tested the web layer, we can test the database layer in isolation, without starting the web layer. This kind of testing in Spring Boot is achieved with the @DataJpaTest annotation. The annotation applies only the auto-configuration related to the JPA layer and by default uses an in-memory database, because it is fastest to start up and does just fine for most integration tests. We also get access to TestEntityManager, which is an EntityManager with supporting features for JPA integration tests.

The following is an example of testing the database layer of the above application. With these tests we are only checking whether the database layer works as expected: we are not making any REST calls, and we verify the results from BookRepository using the provided TestEntityManager.

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.autoconfigure.orm.jpa.TestEntityManager;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.Optional;

import static org.assertj.core.api.Assertions.assertThat;

@RunWith(SpringRunner.class)
@DataJpaTest
public class BookRepositoryTest {
    @Autowired
    private TestEntityManager entityManager;

    @Autowired
    private BookRepository repository;

    private Book defaultBook;

    @Before
    public void setup() {
        defaultBook = new Book(null, "Asimov", "Foundation", 350);
    }

    @Test
    public void testShouldPersistBooks() {
        Book savedBook = repository.save(defaultBook);

        assertThat(savedBook.getId()).isNotNull();
        assertThat(entityManager.find(Book.class, savedBook.getId())).isNotNull();
    }

    @Test
    public void testShouldFindByIdWhenBookExists() {
        Book savedBook = entityManager.persistAndFlush(defaultBook);

        assertThat(repository.findById(savedBook.getId())).isEqualTo(Optional.of(savedBook));
    }

    @Test
    public void testFindByIdShouldReturnEmptyWhenBookNotFound() {
        long nonExistingID = 47L;
        
        assertThat(repository.findById(nonExistingID)).isEqualTo(Optional.empty());
    }
}

Conclusion

You can find a working example with all of these tests on the following repo: https://gitlab.com/47northlabs/public/spring-boot-testing.

In the following table, I show the execution times, including startup, of the different types of tests that I’ve used as examples. We can clearly see that unit tests, as mentioned at the beginning, are the fastest, and that separating integration tests into layered tests leads to faster execution times.

Type of test        | Execution time with startup
--------------------|----------------------------
Unit test           | 80 ms
Integration test    | 620 ms
Web layer test      | 190 ms
Database layer test | 220 ms

Generate Spring Boot REST API using Swagger/OpenAPI

Reading Time: 5 minutes

Writing an API definition is pretty cool stuff. It helps consumers understand the API and agree on its attributes. In our company, we are using the OpenAPI Specification (formerly the Swagger Specification) for that purpose.

But the real deal is generating code and documentation from the specification file. In this blog, I will show you how we are doing that at N47.

We will split this blog into two parts. The first part will be generating code, and the second part will be using the generated code.

Part 1

We start by creating an empty Maven project named “demo-specification”.

The next step is creating the API definition file, api.yaml, in the src/main/resources/ directory. The demo content of this file is:

openapi: "3.0.0"
info:
  description: "Codegen for demo service"
  version: "0.0.1"
  title: "Demo Service Specification"
  contact:
    email: "antonie.zafirov@north-47.com"
tags:
  - name: "user"
    description: "User tag for demo purposes"
servers:
  - url: http://localhost:8000/
    description: "local host"
paths:
  /user/{id}:
    get:
      tags:
        - "user"
      summary: "Retrieves User by ID"
      operationId: "getUserById"
      parameters:
        - name: "id"
          in: "path"
          description: "retrieves user by user id"
          required: true
          schema:
            type: "integer"
            format: "int64"
      responses:
        "200":
          description: "The user with the given ID"
          content:
            application/json:
              schema:
                type: "object"
                $ref: '#/components/schemas/User'
components:
  schemas:
    User:
      type: "object"
      required:
        - "id"
        - "firstName"
        - "lastName"
        - "dateOfBirth"
        - "gender"
      properties:
        id:
          type: "integer"
          format: "int64"
        firstName:
          type: "string"
          example: "John"
        lastName:
          type: "string"
          example: "Smith"
        dateOfBirth:
          type: "string"
          example: "1992-10-05"
        gender:
          type: "string"
          enum:
            - "MALE"
            - "FEMALE"
            - "UNKNOWN"

The next step is updating the pom.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.47northlabs</groupId>
    <artifactId>demo-specification</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <properties>
        <swagger-annotations-version>1.5.22</swagger-annotations-version>
        <jersey-version>2.27</jersey-version>
        <jackson-version>2.8.9</jackson-version>
        <jodatime-version>2.7</jodatime-version>
        <maven-plugin-version>1.0.0</maven-plugin-version>
        <springfox-version>2.9.2</springfox-version>
        <threetenbp-version>1.3.8</threetenbp-version>
        <datatype-threetenbp-version>2.6.4</datatype-threetenbp-version>
        <spring-boot-starter-test-version>2.1.1.RELEASE</spring-boot-starter-test-version>
        <spring-boot-starter-web-version>2.1.0.RELEASE</spring-boot-starter-web-version>
        <junit-version>4.12</junit-version>
        <migbase64-version>2.2</migbase64-version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>io.swagger</groupId>
            <artifactId>swagger-annotations</artifactId>
            <version>${swagger-annotations-version}</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.jersey.core</groupId>
            <artifactId>jersey-client</artifactId>
            <version>${jersey-version}</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.jersey.media</groupId>
            <artifactId>jersey-media-multipart</artifactId>
            <version>${jersey-version}</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.jersey.media</groupId>
            <artifactId>jersey-media-json-jackson</artifactId>
            <version>${jersey-version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.jaxrs</groupId>
            <artifactId>jackson-jaxrs-base</artifactId>
            <version>${jackson-version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
            <version>${jackson-version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-annotations</artifactId>
            <version>${jackson-version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>${jackson-version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.jaxrs</groupId>
            <artifactId>jackson-jaxrs-json-provider</artifactId>
            <version>${jackson-version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.datatype</groupId>
            <artifactId>jackson-datatype-joda</artifactId>
            <version>${jackson-version}</version>
        </dependency>
        <dependency>
            <groupId>joda-time</groupId>
            <artifactId>joda-time</artifactId>
            <version>${jodatime-version}</version>
        </dependency>
        <dependency>
            <groupId>com.brsanthu</groupId>
            <artifactId>migbase64</artifactId>
            <version>${migbase64-version}</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>${junit-version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <version>${spring-boot-starter-test-version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <version>${spring-boot-starter-web-version}</version>
        </dependency>
        <dependency>
            <groupId>io.springfox</groupId>
            <artifactId>springfox-swagger2</artifactId>
            <version>${springfox-version}</version>
        </dependency>
        <dependency>
            <groupId>io.springfox</groupId>
            <artifactId>springfox-swagger-ui</artifactId>
            <version>${springfox-version}</version>
        </dependency>
        <dependency>
            <groupId>org.threeten</groupId>
            <artifactId>threetenbp</artifactId>
            <version>${threetenbp-version}</version>
        </dependency>
        <dependency>
            <groupId>com.github.joschi.jackson</groupId>
            <artifactId>jackson-datatype-threetenbp</artifactId>
            <version>${datatype-threetenbp-version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.openapitools</groupId>
                <artifactId>openapi-generator-maven-plugin</artifactId>
                <version>3.3.4</version>
                <executions>
                    <execution>
                        <id>spring-boot-api</id>
                        <goals>
                            <goal>generate</goal>
                        </goals>
                        <configuration>
                            <inputSpec>${project.basedir}/src/main/resources/api.yaml</inputSpec>
                            <generatorName>spring</generatorName>
                            <configOptions>
                                <dateLibrary>joda</dateLibrary>
                            </configOptions>
                            <library>spring-boot</library>
                            <apiPackage>com.northlabs.demo.api</apiPackage>
                            <modelPackage>com.northlabs.demo.api.model</modelPackage>
                            <invokerPackage>com.northlabs.demo.api.handler</invokerPackage>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.6.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-deploy-plugin</artifactId>
                <version>2.8.1</version>
                <executions>
                    <execution>
                        <id>default-deploy</id>
                        <phase>deploy</phase>
                        <goals>
                            <goal>deploy</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

After that, we execute mvn clean install in the root directory of the project. The result is in target/generated-sources/. The generated API interface com.northlabs.demo.api.UserApi is what we need.

The magic is done by openapi-generator-maven-plugin. There are a lot of different generators that can be used, with a lot of options. Here is the list of them.
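To give an idea of the output, the generated UserApi interface looks roughly like the sketch below. This is an approximation only: the exact annotations and defaults depend on the generator version, but the generated default method answers with 501 Not Implemented until it is overridden.

package com.northlabs.demo.api;

import com.northlabs.demo.api.model.User;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

public interface UserApi {

    // Default implementation generated from the specification; a controller
    // overrides this method to provide the real behaviour (see Part 2).
    @GetMapping(value = "/user/{id}", produces = "application/json")
    default ResponseEntity<User> getUserById(@PathVariable("id") Long id) {
        return new ResponseEntity<>(HttpStatus.NOT_IMPLEMENTED);
    }
}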

Part 2

Let’s create a new Spring Boot project demo-service from https://start.spring.io/.

What we need to do is to add demo-specification as a maven dependency in the demo-service project.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.1.4.RELEASE</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.47northlabs</groupId>
	<artifactId>demo-service</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>demo-service</name>
	<description>Demo project for Spring Boot</description>

	<properties>
		<java.version>1.8</java.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>

		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>com.47northlabs</groupId>
			<artifactId>demo-specification</artifactId>
			<version>0.0.1-SNAPSHOT</version>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

</project>

In the application.properties file, we set server.port to 8000.

server.port=8000

The next step is creating a class UserRestController, which will implement the previously generated UserApi from demo-specification.

package com.northlabs.demoservice.rest.controller;

import com.northlabs.demo.api.UserApi;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserRestController implements UserApi {
}

Now, if we run the application and make a GET request to /user/1, the response status will be 501 Not Implemented.

Let’s make a simple implementation of the API:

package com.northlabs.demoservice.rest.controller;

import com.northlabs.demo.api.UserApi;
import com.northlabs.demo.api.model.User;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserRestController implements UserApi {

    @Override
    public ResponseEntity<User> getUserById(@PathVariable("id") Long id) {
        User user = new User();
        user.setId(id);
        user.setFirstName("John");
        user.setLastName("Doe");
        user.setGender(User.GenderEnum.MALE);
        user.setDateOfBirth("01-01-1970");
        return ResponseEntity.ok(user);
    }
}

Now the response will be:
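Assuming default Jackson serialization of the generated model (field order may differ), the body for GET /user/1 should look roughly like this:

{
  "id": 1,
  "firstName": "John",
  "lastName": "Doe",
  "dateOfBirth": "01-01-1970",
  "gender": "MALE"
}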

And we are done!

This is how we are implementing OpenAPI/Swagger in our projects.
In the next blog, I will show you how to provide Swagger UI, generate a Java client and a JavaScript client, modify base paths, etc.

Download the source code

Both projects are freely available on our GitLab repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.

https://gitlab.com/47northlabs/public/openapi-codegen-demo/demo-specification

https://gitlab.com/47northlabs/public/openapi-codegen-demo/demo-service

Spring I/O, The Conference in Barcelona – 2019

Reading Time: 2 minutes

Spring I/O is the leading European conference for the Spring Framework ecosystem. This year it will be the 8th edition, taking place in Barcelona, Spain, on 16 and 17 May, and I’m going to attend it for the first time. This conference is also my first conference this year, so I’m very excited 😊 about it.

Preparation

Initial preparation is done as mentioned below:

  • Ticket booking, The Conference ✔️
  • Flight booking, Zürich to Barcelona ✔️
  • Hotel booking ✔️

The Conference will take place in Palau de Congressos de Catalunya, Barcelona.

Location Palau de Congressos de Catalunya on Google Maps
The entrance

Topics

The detailed agenda and topics will be available here, but I’m interested in the topics mentioned below:

  • The State of Java Relational Persistence
  • Configuration Management with Kubernetes, a Spring Boot use-case
  • Moving beyond REST: GraphQL and Java & Spring
  • Spring Framework 5.2: Core Container Revisited
  • JUnit 5: what’s new and what’s coming
  • Migrating a modern spring web application to serverless
  • Relational Persistence with Spring Data JDBC [Workshop]
  • Clean Architecture with Spring
  • How to secure your Spring apps with Keycloak
  • Boot Loot – up your game and Spring like the pros
  • Spring Boot with Kotlin, Kofu and Coroutines
  • Multi-Service Reactive Streams Using Spring, Reactor, and RSocket
  • Zero Downtime Migrations with Spring Boot

Apart from the conference, I am planning to visit Font Màgica de Montjuïc, which is near the conference venue.

I’m open to further suggestions regarding my visit to Barcelona. What else should I visit? Is there any special food that I should try?

Spring Cloud Stream (event-driven microservice) with Apache Kafka… in 15 Minutes!

Reading Time: 5 minutes

Introduction

In March 2019 Shady and I visited Voxxed Days Romania in Bucharest. If you haven’t seen our post about that, check it out now! There were some really cool talks and so I decided to pick one and write about it.

At my previous employer, we switched from a monolithic service to a microservice architecture. After implementing about 20 different microservices in 2 years, the communication between them got more complex. In addition to that, all microservices were communicating synchronously! Did we build another monolith? I recently read a blog post about that on another site: https://thenewstack.io/synchronous-rest-turns-microservices-back-monoliths/

I tried to recreate the complexity of synchronous communication in microservices in this picture 😅

So back to the topic… this is why I have always been interested in asynchronous communication (streams, message bus, pub/sub, whatever). I heard a lot about Uber using Google Cloud’s Pub/Sub, how it’s highly scalable, asynchronous and, most importantly, just cool to use! I was inspired by Mark Heckler’s talk “Drinking from the Stream: How to Use Messaging Platforms for Scalability & Performance” and tried it out myself. Of course, I’m sharing my experiences and example with you…

Technologies

Spring Cloud Stream

Spring Cloud Stream is a framework for building highly scalable event-driven microservices connected with shared messaging systems.

https://spring.io/projects/spring-cloud-stream#overview

Spring Cloud Stream supports a variety of binder implementations, including Apache Kafka and RabbitMQ.

We will use Spring Cloud Stream to create 3 different projects (microservices) with the Apache Kafka binder, using the Spring Initializr.

Documentation

https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/2.1.2.RELEASE/single/spring-cloud-stream.html

Kafka

Apache Kafka is a distributed streaming platform. Communication between endpoints is driven by messaging-middleware parties like Apache Kafka or RabbitMQ.

Documentation

https://kafka.apache.org/documentation/

Let’s get started!

Prerequisites

So this is all you need to get yourself started:

  • Maven 3.2+
  • Java 8+ (required by Spring Boot 2.x)
  • Docker

The idea: Money money money 💰

Let’s build a money-printing machine 🤑! So the idea is…

  • Producer
    • Prints money (coins and notes) in different currencies, values and qualities.
  • Processor
    • Fetch money and polish coins/notes to “perfect” quality. This is quality assurance 😉.
  • Consumer
    • Fetch (spend) money and show type, currency, value and quality.
Three microservices communicating through Kafka

Bootstrap your application with Spring Initializr

Create a new project just with a few clicks 🖱

  • Project: Maven Project
  • Language: Java
  • Spring Boot: 2.1.4
  • Project Metadata
    • Group: com.47northlabs
    • Artefact: moneyprinter-producer
  • Dependencies
    • Web
    • Cloud Stream
    • Kafka
    • Lombok
Screenshot from my setup in the Spring Initializr

Implementation of the producer

Create or edit /src/main/resources/application.properties

server.port=0

spring.cloud.stream.bindings.output.destination=processor
spring.cloud.stream.bindings.output.group=processor

spring.cloud.stream.kafka.binder.auto-add-partitions=true
spring.cloud.stream.kafka.binder.min-partition-count=4

The destination defines the pipeline (or topic) to which the message is published.

Create or edit /src/main/java/com/northlabs/lab/moneyprinterproducer/MoneyprinterProducerApplication.java

package com.northlabs.lab.moneyprinterproducer;

import lombok.AllArgsConstructor;
import lombok.Data;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.messaging.support.MessageBuilder;

import java.util.Random;
import java.util.UUID;

@SpringBootApplication
public class MoneyprinterProducerApplication {

	public static void main(String[] args) {
		SpringApplication.run(MoneyprinterProducerApplication.class, args);
	}

}

@EnableBinding(Source.class)
@EnableScheduling
@AllArgsConstructor
class Spammer {
	private final Source source;
	private final SubscriberGenerator generator;

	@Scheduled(fixedRate = 1000)
	private void spam() {
		Money money = generator.printMoney();
		System.out.println(money);
		source.output().send(MessageBuilder.withPayload(money).build());
	}
}

@Component
class SubscriberGenerator {
	private final String[] type = "Coin, Note".split(", ");
	private final String[] currency = "CHF, EUR, USD, JPY, GBP".split(", ");
	private final String[] value = "1, 2, 5, 10, 20, 50, 100, 200, 500, 1000".split(", ");
	private final String[] quality = "poor, fair, good, premium, flawless, perfect".split(", ");
	private final Random rnd = new Random();
	private int i = 0, j = 0, k=0, l=0;

	Money printMoney() {
		i = rnd.nextInt(2);
		j = rnd.nextInt(5);
		k = rnd.nextInt(10);
		l = rnd.nextInt(6);

		return new Money(UUID.randomUUID().toString(), type[i], currency[j], value[k], quality[l]);
	}
}

@Data
@AllArgsConstructor
class Money {
	private final String id, type, currency, value, quality;
}

Here we simply create the whole microservice in one class. The most important parts are the @EnableBinding(Source.class) annotation and the send() call in the scheduled spam() method. SUPER SIMPLE! Now you already have a microservice that prints money and publishes it to the destination topic/pipeline “processor” 👏.

Implementation of the processor

https://gitlab.com/47northlabs/public/spring-cloud-stream-money/blob/master/moneyprinter-processor/src/main/resources/application.properties

https://gitlab.com/47northlabs/public/spring-cloud-stream-money/blob/master/moneyprinter-processor/src/main/java/com/northlabs/lab/moneyprinterprocessor/MoneyprinterProcessorApplication.java

Implementation of the consumer

https://gitlab.com/47northlabs/public/spring-cloud-stream-money/blob/master/moneyprinter-consumer/src/main/resources/application.properties

https://gitlab.com/47northlabs/public/spring-cloud-stream-money/blob/master/moneyprinter-consumer/src/main/java/com/northlabs/lab/moneyprinterconsumer/MoneyprinterConsumerApplication.java
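For orientation, a minimal consumer could look like the sketch below; the real implementation is in the linked repository. The sketch assumes the default JSON content type, so the payload is received as a plain String here, and the input binding would point to the processor’s output topic in application.properties.

package com.northlabs.lab.moneyprinterconsumer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

@SpringBootApplication
@EnableBinding(Sink.class)
public class MoneyprinterConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(MoneyprinterConsumerApplication.class, args);
    }

    // Fetches (spends) money from the input topic and logs it, bound e.g. with
    // spring.cloud.stream.bindings.input.destination=consumer
    @StreamListener(Sink.INPUT)
    public void spend(String money) {
        System.out.println("Spending: " + money);
    }
}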

Docker for Kafka and Zookeeper

Run these commands to create a network and run Kafka and Zookeeper in docker containers.

docker network create kafka

docker run -d --net=kafka --name=zookeeper -e ZOOKEEPER_CLIENT_PORT=2181 confluentinc/cp-zookeeper:5.0.0
docker run -d --net=kafka --name=kafka -p 9092:9092 -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka:5.0.0

If you can’t connect, add this line to /etc/hosts to ensure proper routing to container network “kafka”:

127.0.0.1 kafka

Start messaging platforms with the docker start command:

docker start zookeeper
docker start kafka

It’s a wrap!

Congratulations! You made it. Now just run your producer, processor and consumer and it should look something like this:

My example

Getting started

  1. Run docker/runKafka.sh
  2. Run docker/startMessagingPlatforms.sh
  3. Start producer, processor and consumer microservice (e.g. inside IntelliJ)
  4. Enjoy the log output 👨‍💻📋

Download the source code

The whole project is freely available on our Gitlab repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.

https://gitlab.com/47northlabs/public/spring-cloud-stream-money

Live from JPoint, Moscow 2019

Reading Time: 3 minutes

The conference took place at the World Trade Center in Moscow and started at 9 am. It looked like it would be huge from the beginning: well organized, with big conference halls. The first step was attendee registration.

After completing the registration and picking up some welcome packages, we had a starting coffee break with drinks. We also visited most of the big companies’ representative stands that were set up in front of the conference halls. You can find interesting free material there, like stickers, manuals and packages from the company you are visiting.

Then came the conference itself. There were four conference halls, each with different speakers. The opening talk was given by Anton Keks from Codeborne on the topic The world needs full-stack craftsmen.

After the opening ceremony talk, the conference started with different speakers on every track. Some of them were Russian speakers, so we focused on the English ones. Every talk was one to one and a half hours long, followed by a coffee break in the lounge room. There were also two lunch breaks included, and at the end, a party at 20:00. You can check the full schedule here.

Day two had exactly the same setup, with different speakers or the same ones with different topics. In general, the whole organization of the conference was amazing, as it should be for a world-class event. I highly recommend visiting if you have the chance.

Stay tuned for my next part where I will describe my opinion of the talks that I have visited…

DEVOXX UKRAINE, Here I come

Reading Time: 2 minutes

As a developer, when you need to extend your programming knowledge (theoretically, practically, or both), you need to go to a conference. Conferences are also a good chance to meet peers in your field. Unfortunately, most software engineering conferences focus more on introducing new technologies than on defining how a software engineer becomes an architect. That makes developer conferences a place to broaden the technical horizons, but not the vertical ones. Exactly this makes DEVOXX so special. I have already had the pleasure of visiting a DEVOXX conference in Europe, among other conferences. Check out the article about that here!

What we expect from this conference 👤💬?

Normally, I focus on new technical topics, like what is new in Java and what the new versions of Java offer. This time, however, I would like to focus on both the technical topics and software architecture, as it is a massive and fast-moving discipline. I expect training and insights that help me stay current with the latest trends in technologies, frameworks and techniques, and build the skills needed to advance my career.

Source: https://earlycoders.com/so-you-want-to-learn-to-code-are-you-a-newbie-programmer-developer-or-a-software-engineer/

Organization to visit Devoxx Ukraine conference

The conference will be held in Kiev, so my colleague Jeremy and I will be travelling from Zurich airport to Kiev. According to some articles, Kiev is considered one of the cheapest cities in Europe. We will try to explore the nightlife of Kiev. To be honest, I didn’t expect the conference ticket to be so cheap; it costs just 150 USD.

My private trips:

I will write another blog post to explain what my colleague Jeremy and I did in Kiev. I can say one thing at the end: “Stay tuned”!

Spring Boot 2.0 new Features

Reading Time: 5 minutes

Spring Boot is the framework most used by Java developers for creating microservices. The first version, Spring Boot 1.0, was released in January 2014. Many releases followed, but Spring Boot 2.0 is the first major release since the launch. It was released in March 2018 and, at the time of writing this blog, the most recent version is 2.1.3, released on 15 February 2019.

There are many changes that will break your existing application if you want to upgrade from Spring Boot 1.x to 2.x; a migration guide is described here.

We are using Spring Boot 2.0 too 💻!

Currently, here at N47, we are implementing different services and also an in-house developed product(s). We decided to use Spring Boot 2.0 and we already have a blog post about Deploy Spring Boot Application on Google Cloud with GitLab. Check it out and if you have any questions, feel free to use the commenting functionality 💬.

Java

Spring Boot 2.0 requires Java 8 as the minimum version and also supports Java 9. If you are using Java 7 or earlier and want to use Spring Boot 2.0, it’s not possible: you have to upgrade to Java 8 or 9. Also, Spring Boot 1.5 will not support Java 9 or newer Java versions.

Spring Boot 2.1 also supports Java 11. It has continuous integration configured to build and test Spring Boot against the latest Java 11 release.

Gradle Plugin

Spring Boot’s Gradle plugin 🔌 has been mostly rewritten to enable a number of significant improvements. Spring Boot 2.0 now requires Gradle 4.x.

Third-party Library Upgrades

Spring Boot builds on Spring Framework. Spring Boot 2.0 requires Spring Framework 5, while Spring Boot 2.1 requires Spring Framework 5.1.

Spring Boot has upgraded to the latest stable releases of other third-party jars wherever possible. Some notable dependency upgrades in the 2.0 release include:

  • Tomcat 8.5
  • Flyway 5
  • Hibernate 5.2
  • Thymeleaf 3

Some notable dependency upgrades in the 2.1 release include:

  • Tomcat 9
  • Undertow 2
  • Hibernate 5.3
  • JUnit 5.2
  • Micrometer 1.1

Reactive Spring

Many projects in the Spring portfolio are now providing first-class support for developing reactive applications. Reactive applications are fully asynchronous and non-blocking. They’re intended for use in an event-loop execution model (instead of the more traditional one thread-per-request execution model).

Spring Boot 2.0 fully supports reactive applications via auto-configuration and starter-POMs. The internals of Spring Boot itself has also been updated where necessary to offer reactive alternatives.

Spring WebFlux & WebFlux.fn

Spring WebFlux is a fully non-blocking reactive alternative to Spring MVC. Spring Boot provides auto-configuration for annotation-based Spring WebFlux applications as well as for WebFlux.fn, which offers a more functional style of API. To get started, use the spring-boot-starter-webflux starter POM, which provides Spring WebFlux backed by an embedded Netty server.
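As a flavour of the functional style, a minimal WebFlux.fn route might look like this (an illustrative sketch; the class and route names are made up):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Mono;

import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;
import static org.springframework.web.reactive.function.server.ServerResponse.ok;

@Configuration
public class HelloRouter {

    // Functional route: GET /hello returns a reactive "Hello, WebFlux!" body.
    @Bean
    public RouterFunction<ServerResponse> helloRoute() {
        return route(GET("/hello"),
                request -> ok().body(Mono.just("Hello, WebFlux!"), String.class));
    }
}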

Reactive Spring Data

Where the underlying technology enables it, Spring Data also provides support for reactive applications. Currently, Cassandra, MongoDB, Couchbase and Redis all have reactive API support.

Spring Boot includes special starter-POMs for these technologies that provide everything you need to get started. For example, spring-boot-starter-data-mongodb-reactive includes dependencies to the reactive mongo driver and project reactor.

Reactive Spring Security

Spring Boot 2.0 can make use of Spring Security 5.0 to secure your reactive applications. Auto-configuration is provided for WebFlux applications whenever Spring Security is on the classpath. Access rules for Spring Security with WebFlux can be configured via a SecurityWebFilterChain. If you’ve used Spring Security with Spring MVC before, this should feel quite familiar.
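A sketch of such access rules, assuming the Spring Security 5.0/5.1 style of configuration (the class name and matched paths are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity;
import org.springframework.security.config.web.server.ServerHttpSecurity;
import org.springframework.security.web.server.SecurityWebFilterChain;

@EnableWebFluxSecurity
public class SecurityConfig {

    // Illustrative rules: the health endpoint stays open,
    // everything else requires authentication via HTTP Basic.
    @Bean
    public SecurityWebFilterChain securityWebFilterChain(ServerHttpSecurity http) {
        return http
                .authorizeExchange()
                    .pathMatchers("/actuator/health").permitAll()
                    .anyExchange().authenticated()
                .and()
                .httpBasic()
                .and()
                .build();
    }
}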

Embedded Netty Server

Since WebFlux does not rely on Servlet APIs, Spring Boot is now able to support Netty as an embedded server for the first time. The starter spring-boot-starter-webflux POM will pull-in Netty 4.1 and Reactor Netty.

HTTP/2 Support

HTTP/2 support is provided for Tomcat, Undertow and Jetty. Support depends on the chosen web server and the application environment.
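Enabling it is a single property in application.properties, provided the chosen server, JDK and SSL setup support it:

server.http2.enabled=true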

Kotlin

Spring Boot 2.0 now includes support for Kotlin 1.2.x and offers a runApplication function, which provides a way to run a Spring Boot application using Kotlin.

Actuator Improvements

There have been many improvements and refinements to the actuator endpoints in Spring Boot 2.0. All HTTP actuator endpoints are now exposed under the /actuator path, and the resulting JSON payloads have been improved.
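For example, endpoints beyond health and info now have to be exposed explicitly, e.g. in application.properties (the selection of endpoints here is just an example):

management.endpoints.web.exposure.include=health,info,metrics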

Data Support

In addition to the “Reactive Spring Data” support mentioned above, several other updates and improvements have been made in the area of data:

  • HikariCP
  • Initialization
  • JOOQ
  • JdbcTemplate
  • Spring Data Web Configuration
  • Influx DB
  • Flyway/Liquibase Flexible Configuration
  • Hibernate
  • MongoDB Client Customization
  • Redis

Here I have only listed the changes in data support; a detailed description of each topic is available here.

Animated ASCII Art

Finally, Spring Boot 2.0 also provides support for animated GIF banners.

For a complete overview of the changes in configuration go here, and the release notes for 2.1 are available here.

Deploy Spring Boot Application on Google Cloud with GitLab

Reading Time: 5 minutes

A lot of developers experience a painful process getting their code deployed to an environment. We, as a company, suffered the same thing, so we wanted to create something to make our lives easier.

After internal discussions, we decided to build a fully automated CI/CD process. We investigated and decided to implement GitLab CI/CD with Google Cloud deployment for that purpose.

Further in this blog, you can see how we achieved that and how you can achieve the same.

Let’s start with setting up. 🙂

  • First, we create a simple REST controller for testing purposes.
package com.northlabs.gitlabdemo.rest;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RootController {

    @GetMapping(value = "/")
    public String root() {
        return "Hello from Root";
    }

    @GetMapping(value = "/demo")
    public String demo() {
        return "Hello from Demo";
    }

}
  • Next step is to push the application to our GitLab repo.
  1. cd path_to_root_folder
  2. git init
  3. git remote add origin https://gitlab.com/47northlabs/47northlabs/product/playground/gitlab-demo.git
  4. git add .
  5. git commit -m "Initial commit"
  6. git push -u origin master

Now that we have our application in the GitLab repository, we can go set up Google Cloud. But before you start, be sure that you have a G-Suite account with billing enabled.

  • The first step is to create a new project: in my case, it is northlabs-gitlab-demo.

Create project: northlabs-gitlab-demo
  • Now, let’s create our Kubernetes Cluster.

It will take some time for the Kubernetes cluster to initialize, after which GitLab will be able to use it.

We are done with Google Cloud, so it’s time to set up Kubernetes in our GitLab repository.

  • First, we add a Kubernetes cluster.
Add Kubernetes Cluster
Sign in with Google
  • Next, we give a name to the cluster and select a project from our Google Cloud account: in my case it’s gitlab-demo.
  • The base domain name should be set up.
  • Installing Helm Tiller is required, and installing other applications is optional (I installed Ingress, Cert-Manager, Prometheus, and GitLab Runner).

Install Helm Tiller

Installed Ingress, Cert-Manager, Prometheus, and GitLab Runner

After installing the applications, it’s IMPORTANT to update your DNS settings: the Ingress IP address should be copied and added to your DNS configuration.
In my case, it looks like this:

Configure DNS

We are almost done. 🙂

  • The last thing that should be done is to enable Auto DevOps.
  • And to set up Auto DevOps.

Now take your coffee and watch your pipelines. 🙂
After a couple of minutes your pipeline will finish and will look like this:

Now open the production pipeline and, in the log under the notes section, check the URL of the application. In my case it is:

Application should be accessible at: http://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com

Open the URL in browser or postman.

https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com
https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com/demo
  • Let’s edit our code and push it to GitLab repository.
package com.northlabs.gitlabdemo.rest;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RootController {

    @GetMapping(value = "/")
    public String root() {
        return "Hello from Root v1";
    }

    @GetMapping(value = "/demo")
    public String demo() {
        return "Hello from Demo v1";
    }
}

After the job is finished, if you check the same URLs, you will see that the values have changed.


https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com

https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com/demo

And we are done!

This is just a basic implementation of GitLab Auto DevOps. In one of the next blogs, we will show how to customize your pipeline and how to add, remove or edit jobs.

Voxxed Days Bucharest & Devoxx Ukraine – HERE WE COME!

Reading Time: 4 minutes

Last year’s conferences

Already in 2018, we had the pleasure to visit 2 conferences in Europe:

We had a great time visiting these two cities 🙌 and we can’t wait to do that again this year 😎.

What do we expect from the two conferences in 2019 👤💬?

Like last year, we are interested in several different topics. I am looking forward to the Methodology & Culture slots, while Shady is most interested in the Java stuff. All in all, we hope there are several interesting talks about:

  • Architecture & Security
  • Cloud, Containers & Infrastructure
  • Java
  • Big Data & Machine Learning
  • Methodology & Culture
  • Other programming languages
  • Web & UX
  • Mobile & IoT

We ❤️ food!

The title speaks for itself. We just love food 🍴! Travelling ✈️ is a good opportunity to see and taste something new 👅. All over the world, every culture has a unique and special cuisine, each very different because of its methods of cooking. We try to taste (almost) everything when we arrive in new countries and cities.

We are really looking forward to seeing what Bucharest’s and Kiev’s specialities are 🍽 and to trying them all! Here are some snapshots from our trips to the conferences in Amsterdam (Netherlands) and Krakow (Poland) in 2018…

What about the costs 💸?

One great thing at N47 is that your personal development 🧠 is important to the company. Besides hosted internal events and workshops, you can also visit international conferences 🛫, and everything is paid 💰. Every employee can choose their desired conferences/workshops, gather the information about the costs and request the visit. One step of the approval process is writing 📝 about the expectations in a blog post, which is exactly what you are reading 🤓📖 at the moment.

Costs breakdown (per person)

Flights: 170 USD
Hotel: 110 USD (3 days, 2 nights)
Conference: 270 USD
Food and public transportation: 150 USD
Knowledge gains: priceless
Explore new country and food: priceless
Spend time with your buddy: priceless
—–
Total: 700 USD
—–

Any recommendations for Bucharest or Kiev?

We never visited the two cities 🏙, so if you have any tips or recommendations, please let us know in the comments 💬!

Hackdayz #18: Git Repo Sync Tool

Reading Time: 3 minutes

Working as a consultant usually involves handling client and local repositories seamlessly, which is often pretty simple for an individual.

When working in a distributed team where only a portion of the members have access to certain areas of the project, the situation becomes a bit trickier to handle. We identified this as a minor showstopper in our organization, and during our internal hackathon Hackdayz18 we decided to make our lives easier.

Application overview and features

  • Ability to synchronize public and client repositories with a single button click
  • Persist user data and linked repositories
  • Visualized state of repositories
  • Link commits to a different user
  • Modify commit message and squash all commits into one

Our Team

Nikola Gjeorgjiev (Frontend Engineer)
Antonie Zafirov (Software Engineer)
Fatih Korkmaz (Managing Partner)

Challenges and results

Initially, what we had in mind was a tool that would read two repository URLs and, with no further constraints, squash all commits on one of the repositories, change the commit message and push the result to the second repository.

It turned out things weren’t that simple for the problem we were trying to solve.

Through trial and error, we managed to build a working demo of our tool in a short time frame that only needs small tweaks in order to be used.

Workflow for the GitSync tool

The final result was a small application that is able to persist user data and, using the given user credentials, read the state of the repositories linked to the user. The final step is where all the magic happens: the content of the source repository is transferred into the destination repository.

There is still work to be done to get the application production-ready and available to the team, but in the given timeframe we did our best, and I am happy with our results.

Technology stack

  • Gitgraph.js – JavaScript library which visually presents Git branching, Git workflow or whatever Git tree you’d have in mind
  • GitLab API – Automating GitLab via a simple and powerful API
  • VueJS – front-end development framework
  • SpringBoot – back-end development framework

Conclusion

Even though we underestimated the problem we were facing, we pulled through and were able to deliver the basis of a solution to the problem.

The hackathon was a valuable learning experience for the entire team and we’re looking forward to the next one!