Service Discovery in a Microservices Architecture: Client-side vs Server-side Discovery

Reading Time: 11 minutes

Service discovery is one of the key components of distributed systems. It allows us to automatically discover services on a network. In order to make a REST request, your service needs to know the service location, which consists of the IP address and port of a service instance. In most scenarios, multiple instances of the same service will be deployed. Traditional applications run on physical hardware, so their network locations are static. In modern cloud-based microservice solutions, on the other hand, these network locations are dynamic, which makes them a much harder challenge to manage.

There are two main service discovery patterns:

  • Client-side discovery (Eureka)
  • Server-side discovery (Kubernetes)

What is a Service Registry?

The service registry is a central place where service locations are registered. You can think of it as a database of service locations. Instances of services are registered in the service registry on startup and deregistered accordingly. Each instance registers its host, port and some additional information, like on the image below.
I have already created an example application using the Netflix Eureka service registry. It comes with some predefined annotations. For this purpose, I have created two Spring applications. One application will act as a discovery server. It needs to include the following dependencies: Eureka Server, Spring Web and Actuator. I have added the @EnableEurekaServer annotation on the main class, like in the code below. With this annotation, the app will act as a microservice registry and discovery server.

@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}

The server's application.properties is shown below. Registering with Eureka and fetching the registry are disabled, since this instance is the registry itself:

spring.application.name=eureka-server
server.port=8761
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false

The second Spring Boot application will act as a client, which will be registered on the Eureka server (the first application). This application needs the following dependencies: Actuator, Eureka Discovery and Spring Web. We need to add the @EnableDiscoveryClient annotation on the Spring Boot application class and set up the properties below in the application.properties file.

@SpringBootApplication
@EnableDiscoveryClient
public class ClientDiscoveryApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClientDiscoveryApplication.class, args);
    }
}

spring.application.name=eureka-client-service
server.port=8085
eureka.client.service-url.defaultZone=http://localhost:8761/eureka/

If we navigate to the following URL, http://localhost:8761/, we can see the Eureka dashboard, like on the image below. We can notice that 2 instances of the eureka-client-service application were registered, each started on a different port: one on port 8085 and the second on port 8087.

Client-side discovery pattern

In this pattern, the client is responsible for determining the network locations of available service instances and for load balancing requests across them. The network location of a service instance is registered in the service registry when it starts up and deregistered when it terminates.
The service registry for client-side discovery that we used in the previous example is Netflix Eureka Server. It provides management of service instances and allows querying the available instances.
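
To make this concrete, here is a minimal sketch of how a client could call another service through Eureka using Spring Cloud's load-balanced RestTemplate (the configuration class and the target URL path are illustrative assumptions, not part of the example project):

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    // @LoadBalanced lets the RestTemplate resolve logical service names
    // registered in Eureka and balance requests across their instances
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

A caller can then use the application name instead of a concrete host and port, e.g. restTemplate.getForObject("http://eureka-client-service/some-endpoint", String.class), and the client picks one of the registered instances (port 8085 or 8087 in our example).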

On the image below we have 3 services (Service A, Service B and Service C), each with its own IP address; for instance, the IP address of Service A is 10.10.1.2:15202. If we want to query the available instances, we need to access the service registry (Eureka), which is responsible for storing the instances of all services that were previously registered.

  1. The locations of Service A, Service B and Service C are sent to the service registry, like on the image below.
  2. The service consumer asks the service discovery server for the location of Service A / Service B.
  3. The service registry, which stores the instances' locations, looks up the location of Service A.
  4. The service consumer gets the location of Service A and can make a direct request.

Benefits of using this pattern:
⦁ a flexible, application-specific load balancer
Drawbacks:
⦁ a registration and discovery mechanism must be implemented for each framework
The most significant disadvantage of this pattern is that it couples the client with the service registry.

Alternatives to Eureka discovery

Other alternatives to Eureka are Consul and Zookeeper. Consul makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface. It also provides richer health checking, and it has its own internal distributed key-value store that we can use as well.

Apache Zookeeper, on the other hand, is a distributed key-value store that we can use as the basis for implementing service discovery. It is a centralized service for maintaining configuration information, naming and providing distributed synchronization. One of its benefits is the large community supporting it.

Server-side discovery pattern

The alternative approach to service discovery is the server-side discovery pattern. Clients simply make requests to a service instance via a load balancer, which acts as an orchestrator. The load balancer queries the service registry and routes each request to an available service instance, like on the image below.
Benefits of using this pattern:
⦁ no need to implement discovery logic for each framework used by the service clients
Drawbacks:
⦁ the load balancer needs to be set up and managed, unless it's already provided in the deployment environment
Some clustering solutions, such as Kubernetes, run a proxy on each host in the cluster. In order to make a request to a service, a client connects to the local proxy using the port assigned to that service.

On the image below we have added a load balancer that acts as an orchestrator. The load balancer queries the service registry and routes each HTTP request to an available service instance.

Kubernetes

If we try to find some analogies between Kubernetes and more traditional architectures, I would compare Kubernetes Pods with service instances and a Kubernetes Service with a logical set of pods.

Kubernetes Pods are the smallest deployable units in Kubernetes, containing one or more containers, such as Docker containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod's resources. Each pod has its own IP address, but when the pod restarts, it gets a new one; there is no guarantee that the pod's IP address will remain the same over time.

Kubernetes may relocate or re-instantiate pods at runtime, so it doesn't make sense to use pod IP addresses for service discovery. This is one of the main reasons to include Kubernetes Services in our story.

Service in Kubernetes

A Service is a component just like a pod, but it's not a process. It's an abstraction layer which basically represents an IP address. A service knows how to forward requests to one of the pods that have registered as service endpoints. Unlike pods, whose IP addresses change, a service has a stable IP address. Each service exposes an IP address and also a DNS endpoint, which never changes, even when a pod dies.

The services provided by Kubernetes allow us to connect to our pods and provide a dynamic way to access them. For instance, a load balancer can use these services to automatically determine which servers it should balance across. Services also provide load balancing: if we have, for example, 3 instances of a microservice app, the service receives each request targeted at the app and forwards it to one of those pods.

In Kubernetes we have the concept of namespaces, which represent a logical group of resources. For our examples I will be using the hello-today-dev namespace, which consists of a couple of pods.

kubectl -n hello-today-dev get pod -o wide

With the following command we can see all the available services in this namespace:

kubectl -n hello-today-dev get svc

How does Service Discovery work in Kubernetes?

Kubernetes has a powerful concept of labelling. Labels are just key-value pairs that provide metadata on our objects. Any pod whose labels match the selector defined in the service manifest will automatically be discovered by the service.

Here we have a single service that is front-ending two of our pods (two instances of our app). The two pods carry the label app=my-app, and the service has defined a label selector that is looking for that same label. This means that the service will send traffic to these pods, even though the pods might change their addresses. You might also notice that there is a Pod 3 with a different label; the service won't front-end that pod.

Example of service selector

The other example is when a service selector defines two labels (app = my-app, microservice). The service then looks for both labels; a pod must match all the selectors, not just one, to be registered as an endpoint. This is how the service knows which pods belong to it, i.e. where to forward requests. See the sketch of such a manifest below.
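
As an illustration, a service manifest with such a selector might look like the sketch below (the service name and the exact key of the second label, here type: microservice, are assumptions for illustration; only pods carrying both labels become endpoints):

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
    type: microservice
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080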

kubectl -n hello-today-dev get pods --show-labels

A service identifies its member pods using a selector attribute, which holds key-value pairs defined as a list; in our case app = my-app. This creates a binding between the service and the pods with this label, providing a flexible mechanism for service discovery and automatic load balancing. Kubernetes uses an Endpoints object to keep track of which pods are members of the service. Any pod whose label value (app=my-app) matches the selector defined by the service will be exposed as one of its endpoints. The service provides load balancing by routing requests across the matching pods.

kubectl -n hello-today-dev describe svc/ht-employee

With this command we can view the status of the service, in our case ht-employee. The service uses the selector app=ht-employee to select the pod 10.0.21.217 as a backend pod. A virtual IP address is created in the cluster for the Kubernetes Service to evenly distribute traffic to the pods at the backend.

We can see the defined endpoints of the ht-employee microservice (which means that our service is working).

If we have blank endpoints, it is the result of not selecting any pods, which means that your service traffic will go nowhere. The Endpoints field indicates the pods selected by the selector field.

Summary

Depending on the deployment environment you need, you can set up your own service discovery using a service registry like Eureka. In other deployment environments, as in our case Kubernetes, service discovery is built in: Kubernetes, as explained previously, is responsible for handling service instance registration and deregistration.

SFTP file transfers with Java

Reading Time: 7 minutes

Migrating data or moving files around between different servers is a common case in software development.
For Java and Spring Boot based solutions, JSch implementations of SFTP are often used. In the following blog post we will learn the know-how of setting up and using the most commonly used Java libraries for file uploads and downloads from remote servers.

SFTP is a secure protocol for file transfers, and it is fully compliant with the SSH authentication functionalities. The protocol authenticates both the server and the user and offers cryptographic hash functions. Applications of this protocol are file transfers which must guarantee data integrity, such as migrations of documents or logs, copying data across various servers inside or outside an organization, automated system backups etc. Everywhere file security is integral, this protocol should be used over FTP.

JCraft JSch is a pure Java implementation of the SFTP protocol, offering an easy to set up and easy to use library which covers most of the basic use cases. It allows connections to remote servers, authentication, port forwarding, secure file transfers and data compression. Commons VFS is another library, built on top of JCraft JSch, which offers better support and even more customization; for power users and production enterprise applications, the latter is preferred. In the following exercise I will demonstrate how we can set up a test SFTP server and write a Java implementation to upload and download files via SFTP.

SFTP vs. FTPS

SFTP runs over the SSH protocol, while FTPS is actually the old FTP protocol run over SSL/TLS. In the end, FTPS needs much more configuration effort: firewall rules need to be added and many ports explicitly opened, and it might not work in a NAT context. SFTP, on the other hand, does not require additional configuration on the server; the SSH standard port 22 is the port on which this communication happens. For SFTP, certificates are not centralized, meaning no additional support and maintenance is expected for them; they can be used by any method that uses SSH and by any certificate authority. SFTP can also be used as a filesystem, and various configurations can be added to it, extending the use cases. Before we dig into the details of the Java setup, let's create a Docker Compose file.

Docker

atmoz/sftp is an easy to use SFTP server Docker image. It can be used as a mock SFTP server for integration tests or other research and development scenarios. The user test can log in with SFTP and access files in the Documents folder. If there is a need to test logging in with SSH keys, use volumes to mount a public key from the host into the user's .ssh/keys/ directory of the Docker container.

version: "2"
services:
  sftp:
    image: atmoz/sftp
    ports:
      - "2222:22"
    command: test::1001::Documents
    volumes:
      - "publicKey.pub:/home/test/.ssh/keys/publicKey.pub"

Run docker-compose up to start the container. Now that the container is running, let's add some Java configuration.

Jsch

  1. Adding maven dependency
  2. Setting up JSch
  3. Upload to SFTP server/Download from SFTP server

So basically there are three important steps. Let's go into the details of these steps for JSch.

Step 1. Add maven dependency

<!-- https://mvnrepository.com/artifact/com.jcraft/jsch -->
<dependency>
    <groupId>com.jcraft</groupId>
    <artifactId>jsch</artifactId>
    <version>0.1.55</version>
</dependency>

This comes as a regular entry in the pom.xml file list of dependencies.

We should always try to use the latest stable versions. Version 0.1.55 is the latest and it covers the basic use cases. Still, if the use case is something more than a proof of concept, let's say a production web service, there are better alternatives. Different forks of this implementation have much better support and implement OpenSSH. More about them in the later part of this blog post.

Step 2. Setup JSch

This Java library allows public key or password authentication against a remote server. For the following example, let's use password authentication.

private ChannelSftp setupJsch() throws JSchException {
    JSch jsch = new JSch();
    Session jschSession = jsch.getSession(username, remoteHost);
    jschSession.setPassword(password);
    // without a known_hosts setup, host key checking must be relaxed for the test server
    jschSession.setConfig("StrictHostKeyChecking", "no");
    jschSession.connect();
    return (ChannelSftp) jschSession.openChannel("sftp");
}
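
One note when testing against the atmoz/sftp container from the Docker section: the compose file maps the server to host port 2222, while JSch defaults to port 22. A variant of the setup method with an explicit port could look like this sketch:

private ChannelSftp setupJschForDocker() throws JSchException {
    JSch jsch = new JSch();
    // the compose file maps container port 22 to host port 2222
    Session jschSession = jsch.getSession(username, "localhost", 2222);
    jschSession.setPassword(password);
    jschSession.setConfig("StrictHostKeyChecking", "no");
    jschSession.connect();
    return (ChannelSftp) jschSession.openChannel("sftp");
}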

Step 3. Upload a file to SFTP server/Download file from SFTP server

For uploading a local file to a remote server, the put() method is available out of the box.

public void uploadFileUsingResources() throws JSchException, SftpException {  
    ChannelSftp channelSftp = setupJsch();
    channelSftp.connect();
    
    String localFile = "src/main/resources/sample.txt";
    String remoteDir = "Documents/";
 
    channelSftp.put(localFile, remoteDir + "remoteFile.txt");
    channelSftp.exit();
}

Or, if the source is an InputStream and not a resource file, or we need to bypass setting a path to a local file, here is the corresponding snippet; the JSch put() method can also take streams as arguments.

public void uploadFileInputStream() throws JSchException, SftpException {  
    ChannelSftp channelSftp = setupJsch();
    channelSftp.connect();
    
    InputStream inputStream = new FileInputStream("c:\\samples\\sample.txt");
    String remoteDir = "Documents/";
 
    channelSftp.put(inputStream, remoteDir + "remoteFile.txt");
    channelSftp.exit();
}

Apart from uploading files to a remote server, JSch also offers a service method for downloading files from a remote SFTP server. Downloading is supported by the get() method, which is also very easy to use.

public void downloadFileUsingResources() throws JSchException, SftpException {  
    ChannelSftp channelSftp = setupJsch();
    channelSftp.connect();
    
    String remoteFile = "sample.txt";
    String localDir = "src/main/resources/";
 
    channelSftp.get(remoteFile, localDir + "localFile.txt");
    channelSftp.exit();
}
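
Symmetrically to the upload case, get() can also hand back an InputStream instead of writing to a local path. Here is a small sketch using java.nio.file.Files to persist the stream (the remote and local file names are just examples):

public void downloadFileAsStream() throws JSchException, SftpException, IOException {
    ChannelSftp channelSftp = setupJsch();
    channelSftp.connect();

    // get(String) returns a stream over the remote file's content
    try (InputStream remoteStream = channelSftp.get("Documents/remoteFile.txt")) {
        Files.copy(remoteStream, Paths.get("src/main/resources/streamedFile.txt"));
    }
    channelSftp.exit();
}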

For enterprise applications, in the context of transferring files between servers while making sure that there is no vulnerability to man-in-the-middle attacks or password sniffing, the latest encryption functions and algorithms need to be used. JSch is not enough in this case, as the library is not actively maintained. At the beginning of this blog post I mentioned a fork of this library which is gaining more and more active users and is built on top of JSch: the future of JSch without ssh-rsa is mwiede/jsch. For power users who need even more customization, the alternatives are Commons VFS and SSHJ.
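
If you want to try the maintained mwiede/jsch fork, it is designed as a drop-in replacement published under different Maven coordinates; the version below is only an example, check the repository for the current one:

<dependency>
    <groupId>com.github.mwiede</groupId>
    <artifactId>jsch</artifactId>
    <version>0.2.9</version>
</dependency>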

Apache Commons VFS

Apache Commons VFS is a Virtual File System library. It allows uploading local files to remote servers and has methods to check whether a file exists in a remote location. The library supports moving and deleting files on a remote host, all of this in a secure manner using SFTP. Commons VFS is basically JSch, but updated, with a cleaner, better API and a maintained repository.
Same as in the case of JSch, the manual for usage describes three steps…

Step 1. Add maven dependency

<!-- https://mvnrepository.com/artifact/org.apache.commons/commons-vfs2 -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-vfs2</artifactId>
    <version>2.9.0</version>
</dependency>

This comes as a regular entry in the pom.xml file list of dependencies.

Step 2. Setup VFS2

This time, let's use a bit more production-like setup. In this configuration class, let's define a file system and use public key authentication. The privateKey property, or rather its value, must of course be encrypted with Jasypt for further security, so that we don't have plain keys exposed in the repository.

@Configuration
@RequiredArgsConstructor
public class SFTPConfiguration {

    @Value("${sftp.private-key}")
    private String privateKey;

    @Bean
    public FileSystemOptions remoteFileSystemOptions() throws FileSystemException {
        FileSystemOptions fileSystemOptions = new FileSystemOptions();
        SftpFileSystemConfigBuilder fileSystemConfigBuilder = SftpFileSystemConfigBuilder.getInstance();
        fileSystemConfigBuilder.setUserDirIsRoot(fileSystemOptions, true);
        fileSystemConfigBuilder.setPreferredAuthentications(fileSystemOptions, "publickey");
        fileSystemConfigBuilder.setIdentityProvider(fileSystemOptions, jSch -> jSch.addIdentity(privateKey, privateKey.getBytes(StandardCharsets.UTF_8), null, null));
        return fileSystemOptions;
    }

}

Step 3. Upload a file to SFTP server/Download file from SFTP server

The FileSystemManager interface is used to create file objects, which are then used as arguments in the copyFrom() method. The FileSystemManager locates a FileObject by name in one of the available file systems. A FileObject represents a file and is used to access the content and structure of the file. The copyFrom() method copies another file, and all its descendants, into the file it is called on. On another note, the sftpUrl is a construct which consists of sftp://theUsername@theRemoteHost:22/andThePathToTheFile.

public void uploadFileToSFTP() throws FileSystemException {
    FileSystemManager manager = VFS.getManager();
    FileObject localFile = manager.resolveFile(
            System.getProperty("user.dir") + "/" + fileName);
    String sftpUrl = "sftp://" + username + "@" + remoteHost + ":" + remoteHostPort + "/Documents/" + fileName;
    FileObject remoteFile = manager.resolveFile(sftpUrl, fileSystemOptions);

    remoteFile.copyFrom(localFile, Selectors.SELECT_SELF);

    localFile.close();
    remoteFile.close();
}

Or for the other operation, downloading a file from an SFTP server:

public void downloadFileFromSFTP() throws FileSystemException {
    FileSystemManager manager = VFS.getManager();
    FileObject localFile = manager.resolveFile(
            System.getProperty("user.dir") + "/" + localDir + "vfsFile.txt");
    String sftpUrl = "sftp://" + username + "@" + remoteHost + ":" + remoteHostPort + "/Documents/" + fileName;
    FileObject remoteFile = manager.resolveFile(sftpUrl, fileSystemOptions);

    localFile.copyFrom(remoteFile, Selectors.SELECT_SELF);

    localFile.close();
    remoteFile.close();
}
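
The remote existence check mentioned earlier is equally short; a sketch reusing the same sftpUrl construct and options:

public boolean remoteFileExists() throws FileSystemException {
    FileSystemManager manager = VFS.getManager();
    String sftpUrl = "sftp://" + username + "@" + remoteHost + ":" + remoteHostPort + "/Documents/" + fileName;
    FileObject remoteFile = manager.resolveFile(sftpUrl, fileSystemOptions);
    try {
        // exists() resolves the remote path and reports whether the file is there
        return remoteFile.exists();
    } finally {
        remoteFile.close();
    }
}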

In this blog post we covered two things: how to set up a mock SFTP server with Docker, and how to use the SFTP Java libraries to securely transfer files from/to remote servers. Write to me in the comments section if you find this useful or if you have further questions.

Multi Instance Process in Camunda

Reading Time: 10 minutes

As a developer, there is a 90% chance that the client or the business will come to you one day and say: "look, we are doing this the old way, but we want software for this now". And there is a big chance that they will have some procedure with a lot of if-else scenarios. There might also be a part of the procedure that is repeated for every customer they have, which means you will have to execute the same logic multiple times. If that happens, don't be scared; Camunda is here to help, and I hope that this blog will help you too. We are going to dive into Camunda processes and how we can handle a part of a process being repeated multiple times. In order to follow along, some basic knowledge of Camunda, Java and Spring Boot can be of help.

Let’s look at the following scenario:
We have a bank, and in it there is one guy responsible for the whole process of giving loans to people. He is sitting there, and a guy named John comes in who wants to get a loan. The bank guy first gives an application to John. Once John has finished the application and returned it, the bank guy requests some documents from John's place of work, so they can be sure that he is employed full-time. Once that is done, the bank guy needs to send everything to corporate so that they can make the final decision.

If we make a diagram of it, it will look something like this:

This is all great if you have one customer, but if you have around 100 customers per day, the bank guy will have no clue which guy gave him the work information or filled in which application.

So let’s make software for this guy and make his life easier, let’s open our Intellij and write some code!

I have already created the project (you can download it from here) and will go through some of the more important things.

First, let’s take a look at the pom file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.6.1</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>north.com</groupId>
    <artifactId>multi-process-instance</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>multi-process-instance</name>
    <description>multi-process-instance</description>
    <properties>
        <java.version>11</java.version>
    </properties>
    <dependencies>
        <!-- Spring -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>

        <!-- Camunda -->
        <dependency>
            <groupId>org.camunda.bpm.springboot</groupId>
            <artifactId>camunda-bpm-spring-boot-starter-rest</artifactId>
            <version>7.14.0</version>
        </dependency>
        <dependency>
            <groupId>org.camunda.bpm.springboot</groupId>
            <artifactId>camunda-bpm-spring-boot-starter-webapp</artifactId>
            <version>7.14.0</version>
        </dependency>
        <dependency>
            <groupId>org.camunda.bpm</groupId>
            <artifactId>camunda-engine-plugin-spin</artifactId>
            <version>7.14.0</version>
        </dependency>
        <dependency>
            <groupId>org.camunda.spin</groupId>
            <artifactId>camunda-spin-dataformat-json-jackson</artifactId>
            <version>1.10.0</version>
        </dependency>

        <!-- Database -->
        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
            <scope>runtime</scope>
        </dependency>
        <!-- Lombok -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

I have added the Camunda dependencies because we are going to create Camunda processes, and an H2 database as well. Even though we are not going to write code that accesses the database or creates tables, Camunda needs a database to create its own tables.

Next, let’s look at the application.yml

spring:
  datasource:
    driver-class-name: org.h2.Driver
    url: jdbc:h2:mem:testdb
    username: sa
  jpa:
    database-platform: org.hibernate.dialect.H2Dialect
    hibernate.ddl-auto: update

camunda:
  bpm:
    authorization:
      enabled: false
    default-serialization-format: application/json
    admin-user:
      id: admin
    database:
      type: h2
    generic-properties.properties:
      telemetry-reporter-activate: false
    deployment-resource-pattern:
      - classpath*:processes/*.bpmn

Above we are setting some database properties and some Camunda properties. Mainly, we are enabling the admin user for Camunda so we can open the cockpit with the user admin, and we are specifying where the processes are located and what kind of database Camunda will use.

Now, having said all that, let's create the whole scenario as a process in Camunda. The first thing you will need is the Camunda Modeler, which you can download from here. Once you have the Modeler up and running, go to File -> New File -> BPMN Diagram. What we need to build (or you can just open the existing BPMN diagram from the application) is something that looks like this:

As a first step, we will generate some credit loaners (we are doing this just to have some data to play with) and then set this collection of credit loaners as the value of a variable called creditLoanerList. This is achieved using a service task. If you click on Generate Credit Loaners, you will see that we are providing Java class as the implementation, and as the Java class we are pointing to north.com.multiprocessinstance.camunda.task.GenerateCreditLoanersTask:

package north.com.multiprocessinstance.camunda.task;

import north.com.multiprocessinstance.entity.CreditLoaner;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

import java.util.Arrays;
import java.util.List;

public class GenerateCreditLoanersTask implements JavaDelegate {
    @Override
    public void execute(DelegateExecution delegateExecution) throws Exception {
        List<CreditLoaner> creditLoaners = Arrays.asList(new CreditLoaner("Sarah", "Cox"),
                new CreditLoaner("John", "Doe"),
                new CreditLoaner("Filip", "Trajkovski"));
        delegateExecution.setVariable("creditLoanerList", creditLoaners);
    }
}
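
For reference, the CreditLoaner class used above is a plain data holder; a minimal sketch of how it could look (the actual class in the downloadable project may differ slightly):

package north.com.multiprocessinstance.entity;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@NoArgsConstructor // a default constructor is needed to deserialize the JSON process variable
@AllArgsConstructor
public class CreditLoaner {
    private String firstName;
    private String lastName;
}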

If you click on the subprocess (it is the large rectangle with the three dashes), you will see the reason why we set the list of credit loaners as the value of the creditLoanerList variable:

In order to pass all the credit loaners, and for a subprocess to be started for each of them, we need to provide a collection of credit loaners. That is why we are setting the Collection field to creditLoanerList, and in order for every credit loaner to be accessible in its subprocess, we are setting an Element Variable that we will call creditLoaner. What will happen is that we will have this subprocess once for Filip, once for John and once for Sarah.

Let's look at the tasks in our subprocess. The first one, Fulfill Application, is a user task in which we want the credit loaner to fill out an application. For the next step, Get work recommendation, to be completed, the credit loaner needs to provide some documents proving that he is employed and has a regular salary (this is not completely implemented; we are describing a possible scenario and currently only form fields of type string are defined). In the third step, a service task is executed, running the NotifyCorporate class:

package north.com.multiprocessinstance.camunda.task;

import lombok.extern.slf4j.Slf4j;
import north.com.multiprocessinstance.entity.CreditLoaner;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

@Slf4j
public class NotifyCorporate implements JavaDelegate {
    @Override
    public void execute(DelegateExecution delegateExecution) throws Exception {
        final CreditLoaner creditLoaner = (CreditLoaner) delegateExecution.getVariable("creditLoaner");
        log.info("Notify corporate that person {} wants to get credit", creditLoaner.getFirstName());
        delegateExecution.setVariable("creditLoanerFirstName", creditLoaner.getFirstName());
        log.info("This is the user: {}", creditLoaner);
    }
}

Here we notify corporate about the credit loaner, and to get the information about him we use the variable that was previously set as an Element Variable, creditLoaner (see the image of the subprocess details). One additional thing we do here is set one more variable, called creditLoanerFirstName; the reason for it will be revealed when we take a look at the next task: Wait for feedback from corporate.

In this Wait for feedback from corporate step, one option is that the bank guy gets a mail from corporate saying the credit loaner is OK and can get his loan, so the bank guy completes this step and continues with the next one. For the other option, we provide a boundary event where corporate sends a STOP message, and the request for that credit loaner is rejected. But in our scenario we have three subprocesses, one for each credit loaner, so when sending this STOP message we need to somehow specify for which credit loaner it is sent. That's why in the previous step we set the creditLoanerFirstName variable (in real cases don't use the first name, but something like an email or another unique value), and the last thing we need to do is set an input parameter in the Wait for feedback from corporate task.

In the Camunda Modeler, if you click on the task named Wait for feedback from corporate and go to the Input/Output tab, you should be able to see the input parameter, which is of type text, with its value being the variable we set previously, creditLoanerFirstName.

This input parameter gives us a way to specify for which credit loaner we want to STOP the procedure and decline the loan request.

But let's start our application and then start our bank process. Once you have started the application, you can open the Camunda tasklist at the following URL: http://localhost:8080/camunda/app/tasklist/default/. The username and password are admin and admin. Once you have entered the credentials, you should be able to start a process from the top right by clicking Start Process, where the BankProcess will be offered (we previously made sure that the BPMN file is saved under resources/processes in the application, since we defined in our application.yml that this is the place where we keep the processes).

Let’s take a look at the Camunda cockpit:

Camunda cockpit

Once you have started the process, you should be able to see the task list (if for some reason the tasks are not visible, just click Add simple filter and they should be displayed). If you click on one of the tasks, the task details will be displayed, and from here we are able to complete it or open the process details:

Here we have a visual representation of the step the process is currently at. Let's complete all three Fulfill application tasks by claiming and completing them, and let's do the same for Get work recommendation. Now we are at the Wait for feedback from corporate step, and we can test the STOP boundary event that we added.

There is already an exposed POST endpoint in the ProcessController, for which we need to provide the process instance id and a MessageEventRequest. In this MessageEventRequest we need to provide the message name, which in our scenario is STOP, and a list of correlation keys. The correlation key is where we specify the credit loaner whose loan request we want to decline: the key will be set to creditLoanerFirstName, and since we will decline Sarah's request (before executing it, make sure that there is a Wait for feedback from corporate task for Sarah), we will set the value to Sarah. What we need to do is execute a POST request towards localhost:8080/{processInstanceId}/messageEvent with the following request body:

{
    "messageName":"STOP",
    "correlationKeys":[{
        "key":"     ",
        "value":"Sarah"
    }]
}
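
For orientation, the ProcessController endpoint that handles this request could look roughly like the sketch below, built on Camunda's message correlation API (the package, the MessageEventRequest DTO shape and the exact correlation calls are illustrative; the actual controller in the project may differ):

package north.com.multiprocessinstance.controller;

import lombok.RequiredArgsConstructor;
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.runtime.MessageCorrelationBuilder;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequiredArgsConstructor
public class ProcessController {

    private final RuntimeService runtimeService;

    @PostMapping("/{processInstanceId}/messageEvent")
    public ResponseEntity<Void> messageEvent(@PathVariable String processInstanceId,
                                             @RequestBody MessageEventRequest request) {
        MessageCorrelationBuilder correlation = runtimeService
                .createMessageCorrelation(request.getMessageName())
                .processInstanceId(processInstanceId);
        // each correlation key narrows the match down to the execution whose
        // local variable (e.g. creditLoanerFirstName) has the given value
        request.getCorrelationKeys().forEach(correlationKey ->
                correlation.localVariableEquals(correlationKey.getKey(), correlationKey.getValue()));
        correlation.correlate();
        return new ResponseEntity<>(HttpStatus.OK);
    }
}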

As mentioned previously, you can figure out the process instance id and monitor the whole flow of the tasks using the Camunda cockpit, available at the following URL: http://localhost:8080/camunda/app/tasklist/default/.

So we started a whole process, started three subprocesses, and finished a service task and a user task. I hope that you enjoyed it; you can also find the code at the following link.

Deploy microservice on Kubernetes, step by step

Reading Time: 11 minutes

In this tutorial, I will try to explain step by step how you can set up Kubernetes, deploy your microservice on it, and check the result via the Kubernetes dashboard. All other things will be "as simple as possible". Gcloud will be used as the cloud platform. We will cover the following aspects of the problem:

  1. Create microservice to be deployed
  2. Place the application in a Docker container
  3. What is Kubernetes and how to install it?
  4. Create a new Kubernetes project
  5. Create new Cluster
  6. Allow access from your local machine
  7. Create service account
  8. Activate service account
  9. Connect to cluster
  10. Gcloud initialization
  11. Generate access token
  12. Deploy and start Kubernetes dashboard
  13. Deploy microservice

Step 1: Create microservice to be deployed

Traditionally, in the programming world, everything starts with "Hello World". So, as mentioned previously, to keep things simple, create a microservice that just returns "Hello World". You can use https://start.spring.io/ for this goal. Create a HelloController like this:

package com.example.demojooq.controllers;


import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/v1")
public class HelloController {

    @GetMapping("/say-hello")
    public String sayHello() {
        return "Hello world";
    }
}

Step 2: Place the application in a Docker container

We have a microservice; now we need to put it in a Docker container and upload it to Kubernetes. From that point, Kubernetes will orchestrate the container according to your settings. Let's create the first image from the microservice. Normally, as you might guess, the file is called Dockerfile (without any extension), and its content is:

Dockerfile

FROM adoptopenjdk/openjdk11:jre-11.0.8_10-debianslim
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","app.jar"]

The next step is to create the docker-compose file, which will call the Dockerfile to build the image. You can build the image manually, but the best way is from the docker-compose file, as you then have a permanent record of the setup. This is a .yaml file (picture below).

docker-compose.yaml

version: "3"
services:
  hello-world:
    build: .
    ports:
      - "8099:8080"

After starting Docker, go to the folder where docker-compose.yaml is located and execute the command docker-compose up. The expectation is to reach this microservice on port 8099. If everything is OK, your Docker will show something like this:

Check the microservice's Docker installation with Postman by calling http://localhost:8099/api/v1/say-hello. In the response, you get "Hello World".

Step 3: What is Kubernetes and how to install it?

What is Kubernetes?

Kubernetes is an open-source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications. What happens when you use Docker and your container fails? Probably the first thing you do is restart the container, manually (if you don't have Kubernetes). Here comes Kubernetes: it observes that the container is down and starts a new one automatically. This is just a basic use case. Please read more on the internet; there is a bunch of information about this.

How to install Kubernetes?

Ok, until now you are sure that Kubernetes is needed, but where do you find it, what are the costs, and how do you install it? First of all, try "download Kubernetes" on Google… Pick the site https://kubernetes.io/docs/tasks/tools/… Options for Windows, Mac, Linux appear… Different installations like kind, minikube, kubeadm… So, is it worth spending so much time setting up Kubernetes properly? You don't have to ask me, I agree with you, it is too much time. Fortunately, we can go around all that and skip it: go to Gcloud, where Kubernetes is offered as a service, and just use it. Somebody else takes care of the setup; we can focus on the business logic in our microservice and use the out-of-the-box Kubernetes installation from Gcloud. Sounds good, doesn't it? The last and most important question: money. Is it free? No, it is not. You have to pay for the Gcloud services and here is the price list: https://cloud.google.com/kubernetes-engine/pricing. But for ordinary people like you and me, Gcloud offers a free account for 3 months with up to $300 of credit, which seems fair. It is enough time to learn about deploying microservices on Kubernetes. For any professional use in the future, the company should stand behind this. Here is the link where you can create your free cloud account: https://cloud.google.com/. One more thing: during the creation of the free account, Google will ask for your bank account in order to charge you automatically. But do not worry, you are safe for the first three months and below $300, and for any charge you will be asked for permission first… So far my personal experience is positive, as Google is keeping the promise made when you create the account. But the final decision is up to you.

Step 4: Create new Kubernetes project

Open up your Google account, sign in and go to the console.

Create a new project from the main dashboard; the name of the new project is “hello-world”. From now on, this is your active project.

Step 5: Create new cluster

Create a new cluster (name it cluster2). Accept the default values for the other fields.

Step 6: Allow access from your local machine

Now, we must allow access from our local machine to Kubernetes, via kubectl. For that purpose, we need to follow these steps:

  1. Click on cluster2
  2. Find your local IP address and add it, in CIDR notation, under Edit control plane authorized networks

Step 7: Create service account

Give the new service account the role "Owner". Accept the default values for the other fields. After the service account is created, you should have something like this:

Generate a key for this service account with key type JSON. When the key is downloaded, it has some random name like hello-world-315318-ab0c74d58a70.json. Keep this file in a safe place; we will need it later.

Now, install the Google Cloud SDK Shell on your machine according to your OS. Let's do the configuration so kubectl can reach cluster2. Copy the file hello-world-315318-ab0c74d58a70.json and put it in the Cloud SDK folder. For the Windows environment, it looks like this:

Step 8: Activate service account

The first thing to do is to activate the service account with the command: gcloud auth activate-service-account hello-world-service-account@hello-world-315318.iam.gserviceaccount.com --key-file=hello-world-315318-ab0c74d58a70.json

Step 9: Connect to cluster

Now go to cluster2 again and find the connection string to connect to the new cluster.

Execute this connection string in the Google Cloud Shell: gcloud container clusters get-credentials cluster2 --zone us-central1-c --project hello-world-315318

Step 10: Gcloud initialization

The next command to execute is gcloud init, to initialize connection with the new project. Here is the complete code on how to do that from the Gcloud Shell:

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>gcloud init
Welcome! This command will take you through the configuration of gcloud.

Settings from your current configuration [dev] are:
accessibility:
  screen_reader: 'False'
compute:
  region: europe-west3
  zone: europe-west3-a
core:
  account: hello-world-service-account@hello-world-315318.iam.gserviceaccount.com
  disable_usage_reporting: 'True'
  project: dops-containers

Pick configuration to use:
 [1] Re-initialize this configuration [dev] with new settings
 [2] Create a new configuration
 [3] Switch to and re-initialize existing configuration: [database-connection]
 [4] Switch to and re-initialize existing configuration: [default]
Please enter your numeric choice:  2

Enter configuration name. Names start with a lower case letter and
contain only lower case letters a-z, digits 0-9, and hyphens '-':  hello-world
Your current configuration has been set to: [hello-world]

You can skip diagnostics next time by using the following flag:
  gcloud init --skip-diagnostics

Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).

Choose the account you would like to use to perform operations for
this configuration:
 [1] cicd-worker@devops-platform-n47.iam.gserviceaccount.com
 [2] d.trifunov74@gmail.com
 [3] dimche.trifunov@north-47.com
 [4] dtrifunov@lunar-sled-314616.iam.gserviceaccount.com
 [5] hello-world-service-account@hello-world-315318.iam.gserviceaccount.com
 [6] service-account-demo-dime@blissful-epoch-305214.iam.gserviceaccount.com
 [7] Log in with a new account
Please enter your numeric choice:  5

You are logged in as: [hello-world-service-account@hello-world-315318.iam.gserviceaccount.com].

API [cloudresourcemanager.googleapis.com] not enabled on project
[580325979968]. Would you like to enable and retry (this will take a
few minutes)? (y/N)?  y

Enabling service [cloudresourcemanager.googleapis.com] on project [580325979968]...
Operation "operations/acf.p2-580325979968-f1bf2515-deea-49d5-ae35-a0adfef9973e" finished successfully.
Pick cloud project to use:
 [1] hello-world-315318
 [2] Create a new project
Please enter numeric choice or text value (must exactly match list
item):  1

Your current project has been set to: [hello-world-315318].

Do you want to configure a default Compute Region and Zone? (Y/n)?  n

Error creating a default .boto configuration file. Please run [gsutil config -n] if you would like to create this file.
Your Google Cloud SDK is configured and ready to use!

* Commands that require authentication will use hello-world-service-account@hello-world-315318.iam.gserviceaccount.com by default
* Commands will reference project `hello-world-315318` by default
Run `gcloud help config` to learn how to change individual settings

This gcloud configuration is called [hello-world]. You can create additional configurations if you work with multiple accounts and/or projects.
Run `gcloud topic configurations` to learn more.

Some things to try next:

* Run `gcloud --help` to see the Cloud Platform services you can interact with. And run `gcloud help COMMAND` to get help on any gcloud command.
* Run `gcloud topic --help` to learn about advanced features of the SDK like arg files and output formatting

Step 11: Generate access token

Type kubectl get namespace; an access token is generated in the .kube folder (in your home folder), in the config file:

If you open this config file, you will find your access token. You will need this later.

Step 12: Deploy and start Kubernetes dashboard

Now, deploy Kubernetes dashboard with the next command: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>kubectl proxy
Starting to serve on 127.0.0.1:8001

Start the dashboard with kubectl proxy command. Now open the dashboard from the link: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default

In front of you, this screen will appear:

Now you need the token from the config file that we spoke about a moment ago. Open the config file with Notepad (on Windows), find your access token, copy it and paste it into the Enter token* field. Be careful when copying the token from the config file, as there might be several tokens; you must choose yours (image below).

Finally, the stage is prepared to deploy the microservice.

Step 13: Deploy microservice

Build the Docker image from the Dockerfile with the command: docker build -t docker2222/dimac:latest . (note the trailing dot, which is the build context; docker2222/dimac is my public Docker repository).
Push the image to Docker Hub with the command: docker image push docker2222/dimac:latest.
Execute kubectl apply -f k8s.yaml, where k8s.yaml is the file below:

---

apiVersion: v1
kind: Namespace
metadata:
  name: hello

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: hello
  annotations:
    buildNumber: "1.0"
spec:
  selector:
    matchLabels:
      app: hello-world
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
      annotations:
        buildNumber: "1.0"
    spec:
      containers:
        - name: hello-world
          image: docker2222/dimac:latest
          readinessProbe:
            httpGet:
              path: "/actuator/health/readiness"
              port: 8080
            initialDelaySeconds: 5
          ports:
            - containerPort: 8080
          env:
            - name: APPLICATION_VERSION
              value: "1.0"
---


apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: hello
spec:
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
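
One remark on the manifest above: the readinessProbe targets /actuator/health/readiness, so the microservice needs Spring Boot Actuator on the classpath (and, depending on the Spring Boot version, the probes may need to be enabled explicitly with management.endpoint.health.probes.enabled=true). The dependency is the usual starter:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>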

Last but not least, open the Kubernetes dashboard. If everything is OK, you should see your service.

Improve your performance using JPA Entity Graph

Reading Time: 7 minutes

If you are a web developer, you have probably developed an endpoint with a slow response time. The cause might be that you are calling some 3rd party API, doing file processing, or it might be the way your entities are retrieved from the database.

In this article, we are going to take a look at how the Entity Graph might help us to improve our query performance when using JPA and Spring Boot.

Let’s discuss the following scenario:

We want to build an application where we can keep track of buildings, how many apartments every building has and how many tenants every apartment has. I have already created a simple application that can be downloaded from here.

In order to achieve the previously mentioned scenario, we will need to have the following entities:

package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Building {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String buildingName;

    @OneToMany(mappedBy = "building", cascade = CascadeType.ALL)
    private List<Apartment> apartments;

    public void addApartment(Apartment apartment) {
        if (apartments == null) {
            apartments = new ArrayList<>();
        }
        apartments.add(apartment);
        apartment.setBuilding(this);
    }

}
package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Apartment {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String type;

    @JoinColumn(name = "building_id")
    @ManyToOne
    private Building building;

    @OneToMany(mappedBy = "apartment", cascade = CascadeType.ALL)
    private List<Tenant> tenants;

    public void addTenant(Tenant tenant) {
        if (tenants == null) {
            tenants = new ArrayList<>();
        }
        tenants.add(tenant);
        tenant.setApartment(this);
    }

}
package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Tenant {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;

    private String lastName;

    @JoinColumn(name = "apartment_id")
    @ManyToOne
    private Apartment apartment;
}

We want to observe what happens when we retrieve all the entities. For that purpose, a service method called iterate is created in BuildingService that gets all the buildings and loops through all the remaining entities. For this method to be visible to the outside world, a BuildingController is created that exposes a GET endpoint through which we can access the iterate method in BuildingService. In order to have some data in our database, there is an SQL script, data.sql, that inserts some data and is executed on startup. I would strongly suggest starting your application in debug mode and stepping through the iterate method.

If you have already started your application, open the following URL: http://localhost:8080/building/iterate in your browser or some API tool (Postman, for example) and execute the GET request. This will execute the iterate method created previously.

Let’s see the content of the iterate service method we are calling with this endpoint and observe the console while executing it:

package com.north47.entitygraphdemo.service;

import com.north47.entitygraphdemo.repository.BuildingRepository;
import com.north47.entitygraphdemo.repository.model.Apartment;
import com.north47.entitygraphdemo.repository.model.Building;
import com.north47.entitygraphdemo.repository.model.Tenant;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

import java.util.List;

@Slf4j
@Service
@RequiredArgsConstructor
public class BuildingService {

    private final BuildingRepository buildingRepository;

    public void iterate() {
        log.debug("Iteration started");
        log.debug("Get all buildings");
        final List<Building> buildings = buildingRepository.findAll();
        buildings.forEach(building -> {
            log.debug("Get all apartments for building with id: {}", building.getId());
            final List<Apartment> apartments = building.getApartments();
            apartments.forEach(apartment -> {
                log.debug("Get all tenants for apartment with id: {}", apartment.getId());
                final List<Tenant> tenants = apartment.getTenants();
                log.debug("Apartment with id : {} has {} tenants", apartment.getId(), tenants.size());
            });
        });
    }
}

If you are in debug mode, you may notice that after buildingRepository.findAll() is executed, the following log appears in the console:

Hibernate: select building0_.id as id1_1_, building0_.building_name as building2_1_ from building building0_

Let’s continue with executing the rest of the code. What will appear in the console is the following:

Hibernate: select apartments0_.building_id as building3_0_0_, apartments0_.id as id1_0_0_, apartments0_.id as id1_0_1_, apartments0_.building_id as building3_0_1_, apartments0_.type as type2_0_1_ from apartment apartments0_ where apartments0_.building_id=?
Hibernate: select tenants0_.apartment_id as apartmen4_2_0_, tenants0_.id as id1_2_0_, tenants0_.id as id1_2_1_, tenants0_.apartment_id as apartmen4_2_1_, tenants0_.last_name as last_nam2_2_1_, tenants0_.name as name3_2_1_ from tenant tenants0_ where tenants0_.apartment_id=?
Hibernate: select tenants0_.apartment_id as apartmen4_2_0_, tenants0_.id as id1_2_0_, tenants0_.id as id1_2_1_, tenants0_.apartment_id as apartmen4_2_1_, tenants0_.last_name as last_nam2_2_1_, tenants0_.name as name3_2_1_ from tenant tenants0_ where tenants0_.apartment_id=?

Even though we are not calling any repository methods, SQL queries are executed. This happens because the fetch type is not specified in the entities, and the default for @OneToMany relationships is LAZY. This means that when we try to read the collections annotated with @OneToMany (in our case, call getApartments in Building and getTenants in Apartment), an additional query is executed for each of them. Imagine having lots of data and executing similar logic: that would trigger many additional queries and cause huge latency. One solution is to switch the fetch type to EAGER, but that means these collections would always be loaded and we wouldn't be able to change that at runtime.

One of the solutions can be the JPA Entity Graph. Let’s see how it can make our life easier. We will do the following changes in our domain class Building:

package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;
import java.util.HashSet;
import java.util.Set;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
@NamedEntityGraph(
        name = "Building.List",
        attributeNodes = {@NamedAttributeNode(value = "apartments", subgraph = "Building.Apartment")},
        subgraphs = {
                @NamedSubgraph(name = "Building.Apartment",
                        attributeNodes = @NamedAttributeNode(value = "tenants"))
        }
)
public class Building {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String buildingName;

    @OneToMany(mappedBy = "building", cascade = CascadeType.ALL)
    private Set<Apartment> apartments;

    public void addApartment(Apartment apartment) {
        if (apartments == null) {
            apartments = new HashSet<>();
        }
        apartments.add(apartment);
        apartment.setBuilding(this);
    }

}

So what happened here? We have defined an entity graph named Building.List. With the attributeNodes we specify which collections should be loaded. Since we also want to get the tenants, we have defined a subgraph called Building.Apartment, and in the subgraphs we specify that all the tenants for every apartment should be loaded. In order for this entity graph to be used, we need to create a method in our BuildingRepository and tell it to use this entity graph:

package com.north47.entitygraphdemo.repository;

import com.north47.entitygraphdemo.repository.model.Building;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

import java.util.List;

public interface BuildingRepository extends JpaRepository<Building, Long> {

    @Override
    List<Building> findAll();

    @EntityGraph(value = "Building.List")
    @Query("select b from Building as b")
    List<Building> findAllWithEntityGraph();
}

And of course, we will provide a service method with the same logic, but this time calling findAllWithEntityGraph():

public void iterateWithEntityGraph() {
        log.debug("Iteration with entity started");
        log.debug("Get all buildings");
        final List<Building> buildings = buildingRepository.findAllWithEntityGraph();
        buildings.forEach(building -> {
            log.debug("Get all apartments for building with id: {}", building.getId());
            final Set<Apartment> apartments = building.getApartments();
            apartments.forEach(apartment -> {
                log.debug("Get all tenants for apartment with id: {}", apartment.getId());
                final Set<Tenant> tenants = apartment.getTenants();
                log.debug("Apartment with id : {} has {} tenants", apartment.getId(), tenants.size());
            });
        });
    }

What remains is to expose this method through the BuildingController so we can test our new functionality:

@GetMapping(value = "/iterate/entityGraph")
    public ResponseEntity<Void> iterateWithEntityGraph() {
        buildingService.iterateWithEntityGraph();
        return new ResponseEntity<>(HttpStatus.OK);
    }

Now if we put the following URL http://localhost:8080/building/iterate/entityGraph in our browser and observe our console we can see that only one query is executed:

Hibernate: select building0_.id as id1_1_0_, apartments1_.id as id1_0_1_, tenants2_.id as id1_2_2_, building0_.building_name as building2_1_0_, apartments1_.building_id as building3_0_1_, apartments1_.type as type2_0_1_, apartments1_.building_id as building3_0_0__, apartments1_.id as id1_0_0__, tenants2_.apartment_id as apartmen4_2_2_, tenants2_.last_name as last_nam2_2_2_, tenants2_.name as name3_2_2_, tenants2_.apartment_id as apartmen4_2_1__, tenants2_.id as id1_2_1__ from building building0_ left outer join apartment apartments1_ on building0_.id=apartments1_.building_id left outer join tenant tenants2_ on apartments1_.id=tenants2_.apartment_id

So we managed to reduce the number of queries from 4 to 1, and we still have the possibility to call the findAll() method in the BuildingRepository, which won't load the apartments or the tenants. In a real case scenario, you can define as many entity graphs as you want and specify exactly which collections should be loaded.
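
For example, a second, hypothetical graph that loads only the apartments could look like the snippet below (the name Building.Apartments is an illustration; it also assumes JPA 2.2, where @NamedEntityGraph is repeatable, otherwise both graphs go inside a @NamedEntityGraphs container):

// loads the apartments but leaves their tenants lazy
@NamedEntityGraph(
        name = "Building.Apartments",
        attributeNodes = @NamedAttributeNode("apartments")
)

A repository method annotated with @EntityGraph(value = "Building.Apartments") would then join only the apartment table.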

Hope you had fun, you can find the code in our repository.

Scaling Microservices with Spring Cloud Netflix

Reading Time: 10 minutes

If you need to build large distributed systems, then this is the place to be. We are going to talk about some of the components that the solution from Spring and Netflix provides and how easy it is to use them. If you follow this article, in the end, you will create a complete application with service discovery, client-side load balancing, feign clients and much more.

Before we start, let’s explain some of the terms that we are going to use in this article:

  • Eureka – a service discovery service, where every client will register itself
  • Ribbon – a client-side load balancer
  • Feign client – declarative web service client which provides communication between microservices

The picture above presents what we are going to build. We will create two user-oriented microservices, one called Supplier and the other called Order. The user will be able to place an order with some supplier, but the supplier, in order to fulfil the order, will call the Order microservice. For the communication between Supplier and Order we will use a Feign client in combination with service discovery enabled by Eureka. In the end, we are going to scale the Order microservice and see how the Ribbon load balancer behaves when we have more instances.

Let’s start by creating the Eureka service discovery microservice.

The easiest way is to go to the Spring Initializer and create your microservice with the following properties as you can see on the picture below:

The required dependencies for our service discovery service are only the Eureka Server.

Once you are done with this, click on generate and your project will be downloaded. Open it via your favourite IDE (I will be using IntelliJ) and there are just two more things that you need to do. In your main class you should add the following annotation @EnableEurekaServer:

package com.north;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class EurekaApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaApplication.class, args);
    }

}

One more thing that we need to change is our application.yml file. By default an application.properties file is created; if that is the case, rename it to application.yml and add the following code:

server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

With these, we are setting the server port and the service URL. And there we have our first service discovery server. Start the application, go to your browser and enter the following link: http://localhost:8761. We should now be able to see the Eureka homepage:

As you can see, there are no instances registered at the moment. So let’s create our first instance.

We will start by creating the Order microservice. Go to the Spring Initializer and create a project with the following properties:

And we will add the following dependencies:

Let’s start by setting the name and the port of the application. Change your application.properties to application.yml and add the following code:

spring:
  application:
    name: order

server:
  port: 8082

Now the name of the application is set to order and it will run on port 8082. If this port is taken on your machine, feel free to change it. We are not going to depend on this port, but as you will see, we will depend on the application name when we want to communicate with the service.

In order to enable this instance to be discovered by Eureka we need to add the following annotation to the main class:

package com.north.order;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@EnableDiscoveryClient
@SpringBootApplication
public class OrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }

}

Now if we start the application and go back to the Eureka homepage at http://localhost:8761, we should be able to see that this instance is registered to Eureka.

Since we confirmed that this instance is registered to Eureka we can now create an endpoint from where an order can be placed. First, let’s create an entity Order:

package com.north.order.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Order {

    private String productName;
    private int quantity;
}

It is a simple entity that contains the name of the product and how many pieces of it we want to order. The rest controller should contain the following logic:

package com.north.order.ctrl;

import com.north.order.domain.Order;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Slf4j
public class OrderController {

    @PostMapping(value = "/order")
    ResponseEntity<Void> placeOrder(@RequestBody Order order) {
        log.info("Placing an order for product: {} with quantity: {}", order.getProductName(), order.getQuantity());
        return ResponseEntity.ok().body(null);
    }
}

You can test this endpoint by using Postman or some similar tool but we want the Supplier microservice to call this endpoint.

Now that we are done with the Order microservice, let’s build the Supplier. Again we will open the Spring Initializer and create a project with the following properties:

And we will have the following dependencies:

Generate the project and import it into your IDE. First, let's change the application.properties file's extension to yml and add the following code:

spring:
  application:
    name: supplier
server:
  servlet:
    context-path: /supplier

With this, we have set the application name and a context path. Since we didn't change the port here, the default 8080 will be used. To register this instance with Eureka and to be able to use Feign clients, we need to add the following two annotations to our main class:

package com.north.supplier;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class SupplierApplication {

    public static void main(String[] args) {
        SpringApplication.run(SupplierApplication.class, args);
    }

}

The next thing is to create the same Order entity as in the previous microservice.

package com.north.supplier.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Order {

    private String productName;
    private int quantity;
}

For communication with the Order microservice we will create a feign client called OrderClient:

package com.north.supplier.service;

import com.north.supplier.domain.Order;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;

@FeignClient("order")
public interface OrderClient {

    @PostMapping("/order")
    void performOrder(@RequestBody Order order);
}

As a value in the @FeignClient annotation, we need to use the application name of the microservice that we want to communicate with, in our case Order. The method written here is the one that will call the previously exposed endpoint in the Order microservice. Let’s create a service that will use this feign client and execute an order:

package com.north.supplier.service;

import com.north.supplier.domain.Order;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

@Slf4j
@Service
@RequiredArgsConstructor
public class OrderService {

    private final OrderClient orderClient;

    public void placeOrder(Order order) {
        log.info("Requesting order ms to place an order");
        orderClient.performOrder(order);
    }
}

At the end we will expose one endpoint that we can use to test this scenario:

package com.north.supplier.ctrl;

import com.north.supplier.domain.Order;
import com.north.supplier.service.OrderService;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequiredArgsConstructor
@Slf4j
public class OrderController {

    private final OrderService orderService;

    @RequestMapping(value = "/order")
    public ResponseEntity<Void> placeOrder(@RequestBody Order order) {
        orderService.placeOrder(order);
        return ResponseEntity.ok().body(null);
    }
}

Now that we are done, let's start the application. If we check the Eureka homepage, we should be able to see this instance registered as well. You can also see this in the console where the Supplier is starting:

2020-09-20 20:02:43.907  INFO 7956 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_SUPPLIER/host.docker.internal:supplier: registering service...
2020-09-20 20:02:43.911  INFO 7956 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_SUPPLIER/host.docker.internal:supplier - registration status: 204

To test this complete scenario, make sure that all three applications are started and that the Order and Supplier are registered to Eureka. Using Postman, I will send a POST request to the endpoint on the Supplier microservice, and I should be able to see the order being placed in the Order microservice:
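
The request body is a plain JSON representation of the Order entity; the values below are the ones that show up in the logs later:

{
    "productName": "bread",
    "quantity": 300
}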

Just make sure that you have added a header with key Content-Type and value application/json in your Headers tab. If we execute this request, in the Supplier microservice console we should see the following log:

2020-09-20 20:20:36.674  INFO 19420 --- [nio-8080-exec-4] com.north.supplier.service.OrderService  : Requesting order ms to place an order

In the Order microservice console we should see:

2020-09-20 20:20:36.678  INFO 17800 --- [io-8082-exec-10] com.north.order.ctrl.OrderController     : Placing an order for product: bread with quantity: 300

At this point, we have created three microservices: two user-oriented ones and one for service discovery, using a Feign client for the communication between them. If we decide to grow this application, add some complex logic to our Order microservice and there are too many orders to be executed, we will reach a point where the Order microservice won't be able to handle all the orders. Let's see what happens if we scale it.

First, stop the Order microservice from your IDE. Just be sure that Eureka and Supplier are still running. Now go to the Order project directory (something like …\Documents\blog\order) and open three command prompt windows in that location. In the first one, type the following command:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8084"

in the second:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8085"

in the third:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8086"

It should be something like this:

Three instances of the application should now be up and running on the ports that we previously specified. If you open the Eureka home page again, you should be able to see all three instances registered. Now go back to Postman and execute the same POST call to Supplier as we did previously, repeating it several times as fast as possible. If you take a look at the command prompt windows that we opened, you should see that each time a different instance of the Order microservice is called. This is handled by Ribbon, which comes out of the box on the client side (in this case the Supplier microservice), without any additional code. As we mentioned before, we are not dependent on the port; we use the application name for Supplier to send requests to Order.

To summarize, our Supplier microservice became aware of all the instances and now sends each request to a different instance of Order, so the load is balanced.
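
By default, Ribbon rotates through the registered instances in a round-robin fashion. If you need a different strategy, it can be switched per client through configuration. The snippet below is a hedged sketch, assuming the standard Ribbon property names, placed in the Supplier's application.yml; order is the name of the target client:

order:
  ribbon:
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.RandomRule

With this, Ribbon would pick a random Order instance for each request instead of cycling through them.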

Hope that you enjoyed this article and had fun with it. The code is available on our N47 Bitbucket:

Multitenancy with Spring Boot

Reading Time: 7 minutes

Why should you consider implementing multitenancy in your project?

  • Cost: Multi-tenant architecture allows the sharing of resources, databases, and the application itself, thus the cost to run the system is fixed.
  • Maintenance: Users do not have to pay a considerable amount of fees to keep the software up to date. This reduces the overall cost of maintenance for each tenant.
  • Performance: Easier to assess and optimize speed, utilization, response time across the entire system, and even update the technology stack when needed.

In this blog we will implement multitenancy in our Spring Boot project.

Let’s create a simple Spring Boot project from start.spring.io, with only basic dependencies (Spring Web, Spring Data JPA, Spring Configuration Processor, MySQL Driver).

The good thing about implementing multitenancy is that we do not need additional dependencies.
We will split this example into two parts. In the first one, we will explain the idea/logic behind it, split the approach into 7 configuration steps and explain every step. In the second part, we will see how it's implemented in real life and we will test the solution.

1. Let’s start with creating the TenantStorage. We will use it for keeping the tenant value while the request is executing.

public class TenantStorage {

    private static ThreadLocal<String> currentTenant = new ThreadLocal<>();

    public static void setCurrentTenant(String tenantId) {
        currentTenant.set(tenantId);
    }

    public static String getCurrentTenant() {
        return currentTenant.get();
    }

    public static void clear() {
        currentTenant.remove();
    }
}

2. Next, we will create the TenantInterceptor. For every request, we will set the tenant value at the beginning and clear it at the end. As you can see, in the TenantInterceptor I decided for this demo to fetch the value of the tenant from a request header (X-Tenant), but this is up to you; maybe you want to fetch it from a cookie or some other header name. Just keep an eye on data security when using this in production.

@Component
public class TenantInterceptor implements WebRequestInterceptor {

    private static final String TENANT_HEADER = "X-Tenant";

    @Override
    public void preHandle(WebRequest request) {
        TenantStorage.setCurrentTenant(request.getHeader(TENANT_HEADER));
    }

    @Override
    public void postHandle(WebRequest webRequest, ModelMap modelMap) {
        TenantStorage.clear();
    }

    @Override
    public void afterCompletion(WebRequest webRequest, Exception e) {

    }
}

3. The next thing is to add the TenantInterceptor to the interceptor registry. For that purpose, I will create a WebConfiguration class that implements WebMvcConfigurer.

@Configuration
public class WebConfiguration implements WebMvcConfigurer {

    private final TenantInterceptor tenantInterceptor;

    public WebConfiguration(TenantInterceptor tenantInterceptor) {
        this.tenantInterceptor = tenantInterceptor;
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addWebRequestInterceptor(tenantInterceptor);
    }
}

4. Now, let’s update the application.yml file with some properties for the database connections.

tenants:
  datasources:
    n47schema1:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema1?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
    n47schema2:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema2?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
spring:
  jpa:
    database-platform: org.hibernate.dialect.MySQL5Dialect

5. Next, we will wrap the tenant values into a map (key = tenant name, value = data source) in DataSourceProperties.

@ConfigurationProperties(prefix = "tenants")
@Component
public class DataSourceProperties {

    private Map<Object, Object> dataSources = new LinkedHashMap<>();

    public Map<Object, Object> getDataSources() {
        return dataSources;
    }

    public void setDataSources(Map<String, Map<String, String>> datasources) {
        datasources.forEach((key, value) -> this.dataSources.put(key, convert(value)));
    }

    public DataSource convert(Map<String, String> source) {
        return DataSourceBuilder.create()
                .url(source.get("jdbcUrl"))
                .driverClassName(source.get("driverClassName"))
                .username(source.get("username"))
                .password(source.get("password"))
                .build();
    }
}

6. Afterwards, we should create the DataSource bean; for that purpose, I will create DataSourceConfig.

@Configuration
public class DataSourceConfig {

    private final DataSourceProperties dataSourceProperties;

    public DataSourceConfig(DataSourceProperties dataSourceProperties) {
        this.dataSourceProperties = dataSourceProperties;
    }

    @Bean
    public DataSource dataSource() {
        TenantRoutingDataSource customDataSource = new TenantRoutingDataSource();
        customDataSource.setTargetDataSources(dataSourceProperties.getDataSources());
        return customDataSource;
    }
}

7. At last, we will extend the AbstractRoutingDataSource and implement our lookup key.

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return TenantStorage.getCurrentTenant();
    }

}

And we are done with the first part.

Let’s see how it looks in the real world:

For this example, we will use two schemas from the same database instance; we will create a user and fetch all users. Also, I will show you how you can implement Flyway and test the solution.

First, let’s configure our databases. In my local instance of MySQL server, we will create two schemas: n47schema1 and n47schema2.

The next step is to execute this CREATE statement on both schemas:

CREATE TABLE `users` (
	`id` INT(11) NOT NULL AUTO_INCREMENT,
	`name` VARCHAR(64) NOT NULL DEFAULT '' COLLATE 'utf8_general_ci',
	PRIMARY KEY (`id`)
);

Then, we will create two APIs, one for creating a user, and the other one to fetch all users.

@RestController
@RequestMapping("/users")
public class UserController {

    private final UserRepository userRepository;

    public UserController(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @PostMapping
    public UserDomain addUser(@RequestBody UserRequestBody userRequestBody) {
        UserDomain userDomain = new UserDomain(userRequestBody.getName());
        return userRepository.save(userDomain);
    }

    @GetMapping
    public List<UserDomain> getAll() {
        return userRepository.findAll();
    }
}

We also need to create UserDomain, UserRepository and UserRequestBody.

@Entity
@Table(name = "users")
public class UserDomain {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    public UserDomain() {
    }

    public UserDomain(String name) {
        this.name = name;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

public interface UserRepository extends JpaRepository<UserDomain, Long> {
}

public class UserRequestBody {
    private String name;

    public UserRequestBody() {
    }

    public UserRequestBody(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

And we are done with coding.

We can run our application and start making a request.

First, let’s create some users with a POST request to http://localhost:8080/users. The most important thing not to forget is that we need to provide the X-Tenant header with the value n47schema1 or n47schema2.

We will create two users for tenant n47schema1: Antonie and John. Example:

After that, we will change the X-Tenant header value to n47schema2 and create two users: William and Joseph.

You will notice that the ids returned in the response are the same as for the first tenant. Now let’s fetch the users via the API.

When you make a GET request to http://localhost:8080/users with header X-Tenant having value n47schema1 you will fetch the users from the n47schema1 schema, and when you make a request with a header value n47schema2 you will fetch from the n47schema2 schema.

You can also check the data in the database to be sure that it is stored correctly.

You can always set a fallback for when the X-Tenant header is not provided or has a wrong value, as sketched below.
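
A minimal sketch of such a fallback in the TenantInterceptor, assuming we want every request without a valid header to land on n47schema1 (the default tenant name is an assumption for illustration):

@Override
public void preHandle(WebRequest request) {
    String tenant = request.getHeader(TENANT_HEADER);
    // fall back to a default schema when the header is missing or empty
    TenantStorage.setCurrentTenant(tenant == null || tenant.isEmpty() ? "n47schema1" : tenant);
}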

As the last thing, I will show you how you can implement Flyway with multitenancy. First, you need to add Flyway as a dependency and disable it in the application.yml:

spring:
  flyway:
    enabled: false

Add a @PostConstruct method in the DataSourceConfig configuration:

@PostConstruct
public void migrate() {
    for (Object dataSource : dataSourceProperties.getDataSources().values()) {
        DataSource source = (DataSource) dataSource;
        Flyway flyway = Flyway.configure().dataSource(source).load();
        flyway.migrate();
    }
}
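
With this setup, Flyway runs once per data source, so both schemas get migrated. The migration scripts go in the default location src/main/resources/db/migration; for example, the users table from above could be created by a file named V1__create_users.sql:

CREATE TABLE `users` (
	`id` INT(11) NOT NULL AUTO_INCREMENT,
	`name` VARCHAR(64) NOT NULL DEFAULT '' COLLATE 'utf8_general_ci',
	PRIMARY KEY (`id`)
);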

And we are done. I hope this blog helps you understand what multitenancy is and how it’s implemented in a Spring Boot project.

Download the source code

The project is freely available on our Bitbucket repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.

https://bitbucket.org/n47/spring-boot-multitenancy/src/master/

Spring Cloud OpenFeign

Reading Time: 3 minutes

Choosing the microservice architecture and Spring Boot means that you’ll need to pick the cleanest possible way for your services to communicate between themselves. Feign Client is one of the best solutions for this issue. It is a declarative Java web service client initially developed by Netflix. It’s an abstraction over REST-based calls allowing your microservices to communicate cleanly without the need to know the REST details happening underneath. The main idea behind Feign Client is to create an interface with method definitions representing your service call. Even if you need some customization on requests or responses, you can do it in a declarative way. In this article, we will learn about integrating Feign in a Spring Boot application with an example of REST-based HTTP calls, in which two microservices communicate with each other to transfer some data. But first, let’s get familiar with Feign.

What is Feign?

Feign is a declarative web service client that makes writing web service clients easier. To use Feign, we create a Java interface and annotate it with the various annotations provided by the Spring framework, such as @RequestMapping and @PathVariable. It has pluggable annotation support, including Feign annotations and JAX-RS annotations, as well as pluggable encoders and decoders. Spring Cloud adds support for Spring MVC annotations and for using the same HttpMessageConverters used by default in Spring Web. Spring Cloud also integrates Ribbon and Eureka to provide a load-balanced HTTP client when using Feign.

Example Management API simulator

In the following code sections, you can see a Feign client example. The @FeignClient annotation declares that a REST client should be created from the annotated interface.

Setup pom.xml

The following dependency will be added:

    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-openfeign</artifactId>
    </dependency>

Enable Feign Client

Now enable Feign by adding the @EnableFeignClients annotation to the main Spring Boot application class, which is also annotated with @SpringBootApplication.

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class FeignClientApplication {
  public static void main(String[] args) {
    SpringApplication.run(FeignClientApplication.class, args);
  }
}

Use a Circuit Breaker with a Feign Client

If you want to use the Spring Cloud OpenFeign support for Hystrix circuit breakers, you must set the feign.hystrix.enabled property to true. In our example, this property is configured in application.yml:

feign:
  hystrix:
    enabled: true

@FeignClient(name = "Validations", url = "${validations.host}")
public interface ValidationsClient {

    @GetMapping(value = "/validate-phone")
    InfoMessageResponse<PhoneNumber> validatePhoneNumber(@RequestParam("phoneNumber") String phoneNumber);

}
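
With Hystrix enabled, a fallback implementation can also be wired in by adding fallback = ValidationsClientFallback.class to the @FeignClient annotation above. The class below is a hedged sketch, not part of the original example, and it assumes InfoMessageResponse has a no-args constructor:

@Component
public class ValidationsClientFallback implements ValidationsClient {

    @Override
    public InfoMessageResponse<PhoneNumber> validatePhoneNumber(String phoneNumber) {
        // hypothetical default answer returned when the Validations service is down
        return new InfoMessageResponse<>();
    }
}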

In the application.yml file, we will store the URL of the microservice with which we need to communicate:

validations:
  host: "http://localhost:9080/validations"

We will need to add a config for Feign as follows:

package com.demo;

import feign.Contract;
import org.springframework.cloud.openfeign.support.SpringMvcContract;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FeignClientConfiguration {
    @Bean
    public Contract feignContract() {
        return new SpringMvcContract();
    }
}

Congrats! You have just managed to run your Feign client application, with which you can easily locate and consume REST services.

Summary

In this article, we launched an example of microservices that communicate with one another. It should be treated as an introduction to the subject of Feign clients and a discussion of integration with other important components of the microservice architecture.

JHipster with Google App Engine and Cloud MySQL

Reading Time: 5 minutes

How does it sound to set up a complete spring application, with front-end and database? With all the models, repositories and controllers? Even with Unit and Integration tests, with mocked data? All within a few hours? Your solution is JHipster!

JHipster

JHipster or “Java Hipster” is a handy application generator, a development platform, to develop and deploy web applications. JHipster has become popular in a short time, and it has been featured in many conferences all around the globe – Montreal, Omaha, Taipei, Richmond, Frankfurt, Paris, London. It supports:

  • Spring Boot (Back-end)
  • Angular/React/Vue (Front-end)
  • Spring microservices

JHipster generates complete applications: it will create for you a high-quality Spring Boot and Angular/React/Vue application with most things pre-configured, using Java as the back-end technology with an extensive set of Spring technologies (Spring Security, Spring Boot, Spring MVC providing a framework for web sockets, REST and MVC, Spring Data, etc.), an Angular/React/Vue front-end, and a suite of pre-configured development tools like Yeoman, Maven, Gradle, Grunt, Gulp.js and Bower.

JHipster gives you a head start in creating Spring Boot application with a set of pre-defined screens for user management, monitoring, and logging. The generated Spring Boot application is specifically tailored to make working with Angular/React/Vue a smoother experience. At the top of all that, JHipster also gives you the tools to update, manage and package the resulting application.

By now you may think it sounds too good to be true… and that is not even everything JHipster offers. If you are a web developer, you probably have a lot of questions by now. 🙂
One important question we will answer in this blog post: is it supported by today’s cloud solutions, and is it compatible with all of them? The answer is yes, it is compatible with the popular cloud solutions from Google, Amazon, Microsoft and Heroku. Let’s see what it takes to make a complete integration with Google’s cloud platform, the App Engine.

Google App Engine

Google App Engine is a cloud solution provided by Google, a platform for developing and hosting web applications in data centres managed by Google; Platform as a Service (PaaS). Applications are sandboxed and run across multiple servers. The App Engine supports Java or Python, uses the Google query language and stores data in Google BigTable.

It is free to use up to a certain amount of resource consumption. After exceeding the limited usage rates for storage, CPU resources, requests, number of API calls or concurrent requests, the user can pay for more of these resources.

It is fully compatible with JHipster-generated projects. All it takes to host your application is to follow the official how-to guide from the Google App Engine documentation, as for a normal Spring Boot application. To make things easier, Google offers a database that works closely with the App Engine: Cloud SQL.

Cloud SQL

Cloud SQL is a fully-managed database service offered by Google for their cloud solutions, which makes it easy to configure, manage, maintain and operate your relational databases on Google Cloud Platform.

It offers three database options to integrate with:

  • MySQL
  • PostgreSQL
  • SQL Server

Let’s get into details of integrating with Cloud SQL for MySQL:

  1. The first step is to create a Cloud SQL instance on the Google Cloud Platform, which requires a few things like the instance ID and password to be set, and gives you the option to choose the MySQL database version.
  2. The following step is to create the database in the newly created instance. It is possible to have more databases in one instance.
  3. Now, for our application to be able to communicate with Cloud SQL without any permission blockers, we need to register the application in Cloud SQL and manually configure the service account roles.
  4. The final step is connecting your application to the created Cloud SQL instance. It is done through JDBC, and all the required properties (instance connection name, credentials and so on) can be found in the overview of the Cloud SQL instance; a sketch follows below.
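
A hedged sketch of step 4 in application.properties; the database name, instance connection name and credentials are placeholders, and it assumes the Cloud SQL MySQL socket factory dependency (com.google.cloud.sql:mysql-socket-factory-connector-j-8) is on the classpath:

spring.datasource.url=jdbc:mysql://google/<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.mysql.SocketFactory
spring.datasource.username=<USERNAME>
spring.datasource.password=<PASSWORD>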

So the conclusion: don’t be afraid to invest some time in new technologies, be curious, you never know where they may lead you. Thank you for reading. 🙂

Reactive Spring with WebFlux and SQL Databases

Reading Time: 6 minutes

Since Spring Boot 2, Spring WebFlux was introduced so we can create reactive web applications. This was great and worked fine with NoSQL databases, but relational databases were an issue: JDBC database operations are blocking by nature, and this stops you from creating a totally non-blocking application. In order to have an asynchronous and non-blocking application, we need to cover every layer of the application. The hero that solved this was R2DBC (Reactive Relational Database Connectivity), which gives us the possibility to make non-blocking calls to relational databases.

The combination of WebFlux and R2DBC is enough to cover every layer in our application that we are going to build. As a relational database, we are going to use H2. So on to the coding!

Go to the spring initializr page from where we are going to build our application and select the following configuration:

  • Group: com.north47 (or your package name)
  • Artifact: spring-r2dbc
  • Dependencies: Spring Reactive Web, Spring Data R2DBC [Experimental], H2 Database, Lombok

(You won’t be able to see Lombok on this picture, but there it is! If Lombok is causing you issues, you might need to install a plugin: in IntelliJ go to File -> Settings -> Plugins, search for Lombok, install it and restart your IDE. If you can’t manage to do it, just go the old way: remove the @Data, @AllArgsConstructor and @NoArgsConstructor annotations from the Book.java class and create your own getters, setters and constructors.)

Now click on Generate, unzip the application and open it via your IDE.

Let’s first create a SQL script that will create our table. Go to src -> main -> resources, right-click on it and select New -> File. Name the file schema.sql and enter the following code:

CREATE TABLE BOOK (
ID INTEGER IDENTITY PRIMARY KEY ,
NAME VARCHAR(255) NOT NULL,
AUTHOR VARCHAR (255) NOT NULL
);

This will create a table named BOOK with the following columns: ID, NAME and AUTHOR.

We will create an additional script that inserts some data into our database. Repeat the previous procedure, this time naming the file data.sql, and add the following code:

INSERT INTO BOOK (ID,NAME,AUTHOR) VALUES (1,'Angels and Demons','Dan Brown');
INSERT INTO BOOK (ID,NAME, AUTHOR) VALUES (2,'The Matarese Circle', 'Robert Ludlum');
INSERT INTO BOOK (ID,NAME,AUTHOR) VALUES (3,'Name of the Rose', 'Umberto Eco');

This will put some data into our database.

In resources, delete the application.properties file and create a new application.yml file with the following content:

logging:
  level:
    org.springframework.data.r2dbc: DEBUG
spring:
  r2dbc:
    url: r2dbc:h2:mem:///test?options=DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    name: sa
    password:

Now that we have defined the R2DBC URL and enabled the DEBUG logging level for R2DBC, let’s create our Java classes.

Create a new package domain under the ‘com.north47.springr2dbc’ and create a new class Book. This will be our database model:

package com.north47.springr2dbc.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.springframework.data.annotation.Id;
import org.springframework.data.relational.core.mapping.Column;
import org.springframework.data.relational.core.mapping.Table;

@Table("book")
@Data
@AllArgsConstructor
@NoArgsConstructor
public class Book {

    @Id
    private Long id;

    @Column(value = "name")
    private String name;

    @Column(value = "author")
    private String author;

}

Now to create our repository, first create a new package named ‘repository’ under ‘com.north47.springr2dbc’. In there, create an interface named BookRepository. This interface will extend R2dbcRepository:

package com.north47.springr2dbc.repository;

import com.north47.springr2dbc.domain.Book;
import org.springframework.data.r2dbc.repository.R2dbcRepository;

public interface BookRepository extends R2dbcRepository<Book, Long> {
}

As you may notice, we are not extending JpaRepository as usual. The R2dbcRepository provides us with methods that work with reactive types like Flux and Mono.
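
If we need a custom query, we can still declare one. The snippet below is a small sketch, assuming a Spring Data R2DBC version that supports named parameters in @Query; the findByAuthor method is an illustration, not part of the original repository:

import org.springframework.data.r2dbc.repository.Query;
import reactor.core.publisher.Flux;

public interface BookRepository extends R2dbcRepository<Book, Long> {

    // emits every matching row as it arrives instead of blocking for a full list
    @Query("SELECT * FROM BOOK WHERE AUTHOR = :author")
    Flux<Book> findByAuthor(String author);
}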

After this, we will create endpoints through which we can access the previously inserted data, create new entries, modify or delete them.

Create a new package ‘resource’ under the ‘com.north47.springr2dbc’ package and in there we will create our BookResource:

package com.north47.springr2dbc.resource;

import com.north47.springr2dbc.domain.Book;
import com.north47.springr2dbc.repository.BookRepository;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
@RequestMapping(value = "/books")
public class BookResource {

    private final BookRepository bookRepository;

    public BookResource(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Book> getAllBooks() {
        return bookRepository.findAll();
    }

    @GetMapping(value = "/{id}")
    public Mono<Book> findById(@PathVariable Long id) {
        return bookRepository.findById(id);
    }

    @PostMapping(consumes = MediaType.APPLICATION_JSON_VALUE)
    public Mono<Book> save(@RequestBody Book book) {
        return bookRepository.save(book);
    }

    @DeleteMapping(value = "/{id}")
    public Mono<Void> delete(@PathVariable Long id) {
        return bookRepository.deleteById(id);
    }
}

And there we have endpoints from where we can access our data and modify it.

On to Postman so we can test our application. But of course, first, let’s start it. When you run the application, you can see in the console that your server has started:

Netty started on port(s): 8080

Also, since we enabled the DEBUG log level, you should be able to see all the SQL queries executed from the scripts that we wrote previously.

In Postman, set a GET method and the URL: localhost:8080/books. In the Headers tab add key: ‘Content-Type’, value: ‘application/json’.

Press that Send button and there it is, you will get the data:

data:{"id":1,"name":"Angels and Demons","author":"Dan Brown"}

data:{"id":2,"name":"The Matarese Circle","author":"Robert Ludlum"}

data:{"id":3,"name":"Name of the Rose","author":"Umberto Eco"}

You can also test the other endpoints, for example getting a book by id just by changing the URL to localhost:8080/books/1. The result will be:

{
    "id": 1,
    "name": "Angels and Demons",
    "author": "Dan Brown"
}

Now you can test the remaining endpoints: create a new book by sending a POST request to localhost:8080/books (an example body is shown below) or delete a book by sending a DELETE request to localhost:8080/books/{id}.
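
For the POST request, a body like the following should do (the title and author are just sample values; the id is generated by the database):

{
    "name": "The Bourne Identity",
    "author": "Robert Ludlum"
}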

Here you can find the whole code:

Spring-R2DBC

Hope you enjoyed it!

Securing your microservices with OAuth 2.0. Building Authorization and Resource server

Reading Time: 8 minutes

We live in a world of microservices. They give us an easy opportunity to scale our application. But as we scale our application, it becomes more and more vulnerable. We need to think of a way to protect our services and to keep the wrong people from accessing protected resources. One way to do that is by enabling user authorization and authentication. With authorization and authentication, we need a way to manage credentials, check the access of the requester and make sure people are doing what they are supposed to.

When we speak about Spring (Cloud) Security, we are talking about Service authorization powered by OAuth 2.0. This is how it exactly works:

The actors in this OAuth 2.0 scenario that we are going to discuss are:

  • Resource Owner – Entity that grants access to a resource, usually you!
  • Resource Server – Server hosting the protected resource
  • Client – App making protected resource requests on behalf of a resource owner
  • Authorization server – server issuing access tokens to clients

The client will ask the resource owner to authorize it. When the resource owner provides an authorization grant, the client sends a request to the authorization server. The authorization server replies by sending an access token to the client. Now that the client has an access token, it puts it in the header and asks the resource server for the protected resource. And finally, the client gets the protected data.

Now that everything is clear about how the general OAuth 2.0 flow is working, let’s get our hands dirty and start writing our resource and authorization server!

Building OAuth2.0 Authorization server

Let’s start by creating our authorization server using the Spring Initializr. Create a project with the following configuration:

  • Project: Maven Project
  • Artefact: auth-server
  • Dependencies: Spring Web, Cloud Security, Cloud OAuth2

Download the project, copy it into your workspace and open it via your IDE. Go to your main class and add the @EnableAuthorizationServer annotation.

@SpringBootApplication
@EnableAuthorizationServer
public class AuthServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(AuthServerApplication.class, args);
    }

}

Go to the application.properties file and make the following modifications:

  • Change the server port to 8083
  • Set the context path to be “/api/auth”
  • Set the client id to “north47”
  • Set the client secret to “north47secret”
  • Enable all authorized grant types
  • Set the client scope to read and write

server.port=8083

server.servlet.context-path=/api/auth

security.oauth2.client.client-id=north47
security.oauth2.client.client-secret=north47secret
security.oauth2.client.authorized-grant-types=authorization_code,password,refresh_token,client_credentials
security.oauth2.client.scope=read,write

The client id is a public identifier for applications. The way we used it here is not a good practice for a production environment: it is usually a 32-character hex string so that it is not easily guessable.

Let’s add some users into our application. We are going to use in-memory users and we will achieve that by creating a new class ServiceConfig. Create a package called “config” with the following path: com.north47.authserver.config and in there create the above-mentioned class:

@Configuration
public class ServiceConfig extends GlobalAuthenticationConfigurerAdapter {

    @Override
    public void init(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
                .withUser("filip")
                .password(passwordEncoder().encode("1234"))
                .roles("ADMIN");
    }

    @Bean
    public BCryptPasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}

With this, we are defining one user with username ‘filip’, password ‘1234’ and role ADMIN. We define the BCryptPasswordEncoder bean so we can encode our password.

In order to authenticate the users that will arrive from another service, we are going to add another class called UserResource into the newly created package resource (com.north47.authserver.resource):

@RestController
public class UserResource {

    @RequestMapping("/user")
    public Principal user(Principal user) {
        return user;
    }
}

When users from other services send a token for validation, the user will also be validated with this method.

And that’s it! Now we have our authorization server! The authorization server is providing some default endpoints which we are going to see when we will be testing the resource server.

Building Resource Server

Now let’s build our resource server where we are going to keep our secure data. We will do that with the help of the Spring Initializr. Create a project with the following configuration:

  • Project: Maven Project
  • Artefact: resource-server
  • Dependencies: Spring Web, Cloud Security, Cloud OAuth2

Download the project and copy it into your workspace. First, we are going to create our entity called Train. Create a new package called domain in com.north47.resourceserver and create the class there.

public class Train {

    private int trainId;
    private boolean express;
    private int numOfSeats;

    public Train(int trainId, boolean express, int numOfSeats) {
        this.trainId = trainId;
        this.express = express;
        this.numOfSeats = numOfSeats;
    }

   public int getTrainId() {
        return trainId;
    }

    public void setTrainId(int trainId) {
        this.trainId = trainId;
    }

    public boolean isExpress() {
        return express;
    }

    public void setExpress(boolean express) {
        this.express = express;
    }

    public int getNumOfSeats() {
        return numOfSeats;
    }

    public void setNumOfSeats(int numOfSeats) {
        this.numOfSeats = numOfSeats;
    }

}

Let’s create a resource that will expose an endpoint from which we can get the protected data. Create a new package called resource and in there create a class TrainResource. We will have only one method, exposing the endpoint behind which we can get the protected data.

@RestController
@RequestMapping("/train")
public class TrainResource {

    @GetMapping
    public List<Train> getTrainData() {

        return Arrays.asList(new Train(1, true, 100),
                new Train(2, false, 80),
                new Train(3, true, 90));
    }
}

Let’s start the application and send a GET request to http://localhost:8082/api/services/train. You will be asked to enter a username and password. The username is user, and the password can be found in the console where the application was started. Entering these credentials will return the protected data.

Let’s change the application now to be a resource server by going to the main class ResourceServerApplication and adding the annotation @EnableResourceServer.

@SpringBootApplication
@EnableResourceServer
public class ResourceServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ResourceServerApplication.class, args);
    }

}

Go to the application properties file and do the following changes:

server.port=8082
server.servlet.context-path=/api/services
security.oauth2.resource.user-info-uri=http://localhost:8083/api/auth/user 

What we have done here is:

  • Changed our server port to 8082
  • Set context path: /api/services
  • Set the user info URI where the user will be validated when passing a token

Now if you try to get the protected data by sending a GET request to http://localhost:8082/api/services/train, the server will respond that you are unauthorized and that full authentication is required. That means that without a token you won’t be able to access the resource.

That means we need a fresh new token in order to get the data. We will ask the authorization server to give us a token for the user that we previously created. Our client in this scenario will be Postman. The authorization server that we previously created exposes some endpoints out of the box. To ask the authorization server for a fresh token, send a POST request to the following URL: localhost:8083/api/auth/oauth/token.

As said previously, Postman in this scenario is the client that is accessing the resource, so it will need to send the client credentials to the authorization server: the client id and the client secret. Go to the Authorization tab and add the client id (north47) as the username and the client secret (north47secret) as the password. The picture below presents how to set up the request:

What is left is to provide the username and password of the user. Open the Body tab, select x-www-form-urlencoded and add the following values:

  • key: ‘grant_type’, value: ‘password’
  • key: ‘client_id’, value: ‘north47’
  • key: ‘username’, value: ‘filip’
  • key: ‘password’, value: ‘1234’
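
For reference, the same body as a raw x-www-form-urlencoded string looks like this:

grant_type=password&client_id=north47&username=filip&password=1234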

Press send and you will get a response with the access_token:

{
    "access_token": "ae27c519-b3da-4da8-bacd-2ffc98450b18",
    "token_type": "bearer",
    "refresh_token": "d97c9d2d-31e7-456d-baa2-c2526fc71a5a",
    "expires_in": 43199,
    "scope": "read write"
}

Now that we have the access token, we can call our protected resource by inserting the token into the header of the request. Open Postman again and send a GET request to localhost:8082/api/services/train. Open the Headers tab; this is the place where we will insert the access token. For the key add “Authorization” and for the value add “Bearer ae27c519-b3da-4da8-bacd-2ffc98450b18”.

And there it is! You have authorized yourself and got a new token, which allowed you to get the protected data.

You can find the projects in our repository:

And that’s it! Hope you enjoyed it!

Spring Cloud Function meets AWS Lambda

Reading Time: 5 minutes

Why Spring Cloud Function? 🤔

  • Serverless architecture
  • Ignore transport details and infrastructure, and focus on business logic
  • Keep using Spring Boot features
  • Run same code as REST API, a stream processor, or a task

AWS Lambda is one of the most popular serverless solutions. In this blog, we will create a simple Spring function and deploy it as an AWS Lambda function.

First, we will create the Spring function.

Let’s create a new project from https://start.spring.io/

In this example, we will create a simple function that receives a name and returns the sum of the ASCII values of its characters. For that purpose, we will create two DTO classes.

  • InputDTO

public class InputDTO {

    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
  • OutputDTO

public class OutputDTO {
    private int sum;

    public OutputDTO(int sum) {
        this.sum = sum;
    }

    public int getSum() {
        return sum;
    }

    public void setSum(int sum) {
        this.sum = sum;
    }
}

Let’s update our pom file and add all the dependencies we need for this demo. This is how the pom file should look:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.7.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.north</groupId>
    <artifactId>north-demo-spring-aws</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>north-demo-spring-aws</name>
    <description>Demo project for Spring Boot</description>

    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-function-adapter-aws</artifactId>
            <version>${spring-cloud-function.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-function-web</artifactId>
            <version>${spring-cloud-function.version}</version>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-events</artifactId>
            <version>${aws-lambda-events.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-core</artifactId>
            <version>${aws-lambda-java-core.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-deploy-plugin</artifactId>
                <configuration>
                    <skip>true</skip>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <dependencies>
                    <dependency>
                        <groupId>org.springframework.boot.experimental</groupId>
                        <artifactId>spring-boot-thin-layout</artifactId>
                        <version>1.0.10.RELEASE</version>
                    </dependency>
                </dependencies>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <configuration>
                    <createDependencyReducedPom>false</createDependencyReducedPom>
                    <shadedArtifactAttached>true</shadedArtifactAttached>
                    <shadedClassifierName>aws</shadedClassifierName>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <spring-cloud-function.version>2.1.1.RELEASE</spring-cloud-function.version>
        <aws-lambda-events.version>2.2.6</aws-lambda-events.version>
        <aws-lambda-java-core.version>1.2.0</aws-lambda-java-core.version>
    </properties>

</project>
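A quick note on the build setup: the spring-boot-thin-layout dependency keeps the jar produced by the Spring Boot plugin thin, while the maven-shade-plugin builds a second, self-contained artifact with the aws classifier. That shaded jar is the one we will upload to AWS Lambda later.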

Now let’s write the function. We implement the java.util.function.Function interface and override its apply method; all of the business logic we need goes into that method.

package com.north.northdemospringaws.function;

import java.util.function.Function;

import com.north.northdemospringaws.dto.InputDTO;
import com.north.northdemospringaws.dto.OutputDTO;

public class UseCaseHandler implements Function<InputDTO, OutputDTO> {

    @Override
    public OutputDTO apply(InputDTO inputDTO) {
        // Sum the ASCII values of all characters in the given name
        int sum = 0;
        for (int i = 0; i < inputDTO.getName().length(); i++) {
            sum += ((int) inputDTO.getName().charAt(i));
        }
        return new OutputDTO(sum);
    }
}

The UseCaseHandler class is created in the com.north.northdemospringaws.function package. Because of that, the application.yml file needs an update so that Spring Cloud Function can scan that package:

spring:
  cloud:
    function:
      scan:
        packages: com.north.northdemospringaws.function
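Function scanning is one option. As a side note, an equivalent approach (a sketch only, not what this demo uses; the application class name here is an assumption) is to register the function as a bean in the Spring context, in which case the scan property is not needed:

package com.north.northdemospringaws;

import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

import com.north.northdemospringaws.dto.InputDTO;
import com.north.northdemospringaws.dto.OutputDTO;
import com.north.northdemospringaws.function.UseCaseHandler;

@SpringBootApplication
public class NorthDemoSpringAwsApplication {

    public static void main(String[] args) {
        SpringApplication.run(NorthDemoSpringAwsApplication.class, args);
    }

    // The bean name ("useCaseHandler") is what the FUNCTION_NAME
    // variable and the web endpoint refer to
    @Bean
    public Function<InputDTO, OutputDTO> useCaseHandler() {
        return new UseCaseHandler();
    }
}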

Now we will test our function. I will try it with my name, Antonie Zafirov.
First, let’s create a simple unit test to check that the function works correctly.

package com.north.northdemospringaws.function;

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.junit.MockitoJUnitRunner;

import com.north.northdemospringaws.dto.InputDTO;
import com.north.northdemospringaws.dto.OutputDTO;

@RunWith(MockitoJUnitRunner.class)
public class UseCaseHandlerTest {

    @InjectMocks
    private UseCaseHandler useCaseHandler;

    @Test
    public void testUseCaseHandler() {
        // "Antonie Zafirov" -> the ASCII values of all 15 characters
        // (including the space) add up to 1487
        InputDTO inputDTO = new InputDTO();
        inputDTO.setName("Antonie Zafirov");
        OutputDTO outputDTO = useCaseHandler.apply(inputDTO);
        assertEquals(1487, outputDTO.getSum());
    }
}

We can check with Postman whether the function also behaves like a RESTful API (this is what the spring-cloud-starter-function-web dependency gives us).
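For a quick check outside Postman, the equivalent request from the command line would be something like this (assuming the default port 8080; spring-cloud-starter-function-web exposes the function under its bean name as the path):

curl -X POST http://localhost:8080/useCaseHandler \
     -H "Content-Type: application/json" \
     -d '{"name": "Antonie Zafirov"}'

# expected response, matching the unit test above:
# {"sum":1487}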

And it works!

The next step is exposing our function and uploading it as an AWS Lambda function.

The magic is done by extending the SpringBootRequestHandler class from the AWS adapter. This class acts as the entry point of the Lambda function and also defines its input and output types.

package com.north.northdemospringaws.function;

import com.north.northdemospringaws.dto.InputDTO;
import com.north.northdemospringaws.dto.OutputDTO;
import org.springframework.cloud.function.adapter.aws.SpringBootRequestHandler;

public class DemoLambdaFunctionHandler extends SpringBootRequestHandler<InputDTO, OutputDTO> {

}

You need an AWS account for this step, so if you do not have one, create it. After that, go to the AWS console and select Lambda from the services list.

From the submenu, select Functions and click Create function.
Give the function a name and select the Java 8 runtime.
In my case, the function name is “demo”.

Build the jar from the application with a simple Maven command: mvn clean package.

Upload the shaded jar with the aws classifier; in my case, that is north-demo-spring-aws-0.0.1-SNAPSHOT-aws.jar.

In the Handler field, we should write the fully qualified name of the DemoLambdaFunctionHandler class. In this example, that is “com.north.northdemospringaws.function.DemoLambdaFunctionHandler”.

We create an environment variable FUNCTION_NAME whose value is the name of our function, starting with a lowercase letter: useCaseHandler. Now let’s save it all, and we are done!!! The last step is to test it.
Create a test event named testEvent with the value:

{
  "name": "Antonie Zafirov"
}

Choose testEvent as the event and execute the function by clicking the Test button. The result is:
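With this input, the returned payload should match the unit test above:

{
  "sum": 1487
}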

And we are done, and it works!!!
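As a side note, you can run the same test from your machine with the AWS CLI (v1 syntax shown here; v2 additionally needs the --cli-binary-format raw-in-base64-out flag):

aws lambda invoke --function-name demo --payload '{"name": "Antonie Zafirov"}' response.json
cat response.json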

Download the source code

The project is freely available in our GitLab repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.

https://gitlab.com/47northlabs/public/spring-functions-aws

Deploy Spring Boot Application on Google Cloud with GitLab

Reading Time: 5 minutes

A lot of developers experience a painful process when their code is being deployed to an environment. We, as a company, suffered from the same thing, so we wanted to create something to make our lives easier.

After internal discussions, we decided to build a fully automated CI/CD process. After some investigation, we decided to implement GitLab CI/CD with deployment to Google Cloud.

Further in this blog, you can see how we achieved that and how you can achieve the same.

Let’s start with setting up. 🙂

  • We start by creating a simple Spring Boot project with a REST controller for testing purposes.
package com.northlabs.gitlabdemo.rest;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RootController {

    @GetMapping(value = "/")
    public String root() {
        return "Hello from Root";
    }

    @GetMapping(value = "/demo")
    public String demo() {
        return "Hello from Demo";
    }

}
  • The next step is to push the application to our GitLab repo.
  1. cd path_to_root_folder
  2. git init
  3. git remote add origin https://gitlab.com/47northlabs/47northlabs/product/playground/gitlab-demo.git
  4. git add .
  5. git commit -m "Initial commit"
  6. git push -u origin master

Now that we have our application in a GitLab repository, we can go on to set up Google Cloud. But before you start, make sure that you have a G Suite account with billing enabled.

  • The first step is to create a new project: in my case, it is northlabs-gitlab-demo.

Create project: northlabs-gitlab-demo
  • Now, let’s create our Kubernetes Cluster.

It will take some time for the Kubernetes cluster to initialize; GitLab will only be able to use it once it is ready.

We are done with Google Cloud, so it’s time to set up Kubernetes in our GitLab repository.

  • First, we add a Kubernetes cluster.
Add Kubernetes Cluster
Sign in with Google
  • Next, we give a name to the cluster and select a project from our Google Cloud account: in my case it’s gitlab-demo.
  • The base domain name should be set up.
  • Installing Helm Tiller is required, while installing the other applications is optional (I installed Ingress, Cert-Manager, Prometheus, and GitLab Runner).

Install Helm Tiller

Installed Ingress, Cert-Manager, Prometheus, and GitLab Runner

After installing the applications, it is IMPORTANT to update your DNS settings: copy the Ingress IP address and add it to your DNS configuration.
In my case, it looks like this:

Configure DNS

We are almost done. 🙂

  • The last thing that should be done is to enable and set up Auto DevOps.
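Note that Auto DevOps works without a .gitlab-ci.yml file in the repository: GitLab detects the project type on its own and runs its default build, test, and deploy pipeline.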

Now take your coffee and watch your pipelines. 🙂
After a couple of minutes your pipeline will finish and will look like this:

Now open the production job and, in its log under the notes section, check the URL of the application. In my case, that is:

The application should be accessible at: http://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com

Open the URL in a browser or in Postman.

https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com
https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com/demo
  • Let’s edit our code and push it to the GitLab repository.
package com.northlabs.gitlabdemo.rest;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RootController {

    @GetMapping(value = "/")
    public String root() {
        return "Hello from Root v1";
    }

    @GetMapping(value = "/demo")
    public String demo() {
        return "Hello from Demo v1";
    }
}

After the job is finished, if you check the same URLs, you will see that the values have changed.


https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com

https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com/demo

And we are done!!!

This is just a basic implementation of GitLab Auto DevOps. In some of the next blogs, we will show how to customize your pipeline and how to add, remove, or edit jobs.