Service Discovery in a Microservices Architecture: Client-side vs Server-side discovery

Reading Time: 11 minutes

Service discovery is one of the key building blocks of distributed systems. It allows us to automatically discover services on a network. In order to make a REST request, your service needs to know the service location, i.e. the IP address and port of a service instance. In most scenarios, multiple instances of the same service will be deployed. Traditional applications run on physical hardware, so their network locations are static. In modern cloud-based microservice solutions, on the other hand, network locations are dynamic, which makes them much harder to manage and a more difficult challenge.

There are two main service discovery patterns:

  • Client-side discovery (Eureka)
  • Server-side discovery (Kubernetes)

What is a Service Registry?

The service registry is a central place where services' locations are registered. You can think of it as a database of service locations. Instances of services are registered in the service registry on startup and deregistered accordingly. Each instance registers its host, port and some additional information, as shown in the image below.
I have already created an example application using the Netflix Eureka service registry. It comes with a set of predefined annotations. For this purpose, I have created two Spring applications. One of them will act as the discovery server. This application needs to include the following dependencies: Eureka Server, Spring Web and Actuator. I have added the @EnableEurekaServer annotation on the main class, as in the code below. With this annotation, the app will act as a microservice registry and discovery server.

@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}
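
# application.properties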
spring.application.name=eureka-server
server.port=8761
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false

The second Spring Boot application will act as a client which will be registered on the Eureka server (the first application). This application needs the following dependencies: Actuator, Eureka Discovery and Spring Web. We need to add the @EnableDiscoveryClient annotation on the Spring Boot application class and set up the properties shown below in the application.properties file.

@SpringBootApplication
@EnableDiscoveryClient
public class ClientDiscoveryApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClientDiscoveryApplication.class, args);
    }
}
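
# application.properties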
spring.application.name=eureka-client-service
server.port=8085
eureka.client.service-url.defaultZone=http://localhost:8761/eureka/

If we navigate to the following URL http://localhost:8761/, we see the Eureka dashboard shown in the image below. We can notice that two instances of the eureka-client-service application were registered, each started on a different port: one on port 8085 and the second on port 8087.
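To get the second instance, we can simply start the same client application again and override the port on the command line, for example (the jar name is illustrative):

java -jar eureka-client-service.jar --server.port=8087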

Client-side discovery pattern

In this pattern, the client is responsible for determining the network locations of available service instances and for load balancing requests across them. The network location of a service instance is registered in the service registry when the instance starts up and deregistered when it terminates.
The service registry used for client-side discovery in the previous example is Netflix Eureka Server. It provides management of service instances and lets clients query the available instances.

In the image below we have three services (Service A, Service B and Service C), each with its own IP address. For instance, the IP address of Service A is 10.10.1.2:15202. If we want to query the available instances, we need to access the Service Registry (Eureka). Eureka is responsible for storing the instances of all services that were previously registered.

  1. The locations of Service A, Service B and Service C are sent to the Service Registry, as shown in the image below.
  2. The service consumer asks the Service Discovery Server for the location of Service A / Service B.
  3. The Service Registry, which stores the instances' locations, looks up the location of Service A.
  4. The service consumer gets the location of Service A and can make a direct request.
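To make the lookup from steps 2-4 concrete, here is a minimal sketch of how a consumer could query Eureka for instances of the eureka-client-service application using Spring Cloud's DiscoveryClient (the service id comes from the earlier example; the endpoint called and the random instance selection are only illustrative):

@Component
public class EurekaLookupClient {

    private final DiscoveryClient discoveryClient;
    private final RestTemplate restTemplate = new RestTemplate();

    public EurekaLookupClient(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    public String callEurekaClientService() {
        // Ask the registry for all registered instances of the service
        List<ServiceInstance> instances = discoveryClient.getInstances("eureka-client-service");
        // Client-side load balancing: pick one of the instances (here simply at random)
        ServiceInstance instance = instances.get(ThreadLocalRandom.current().nextInt(instances.size()));
        // Make a direct request to the chosen instance
        return restTemplate.getForObject(instance.getUri() + "/actuator/health", String.class);
    }
}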

Benefits of using this pattern
  • flexible, application-specific load balancing
Drawbacks
  • a registration and discovery mechanism must be implemented for each framework
The most significant disadvantage of this pattern is that it couples the client with the service registry.

Alternatives to Eureka discovery

Other alternatives to Eureka are Consul and ZooKeeper. Consul makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface. It also provides richer health checking and has its own internal distributed key-value store that we can use as well.

Apache ZooKeeper, on the other hand, is a distributed key-value store that we can use as the basis for implementing service discovery. It is a centralized service for maintaining configuration information, naming and providing distributed synchronization. One of its benefits is the large community supporting it.

Server-side discovery pattern

The alternative approach to service discovery is the server-side discovery pattern. Clients simply make requests to a service instance via a load balancer, which acts as an orchestrator. The load balancer queries the service registry and routes each request to an available service instance, as shown in the image below.
Benefits of using this pattern
  • no need to implement discovery logic for each framework used by service clients
Drawbacks
  • the load balancer must be set up and managed, unless it is already provided by the deployment environment
Some clustering solutions, such as Kubernetes, run a proxy on each host in the cluster. In order to make a request to a service, a client connects to the local proxy using the port assigned to that service.

In the image below we have added a load balancer that acts as an orchestrator. The load balancer queries the Service Registry and routes each HTTP request to an available service instance.

Kubernetes

If we try to find analogies between Kubernetes and more traditional architectures, I would compare Kubernetes Pods with service instances and a Kubernetes Service with a logical set of Pods.

Kubernetes Pods are the smallest deployable units in Kubernetes. A Pod contains one or more containers, such as Docker containers. When a Pod runs multiple containers, they are managed as a single entity and share the Pod's resources. Each Pod has its own IP address, and when a Pod restarts it gets a new one, so there is no guarantee that a Pod's IP address stays the same over time.

Kubernetes may relocate or re-instantiate Pods at runtime, so it doesn't make sense to use Pod IP addresses for service discovery. This is one of the main reasons to include Kubernetes Services in our story.

Service in Kubernetes

A Service is a component just like a Pod, but it's not a process; it's an abstraction layer which basically represents an IP address. A Service knows how to forward requests to the Pods that have registered as its endpoints. Unlike Pods, whose IP addresses change, a Service has a stable IP address. Each Service exposes an IP address and also a DNS endpoint, neither of which changes even when a Pod dies.

The Services provided by Kubernetes allow us to connect to our Pods and provide a dynamic way of accessing them. For instance, a load balancer can use Services to automatically determine which servers it should spread the load across. Services also provide load balancing: if we have, for example, 3 instances of a microservice app, the Service receives each request targeted at the app and forwards it to one of those Pods.

In Kubernetes we have the concept of namespaces, which represent a logical grouping of resources. For our examples I will be using the hello-today-dev namespace, which consists of a couple of Pods.

kubectl -n hello-today-dev get pod -o wide

kubectl -n hello-today-dev get svc 

With this command, we can see all available Services in this namespace.

How does Service Discovery work in Kubernetes?

Kubernetes has a powerful concept of labelling. Labels are just key-value pairs that provide metadata on our objects. Any Pod whose labels match the selector defined in the Service manifest will automatically be discovered by the Service.

Here we have a single Service that is front-ending two of our Pods, which are instances of our app. These two Pods have the label app=my-app and the Service has defined a label selector looking for that same label. This means that the Service will send traffic to them, even though the Pods might change their addresses. You might also notice that there is a Pod 3 with a different label; the Service won't front-end that Pod.

Example of service selector

The other example is when the service selector defines two labels (app=my-app and microservice). The Service then looks for both labels; a Pod must match all the selectors, not just one, to be registered as one of its endpoints. This is how the Service knows which Pods belong to it, i.e. where to forward the request.
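As an illustration, a Service manifest with such a selector could look roughly like this (the port numbers and the value of the second label are made up for the example):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: hello-today-dev
spec:
  selector:
    app: my-app
    microservice: "true"
  ports:
    - port: 80
      targetPort: 8080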

kubectl -n hello-today-dev get pods --show-labels

A Service identifies its member Pods using a selector attribute. We specify a selector attribute with key-value pairs defined as a list, in our case app=my-app. This creates a binding between the Service and the Pods carrying that label, and it gives us a flexible mechanism for service discovery with automatic load balancing. Kubernetes uses an Endpoints object to keep track of which Pods are members of the Service. Any Pod whose label (app=my-app) matches the selector defined by the Service will be exposed as one of its endpoints. The Service provides load balancing by routing requests across the matching Pods.

kubectl -n hello-today-dev describe svc/ht-employee

This command shows the status of the service, in our case ht-employee. The service uses the selector app=ht-employee to select the Pod 10.0.21.217 as a backend Pod. A virtual IP address is created in the cluster for the Kubernetes Service so that traffic is evenly distributed to the backend Pods.

We can see the defined endpoints of the ht-employee microservice (which means that our service is working).

Blank endpoints are the result of not selecting any Pods, which means that your service traffic will go nowhere. The Endpoints field indicates the Pods selected by the selector field.

Summary

Depending on the deployment environment you need, you can set up your own service discovery using a service registry like Eureka. In other deployment environments, as in our case Kubernetes, service discovery is built in. Kubernetes, as explained previously, is responsible for handling service instance registration and deregistration.

SFTP file transfers with Java

Reading Time: 7 minutes

Migrating data or moving files around between different servers is a common case in software development.
For Java and Spring Boot based solutions, JSch implementations of SFTP are often used. In the following blog post we will learn the know-how of setting up and using the most commonly used Java libraries for file uploads to and file downloads from remote servers.

SFTP is a secure protocol for file transfers and it is fully compliant with the SSH authentication functionalities. The protocol authenticates both the server and the user and offers cryptographic hash functions. Typical applications of this protocol are file transfers which must guarantee data integrity, such as migrations of documents or logs, copying data across various servers inside or outside an organization, making automated system backups, etc. Wherever file security is integral, this protocol should be used over FTP.

JCraft JSch is a pure Java implementation of the SFTP protocol. It is easy to set up, easy to use and covers most of the basic use cases: connecting to remote servers, authentication, port forwarding, secure file transfers and data compression. Commons VFS is another library, built on top of JCraft JSch, which offers better support and even more customization. For power users and production enterprise applications, the latter is preferred. In the following exercise I will demonstrate how we can set up a test SFTP server and write a Java implementation to upload and download files via SFTP.

SFTP vs. FTPS

SFTP runs over the SSH protocol, while FTPS is the old FTP protocol run over SSL/TLS. In the end, FTPS needs much more configuration effort: firewall rules have to be added, many ports explicitly opened, and it might not work in a NAT context. SFTP, on the other hand, does not require additional configuration on the server; the SSH standard port 22 is the port over which the communication is achieved. With SFTP, certificates are not centralized, meaning no additional support and maintenance is expected for them; they can be utilized by any method which uses SSH and by any certificate authority. SFTP can also be used as a filesystem, and various configurations can be added to it, extending the use cases. Before we dig into the details of the Java setup, let's create a Docker Compose file.

Docker

atmoz/sftp is an easy to use SFTP server Docker image. It can be used as a mock SFTP server for integration tests or other research and development scenarios. The user test can log in with SFTP and access files in the Documents folder. If there is a need to test logging in with SSH keys, use the volumes section to mount a public key from the host into the user's .ssh/keys/ directory inside the container.

version: "2"
services:
  sftp:
    image: atmoz/sftp
    ports:
      - "2222:22"
    command: test::1001::Documents
    volumes:
      - "publicKey.pub:/home/test/.ssh/keys/publicKey.pub"

Now that the container is running, let's add some Java configuration.

Jsch

  1. Adding maven dependency
  2. Setting up JSch
  3. Upload to SFTP server/Download from SFTP server

So basically there are three important steps. Let's go through them in detail for JSch.

Step 1. Add maven dependency

<!-- https://mvnrepository.com/artifact/com.jcraft/jsch -->
<dependency>
    <groupId>com.jcraft</groupId>
    <artifactId>jsch</artifactId>
    <version>0.1.55</version>
</dependency>

This comes as a regular entry in the pom.xml file list of dependencies.

We should always try to use the latest stable versions. Version 0.1.55 is the latest and it covers the basic use cases. Still, if the use case is something more than a proof of concept, let's say a production web service, there are better alternatives. Different forks of this implementation have much better support and keep up with newer OpenSSH algorithms. More about them in the later part of this blog post.

Step 2. Setup JSch

This Java library allows public key or password authentication to a remote server. For the following example, let's use password authentication.

private ChannelSftp setupJsch() throws JSchException {
    JSch jsch = new JSch();
    Session jschSession = jsch.getSession(username, remoteHost);
    jschSession.setPassword(password);
    jschSession.connect();
    return (ChannelSftp) jschSession.openChannel("sftp");
}

Step 3. Upload a file to SFTP server/Download file from SFTP server

For uploading a local file to a remote server, the put() method is available out of the box.

public void uploadFileUsingResources() throws JSchException, SftpException {
    ChannelSftp channelSftp = setupJsch();
    channelSftp.connect();
    
    String localFile = "src/main/resources/sample.txt";
    String remoteDir = "Documents/";
 
    channelSftp.put(localFile, remoteDir + "remoteFile.txt");
    channelSftp.exit();
}

Or, if the source is an InputStream rather than a resource file, or we need to bypass setting a path to a local file, here is the corresponding snippet. The JSch put() method can also take streams as arguments.

public void uploadFileInputStream() throws JSchException, SftpException {
    ChannelSftp channelSftp = setupJsch();
    channelSftp.connect();
    
    InputStream inputStream = new FileInputStream("c:\\samples\\sample.txt");
    String remoteDir = "Documents/";
 
    channelSftp.put(inputStream, remoteDir + "remoteFile.txt");
    channelSftp.exit();
}

Apart from supporting file uploads to a remote server, JSch also offers a method for downloading files from a remote SFTP server. Downloading is supported by the get() method, which is also very easy to use.

public void downloadFileUsingResources() throws JSchException, SftpException {
    ChannelSftp channelSftp = setupJsch();
    channelSftp.connect();
    
    String remoteFile = "sample.txt";
    String localDir = "src/main/resources/";
 
    channelSftp.get(remoteFile, localDir + "localFile.txt");
    channelSftp.exit();
}

For enterprise applications, in the context of transferring files between servers while making sure that there is no vulnerability to man-in-the-middle attacks or password sniffing, the latest encryption functions and algorithms need to be used. JSch is not enough in this case, as the library is not actively maintained. At the beginning of this blog post I mentioned a fork of this library which is getting more and more active users and is built on top of JSch. The future of JSch without ssh-rsa is mwiede/jsch. For power users who need even more customization, the alternatives are Commons VFS and SSHJ.
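If you want to try the mwiede/jsch fork, it is published as a drop-in replacement under its own group id; the snippet below is a sketch, so check Maven Central for the current version:

<!-- https://mvnrepository.com/artifact/com.github.mwiede/jsch -->
<dependency>
    <groupId>com.github.mwiede</groupId>
    <artifactId>jsch</artifactId>
    <!-- use the latest version available on Maven Central -->
    <version>0.2.9</version>
</dependency>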

Apache Commons VFS

Apache Commons VFS is a Virtual File System library. It allows uploading local files to remote servers and has methods for checking whether a file exists in a remote location. The library also supports moving and deleting files on the remote host, all of this in a secure manner using SFTP. Commons VFS is basically JSch, but updated, with a cleaner and better API and a maintained repository.
As was the case with JSch, the usage manual describes three steps…

Step 1. Add maven dependency

<!-- https://mvnrepository.com/artifact/org.apache.commons/commons-vfs2 -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-vfs2</artifactId>
    <version>2.9.0</version>
</dependency>

This comes as a regular entry in the pom.xml file list of dependencies.

Step 2. Setup VFS2

This time, let's use a slightly more production-like setup. In this configuration class, let's define the file system options and use public key authentication. The privateKey property, or rather its value, should of course be encrypted with Jasypt for further security, so that we don't have plain keys exposed in the repository.

@Configuration
@RequiredArgsConstructor
public class SFTPConfiguration {

    @Value("${sftp.private-key}")
    private String privateKey;

    @Bean
    public FileSystemOptions remoteFileSystemOptions() throws FileSystemException {
        FileSystemOptions fileSystemOptions = new FileSystemOptions();
        SftpFileSystemConfigBuilder fileSystemConfigBuilder = SftpFileSystemConfigBuilder.getInstance();
        fileSystemConfigBuilder.setUserDirIsRoot(fileSystemOptions, true);
        fileSystemConfigBuilder.setPreferredAuthentications(fileSystemOptions, "publickey");
        fileSystemConfigBuilder.setIdentityProvider(fileSystemOptions, jSch -> jSch.addIdentity(privateKey, privateKey.getBytes(StandardCharsets.UTF_8), null, null));
        return fileSystemOptions;
    }

}

Step 3. Upload a file to SFTP server/Download file from SFTP server

The FileSystemManager interface is used to create file objects, which are then used as arguments in the copyFrom() method. A FileSystemManager locates a FileObject by name in one of the available file systems. A FileObject represents a file and is used to access its content and structure. The copyFrom() method copies another file, and all its descendants, to the file it is called on. On another note, sftpUrl is a construct of the form sftp://theUsername@theRemoteHost:22/andThePathToTheFile

public void uploadFileToSFTP() throws FileSystemException {
    FileSystemManager manager = VFS.getManager();
    // username, remoteHost, remoteHostPort, fileName and fileSystemOptions are fields of the class
    FileObject localFile = manager.resolveFile(
            System.getProperty("user.dir") + "/" + fileName);
    String sftpUrl = "sftp://" + username + "@" + remoteHost + ":" + remoteHostPort + "/Documents/" + fileName;
    FileObject remoteFile = manager.resolveFile(sftpUrl, fileSystemOptions);

    remoteFile.copyFrom(localFile, Selectors.SELECT_SELF);

    localFile.close();
    remoteFile.close();
}

Or for the other operation of downloading a file from a sftp server:

public void downloadFileFromSFTP() throws FileSystemException {
    FileSystemManager manager = VFS.getManager();
    FileObject localFile = manager.resolveFile(System.getProperty("user.dir") + "/" + localDir + "vfsFile.txt");
    String sftpUrl = "sftp://" + username + "@" + remoteHost + ":" + remoteHostPort + "/Documents/" + fileName;
    FileObject remoteFile = manager.resolveFile(sftpUrl, fileSystemOptions);

    localFile.copyFrom(remoteFile, Selectors.SELECT_SELF);

    localFile.close();
    remoteFile.close();
}

In this blog post, we covered two things: how to set up a mock SFTP server with Docker and how to use SFTP Java libraries to securely transfer files from and to remote servers. Write me in the comments section if you find this useful or if you have further questions.

Microservice architecture: Using Java thread locals and Tomcat/Spring capabilities for automated information propagation

Reading Time: 13 minutes

Inter-microservice communication has always brought questions and challenges to software architects. For example, when it comes to propagating certain information (via HTTP headers, for instance) through a whole chain of calls in the scope of one transaction, we want this to happen outside of the microservices' business logic. We do not want to tackle and work with these headers in the presentation or service layers of the application, especially if they are not important to the microservice for completing some business logic task. I would like to show you how you can automate this process by using Java thread locals and Tomcat/Spring capabilities, illustrated with a simple microservice architecture.

Architecture overview

Architecture overview

This is the sample architecture we will be looking at. We have a Zuul Proxy Server that will act as a gateway towards our two microservices: the Licensing Microservice and the Organization Service. Those three will be the main focus of this article. Let’s say that a single License belongs to a single Organization and a single Organization can deal with multiple Licenses. Additionally, our services are registered to a Eureka Service Discovery and they pull their application config from a separate Configuration Server.

Simple enough, right? Now, what is the goal we want to achieve?

Let's say that we have some sort of HTTP headers related to authentication or tracking of the chain of calls the application goes through. These headers arrive at the proxy along with each request from the client-side and they need to be propagated towards each microservice participating in the action of completing the user's request. For simplicity's sake, let's introduce two made up HTTP headers that we need to send: correlation-id and authentication-token. You may say: "Well, the Zuul proxy gateway will automatically propagate those to the corresponding microservices, if not stated otherwise in its config". And you are correct, because this gateway has an out-of-the-box feature for achieving that. But what happens when we have inter-microservice communication, for example, between the Licensing Microservice and the Organization Microservice? The Licensing Microservice needs to make a call to the Organization Microservice in order to complete some task, and the Organization Microservice needs to have the headers sent to it somehow. The "go-to, technical debt" solution would be to read these headers in the controller/presentation layer of our application, then pass them down to the business logic in the service layer, which in turn is gonna pass them to our configured HTTP client, which in the end is gonna send them to the Organization Microservice. Ugly, right? What if we have dozens of microservices and need to do this in each and every single one of them? Luckily, there is a much prettier solution that includes using a neat Java feature: ThreadLocal.

Java thread locals

The Java ThreadLocal class provides us with thread-local variables. What does this mean? Simply put, it enables setting a context (tying all kinds of objects) to a certain thread, that can later be accessed no matter where we are in the application, as long as we access them within the thread that set them up initially. Let’s look at an example:

public class Main {

    public static final ThreadLocal<String> threadLocalContext = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        threadLocalContext.set("Hello from parent thread!");
        Thread childThread = new Thread(() -> {
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints null
        });
        childThread.start();
        childThread.join(); // Waiting for the child thread to finish
        System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
    }
}

We have a single static final ThreadLocal<String> reference that we use for setting some information to the thread (in this case, the string “Hello from parent thread!” to the main thread). Accessing this variable via threadLocalContext.get() (no matter in which class we are, as long as we are on the same main thread) produces the expected string we have set previously. Accessing it in a child thread produces a null result. What if we set some context to the child thread as well:

threadLocalContext.set("Hello from parent thread!");
Thread childThread = new Thread(() -> {
    threadLocalContext.set("Hello from child thread!");
    System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from child thread!"
});
childThread.start();
childThread.join(); // Waiting for the child thread to finish
System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"

We can notice that the two threads have completely separate contexts. Even though they access the same threadLocalContext reference, in the background, the context is relative to the calling thread. What if we wanted the child thread to inherit its parent context:

public class Main {

    private static final ThreadLocal<String> threadLocalContext = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        threadLocalContext.set("Hello from parent thread!");
        Thread childThread = new Thread(() -> {
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
            threadLocalContext.set("Hello from child thread!");
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from child thread!"
        });
        childThread.start();
        childThread.join(); // Waiting for the child thread to finish
        System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
    }
}

We only changed the ThreadLocal to an InheritableThreadLocal in order to achieve that. We can notice that the first print inside the child thread does not render null anymore. The moment we set another context to the child thread, the two contexts become disconnected and the parent keeps its old one. Note that by using the InheritableThreadLocal, the reference to the parent context gets copied to the child, meaning: this is not a deep copy, but two references pointing to the same object (in this case, the string “Hello from parent thread!”). If, for example, we used InheritableThreadLocal<SomeCustomObject> and tackled directly some of the properties of the object inside the child thread (threadLocalContext.get().setSomeProperty("some value")), then this would also be reflected in the parent thread and vice versa. If we want to disconnect the two contexts completely, we just call .set(new SomeCustomObject()) on one of the threads, which will turn its local reference to point to the newly created object.

Now, you may be wondering: "What does this have to do with automatically propagating headers to a microservice?". Well, when using Servlet containers such as Tomcat (which Spring Boot embeds by default), each new HTTP request is handled (whether we like/know it or not :-)) in a separate thread. The servlet container picks an idle thread from its dedicated thread pool each time a new call is made. This thread is then used by Spring Boot throughout the processing of the request and the return of the response. Now, it is only a matter of setting up Spring filters and HTTP client interceptors that will set and get the thread-local context containing the HTTP headers.

Solution

First off, let’s create a simple POJO class that is going to contain both of the headers that need propagating:

@Getter
@Setter
@ToString
@AllArgsConstructor
@NoArgsConstructor
public class RequestHeadersContext {

    public static final String CORRELATION_ID = "correlation-id";
    public static final String AUTHENTICATION_TOKEN = "authentication-token";

    private String correlationId;
    private String authenticationToken;
}

Next, we will create a utility class for setting and retrieving the thread-local context:

public final class RequestHeadersContextHolder {

    private static final ThreadLocal<RequestHeadersContext> requestHeaderContext = new ThreadLocal<>();

    public static void clearContext() {
        requestHeaderContext.remove();
    }

    public static RequestHeadersContext getContext() {
        RequestHeadersContext context = requestHeaderContext.get();
        if (context == null) {
            context = createEmptyContext();
            requestHeaderContext.set(context);
        }
        return context;
    }

    public static void setContext(RequestHeadersContext context) {
        Assert.notNull(context, "Only not-null RequestHeadersContext instances are permitted");
        requestHeaderContext.set(context);
    }

    public static RequestHeadersContext createEmptyContext() {
        return new RequestHeadersContext();
    }
}

The idea is to have a Spring filter, that is going to read the HTTP headers from the incoming request and place them in the RequestHeadersContextHolder:

@Configuration
public class RequestHeadersServiceConfiguration {

    @Bean
    public Filter getFilter() {
        return new RequestHeadersContextFilter();
    }

    private static class RequestHeadersContextFilter implements Filter {

        @Override
        public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
            HttpServletRequest httpServletRequest = (HttpServletRequest) servletRequest;
            RequestHeadersContext context = new RequestHeadersContext(
                    httpServletRequest.getHeader(RequestHeadersContext.CORRELATION_ID),
                    httpServletRequest.getHeader(RequestHeadersContext.AUTHENTICATION_TOKEN)
            );
            RequestHeadersContextHolder.setContext(context);
            filterChain.doFilter(servletRequest, servletResponse);
        }
    }
}

We created a RequestHeadersServiceConfiguration class which, at the moment, has a single Spring filter bean defined. This filter is going to read the needed headers from the incoming request and set them in the RequestHeadersContextHolder (we will need to propagate those later when we make an outgoing request to another microservice). Afterwards, it will resume the processing of the request and will give control to the other filters that might be present in the filter chain. Keep in mind that, all the while, this code executes within the boundaries of the dedicated Tomcat thread, which the container had assigned to us.

Next, we need to define an HTTP client interceptor which we are going to link to a RestTemplate client, which in turn is going to execute the interceptor’s code each time before it makes a request to an outer microservice. We can add this new RestTemplate bean inside the same configuration file:

@Configuration
public class RequestHeadersServiceConfiguration {
    
    // .....

    @LoadBalanced
    @Bean
    public RestTemplate getRestTemplate() {
        RestTemplate restTemplate = new RestTemplate();
        List<ClientHttpRequestInterceptor> interceptors = restTemplate.getInterceptors();
        interceptors.add(new RequestHeadersContextInterceptor());
        return restTemplate;
    }

    private static class RequestHeadersContextInterceptor implements ClientHttpRequestInterceptor {

        @Override
        @NonNull
        public ClientHttpResponse intercept(@NonNull HttpRequest httpRequest,
                                            @NonNull byte[] body,
                                            @NonNull ClientHttpRequestExecution clientHttpRequestExecution) throws IOException {
            RequestHeadersContext context = RequestHeadersContextHolder.getContext();
            HttpHeaders headers = httpRequest.getHeaders();
            headers.add(RequestHeadersContext.CORRELATION_ID, context.getCorrelationId());
            headers.add(RequestHeadersContext.AUTHENTICATION_TOKEN, context.getAuthenticationToken());
            return clientHttpRequestExecution.execute(httpRequest, body);
        }
    }
}

As you might have guessed, the interceptor reads the header values from the thread-local context and sets them up for the outgoing request. The RestTemplate just adds this interceptor to the list of its already existing ones.

A good thing to have will be to eventually clear/remove the thread-local variables from the thread. When we have an embedded Tomcat container, missing out on this point will not pose a problem, since along with the Spring application, the Tomcat container dies as well. This means that all of the threads will be destroyed altogether and the thread-local memory released. However, if we happen to have a separate servlet container and we deploy our app as a .war instead of a standalone .jar, not clearing the context might introduce memory leaks. Imagine having multiple applications on our standalone servlet container and each of them messing around with thread locals. The container shares its threads with all of the applications. When one of the applications is turned off, the container is going to continue to run and the threads which it lent to the application will not cease to exist. Hence, the thread-local variables will not be garbage collected, since there are still references to them. That is why we are going to define and add an interceptor to the Spring interceptor registry, which will clear the context after a request finishes and the thread can be assigned to other tasks:

@Configuration
public class WebMvcInterceptorsConfiguration implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new RequestHeadersContextClearInterceptor()).addPathPatterns("/**");
    }

    private static class RequestHeadersContextClearInterceptor implements HandlerInterceptor {

        @Override
        public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception exception) {
            RequestHeadersContextHolder.clearContext();
        }
    }
}

All we need to do now is wire these configurations into our microservices. We can create a separate library extracting the config (and maybe upload it to an online repository, such as Maven Central, or our own Nexus) so that we do not need to copy-paste all of the code into each of our microservices. Whatever the case, it is good to make this library easy to use. That is why we are going to create a custom annotation for enabling it:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Import({RequestHeadersServiceConfiguration.class, WebMvcInterceptorsConfiguration.class})
public @interface EnableRequestHeadersService {
}

Usage

Let’s see how we can leverage and use this library from inside a microservice. Only a couple of things are needed.

First, we need to annotate our application with the @EnableRequestHeadersService:

@SpringBootApplication
@EnableRequestHeadersService
public class LicensingServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(LicensingServiceApplication.class, args);
    }
}

Second, we need to inject the already defined RestTemplate in our microservice and use it as given:

@Component
public class OrganizationRestTemplateClient {

    private final RestTemplate restTemplate;

    public OrganizationRestTemplateClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public Organization getOrganization(String organizationId) {
        ResponseEntity<Organization> restExchange = restTemplate.exchange(
                "http://organizationservice/v1/organizations/{organizationId}",
                HttpMethod.GET,
                null,
                Organization.class,
                organizationId
        );
        return restExchange.getBody();
    }
}

We can notice that the getOrganization(String organizationId) method does not handle any HTTP headers whatsoever. It just passes the URL and the HTTP method and lets the imported configuration do its magic. As simple as that! We can now call the getOrganization method wherever we like, without having any sort of knowledge about the headers that are being sent in the background. If we have the need to read them somewhere in our code, or even change them, then we can use the RequestHeadersContextHolder.getContext()/setContext() static methods wherever we like in our microservice, without the need to parse them from the request object.

Feign HTTP Client

If we want to leverage a more declarative type of coding we can always use the Feign HTTP Client. There are ways to configure interceptors here as well, so, using the RestTemplate is not strictly required. We can add the new interceptor configuration to the already existing RequestHeadersServiceConfiguration class:

@Configuration
public class RequestHeadersServiceConfiguration {

    // .....

    @Bean
    public RequestInterceptor getFeignRequestInterceptor() {
        return new RequestHeadersContextFeignInterceptor();
    }

    private static class RequestHeadersContextFeignInterceptor implements RequestInterceptor {

        @Override
        public void apply(RequestTemplate requestTemplate) {
            RequestHeadersContext context = RequestHeadersContextHolder.getContext();
            requestTemplate.header(RequestHeadersContext.CORRELATION_ID, context.getCorrelationId());
            requestTemplate.header(RequestHeadersContext.AUTHENTICATION_TOKEN, context.getAuthenticationToken());
        }
    }
}

The new bean we created is going to automatically be wired as a new Feign interceptor for our client.

Next, in our microservice, we can annotate our application class with @EnableFeignClients and then create our Feign client:

@FeignClient("organizationservice")
public interface OrganizationFeignClient {

    @GetMapping(value = "/v1/organizations/{organizationId}")
    Organization getOrganization(@PathVariable("organizationId") String organizationId);
}

All that we need to do now, is just inject our new client anywhere in our services and use it from there. In comparison to the RestTemplate, this is a more concise way of making HTTP calls.

Asynchronous HTTP requests

What if we do not want to wait for the request to the Organization Microservice to finish and want to execute it asynchronously and concurrently (using the @EnableAsync and @Async annotations from Spring, for example)? How are we going to access the headers that need to be propagated in this case? You might have guessed it: by using InheritableThreadLocal instead of ThreadLocal. As mentioned earlier, the child threads we create separately (aside from the Tomcat ones, which will be the parents) can inherit their parent's context. That way we can send header-populated requests in an asynchronous manner, as in the sketch below. There is no need to clear the context for the child threads (side note: clearing it will not affect the parent thread, and clearing the parent thread's context will not affect the child thread; it will only set the current thread's local context reference to null), since these will be created from a separate thread pool that has nothing to do with the container's one. The child threads' memory will be cleared after execution or after the Spring application exits because eventually, they die off.
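A minimal sketch of this approach, assuming we swap the holder's ThreadLocal for an InheritableThreadLocal and that async execution is enabled with @EnableAsync (the client class and method names below are illustrative and reuse the OrganizationRestTemplateClient from earlier):

public final class RequestHeadersContextHolder {

    // Child threads created from the request thread inherit this context
    private static final ThreadLocal<RequestHeadersContext> requestHeaderContext = new InheritableThreadLocal<>();

    // ... the rest of the class stays exactly the same
}

@Component
public class AsyncOrganizationClient {

    private final OrganizationRestTemplateClient organizationClient;

    public AsyncOrganizationClient(OrganizationRestTemplateClient organizationClient) {
        this.organizationClient = organizationClient;
    }

    @Async
    public CompletableFuture<Organization> getOrganizationAsync(String organizationId) {
        // Runs on a separate thread; with InheritableThreadLocal the context set by the parent
        // thread is still visible here, so the RestTemplate interceptor can propagate the headers
        return CompletableFuture.completedFuture(organizationClient.getOrganization(organizationId));
    }
}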

Summary

I hope you will find this neat little trick useful while refactoring your microservices. A lot of Spring’s functionality is actually based on thread locals. Looking into its source code, you will find a lot of similar/same concepts as the ones mentioned above. Spring Security is one example, Zuul Proxy is another.

The full code for this article can be found here.

References

Spring Microservices In Action by John Carnell

Multi Instance Process in Camunda

Reading Time: 10 minutes

As a developer, there is a 90% chance that the client or the business will come to you one day and say: "Look, we are doing this the old way, but now we want software for it". And there is a big chance that they will have some procedure with a lot of if-else scenarios. There also might be a part of the procedure that has to be repeated for every customer they have, which means you will have to execute the same logic multiple times. If that happens, don't be scared; Camunda is here to help, and I hope that this blog will help you too. We are going to dive into Camunda processes and how we can handle a part of a process being repeated multiple times. In order to follow along, some basic knowledge of Camunda, Java and Spring Boot can be of help.

Let’s look at the following scenario:
We have a bank, and in it there is one guy responsible for the whole process of giving loans to people. He is sitting there and a guy named John comes in wanting to get a loan. So the bank guy first gives an application to John. Once John is done with the application and returns it, the bank guy requests some documents from John's place of work so they can be sure that he is employed full-time. Once that is done, the bank guy needs to send everything to corporate so that they can make the final decision.

If we make a diagram of it, it will look something like this:

This is great and everything if you have one customer, but if you have around 100 customers per day, the bank guy will have no clue which guy gave him the work information or filled out which application.

So let’s make software for this guy and make his life easier, let’s open our Intellij and write some code!

I have already created the project (you can download it from here) and will go through some of the more important things.

First, let’s take a look at the pom file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.6.1</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>north.com</groupId>
    <artifactId>multi-process-instance</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>multi-process-instance</name>
    <description>multi-process-instance</description>
    <properties>
        <java.version>11</java.version>
    </properties>
    <dependencies>
        <!-- Spring -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>

        <!-- Camunda -->
        <dependency>
            <groupId>org.camunda.bpm.springboot</groupId>
            <artifactId>camunda-bpm-spring-boot-starter-rest</artifactId>
            <version>7.14.0</version>
        </dependency>
        <dependency>
            <groupId>org.camunda.bpm.springboot</groupId>
            <artifactId>camunda-bpm-spring-boot-starter-webapp</artifactId>
            <version>7.14.0</version>
        </dependency>
        <dependency>
            <groupId>org.camunda.bpm</groupId>
            <artifactId>camunda-engine-plugin-spin</artifactId>
            <version>7.14.0</version>
        </dependency>
        <dependency>
            <groupId>org.camunda.spin</groupId>
            <artifactId>camunda-spin-dataformat-json-jackson</artifactId>
            <version>1.10.0</version>
        </dependency>

        <!-- Database -->
        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
            <scope>runtime</scope>
        </dependency>
        <!-- Lombok -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

I have added the Camunda dependencies because we are going to create Camunda processes, and an H2 database as well. Even though we are not going to write code that accesses the database or creates tables, Camunda needs a database to create its own tables.

Next, let’s look at the application.yml

spring:
  datasource:
    driver-class-name: org.h2.Driver
    url: jdbc:h2:mem:testdb
    username: sa
  jpa:
    database-platform: org.hibernate.dialect.H2Dialect
    hibernate.ddl-auto: update

camunda:
  bpm:
    authorization:
      enabled: false
    default-serialization-format: application/json
    admin-user:
      id: admin
    database:
      type: h2
    generic-properties.properties:
      telemetry-reporter-activate: false
    deployment-resource-pattern:
      - classpath*:processes/*.bpmn

Above we are setting some database properties and some Camunda properties. Mainly, we are enabling the admin user for Camunda so we can open the cockpit with the user admin, and we are specifying where the process definitions are located and what kind of database Camunda will use.

Now, having said all that, let's create the whole scenario as a process in Camunda. The first thing you will need is the Camunda Modeler, which you can download from here. Once you have the Modeler up and running, go to File -> New File -> BPMN Diagram. What we need to build (or you can just open the existing BPMN diagram from the application) is something that looks like this:

As a first step, we will generate some credit loaners (we are doing this just to have some data to play with) and then set this collection of credit loaners as the value of a variable called creditLoanerList. This is achieved using a service task. If you click on Generate Credit Loaners, you will see that we are providing Java class as the implementation and as the Java class we are pointing to north.com.multiprocessinstance.camunda.task.GenerateCreditLoanersTask:

package north.com.multiprocessinstance.camunda.task;

import north.com.multiprocessinstance.entity.CreditLoaner;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

import java.util.Arrays;
import java.util.List;

public class GenerateCreditLoanersTask implements JavaDelegate {
    @Override
    public void execute(DelegateExecution delegateExecution) throws Exception {
        List<CreditLoaner> creditLoaners = Arrays.asList(new CreditLoaner("Sarah", "Cox"),
                new CreditLoaner("John", "Doe"),
                new CreditLoaner("Filip", "Trajkovski"));
        delegateExecution.setVariable("creditLoanerList", creditLoaners);
    }
}
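The CreditLoaner class referenced above is a simple POJO; a sketch of it could look like the snippet below (the actual class is part of the downloadable project). Since it is stored as a process variable and serialized to JSON by the Spin plugin, it needs a no-args constructor and getters/setters:

package north.com.multiprocessinstance.entity;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@NoArgsConstructor
@AllArgsConstructor
public class CreditLoaner {

    private String firstName;
    private String lastName;
}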

If you click on the subprocess (it is the large rectangle with the three dashes) you will see the reason why we set the list of credit loaners as the value of the creditLoanerList variable:

In order for a subprocess to be started for each credit loaner, we need to provide the collection of credit loaners. That is why we are setting the Collection field to creditLoanerList, and in order for every credit loaner to be accessible in its own subprocess, we are setting an Element Variable that we will call creditLoaner. What will happen is that this subprocess will run once for Filip, once for John and once for Sarah, as the snippet below illustrates.
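In the underlying BPMN XML, the multi-instance configuration of the subprocess looks roughly like this (the id is illustrative; the exact file is in the downloadable project):

<bpmn:subProcess id="creditLoanerSubProcess" name="Handle credit loaner">
  <bpmn:multiInstanceLoopCharacteristics
      camunda:collection="${creditLoanerList}"
      camunda:elementVariable="creditLoaner" />
  <!-- the Fulfill Application, Get work recommendation and notification tasks live inside this subprocess -->
</bpmn:subProcess>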

Let's look at the tasks in our subprocess. The first one, Fulfill Application, is a user task in which we want the credit loaner to fill out an application. For the next step, Get work recommendation, to be completed, the credit loaner needs to provide some documents proving that he is employed and has a regular salary (this is not completely implemented; we are describing a possible scenario and currently only form fields of type string are defined). In the third step, a service task will be executed, which runs the NotifyCorporate class:

package north.com.multiprocessinstance.camunda.task;

import lombok.extern.slf4j.Slf4j;
import north.com.multiprocessinstance.entity.CreditLoaner;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

@Slf4j
public class NotifyCorporate implements JavaDelegate {
    @Override
    public void execute(DelegateExecution delegateExecution) throws Exception {
        final CreditLoaner creditLoaner = (CreditLoaner) delegateExecution.getVariable("creditLoaner");
        log.info("Notify corporate that person {} wants to get credit", creditLoaner.getFirstName());
        delegateExecution.setVariable("creditLoanerFirstName", creditLoaner.getFirstName());
        log.info("This is the user: {}", creditLoaner);
    }
}

Here we notify corporate about the credit loaner, and to get the information about the credit loaner we use the variable that was previously set as the Element Variable, creditLoaner (see the image of the subprocess details). One additional thing we do here is set one more variable, called creditLoanerFirstName; the reason for that will be revealed when we take a look at the next task: Wait for feedback from corporate.

In the Wait for feedback from corporate step, one option is that the bank guy gets a mail from corporate saying that the credit loaner is fine and can get his loan, so the bank guy completes this step and continues with the next one. For the other option, we are providing a boundary event where corporate sends a STOP message and the request of that credit loaner is rejected. But in our scenario we have three subprocesses, one for each credit loaner, so when sending this STOP message we need to somehow specify which credit loaner it is sent for. That's why in the previous step we set the creditLoanerFirstName variable (in real cases don't use the first name but something unique like an email address), and one last thing that we need to do is set an input parameter on the Wait for feedback from corporate task.

In the Camunda Modeler, if you click on the task named Wait for feedback from corporate and go to the Input/Output tab, you should be able to see the input parameter, which is of type text and whose value is the variable we set previously, creditLoanerFirstName.

This input parameter gives us a way to specify for which credit loaner we want to STOP the procedure and decline the loan request.

Now let's start our application and then start our bank process. Once you have started the application, you can open the Camunda tasklist at the following URL: http://localhost:8080/camunda/app/tasklist/default/. The username and password are admin and admin. Once you have entered the credentials, you should be able to start a process from the top right by clicking on Start Process, and the BankProcess will be offered (we previously needed to make sure that the BPMN file is saved under resources/processes in the application, since we defined in our application.yml that this is the place where we keep the processes).

Let’s take a look at the Camunda cockpit:

Camunda cockpit

Once you have started the process, you should be able to see the task list (if for some reason the tasks are not visible, just click on Add simple filter and they should be displayed). If you click on one of the tasks, the task details will be displayed and from there we are able to complete it or open the process details:

Here we have a visual representation of the step the process is currently at. Let's complete all three Fulfill application tasks by claiming and completing them, and let's do the same for Get work recommendation. Now we are at the Wait for feedback from corporate step and we can test the STOP boundary event that we added.

There is already an exposed POST endpoint in the ProcessController, for which we need to provide the process instance id and a MessageEventRequest (a sketch of such an endpoint follows after the request body below). In this MessageEventRequest we need to provide the message name, which in our scenario is STOP, and a list of correlation keys. The correlation key is where we specify the credit loaner whose loan request we want to decline. The key will be set to creditLoanerFirstName, and let's decline Sarah's request (before executing it, make sure that there is a Wait for feedback from corporate task for Sarah), so we will set the value to Sarah. What we need to do is execute a POST request towards localhost:8080/{processInstanceId}/messageEvent with the following request body:

{
    "messageName":"STOP",
    "correlationKeys":[{
        "key":"     ",
        "value":"Sarah"
    }]
}
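For reference, here is a sketch of what such an endpoint could look like using Camunda's RuntimeService message correlation API (the package and DTO names are assumptions based on the description above; the actual ProcessController is in the linked project):

package north.com.multiprocessinstance.controller;

import lombok.RequiredArgsConstructor;
import north.com.multiprocessinstance.entity.MessageEventRequest;
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.runtime.MessageCorrelationBuilder;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequiredArgsConstructor
public class ProcessController {

    private final RuntimeService runtimeService;

    @PostMapping("/{processInstanceId}/messageEvent")
    public void sendMessageEvent(@PathVariable String processInstanceId,
                                 @RequestBody MessageEventRequest request) {
        MessageCorrelationBuilder correlation = runtimeService
                .createMessageCorrelation(request.getMessageName())
                .processInstanceId(processInstanceId);
        // The input parameter on the "Wait for feedback from corporate" task makes
        // creditLoanerFirstName a local variable of the waiting execution, so we correlate on it
        request.getCorrelationKeys().forEach(key ->
                correlation.localVariableEquals(key.getKey(), key.getValue()));
        correlation.correlate();
    }
}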

As I mentioned previously, you can figure out the process instance id and monitor the whole flow of the tasks using the Camunda web apps available at the following URL: http://localhost:8080/camunda/app/tasklist/default/.

So we started a whole process, started three subprocesses, and completed service tasks and user tasks. I hope that you enjoyed it. You can also find the code at the following link.

Using Hystrix as a fault-tolerant strategy

Reading Time: 5 minutes

As everybody knows by now, a microservice architecture represents a collection of multiple services. Each service contains its own business logic, in contrast to a monolithic architecture, which contains everything in one place. This means we usually have to maintain multiple services at the same time.

The microservices communicate with each other in order to fulfil their needs. As usual, when you need it the most, an instance of one of the microservices can go down or respond with a delay, which we usually call an unreachable service. The chances of failure need to be taken into consideration and handled in an appropriate way.

Why is taking care of latency important in a microservice architecture?

Increased latency can occur when one of the microservices is:

  • Reading/writing to database
  • Synchronously calling another service
  • Hitting the timeout of asynchronous communication

If we consider the following scenario:
We have 5 microservices that communicate with each other. If microservice 5 goes down, all the other services that depend on it can be affected.

In this type of scenario, the solution for this is the strategy of fault-tolerance.

Circuit breaker

A circuit breaker is a pattern that can help in achieving fault tolerance. The circuit breaker detects when an external service fails and, in that case, the circuit breaker opens. All the incoming requests to the unhealthy service are then rejected and errors are returned, instead of trying to reach out to the unhealthy service over and over again. For this, we can use Hystrix.

What is Hystrix?

Hystrix is a library which implements the fault-tolerance strategy and is used for isolating the access to remote services and increasing resilience, in order to prevent cascading failures and offer the ability to recover quickly from disaster.

So, how does Hystrix actually work?

Let’s take the architecture from above again. Suppose that there are multiple user requests from microservice One that require a piece of information from microservice Five. In this situation, microservice One can easily get blocked while waiting for responses from microservice Five. Microservice Five can also become overloaded with requests, so the outcome would be blocking the whole service. This is where Hystrix kicks in and helps avoid the problem.

The external requests to the service of microservice Five are wrapped in a HystrixCommand, which defines the behaviour of the requests. The behaviour is defined by the number of threads available to handle the requests. In our example, the service in microservice Five can be given ten available threads for handling external requests. By wrapping the service in a HystrixCommand, we are limiting the number of requests the service is supposed to get.

By default, Hystrix uses ten threads. If there are more concurrent threads than the default value, the rest of the requests are rejected – redirected to the fallback method.

Using Hystrix in Spring boot

First thing, adding the dependency in pom.xml:

<dependency> 
   <groupId>org.springframework.cloud</groupId> 
   <artifactId>spring-cloud-starter-hystrix</artifactId> 
   <version>1.4.7.RELEASE</version> 
</dependency>

<dependencyManagement> 
   <dependencies> 
      <dependency> 
         <groupId>org.springframework.cloud</groupId> 
         <artifactId>spring-cloud-dependencies</artifactId> 
         <version>Hoxton.SR8</version> 
         <type>pom</type> 
         <scope>import</scope> 
      </dependency> 
   </dependencies> 
</dependencyManagement>

Add the @EnableHystrix annotation to the main class:

@SpringBootApplication 
@EnableHystrix 
public class HystrixApplication { 

   public static void main(String[] args) { 
      SpringApplication.run(HystrixApplication.class, args); 
   } 
}

The next step is to define the fallback method for HystrixCommand:

@Service 
@Slf4j 
public class HystrixService { 
 
    @HystrixCommand(fallbackMethod="fallbackHystrix", 
    commandProperties = {@HystrixProperty(name = 
    "execution.isolation.thread.timeoutInMilliseconds", value = "2000")}) 
    public String testHystrix(String message) throws InterruptedException { 
        Thread.sleep(4000); 
        return message != null ? message : "Message is null"; 
    } 
 
    public String fallbackHystrix(String message) { 
        log.error("Request took too long. Timeout limit: 2000ms.");
        return "Request took too long. Timeout limit: 2000ms. Message: " + message;
    } 
 }

This is just a simple example of how the library can be used.

In order to simulate a timeout, Thread.sleep(4000) (in milliseconds) is used in the method, and the response timeout is set to 2000 milliseconds as a HystrixProperty in the HystrixCommand annotation, after which the call should end up in the fallback method.

Now we can test the implementation by executing the following request:

http://localhost:8080/hystrix-example?message=hello
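The controller that exposes this endpoint is not shown in the snippets above; a minimal sketch of what it might look like (the mapping and parameter name are assumptions for the example) is:

import lombok.RequiredArgsConstructor;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequiredArgsConstructor
public class HystrixController {

    private final HystrixService hystrixService;

    @GetMapping("/hystrix-example")
    public String hystrixExample(@RequestParam String message) throws InterruptedException {
        // delegates to the HystrixCommand-wrapped method; slow calls fall back after 2000ms
        return hystrixService.testHystrix(message);
    }
}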

If we want to change the default thread pool size of HystrixCommand, we can add the following thread pool properties:

@Service 
@Slf4j 
public class HystrixService { 
 
    @HystrixCommand(fallbackMethod = "fallbackHystrix", 
            commandProperties = {@HystrixProperty(name = 
            "execution.isolation.thread.timeoutInMilliseconds", value = "2000")}, 
            threadPoolProperties = {@HystrixProperty(name = "coreSize", value = "3")}) 
    public String testHystrix(String message) throws InterruptedException { 
        return message != null ? message : "Message is null"; 
    } 
 
    public String fallbackHystrix(String message) { 
        log.error("Request took too long. Timeout limit: 2000ms.");
        return "Request took too long. Timeout limit: 2000ms. Message: " + message;
   } 
 }

The fallback method is called when some fault occurs.
An important thing to notice here is that the signature of the fallback method should be the same as that of the method on which the HystrixCommand annotation is defined.

The working example of the above exercise can be found here hystrix-example

Summary

Moving away from a monolithic architecture to microservices usually comes with quite some challenges.
In this blog post, we took a look at one of them, but we just scratched the surface.
As more challenges are coming in the pipeline, stay tuned and hope to see you in one of the next posts.

Rest assure your API

Reading Time: 4 minutes


Most, if not all, of today’s applications expose some API for interaction, either for customers or for other applications. An application programming interface, or API, is a software mediator that allows two applications to talk to each other. Each time we use Facebook, YouTube or some other app, essentially we are using an API. An API is a set of HTTP endpoints that are used to send and retrieve data in some form, JSON or XML. Making sure those HTTP endpoints are sending and retrieving correct data, and thus working according to the specifications, is a vital requirement. Testing APIs belongs to the last (E2E) layer of the testing pyramid, for which you may find more information in my previous blog.

Introduction to Rest Assured

Rest Assured is an open-source Java library that is used for testing RESTful web services. It allows us to write tests using the BDD pattern. Rest Assured is a headless client for accessing REST web services. The library is highly customizable, allowing us to create a wide variety of request combinations to test different application core business logic combinations.

High customizability also comes in handy when we want to verify the responses from the server, where we can verify the Status code, Status message, Body, Headers, etc. This makes Rest-Assured a versatile library and is often used for API testing.

Rest Assured

Pseudo Syntax:

Given().
        param("a", "b").
        header("c", "d").
When().
Method().
Then().
        statusCode(XXX).
        body("x.y", equalTo("z"));

The syntax of Rest Assured.io is the most interesting part, it’s using the BDD syntax and it’s very understandable.

Explanation:

  • Given(): the ‘Given’ keyword is used to set up the pre-conditions/context; here you pass the request headers, query and path parameters, body and cookies. This is optional if these items are not needed in the request
  • When(): the ‘when’ keyword marks the premise of the test
  • Method(): specifies the HTTP method of the action (POST, GET, PUT, PATCH, DELETE)
  • Then(): specifies the result/outcome and is used for assertions

Let’s create automation tests

Create a new project in IntelliJ

Add the following dependency:

        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>RELEASE</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>io.rest-assured</groupId>
            <artifactId>rest-assured</artifactId>
            <version>4.4.0</version>
            <scope>test</scope>
        </dependency>

Create new Test class

Simple test example:

public class HelloYouTubeRestAssured {

    @Test
    public void greetingsYouTube() {
        given().when()
                .get("http://youtube.com/")
                .then()
                .statusCode(200);
    }
}

This simple test connects to YouTube, performing a GET call and making sure that the server responds with a success status code of 200.

Another test verifies the Users API:

public class UsersApiTest {

    @Test
    public void checkUsers() {
        given()
                .baseUri("https://jsonplaceholder.typicode.com")
                .when()
                .get("/users")
                .then()
                .statusCode(200)
                .statusLine("HTTP/1.1 200 OK")
                .body("id",hasSize(10))
                .body("name[0]", equalTo("Leanne Graham"))
                .body("username[0]", equalTo("Bret"))
                .body("email[0]", equalTo("Sincere@april.biz"))
                .body("address[0].city", equalTo("Gwenborough"))
                .body("phone[0]", startsWith("1-770-736-8031"))
                .body("website[0]", equalTo("hildegard.org"))
                .body("company[0].name", equalTo("Romaguera-Crona"));
    }
}

As we can see from the above examples, the tests are self-contained in the sense that a single call is performed to the server and only a single response is evaluated. The above test calls the Users API of the application and then verifies the response from the server. The verification first checks that the status code of the response is 200 OK. Then we verify that the response has 10 items, and after that we verify that the first item has the corresponding data. We are also able to assert the inner data of the user object via address[0].city and company[0].name. The assertions which we use are from org.hamcrest, which is incorporated into Rest-Assured.

Conclusion

Even though here we have scratched the surface, I hope that you now have a better understanding of Rest-Assured. You can find a working example with the tests on this repository.
Also, you can find more about the Rest-assured usage here.

Deploy microservice on Kubernetes, step by step

Reading Time: 11 minutes

In this tutorial, I will try to explain step by step how you can set up Kubernetes, deploy your microservice on Kubernetes, and check the result via the Kubernetes dashboard. All other things will be “as simple as possible”. Gcloud will be used as the cloud platform. We will cover the following aspects of the problem:

  1. Create microservice to be deployed
  2. Place the application in a Docker container
  3. What is Kubernetes and how to install it?
  4. Create a new Kubernetes project
  5. Create new Cluster
  6. Allow access from your local machine
  7. Create service account
  8. Activate service account
  9. Connect to cluster
  10. Gcloud initialization
  11. Generate access token
  12. Deploy and start Kubernetes dashboard
  13. Deploy microservice

Step 1: Create microservice to be deployed

Traditionally, in the programming world, everything starts with “Hello World”. So, as mentioned previously, to keep things simple, create a microservice that returns just “Hello World”. You can use https://start.spring.io/ for this goal. Create HelloController like this:

package com.example.demojooq.controllers;


import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/v1")
public class HelloController {

    @GetMapping("/say-hello")
    public String sayHello() {
        return "Hello world";
    }
}

Step 2: Place the application in a Docker container

We have a microservice, and now we need to put this microservice in a Docker container and upload it to Kubernetes. From that point, Kubernetes will orchestrate the container according to your settings. Let’s create the first image for the microservice. As you might guess, the build file is called Dockerfile (without any extension), and its content is:

Dockerfile

FROM adoptopenjdk/openjdk11:jre-11.0.8_10-debianslim
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","app.jar"]

The next step is to create the docker-compose file. It calls the Dockerfile to build the image. You can do that manually, but the best way is from the docker-compose file, as you keep a permanent record of the setup. This is a .yaml file (shown below).

docker-compose.yaml

version: "3"
services:
  hello-world:
    build: .
    ports:
      - "8099:8080"

After starting Docker, go to the folder where the docker-compose file is located and execute the command “docker-compose up”. The expectation is to reach this microservice on port 8099. If everything is OK, in your Docker you will see something like this:

Check the microservice Docker installation with Postman by calling http://localhost:8099/api/v1/say-hello. In the response, you get “Hello World”.

Step 3: What is Kubernetes and how to install it?

What is Kubernetes?

Kubernetes is an open-source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications. What happens when you use Docker and your container fails? Probably the first thing to do is to restart your container. You do that manually (if you don’t have Kubernetes). Here comes Kubernetes: it observes that the container is down and starts a new container automatically. This is just a basic use case. Please read more on the internet; there is a bunch of information about this.

How to install Kubernetes?

Ok, until now you are sure that Kubernetes is needed, but where to find it, what are the costs, and how to install it? First of all, try “download Kubernetes” on Google… Pick the site https://kubernetes.io/docs/tasks/tools/… Options for Windows, Mac, Linux appear… Different installations like kind, minikube, kubeadm… So, is it worth spending so much time setting up Kubernetes properly? You do not have to ask me, I agree with you, it is too much time. Fortunately, we can take a shortcut and skip all that: go to Gcloud, where Kubernetes is offered as a service, and just use it. Somebody else takes care of the setup, and we can focus just on the business logic in our microservice and use the out-of-the-box Kubernetes installation from Gcloud. Sounds good, doesn’t it? The last and most important question: money. Is it for free? No, it is not. You have to pay for the Gcloud services and here is the price list: https://cloud.google.com/kubernetes-engine/pricing. But for ordinary people like you and me, Gcloud offers a free account for 3 months with up to 300$ of credit, which seems fair. It is enough time to learn about deploying microservices on Kubernetes. For any professional use in the future, the company should stand behind this. Here is the link where you can create your free cloud account: https://cloud.google.com/. One more thing: during the creation of the free account, Google will ask for your bank account, to charge you automatically. But do not worry, you are safe for the first three months and below 300$, and for any charge you will be asked for permission first… So far my personal experience is positive, as Google is keeping the promise made when you create an account. But the final decision is up to you.

Step 4: Create new Kubernetes project

Open up your Google account, sign in and go to the console.

Create a new project from the main dashboard; the name of the new project is “hello-world”. From now on, this is your active project.

Step 5: Create new cluster

Create a new cluster (name it cluster2). Accept the default values for the other fields.

Step 6: Allow access from your local machine

Now, we must allow access from our local machine to Kubernetes, via kubectl. For that purpose, we need to follow these steps:

  1. Click on cluster2
  2. Find your local IP address and add it here according to the CIDR standard in the Edit control plane authorized networks

Step 7: Create service account

Give the new account the role “Owner”. Accept the default values for the other fields. After the service account is created, you should have something like this:

Generate keys for this service account with key type JSON. When the key is downloaded, it has some random name like hello-world-315318-ab0c74d58a70.json. Keep this file in a safe place, we will need it later.

Now, install Google Cloud SDK Shell on your machine according to your OS. Let’s do the configuration so kubectl can reach cluster2. Copy the file hello-world-315318-ab0c74d58a70.json and put it in the CLOUD SDK folder. For the Windows environment, it looks like this:

Step 8: Activate service account

The first thing to do is to activate the service account with the command: gcloud auth activate-service-account hello-world-service-account@hello-world-315318.iam.gserviceaccount.com --key-file=hello-world-315318-ab0c74d58a70.json

Step 9: Connect to cluster

Now go to cluster2 again and find the connection string to connect to the new cluster

Execute this connection string in Google Cloud Shell: gcloud container clusters get-credentials cluster2 --zone us-central1-c --project hello-world-315318

Step 10: Gcloud initialization

The next command to execute is gcloud init, to initialize connection with the new project. Here is the complete code on how to do that from the Gcloud Shell:

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>gcloud init
Welcome! This command will take you through the configuration of gcloud.

Settings from your current configuration [dev] are:
accessibility:
  screen_reader: 'False'
compute:
  region: europe-west3
  zone: europe-west3-a
core:
  account: hello-world-service-account@hello-world-315318.iam.gserviceaccount.com
  disable_usage_reporting: 'True'
  project: dops-containers

Pick configuration to use:
 [1] Re-initialize this configuration [dev] with new settings
 [2] Create a new configuration
 [3] Switch to and re-initialize existing configuration: [database-connection]
 [4] Switch to and re-initialize existing configuration: [default]
Please enter your numeric choice:  2

Enter configuration name. Names start with a lower case letter and
contain only lower case letters a-z, digits 0-9, and hyphens '-':  hello-world
Your current configuration has been set to: [hello-world]

You can skip diagnostics next time by using the following flag:
  gcloud init --skip-diagnostics

Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).

Choose the account you would like to use to perform operations for
this configuration:
 [1] cicd-worker@devops-platform-n47.iam.gserviceaccount.com
 [2] d.trifunov74@gmail.com
 [3] dimche.trifunov@north-47.com
 [4] dtrifunov@lunar-sled-314616.iam.gserviceaccount.com
 [5] hello-world-service-account@hello-world-315318.iam.gserviceaccount.com
 [6] service-account-demo-dime@blissful-epoch-305214.iam.gserviceaccount.com
 [7] Log in with a new account
Please enter your numeric choice:  5

You are logged in as: [hello-world-service-account@hello-world-315318.iam.gserviceaccount.com].

API [cloudresourcemanager.googleapis.com] not enabled on project
[580325979968]. Would you like to enable and retry (this will take a
few minutes)? (y/N)?  y

Enabling service [cloudresourcemanager.googleapis.com] on project [580325979968]...
Operation "operations/acf.p2-580325979968-f1bf2515-deea-49d5-ae35-a0adfef9973e" finished successfully.
Pick cloud project to use:
 [1] hello-world-315318
 [2] Create a new project
Please enter numeric choice or text value (must exactly match list
item):  1

Your current project has been set to: [hello-world-315318].

Do you want to configure a default Compute Region and Zone? (Y/n)?  n

Error creating a default .boto configuration file. Please run [gsutil config -n] if you would like to create this file.
Your Google Cloud SDK is configured and ready to use!

* Commands that require authentication will use hello-world-service-account@hello-world-315318.iam.gserviceaccount.com by default
* Commands will reference project `hello-world-315318` by default
Run `gcloud help config` to learn how to change individual settings

This gcloud configuration is called [hello-world]. You can create additional configurations if you work with multiple accounts and/or projects.
Run `gcloud topic configurations` to learn more.

Some things to try next:

* Run `gcloud --help` to see the Cloud Platform services you can interact with. And run `gcloud help COMMAND` to get help on any gcloud command.
* Run `gcloud topic --help` to learn about advanced features of the SDK like arg files and output formatting

Step 11: Generate access token

Type kubectl get namespace; an access token is generated in the .kube folder (in your home folder), in the config file:

If you open this config file, you will find your access token. You will need this later.

Step 12: Deploy and start Kubernetes dashboard

Now, deploy Kubernetes dashboard with the next command: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>kubectl proxy
Starting to serve on 127.0.0.1:8001

Start the dashboard with kubectl proxy command. Now open the dashboard from the link: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default

In front of you, this screen will appear:

Now, you need the token from the config file that we spoke about a moment ago. Open the config file with Notepad (on Windows), find your access token, copy it from there and paste it in the Enter token* field. Be careful when you are copying the token from the config file as there might be several tokens. You must choose yours (image below).

Finally, the stage is prepared to deploy microservice.

Step 13: Deploy microservice

Build the Docker image from the Dockerfile with the command: docker build -t docker2222/dimac:latest . (note the trailing dot, which is the build context; docker2222/dimac is my public Docker repository).
Push the image to Docker Hub with the command: docker image push docker2222/dimac:latest.
Execute kubectl apply -f k8s.yaml where k8s.yaml is the file below:

---

apiVersion: v1
kind: Namespace
metadata:
  name: hello

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: hello
  annotations:
    buildNumber: "1.0"
spec:
  selector:
    matchLabels:
      app: hello-world
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
      annotations:
        buildNumber: "1.0"
    spec:
      containers:
        - name: hello-world
          image: docker2222/dimac:latest
          readinessProbe:
            httpGet:
              path: "/actuator/health/readiness"
              port: 8080
            initialDelaySeconds: 5
          ports:
            - containerPort: 8080
          env:
            - name: APPLICATION_VERSION
              value: "1.0"
---


apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: hello
spec:
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
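One detail worth calling out: the readiness probe above points to /actuator/health/readiness, so the application has to expose that endpoint. This is a minimal sketch, assuming Spring Boot 2.3 or newer with the Actuator starter on the classpath; when the app runs on Kubernetes, Spring Boot normally enables the liveness and readiness health groups automatically, and outside of Kubernetes they can be switched on explicitly in application.properties:

management.endpoint.health.probes.enabled=true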

Last but not least, open the Kubernetes dashboard. If everything is OK, you should see your service.
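If you prefer the command line over the dashboard, a quick way to check the deployment from the Cloud SDK shell is to list the pods and the service in the hello namespace and port-forward the service locally (the names are taken from the k8s.yaml above):

kubectl get pods -n hello
kubectl get service hello-world -n hello
kubectl port-forward service/hello-world 8080:80 -n hello

With the port-forward running, http://localhost:8080/api/v1/say-hello should again return “Hello world”.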

Create a CI/CD pipeline with GitHub Actions

Reading Time: 7 minutes

A CI/CD pipeline helps in automating your software delivery process. What the pipeline does is building code, running tests, and deploying a newer version of the application.

Not long ago GitHub announced GitHub Actions, meaning that they have built in support for CI/CD. This means that developers can use GitHub Actions to create a CI/CD pipeline.

With Actions, GitHub now allows developers not just to host the code on the platform, but also to run it.

Let’s create a CI/CD pipeline using GitHub Actions, the pipeline will deploy a spring boot app to AWS Elastic Beanstalk.

First of all, let’s find a project

For this purpose, I will be using this project which I have forked:
Once forked, we need to open the project. Upon opening, we will see the section for GitHub Actions.

GitHub Actions Tool

Add predefined Java with Maven Workflow

Get started with GitHub Actions

By clicking on Actions we are provided with a set of predefined workflows. Since our project is Maven based we will be using the Java with Maven workflow.

By clicking “Start commit”, GitHub will add a commit with the workflow; the commit can be found here.

Let’s take a look at the predefined workflow:

name: Java CI with Maven

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 1.8
      uses: actions/setup-java@v1
      with:
        java-version: 1.8
    - name: Build with Maven
      run: mvn -B package --file pom.xml

name: Java CI with Maven
This is just specifying the name for the workflow

on: push, pull_request
The on keyword is used for specifying the events that will trigger the workflow. In this case, those events are push and pull_request on the master branch

jobs:
A job is a set of steps that execute on the same runner

runs-on: ubuntu-latest
The runs-on keyword specifies the underlying OS we want our workflow to run on, for which we are using the latest version of Ubuntu

steps:
A step is an individual task that can run commands (known as actions). Each step in a job executes on the same runner, allowing the actions in that job to share data with each other

actions:
Actions are the smallest portable building blocks of a workflow and are combined into steps to create a job. We can create our own actions, or use actions created by the GitHub community

Our steps actually set up Java and execute the Maven commands needed to build the project.

Since we added the workflow by creating a commit from the GUI, the pipeline automatically started and verified the commit, which we can see in the following image:

Pipeline report

Create an application in AWS Elastic Beanstalk

The next thing that we need to do is to create an app on Elastic Beanstalk where our application is going to be deployed. For that purpose, an AWS account is needed.

AWS Elastic Beanstalk service

Upon opening the Elastic Beanstalk service we need to choose the application name:

Application name

For the platform choose Java8.

Choose platform

For the application code, choose Sample application and click Create application.
Elastic Beanstalk will create and initialize an environment with a sample application.

Let’s continue working on our pipeline

We are going to use an action created from the GitHub community for deploying an application on Elastic Beanstalk. The action is einaregilsson/beanstalk-deploy.
This action requires additional configuration, which is added using the with keyword:

    - name: Deploy to EB
      uses: einaregilsson/beanstalk-deploy@v13
      with:
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        application_name: {change this with aws application name}
        environment_name: {change this with aws environment name}
        version_label: ${{github.SHA}}
        region: {change this with aws region}
        deployment_package: target/spring-petclinic-rest-2.2.5.jar

Add variables

We need to add values for the properties: the AWS Elastic Beanstalk application_name and environment_name, the AWS region and the AWS API keys.

Go to AWS Elastic Beanstalk and copy the previously created Environment name and Application name.
Go to AWS IAM and, under your user in the security credentials section, either create a new AWS access key or use an existing one.
The AWS Access Key and AWS Secret Access Key should be added into the GitHub project settings under the Secrets tab, which looks like this:

GitHub Project Secrets

The complete pipeline should look like this:

name: Java CI with Maven

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 1.8
      uses: actions/setup-java@v1
      with:
        java-version: 1.8
    - name: Build
      run: mvn -B package --file pom.xml
    - name: Deploy to EB
      uses: einaregilsson/beanstalk-deploy@v13
      with:
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        application_name: pet-clinic
        environment_name: PetClinic-env
        version_label: ${{github.SHA}}
        region: us-east-1
        deployment_package: target/spring-petclinic-rest-2.2.5.jar

Lastly, let’s modify the existing app

In order to be considered a healthy instance by Elastic Beanstalk, the deployed application has to return an OK response when accessed from the Load Balancer which stands in front of Elastic Beanstalk. The load balancer accesses the application on the root path. The forked application, when accessed on the root path, forwards the request towards swagger-ui.html. For that reason, we need to remove the forwarding.

Change RootController.class:

@RequestMapping(value = "/", method = RequestMethod.GET, produces = "application/json")
    public ResponseEntity<Void> getRoot() {
        return new ResponseEntity<>(HttpStatus.OK);
    }

Change application.properties server port to 5000 since, by default, Spring Boot applications will listen on port 8080. Elastic Beanstalk assumes that the application will listen on port 5000.

server.port=5000

And remove the server.servlet.context-path=/petclinic/.

The successful commit which deployed our app on AWS Elastic Beanstalk can be seen here:

Pipeline build

And the Elastic Beanstalk with a green environment:

Elastic Beanstalk green environment

Voila, there we have it: a CI/CD pipeline with GitHub Actions and deployment on AWS Elastic Beanstalk. You can find the forked project here.

Pet Clinic Swagger UI

How to integrate GraphQL in the Microservice

Reading Time: 4 minutes

GraphQL is a query language for your APIs and a runtime for fulfilling those queries with existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more. GraphQL is designed to make APIs fast, flexible, and developer-friendly.

GraphQL SPQR

GraphQL SPQR (GraphQL Schema Publisher & Query Resolver, pronounced like speaker) is a simple-to-use library for rapid development of GraphQL APIs in Java. It works by dynamically generating a GraphQL schema from Java code.

In this tutorial, we are going to explain the simple steps to integrate GraphQL into your microservice.

  • Include dependencies in pom.xml
<!-- GraphQL -->
<dependency>
    <groupId>io.leangen.graphql</groupId>
    <artifactId>spqr</artifactId>
    <version>${graphql-spqr.version}</version>
</dependency>
<dependency>
    <groupId>com.graphql-java-kickstart</groupId>
    <artifactId>graphql-spring-boot-autoconfigure</artifactId>
    <version>${graphql-spring-boot-autoconfigure.version}</version>
</dependency>
  • Spring Boot Java Configuration class:
@Configuration
public class GraphQLConfiguration {
    @Bean
    public GraphQLSchema schema(GraphQLRootQuery graphQLRootQuery,
                                GraphQLRootMutation graphQLRootMutation,
                                GraphQLRootSubscription graphQLRootSubscription,
                                GraphQLResolvers graphQLResolvers) {
        GraphQLSchema schema = new GraphQLSchemaGenerator()
            .withBasePackages("com.myproject.microservices")
            .withOperationsFromSingletons(graphQLRootQuery, graphQLRootMutation, graphQLRootSubscription, graphQLResolvers)
            .generate();
        return schema;
    }

    @Bean
    public GraphQLResolvers resolvers(MyOtherMicroserviceClient myOtherMicroserviceClient) {
        return new GraphQLResolvers(myOtherMicroserviceClient);
    }

    @Bean
    public GraphQLRootQuery query(MyOtherMicroserviceClient myOtherMicroserviceClient) {
        return new GraphQLRootQuery(myOtherMicroserviceClient);
    }

    @Bean
    public GraphQLRootMutation mutation(MyOtherMicroserviceClient myOtherMicroserviceClient) {
        return new GraphQLRootMutation(myOtherMicroserviceClient);
    }

    // define your own scalar types (custom data type) if you need to.
    @Bean
    public GraphQLEnumProperty graphQLEnumProperty() {
        return new GraphQLEnumProperty();
    }

    @Bean
    public JsonScalar jsonScalar() {
        return new JsonScalar();
    }

    /* Add your own custom error handler if you need to.
    This is needed, if you want to propagate any custom information error messages propagated to the client. */
    @Bean
    public GraphQLErrorHandler errorHandler() {
        ....
    }

}
  • GraphQL class for query operations:
public class GraphQLRootQuery {

    @GraphQLQuery(description = "Retrieve list of your attributes by search criteria")
    public List<AttributeDTO> getMyAttributes(@GraphQLId @GraphQLArgument(name = "id", description = "Id of your attribute") String id,
                                              @GraphQLArgument(name = "myQueryParam", description = "…") String myQueryParam) {
        return …;
    }
}
  • GraphQL class for mutation operations:
public class GraphQLRootMutation {

    @GraphQLMutation(description = "Update attribute")
    public AttributeDTO updateAttribute(@GraphQLId @GraphQLNonNull @GraphQLArgument(name = "id", description = "Id of your attribute") String id,
                                        @GraphQLArgument(name = "payload", description = "Payload for update") UpdateRequest payload) {
        return …
    }
}
  • GraphQL resolvers:
public class GraphQLResolvers {

    @GraphQLQuery(description = "Retrieve additional information")
    public List<AdditionalInfoDTO> getAdditionalInfo(@GraphQLContext AttributesDTO attributesDTO) {
        return …
    }
}

Note: All the Java classes (AdditionalInfoDTO, AttributesDTO, UpdateRequest) are just examples of data transfer objects and requests that need to be replaced with your custom classes in order for the code to compile and be executable.

  • How to use GraphQL from client side?

Finally, we want to have a look at how GraphQL looks from the front-end side. We are using a tool called GraphiQL (https://www.electronjs.org/apps/graphiql) to test it.

  • GraphQL Endpoint: URL of your service, defaults to /graphql
  • Method: it is always POST
  • HTTP Header: You can include authorization tokens with the request
  • Left pane: the query, written in the GraphQL query language
  • Right pane: response from the server, always JSON
  • Note: you get what you request; only the attributes that you request are returned

Simple examples for query and mutation:
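The original screenshots are not included here, so purely as an illustration: assuming SPQR exposes the operations above under the names myAttributes and updateAttribute, and assuming AttributeDTO has id and name fields and UpdateRequest has a name field (all of these are assumptions, the real names depend on your classes and the SPQR configuration), a query and a mutation could look roughly like this:

query {
  myAttributes(id: "1", myQueryParam: "example") {
    id
    name
  }
}

mutation {
  updateAttribute(id: "1", payload: { name: "new value" }) {
    id
    name
  }
}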

In this tutorial, you learned how to create your GraphQL API in Java with Spring Boot. But you are not limited to Spring Boot for that. You can use the GraphQL SPQR in pretty much any Java environment.

Scaling Microservices with Spring Cloud Netflix

Reading Time: 10 minutes

If you need to build large distributed systems, then this is the place to be. We are going to talk about some of the components that the solution from Spring and Netflix provides and how easy it is to use them. If you follow this article, in the end, you will create a complete application with service discovery, client-side load balancing, feign clients and much more.

Before we start, let’s explain some of the terms that we are going to use in this article:

  • Eureka – a service discovery service, where every client will register itself
  • Ribbon – a client-side load balancer
  • Feign client – declarative web service client which provides communication between microservices

The picture above presents what we are going to build. We will create two user-oriented microservices, one called Supplier and the other called Order. The user will be able to place an order with some supplier, but the Supplier, in order to perform the order, will call the Order microservice. For the communication between Supplier and Order, we will use a Feign client in combination with service discovery that will be enabled by Eureka. In the end, we are going to scale the Order microservice and we will see how the Ribbon load balancer behaves when we have more instances.

Let’s start by creating the Eureka service discovery microservice.

The easiest way is to go to the Spring Initializer and create your microservice with the following properties as you can see on the picture below:

The required dependencies for our service discovery service are only the Eureka Server.

Once you are done with this, click on generate and your project will be downloaded. Open it via your favourite IDE (I will be using IntelliJ) and there are just two more things that you need to do. In your main class you should add the following annotation @EnableEurekaServer:

package com.north;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class EurekaApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaApplication.class, args);
    }

}

One more thing that we need to change is our application.yml file. By default an application.properties file is created; if that is the case, rename it to application.yml and add the following code:

server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

With this, we are setting the server port and the service URL. And there we have our first service discovery service. Start the application, go to your browser and enter the following link: http://localhost:8761. Now we should be able to see the Eureka homepage:

As you can see, there are no instances registered at the moment. So let’s create our first instance.

We will start by creating the Order microservice. Go to the Spring Initializer and create a project with the following properties:

And we will add the following dependencies:

Let’s start by setting the name and the port of the application. Change your application.properties to application.yml and add the following code:

spring:
  application:
    name: order

server:
  port: 8082

Now the name of the application is set to Order and the application will run on port: 8082. If this port is taken on your machine, feel free to change the port. We are not going to be dependent on this port but you will see that we will be dependent on the application name when we want to communicate with it.

In order to enable this instance to be discovered by Eureka we need to add the following annotation to the main class:

package com.north.order;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@EnableDiscoveryClient
@SpringBootApplication
public class OrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }

}

Now if we start the application and go back to the homepage of eureka by going to our browser and entering the following link: http://localhost:8761 we should be able to see that this instance is registered to Eureka.

Since we confirmed that this instance is registered to Eureka we can now create an endpoint from where an order can be placed. First, let’s create an entity Order:

package com.north.order.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Order {

    private String productName;
    private int quantity;
}

It is a simple entity that contains the name of the product and how many pieces of it we want to order. The rest controller should contain the following logic:

package com.north.order.ctrl;

import com.north.order.domain.Order;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Slf4j
public class OrderController {

    @PostMapping(value = "/order")
    ResponseEntity<Void> placeOrder(@RequestBody Order order) {
        log.info("Placing an order for product: {} with quantity: {}", order.getProductName(), order.getQuantity());
        return ResponseEntity.ok().body(null);
    }
}

You can test this endpoint by using Postman or some similar tool but we want the Supplier microservice to call this endpoint.
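For reference, a request body that matches the Order entity (the same values that show up in the logs later in this article) would be:

{
    "productName": "bread",
    "quantity": 300
}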

Now that we are done with the Order microservice, let’s build the Supplier. Again we will open the Spring Initializer and create a project with the following properties:

And we will have the following dependencies:

Generate the project and import it into your IDE. First thing let’s change the application.properties file by changing the extension to yml and add the following code:

spring:
  application:
    name: supplier
server:
  servlet:
    context-path: /supplier

With this, we have set the application name and set a context-path. Since here we didn’t change the port, the default 8080 will be taken. In order to register this instance to Eureka and to be able to use Feign Client we need to add the following two annotations in our main class:

package com.north.supplier;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class SupplierApplication {

    public static void main(String[] args) {
        SpringApplication.run(SupplierApplication.class, args);
    }

}

Next thing is to create the same entity Order as we have in the previous microservice.

package com.north.supplier.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Order {

    private String productName;
    private int quantity;
}

For communication with the Order microservice we will create a feign client called OrderClient:

package com.north.supplier.service;

import com.north.supplier.domain.Order;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;

@FeignClient("order")
public interface OrderClient {

    @PostMapping("/order")
    void performOrder(@RequestBody Order order);
}

As a value in the @FeignClient annotation, we need to use the application name of the microservice that we want to communicate with, in our case Order. The method written here is the one that will call the previously exposed endpoint in the Order microservice. Let’s create a service that will use this feign client and execute an order:

package com.north.supplier.service;

import com.north.supplier.domain.Order;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

@Slf4j
@Service
@RequiredArgsConstructor
public class OrderService {

    private final OrderClient orderClient;

    public void placeOrder(Order order) {
        log.info("Requesting order ms to place an order");
        orderClient.performOrder(order);
    }
}

At the end we will expose one endpoint that we can use to test this scenario:

package com.north.supplier.ctrl;

import com.north.supplier.domain.Order;
import com.north.supplier.service.OrderService;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequiredArgsConstructor
@Slf4j
public class OrderController {

    private final OrderService orderService;

    @RequestMapping(value = "/order")
    public ResponseEntity<Void> placeOrder(@RequestBody Order order) {
        orderService.placeOrder(order);
        return ResponseEntity.ok().body(null);
    }
}

Now that we are done, let’s start the application. First, if we check the Eureka homepage, we should be able to see that this instance is also registered. You can also see this in the console where the Supplier is being started:

2020-09-20 20:02:43.907  INFO 7956 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_SUPPLIER/host.docker.internal:supplier: registering service...
2020-09-20 20:02:43.911  INFO 7956 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_SUPPLIER/host.docker.internal:supplier - registration status: 204

To test this complete scenario make sure that all three applications are started and that the Order and Supplier are registered to Eureka. By using postman I will send a post request to the endpoint on the Supplier microservice and I should be able to see the order being placed in the Order microservice:

Just make sure that you have added, in your Headers tab, a header with key Content-Type and value application/json. What should happen if we execute this request? In the Supplier microservice console we should see the following log:

2020-09-20 20:20:36.674  INFO 19420 --- [nio-8080-exec-4] com.north.supplier.service.OrderService  : Requesting order ms to place an order

in the Order microservice console we should see:

2020-09-20 20:20:36.678  INFO 17800 --- [io-8082-exec-10] com.north.order.ctrl.OrderController     : Placing an order for product: bread with quantity: 300

At this point, we managed to create three microservices, two for user purposes and one for service discovery. We used the Feign client for communication between the microservices. At some point, if we decide to grow this application, there are too many orders to execute and we add some complex logic to our Order microservice, we will reach a point where the Order microservice won’t be able to execute all the orders. Let’s see what will happen if we scale our Order microservice.

First, from your IDE stop the Order microservice. Just be sure that Eureka and the Supplier are still running. Now go to the project directory of the Order project (something like …\Documents\blog\order) and open three command prompt windows in that location. In the first one type the following command:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8084"

in the second:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8085"

in the third:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8086"

It should be something like this:

Three instances of the application should now be up and running on the ports that we previously specified. If you open the Eureka homepage again, you should be able to see all three instances registered. Now go back to Postman and execute the same POST call to the Supplier as we did previously, and do this many times, as fast as possible. If you take a look at the command prompt windows that we opened, you should be able to see that every time a different instance of the Order microservice is called. This is provided by Ribbon, which comes out of the box on the client side (in this case the Supplier microservice), without adding any additional code. As we mentioned before, we are not dependent on the port; we are using the application name in order for the Supplier to send a request to Order.

To summarize, our Supplier microservice became aware of all the instances and now it sends each request to a different instance of Order, so the load is balanced.

Hope that you enjoyed this article and had fun with it. The code is available on our N47 Bitbucket:

Automate Processes with Camunda

Reading Time: 5 minutes

Overview

Camunda BPM is a light-weight, open-source platform for Business Process Management. It ships with tools for creating workflow and decision models, operating deployed models in production, and allowing users to execute workflow tasks assigned to them. It is developed in Java and released as open-source software under the terms of Apache License.

Modeling your first process

In order to show how Camunda works and what it looks like, I will use this simple process. Let us imagine that you want to introduce a review process on your Twitter account and have every tweet go through this review process.

One way to manage this is to make a web application from scratch for this scenario. But we can also model this process with Camunda Modeler and utilize Camunda for this workflow.

The following image shows one way to model this process as a standard BPMN model using the Camunda Modeler:

Business Process Model and Notation (BPMN) for the above process

In this diagram, the process is started when someone writes a new tweet. After that, we have a human task where someone has to review this tweet and decide its approval status. Then we have two possible options: if the tweet is approved, a service task is called that will publish it on Twitter. If rejected, we again call a service task, however this time we notify the user that their tweet was rejected.

I will go through all of these steps in more detail.

Starting the process

Camunda processes can be started programmatically using some of their supported SDKs like Java or by using the Camunda Tasklist GUI that comes out of the box. In this case, I will use the Camunda tasklist to start a new tweet.
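For completeness, here is a minimal sketch of how the same process could be started from Java through the RuntimeService. The process definition key "tweetApproval" is an assumption made up for this sketch; the "content" variable is the one read later by the delegates:

import java.util.HashMap;
import java.util.Map;

import org.camunda.bpm.engine.RuntimeService;
import org.springframework.stereotype.Service;

@Service
public class TweetProcessStarter {

    private final RuntimeService runtimeService;

    public TweetProcessStarter(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    public void startReview(String content) {
        // "tweetApproval" is an assumed process definition key for this sketch
        Map<String, Object> variables = new HashMap<>();
        variables.put("content", content);
        runtimeService.startProcessInstanceByKey("tweetApproval", variables);
    }
}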

Working on the human task

Human tasks are tasks that must be manually completed by some users. And this can be something as simple as completing a form or it can be something like actually shipping an item somewhere. They are visible in the Camunda Tasklist GUI and users can assign a certain task to themselves and complete them.
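Human tasks can also be claimed and completed through the Java API instead of the Tasklist. A minimal sketch, assuming a task definition key of "reviewTweet" and an "approved" process variable (both hypothetical names, not taken from the model above):

import java.util.HashMap;
import java.util.Map;

import org.camunda.bpm.engine.TaskService;
import org.camunda.bpm.engine.task.Task;

public class TweetReviewer {

    private final TaskService taskService;

    public TweetReviewer(TaskService taskService) {
        this.taskService = taskService;
    }

    public void review(String reviewer, boolean approved) {
        // "reviewTweet" is an assumed task definition key for this sketch
        Task task = taskService.createTaskQuery()
                .taskDefinitionKey("reviewTweet")
                .singleResult();
        taskService.claim(task.getId(), reviewer);
        // "approved" is an assumed process variable evaluated by the conditional flow
        Map<String, Object> variables = new HashMap<>();
        variables.put("approved", approved);
        taskService.complete(task.getId(), variables);
    }
}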

In our Camunda BPMN model, the next step in the process is a human task. In our process, we want to review the tweet in this task. The following image shows how human tasks look by default in the Camunda Tasklist:

Automating service tasks

A service task is used to invoke some service; this can be some Java code or some asynchronous external worker.

After the tweet is reviewed, we have a ‘conditional flow’ in Camunda which, depending on whether the tweet was approved or not, decides how the flow should continue. In both cases, our flow continues with a service task.

In our case, we have two separate service tasks. One is called when a tweet is rejected and will send a notification, while the other one is used when the tweet is approved and will publish it on Twitter.

First, let us take a look at the service task for sending the rejection notification:

@Slf4j
@Service("emailAdapter")
public class RejectionNotificationDelegate implements JavaDelegate {

  public void execute(DelegateExecution execution) throws Exception {
    String content = (String) execution.getVariable("content");
    String comments = (String) execution.getVariable("comments");

    log.info("Hi!\n\n"
           + "Unfortunately your tweet has been rejected.\n\n"
           + "Original content: {}\n\n"
           + "Comment: {}\n\n"
           + "Sorry, please try with better content the next time :-)", content, comments);
  }
}

In this code, we obtain process variables like the tweet content and the rejection comments, and we log them in the console. This logic, of course, can be extended to send actual emails; the important thing here is that in order to implement a Camunda service task we only need to implement the JavaDelegate interface and override the execute method.

In the next code snippet, we have the snippet for publishing the tweet:

public class TweetContentDelegate implements JavaDelegate {

  public void execute(DelegateExecution execution) throws Exception {
    String content = (String) execution.getVariable("content");

    // Twitter4J client configured with placeholder credentials
    AccessToken accessToken = new AccessToken("token", "secret");
    Twitter twitter = new TwitterFactory().getInstance();
    twitter.setOAuthConsumer("consumerKey", "consumerSecret");
    twitter.setOAuthAccessToken(accessToken);

    twitter.updateStatus(content);
  }
}

As in the previous code, we also have to implement JavaDelegate and override execute method.

More Camunda examples can be found on their official GitHub repository: https://github.com/camunda/camunda-bpm-examples

Conclusion

In the above diagram, we have only seen one example of a process, but Camunda offers a lot more features for modeling business processes and a lot of out-of-the-box implementations that save a lot of time. Also, almost everything is customizable.

If your company has a lot of processes that can be modeled as a BPMN process or processes that require human intervention then Camunda can be the right tool for the job.

In my opinion, it’s definitely worth having a basic understanding of how Camunda works in order to be able to spot a use-case for this tool.

Hibernate techniques for mapping sets, lists and enumerations

Reading Time: 4 minutes

As we all know, Hibernate is an Object Relational Mapping (ORM) framework for the Java programming language. This blog post will teach you how to use advanced hibernate techniques for mapping sets, lists and enums in simple and easy steps.

Mapping sets

A Set is a collection of objects in which duplicate values are not allowed and the order of the objects is not important. Hibernate uses the following annotations for mapping sets:

  • @ElementCollection – Declares an element collection mapping. The data for the collection is stored in a separate table.
  • @CollectionTable – Specifies the name of a table that will hold the collection. Also provides the join column to refer to the primary table.
  • @Column – The name of the column to map in the collection table.

@ElementCollection is used to define the following relationships: One-to-many relationship to an @Embeddable object and One-to-many relationship to a Basic object, such as Java primitives (wrappers): int, Integer, Double, Date, String, etc…

Now you’re probably asking yourself: Hmmm… How does this compare to @OneToMany?

@ElementCollection is similar to @OneToMany except that the target object is not an @Entity. These annotations give you an easy way to define a collection of simple/basic objects. But you can’t query, persist or merge target objects independently of their parent object. ElementCollection does not support a cascade option, so target objects are ALWAYS persisted, merged and removed together with their parent object.
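The original post illustrated this mapping with an image; as a substitute, here is a minimal sketch of a set mapping, where the entity, table and column names are assumptions made up for the example:

import java.util.HashSet;
import java.util.Set;

import javax.persistence.CollectionTable;
import javax.persistence.Column;
import javax.persistence.ElementCollection;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;

// Hypothetical Employee entity with a set of phone numbers stored in a separate collection table.
@Entity
public class Employee {

    @Id
    @GeneratedValue
    private Long id;

    @ElementCollection
    @CollectionTable(name = "employee_phone", joinColumns = @JoinColumn(name = "employee_id"))
    @Column(name = "phone_number")
    private Set<String> phoneNumbers = new HashSet<>();
}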

Mapping lists

Lists are used when we need to keep track of the order position, and duplicate elements are allowed. An additional annotation that we are going to use here is @OrderColumn, which specifies the name of the column used to track the element order/position (the name defaults to <property>_ORDER):
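Again as a stand-in for the original image, a hypothetical list field on the same Employee entity sketched above could look like this (the table and column names are assumptions):

// Additional field on the hypothetical Employee entity above
// (extra imports: java.util.ArrayList, java.util.List, javax.persistence.OrderColumn).
@ElementCollection
@CollectionTable(name = "employee_task", joinColumns = @JoinColumn(name = "employee_id"))
@Column(name = "task")
@OrderColumn(name = "task_order") // dedicated column that keeps the position of every element
private List<String> tasks = new ArrayList<>();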

Mapping maps

When you want to access data via a key rather than an integer index, you should probably use maps. An additional annotation used for maps is @MapKeyColumn, which helps us define the name of the key column for the map (the name defaults to <property>_KEY):
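A hypothetical map field, following the same pattern (again, all names are assumptions for the example):

// Another field on the hypothetical Employee entity; the map key gets its own column
// (extra imports: java.util.HashMap, java.util.Map, javax.persistence.MapKeyColumn).
@ElementCollection
@CollectionTable(name = "employee_phone_by_type", joinColumns = @JoinColumn(name = "employee_id"))
@MapKeyColumn(name = "phone_type") // e.g. "home", "work"
@Column(name = "phone_number")
private Map<String, String> phoneNumbersByType = new HashMap<>();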

Mapping sorted sets

As we mentioned before, the set is an unsorted collection with no duplicates. But what if we don’t need duplicates and the order of retrieval is also important? In that case, we can use @OrderBy and specify the ordering of the elements when a collection is retrieved.

Syntax: @OrderBy(“[field name or property name] [ASC|DESC]”)
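
A minimal sketch, with a hypothetical Event value type attached to the Person entity from above:

@Embeddable
public class Event {
    private String title;
    private LocalDate date;
}

// in the owning entity: events are retrieved ordered by their date, newest first
@ElementCollection
@OrderBy("date DESC")
private Set<Event> events = new LinkedHashSet<>();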

Mapping sorted maps

@OrderBy can also be used with maps. In that case, the default ordering is by the key column, ascending, as in the sketch below.
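
A minimal sketch, reusing the hypothetical addresses map from above; with a bare @OrderBy, entries are loaded ordered by the key column, ascending:

@ElementCollection
@CollectionTable(name = "person_address", joinColumns = @JoinColumn(name = "person_id"))
@MapKeyColumn(name = "address_type")
@Column(name = "address")
@OrderBy // no value given: order by the key column (address_type), ascending
private Map<String, String> addresses = new LinkedHashMap<>();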

Mapping Enums

By default, Hibernate maps an enum to a number. This mapping is very efficient, but there is a high risk that adding or removing a value from your enum will change the ordinals of the remaining values. Because of that, you should map the enum value to a String with the @Enumerated annotation. This annotation is used to reference an enum type and save the field in the database as a String, as in the sketch below.
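
A minimal sketch with a hypothetical enum (not from this article):

public enum AccountStatus { ACTIVE, SUSPENDED, CLOSED }

// persisted as the literal "ACTIVE", "SUSPENDED" or "CLOSED" instead of 0, 1, 2
@Enumerated(EnumType.STRING)
private AccountStatus status;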

Conclusion

In this article, we have taken a look at some simple techniques for mapping sets, lists and enumerations when using Hibernate. I hope you enjoyed reading it and found it helpful.

JHipster with Google App Engine and Cloud MySQL

Reading Time: 5 minutes

How does it sound to set up a complete Spring application, with front-end and database? With all the models, repositories and controllers? Even with unit and integration tests, with mocked data? All within a few hours? Your solution is JHipster!

JHipster

JHipster or “Java Hipster” is a handy application generator, a development platform, to develop and deploy web applications. JHipster has become popular in a short time, and it has been featured in many conferences all around the globe – Montreal, Omaha, Taipei, Richmond, Frankfurt, Paris, London. It supports:

  • Spring Boot (Back-end)
  • Angular/React/Vue (Front-end)
  • Spring microservices

JHipster is used for generating complete applications: it will create for you a high-quality Spring Boot and Angular/React/Vue application with most things pre-configured, using Java as the back-end technology with an extensive set of Spring technologies (Spring Security, Spring Boot, Spring MVC providing a framework for web-sockets, REST and MVC, Spring Data, etc.), an Angular/React/Vue front-end, and a suite of pre-configured development tools like Yeoman, Maven, Gradle, Grunt, Gulp.js and Bower.

JHipster gives you a head start in creating a Spring Boot application with a set of pre-defined screens for user management, monitoring and logging. The generated Spring Boot application is specifically tailored to make working with Angular/React/Vue a smoother experience. On top of all that, JHipster also gives you the tools to update, manage and package the resulting application.

By now you may think it sounds too good to be true… but that is not everything that JHipster offers. If you are a web developer, you probably have a lot of questions by now. 🙂
One important question we will answer in this blog post: is it supported by today’s cloud solutions, and is it compatible with all of them? The answer is yes, it is compatible with the popular cloud solutions from Google, Amazon, Microsoft and Heroku. Let’s see what it takes to make a complete integration with Google’s cloud platform, the App Engine.


Google App Engine

Google App Engine is a cloud solution provided by Google, a platform for developing and hosting web applications in data centres managed by Google; Platform as a Service (PaaS). Applications are sandboxed and run across multiple servers. The App Engine supports Java or Python, uses the Google query language and stores data in Google BigTable.

It is free up to a certain amount of resource usage. After exceeding the limited usage rates for storage, CPU resources, requests, number of API calls or concurrent requests, the user can pay for more of these resources.

It is fully compatible with JHipster-generated projects. To host your application, you just need to follow the official how-to guide from the Google App Engine documentation, as for a normal Spring Boot application. To make things easier, Google offers a database which works closely with the Google App Engine: Cloud SQL.

Cloud SQL

Cloud SQL is a fully-managed database service offered by Google for their cloud solutions, which makes it easy to configure, manage, maintain and operate your relational databases on the Google Cloud Platform.

It offers three database options to integrate with:

  • MySQL
  • PostgreSQL
  • SQL Server

Let’s get into details of integrating with Cloud SQL for MySQL:

  1. The first step is to create a Cloud SQL instance on the Google Cloud Platform, which requires a few things to be set, like the instance ID and password, and gives you the option to choose the MySQL database version.
  2. The following step is to create the database in the newly created instance. It is possible to have more than one database in one instance.
  3. Now, in order for our application to be able to communicate with Cloud SQL without any permission blockers, we need to register the application in Cloud SQL and manually configure the service account roles.
  4. The final step is connecting your application to the created Cloud SQL instance. It is done through JDBC. All the required properties can be found in the overview of the Cloud SQL instance: the instance connection name, credentials, etc. A sketch of this configuration is shown below.
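
As an illustration only (the instance connection name and credentials below are hypothetical, and the Cloud SQL MySQL socket factory dependency is assumed to be on the classpath), the datasource configuration in application.properties could look roughly like this:

spring.datasource.url=jdbc:mysql://google/jhipster_db?cloudSqlInstance=my-project:europe-west1:my-instance&socketFactory=com.google.cloud.sql.mysql.SocketFactory
spring.datasource.username=root
spring.datasource.password=secret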

So the conclusion: don’t be afraid to invest some time in new technologies, be curious, you never know where they may lead you. Thank you for reading. 🙂

JHipster, is it worth it?

Reading Time: 7 minutes

JHipster is an open-source platform to generate, develop and deploy Spring Boot + Angular / React / Vue web applications. And with over 15,000 stars on GitHub, it is the most popular code generation framework for Spring Boot. But is it worth the hype, or is the generated code too difficult to maintain and not production-ready?

How does it work?

The first thing to note is that JHipster is not a separate framework by itself. It uses Yeoman and .jdl files in order to generate Spring Boot code for the backend and Angular, React or Vue for the frontend. After the initial generation of the project, you have the option to use the generated code without ever running JHipster commands again, or to use JHipster to incrementally grow the project and develop new features.

What exactly is JDL?

JDL is a JHipster-specific domain language where you can describe all your applications, deployments, entities and their relationships in a single file (or more than one) with a user-friendly syntax.

You can use the online JDL-Studio or one of the JHipster IDE plugins/extensions, which support working with JDL files.

Example of simple JDL file for Blog application:

entity Blog {
  name String required minlength(3)
  handle String required minlength(2)
}

entity Post {
  title String required
  content TextBlob required
  date Instant required
}

entity Tag {
  name String required minlength(2)
}

relationship ManyToOne {
  Blog{user(login)} to User
  Post{blog(name)} to Blog
}

relationship ManyToMany {
  Post{tag(name)} to Tag{entry}
}

paginate Post, Tag with infinite-scroll

Which technologies are used?

On the backend we have the following technologies:

  • Spring Boot as the primary backend framework
  • Maven or Gradle for configuration
  • Spring Security as a Security framework
  • Spring MVC REST + Jackson for REST communication
  • Spring Data JPA + Bean Validation for Object Relational Mapping
  • Liquibase for Database updates
  • MySQL, PostgreSQL, Oracle, MsSQL or MariaDB as SQL databases
  • MongoDB, Couchbase or Cassandra as NoSQL databases
  • Thymeleaf as a templating engine
  • Optional Elasticsearch support if you want to have search capabilities on top of your database
  • Optional Spring WebSockets for Web Socket communication
  • Optional Kafka support as a publish-subscribe messaging system

On the frontend side these technologies are used:

  • Angular or React or Vue as a primary frontend framework
  • Responsive Web Design with Twitter Bootstrap
  • HTML5 Boilerplate compatible with modern browsers
  • Full internationalization support
  • Installation of new JavaScript libraries with NPM
  • Build, optimization and live reload with Webpack
  • Testing with Jest and Protractor
  • Optional Sass support for CSS design

How to get started?

  1. Pre-requirements: Java, Git and Node.js.
  2. Install JHipster npm install -g generator-jhipster
  3. Create a new directory and go into it mkdir myApp && cd myApp
  4. Run JHipster and follow instructions on the screen jhipster
  5. Model your entities with JDL Studio and download the resulting jhipster-jdl.jh file
  6. Generate your entities with jhipster import-jdl jhipster-jdl.jh
  7. Run ./mvnw to start generated backend
  8. Run npm start to start generated frontend with live reload support

What does the generated code and application look like?

In case you only want to see a sample generated application without starting the whole framework you can check this official Github repo for the latest up-to-date sample code: https://github.com/jhipster/jhipster-sample-app.

Following are some screens from my up-and-running JHipster application:

Welcome screen – the initial screen when you open your JHipster app

Create a user screen – with this form you can create a new user in the app

View all users screen – in this screen you have the option to manage all your existing users

Monitoring screen – monitoring of JVM metrics, as well as HTTP request statistics

What are the pros and cons?

The important thing to remember is that JHipster is not a “magic bullet” that will solve all your problems, and it is not an optimal solution for all new projects. As a good software engineer, you will have to weigh the pros and cons of this platform and decide when it makes sense to use it and when it’s better to go with a different approach. Having used JHipster for production projects, these are some of the pros and cons that I’ve experienced:

Pros

  • Easy bootstrap of a new project with a lot of technologies preconfigured
  • JHipster almost always follows best practices and latest trends in backend and frontend development
  • Login, register, management of users and monitoring comes out-of-the-box
  • Wizard for generating your project, only the technologies that you select are included in the project
  • After defining your own JDL file, all of the model, repository, service and controller classes for your entities are generated, together with integration tests. This saves a lot of time at the beginning of the project when you want to get to feature development as soon as possible

Cons

  • If you are not familiar with the technologies used in the generated project, it can be overwhelming and it’s easy to get lost in this mix of lots of different technologies
  • Using JHipster after the initial project generation is not a smooth experience. Classes and Liquibase scripts get overwritten and you have to be very careful with changing the initial JDL model. Alternatively, you can decide to continue without using JHipster after the initial generation of the project
  • REST responses that are returned from endpoints will not always correspond to business requirements; very often you will have to manually modify the initial JHipster REST responses
  • Not all of the available options are at the same level; some technologies that JHipster is using and configuring are more polished than others. This is especially true if you decide to use community modules

What kind of projects are a good fit?

Having said all of this, it’s important to understand that there are projects which can benefit a lot from JHipster and projects that are better without using this platform.

In my experience, a good candidate is a greenfield project where it’s expected to deliver a lot of features fast. JHipster will help a lot to be productive from day one and to cut down on the boilerplate code that you need to write, so you will be able to begin with feature development really fast. This works well for new projects with tight deadlines, proof of concepts, internal projects, hackathons and startups.

On the other hand, a not-so-ideal situation is if you have an already started and up-and-running project; there is not much JHipster can do in this case. Another case would be if the application has a lot of specific business logic and it’s not a simple CRUD application, for example an AI project, a chatbot or a legacy ecosystem where these new technologies are not suitable or supported.

JHipster, is it worth it?

There is only one sure way to decide if JHipster is worth it for your next project or not and that is to try it out yourself and play around with the different features and configuration that JHipster offers.

At best, you will find a new framework for your next project and save a lot of effort next time you have to start a project. At worst, you will get to know the latest trends in both backend and frontend and learn some of the best practices from a very large community.

Testing asynchronous code in a concise and easy to read manner

Reading Time: 7 minutes

We live in a fast-paced world where the standard project delivery strategy is agile, or at least a direction which people tend to follow. If you have been part of an agile software delivery practice, then somewhere in your coding career you have met some form of tests, whether unit, integration (system) or some form of E2E tests.

You might be familiar with the testing pyramid and with the benefits and scopes of the different types of tests presented in the pyramid.

Let’s take a quick look at the pyramid:

Unit

As shown in the image above, the tests that we write are grouped into layers from which the pyramid is built. The foundation layer is the biggest, and it shows us the quantity: we need more of them in our application. They are also called unit tests because of the scope which they are testing: a small unit, e.g. an if clause.

Integration/System

The tests belonging to the middle layer are called Integration tests and their purpose is to test integration between one or more elements inside an application and in quantitative representation we need fewer tests of this type than Unit tests.

UI/E2E

The last layer is the smallest one, meaning that the quantity of those tests should be the smallest. Those types of tests are also called UI or E2E tests. Here a test has the biggest scope, meaning that it checks more interconnected parts of your application, e.g. a whole registration scenario from the UI perspective.

As we go from the bottom to the top, maintenance costs increase and execution speed decreases. Confidence is also a crucial part: if a test higher in the pyramid passes, we are more confident that our application, or at least some part of it, works.

Our focus is on the middle layer, the so-called integration tests. As we mentioned above, those are the tests that check the interconnection between one or more modules inside an application, e.g. tests which check that a user can be registered by pinging an endpoint. The scope of such a test is to prepare data, send a request to the corresponding endpoint and also check whether the user has been successfully created in the underlying datastore. It tests the integration between the controller and repository layers, hence the name “an integration test”.
In my opinion, tests are a must-have for every application.

Therefore we are writing integration tests for asynchronous code.

With multi-threaded data processing systems and increased popularity of reactive programming in Java, we are puzzled with writing proficient tests for asynchronous code.
Writing high-value tests is hard, but writing high-value tests for asynchronous code is harder.

Problem

Let’s take a look at this example, where we have a small system that exposes several endpoints for updating a person. We have created several tests, each updating a person with a different name. When a test is running, it tries to update a person by sending a request via an endpoint. The system receives the request and returns an OK status. In the meantime, it spawns a different thread for the actual person update. On the side of the tests, we don’t know how much time it is going to take for the update to happen, so the naive approach is to wait for a specific time, after which we verify whether the actual update has happened.

We have several tests, each pinging a different endpoint. The endpoints differ in the wait time needed to process each request:
updatePersonWith1SecondDelay
updatePersonWith2SecondDelay
updatePersonWith3SecondDelay
updatePersonWithDelayFrom1To5Seconds

In order for our tests to pass, I used the naive approach by adding a function waitForCompletion(), which is nothing else than a sleep of the test thread, i.e. Thread.sleep() in Java.

Example

The first execution of tests with a timeout of 1 second. The total execution is 4 seconds but not all tests have passed.

The second execution of tests with a timeout of 3 seconds. The total execution is 12 seconds but not all tests have passed.

Third execution of tests with a timeout of 5 seconds. The total execution is 20 seconds where all tests have passed.

But in order for all the tests to pass, we would need a maximum 5-second sleep, which is executed after each test. This way we are guaranteeing that every test will pass. However, we add an unnecessary wait of 4 seconds for the first test and, respectively, extra wait time for the other tests. This results in increased execution time, hence an optimum wait time is not guaranteed.

Solution

As stated in the official documentation, Awaitility is a small Java library for synchronizing asynchronous operations, which helps express expectations in a concise and easy to read manner. It is a smart option for checking the outcome of some async operation.
It’s fairly easy to incorporate this library into your codebase.

You just need to add the library into pom.xml:

		<dependency>
			<groupId>org.awaitility</groupId>
			<artifactId>awaitility</artifactId>
			<version>3.0.0</version>
			<scope>test</scope>
		</dependency>

And add the import in your test:
import static org.awaitility.Awaitility.await;

Let’s take a look at an example before using this library:

@Test
    public void testDelay1Second() throws Exception {
        Person person = new Person();
        person.setName("Yara");
        person.setAddress("New York");
        person.setAge("23");
        personRepository.save(person);

        ObjectMapper mapper = new ObjectMapper();

        person.setName("Daenerys");

        this.mockMvc.perform(put("/api/endpoint1/" + person.getId())
                .contentType(APPLICATION_JSON)
                .content(mapper.writeValueAsBytes(person)))
                .andExpect(status().isOk())
                .andExpect(content().string(containsString("Request received")));

        waitForCompletion();
        assertThat(personRepository.findById(person.getId()).get().getName())
                .isEqualTo("Daenerys");
    }

An example with Awaitility:

@Test
    public void testDelay1Second() throws Exception {
        Person person = new Person();
        person.setName("Yara");
        person.setAddress("New York");
        person.setAge("23");
        personRepository.save(person);

        ObjectMapper mapper = new ObjectMapper();

        person.setName("Daenerys");

        this.mockMvc.perform(put("/api/endpoint1/" + person.getId())
                .contentType(APPLICATION_JSON)
                .content(mapper.writeValueAsBytes(person)))
                .andExpect(status().isOk())
                .andExpect(content().string(containsString("Request received")));

        await().atMost(Duration.FIVE_SECONDS).untilAsserted(() -> assertThat(personRepository.findById(person.getId()).get().getName())
                .isEqualTo("Daenerys"));
    }

Example of the executed test suite with the library:

As we can see, the execution time is greatly reduced: from 20 seconds for all tests to pass down to just under 10 seconds.
As you can spot, the function waitForCompletion() is removed and a new wait from the library is introduced as await().atMost(Duration.FIVE_SECONDS).untilAsserted().

You can also configure the library using static methods from the Awaitility class:
Awaitility.setDefaultPollInterval(10, TimeUnit.MILLISECONDS);
Awaitility.setDefaultPollDelay(Duration.ZERO);
Awaitility.setDefaultTimeout(Duration.ONE_MINUTE);

Conclusion

In this article, we have taken a look at how to improve tests when dealing with asynchronous code using an interesting library. I hope this post benefits you and adds to your knowledge. You can find a working example with all of the tests, with and without the Awaitility library, on this repository.
Also, you can find more about the library here.

Project Lombok explained

Reading Time: 4 minutes

In this article, I want to present a very powerful tool called Project Lombok. It acts as an annotation processor that allows us to modify the classes at compile time. Project Lombok enables us to reduce the amount of boilerplate code that needs to be written. The main idea is to give the users an option to put annotation processors into the build classpath.

Add Project Lombok to the project

  • using gradle
 compileOnly "org.projectlombok:lombok:1.16.16"
  • using maven
<dependencies>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.16.16</version>
        <scope>provided</scope>
    </dependency>
</dependencies>

Project Lombok

Project Lombok provides the following annotations:

  • @Getter and @Setter: create getters and setters for your fields
  • @EqualsAndHashCode: implements equals() and hashCode()
  • @ToString: implements toString()
  • @Data: uses the four previous features
  • @Cleanup: closes your stream
  • @Synchronized: synchronize on objects
  • @SneakyThrows: throws exceptions
    and many more. Check the full list of available annotations: https://projectlombok.org/features/all

Common object methods

In this example, we have a class that represents User and holds five attributes, for which we want to have an additional constructor for all attributes, toString representation, getters, and setters and overridden equals and hashCode in terms of the email attribute:

public class User {

    private String email;
    private String firstName;
    private String lastName;
    private String password;
    private int age;

    // empty constructor
    // constructor for all attributes
    // getters and setters
    // toString
    // equals() and hashCode()
}

With some help from Lombok, the class now looks like this:

import lombok.AllArgsConstructor;
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;
import lombok.ToString;

@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
@ToString
@EqualsAndHashCode(of = {"email"})
public class User {

    private String email;
    private String firstName;
    private String lastName;
    private String password;
    private int age;
}

As you can see, the annotations are replacing the boilerplate code that needs to be written for all the fields, constructor, toString, etc. The annotations do the following:

  • using @Getter and @Setter Lombok is instructed to generate getters and setters for all attributes
  • using @NoArgsConstructor and @AllArgsConstructor, Lombok creates the default empty constructor and an additional one for all the attributes
  • using @ToString generates toString() method
  • using @EqualsAndHashCode we get the pair of equals() and hashCode() methods defined for the email field (Note that more than one field can be specified here)

Customize Lombok Annotations

We can customize the existing example with the following:

  • in case we want to restrict the visibility of the default constructor, we can use AccessLevel.PACKAGE
  • in case we want to be sure that the fields won’t get null values assigned to them, we can use the @NonNull annotation
  • in case we want to exclude some property from the generated toString code, we can use the exclude argument of the @ToString annotation
  • we can change the access level of the setters from public to protected with AccessLevel.PROTECTED for the @Setter annotation
  • in case we want to do some kind of checks when a field gets modified, we can implement the setter method ourselves; Lombok will not generate the method because it already exists

Now the example looks like the following:

import lombok.AccessLevel;
import lombok.AllArgsConstructor;
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.NonNull;
import lombok.Setter;
import lombok.ToString;

@Getter
@Setter(AccessLevel.PROTECTED)
@NoArgsConstructor(access = AccessLevel.PACKAGE)
@AllArgsConstructor
@ToString(exclude = {"age"})
@EqualsAndHashCode(of = {"email"})
public class User {

    private @NonNull String email;
    private @NonNull String firstName;
    private @NonNull String lastName;
    private @NonNull String password;
    private @NonNull int age;

    protected void setEmail(String email) {
        // Check for null and valid email code
        this.email = email;
    }
}

Builder Annotation

Lombok offers another powerful annotation called @Builder. The Builder annotation can be placed on a class, a constructor or a method.
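
As a minimal sketch (using only the fields from the usage example below, not the full User class from this article), placing @Builder on the class is enough to get the fluent builder:

import java.time.Instant;
import lombok.Builder;

@Builder
public class User {
    private String email;
    private byte[] password;
    private String firstName;
    private Instant registrationTs;
}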

In our example, the User can be created using the following:

User user = User
        .builder()
            .email("dimitar.gavrilov@north-47.com")
            .password("secret".getBytes(StandardCharsets.UTF_8))
            .firstName("Dimitar")
            .registrationTs(Instant.now())
        .build();

Delegation

Looking at our example the code can be further improved. If we want to follow the rule of composition over inheritance, we can create a new class called ContactInformation. The object can be modelled via an interface:

public interface HasContactInformation {
    String getEmail();
    String getFirstName();
    String getLastName();
}

The class can be defined as the following:

@Data
public class ContactInformation implements HasContactInformation {

    private String email;
    private String firstName;
    private String lastName;
}

In the end, our User example will look like the following:

import lombok.AccessLevel;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.NonNull;
import lombok.Setter;
import lombok.ToString;
import lombok.experimental.Delegate;

@Getter
@Setter(AccessLevel.PROTECTED)
@NoArgsConstructor(access = AccessLevel.PACKAGE)
@AllArgsConstructor
@ToString(exclude = {"password"})
@EqualsAndHashCode(of = {"contactInformation"})
public class User implements HasContactInformation {

    @Getter(AccessLevel.NONE)
    @Delegate(types = {HasContactInformation.class})
    private final ContactInformation contactInformation = new ContactInformation();

    private @NonNull byte[] password;

    private @NonNull Instant registrationTs;

    private boolean payingCustomer = false;
}

Conclusion

This article covers some basic features and there is a lot more that can be investigated. I hope you have found a motivation to give Lombok a chance in your project if you find it applicable.

Unit testing with Mockito

Reading Time: 4 minutes

A unit is the smallest testable part of an application. Mockito is a well-known mocking framework that allows us to create and configure mock objects. With Mockito we can mock both interfaces and classes of the class under test. Mockito also helps us reduce the amount of boilerplate code by using Mockito annotations.

Adding Mockito to the project

  • using gradle
testCompile "org.mockito:mockito−core:2.7.7"
  • using maven
<dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-core</artifactId>
      <version>2.7.7</version>
      <scope>test</scope>
</dependency>

Mockito annotations

  • @Mock – used for mock creation.
  • @Spy – creates a spy object.
  • @InjectMocks – instantiates the tested object and injects all the annotated field dependencies into it
  • @Captor – used to capture argument values for further assertions

Mockito example @Mock

Let’s say we have the following classes and we want to write a test for the CalculationService:

public class CalculationService {

   private AddService addService;
  
   public int calculate(int x, int y) {
       return addService.add(x, y);
   }
}

public class AddService {

   public int add(int x, int y) {
       return x+y;
   }
}

The usage of the @Mock and @InjectMock annotations is shown in the following sample code:

@InjectMocks
private CalculationService calculationService;

@Mock
private AddService addService;

@Before
public void setUp() {
   // initializes objects annotated with @Mock, @Spy, @Captor, or @InjectMocks
   MockitoAnnotations.initMocks(this);
}

@Test
public void testCalculationService() {
    // mock the result from method add in addService
    doReturn(20).when(addService).add(10, 10);

    // verify that the calculate method from calculationService will return the same value
    assertEquals(20, calculationService.calculate(10, 10));
}

@Spy

A Mockito spy is used to spy on a real object. The main difference between a spy and a mock is that with a spy the tested instance behaves like a normal instance. The following example will explain it:

@Test
public void testSpyInstance() {
    List<String> spyList = spy(new ArrayList<>());
    spyList.add("firstElement");
    spyList.add("secondElement");
    verify(spyList).add("firstElement");
    verify(spyList).add("secondElement");

    assertEquals(2, spyList.size());
}

Note that method add is called and the size of the spy list is 2.

@Captor

Mockito framework gives us plenty of useful annotations. One of the most recent that I’ve had a chance to use is @Captor. ArgumentCaptor is used to capture the inner data in a method that is either void or returns a different type of object.
Let’s say we have the following method snippet:

public class AnyClass {
    public void doSearch(SearchData searchData) {
        CustomData data = new CustomData("custom data");
        searchData.doSomething(data);
    }
}

We want to capture the argument data so we can verify its inner data. So, to check that, we can use ArgumentCaptor from Mockito:

// Create a mock of the SearchData
SearchData data = mock(SearchData.class);

// Run the doSearch method with the mock
new AnyClass().doSearch(data);

// Capture the argument of the doSomething function
ArgumentCaptor<CustomData> captor = ArgumentCaptor.forClass(CustomData.class);
verify(data, times(1)).doSomething(captor.capture());

// Assert the argument
CustomData actualData = captor.getValue();
assertEquals("custom data", actualData.customData);

New features in Mockito 2.x

Since its inception, Mockito lacked the ability to mock final classes and methods. One of the major features of the 2.x version is support for stubbing final methods and final classes. This feature has to be explicitly activated by creating a file named org.mockito.plugins.MockMaker in the directory src/test/resources/mockito-extensions/, containing a single line:
mock-maker-inline

public final class MyFinalClass {

    public String hello() {
        return "my final class says hello";
    }
}

public class MyCallingClass {

    final MyFinalClass myFinalClass;

    public MyCallingClass(MyFinalClass myFinalClass) {
        this.myFinalClass = myFinalClass;
    }

    public String executeFinal() {
        return myFinalClass.hello();
    }
}

public class MyCallingClassTest {

    @Test
    public void testFinalClass() {
        // mocking a final class only works with the inline mock maker enabled
        MyFinalClass myFinalClass = mock(MyFinalClass.class);
        MyCallingClass myCallingClass = new MyCallingClass(myFinalClass);

        when(myFinalClass.hello()).thenReturn("testString");

        assertEquals("testString", myCallingClass.executeFinal());
    }
}

Given the above example, without the file org.mockito.plugins.MockMaker and its content, we get the following error:

When the file is in the resources and the content is valid, we are all good.

The plan for the future is to have a programmatic way of using this feature.

Conclusion

In this article, I gave a brief overview of some of the features in Mockito test framework. Like any other tool, it must be used in a proper way to be useful. Now go and bring your unit tests to the next level.

Testing Spring Boot application with examples

Reading Time: 7 minutes

Why bother writing tests is already a well-discussed topic in software engineering. I won’t go into much detail on this topic, but I will mention some of the main benefits.

In my opinion, testing your software is the only way to achieve confidence that your code will work on the production environment. Another huge benefit is that it allows you to refactor your code without fear that you will break some existing features.

Risk of bugs vs the number of tests

In the Java world, one of the most popular frameworks is Spring Boot, and part of the popularity and success of Spring Boot is exactly the topic of this blog – testing. Spring Boot and the Spring framework offer out-of-the-box support for testing, and new features are being added constantly. When the Spring framework appeared on the Java scene in 2005, one of the reasons for its success was exactly this: ease of writing and maintaining tests, as opposed to JavaEE, where writing integration tests requires additional libraries like Arquillian.

In the following, I will go over different types of tests in Spring Boot, when to use them and give a short example.

Testing pyramid

We can roughly group all automated tests into 3 groups:

  • Unit tests
  • Service (integration) tests
  • UI (end to end) tests

As we go from the bottom of the pyramid to the top, tests become slower to execute: if we measure execution times, unit tests will be in the order of a few milliseconds, service tests in hundreds of milliseconds, and UI tests will execute in seconds. If we measure the scope of tests, unit tests, as the name suggests, test small units of code; service tests test the whole service or a slice of that service involving multiple units; and UI tests have the largest scope, testing multiple different services. In the following sections, I will go over some examples of how we can unit test and service test a Spring Boot application. UI testing can be achieved using external tools like Selenium and Protractor, but they are not related to Spring Boot.

Unit testing

In my opinion, unit tests make the most sense when you have some kind of validators, algorithms or other code that has lots of different inputs and outputs and executing integration tests would take too much time. Let’s see how we can test validator with Spring Boot.

Validator class for emails

public class Validators {

    private static final String EMAIL_REGEX = "(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|\"(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21\\x23-\\x5b\\x5d-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])*\")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21-\\x5a\\x53-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])+)\\])";

    public static boolean isEmailValid(String email) {
        return email.matches(EMAIL_REGEX);
    }
}

Unit tests for email validator with Spring Boot

@RunWith(MockitoJUnitRunner.class)
public class ValidatorsTest {
    @Test
    public void testEmailValidator() {
        assertThat(isEmailValid("valid@north-47.com")).isTrue();
        assertThat(isEmailValid("invalidnorth-47.com")).isFalse();
        assertThat(isEmailValid("invalid@47")).isFalse();
    }
}

MockitoJUnitRunner is used for using Mockito in tests and detection of @Mock annotations. In this case, we are testing email validator as a separate unit from the rest of the application. MockitoJUnitRunner is not a Spring Boot annotation, so this way of writing unit tests can be done in other frameworks as well.

Integration testing of the whole application

If we have to choose only one type of test in Spring Boot, then using the integration test to test the whole application makes the most sense. We will not be able to cover all the scenarios, but we will significantly reduce the risk. In order to do integration testing, we need to start the application context. In Spring Boot 2, this is achieved with the following annotations: @RunWith(SpringRunner.class) and @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT). This will start the application on some random port and we can inject beans into our tests and do REST calls on application endpoints.

Following is example code for testing the book endpoints. For making REST API calls we are using Spring’s TestRestTemplate, which is more suitable for integration tests compared to RestTemplate.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class SpringBootTestingApplicationTests {

    @Autowired
    private TestRestTemplate restTemplate;

    @Autowired
    private BookRepository bookRepository;

    private Book defaultBook;

    @Before
    public void setup() {
        defaultBook = new Book(null, "Asimov", "Foundation", 350);
    }

    @Test
    public void testShouldReturnCreatedWhenValidBook() {
        ResponseEntity<Book> bookResponseEntity = this.restTemplate.postForEntity("/books", defaultBook, Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.CREATED);
        assertThat(bookResponseEntity.getBody().getId()).isNotNull();
        assertThat(bookRepository.findById(1L)).isPresent();
    }

    @Test
    public void testShouldFindBooksWhenExists() throws Exception {
        Book savedBook = bookRepository.save(defaultBook);

        ResponseEntity<Book> bookResponseEntity = this.restTemplate.getForEntity("/books/" + savedBook.getId(), Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(bookResponseEntity.getBody().getId()).isEqualTo(savedBook.getId());
    }

    @Test
    public void testShouldReturn404WhenBookMissing() throws Exception {
        Long nonExistingId = 999L;
        ResponseEntity<Book> bookResponseEntity = this.restTemplate.getForEntity("/books/" + nonExistingId, Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.NOT_FOUND);
    }
}

Integration testing of web layer (controllers)

Spring Boot offers the ability to test layers in isolation, starting only the beans that are required for the test. From Spring Boot v1.4 on there is a very convenient annotation, @WebMvcTest, which starts only the components required for a typical web layer test, like controllers, Jackson converters and similar, without starting the full application context, and so avoids the startup of components that are unnecessary for this test, like the database layer. When we are using this annotation, we make the REST calls with the MockMvc class.

Following is an example of testing the same endpoints as in the above example, but in this case we are only testing if the web layer is working as expected, mocking the database layer using the @MockBean annotation, which is also available starting from Spring Boot v1.4. Using these annotations, only the BookController is part of the application context, while the database layer is mocked.

@RunWith(SpringRunner.class)
@WebMvcTest(BookController.class)
public class BookControllerTest {
    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private BookRepository repository;

    @Autowired
    private ObjectMapper objectMapper;

    private static final Book DEFAULT_BOOK = new Book(null, "Asimov", "Foundation", 350);

    @Test
    public void testShouldReturnCreatedWhenValidBook() throws Exception {
        when(repository.save(Mockito.any())).thenReturn(DEFAULT_BOOK);

        this.mockMvc.perform(post("/books")
                .content(objectMapper.writeValueAsString(DEFAULT_BOOK))
                .contentType(MediaType.APPLICATION_JSON)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isCreated())
                .andExpect(MockMvcResultMatchers.jsonPath("$.name").value(DEFAULT_BOOK.getName()));
    }

    @Test
    public void testShouldFindBooksWhenExists() throws Exception {
        Long id = 1L;
        when(repository.findById(id)).thenReturn(Optional.of(DEFAULT_BOOK));

        this.mockMvc.perform(get("/books/" + id)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isOk())
                .andExpect(MockMvcResultMatchers.content().string(Matchers.is(objectMapper.writeValueAsString(DEFAULT_BOOK))));
    }

    @Test
    public void testShouldReturn404WhenBookMissing() throws Exception {
        Long id = 1L;
        when(repository.findById(id)).thenReturn(Optional.empty());

        this.mockMvc.perform(get("/books/" + id)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isNotFound());
    }
}

Integration testing of database layer (repositories)

Similarly to the way that we tested the web layer, we can test the database layer in isolation, without starting the web layer. This kind of testing in Spring Boot is achieved using the annotation @DataJpaTest. This annotation will do only the auto-configuration related to the JPA layer and by default will use an in-memory database, because it’s the fastest to start up and for most integration tests will do just fine. We also get access to TestEntityManager, which is an EntityManager with supporting features for integration tests of JPA.

Following is an example of testing the database layer of the above application. With these tests we are only checking if the database layer is working as expected; we are not making any REST calls, and we are verifying results from BookRepository by using the provided TestEntityManager.

@RunWith(SpringRunner.class)
@DataJpaTest
public class BookRepositoryTest {
    @Autowired
    private TestEntityManager entityManager;

    @Autowired
    private BookRepository repository;

    private Book defaultBook;

    @Before
    public void setup() {
        defaultBook = new Book(null, "Asimov", "Foundation", 350);
    }

    @Test
    public void testShouldPersistBooks() {
        Book savedBook = repository.save(defaultBook);

        assertThat(savedBook.getId()).isNotNull();
        assertThat(entityManager.find(Book.class, savedBook.getId())).isNotNull();
    }

    @Test
    public void testShouldFindByIdWhenBookExists() {
        Book savedBook = entityManager.persistAndFlush(defaultBook);

        assertThat(repository.findById(savedBook.getId())).isEqualTo(Optional.of(savedBook));
    }

    @Test
    public void testFindByIdShouldReturnEmptyWhenBookNotFound() {
        long nonExistingID = 47L;
        
        assertThat(repository.findById(nonExistingID)).isEqualTo(Optional.empty());
    }
}

Conclusion

You can find a working example with all of these tests on the following repo: https://gitlab.com/47northlabs/public/spring-boot-testing.

In the following table, I’m showing the execution times with the startup of the different types of tests that I’ve used as examples. We can clearly see that unit tests, as mentioned in the beginning, are the fastest ones and that separating integration tests into layered testing leads to faster execution times.

Type of test          Execution time with startup
Unit test             80 ms
Integration test      620 ms
Web layer test        190 ms
Database layer test   220 ms

My opinion on talks from JPoint Moscow 2019

Reading Time: 4 minutes

If you have read my previous parts, this is the last one in which I will give my highlights on the talks that I have visited.

First stop was the opening talk from Anton Keks on the topic The world needs full-stack craftsmen. An interesting presentation about current problems in software development, like splitting development roles, and what the real result of that is. Another topic was the agile methodology and whether it is really helping development teams to build a better product. Also, some words about startup companies and their usual problems. In general, an excellent presentation.

Simon Ritter, in my opinion, had the best talks at JPoint. The first day with the topic JDK 12: Pitfalls for the unwary. In this session, he covered the impact of application migration from previous versions of Java to the latest one, from aspects like Java language syntax, class libraries and JVM options. Another interesting thing was how to choose which versions of Java to use in production. A well-balanced presentation with real problems and solutions.

Next stop Kohsuke Kawaguchi, creator of Jenkins, with the topic Pushing a big project forward: the Jenkins story. It was like a story from a management perspective, about new projects that are coming up and what the demands of the business are. To be honest, it was a little bit boring for me, because I was expecting superpowers coming to Jenkins, but he changed the topic to this management story.

Sebastian Daschner from IBM, his topic was Bulletproof Java Enterprise applications. This session covered which non-functional requirements we need to be aware of to build stable and resilient applications. Interesting examples of different resiliency approaches, such as circuit breakers, bulkheads, or backpressure, in action. In the end, adding telemetry to our application and enhancing our microservice with monitoring, tracing, or logging in a minimalistic way.

Again, Simon Ritter, this time, with the topic Local variable type inference. His talk was about using var and let the compiler define the type of the variable. There were a lot of examples, when it makes sense to use it, but also when you should not. In my opinion, a very useful presentation.

Rafael Winterhalter talked about Java agents; to be more specific, he covered the Byte Buddy library and how to program Java agents with no knowledge of Java bytecode. Another thing was showing how Java classes can be used as templates for implementing highly performant code changes, avoiding solutions like AspectJ or Javassist and still performing better than agents implemented in low-level libraries.

To summarize, the conference was excellent, any Java developer would be happy to be here, so put JPoint on your roadmap for sure. Stay tuned for my next conference, thanks for reading, THE END 🙂

Unit testing using JSONassert library

Reading Time: 3 minutes

In this article, we’ll have a closer look at a library called JSONassert. We will explain how this library can be used with the help of some examples. So, let’s get started!

Working with an easy example:

Let’s start our tests with a simple JSON string comparison:

String actual = "{objectId:123, name:\"magy\",lastName:\"henry\"}"; String expected="{objectId:123,name:\"magy\"}"; JSONAssert.assertEquals(expected, actual, false);

The above example will pass when strict is set to false. However, if it is set to true, the test will fail. You have to keep in mind that JSONAssert makes a logical comparison of the data, which means that the ordering of elements does not matter when dealing with JSON objects.

Working with a complex example

Assume that you want a unit test where you have to validate (match an actual response with the expected one) our REST interfaces in JUnit. Our endpoint delivers a list of objects, so the goal is to verify the properties of each object. Let’s assume that the delivered response is a list of a type called Partner, where the implementation of the Partner class is as follows:

@Data
public class Partner {
    private String id;
    private String firstName;
    private String lastName;
    private LocalDate birthDate;
    private Gender gender;
    private MaritalStatus maritalStatus;
    private String phoneNumber;
}

The following Java example will help you to understand the usage. Assume we need to write some assertions for each family member in the list without JSONAssert. The following would have to be done:

ResponseEntity<List<Partner>> response = restTemplate.exchange(
  "http://localhost:8080/partners/x/family",
  HttpMethod.GET,
  null,
  new ParameterizedTypeReference<List<Partner>>(){});
List<Partner> partners = response.getBody();

// Now we need to test that each partner in the family has certain values.
Partner partner1 = partners.stream().filter(partner -> partner.getId().equals("1")).findFirst().get();
assertEquals("Magy", partner1.getFirstName());
assertEquals("Mueller", partner1.getLastName());
assertEquals(of(1980, 7, 12), partner1.getBirthDate());
assertEquals(FEMALE, partner1.getGender());
assertNull(partner1.getMaritalStatus());

// Testing the second person
Partner partner2 = partners.stream().filter(partner -> partner.getId().equals("2")).findFirst().get();
assertEquals("Marc", partner2.getFirstName());
assertEquals("Ullenstein", partner2.getLastName());
assertEquals(of(1988, 7, 13), partner2.getBirthDate());
assertEquals(MALE, partner2.getGender());
assertNull(partner2.getMaritalStatus());

So, testing the values for just one partner already takes some time, and in case of testing multiple partners it will take a lot of code lines. We would like to use JSONAssert instead. To make it easier to use JSONAssert, let’s create a JSON file under test/resources/json in which we create a list of objects, where each object contains the properties that will be tested. You should also keep in mind that this library allows developers not to use strict mode, which means that you don’t have to test against each property.
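
For illustration, a hypothetical expectedJsonResponse.json covering only the properties we care about could look like this (the exact birthDate format depends on how the endpoint serializes LocalDate):

[
  {
    "id": "1",
    "firstName": "Magy",
    "lastName": "Mueller",
    "birthDate": "1980-07-12",
    "gender": "FEMALE"
  },
  {
    "id": "2",
    "firstName": "Marc",
    "lastName": "Ullenstein",
    "birthDate": "1988-07-13",
    "gender": "MALE"
  }
]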

String actual= restTemplate.getForObject("http://localhost:8080/partners/x/family", String.class);
String expected = IOUtils.toString(this.getClass().getResourceAsStream("/json/expectedJsonResponse.json"),"UTF-8");
JSONAssert.assertEquals(expected, actual, false);

The assertEquals method takes three parameters, as seen in the above example: the first parameter is the expected JSON, the second one is the result, or what we got as a response from our endpoint, and the third one defines whether we should use strict mode or not. When comparing the code written in both cases, it becomes clear why you would use this library in the future: you can drop all of the assertions used in the earlier example.

Conclusion

In this article, we looked at multiple scenarios in which JSONAssert can be helpful. We started with a very simple example and moved on to more complex comparisons. And, as always, stay tuned for new interesting articles.

Live from JPoint, Moscow 2019

Reading Time: 3 minutes

The conference took place at the World Trade Center in Moscow and started at 9 am. It looked like it would be huge from the beginning: well organized, with big conference halls. The first step was attendee registration.

After completing the registration and picking up some welcome packages, we had a starting coffee break and drinks. We also visited most of the big companies’ representative stands that were in front of the conference halls. You can find interesting free materials there, like stickers, manuals and packages from the company you are visiting.

The next step was the conference. There were four conference halls, each one with different speakers. The opening talk was made by Anton Keks from Codeborne on the topic The world needs full-stack craftsmen.

After the opening ceremony talk, the conference started with different speakers on every track. Some of them were Russian speakers, so we focused on the English ones. Every talk was one to one and a half hours long, and after that there was a coffee break in the lounge room. There were also two lunch breaks included. In the end, the party at 20:00. You can check the full schedule here.

Day two was completely the same setup, some different speakers or the same one with a different topic. In general, the whole organization of the conference was amazing, like it should be for a world-class event. I highly recommend visiting if you have a chance.

Stay tuned for my next part where I will describe my opinion of the talks that I have visited…

DEVOXX UKRAINE, Here I come

Reading Time: 2 minutes

As a developer, when you need to extend your programming knowledge theoretically, practically, or both, you need to go to a conference. Also, conferences are a good chance to meet peers in your field. Unfortunately, most software engineering conferences focus on introducing new technologies more than on defining how a software engineer becomes an architect. That makes developer conferences a place to broaden the technical horizons, but not the vertical horizons. Exactly this makes DEVOXX so special. I have already had the pleasure to visit a DEVOXX conference in Europe, among other conferences. Check out the article about that here!

What we expect from this conference 👤💬?

Normally, I focus on the new technical topics, like what is new in Java and what the new versions of Java offer. However, this time I would like to focus on both the technical topics and software architecture, as it is a massive and fast-moving discipline. I expect some training and insights that help me stay current with the latest trends in technologies, frameworks and techniques, and build the skills needed to advance my career.

Source: https://earlycoders.com/so-you-want-to-learn-to-code-are-you-a-newbie-programmer-developer-or-a-software-engineer/

Organization to visit Devoxx Ukraine conference

The conference will be held in Kiev. So, my colleague Jeremy and I will be travelling from Zurich airport to Kiev. According to some articles, Kiev is considered one of the cheapest cities in Europe. We will try to explore the nightlife of Kiev. To be honest, I didn’t expect the conference ticket to be so cheap; it costs just 150 USD.

My private trips:

I will write another blog post to explain what my colleague Jeremy and I did in Kiev. I can say one thing at the end: “Stay tuned”!

JPoint Java conference in Moscow – 2019

Reading Time: 2 minutes

JPoint is one of the three (JPoint, Joker and JBreak) most popular technical Java conferences for experienced Java developers. There will be 40 talks in two days, separated into 4 parallel tracks. The conference takes place each year, this being the seventh consecutive year.

Organization to visit JPoint conference

Apart from changing flights to reach Moscow, everything else should not be a big issue. Book flights and choose some nearby hotel.

There are a few types of tickets, from which I’ll choose the personal ticket; the main reason is the discount of 58%.

What is scheduled by now?

Many interesting subjects are going to be covered during two days of presentations:

  • New projects in Jenkins
  • Java SE 10 variable types
  • More of Java collections
  • Decomposing Java applications
  • AOT Java compilation
  • Java vulnerability
  • Prepare Java Enterprise application for production
  • Application migration JDK 9, 10, 11 and 12
  • Jenkins X

The following topics on the conference will be the most interesting ones for me:

  • Prepare Enterprise application for production (telemetry is crucial).
  • Is Java so vulnerable? What can we do to reduce security issues?
  • What is the right way of splitting application to useful components?
  • It looks like with Jenkins Essentials there is now significantly less overhead for managing Jenkins, without any user involvement. Let us see what Jenkins has replaced with a few commands.

Just half of the presentations are scheduled by now. Expect many more to be announced.

Null pointer exceptions in Java 8

Reading Time: 2 minutes

Probably every single developer has headaches with null pointers, so what is the silver bullet for this problem?

Java 8 introduced a handy way of dealing with this, called the Optional object. It is a container type for a value which may be absent. For example, let’s search for some object in the repository:

Object findById(String id) { ... };

Object object = findById("1"); 
System.out.println("Property = " + object.getProperty());

We have a potential null pointer exception if the object with id “1” is not found in the database. The same example with using Optional will look like this:

// Pseudocode: returns an empty Optional if no object with the given id exists
Optional<Object> findById(String id) { ... }

Optional<Object> optional = findById("1");
optional.ifPresent(object -> {
    System.out.println("Property = " + object.getProperty());
});

By returning an Optional from the repository, we force the developer to handle this situation. Once you have an Optional, you can use the various methods that come with it to handle different situations.

ifPresent()

optional.ifPresent(object -> {
    System.out.println("Found: " + object);
});

We can pass a Consumer to this method, which is executed only when the Optional contains a value.

isPresent()

if (optional.isPresent()) {
    System.out.println("Found: " + optional.get());
} else {
    System.out.println("Optional is empty");
}

isPresent() returns true if the Optional holds a non-null value.
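
map() and orElse()

Optional can also transform the wrapped value and fall back to a default without unwrapping it by hand. A minimal sketch in the spirit of the pseudocode above, assuming getProperty() returns a String (the object type and the default value are just illustrative):

String property = findById("1")
        .map(object -> object.getProperty()) // applied only when a value is present
        .orElse("default");                  // fallback when the Optional is empty
System.out.println("Property = " + property);

map() transforms the value only if it is present, and orElse() supplies the fallback when the Optional is empty.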

Throw an exception when a value is not present

One possible solution for handling the case when the object is not present is to throw a custom exception for the specific object. All of these custom exceptions should be collected and handled on a higher level, so that in the end they can be shown to the end user.

@GetMapping("/cars/{carId}")
public Car getCar(@PathVariable("carId") String carId) {
    return carRepository.findByCarId(carId).orElseThrow(
	    () -> new ResourceNotFoundException("Car not found with carId " + carId);
    );
}

For that purpose, we can use the orElseThrow() method to throw an exception when the Optional is empty.
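
One common way to do the higher-level handling mentioned above is Spring’s @ControllerAdvice. A minimal sketch (the handler class name and the plain-text response body are just illustrative, not part of the original example):

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
public class RestExceptionHandler {

    // Turns every ResourceNotFoundException thrown by a controller into a 404 response
    @ExceptionHandler(ResourceNotFoundException.class)
    public ResponseEntity<String> handleNotFound(ResourceNotFoundException ex) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND).body(ex.getMessage());
    }
}

With this in place, the controller method stays focused on the happy path, while empty Optionals are translated into a proper HTTP status for the end user.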

Thanks for reading, I hope it helps and don’t forget always to keep it simple, think simple 🙂

My expectations on JPoint Moscow 2019

Reading Time: 3 minutes

PREPARATION

Tickets

Tickets for individuals: 280€ until 1st March.
No possibility to change the participant.

Personal tickets may not be acquired by companies in any way. The companies may not fully or partially reimburse these tickets’ costs to their employees.

Standard tickets: 465€ until 1st March. A possibility to change the participant is given.

Tickets for companies and individuals, no limits. Includes a set of closing documents and amendments to the contract.

Flight

Skopje-Vienna-Moscow. Visa for Russia is needed!

Hotel

I guess a hotel like Crowne Plaza Moscow – World Trade Centre is a good option, because it’s in the same place where the conference takes place.

DAY 1

So what are my plans and expectations for the first day of JPoint? I will start with Rafael Winterhalter, who is a Java Champion and will talk about Java agents. It will be interesting to see how Java classes can be used as templates for implementing highly performant code changes.

The next stop will be the creator of Jenkins, Kohsuke Kawaguchi. He has a great headline, “Superpowers coming to your Jenkins”, and I am excited to see where Jenkins is going next.

The last stop for day one is Simon Ritter from Azul Systems, with a focus on local variable type inference. As with many features, there are some unexpected nuances, as well as both good and bad use cases, that will be covered.

There will be many more talks on day one, but I will focus on these three for now. And at the end, there is a party at 20:00.

DAY 2

I will start the second day with Simon Ritter again, this time with a focus on JDK 12 pitfalls for the unwary. It will be interesting to see all the areas of JDK 9, 10, 11 and 12 that may impact application migration. Another topic will be how the new JDK release cadence impacts Java support and the choice of which Java versions to use in production.

Other headliner talks for the second day are still under consideration, so I’m expecting something interesting from Pivotal and JetBrains.

Feel free to share some Moscow hints or interesting talks that I’m missing.

Hackdayz #18: SMS Forwarding Android App

Reading Time: 5 minutes

Team members

Youssef Idelhoussain, Senior Front-end Engineer
Shehab Eltobgy, Test Manager

Abstract

This is the real deal.. Prepare your battery, connect to a good network to get thousands of SMS. Whoever you are, maybe you come from a faraway land, maybe you don’t understand my language, maybe you are from a country that I never heard the name of…
One thing is for sure, you will get the SMS. So, whoever you are, wherever you are, our app has special skills which will make this world easier for you, starting with getting an SMS 📲🤩

Having such nice days during our Hackdayz did not prevent us from thinking about adding more practical benefit to our company by improving the current app. After lunch, we had the energy to start working, even though my vegetarian lunch did not taste good at all.
Our aim was that, by the end of Hackdayz, the app should be released on the Play Store and the code made open-source for further improvements!

Side Notes

The app should have:

  • Rules: number and where it should be posted
  • Environment: Slack, Email, and others…
  • Some new settings: such as the ability for the user to set a password… (we were so optimistic)

Agenda

  • What is the problem you want to solve?
  • Who experiences that problem?
  • How do you want to solve that problem?
  • Why is this a better solution?

Having such a funny team combination, a front-end developer and a test manager trying to develop an Android app, made these days so much fun, as we were literally underdogs. But, just to get our spirits up, we went to the gym, then to the sauna where I could not even see my hands, and finally to the swimming pool.

Working on the project 👨‍💻 at Hackdayz18 in St. Gallen

We were so ambitious that we planned to create an app with an infinite number of environments and very flexible rules (such amateur dreams). After some time, as Thomas A. Edison put it, “I have not failed. I’ve just found 10,000 ways that won’t work.”, we realized that we were not going to create the app as it was actually planned 🤯. Nevertheless, the days were cool enough to make us laugh while we were failing several times.

Youssef was so ambitious that he told me, “I will never go to bed mad. I’m gonna stay up and fight!”. After 10 minutes, each of us went to his room to sleep 😴. Due to the effort I had spent during 3 hours in the gym, sauna, and swimming pool, I wanted to sleep; looking at my hand, I couldn’t even tell how many fingers I had.

The next day, we started working again. I wanted to get started with my real work, since this is when kings start the party 🎉; my first task was to find a beautiful design. I decided to choose a simple design due to the time pressure. Besides, improvisation is too good to leave to chance.

GitLab Repository

Using the GitLab repository mentioned below, we were able to create the app.

https://gitlab.com/47northlabs/public/sms-to-slack

Results

We were somehow not so satisfied with the results; actually, we were shocked 😱😱😱!!!

  • The app has been developed with the ability to add up to 5 environments, unlike the infinite number of environments we had expected… such youthful dreams 😅
  • The app could not use email as one of the environments, because we could not find a library that lets the app send the message to an email address while running in the background… experience is simply the name we give our mistakes 😄

Conclusion and implication

The app was created successfully and installed on one of our Android devices using +41 76 75x xxxx.

Screenshot of our Slack and Slackbot channel

Future features and challenges would be…

1. Adding email as a new environment. Let us see how this is going to be manageable 🤔.
2. Adding a password for the app. We are still so optimistic 😁.
3. Adding the ability to add more (unlimited) environments, with a recycle bin 🧹 to remove them when needed.

By the end of the day, I was just totally shocked by the difference between what had been planned and what was actually achieved. But it was a funny and exhausting experience.

Voxxed Days Bucharest & Devoxx Ukraine – HERE WE COME!

Reading Time: 4 minutes

Last year conferences

Already in 2018, we had the pleasure of visiting two conferences in Europe, in Amsterdam and Krakow.

We had a great time visiting these two cities 🙌 and we can’t wait to do that again this year 😎.

What do we expect from the two conferences in 2019 👤💬?

Like last year, we are interested in several different topics. I am looking forward to the Methodology & Culture slots, while Shady is most interested in the Java stuff. All in all, we hope that there are several interesting talks about:

  • Architecture & Security
  • Cloud, Containers & Infrastructure
  • Java
  • Big Data & Machine Learning
  • Methodology & Culture
  • Other programming languages
  • Web & UX
  • Mobile & IoT

We ❤️ food!

The title speaks for itself. We just love food 🍴! Travelling ✈️ is a good opportunity to see and taste something new 👅. All over the world, every culture has its own unique and special cuisine, each very different because of the different methods of cooking. We try to taste (almost) everything when we arrive in new countries and cities.

We are really looking forward to seeing what Bucharest’s and Kiev’s specialities are 🍽 and to trying them all! Here are some snapshots from our trips to the conferences in Amsterdam (Netherlands) and Krakow (Poland) in 2018…

What about the costs 💸?

One great thing at N47 is that your personal development 🧠 is important to the company. Besides hosted internal events and workshops, you can also visit international conferences 🛫, and everything is paid 💰. Every employee can choose their desired conferences/workshops, gather the information about the costs, and request the visit. One step of the approval process is writing 📝 about their expectations in a blog post. This is exactly what you are reading 🤓📖 at the moment.

Costs breakdown (per person)

Flights: 170 USD
Hotel: 110 USD (3 days, 2 nights)
Conference: 270 USD
Food and public transportation: 150 USD
Knowledge gains: priceless
Explore new country and food: priceless
Spend time with your buddy: priceless
—–
Total: 700 USD
—–

Any recommendations for Bucharest or Kiev?

We have never visited the two cities 🏙, so if you have any tips or recommendations, please let us know in the comments 💬!

My humble expectations on Devoxx Poland 2019

Reading Time: 2 minutes

Five more months and devoxx.pl in Krakow will already be over again. That leaves me some time for preparation and anticipation.

Preparation

Key preparations like conference ticket, flight and hotel are done. However, I would be grateful for sightseeing hints for the evenings. Feel free to share them in the comments section.

There are five different tickets available. Generally, you have three alternatives, with some optional sugar in the form of dinner or gifts:

  • Conference only
  • Deep Dives only
  • Combi (both of them)

I’ve chosen the conference-only ticket because I prefer to attend more but shorter slots, as the speakers are forced to focus on the key parts. I also liked this format on my last visit at Devoxx Belgium in 2017.

Anticipation

There is no published schedule yet, only the list of tracks:

  • Web & HTML5
  • Cloud & Big Data
  • Server Side Java
  • Future <Devoxx>
  • Mobile
  • JVM Languages
  • Methodology
  • Java SE
  • Architecture & Security

And now, these are my expectations for Devoxx Poland:

  • New Java language features in 11 and 12.
  • From monolith via microservice to serverless. What’s the next thing?
  • Learn about some new fancy JS frameworks.
  • What’s new in the mobile world and which approach will make us happy: native, hybrid, web views, …?
  • Insights into JVM release roadmap and Oracle subscription models.
  • Enjoy some typical Polish foods and drink.

How to get a service reference or BundleContext with no OSGi context

Reading Time: 2 minutes

Issue

In Adobe Experience Manager (AEM) projects, developers work a lot with services, filters, servlets and handlers. All of these classes are OSGi components and services, using the Felix SCR annotations or the newer OSGi DS annotations. But sometimes you also need an OSGi service or the BundleContext in a class that is not controlled by OSGi / DS.

Solution

You can use the OSGi FrameworkUtil [1] to get a reference to the bundle context from any class. The code below shows how to get a reference to the BundleContext and the service.

Most of the time, you have the SlingHttpServletRequest ready to pass:

import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.scripting.SlingBindings;
import org.apache.sling.api.scripting.SlingScriptHelper;
import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;
import org.osgi.framework.ServiceReference;

import java.util.Objects;

public class ServiceUtils {

    /**
     * Gets the service from the current bundle context.
     * Returns null if something goes wrong.
     *
     * @param <T>     the generic type
     * @param request the request
     * @param type    the type
     * @return        the service
     */
    @SuppressWarnings({"unchecked", "rawtypes"})
    public static <T> T getService(SlingHttpServletRequest request, Class<T> type) {
        SlingBindings bindings = (SlingBindings) request.getAttribute(SlingBindings.class.getName());
        if (bindings != null) {
            SlingScriptHelper sling = bindings.getSling();

            return Objects.isNull(sling) ? null : sling.getService(type);
        } else {
            BundleContext bundleContext = FrameworkUtil.getBundle(type).getBundleContext();
            ServiceReference settingsRef = bundleContext.getServiceReference(type.getName());

            return (T) bundleContext.getService(settingsRef);
        }
    }
}

Or you can use any class that was loaded by a bundle classloader:

package com.example.aem.core.utils;

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;
import org.osgi.framework.ServiceReference;

public class ServiceUtils {
    
    /**
     * Gets the service from the current bundle context.
     * Returns null if something goes wrong.
     *
     * @param <T>   the generic service type
     * @param clazz a class that was loaded by a bundle classloader
     * @param type  the service type
     * @return      the service instance, or null
     */
    public static <T> T getService(Class<?> clazz, Class<T> type) {
        Bundle currentBundle = FrameworkUtil.getBundle(clazz);
        if (currentBundle == null) {
            return null;
        }
        BundleContext bundleContext = currentBundle.getBundleContext();
        if (bundleContext == null) {
            return null;
        }
        ServiceReference<T> serviceReference = bundleContext.getServiceReference(type);
        if (serviceReference == null) {
            return null;
        }
        // May return null if the service instance is not available
        return bundleContext.getService(serviceReference);
    }

}
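
For illustration, a call site could look like the sketch below; MyService, SomeBundleClass and doSomething() are placeholders and not part of the utility classes above.

// Variant 1: inside a servlet or filter where a SlingHttpServletRequest is available
MyService service = ServiceUtils.getService(request, MyService.class);

// Variant 2: from any class that was loaded by a bundle classloader
MyService other = ServiceUtils.getService(SomeBundleClass.class, MyService.class);
if (other != null) {
    other.doSomething();
}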

[1] https://osgi.org/javadoc/osgi.core/7.0.0/org/osgi/framework/FrameworkUtil.html