Service Discovery in a Microservices Architecture: Client-side vs Server-side discovery

Reading Time: 11 minutes

Service discovery is one of the key building blocks of distributed systems. It allows us to automatically discover services on a network. In order to make a REST request, your service needs to know the service location, which consists of the IP address and port of a service instance. In most scenarios, multiple instances of the same service will be deployed. Traditional applications run on physical hardware, where network locations are static. In modern cloud-based microservice solutions, on the other hand, these network locations are dynamic, which makes managing them a much more difficult challenge.

There are two main service discovery patterns:

  • Client-side discovery (Eureka)
  • Server-side discovery (Kubernetes)

What is a Service Registry?

A service registry is a central place where services' locations are registered; you can imagine it as a database of service locations. Instances of services are registered in the service registry on startup and deregistered accordingly on shutdown. Each instance registers its host, port and some additional information, like in the image below.
I have already created an example application using the Netflix Eureka service registry, which comes with some predefined annotations. For this purpose, I have created two Spring applications. One of them will act as a discovery server and needs to include the following dependencies: Eureka Server, Spring Web and Actuator. I have added the @EnableEurekaServer annotation on the main class, like in the code below. With this annotation, the app will act as a microservice registry and discovery server.

@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}

spring.application.name=eureka-server
server.port=8761
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false

The second Spring Boot application will act as a client which will be registered with the Eureka server (the first application we created). This application needs the following dependencies: Actuator, Eureka Discovery, and Spring Web. We need to add the @EnableDiscoveryClient annotation on the Spring Boot application class and set up the properties like in the code below in the application.properties file.

@SpringBootApplication
@EnableDiscoveryClient
public class ClientDiscoveryApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClientDiscoveryApplication.class, args);
    }
}

spring.application.name=eureka-client-service
server.port=8085
eureka.client.service-url.defaultZone=http://localhost:8761/eureka/

If we navigate to http://localhost:8761/ we will see the Eureka dashboard, like in the image below. We can notice that two instances of the eureka-client-service application were registered, each started on a different port: one on port 8085 and the second on port 8087.

Client-side discovery pattern

In this pattern, the client is responsible for determining the network locations of available service instances and for load balancing requests across them. The network location of a service instance is registered in the service registry when it starts up and deregistered when it terminates.
The service registry used for client-side discovery in the previous example is the Netflix Eureka Server. It manages the registered service instances and can be queried for the available ones.
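
To make the querying part concrete, below is a minimal sketch of how a client could look up the available instances through Spring Cloud's DiscoveryClient abstraction (the service name eureka-client-service is the one registered in the example above; the lookup class itself is made up for illustration):

import java.util.List;

import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;

@Component
public class ServiceInstanceLookup {

    private final DiscoveryClient discoveryClient;

    public ServiceInstanceLookup(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    public void printAvailableInstances() {
        // Ask the service registry (Eureka in our case) for all registered
        // instances of the given application name
        List<ServiceInstance> instances = discoveryClient.getInstances("eureka-client-service");
        instances.forEach(instance ->
                System.out.println(instance.getHost() + ":" + instance.getPort()));
    }
}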

On the image below we have three services (Service A, Service B and Service C), each with its own IP address; for instance, the IP address of Service A is 10.10.1.2:15202. If we want to query the available instances, we need to access the Service Registry (Eureka), which is responsible for storing the instances of all services that were previously registered.

  1. The locations of Service A, Service B and Service C are sent to the Service Registry, like on the image below.
  2. The service consumer asks the Service Discovery Server for the location of Service A / Service B.
  3. The Service Registry, which stores the instances' locations, looks up the location of Service A.
  4. The service consumer gets the location of Service A and can make a direct request.

Benefits of using this pattern:
  • a flexible, application-specific load balancer
Drawback:
  • a registration and discovery mechanism must be implemented for each framework used by the services
The significant disadvantage of this pattern is that it couples the client with the service registry.

Alternatives to Eureka discovery

Alternatives to Eureka are Consul and ZooKeeper. Consul makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface. It also provides richer health checking and has its own internal distributed key-value store that we can use as well.

Apache ZooKeeper, on the other hand, is a distributed key-value store that we can use as the basis for implementing service discovery. It is a centralized service for maintaining configuration information and naming, and for providing distributed synchronization. One of its benefits is the large community supporting it.

Server-side discovery pattern

The alternative approach to Service Discovery is the Server-side Discovery pattern. Clients just make requests to a service instance via a load balancer, which acts as an orchestrator. The load balancer queries the service registry and routes each request to an available service instance like on the image below.
Benefits of using this pattern:
  • no need to implement discovery logic for each framework used by service clients
Drawback:
  • the load balancer needs to be set up and managed, unless it's already provided in the deployment environment
Some clustering solutions, such as Kubernetes, run a proxy on each host in the cluster. In order to make a request to a service, a client connects to the local proxy using the port assigned to that service.

On the image below we can see that we have added a Load Balancer that acts as an orchestrator. The Load Balancer queries the Service Registry and routes each HTTP request to an available service instance.

Kubernetes

If we try to find analogies between Kubernetes and more traditional architectures, I would compare Kubernetes Pods with service instances and a Kubernetes Service with a logical set of pods.

Kubernetes Pods are the smallest deployable units in Kubernetes. A Pod contains one or more containers, such as Docker containers; when a Pod runs multiple containers, they are managed as a single entity and share the Pod's resources. Each pod has its own IP address, but when a pod restarts, it gets a new one, so there is no guarantee that a pod's IP address will remain the same over time.

Kubernetes may relocate or re-instantiate pods at runtime, so it doesn't make sense to use pod IP addresses for service discovery. This is one of the main reasons to include Kubernetes Services in our story.

Service in Kubernetes

A Service is a component just like a pod, but it's not a process; it's just an abstraction layer which basically represents an IP address. A service knows how to forward requests to the pods that have registered as its endpoints. Unlike pods, whose IP addresses change, a service has a stable IP address. Each service exposes an IP address and also a DNS endpoint, which will never change, even when a pod dies.

The Services provided by Kubernetes allow us to connect to our pods and provide a dynamic way to access them. For instance, a load balancer can use a service to automatically determine which pods it should balance traffic across. Services also provide load balancing: if we have, for example, three instances of a microservice app, the service will receive each request targeted at the app and forward it to one of those pods.

In Kubernetes we have the concept of namespaces, which represent a logical group of resources. For our examples I will be using the hello-today-dev namespace, which consists of a couple of pods.

kubectl -n hello-today-dev get pod -o wide

kubectl -n hello-today-dev get svc

The first command lists the pods in the namespace together with their IP addresses; with the second we can see all available services in this namespace.

How does Service Discovery work in Kubernetes?

Kubernetes has a powerful concept of labeling. Labels are just key-value pairs that provide metadata on our objects. Any pod whose labels match the selector defined in the service manifest will automatically be discovered by the service.

Here we have a single service that is front-ending two of our pods, instances of our app. The two pods have a label app=my-app and the Service has defined a label selector that is looking for that same label. This means that the service will send traffic to them, even though the pods might change their addresses. You might also notice that there is a Pod 3 with a different label; the Service won't front-end that pod.

Example of service selector

The other example is when we have defined two labels in a service selector (app=my-app, microservice). Then the service will look for both labels: a pod must match all the selectors, not just one, to be registered as one of its endpoints. This is how the service knows which pods belong to it, meaning where to forward requests to.
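
As an illustration, a minimal Service manifest with such a selector could look like the sketch below (the service name, the second label key and the port values are made up for this example):

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  # Only pods carrying BOTH labels become endpoints of this service
  selector:
    app: my-app
    tier: microservice
  ports:
    - protocol: TCP
      port: 80         # port exposed by the service
      targetPort: 8080 # port the pods listen on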

kubectl -n hello-today-dev get pods --show-labels

A service identifies its member pods using a selector attribute, in which we specify a key-value pair, in our case app=my-app. This creates a binding between the service and the pods carrying that label, and provides a flexible mechanism for service discovery along with automatic load balancing. Kubernetes uses the Endpoints object to keep track of which pods are members of the service: any pod whose label value (app=my-app) matches the selector defined by the service is exposed as one of its endpoints. The service provides load balancing by routing requests across the matching pods.

kubectl -n hello-today-dev describe svc/ht-employee

The command above shows the status of the service, in our case ht-employee. The service uses the selector app=ht-employee to select the pod 10.0.21.217 as a backend pod. A virtual IP address is created in the cluster for the Kubernetes Service, which evenly distributes traffic to the pods at the backend.

We can see the defined endpoints of the ht-employee microservice (which means that our service is working).

If we have blank endpoints, it is the result of not selecting any pods, which means that your service traffic will go nowhere. The Endpoints field indicates the pods specified by the selector field.

Summary

Depending on the deployment environment you need, you can set up your own service discovery using a service registry like Eureka. In other deployment environments, such as Kubernetes in our case, service discovery is built in, and Kubernetes, as explained previously, is responsible for handling service instance registration and deregistration.

Microservice architecture: Using Java thread locals and Tomcat/Spring capabilities for automated information propagation

Reading Time: 13 minutes

Inter-microservice communication has always brought questions and challenges to software architects. For example, when it comes to propagating certain information (via HTTP headers for instance) through a whole chain of calls in the scope of one transaction, we want this to happen outside of the microservices’ business logic. We do not want to tackle and work with these headers in the presentation or service layers of the application, especially if they are not important to the microservice for completing some business logic task. I would like to show you how you can automate this process by using Java thread locals and Tomcat/Spring capabilities, showing a simple microservice architecture.

Architecture overview

This is the sample architecture we will be looking at. We have a Zuul Proxy Server that will act as a gateway towards our two microservices: the Licensing Microservice and the Organization Service. Those three will be the main focus of this article. Let’s say that a single License belongs to a single Organization and a single Organization can deal with multiple Licenses. Additionally, our services are registered to a Eureka Service Discovery and they pull their application config from a separate Configuration Server.

Simple enough, right? Now, what is the goal we want to achieve?

Let’s say that we have some sort of HTTP headers related to authentication or tracking of the chain of calls the application goes through. These headers arrive at the proxy along with each request from the client-side and they need to be propagated towards each microservice participating in the action of completing the user’s request. For simplicity’s sake, let’s introduce two made up HTTP headers that we need to send: correlation-id and authentication-token. You may say: “Well, the Zuul proxy gateway will automatically propagate those to the corresponding microservices, if not stated otherwise in its config”. And you are correct because this is a gateway that has an out-of-the-box feature for achieving that. But, what happens when we have inter-microservice communication, for example, between the Licensing Microservice and the Organization Microservice? The Licensing Microservice needs to make a call to the Organization Microservice in order to complete some task, and the Organization Microservice needs to have the headers sent to it somehow. The “go-to, technical debt” solution would be to read these headers in the controller/presentation layer of our application, then pass them down to the business logic in the service layer, which in turn is going to pass them to our configured HTTP client, which in the end is going to send them to the Organization Microservice. Ugly, right? What if we have dozens of microservices and need to do this in each and every single one of them? Luckily, there is a lot prettier solution that includes using a neat Java feature: ThreadLocal.
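
Before moving on to that solution, here is a hedged sketch (the class, service and header-carrying method are made up) of the layer-by-layer approach we want to avoid:

@RestController
public class LicenseController {

    private final LicenseService licenseService;

    public LicenseController(LicenseService licenseService) {
        this.licenseService = licenseService;
    }

    // Anti-pattern: the presentation layer reads the headers...
    @GetMapping("/v1/licenses/{licenseId}")
    public License getLicense(@PathVariable String licenseId,
                              @RequestHeader("correlation-id") String correlationId,
                              @RequestHeader("authentication-token") String authToken) {
        // ...only to hand them down to the service layer, which hands them
        // to the HTTP client - none of these layers actually need them
        return licenseService.getLicense(licenseId, correlationId, authToken);
    }
}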

Java thread locals

The Java ThreadLocal class provides us with thread-local variables. What does this mean? Simply put, it enables setting a context (tying all kinds of objects) to a certain thread, that can later be accessed no matter where we are in the application, as long as we access them within the thread that set them up initially. Let’s look at an example:

public class Main {

    public static final ThreadLocal<String> threadLocalContext = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        threadLocalContext.set("Hello from parent thread!");
        Thread childThread = new Thread(() -> {
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints null
        });
        childThread.start();
        childThread.join(); // Waiting for the child thread to finish
        System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
    }
}

We have a single static final ThreadLocal<String> reference that we use for setting some information to the thread (in this case, the string “Hello from parent thread!” to the main thread). Accessing this variable via threadLocalContext.get() (no matter in which class we are, as long as we are on the same main thread) produces the expected string we have set previously. Accessing it in a child thread produces a null result. What if we set some context to the child thread as well:

threadLocalContext.set("Hello from parent thread!");
Thread childThread = new Thread(() -> {
    threadLocalContext.set("Hello from child thread!");
    System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from child thread!"
});
childThread.start();
childThread.join(); // Waiting for the child thread to finish
System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"

We can notice that the two threads have completely separate contexts. Even though they access the same threadLocalContext reference, in the background, the context is relative to the calling thread. What if we wanted the child thread to inherit its parent context:

public class Main {

    private static final ThreadLocal<String> threadLocalContext = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        threadLocalContext.set("Hello from parent thread!");
        Thread childThread = new Thread(() -> {
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
            threadLocalContext.set("Hello from child thread!");
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from child thread!"
        });
        childThread.start();
        childThread.join(); // Waiting for the child thread to finish
        System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
    }
}

We only changed the ThreadLocal to an InheritableThreadLocal in order to achieve that. We can notice that the first print inside the child thread does not render null anymore. The moment we set another context to the child thread, the two contexts become disconnected and the parent keeps its old one. Note that by using the InheritableThreadLocal, the reference to the parent context gets copied to the child, meaning: this is not a deep copy, but two references pointing to the same object (in this case, the string “Hello from parent thread!”). If, for example, we used InheritableThreadLocal<SomeCustomObject> and tackled directly some of the properties of the object inside the child thread (threadLocalContext.get().setSomeProperty("some value")), then this would also be reflected in the parent thread and vice versa. If we want to disconnect the two contexts completely, we just call .set(new SomeCustomObject()) on one of the threads, which will turn its local reference to point to the newly created object.
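
A hedged sketch of that shared-reference behaviour, using a made-up SomeCustomObject class with a single String property and the usual getter/setter:

private static final ThreadLocal<SomeCustomObject> threadLocalContext = new InheritableThreadLocal<>();

public static void main(String[] args) throws InterruptedException {
    threadLocalContext.set(new SomeCustomObject());
    Thread childThread = new Thread(() -> {
        // Mutating the inherited object (instead of replacing it) is visible
        // to the parent as well, since both references point to the same instance
        threadLocalContext.get().setSomeProperty("some value");
    });
    childThread.start();
    childThread.join();
    System.out.println(threadLocalContext.get().getSomeProperty()); // Prints "some value"
}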

Now, you may be wondering: “What does this have to do with automatically propagating headers to a microservice?”. Well, when using Servlet containers such as Tomcat (which Spring Boot embeds by default), each new HTTP request is handled (whether we like/know it or not :-)) in a separate thread. The servlet container picks an idle thread from its dedicated thread pool each time a new call is made. This thread is then used by Spring Boot throughout the processing of the request and the return of the response. Now, it is only a matter of setting up Spring filters and HTTP client interceptors that will set and get the thread-local context containing the HTTP headers.

Solution

First off, let’s create a simple POJO class that is going to contain both of the headers that need propagating:

@Getter
@Setter
@ToString
@AllArgsConstructor
@NoArgsConstructor
public class RequestHeadersContext {

    public static final String CORRELATION_ID = "correlation-id";
    public static final String AUTHENTICATION_TOKEN = "authentication-token";

    private String correlationId;
    private String authenticationToken;
}

Next, we will create a utility class for setting and retrieving the thread-local context:

public final class RequestHeadersContextHolder {

    private static final ThreadLocal<RequestHeadersContext> requestHeaderContext = new ThreadLocal<>();

    public static void clearContext() {
        requestHeaderContext.remove();
    }

    public static RequestHeadersContext getContext() {
        RequestHeadersContext context = requestHeaderContext.get();
        if (context == null) {
            context = createEmptyContext();
            requestHeaderContext.set(context);
        }
        return context;
    }

    public static void setContext(RequestHeadersContext context) {
        Assert.notNull(context, "Only not-null RequestHeadersContext instances are permitted");
        requestHeaderContext.set(context);
    }

    public static RequestHeadersContext createEmptyContext() {
        return new RequestHeadersContext();
    }
}

The idea is to have a Spring filter, that is going to read the HTTP headers from the incoming request and place them in the RequestHeadersContextHolder:

@Configuration
public class RequestHeadersServiceConfiguration {

    @Bean
    public Filter getFilter() {
        return new RequestHeadersContextFilter();
    }

    private static class RequestHeadersContextFilter implements Filter {

        @Override
        public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
            HttpServletRequest httpServletRequest = (HttpServletRequest) servletRequest;
            RequestHeadersContext context = new RequestHeadersContext(
                    httpServletRequest.getHeader(RequestHeadersContext.CORRELATION_ID),
                    httpServletRequest.getHeader(RequestHeadersContext.AUTHENTICATION_TOKEN)
            );
            RequestHeadersContextHolder.setContext(context);
            filterChain.doFilter(servletRequest, servletResponse);
        }
    }
}

We created a RequestHeadersServiceConfiguration class which, at the moment, has a single Spring filter bean defined. This filter is going to read the needed headers from the incoming request and set them in the RequestHeadersContextHolder (we will need to propagate those later when we make an outgoing request to another microservice). Afterwards, it will resume the processing of the request and will give control to the other filters that might be present in the filter chain. Keep in mind that, all the while, this code executes within the boundaries of the dedicated Tomcat thread, which the container had assigned to us.

Next, we need to define an HTTP client interceptor which we are going to link to a RestTemplate client, which in turn is going to execute the interceptor’s code each time before it makes a request to an outer microservice. We can add this new RestTemplate bean inside the same configuration file:

@Configuration
public class RequestHeadersServiceConfiguration {
    
    // .....

    @LoadBalanced
    @Bean
    public RestTemplate getRestTemplate() {
        RestTemplate restTemplate = new RestTemplate();
        List<ClientHttpRequestInterceptor> interceptors = restTemplate.getInterceptors();
        interceptors.add(new RequestHeadersContextInterceptor());
        return restTemplate;
    }

    private static class RequestHeadersContextInterceptor implements ClientHttpRequestInterceptor {

        @Override
        @NonNull
        public ClientHttpResponse intercept(@NonNull HttpRequest httpRequest,
                                            @NonNull byte[] body,
                                            @NonNull ClientHttpRequestExecution clientHttpRequestExecution) throws IOException {
            RequestHeadersContext context = RequestHeadersContextHolder.getContext();
            HttpHeaders headers = httpRequest.getHeaders();
            headers.add(RequestHeadersContext.CORRELATION_ID, context.getCorrelationId());
            headers.add(RequestHeadersContext.AUTHENTICATION_TOKEN, context.getAuthenticationToken());
            return clientHttpRequestExecution.execute(httpRequest, body);
        }
    }
}

As you might have guessed, the interceptor reads the header values from the thread-local context and sets them up for the outgoing request. The RestTemplate just adds this interceptor to the list of its already existing ones.

A good-to-have thing will be to eventually clear/remove the thread-local variables from the thread. When we have an embedded Tomcat container, missing out on this point will not impose a problem, since along with the Spring application, the Tomcat container dies as well. This means that all of the threads will altogether be destroyed and the thread-local memory released. However, if we happen to have a separate servlet container and we deploy our app as a .war instead of a standalone .jar, not clearing the context might introduce some memory leaks. Imagine having multiple applications on our standalone servlet container and each of them messing around with thread locals. The container shares its threads with all of the applications. When one of the applications is turned off, the container is going to continue to run and the threads it lent to the application will not cease to exist. Hence, the thread-local variables will not be garbage collected, since there are still references to them. That is why we are going to define and add an interceptor to the Spring interceptor registry, which will clear the context after a request finishes and the thread can be assigned to other tasks:

@Configuration
public class WebMvcInterceptorsConfiguration implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new RequestHeadersContextClearInterceptor()).addPathPatterns("/**");
    }

    private static class RequestHeadersContextClearInterceptor implements HandlerInterceptor {

        @Override
        public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception exception) {
            RequestHeadersContextHolder.clearContext();
        }
    }
}

All we need to do now is wire these configurations into our microservices. We can create a separate library extracting the config (and maybe upload it to an online repository, such as Maven Central, or our own Nexus) so that we do not need to copy-paste all of the code into each of our microservices. Whatever the case, it is good to make this library easy to use. That is why we are going to create a custom annotation for enabling it:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Import({RequestHeadersServiceConfiguration.class, WebMvcInterceptorsConfiguration.class})
public @interface EnableRequestHeadersService {
}

Usage

Let’s see how we can leverage and use this library from inside a microservice. Only a couple of things are needed.

First, we need to annotate our application with the @EnableRequestHeadersService:

@SpringBootApplication
@EnableRequestHeadersService
public class LicensingServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(LicensingServiceApplication.class, args);
    }
}

Second, we need to inject the already defined RestTemplate in our microservice and use it as given:

@Component
public class OrganizationRestTemplateClient {

    private final RestTemplate restTemplate;

    public OrganizationRestTemplateClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public Organization getOrganization(String organizationId) {
        ResponseEntity<Organization> restExchange = restTemplate.exchange(
                "http://organizationservice/v1/organizations/{organizationId}",
                HttpMethod.GET,
                null,
                Organization.class,
                organizationId
        );
        return restExchange.getBody();
    }
}

We can notice that the getOrganization(String organizationId) method does not handle any HTTP headers whatsoever. It just passes the URL and the HTTP method and lets the imported configuration do its magic. As simple as that! We can now call the getOrganization method wherever we like, without having any sort of knowledge about the headers that are being sent in the background. If we have the need to read them somewhere in our code, or even change them, then we can use the RequestHeadersContextHolder.getContext()/setContext() static methods wherever we like in our microservice, without the need to parse them from the request object.

Feign HTTP Client

If we want to leverage a more declarative type of coding we can always use the Feign HTTP Client. There are ways to configure interceptors here as well, so, using the RestTemplate is not strictly required. We can add the new interceptor configuration to the already existing RequestHeadersServiceConfiguration class:

@Configuration
public class RequestHeadersServiceConfiguration {

    // .....

    @Bean
    public RequestInterceptor getFeignRequestInterceptor() {
        return new RequestHeadersContextFeignInterceptor();
    }

    private static class RequestHeadersContextFeignInterceptor implements RequestInterceptor {

        @Override
        public void apply(RequestTemplate requestTemplate) {
            RequestHeadersContext context = RequestHeadersContextHolder.getContext();
            requestTemplate.header(RequestHeadersContext.CORRELATION_ID, context.getCorrelationId());
            requestTemplate.header(RequestHeadersContext.AUTHENTICATION_TOKEN, context.getAuthenticationToken());
        }
    }
}

The new bean we created is going to automatically be wired as a new Feign interceptor for our client.

Next, in our microservice, we can annotate our application class with @EnableFeignClients and then create our Feign client:

@FeignClient("organizationservice")
public interface OrganizationFeignClient {

    @GetMapping(value = "/v1/organizations/{organizationId}")
    Organization getOrganization(@PathVariable("organizationId") String organizationId);
}

All we need to do now is inject our new client anywhere in our services and use it from there. Compared to the RestTemplate, this is a more concise way of making HTTP calls.

Asynchronous HTTP requests

What if we do not want to wait for the request to the Organization Microservice to finish, and want to execute it asynchronously and concurrently (using the @EnableAsync and @Async annotations from Spring, for example)? How are we going to access the headers that need to be propagated in this case? You might have guessed it: by using InheritableThreadLocal instead of ThreadLocal. As mentioned earlier, the child threads we create separately (aside from the Tomcat ones, which will be the parents) can inherit their parent’s context. That way we can send header-populated requests in an asynchronous manner. There is no need to clear the context for the child threads (side note: clearing it will not affect the parent thread, and clearing the parent’s context will not affect the child thread; it only sets the current thread’s local context reference to null), since these will be created from a separate thread pool that has nothing to do with the container’s one. The child threads’ memory will be cleared after execution or after the Spring application exits because eventually, they die off.
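
As a hedged sketch (the async service class below is made up; the holder and client are the ones from the solution above), the only change in the library is the type of the thread-local, after which a standard @Async method can reuse the inherited context:

// In RequestHeadersContextHolder: an InheritableThreadLocal instead of a plain
// ThreadLocal, so threads spawned during request processing inherit the context
private static final ThreadLocal<RequestHeadersContext> requestHeaderContext =
        new InheritableThreadLocal<>();

@Service
public class AsyncOrganizationService {

    private final OrganizationRestTemplateClient organizationClient;

    public AsyncOrganizationService(OrganizationRestTemplateClient organizationClient) {
        this.organizationClient = organizationClient;
    }

    @Async // requires @EnableAsync on a configuration class
    public CompletableFuture<Organization> getOrganizationAsync(String organizationId) {
        // Runs on a thread from Spring's async executor; the headers set by the
        // Tomcat thread are still reachable here thanks to the inheritance
        return CompletableFuture.completedFuture(organizationClient.getOrganization(organizationId));
    }
}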

Summary

I hope you will find this neat little trick useful while refactoring your microservices. A lot of Spring’s functionality is actually based on thread locals. Looking into its source code, you will find a lot of similar/same concepts as the ones mentioned above. Spring Security is one example, Zuul Proxy is another.

The full code for this article can be found here.

References

Spring Microservices In Action by John Carnell

Spring Boot REST API with OpenAPI (SwaggerUI) Codegen

Reading Time: 5 minutes

When working with microservices architecture, one of the most important aspects is inter-service communication. Usually, each microservice stores data in its own database, and if we follow the MVC design pattern, we probably have model classes that map the relational database to object models, and components that contain methods for performing CRUD operations. These components are exposed by controller endpoints.

For one microservice to call another, the caller needs to know the exact request and response model classes. This article will show a simple example of how to generate such models with SpringDoc OpenAPI.

I will create two services that will provide basic CRUD operations. For demonstration purposes, I chose to store data about vehicles:

  • vehicle-manager – the microservice that provides vehicles’ data to the client
  • vehicle-manager-client – the client microservice that requests vehicles’ data

For the purpose of this tutorial, I created empty Spring Boot projects via Spring Initializr.

In order to use the OpenAPI in our Spring Boot project, we need to add the following Maven dependency in our pom file:

<dependency>
  <groupId>org.springdoc</groupId>
  <artifactId>springdoc-openapi-ui</artifactId>
  <version>1.5.5</version>
</dependency>

In the vehicle-manager microservice I created a Vehicle class that looks like this:

@Data
@Builder
@Schema(name = "Vehicle", description = "Example vehicle schema")
public class Vehicle {
    private VehicleType vehicleType;
    private String registrationPlate;
    private int seatsCount;
    private Category category;
    private double price;
    private Currency currency;
    private boolean available;
}

And a controller:

package com.n47.vehiclemanager.ctrl;

import com.n47.vehiclemanager.model.Vehicle;
import com.n47.vehiclemanager.service.VehicleService;
import io.swagger.v3.oas.annotations.tags.Tag;
import lombok.RequiredArgsConstructor;
import org.springframework.web.bind.annotation.*;

import javax.validation.Valid;

@Tag(name = "vehicle", description = "Vehicle controller API")
@RestController
@RequiredArgsConstructor
@RequestMapping(path = "/vehicle")
public class VehicleCtrl {

    private final VehicleService vehicleService;

    @PostMapping(path = "/add")
    public void addVehicle(@RequestBody @Valid Vehicle vehicle) {
        vehicleService.addVehicle(vehicle);
    }

    @GetMapping(path = "/get")
    public Vehicle getVehicle(@RequestParam String registrationPlate) throws Exception {
        return vehicleService.getVehicle(registrationPlate);
    }
}

The important OpenAPI annotations here are @Schema and @Tag. The former defines the actual class that needs to be included in the API documentation, while the latter is used for grouping operations, such as all methods under one controller.

The Swagger documentation interface for the Vehiclemanager microservice is shown in Figure 1.

If we open http://localhost:8080/api-docs in our browser (or whichever port we set our Spring Boot app to run on), we get the entire documentation for the Vehiclemanager microservice. The important part for the model generation is right under components/schemas, while the controller endpoints are under paths.

{
   "openapi":"3.0.1",
   "info":{
      "title":"OpenAPI definition",
      "version":"v0"
   },
   "servers":[
      {
         "url":"http://localhost:8080",
         "description":"Generated server url"
      }
   ],
   "tags":[
      {
         "name":"vehicle",
         "description":"Vehicle controller API"
      }
   ],
   "paths":{
      "/vehicle/add":{
         "post":{
            "tags":[
               "vehicle"
            ],
            "operationId":"addVehicle",
            "requestBody":{
               "content":{
                  "application/json":{
                     "schema":{
                        "$ref":"#/components/schemas/Vehicle"
                     }
                  }
               },
               "required":true
            },
            "responses":{
               "200":{
                  "description":"OK"
               }
            }
         }
      },
      "/vehicle/get":{
         "get":{
            "tags":[
               "vehicle"
            ],
            "operationId":"getVehicle",
            "parameters":[
               {
                  "name":"registrationPlate",
                  "in":"query",
                  "required":true,
                  "schema":{
                     "type":"string"
                  }
               }
            ],
            "responses":{
               "200":{
                  "description":"OK",
                  "content":{
                     "*/*":{
                        "schema":{
                           "$ref":"#/components/schemas/Vehicle"
                        }
                     }
                  }
               }
            }
         }
      }
   },
   "components":{
      "schemas":{
         "Vehicle":{
            "type":"object",
            "properties":{
               "vehicleType":{
                  "type":"string",
                  "enum":[
                     "MOTORBIKE",
                     "CAR",
                     "VAN",
                     "BUS",
                     "TRUCK"
                  ]
               },
               "registrationPlate":{
                  "type":"string"
               },
               "seatsCount":{
                  "type":"integer",
                  "format":"int32"
               },
               "category":{
                  "type":"string",
                  "enum":[
                     "A",
                     "B",
                     "C",
                     "D",
                     "E"
                  ]
               },
               "price":{
                  "type":"number",
                  "format":"double"
               },
               "currency":{
                  "type":"string",
                  "enum":[
                     "EUR",
                     "USD",
                     "CHF",
                     "MKD"
                  ]
               },
               "available":{
                  "type":"boolean"
               }
            },
            "description":"Example vehicle schema"
         }
      }
   }
}

I am going to create a Vehiclemanager-client service, running on port 8082, that will get vehicle information for a given registration plate by calling the Vehiclemanager microservice. In order to do so, we need to generate the Vehicle model class defined in the original Vehiclemanager microservice. We can generate it by adding the swagger codegen plugin in the pom’s plugins section, in the new demo service, like this:

<profiles>
  <profile>
    <id>generateModels</id>
    <build>
      <plugins>
        <plugin>
          <groupId>io.swagger.codegen.v3</groupId>
          <artifactId>swagger-codegen-maven-plugin</artifactId>
          <version>3.0.11</version>
          <configuration>
            <output>${project.basedir}</output>
            <inputSpec>default-config</inputSpec>
            <language>java</language>
            <generateModels>true</generateModels>
            <generateModelDocumentation>false</generateModelDocumentation>
            <generateApis>false</generateApis>
            <generateApiTests>false</generateApiTests>
            <generateModelTests>false</generateModelTests>
            <generateSupportingFiles>false</generateSupportingFiles>
            <configOptions>
              <sourceFolder>src/main/java</sourceFolder>
              <hideGenerationTimestamp>true</hideGenerationTimestamp>
              <sortParamsByRequiredFlag>true</sortParamsByRequiredFlag>
              <checkDuplicatedModelName>true</checkDuplicatedModelName>
              <useBeanValidation>true</useBeanValidation>
              <library>feign</library>
              <dateLibrary>java8-localdatetime</dateLibrary>
            </configOptions>
          </configuration>
          <executions>
            <execution>
              <id>generate-vehiclemanager-classes</id>
              <goals>
                <goal>generate</goal>
              </goals>
              <configuration>
                <inputSpec>http://localhost:8080/api-docs</inputSpec>
                <language>java</language>
                <modelPackage>com.n47.domain.external.model</modelPackage>
                <modelsToGenerate>Vehicle</modelsToGenerate>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

After running the corresponding Maven profile with:

> mvn clean compile -P generateModels

the models defined in the <modelsToGenerate> tag will be created under the package specified in the <modelPackage> tag.

Codegen generates the entire model class for us, together with all classes that are defined inside it.

It is important to note that we can have models generated from different services. In each <execution> block of the XML snippet we can define the corresponding API documentation link in the <inputSpec> tag.

To demo the data transfer from the Vehiclemanager to the Vehiclemanager-client microservice, we can send a simple request via Postman. The request I am going to use is a GET request that accepts a registrationPlate parameter, which is used to query the vehicles stored in the Vehiclemanager microservice. The response is shown in Figure 3: a JSON containing the vehicle’s data that I hardcoded in the Vehiclemanager microservice.
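
For illustration, the call on the client side could then be as simple as the hedged sketch below, which reuses the generated Vehicle model (the controller class and its endpoint path are made up for this example):

@RestController
@RequiredArgsConstructor
public class VehicleClientCtrl {

    private final RestTemplate restTemplate;

    @GetMapping(path = "/vehicle-info")
    public Vehicle getVehicleInfo(@RequestParam String registrationPlate) {
        // Calls the vehicle-manager microservice and deserializes the response
        // directly into the generated com.n47.domain.external.model.Vehicle class
        return restTemplate.getForObject(
                "http://localhost:8080/vehicle/get?registrationPlate={plate}",
                Vehicle.class,
                registrationPlate);
    }
}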

Using OpenAPI helps us get rid of copy-pasted and boilerplate code, and more importantly, gives us an automated mechanism that, on each Maven clean compile, generates the latest models from other microservices.

You can find the full code of the example microservices in the links below:

Feel free to download and run them yourself, and leave a comment or feedback.

How to integrate GraphQL in a Microservice

Reading Time: 4 minutes

GraphQL is a query language for your APIs and a runtime for fulfilling those queries with existing data. GraphQL provides a complete and understandable description of the data in your API and gives clients the power to ask for exactly what they need and nothing more. GraphQL is designed to make APIs fast, flexible, and developer-friendly.

GraphQL SPQR

GraphQL SPQR (GraphQL Schema Publisher & Query Resolver, pronounced like speaker) is a simple-to-use library for rapid development of GraphQL APIs in Java. It works by dynamically generating a GraphQL schema from Java code.

In this tutorial, we are going to explain the simple steps for integrating GraphQL into your microservice.

  • Include dependencies in pom.xml
<!-- GraphQL -->
<dependency>
    <groupId>io.leangen.graphql</groupId>
    <artifactId>spqr</artifactId>
    <version>${graphql-spqr.version}</version>
</dependency>
<dependency>
    <groupId>com.graphql-java-kickstart</groupId>
    <artifactId>graphql-spring-boot-autoconfigure</artifactId>
    <version>${graphql-spring-boot-autoconfigure.version}</version>
</dependency>
  • Spring Boot Java Configuration class:
@Configuration
public class GraphQLConfiguration {
    @Bean
    public GraphQLSchema schema(GraphQLRootQuery graphQLRootQuery,
                                GraphQLRootMutation graphQLRootMutation,
                                GraphQLRootSubscription graphQLRootSubscription,
                                GraphQLResolvers graphQLResolvers) {
        GraphQLSchema schema = new GraphQLSchemaGenerator()
            .withBasePackages("com.myproject.microservices")
            .withOperationsFromSingletons(graphQLRootQuery, graphQLRootMutation, graphQLRootSubscription, graphQLResolvers)
            .generate();
        return schema;
    }

    @Bean
    public GraphQLResolvers resolvers(MyOtherMicroserviceClient myOtherMicroserviceClient) {
        return new GraphQLResolvers(myOtherMicroserviceClient);
    }

    @Bean
    public GraphQLRootQuery query(MyOtherMicroserviceClient myOtherMicroserviceClient) {
        return new GraphQLRootQuery(myOtherMicroserviceClient);
    }

    @Bean
    public GraphQLRootMutation mutation(MyOtherMicroserviceClient myOtherMicroserviceClient) {
        return new GraphQLRootMutation(myOtherMicroserviceClient);
    }

    // define your own scalar types (custom data type) if you need to.
    @Bean
    public GraphQLEnumProperty graphQLEnumProperty() {
        return new GraphQLEnumProperty();
    }

    @Bean
    public JsonScalar jsonScalar() {
        return new JsonScalar();
    }

    /* Add your own custom error handler if you need to.
    This is needed if you want to propagate custom error messages to the client. */
    @Bean
    public GraphQLErrorHandler errorHandler() {
        ....
    }

}
  • GraphQL class for query operations:
public class GraphQLRootQuery {

    @GraphQLQuery(description = "Retrieve list of your attributes by search criteria")
    public List<AttributeDTO> getMyAttributes(@GraphQLId @GraphQLArgument(name = "id", description = "Id of your attribute") String id,
                                              @GraphQLArgument(name = "myQueryParam", description = "…") String myQueryParam) {
        return …;
    }
}
  • GraphQL class for mutation operations:
public class GraphQLRootMutation {

    @GraphQLMutation(description = "Update attribute")
    public AttributeDTO updateAttribute(@GraphQLId @GraphQLNonNull @GraphQLArgument(name = "id", description = "Id of your attribute") String id,
                                        @GraphQLArgument(name = "payload", description = "Payload for update") UpdateRequest payload) {
        return …
    }
}
  • GraphQL resolvers:
public class GraphQLResolvers {

    @GraphQLQuery(description = "Retrieve additional information")
    public List<AdditionalInfoDTO> getAdditionalInfo(@GraphQLContext AttributesDTO attributesDTO) {
        return …
    }
}

Note: All the Java classes (AdditionalInfoDTO, AttributesDTO, UpdateRequest) are just example data transfer objects and requests; they need to be replaced with your custom classes in order for the code to compile and run.

  • How to use GraphQL from the client side?

Finally, let’s have a look at how GraphQL looks from the front-end side. We are using a tool called GraphiQL (https://www.electronjs.org/apps/graphiql) to test it.

  • GraphQL Endpoint: the URL of your service, defaults to /graphql
  • Method: it is always POST
  • HTTP Header: you can include authorization tokens with the request
  • Left pane: the query, written in the GraphQL query language
  • Right pane: the response from the server, always in JSON
  • Note: you get what you request; only the attributes you ask for are returned

Simple examples for query and mutation:
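
Since the screenshots are not reproduced here, hedged examples of what such a query and mutation could look like against the API above (the selected fields of AttributeDTO and the payload fields are made up; GraphQL SPQR derives the operation names myAttributes and updateAttribute from the annotated methods):

# Query: ask only for the fields we actually need
{
  myAttributes(id: "1", myQueryParam: "example") {
    id
    name
  }
}

# Mutation: update an attribute and select fields from the result
mutation {
  updateAttribute(id: "1", payload: { name: "new-name" }) {
    id
    name
  }
}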

In this tutorial, you learned how to create your GraphQL API in Java with Spring Boot. But you are not limited to Spring Boot for that: you can use GraphQL SPQR in pretty much any Java environment.

Scaling Microservices with Spring Cloud Netflix

Reading Time: 10 minutes

If you need to build large distributed systems, then this is the place to be. We are going to talk about some of the components that the solution from Spring and Netflix provides and how easy it is to use them. If you follow this article, in the end, you will create a complete application with service discovery, client-side load balancing, feign clients and much more.

Before we start, let’s explain some of the terms that we are going to use in this article:

  • Eureka – a service discovery service, where every client will register itself
  • Ribbon – a client-side load balancer
  • Feign client – declarative web service client which provides communication between microservices

The picture above presents what we are going to build. We will create two user-oriented microservices, one called Supplier and the other called Order. The user will be able to place an order with a supplier, and the supplier, in order to perform the order, will call the Order microservice. For the communication between Supplier and Order, we will use a Feign client in combination with service discovery, which will be enabled by Eureka. In the end, we are going to scale the Order microservice and see how the Ribbon load balancer works when we have multiple instances.

Let’s start by creating the Eureka service discovery microservice.

The easiest way is to go to Spring Initializr and create your microservice with the following properties, as you can see in the picture below:

The only required dependency for our service discovery service is the Eureka Server.

Once you are done with this, click on Generate and your project will be downloaded. Open it in your favourite IDE (I will be using IntelliJ); there are just two more things that you need to do. In your main class, you should add the @EnableEurekaServer annotation:

package com.north;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class EurekaApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaApplication.class, args);
    }

}

One more thing that we need to change is our application.yml file. By default, an application.properties file is created, so we will rename it to application.yml and add the following code:

server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

With this, we set the server port and the service URL. And there we have our first service discovery server. Start the application, go to your browser and enter the following link: http://localhost:8761. Now we should be able to see the Eureka homepage:

As you can see, there are no instances registered at the moment. So let’s create our first instance.

We will start by creating the Order microservice. Go to Spring Initializr and create a project with the following properties:

And we will add the following dependencies:

Let’s start by setting the name and the port of the application. Change your application.properties to application.yml and add the following code:

spring:
  application:
    name: order

server:
  port: 8082

Now the name of the application is set to order and the application will run on port 8082. If this port is taken on your machine, feel free to change it. We are not going to depend on this port, but you will see that we will depend on the application name when we want to communicate with the service.

In order to enable this instance to be discovered by Eureka we need to add the following annotation to the main class:

package com.north.order;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@EnableDiscoveryClient
@SpringBootApplication
public class OrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }

}

Now if we start the application and go back to the Eureka homepage at http://localhost:8761, we should be able to see that this instance is registered with Eureka.

Since we confirmed that this instance is registered with Eureka, we can now create an endpoint through which an order can be placed. First, let’s create an Order entity:

package com.north.order.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Order {

    private String productName;
    private int quantity;
}

It is a simple entity that contains the name of the product and how many pieces of it we want to order. The REST controller should contain the following logic:

package com.north.order.ctrl;

import com.north.order.domain.Order;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Slf4j
public class OrderController {

    @PostMapping(value = "/order")
    ResponseEntity<Void> placeOrder(@RequestBody Order order) {
        log.info("Placing an order for product: {} with quantity: {}", order.getProductName(), order.getQuantity());
        return ResponseEntity.ok().body(null);
    }
}

You can test this endpoint using Postman or a similar tool, but we want the Supplier microservice to call it.

Now that we are done with the Order microservice, let’s build the Supplier. Again, we will open Spring Initializr and create a project with the following properties:

And we will have the following dependencies:

Generate the project and import it into your IDE. First, let’s change the application.properties file by changing its extension to yml and adding the following code:

spring:
  application:
    name: supplier
server:
  servlet:
    context-path: /supplier

With this, we have set the application name and a context path. Since we didn’t change the port here, the default 8080 will be used. In order to register this instance with Eureka and to be able to use the Feign client, we need to add the following two annotations to our main class:

package com.north.supplier;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class SupplierApplication {

    public static void main(String[] args) {
        SpringApplication.run(SupplierApplication.class, args);
    }

}

The next thing is to create the same Order entity as we have in the previous microservice.

package com.north.supplier.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Order {

    private String productName;
    private int quantity;
}

For communication with the Order microservice we will create a feign client called OrderClient:

package com.north.supplier.service;

import com.north.supplier.domain.Order;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;

@FeignClient("order")
public interface OrderClient {

    @PostMapping("/order")
    void performOrder(@RequestBody Order order);
}

As the value of the @FeignClient annotation, we need to use the application name of the microservice that we want to communicate with, in our case order. The method written here is the one that will call the endpoint previously exposed by the Order microservice. Let’s create a service that will use this Feign client to execute an order:

package com.north.supplier.service;

import com.north.supplier.domain.Order;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

@Slf4j
@Service
@RequiredArgsConstructor
public class OrderService {

    private final OrderClient orderClient;

    public void placeOrder(Order order) {
        log.info("Requesting order ms to place an order");
        orderClient.performOrder(order);
    }
}

At the end, we will expose an endpoint that we can use to test this scenario:

package com.north.supplier.ctrl;

import com.north.supplier.domain.Order;
import com.north.supplier.service.OrderService;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequiredArgsConstructor
@Slf4j
public class OrderController {

    private final OrderService orderService;

    @RequestMapping(value = "/order")
    public ResponseEntity<Void> placeOrder(@RequestBody Order order) {
        orderService.placeOrder(order);
        return ResponseEntity.ok().body(null);
    }
}

Now that we are done, let’s start the application. First, if we check the Eureka homepage, we should be able to see this instance registered as well. You can also see this in the console where the Supplier is being started:

2020-09-20 20:02:43.907  INFO 7956 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_SUPPLIER/host.docker.internal:supplier: registering service...
2020-09-20 20:02:43.911  INFO 7956 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_SUPPLIER/host.docker.internal:supplier - registration status: 204

To test this complete scenario, make sure that all three applications are started and that the Order and Supplier are registered to Eureka. Using Postman, I will send a POST request to the endpoint on the Supplier microservice, and I should be able to see the order being placed in the Order microservice:

Just make sure that in the Headers tab you have added a header with key Content-Type and value application/json. What should happen when we execute this request? In the Supplier microservice console we should see the following log:

2020-09-20 20:20:36.674  INFO 19420 --- [nio-8080-exec-4] com.north.supplier.service.OrderService  : Requesting order ms to place an order

In the Order microservice console we should see:

2020-09-20 20:20:36.678  INFO 17800 --- [io-8082-exec-10] com.north.order.ctrl.OrderController     : Placing an order for product: bread with quantity: 300
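
For reference, the same request could also be sent with curl instead of Postman (a sketch, assuming the Supplier runs on port 8080, as the log output above suggests):

curl -X POST http://localhost:8080/order -H "Content-Type: application/json" -d "{\"productName\": \"bread\", \"quantity\": 300}"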

At this point, we have managed to create three microservices: two for business purposes and one for service discovery. We used a Feign client for communication between the microservices. At some point, if we decide to grow this application, orders pile up, and we add some complex logic to our Order microservice, we will reach a point where the Order microservice won’t be able to execute all the orders. Let’s see what happens if we scale our Order microservice.

First, stop the Order microservice from your IDE. Just be sure that Eureka and the Supplier are still running. Now go to the folder of the Order project (something like …\Documents\blog\order) and open three command prompt windows in that location. In the first one, we will type the following command:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8084"

in the second:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8085"

in the third:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8086"

It should be something like this:

Three instances of the application should now be up and running on the ports that we previously specified. If you open the Eureka home page again, you should be able to see all three instances registered. Now go back to Postman and execute the same POST call to the Supplier as we did previously, and do this many times, as fast as possible. If you now take a look at the command prompt windows that we opened, you should see that a different instance of the Order microservice is called every time. This load balancing is provided out of the box by Ribbon on the client side (in this case the Supplier microservice), without adding any additional code. As we mentioned before, we do not depend on the port; we use the application name so that the Supplier can send a request to Order.

To summarize, our Supplier microservice became aware of all the instances, and it now sends each request to a different instance of Order so that the load is balanced.
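
As a side note, Feign is not the only way to get this behaviour. The same name-based resolution and load balancing could also be achieved with a plain RestTemplate. Here is a minimal sketch (not part of the example project; the class name is illustrative):

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    // @LoadBalanced tells Spring Cloud to resolve service names like "order"
    // through the service registry and to balance requests across the instances
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

With this bean in place, a call such as restTemplate.postForEntity("http://order/order", order, Void.class) would be spread across the registered Order instances in the same way.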

Hope that you enjoyed this article and had fun with it. The code is available on our N47 Bitbucket:

Securing your microservices with OAuth 2.0. Building Authorization and Resource server

Reading Time: 8 minutes

We live in a world of microservices. They give us an easy opportunity to scale our application. But as we scale our application, it becomes more and more vulnerable. We need to think about how to protect our services and how to keep the wrong people from accessing protected resources. One way to do that is by enabling user authorization and authentication. With authorization and authentication, we need a way to manage credentials, check the access of the requester and make sure people are doing what they are supposed to.

When we speak about Spring (Cloud) Security, we are talking about service authorization powered by OAuth 2.0. This is how it works:

The actors in this OAuth 2.0 scenario that we are going to discuss are:

  • Resource Owner – Entity that grants access to a resource, usually you!
  • Resource Server – Server hosting the protected resource
  • Client – App making protected resource requests on behalf of a resource owner
  • Authorization Server – Server issuing access tokens to clients

The client will ask the resource owner to authorize itself. When the resource owner provides an authorization grant, the client sends the request to the authorization server. The authorization server replies by sending an access token to the client. Now that the client has an access token, it will put it in the header and ask the resource server for the protected resource. And finally, the client will get the protected data.

Now that everything is clear about how the general OAuth 2.0 flow is working, let’s get our hands dirty and start writing our resource and authorization server!

Building OAuth 2.0 Authorization Server

Let’s start by creating our authorization server using the Spring Initializr. Create a project with the following configuration:

  • Project: Maven Project
  • Artefact: auth-server
  • Dependencies: Spring Web, Cloud Security, Cloud OAuth2

Download the project, copy it into your workspace and open it via your IDE. Go to your main class and add the @EnableAuthorizationServer annotation.

@SpringBootApplication
@EnableAuthorizationServer
public class AuthServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(AuthServerApplication.class, args);
    }

}

Go to the application.properties file and make the following modifications:

  • Change the server port to 8083
  • Set the context path to be “/api/auth”
  • Set the client id to “north47”
  • Set the client secret to “north47secret”
  • Enable all authorized grant types
  • Set the client scope to read and write

server.port=8083
server.servlet.context-path=/api/auth
security.oauth2.client.client-id=north47
security.oauth2.client.client-secret=north47secret
security.oauth2.client.authorized-grant-types=authorization_code,password,refresh_token,client_credentials
security.oauth2.client.scope=read,write

The client id is a public identifier for applications. The way that we used it is not a good practice for a production environment. It is usually a 32-character hex string so that it isn’t easily guessable.

Let’s add some users to our application. We are going to use in-memory users, and we will achieve that by creating a new class ServiceConfig. Create a package called “config” with the following path: com.north47.authserver.config and in there create the above-mentioned class:

@Configuration
public class ServiceConfig extends GlobalAuthenticationConfigurerAdapter {

    @Override
    public void init(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
                .withUser("filip")
                .password(passwordEncoder().encode("1234"))
                .roles("ADMIN");
    }

    @Bean
    public BCryptPasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}

With this we are defining one user with username ‘filip’ and password ‘1234’ with the role ADMIN. We also define a BCryptPasswordEncoder bean so we can encode our password.

In order to authenticate the users that arrive from another service, we are going to add another class called UserResource into the newly created package resource (com.north47.authserver.resource):

@RestController
public class UserResource {

    @RequestMapping("/user")
    public Principal user(Principal user) {
        return user;
    }
}

When other services send a token for validation, the user will also be validated through this method.

And that’s it! Now we have our authorization server! The authorization server provides some default endpoints, which we are going to see when we test the resource server.

Building Resource Server

Now let’s build our resource server where we are going to keep our secure data. We will do that with the help of the Spring Initializr. Create a project with the following configuration:

  • Project: Maven Project
  • Artefact: resource-server
  • Dependencies: Spring Web, Cloud Security, Cloud OAuth2

Download the project and copy it into your workspace. First, we are going to create our entity called Train. Create a new package called domain in com.north47.resourceserver and create the class there.

public class Train {

    private int trainId;
    private boolean express;
    private int numOfSeats;

    public Train(int trainId, boolean express, int numOfSeats) {
        this.trainId = trainId;
        this.express = express;
        this.numOfSeats = numOfSeats;
    }

    public int getTrainId() {
        return trainId;
    }

    public void setTrainId(int trainId) {
        this.trainId = trainId;
    }

    public boolean isExpress() {
        return express;
    }

    public void setExpress(boolean express) {
        this.express = express;
    }

    public int getNumOfSeats() {
        return numOfSeats;
    }

    public void setNumOfSeats(int numOfSeats) {
        this.numOfSeats = numOfSeats;
    }

}

Let’s create a resource that will expose an endpoint from where we can get the protected data. Create a new package called resource and there create a class TrainResource. We will have only one method that exposes an endpoint behind which we can get the protected data.

@RestController
@RequestMapping("/train")
public class TrainResource {


    @GetMapping
    public List<Train> getTrainData() {

        return Arrays.asList(new Train(1, true, 100),
                new Train(2, false, 80),
                new Train(3, true, 90));
    }
}

Let’s start the application and send a GET request to http://localhost:8082/api/services/train. You will be asked to enter a username and password. The username is user and the password can be found in the console where the application was started. Entering these credentials will give you the protected data.
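
The same check could be done with curl, substituting the generated password from the console (the placeholder is intentional):

curl -u user:<generated-password> http://localhost:8082/api/services/train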

Let’s change the application now to be a resource server by going to the main class ResourceServerApplication and adding the annotation @EnableResourceServer.

@SpringBootApplication
@EnableResourceServer
public class ResourceServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ResourceServerApplication.class, args);
    }

}

Go to the application.properties file and make the following changes:

server.port=8082
server.servlet.context-path=/api/services
security.oauth2.resource.user-info-uri=http://localhost:8083/api/auth/user 

What we have done here is:

  • Changed our server port to 8082
  • Set context path: /api/services
  • Set the user info URI where the user will be validated when a token is passed

Now if you try to get the protected data by sending a GET request to http://localhost:8082/api/services/train, the server will respond with a message that you are unauthorized and that full authentication is required. That means that without a token you won’t be able to access the resource.

So we need a fresh new token in order to get the data. We will ask the authorization server to give us a token for the user that we previously created. Our client in this scenario will be Postman. The authorization server that we previously created exposes some endpoints out of the box. To ask the authorization server for a fresh new token, send a POST request to the following URL: localhost:8083/api/auth/oauth/token.

As said previously, Postman in this scenario is the client that is accessing the resource, so it needs to send the client credentials to the authorization server. Those are the client id and the client secret. Go to the authorization tab and add the client id (north47) as the username and the client secret (north47secret) as the password. The picture below shows how to set up the request:

What is left is to provide the username and password of the user. Open the body tab, select x-www-form-urlencoded and add the following values:

  • key: ‘grant_type’, value: ‘password’
  • key: ‘client_id’, value: ‘north47’
  • key: ‘username’, value: ‘filip’
  • key: ‘password’, value: ‘1234’

Press send and you will get a response with the access_token:

{
    "access_token": "ae27c519-b3da-4da8-bacd-2ffc98450b18",
    "token_type": "bearer",
    "refresh_token": "d97c9d2d-31e7-456d-baa2-c2526fc71a5a",
    "expires_in": 43199,
    "scope": "read write"
}
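
For reference, the same token request could be made with curl, sending the client credentials via basic authentication:

curl -u north47:north47secret -d "grant_type=password&username=filip&password=1234" http://localhost:8083/api/auth/oauth/token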

Now that we have the access token, we can call our protected resource by inserting the token into the header of the request. Open Postman again and send a GET request to localhost:8082/api/services/train. Open the Headers tab; this is where we will insert the access token. For the key add “Authorization” and for the value add “Bearer ae27c519-b3da-4da8-bacd-2ffc98450b18”.
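
Or, equivalently, with curl (your token will differ):

curl -H "Authorization: Bearer ae27c519-b3da-4da8-bacd-2ffc98450b18" http://localhost:8082/api/services/train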


And there it is! You have authorized yourself and got a token which allowed you to get the protected data.

You can find the projects in our repository:

And that’s it! Hope you enjoyed it!

Spring Boot 2.0 New Features

Reading Time: 5 minutes

Spring Boot is the most used framework by Java developers for creating microservices. The first version, Spring Boot 1.0, was released in January 2014. Many releases followed, but Spring Boot 2.0 is the first major release since its launch. Spring Boot 2.0 was released in March 2018, and at the time of writing this blog, the most recent version is 2.1.3, released on 15th February 2019.

There are many changes which will break your existing application if you want to upgrade from Spring Boot 1.x to 2.x; a migration guide is described here.

We are using Spring Boot 2.0 too 💻!

Currently, here at N47, we are implementing different services and also in-house developed products. We decided to use Spring Boot 2.0 and we already have a blog post about Deploy Spring Boot Application on Google Cloud with GitLab. Check it out, and if you have any questions, feel free to use the commenting functionality 💬.

Java

Spring Boot 2.0 requires Java 8 as the minimum version and also supports Java 9. If you are using Java 7 or earlier and want to use Spring Boot 2.0, that’s not possible; you have to upgrade to Java 8 or 9. Also, Spring Boot 1.5 does not support Java 9 or later versions of Java.

Spring Boot 2.1 also supports Java 11. It has continuous integration configured to build and test Spring Boot against the latest Java 11 release.

Gradle Plugin

Spring Boot’s Gradle plugin 🔌 has been mostly rewritten to enable a number of significant improvements. Spring Boot 2.0 now requires Gradle 4.x.

Third-party Library Upgrades

Spring Boot builds on Spring Framework. Spring Boot 2.0 requires Spring Framework 5, while Spring Boot 2.1 requires Spring Framework 5.1.

Spring Boot has upgraded to the latest stable releases of other third-party jars wherever possible. Some notable dependency upgrades in the 2.0 release include:

  • Tomcat 8.5
  • Flyway 5
  • Hibernate 5.2
  • Thymeleaf 3

Some notable dependency upgrades in the 2.1 release include:

  • Tomcat 9
  • Undertow 2
  • Hibernate 5.3
  • JUnit 5.2
  • Micrometer 1.1

Reactive Spring

Many projects in the Spring portfolio are now providing first-class support for developing reactive applications. Reactive applications are fully asynchronous and non-blocking. They’re intended for use in an event-loop execution model (instead of the more traditional one thread-per-request execution model).

Spring Boot 2.0 fully supports reactive applications via auto-configuration and starter POMs. The internals of Spring Boot itself have also been updated where necessary to offer reactive alternatives.

Spring WebFlux & WebFlux.fn

Spring WebFlux is a fully non-blocking reactive alternative to Spring MVC. Spring Boot provides auto-configuration for annotation-based Spring WebFlux applications, as well as for WebFlux.fn, which offers a more functional style API. To get started, use the spring-boot-starter-webflux starter POM, which will provide Spring WebFlux backed by an embedded Netty server.
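
To give an idea of the functional style, here is a minimal sketch of a WebFlux.fn endpoint (the class name and the route are invented for this illustration):

import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.BodyInserters;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.ServerResponse;

@Configuration
public class GreetingRouter {

    // Routes are declared as data instead of annotations: GET /hello is
    // mapped to a handler lambda that builds the response reactively
    @Bean
    public RouterFunction<ServerResponse> helloRoute() {
        return route(GET("/hello"),
                request -> ServerResponse.ok()
                        .body(BodyInserters.fromObject("Hello from WebFlux.fn")));
    }
}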

Reactive Spring Data

Where the underlying technology enables it, Spring Data also provides support for reactive applications. Currently, Cassandra, MongoDB, Couchbase and Redis all have reactive API support.

Spring Boot includes special starter POMs for these technologies that provide everything you need to get started. For example, spring-boot-starter-data-mongodb-reactive includes dependencies on the reactive Mongo driver and Project Reactor.
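
As an illustration, a reactive repository could look like the following sketch (the Person domain class and the query method are invented for this example; in a real project each type would live in its own file):

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.repository.ReactiveMongoRepository;
import reactor.core.publisher.Flux;

// A simple mapped domain class
class Person {
    @Id
    private String id;
    private String lastName;
    // getters and setters omitted for brevity
}

// The repository returns Flux/Mono instead of List/Optional, so results
// are emitted as they arrive without blocking a thread
interface PersonRepository extends ReactiveMongoRepository<Person, String> {
    Flux<Person> findByLastName(String lastName);
}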

Reactive Spring Security

Spring Boot 2.0 can make use of Spring Security 5.0 to secure your reactive applications. Auto-configuration is provided for WebFlux applications whenever Spring Security is on the classpath. Access rules for Spring Security with WebFlux can be configured via a SecurityWebFilterChain. If you’ve used Spring Security with Spring MVC before, this should feel quite familiar.
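
A minimal sketch of such a configuration, using the Spring Security 5.0 style API (the path patterns are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity;
import org.springframework.security.config.web.server.ServerHttpSecurity;
import org.springframework.security.web.server.SecurityWebFilterChain;

@EnableWebFluxSecurity
public class SecurityConfig {

    // The WebFlux counterpart of HttpSecurity: rules are applied per exchange
    @Bean
    public SecurityWebFilterChain springSecurityFilterChain(ServerHttpSecurity http) {
        return http
                .authorizeExchange()
                    .pathMatchers("/public/**").permitAll() // open endpoints
                    .anyExchange().authenticated()          // everything else secured
                .and()
                .httpBasic()
                .and()
                .build();
    }
}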

Embedded Netty Server

Since WebFlux does not rely on the Servlet APIs, Spring Boot is now able to support Netty as an embedded server for the first time. The starter spring-boot-starter-webflux POM will pull in Netty 4.1 and Reactor Netty.

HTTP/2 Support

HTTP/2 support is provided for Tomcat, Undertow and Jetty. Support depends on the chosen web server and the application environment.

Kotlin

Spring Boot 2.0 now includes support for Kotlin 1.2.x and offers a function runApplication, which provides a way to run a Spring Boot application using Kotlin.

Actuator Improvements

There have been many improvements and refinements to the actuator endpoints with Spring Boot 2.0. All HTTP actuator endpoints are now exposed under the /actuator path, and the resulting JSON payloads have been improved.
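
For example, in Spring Boot 2.0 only the health and info endpoints are exposed over HTTP by default; additional ones have to be opted in via configuration, e.g.:

management.endpoints.web.exposure.include=health,info,metrics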

Data Support

In addition to the “Reactive Spring Data” support mentioned above, several other updates and improvements have been made in the area of data:

  • HikariCP
  • Initialization
  • JOOQ
  • JdbcTemplate
  • Spring Data Web Configuration
  • Influx DB
  • Flyway/Liquibase Flexible Configuration
  • Hibernate
  • MongoDB Client Customization
  • Redis

Here I have only listed the changes in data support, but a detailed description of each topic is available here.

Animated ASCII Art

Finally, Spring Boot 2.0 also provides support for animated GIF banners.
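
Dropping a banner.gif (or banner.jpg/banner.png) into src/main/resources is enough for Spring Boot to pick it up; the location can also be set explicitly in application.properties:

spring.banner.image.location=classpath:banner.gif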

For a complete overview of the changes in configuration, go here; the release notes for 2.1 are available here.

Microservices vs. Monoliths

Reading Time: 3 minutes

What are microservices and what are monoliths?

The difference between monolith and microservice architecture

The task that microservices perform is quite simple: the mapping of software into modules. Now the statement could be made that classes, packages etc. also fulfil the same task. That’s right, but the main difference lies in deployment. It is possible to deploy a microservice without “touching” the other microservices.

Classic monoliths, on the other hand, force deployment of the entire “project”.

Advantages of microservices and disadvantages of monoliths

1. Imagine that you are working on a project that contains thousands or even tens of thousands of lines of code. With each new function, the lines of code grow. Every developer loses the overview here, some a little earlier, others a little later. Ultimately, it becomes impossible to keep track.
In addition, with each new feature, strange side effects appear elsewhere. This makes it very difficult to locate bugs and robs every developer of their last nerve.
Unlike monoliths, microservices are defined as small modules. Each microservice serves a specific task. Thus, manageability becomes a lot easier.

2. The data of a monolith is located in one pool, which each submodule can access via an interface. If you make a change to the data structure, you have to adapt each submodule, otherwise you have to expect errors.
Microservices are responsible for their own data, and their structure is irrelevant to other services. Each service can define its own structure. Changes to the structure also have no impact on other services, which saves a lot of time and, above all, prevents errors.

3. A microservice depends only on the microservices it communicates with, so that in the event of a bug the entire system does not fail. In the monolithic approach, however, a bug in one module means the failure of the entire system.

4. Another disadvantage arises with an update. With a monolith, the entire application has to be reinstalled, which costs an enormous amount of time.
With microservices, only the services where changes have been made are reinstalled. This saves time and nerves.

5. Detecting errors in the monolithic approach can take a long time for large projects.
Microservices, on the other hand, are “small” and greatly simplify troubleshooting.

6. The team behind a monolithic architecture works as one whole, which makes technical coordination difficult.
With a microservice architecture, however, the work is divided among small teams, so that technical coordination is simplified.

Conclusion

The microservice approach divides a big task into small subtasks. This method greatly simplifies the work for developers because, on the one hand, it is easy to keep an overview (in contrast to the monolithic approach) and, on the other hand, the microservices are independent of one another.