Microservice architecture: Using Java thread locals and Tomcat/Spring capabilities for automated information propagation

Reading Time: 13 minutes

Inter-microservice communication has always brought questions and challenges to software architects. For example, when it comes to propagating certain information (via HTTP headers, for instance) through a whole chain of calls in the scope of one transaction, we want this to happen outside of the microservices’ business logic. We do not want to handle these headers in the presentation or service layers of the application, especially if they are not important for completing some business logic task. I would like to show you how you can automate this process using Java thread locals and Tomcat/Spring capabilities, with a simple microservice architecture as an illustration.

Architecture overview


This is the sample architecture we will be looking at. We have a Zuul Proxy Server that will act as a gateway towards our two microservices: the Licensing Microservice and the Organization Service. Those three will be the main focus of this article. Let’s say that a single License belongs to a single Organization and a single Organization can deal with multiple Licenses. Additionally, our services are registered to a Eureka Service Discovery and they pull their application config from a separate Configuration Server.

Simple enough, right? Now, what is the goal we want to achieve?

Let’s say that we have some sort of HTTP headers related to authentication, or to tracking the chain of calls the application goes through. These headers arrive at the proxy along with each request from the client side, and they need to be propagated to each microservice participating in completing the user’s request. For simplicity’s sake, let’s introduce two made-up HTTP headers that we need to send: correlation-id and authentication-token. You may say: “Well, the Zuul proxy gateway will automatically propagate those to the corresponding microservices, if not stated otherwise in its config”. And you are correct, because this gateway has an out-of-the-box feature for achieving exactly that. But what happens when we have inter-microservice communication, for example, between the Licensing Microservice and the Organization Microservice? The Licensing Microservice needs to call the Organization Microservice in order to complete some task, and the Organization Microservice needs to receive those headers somehow. The “go-to, technical debt” solution would be to read the headers in the controller/presentation layer of our application, pass them down to the business logic in the service layer, which in turn passes them to our configured HTTP client, which finally sends them to the Organization Microservice. Ugly, right? What if we have dozens of microservices and need to do this in each and every single one of them? Luckily, there is a much prettier solution that uses a neat Java feature: ThreadLocal.

Java thread locals

The Java ThreadLocal class provides us with thread-local variables. What does this mean? Simply put, it enables us to tie a context (all kinds of objects) to a certain thread, which can later be accessed from anywhere in the application, as long as we access it from the thread that set it up initially. Let’s look at an example:

public class Main {

    public static final ThreadLocal<String> threadLocalContext = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        threadLocalContext.set("Hello from parent thread!");
        Thread childThread = new Thread(() -> {
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints null
        });
        childThread.start();
        childThread.join(); // Waiting for the child thread to finish
        System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
    }
}

We have a single static final ThreadLocal<String> reference that we use for setting some information to the thread (in this case, the string “Hello from parent thread!” to the main thread). Accessing this variable via threadLocalContext.get() (no matter in which class we are, as long as we are on the same main thread) produces the expected string we have set previously. Accessing it in a child thread produces a null result. What if we set some context to the child thread as well:

threadLocalContext.set("Hello from parent thread!");
Thread childThread = new Thread(() -> {
    threadLocalContext.set("Hello from child thread!");
    System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from child thread!"
});
childThread.start();
childThread.join(); // Waiting for the child thread to finish
System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"

We can notice that the two threads have completely separate contexts. Even though they access the same threadLocalContext reference, in the background, the context is relative to the calling thread. What if we wanted the child thread to inherit its parent context:

public class Main {

    private static final ThreadLocal<String> threadLocalContext = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        threadLocalContext.set("Hello from parent thread!");
        Thread childThread = new Thread(() -> {
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
            threadLocalContext.set("Hello from child thread!");
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from child thread!"
        });
        childThread.start();
        childThread.join(); // Waiting for the child thread to finish
        System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
    }
}

We only changed the ThreadLocal to an InheritableThreadLocal in order to achieve that. We can notice that the first print inside the child thread no longer renders null. The moment we set another context on the child thread, the two contexts become disconnected and the parent keeps its old one. Note that with InheritableThreadLocal, the reference to the parent context gets copied to the child, meaning: this is not a deep copy, but two references pointing to the same object (in this case, the string “Hello from parent thread!”). If, for example, we used InheritableThreadLocal<SomeCustomObject> and directly modified some property of the object from within the child thread (threadLocalContext.get().setSomeProperty("some value")), this change would also be reflected in the parent thread, and vice versa. If we want to disconnect the two contexts completely, we just call .set(new SomeCustomObject()) on one of the threads, which points its local reference to the newly created object.
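
To make the shared-reference behaviour concrete, here is a small runnable sketch (SomeCustomObject and its property are hypothetical stand-ins for the class mentioned above): a property mutated through the inherited reference is visible to the parent, while setting a fresh object on the child is not.

```java
public class SharedContextDemo {

    // Hypothetical stand-in for the SomeCustomObject mentioned in the text
    static class SomeCustomObject {
        private String someProperty;

        public String getSomeProperty() { return someProperty; }
        public void setSomeProperty(String value) { someProperty = value; }
    }

    private static final ThreadLocal<SomeCustomObject> threadLocalContext = new InheritableThreadLocal<>();

    public static String runDemo() {
        SomeCustomObject parentObject = new SomeCustomObject();
        parentObject.setSomeProperty("set by parent");
        threadLocalContext.set(parentObject);

        Thread childThread = new Thread(() -> {
            // The child inherited a reference to the SAME object,
            // so this mutation is visible to the parent as well
            threadLocalContext.get().setSomeProperty("mutated by child");
            // Setting a brand new object only disconnects the child's context
            threadLocalContext.set(new SomeCustomObject());
        });
        childThread.start();
        try {
            childThread.join(); // Waiting for the child thread to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }

        return threadLocalContext.get().getSomeProperty();
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // Prints "mutated by child"
    }
}
```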

Now, you may be wondering: “What does this have to do with automatically propagating headers to a microservice?”. Well, with Servlet containers such as Tomcat (which Spring Boot embeds by default), each new HTTP request (whether we like/know it or not :-)) is handled in a separate thread. The servlet container picks an idle thread from its dedicated thread pool each time a new call comes in. This thread is then used by Spring Boot throughout the processing of the request and the return of the response. Now, it is only a matter of setting up Spring filters and HTTP client interceptors that set and read the thread-local context containing the HTTP headers.

Solution

First off, let’s create a simple POJO class that is going to contain both of the headers that need propagating:

@Getter
@Setter
@ToString
@AllArgsConstructor
@NoArgsConstructor
public class RequestHeadersContext {

    public static final String CORRELATION_ID = "correlation-id";
    public static final String AUTHENTICATION_TOKEN = "authentication-token";

    private String correlationId;
    private String authenticationToken;
}

Next, we will create a utility class for setting and retrieving the thread-local context:

public final class RequestHeadersContextHolder {

    private static final ThreadLocal<RequestHeadersContext> requestHeaderContext = new ThreadLocal<>();

    public static void clearContext() {
        requestHeaderContext.remove();
    }

    public static RequestHeadersContext getContext() {
        RequestHeadersContext context = requestHeaderContext.get();
        if (context == null) {
            context = createEmptyContext();
            requestHeaderContext.set(context);
        }
        return context;
    }

    public static void setContext(RequestHeadersContext context) {
        Assert.notNull(context, "Only not-null RequestHeadersContext instances are permitted");
        requestHeaderContext.set(context);
    }

    public static RequestHeadersContext createEmptyContext() {
        return new RequestHeadersContext();
    }
}

The idea is to have a Spring filter, that is going to read the HTTP headers from the incoming request and place them in the RequestHeadersContextHolder:

@Configuration
public class RequestHeadersServiceConfiguration {

    @Bean
    public Filter getFilter() {
        return new RequestHeadersContextFilter();
    }

    private static class RequestHeadersContextFilter implements Filter {

        @Override
        public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
            HttpServletRequest httpServletRequest = (HttpServletRequest) servletRequest;
            RequestHeadersContext context = new RequestHeadersContext(
                    httpServletRequest.getHeader(RequestHeadersContext.CORRELATION_ID),
                    httpServletRequest.getHeader(RequestHeadersContext.AUTHENTICATION_TOKEN)
            );
            RequestHeadersContextHolder.setContext(context);
            filterChain.doFilter(servletRequest, servletResponse);
        }
    }
}

We created a RequestHeadersServiceConfiguration class which, at the moment, has a single Spring filter bean defined. This filter is going to read the needed headers from the incoming request and set them in the RequestHeadersContextHolder (we will need to propagate those later when we make an outgoing request to another microservice). Afterwards, it will resume the processing of the request and will give control to the other filters that might be present in the filter chain. Keep in mind that, all the while, this code executes within the boundaries of the dedicated Tomcat thread, which the container had assigned to us.

Next, we need to define an HTTP client interceptor and link it to a RestTemplate client, which in turn will execute the interceptor’s code each time before making a request to another microservice. We can add this new RestTemplate bean inside the same configuration file:

@Configuration
public class RequestHeadersServiceConfiguration {
    
    // .....

    @LoadBalanced
    @Bean
    public RestTemplate getRestTemplate() {
        RestTemplate restTemplate = new RestTemplate();
        List<ClientHttpRequestInterceptor> interceptors = restTemplate.getInterceptors();
        interceptors.add(new RequestHeadersContextInterceptor());
        return restTemplate;
    }

    private static class RequestHeadersContextInterceptor implements ClientHttpRequestInterceptor {

        @Override
        @NonNull
        public ClientHttpResponse intercept(@NonNull HttpRequest httpRequest,
                                            @NonNull byte[] body,
                                            @NonNull ClientHttpRequestExecution clientHttpRequestExecution) throws IOException {
            RequestHeadersContext context = RequestHeadersContextHolder.getContext();
            HttpHeaders headers = httpRequest.getHeaders();
            headers.add(RequestHeadersContext.CORRELATION_ID, context.getCorrelationId());
            headers.add(RequestHeadersContext.AUTHENTICATION_TOKEN, context.getAuthenticationToken());
            return clientHttpRequestExecution.execute(httpRequest, body);
        }
    }
}

As you might have guessed, the interceptor reads the header values from the thread-local context and sets them up for the outgoing request. The RestTemplate just adds this interceptor to the list of its already existing ones.

A good practice is to eventually clear/remove the thread-local variables from the thread. With an embedded Tomcat container, missing this point will not pose a problem, since the Tomcat container dies along with the Spring application. This means that all of the threads are destroyed altogether and the thread-local memory is released. However, if we happen to have a separate servlet container and deploy our app as a .war instead of a standalone .jar, not clearing the context might introduce memory leaks. Imagine having multiple applications on our standalone servlet container, each of them messing around with thread locals. The container shares its threads with all of the applications. When one of the applications is shut down, the container continues to run, and the threads it lent to the application do not cease to exist. Hence, the thread-local variables will not be garbage collected, since there are still references to them. That is why we are going to define and add an interceptor to the Spring interceptor registry, which clears the context after a request finishes and before the thread is assigned to other tasks:

@Configuration
public class WebMvcInterceptorsConfiguration implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new RequestHeadersContextClearInterceptor()).addPathPatterns("/**");
    }

    private static class RequestHeadersContextClearInterceptor implements HandlerInterceptor {

        @Override
        public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception exception) {
            RequestHeadersContextHolder.clearContext();
        }
    }
}

All we need to do now is wire these configurations into our microservices. We can create a separate library extracting the config (and maybe upload it to an online repository, such as Maven Central, or our own Nexus) so that we do not need to copy-paste all of the code into each of our microservices. Whatever the case, it is good to make this library easy to use. That is why we are going to create a custom annotation for enabling it:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Import({RequestHeadersServiceConfiguration.class, WebMvcInterceptorsConfiguration.class})
public @interface EnableRequestHeadersService {
}

Usage

Let’s see how we can leverage and use this library from inside a microservice. Only a couple of things are needed.

First, we need to annotate our application with the @EnableRequestHeadersService:

@SpringBootApplication
@EnableRequestHeadersService
public class LicensingServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(LicensingServiceApplication.class, args);
    }
}

Second, we need to inject the already defined RestTemplate in our microservice and use it as given:

@Component
public class OrganizationRestTemplateClient {

    private final RestTemplate restTemplate;

    public OrganizationRestTemplateClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public Organization getOrganization(String organizationId) {
        ResponseEntity<Organization> restExchange = restTemplate.exchange(
                "http://organizationservice/v1/organizations/{organizationId}",
                HttpMethod.GET,
                null,
                Organization.class,
                organizationId
        );
        return restExchange.getBody();
    }
}

We can notice that the getOrganization(String organizationId) method does not handle any HTTP headers whatsoever. It just passes the URL and the HTTP method and lets the imported configuration do its magic. As simple as that! We can now call the getOrganization method wherever we like, without having any sort of knowledge about the headers that are being sent in the background. If we have the need to read them somewhere in our code, or even change them, then we can use the RequestHeadersContextHolder.getContext()/setContext() static methods wherever we like in our microservice, without the need to parse them from the request object.
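
For instance, a hypothetical service that wants to log the correlation id could read it straight from the context (AuditService, its method, and the log format are made up for illustration; only RequestHeadersContextHolder comes from the library above):

```java
@Service
public class AuditService {

    private static final Logger log = LoggerFactory.getLogger(AuditService.class);

    public void logAction(String action) {
        // Read the propagated headers anywhere on the request thread,
        // without ever touching the HttpServletRequest object
        RequestHeadersContext context = RequestHeadersContextHolder.getContext();
        log.info("action={}, correlation-id={}", action, context.getCorrelationId());
    }
}
```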

Feign HTTP Client

If we want to leverage a more declarative type of coding we can always use the Feign HTTP Client. There are ways to configure interceptors here as well, so, using the RestTemplate is not strictly required. We can add the new interceptor configuration to the already existing RequestHeadersServiceConfiguration class:

@Configuration
public class RequestHeadersServiceConfiguration {

    // .....

    @Bean
    public RequestInterceptor getFeignRequestInterceptor() {
        return new RequestHeadersContextFeignInterceptor();
    }

    private static class RequestHeadersContextFeignInterceptor implements RequestInterceptor {

        @Override
        public void apply(RequestTemplate requestTemplate) {
            RequestHeadersContext context = RequestHeadersContextHolder.getContext();
            requestTemplate.header(RequestHeadersContext.CORRELATION_ID, context.getCorrelationId());
            requestTemplate.header(RequestHeadersContext.AUTHENTICATION_TOKEN, context.getAuthenticationToken());
        }
    }
}

The new bean we created is going to automatically be wired as a new Feign interceptor for our client.

Next, in our microservice, we can annotate our application class with @EnableFeignClients and then create our Feign client:

@FeignClient("organizationservice")
public interface OrganizationFeignClient {

    @GetMapping(value = "/v1/organizations/{organizationId}")
    Organization getOrganization(@PathVariable("organizationId") String organizationId);
}

All we need to do now is inject our new client anywhere in our services and use it from there. In comparison to the RestTemplate, this is a more concise way of making HTTP calls.
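
For example, a hypothetical service wrapping the Feign client could look like this (LicenseService and its method are made up for illustration):

```java
@Service
public class LicenseService {

    private final OrganizationFeignClient organizationFeignClient;

    public LicenseService(OrganizationFeignClient organizationFeignClient) {
        this.organizationFeignClient = organizationFeignClient;
    }

    public Organization getLicenseOrganization(String organizationId) {
        // The Feign interceptor attaches the headers behind the scenes
        return organizationFeignClient.getOrganization(organizationId);
    }
}
```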

Asynchronous HTTP requests

What if we do not want to wait for the request to the Organization Microservice to finish, and want to execute it asynchronously and concurrently (using the @EnableAsync and @Async annotations from Spring, for example)? How are we going to access the headers that need to be propagated in this case? You might have guessed it: by using InheritableThreadLocal instead of ThreadLocal. As mentioned earlier, child threads we create separately (aside from the Tomcat ones, which act as the parents) can inherit their parent’s context. That way we can send header-populated requests in an asynchronous manner. One caveat: the context is copied only at thread creation time, so a pooled thread that gets reused keeps the context it inherited when it was created, rather than picking up the current request’s one. There is no need to clear the context for the child threads (side note: clearing it will not affect the parent thread, and clearing the parent thread will not affect the child; it only sets the current thread’s local context reference to null), since these threads come from a separate thread pool that has nothing to do with the container’s one. The child threads’ memory is released after execution, or after the Spring application exits, because eventually they die off.
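
The inheritance-at-creation behaviour is easy to demonstrate with plain Java, no Spring involved (class and header values below are made up): a single-thread pool stands in for the @Async executor, and the second task shows that a reused pooled thread still carries the context it inherited when it was first spawned.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncContextDemo {

    private static final ThreadLocal<String> headerContext = new InheritableThreadLocal<>();

    public static List<String> runDemo() {
        List<String> seen = new ArrayList<>();
        // A single-thread pool stands in for Spring's @Async executor
        ExecutorService pool = Executors.newFixedThreadPool(1);
        try {
            headerContext.set("correlation-id-1");
            // The pool's worker thread is created on this first submission,
            // so it inherits the context the submitting thread holds right now
            pool.submit(() -> seen.add(headerContext.get())).get();

            headerContext.set("correlation-id-2");
            // The worker thread is REUSED, not re-created, so it still
            // carries the context captured when it was first spawned
            pool.submit(() -> seen.add(headerContext.get())).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // Prints "[correlation-id-1, correlation-id-1]"
    }
}
```

Because of this, when pooled executors are in play, copying the context per task (Spring’s TaskDecorator can help here) is a more reliable option than relying on InheritableThreadLocal alone.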

Summary

I hope you will find this neat little trick useful while refactoring your microservices. A lot of Spring’s functionality is actually based on thread locals; looking into its source code, you will find many concepts similar or identical to the ones mentioned above. Spring Security is one example, Zuul Proxy is another.

The full code for this article can be found here.

References

Spring Microservices In Action by John Carnell

Spring Boot Internationalization using Resource Bundles

Reading Time: 3 minutes

Implementing Spring Boot internationalization can be easily achieved using Resource Bundles. I will show you a code example of how you can implement it in your projects.

Let’s create a simple Spring Boot application from start.spring.io.

The first step is to create a resource bundle (a set of properties files with the same base name and language suffix) in the resources package.

I will create properties files with base name texts and only one key, greetings:

  • texts_en.properties
  • texts_de.properties
  • texts_it.properties
  • texts_fr.properties

In each of those files I will add the value “Hello World !!!” and the translations of that phrase. I was using Google Translate, so please do not judge me if something is wrong :).
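
For example, the English and German files could look like this (the German value is a rough illustrative translation; the key matches the GREETINGS constant used later):

```properties
# texts_en.properties
greetings=Hello World !!!

# texts_de.properties
greetings=Hallo Welt !!!
```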

After that, I will add some simple YML configuration in application.yml file which I will use later.

server:
  port: 7000

application:
  translation:
    properties:
      baseName: texts
      defaultLocale: de

Now, let’s create the configuration. I will create two beans: LocaleResolver and ResourceBundleMessageSource. Let’s explain both of them.

With the LocaleResolver interface, we are defining which implementation we are going to use. For this example, I chose the AcceptHeaderLocaleResolver implementation, which means that the language value must be provided via the Accept-Language header.

@Bean
public LocaleResolver localeResolver() {
    AcceptHeaderLocaleResolver acceptHeaderLocaleResolver = new AcceptHeaderLocaleResolver();
    // defaultLocale is injected from application.yml, e.g. via
    // @Value("${application.translation.properties.defaultLocale}")
    acceptHeaderLocaleResolver.setDefaultLocale(new Locale(defaultLocale));
    return acceptHeaderLocaleResolver;
}

With ResourceBundleMessageSource we are defining which bundle we are going to use in the Translator component (I will create it later 🙂).

@Bean(name = "textsResourceBundleMessageSource")
public ResourceBundleMessageSource messageSource() {
    ResourceBundleMessageSource rs = new ResourceBundleMessageSource();
    // propertiesBasename is injected from application.yml, e.g. via
    // @Value("${application.translation.properties.baseName}")
    rs.setBasename(propertiesBasename);
    rs.setDefaultEncoding("UTF-8");
    rs.setUseCodeAsDefaultMessage(true);
    return rs;
}

Now, let’s create the Translator component. In this component, I will create only one method, toLocale. In that method, I will fetch the Locale from the LocaleContextHolder and take the translation from the resource bundle.

@Component
public class Translator {

    private static ResourceBundleMessageSource messageSource;

    public Translator(@Qualifier("textsResourceBundleMessageSource") ResourceBundleMessageSource messageSource) {
        this.messageSource = messageSource;
    }

    public static String toLocale(String code) {
        Locale locale = LocaleContextHolder.getLocale();
        return messageSource.getMessage(code, null, locale);
    }
}

That’s all the configuration we need for this feature. Now, let’s create the Controller, Service and TranslatorCode util classes so we can test the API.

@RestController
@RequestMapping("/index")
public class IndexController {

    private final TranslationService translationService;

    public IndexController(TranslationService translationService) {
        this.translationService = translationService;
    }

    @GetMapping("/translate")
    public ResponseEntity<String> getTranslation() {
        String translation = translationService.translate();
        return ResponseEntity.ok(translation);
    }
}
@Service
public class TranslationService {

    // toLocale and GREETINGS are statically imported
    // from the Translator and TranslatorCode classes
    public String translate() {
        return toLocale(GREETINGS);
    }
}
public class TranslatorCode {
    public static final String GREETINGS = "greetings";
}

Now, you can start the application. After the application is started successfully, you can start making API calls.

Here is an example of an API call that you can use as a cURL command.

curl --location --request GET "localhost:7000/index/translate" --header "Accept-Language: en"

Changing the Accept-Language header value (en, de, it, fr) returns the greeting in the corresponding language.

You can change the default behaviour, add some protection, or add multiple resource bundles; you are not limited when using this feature.

Download the source code

This project is available on our BitBucket repository. Feel free to fix any mistakes and to leave a comment here if you have any questions or feedback.

https://bitbucket.org/n47/spring-boot-internationalization/src/master/

Multitenancy with Spring Boot

Reading Time: 7 minutes

Why should you consider implementing multitenancy in your project?

  • Cost: Multi-tenant architecture allows the sharing of resources, databases, and the application itself, thus the cost to run the system is fixed.
  • Maintenance: Users do not have to pay a considerable amount of fees to keep the software up to date. This reduces the overall cost of maintenance for each tenant.
  • Performance: Easier to assess and optimize speed, utilization, response time across the entire system, and even update the technology stack when needed.

In this blog we will implement multitenancy in our Spring Boot project.

Let’s create a simple Spring Boot project from start.spring.io, with only basic dependencies (Spring Web, Spring Data JPA, Spring Configuration Processor, MySQL Driver).

The good thing about implementing multitenancy is that we do not need any additional dependencies.
We will split this example into two parts. In the first one, we will explain the idea/logic behind it, break the approach down into 7 configuration steps, and explain every step. In the second part, we will see how it works in real life and test the solution.

1. Let’s start with creating Tenant Storage. We will use it for keeping the tenant value while the request is executing.

public class TenantStorage {

    private static ThreadLocal<String> currentTenant = new ThreadLocal<>();

    public static void setCurrentTenant(String tenantId) {
        currentTenant.set(tenantId);
    }

    public static String getCurrentTenant() {
        return currentTenant.get();
    }

    public static void clear() {
        currentTenant.remove();
    }
}

2. Next, we will create the Tenant Interceptor. For every request, we will set the value at the beginning and clear it at the end. As you can see in the Tenant Interceptor, for this demo I decided to fetch the value of the tenant from a request header (X-Tenant), but this is up to you. Just keep an eye on data security when using this in production; maybe you want to fetch it from a cookie or some other header name.

@Component
public class TenantInterceptor implements WebRequestInterceptor {

    private static final String TENANT_HEADER = "X-Tenant";

    @Override
    public void preHandle(WebRequest request) {
        TenantStorage.setCurrentTenant(request.getHeader(TENANT_HEADER));
    }

    @Override
    public void postHandle(WebRequest webRequest, ModelMap modelMap) {
        TenantStorage.clear();
    }

    @Override
    public void afterCompletion(WebRequest webRequest, Exception e) {

    }
}

3. Next thing is to add the tenant Interceptor in the interceptor registry. For that purpose, I will create WebConfiguration that will implement WebMvcConfigurer.

@Configuration
public class WebConfiguration implements WebMvcConfigurer {

    private final TenantInterceptor tenantInterceptor;

    public WebConfiguration(TenantInterceptor tenantInterceptor) {
        this.tenantInterceptor = tenantInterceptor;
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addWebRequestInterceptor(tenantInterceptor);
    }
}

4. Now, let’s update the application.yml file with some properties for the database connections.

tenants:
  datasources:
    n47schema1:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema1?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
    n47schema2:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema2?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
spring:
  jpa:
    database-platform: org.hibernate.dialect.MySQL5Dialect

5. Following, we will wrap the tenant values into a map (key = tenant name, value = data source) in DataSourceProperties.

@ConfigurationProperties(prefix = "tenants")
@Component
public class DataSourceProperties {

    private Map<Object, Object> dataSources = new LinkedHashMap<>();

    public Map<Object, Object> getDataSources() {
        return dataSources;
    }

    public void setDataSources(Map<String, Map<String, String>> datasources) {
        datasources.forEach((key, value) -> this.dataSources.put(key, convert(value)));
    }

    public DataSource convert(Map<String, String> source) {
        return DataSourceBuilder.create()
                .url(source.get("jdbcUrl"))
                .driverClassName(source.get("driverClassName"))
                .username(source.get("username"))
                .password(source.get("password"))
                .build();
    }
}

6. Afterwards, we should create DataSource Bean, and for that purpose, I will create DataSourceConfig.

@Configuration
public class DataSourceConfig {

    private final DataSourceProperties dataSourceProperties;

    public DataSourceConfig(DataSourceProperties dataSourceProperties) {
        this.dataSourceProperties = dataSourceProperties;
    }

    @Bean
    public DataSource dataSource() {
        TenantRoutingDataSource customDataSource = new TenantRoutingDataSource();
        customDataSource.setTargetDataSources(dataSourceProperties.getDataSources());
        return customDataSource;
    }
}

7. At last, we will extend the AbstractRoutingDataSource and implement our lookup key.

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return TenantStorage.getCurrentTenant();
    }

}

And we are done with the first part.

Let’s see how it looks in the real world:

For this example, we will use two schemas from the same database instance; we will create a user and fetch all users. Also, I will show you how you can implement Flyway and test the solution.

First, let’s configure our databases. In my local instance of MySQL server, we will create two schemas: n47schema1 and n47schema2.

Next step is to execute this CREATE statement on both schemas:

CREATE TABLE `users` (
	`id` INT(11) NOT NULL AUTO_INCREMENT,
	`name` VARCHAR(64) NOT NULL DEFAULT '' COLLATE 'utf8_general_ci',
	PRIMARY KEY (`id`)
);

Then, we will create two APIs, one for creating a user, and the other one to fetch all users.

@RestController
@RequestMapping("/users")
public class UserController {

    private final UserRepository userRepository;

    public UserController(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @PostMapping
    public UserDomain addUser(@RequestBody UserRequestBody userRequestBody) {
        UserDomain userDomain = new UserDomain(userRequestBody.getName());
        return userRepository.save(userDomain);
    }

    @GetMapping
    public List<UserDomain> getAll() {
        return userRepository.findAll();
    }
}

Also, we need to create the UserDomain, UserRepository and UserRequestBody classes.

@Entity
@Table(name = "users")
public class UserDomain {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    public UserDomain() {
    }

    public UserDomain(String name) {
        this.name = name;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

public interface UserRepository extends JpaRepository<UserDomain, Long> {
}

public class UserRequestBody {
    private String name;

    public UserRequestBody() {
    }

    public UserRequestBody(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

And we are done with coding.

We can run our application and start making requests.

First, let’s create some users with a POST request to http://localhost:8080/users. The most important thing is not to forget to provide the X-Tenant header with the value n47schema1 or n47schema2.

We will create two users for tenant n47schema1: Antonie and John. Example:
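Such a request can be sketched with the Java 11 HttpClient (the user name and tenant value follow the example above; this is only an illustration of setting the X-Tenant header, and assumes the application is running on localhost:8080):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of the create-user call carrying the X-Tenant header.
public class CreateUserExample {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/users"))
                .header("X-Tenant", "n47schema1")           // selects the schema
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"Antonie\"}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```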

After that, we will change the X-Tenant header value to n47schema2 and create two users: William and Joseph.

You will notice that the IDs returned in the response are the same as those for the first tenant, because each schema keeps its own auto-increment counter. Now let’s fetch the users via the API.

When you make a GET request to http://localhost:8080/users with the X-Tenant header set to n47schema1, you will fetch the users from the n47schema1 schema; with the header set to n47schema2, you will fetch from the n47schema2 schema.

You can also check the data in the database to be sure that it is stored correctly.

You can always set a fallback in case the X-Tenant header is not provided or contains an invalid value.
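One way to implement such a fallback is sketched below as a plain helper; the class name, the known-tenant set and the choice of n47schema1 as the default are assumptions for illustration. The routing data source would then return resolve(TenantStorage.getCurrentTenant()) as its lookup key:

```java
import java.util.Set;

// Hypothetical fallback resolver: map a missing or unknown X-Tenant value
// to a default schema instead of failing the request.
public class TenantResolver {

    static final String DEFAULT_TENANT = "n47schema1";
    static final Set<String> KNOWN_TENANTS = Set.of("n47schema1", "n47schema2");

    public static String resolve(String headerValue) {
        // Null check first: Set.of(...) rejects null lookups.
        return (headerValue != null && KNOWN_TENANTS.contains(headerValue))
                ? headerValue
                : DEFAULT_TENANT;
    }

    public static void main(String[] args) {
        System.out.println(resolve("n47schema2")); // known tenant passes through
        System.out.println(resolve(null));         // falls back to the default
    }
}
```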

As the last thing, I will show you how you can use Flyway with multitenancy. First, you need to add Flyway as a dependency and disable it in application.yml:

spring:
  flyway:
    enabled: false

Then, add a @PostConstruct method in the DataSourceConfig configuration:

@PostConstruct
public void migrate() {
    // Run the Flyway migrations against every configured tenant data source
    for (Object dataSource : dataSourceProperties.getDataSources().values()) {
        DataSource source = (DataSource) dataSource;
        Flyway flyway = Flyway.configure().dataSource(source).load();
        flyway.migrate();
    }
}

And we are done. I hope this blog helps you understand what multitenancy is and how it’s implemented in a Spring Boot project.

Download the source code

The project is freely available on our Bitbucket repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.

https://bitbucket.org/n47/spring-boot-multitenancy/src/master/

Spring Cloud OpenFeign

Reading Time: 3 minutes

Choosing the microservice architecture and Spring Boot means that you’ll need to pick the cleanest possible way for your services to communicate between themselves. Feign Client is one of the best solutions for this issue. It is a declarative Java web service client initially developed by Netflix. It’s an abstraction over REST-based calls allowing your microservices to communicate cleanly without needing to know the REST details happening underneath. The main idea behind Feign Client is to create an interface with method definitions representing your service call. Even if you need some customization on requests or responses, you can do it in a declarative way. In this article, we will learn about integrating Feign in a Spring Boot application with an example of REST-based HTTP calls, in which two microservices communicate with each other to transfer some data. But first, let’s get familiar with Feign.

What is Feign?

Feign is a declarative web service client that makes writing web service clients easier. To use it, we create a Java interface and annotate it with the familiar Spring annotations such as @RequestMapping and @PathVariable, and Feign generates the client implementation for us. It has pluggable annotation support, including Feign annotations and JAX-RS annotations, as well as pluggable encoders and decoders. Spring Cloud adds support for Spring MVC annotations and for using the same HttpMessageConverters used by default in Spring Web. Spring Cloud also integrates Ribbon and Eureka to provide a load-balanced HTTP client when using Feign.
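To build some intuition for how a declarative client can work at all, here is a toy sketch (not Feign itself) of the dynamic-proxy idea: an interface is turned into a working client at runtime. The interface and handler names are made up; a real client would translate the method’s annotations into an HTTP request instead of echoing the method name:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical client interface; in Feign this would carry HTTP annotations.
interface GreetingClient {
    String greet(String name);
}

public class DeclarativeClientDemo {

    // Create an implementation of any single-argument interface via a proxy.
    @SuppressWarnings("unchecked")
    static <T> T createClient(Class<T> type) {
        InvocationHandler handler = (proxy, method, args) ->
                // A real client would build and execute an HTTP request here.
                method.getName() + ":" + args[0];
        return (T) Proxy.newProxyInstance(
                type.getClassLoader(), new Class<?>[]{type}, handler);
    }

    public static void main(String[] args) {
        GreetingClient client = createClient(GreetingClient.class);
        System.out.println(client.greet("world")); // prints greet:world
    }
}
```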

Example Management API simulator

In the following code section, you can see a Feign client resource example. The interface extends the original API interface and is annotated with @FeignClient, which declares that a REST client should be created from this interface.

Setup pom.xml

The following dependency will be added:

    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-openfeign</artifactId>
    </dependency>

Enable Feign Client

Now enable Feign by using the @EnableFeignClients annotation in the main Spring Boot application class, which is also annotated with the @SpringBootApplication annotation.

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class FeignClientApplication {
  public static void main(String[] args) {
    SpringApplication.run(FeignClientApplication.class, args);
  }
}

Use a Circuit Breaker with a Feign Client

If you want to use the Spring Cloud OpenFeign support for Hystrix circuit breakers, you must set the feign.hystrix.enabled property to true. In the Feign version of the Agency app, this property is configured in application.yml:

feign:
  hystrix:
    enabled: true

Next, we declare the Feign client interface for the validations service:

@FeignClient(name = "Validations", url = "${validations.host}")
public interface ValidationsClient {

    @GetMapping(value = "/validate-phone")
    InfoMessageResponse<PhoneNumber> validatePhoneNumber(@RequestParam("phoneNumber") String phoneNumber);

}

In the application.yml file, we will store the URL of the microservice with which we need to communicate:

validations:
  host: "http://localhost:9080/validations"

We will need to add a config for Feign as follows:

package com.demo;

import feign.Contract;
import org.springframework.cloud.openfeign.support.SpringMvcContract;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FeignClientConfiguration {
    @Bean
    public Contract feignContract() {
        return new SpringMvcContract();
    }
}

Congrats! You have just managed to run your Feign client application, with which you can easily locate and consume REST services.

Summary

In this article, we have built an example of microservices that communicate with one another. It should be treated as an introduction to Feign clients and to their integration with other important components of the microservice architecture.

JHipster with Google App Engine and Cloud MySQL

Reading Time: 5 minutes

How does it sound to set up a complete spring application, with front-end and database? With all the models, repositories and controllers? Even with Unit and Integration tests, with mocked data? All within a few hours? Your solution is JHipster!

JHipster

JHipster, or “Java Hipster”, is a handy application generator and development platform for developing and deploying web applications. JHipster has become popular in a short time, and it has been featured in many conferences all around the globe – Montreal, Omaha, Taipei, Richmond, Frankfurt, Paris, London. It supports:

  • Spring Boot (Back-end)
  • Angular/React/Vue (Front-end)
  • Spring microservices

JHipster is used for generating complete applications. It will create a high-quality Spring Boot and Angular/React/Vue application for you, with most things pre-configured: Java as the back-end technology with an extensive set of Spring technologies (Spring Security, Spring Boot, Spring MVC providing a framework for web-sockets, REST and MVC, Spring Data, etc.), an Angular/React/Vue front-end, and a suite of pre-configured development tools like Yeoman, Maven, Gradle, Grunt, Gulp.js and Bower.

JHipster gives you a head start in creating a Spring Boot application with a set of pre-defined screens for user management, monitoring, and logging. The generated Spring Boot application is specifically tailored to make working with Angular/React/Vue a smoother experience. On top of all that, JHipster also gives you the tools to update, manage and package the resulting application.

By now you may think it sounds too good to be true… But that is not everything JHipster offers. If you are a web developer, you probably have a lot of questions by now. 🙂
One important question we will answer in this blog post: is it supported by today’s cloud solutions, and is it compatible with all of them? The answer is yes: it is compatible with the popular cloud solutions from Google, Amazon, Microsoft, and Heroku. Let’s see what it takes to make a complete integration with Google’s cloud platform, the App Engine.


Google App Engine

Google App Engine is a cloud solution provided by Google, a platform for developing and hosting web applications in data centres managed by Google; Platform as a Service (PaaS). Applications are sandboxed and run across multiple servers. The App Engine supports Java or Python, uses the Google query language and stores data in Google BigTable.

It is free to use up to a certain amount of resource consumption. After exceeding the limited usage rates for storage, CPU resources, requests or the number of API calls and concurrent requests, the user can pay for more of these resources.

It is fully compatible with JHipster-generated projects. To host your application, just follow the official how-to guide from the Google App Engine documentation, as for a normal Spring Boot application. To make things easier, Google offers a database that works closely with the App Engine: Cloud SQL.
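For orientation, a minimal App Engine descriptor for a packaged Spring Boot jar might look like the following sketch; the runtime value depends on the Java version you target, and the file location is the Maven convention, so check the current App Engine documentation before relying on it:

```yaml
# src/main/appengine/app.yaml – minimal standard environment descriptor (sketch)
runtime: java11
instance_class: F2   # optional: a larger instance class for Spring Boot startup
```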

Cloud SQL

Cloud SQL is a fully-managed database service offered by Google for their cloud solutions that makes it easy to configure, manage, maintain, and operate your relational databases on Google Cloud Platform.

It offers three database options to integrate with:

  • MySQL
  • PostgreSQL
  • SQL Server

Let’s get into details of integrating with Cloud SQL for MySQL:

  1. The first step is to create a Cloud SQL instance on the Google Cloud Platform, which requires a few settings, such as the instance ID and password, and lets you choose the MySQL database version.
  2. The next step is to create the database in the newly created instance. It is possible to have multiple databases in one instance.
  3. Now, in order for our application to be able to communicate with the Cloud SQL instance without any permission blockers, we need to register the application in Cloud SQL and manually configure the service account roles.
  4. The final step is connecting your application to the created Cloud SQL instance. This is done through JDBC. All the required properties can be found in the overview of the Cloud SQL instance: the instance connection name, credentials, etc.
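For illustration, the JDBC connection typically goes through the Cloud SQL socket factory; a hedged Spring Boot configuration sketch follows, where the database name, instance connection name and credentials are placeholders and the mysql-socket-factory dependency is assumed to be on the classpath:

```yaml
spring:
  datasource:
    url: jdbc:mysql://google/mydb?cloudSqlInstance=my-project:europe-west1:my-instance&socketFactory=com.google.cloud.sql.mysql.SocketFactory
    username: root
    password: secret
```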

So the conclusion: don’t be afraid to invest some time in new technologies; be curious, you never know where they may lead you. Thank you for reading. 🙂

Testing Spring Boot application with examples

Reading Time: 7 minutes

Why bother writing tests is already a well-discussed topic in software engineering. I won’t go into much detail on this topic, but I will mention some of the main benefits.

In my opinion, testing your software is the only way to achieve confidence that your code will work on the production environment. Another huge benefit is that it allows you to refactor your code without fear that you will break some existing features.

Risk of bugs vs the number of tests

In the Java world, one of the most popular frameworks is Spring Boot, and part of the popularity and success of Spring Boot is exactly the topic of this blog – testing. Spring Boot and the Spring framework offer out-of-the-box support for testing, and new features are being added constantly. When the Spring framework appeared on the Java scene in 2005, one of the reasons for its success was exactly this: ease of writing and maintaining tests, as opposed to JavaEE, where writing integration tests requires additional libraries like Arquillian.

In the following, I will go over different types of tests in Spring Boot, when to use them and give a short example.

Testing pyramid

We can roughly group all automated tests into 3 groups:

  • Unit tests
  • Service (integration) tests
  • UI (end to end) tests

As we go from the bottom of the pyramid to the top, tests become slower to execute: unit tests run in the order of a few milliseconds, service tests in hundreds of milliseconds and UI tests in seconds. If we measure the scope of tests, unit tests, as the name suggests, test small units of code; service tests exercise a whole service, or a slice of that service involving multiple units; and UI tests have the largest scope, testing multiple different services. In the following sections, I will go over some examples of how we can unit test and service test a Spring Boot application. UI testing can be achieved using external tools like Selenium and Protractor, but they are not related to Spring Boot.

Unit testing

In my opinion, unit tests make the most sense when you have some kind of validators, algorithms or other code that has lots of different inputs and outputs, and executing integration tests would take too much time. Let’s see how we can test a validator with Spring Boot.

Validator class for emails

public class Validators {

    private static final String EMAIL_REGEX = "(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|\"(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21\\x23-\\x5b\\x5d-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])*\")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21-\\x5a\\x53-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])+)\\])";

    public static boolean isEmailValid(String email) {
        return email.matches(EMAIL_REGEX);
    }
}

Unit tests for email validator with Spring Boot

// Assumes static imports of Validators.isEmailValid and
// org.assertj.core.api.Assertions.assertThat
@RunWith(MockitoJUnitRunner.class)
public class ValidatorsTest {
    @Test
    public void testEmailValidator() {
        assertThat(isEmailValid("valid@north-47.com")).isTrue();
        assertThat(isEmailValid("invalidnorth-47.com")).isFalse();
        assertThat(isEmailValid("invalid@47")).isFalse();
    }
}

MockitoJUnitRunner is used to enable Mockito in tests and to detect @Mock annotations. In this case, we are testing the email validator as a separate unit from the rest of the application. MockitoJUnitRunner is not a Spring Boot runner, so this way of writing unit tests can be used with other frameworks as well.

Integration testing of the whole application

If we have to choose only one type of test in Spring Boot, then using an integration test to test the whole application makes the most sense. We will not be able to cover all the scenarios, but we will significantly reduce the risk. In order to do integration testing, we need to start the application context. In Spring Boot 2, this is achieved with the following annotations: @RunWith(SpringRunner.class) and @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT). This will start the application on a random port, and we can inject beans into our tests and make REST calls against the application endpoints.

The following is example code for testing the book endpoints. For making REST API calls we are using Spring’s TestRestTemplate, which is more suitable for integration tests than RestTemplate.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class SpringBootTestingApplicationTests {

    @Autowired
    private TestRestTemplate restTemplate;

    @Autowired
    private BookRepository bookRepository;

    private Book defaultBook;

    @Before
    public void setup() {
        defaultBook = new Book(null, "Asimov", "Foundation", 350);
    }

    @Test
    public void testShouldReturnCreatedWhenValidBook() {
        ResponseEntity<Book> bookResponseEntity = this.restTemplate.postForEntity("/books", defaultBook, Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.CREATED);
        assertThat(bookResponseEntity.getBody().getId()).isNotNull();
        assertThat(bookRepository.findById(bookResponseEntity.getBody().getId())).isPresent();
    }

    @Test
    public void testShouldFindBooksWhenExists() throws Exception {
        Book savedBook = bookRepository.save(defaultBook);

        ResponseEntity<Book> bookResponseEntity = this.restTemplate.getForEntity("/books/" + savedBook.getId(), Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(bookResponseEntity.getBody().getId()).isEqualTo(savedBook.getId());
    }

    @Test
    public void testShouldReturn404WhenBookMissing() throws Exception {
        Long nonExistingId = 999L;
        ResponseEntity<Book> bookResponseEntity = this.restTemplate.getForEntity("/books/" + nonExistingId, Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.NOT_FOUND);
    }
}

Integration testing of web layer (controllers)

Spring Boot offers the ability to test layers in isolation, starting only the beans required for testing. Since Spring Boot v1.4 there is a very convenient annotation, @WebMvcTest, that starts only the components needed for a typical web layer test – controllers, Jackson converters and similar – without starting the full application context, avoiding the startup of components unnecessary for this test, like the database layer. When we use this annotation, we make the REST calls with the MockMvc class.

Following is an example of testing the same endpoints as in the above example, but in this case we are only testing whether the web layer works as expected, and we mock the database layer using the @MockBean annotation, which is also available starting from Spring Boot v1.4. Using these annotations, only BookController is loaded into the application context, and the database layer is mocked.

@RunWith(SpringRunner.class)
@WebMvcTest(BookController.class)
public class BookControllerTest {
    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private BookRepository repository;

    @Autowired
    private ObjectMapper objectMapper;

    private static final Book DEFAULT_BOOK = new Book(null, "Asimov", "Foundation", 350);

    @Test
    public void testShouldReturnCreatedWhenValidBook() throws Exception {
        when(repository.save(Mockito.any())).thenReturn(DEFAULT_BOOK);

        this.mockMvc.perform(post("/books")
                .content(objectMapper.writeValueAsString(DEFAULT_BOOK))
                .contentType(MediaType.APPLICATION_JSON)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isCreated())
                .andExpect(MockMvcResultMatchers.jsonPath("$.name").value(DEFAULT_BOOK.getName()));
    }

    @Test
    public void testShouldFindBooksWhenExists() throws Exception {
        Long id = 1L;
        when(repository.findById(id)).thenReturn(Optional.of(DEFAULT_BOOK));

        this.mockMvc.perform(get("/books/" + id)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isOk())
                .andExpect(MockMvcResultMatchers.content().string(Matchers.is(objectMapper.writeValueAsString(DEFAULT_BOOK))));
    }

    @Test
    public void testShouldReturn404WhenBookMissing() throws Exception {
        Long id = 1L;
        when(repository.findById(id)).thenReturn(Optional.empty());

        this.mockMvc.perform(get("/books/" + id)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isNotFound());
    }
}

Integration testing of database layer (repositories)

Similarly to the way we tested the web layer, we can test the database layer in isolation, without starting the web layer. This kind of testing in Spring Boot is achieved using the @DataJpaTest annotation. This annotation performs only the auto-configuration related to the JPA layer and by default uses an in-memory database, because it is the fastest to start up and will do just fine for most integration tests. We also get access to TestEntityManager, which is an EntityManager with supporting features for JPA integration tests.

Following is an example of testing the database layer of the above application. With these tests we are only checking whether the database layer works as expected; we are not making any REST calls, and we verify the results from BookRepository by using the provided TestEntityManager.

@RunWith(SpringRunner.class)
@DataJpaTest
public class BookRepositoryTest {
    @Autowired
    private TestEntityManager entityManager;

    @Autowired
    private BookRepository repository;

    private Book defaultBook;

    @Before
    public void setup() {
        defaultBook = new Book(null, "Asimov", "Foundation", 350);
    }

    @Test
    public void testShouldPersistBooks() {
        Book savedBook = repository.save(defaultBook);

        assertThat(savedBook.getId()).isNotNull();
        assertThat(entityManager.find(Book.class, savedBook.getId())).isNotNull();
    }

    @Test
    public void testShouldFindByIdWhenBookExists() {
        Book savedBook = entityManager.persistAndFlush(defaultBook);

        assertThat(repository.findById(savedBook.getId())).isEqualTo(Optional.of(savedBook));
    }

    @Test
    public void testFindByIdShouldReturnEmptyWhenBookNotFound() {
        long nonExistingID = 47L;
        
        assertThat(repository.findById(nonExistingID)).isEqualTo(Optional.empty());
    }
}

Conclusion

You can find a working example with all of these tests on the following repo: https://gitlab.com/47northlabs/public/spring-boot-testing.

In the following table, I’m showing the execution times with the startup of the different types of tests that I’ve used as examples. We can clearly see that unit tests, as mentioned in the beginning, are the fastest ones and that separating integration tests into layered testing leads to faster execution times.

Type of test           Execution time with startup
Unit test              80 ms
Integration test       620 ms
Web layer test         190 ms
Database layer test    220 ms

Deploy Spring Boot Application on Google Cloud with GitLab

Reading Time: 5 minutes

A lot of developers experience a painful process with their code being deployed on the environment. We, as a company, suffered from the same thing, so we wanted to create something to make our lives easier.

After internal discussions, we decided to build a fully automated CI/CD process. We investigated and decided to implement GitLab CI/CD with Google Cloud deployment for that purpose.

Further in this blog, you can see how we achieved that and how you can achieve the same.

Let’s start with setting up. 🙂

  • After that, we create a simple REST controller for testing purposes.
package com.northlabs.gitlabdemo.rest;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RootController {

    @GetMapping(value = "/")
    public String root() {
        return "Hello from Root";
    }

    @GetMapping(value = "/demo")
    public String demo() {
        return "Hello from Demo";
    }

}
  • Next step is to push the application to our GitLab repo.
  1. cd path_to_root_folder
  2. git init
  3. git remote add origin https://gitlab.com/47northlabs/47northlabs/product/playground/gitlab-demo.git
  4. git add .
  5. git commit -m "Initial commit"
  6. git push -u origin master

Now that we have our application in a GitLab repository, we can go on to set up Google Cloud. But before you start, be sure that you have a G-Suite account with billing enabled.

  • The first step is to create a new project: in my case it is northlabs-gitlab-demo.

Create project: northlabs-gitlab-demo
  • Now, let’s create our Kubernetes Cluster.

Kubernetes cluster initialization will take some time; only after that will GitLab be able to use the cluster.

We are done with Google Cloud, so it’s time to set up Kubernetes in our GitLab repository.

  • First, we add a Kubernetes cluster.
Add Kubernetes Cluster
Sign in with Google
  • Next, we give a name to the cluster and select a project from our Google Cloud account: in my case it’s gitlab-demo.
  • The base domain name should be set up.
  • Installing Helm Tiller is required, and installing other applications is optional (I installed Ingress, Cert-Manager, Prometheus, and GitLab Runner).

Install Helm Tiller

Installed Ingress, Cert-Manager, Prometheus, and GitLab Runner

After installing the applications, it’s IMPORTANT to update your DNS settings: the Ingress IP address should be copied and added to your DNS configuration.
In my case, it looks like this:

Configure DNS

We are almost done. 🙂

  • The last thing that should be done is to enable Auto DevOps.
  • And to set up Auto DevOps.

Now take your coffee and watch your pipelines. 🙂
After a couple of minutes your pipeline will finish and will look like this:

Now open the production pipeline and, in the log under the notes section, check the URL of the application. In my case that is:

Application should be accessible at: http://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com

Open the URL in a browser or Postman.

https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com
https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com/demo
  • Let’s edit our code and push it to GitLab repository.
package com.northlabs.gitlabdemo.rest;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RootController {

    @GetMapping(value = "/")
    public String root() {
        return "Hello from Root v1";
    }

    @GetMapping(value = "/demo")
    public String demo() {
        return "Hello from Demo v1";
    }
}

After the job is finished, if you check the same URL, you will see that the values are now changed.


https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com

https://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com/demo

And we are done!

This is just a basic implementation of the GitLab Auto DevOps. In some of the next blogs, we will show how to customize your pipeline, and how to add, remove or edit jobs.