Multitenancy with Spring Boot

Reading Time: 7 minutes

Why should you consider implementing multitenancy in your project?

  • Cost: Multi-tenant architecture allows the sharing of resources, databases, and the application itself, so the cost of running the system stays largely fixed as tenants are added.
  • Maintenance: Tenants do not have to pay considerable fees to keep the software up to date, which reduces the overall cost of maintenance for each tenant.
  • Performance: It is easier to assess and optimize speed, utilization and response time across the entire system, and even to update the technology stack when needed.

In this blog we will implement multitenancy in our Spring Boot project.

Let’s create a simple Spring Boot project from start.spring.io, with only basic dependencies (Spring Web, Spring Data JPA, Spring Configuration Processor, MySQL Driver).

A nice thing about this approach is that we do not need any additional dependencies to implement multitenancy.
We will split this example into two parts. In the first one, we will explain the idea/logic behind it, split the approach into 7 configuration steps and explain each step. In the second part, we will see how it’s implemented in practice and test the solution.

1. Let’s start by creating the TenantStorage. We will use it to keep the tenant value while the request is executing.

public class TenantStorage {

    private static final ThreadLocal<String> currentTenant = new ThreadLocal<>();

    public static void setCurrentTenant(String tenantId) {
        currentTenant.set(tenantId);
    }

    public static String getCurrentTenant() {
        return currentTenant.get();
    }

    public static void clear() {
        currentTenant.remove();
    }
}

2. Next, we will create the Tenant Interceptor. For every request, we will set the tenant value at the beginning and clear it at the end. As you can see in the Tenant Interceptor, for this demo I decided to fetch the tenant value from a request header (X-Tenant); this is up to you. Maybe you want to fetch it from a cookie or some other header name. Just keep an eye on data security when using this in production.

@Component
public class TenantInterceptor implements WebRequestInterceptor {

    private static final String TENANT_HEADER = "X-Tenant";

    @Override
    public void preHandle(WebRequest request) {
        TenantStorage.setCurrentTenant(request.getHeader(TENANT_HEADER));
    }

    @Override
    public void postHandle(WebRequest webRequest, ModelMap modelMap) {
        TenantStorage.clear();
    }

    @Override
    public void afterCompletion(WebRequest webRequest, Exception e) {

    }
}

3. The next thing is to add the Tenant Interceptor to the interceptor registry. For that purpose, I will create a WebConfiguration class that implements WebMvcConfigurer.

@Configuration
public class WebConfiguration implements WebMvcConfigurer {

    private final TenantInterceptor tenantInterceptor;

    public WebConfiguration(TenantInterceptor tenantInterceptor) {
        this.tenantInterceptor = tenantInterceptor;
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addWebRequestInterceptor(tenantInterceptor);
    }
}

4. Now, let’s update the application.yml file with some properties for the database connections.

tenants:
  datasources:
    n47schema1:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema1?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
    n47schema2:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema2?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
spring:
  jpa:
    database-platform: org.hibernate.dialect.MySQL5Dialect

5. Next, we will wrap the tenant values into a map (key = tenant name, value = data source) in DataSourceProperties.

@ConfigurationProperties(prefix = "tenants")
@Component
public class DataSourceProperties {

    private Map<Object, Object> dataSources = new LinkedHashMap<>();

    public Map<Object, Object> getDataSources() {
        return dataSources;
    }

    public void setDataSources(Map<String, Map<String, String>> datasources) {
        datasources.forEach((key, value) -> this.dataSources.put(key, convert(value)));
    }

    public DataSource convert(Map<String, String> source) {
        return DataSourceBuilder.create()
                .url(source.get("jdbcUrl"))
                .driverClassName(source.get("driverClassName"))
                .username(source.get("username"))
                .password(source.get("password"))
                .build();
    }
}

6. Afterwards, we should create the DataSource bean, and for that purpose, I will create a DataSourceConfig class.

@Configuration
public class DataSourceConfig {

    private final DataSourceProperties dataSourceProperties;

    public DataSourceConfig(DataSourceProperties dataSourceProperties) {
        this.dataSourceProperties = dataSourceProperties;
    }

    @Bean
    public DataSource dataSource() {
        TenantRoutingDataSource customDataSource = new TenantRoutingDataSource();
        customDataSource.setTargetDataSources(dataSourceProperties.getDataSources());
        return customDataSource;
    }
}

7. At last, we will extend the AbstractRoutingDataSource and implement our lookup key.

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return TenantStorage.getCurrentTenant();
    }

}

And we are done with the first part.

Let’s see how it looks in the real world:

For this example, we will use two schemas from the same database instance; we will create a user and get all users. I will also show you how you can integrate Flyway and test the solution.

First, let’s configure our databases. In my local instance of MySQL server, we will create two schemas: n47schema1 and n47schema2.

The next step is to execute this CREATE statement on both schemas:

CREATE TABLE `users` (
	`id` INT(11) NOT NULL AUTO_INCREMENT,
	`name` VARCHAR(64) NOT NULL DEFAULT '' COLLATE 'utf8_general_ci',
	PRIMARY KEY (`id`)
);

Then, we will create two APIs, one for creating a user, and the other one to fetch all users.

@RestController
@RequestMapping("/users")
public class UserController {

    private final UserRepository userRepository;

    public UserController(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @PostMapping
    public UserDomain addUser(@RequestBody UserRequestBody userRequestBody) {
        UserDomain userDomain = new UserDomain(userRequestBody.getName());
        return userRepository.save(userDomain);
    }

    @GetMapping
    public List<UserDomain> getAll() {
        return userRepository.findAll();
    }
}

We also need to create UserDomain, UserRepository and UserRequestBody.

@Entity
@Table(name = "users")
public class UserDomain {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    public UserDomain() {
    }

    public UserDomain(String name) {
        this.name = name;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

public interface UserRepository extends JpaRepository<UserDomain, Long> {
}

public class UserRequestBody {
    private String name;

    public UserRequestBody() {
    }

    public UserRequestBody(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

And we are done with coding.

We can run our application and start making requests.

First, let’s create some users with a POST request to http://localhost:8080/users. The most important thing not to forget is to provide the X-Tenant header with the value n47schema1 or n47schema2.

We will create two users for tenant n47schema1: Antonie and John. Example:

After that, we will change the X-Tenant header value to n47schema2 and create two users: William and Joseph.

You will notice that the IDs returned in the response are the same as for the first tenant; this is expected, since each schema keeps its own auto-increment sequence. Now let’s fetch the users via the API.

When you make a GET request to http://localhost:8080/users with header X-Tenant having value n47schema1 you will fetch the users from the n47schema1 schema, and when you make a request with a header value n47schema2 you will fetch from the n47schema2 schema.

You can also check the data in the database to be sure that it is stored correctly.

You can always set a fallback for when the X-Tenant header is not provided or carries a wrong value, as sketched below.
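A minimal sketch of such a fallback (the defaultTenant key is a hypothetical data source name, not part of the project above):

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        String tenant = TenantStorage.getCurrentTenant();
        // fall back to a default data source when the header is missing or unknown
        return tenant != null ? tenant : "defaultTenant";
    }
}

Alternatively, AbstractRoutingDataSource also offers setDefaultTargetDataSource, which is used whenever the lookup key does not resolve to a registered data source.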

As the last thing, I will show you how you can implement Flyway with multitenancy. First, you need to add Flyway as a dependency and disable it in the application.yml:

spring:
  flyway:
    enabled: false
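For reference, the Flyway dependency in pom.xml would look something like this (the version is managed by the Spring Boot parent):

<dependency>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-core</artifactId>
</dependency>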

Then add a @PostConstruct method to the DataSourceConfig configuration, which runs the migrations against every tenant data source:

@PostConstruct
public void migrate() {
    for (Object dataSource : dataSourceProperties.getDataSources().values()) {
        DataSource source = (DataSource) dataSource;
        Flyway flyway = Flyway.configure().dataSource(source).load();
        flyway.migrate();
    }
}

And we are done. I hope this blog post helps you to understand what multitenancy is and how it’s implemented in a Spring Boot project.

Download the source code

The project is freely available on our Bitbucket repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.

https://bitbucket.org/n47/spring-boot-multitenancy/src/master/

Spring Cloud OpenFeign

Reading Time: 3 minutes

Choosing a microservice architecture with Spring Boot means that you’ll need to pick the cleanest possible way for your services to communicate with each other. Feign Client is one of the best solutions for this. It is a declarative Java web service client initially developed by Netflix. It’s an abstraction over REST-based calls that allows your microservices to communicate cleanly without needing to know the REST details happening underneath. The main idea behind Feign Client is to create an interface with method definitions representing your service call. Even if you need some customization of requests or responses, you can do it in a declarative way. In this article, we will learn about integrating Feign in a Spring Boot application with an example of REST-based HTTP calls, in which two microservices communicate with each other to transfer some data. But first, let’s get familiar with Feign.

What is Feign?

Feign is a declarative web service client that makes writing web service clients easier. To use it, we create a Java interface and annotate it with the various annotations provided by the Spring framework, such as @RequestMapping and @PathVariable. Feign has pluggable annotation support, including Feign annotations and JAX-RS annotations, as well as pluggable encoders and decoders. Spring Cloud adds support for Spring MVC annotations and for using the same HttpMessageConverters used by default in Spring Web. Spring Cloud also integrates Ribbon and Eureka to provide a load-balanced HTTP client when using Feign.

Example Management API simulator

In the following code sections, you can see a Feign Client resource example. The @FeignClient annotation on an interface declares that a REST client should be created for it.

Setup pom.xml

The following dependency will be added:

    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-openfeign</artifactId>
    </dependency>

Enable Feign Client

Now enable Feign by adding the @EnableFeignClients annotation to the main Spring Boot application class, which is also annotated with @SpringBootApplication.

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;
@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class FeignClientApplication {
  public static void main(String[] args) {
    SpringApplication.run(FeignClientApplication.class, args);
  }
}

Use a Circuit Breaker with a Feign Client

If you want to use the Spring Cloud OpenFeign support for Hystrix circuit breakers, you must set the feign.hystrix.enabled property to true. In the Feign version of the Agency app, this property is configured in application.yml:

feign:
  hystrix:
    enabled: true

The Feign client itself is an interface declaring the remote call:

@FeignClient(name = "Validations", url = "${validations.host}")
public interface ValidationsClient {

    @GetMapping(value = "/validate-phone")
    InfoMessageResponse<PhoneNumber> validatePhoneNumber(@RequestParam("phoneNumber") String phoneNumber);

}

In the application.yml file, we will store the URL of the microservice with which we need to communicate:

validations:
  host: "http://localhost:9080/validations"

We will need to add a config for Feign as follows:

package com.demo;

import feign.Contract;
import org.springframework.cloud.openfeign.support.SpringMvcContract;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FeignClientConfiguration {
    @Bean
    public Contract feignContract() {
        return new SpringMvcContract();
    }
}
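Once the client is defined, using it comes down to plain constructor injection and a method call. A minimal sketch (the PhoneValidationService class is hypothetical; InfoMessageResponse and PhoneNumber are the types assumed from the interface above):

@Service
public class PhoneValidationService {

    private final ValidationsClient validationsClient;

    public PhoneValidationService(ValidationsClient validationsClient) {
        this.validationsClient = validationsClient;
    }

    public InfoMessageResponse<PhoneNumber> validate(String phoneNumber) {
        // Feign performs the HTTP GET to ${validations.host}/validate-phone underneath
        return validationsClient.validatePhoneNumber(phoneNumber);
    }
}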

Congrats! You just managed to run your Feign Client application, with which you can easily locate and consume the REST service.

Summary

In this article, we have built an example of microservices that communicate with one another. This article should be treated as an introduction to the subject of Feign Client and to its integration with other important components of the microservice architecture.

Crypto Trading Bot

Reading Time: 6 minutes

Every year, N47 as a tech family celebrates a tech festival called Hackdays at the end of the year. In December 2019 we were in Budapest, Hungary for Hackdays. There were five different teams and each team created some cool projects in a short time. I was also part of a team, and we implemented a simple Trading Bot for crypto. In this blog post, I want to share my experiences.

Trading Platform

To create a Trading Bot, you first need to find the right trading platform. We selected Binance DEX, which offers good volume for selected trading pairs, a testnet for test purposes, and is a Decentralized EXchange (DEX). Thanks to the DEX, we can connect a wallet directly and trade with the funds in it.

Binance Chain is a new blockchain and peer-to-peer system developed by Binance and the community. Binance DEX is a secure, native marketplace that is based on Binance Chain and enables the exchange of digital assets that are issued and listed on the DEX. Matching takes place within the blockchain nodes, and all transactions are recorded on the chain, creating a complete, verifiable activity book. BNB is the native token of Binance Chain, so users are charged in BNB for sending transactions.

Trading fees are subject to a complex logic, which can lead to individual transactions not being charged exactly at the published rates but somewhere between them; this is due to the block-based matching engine used in the DEX. A difference between Binance Chain and Ethereum is that there is no concept of gas. As a result, the fees for transactions are fixed. There are no fees for placing a new order.

The testnet is a test environment for the Binance Chain network, run by the Binance Chain development community and open to developers. The validators on the testnet are from the development team. There is also a web wallet that can directly interact with the DEX testnet, and it provides 200 testnet BNB so that you can experiment with the Binance DEX testnet.

For developers, Binance DEX also provides a REST API for the testnet and mainnet, as well as Binance Chain SDKs for different languages like GoLang, JavaScript, Java etc. We used the Java SDK for the Trading Bot, together with Spring Boot.

Trading Strategy

To implement a Trading Bot, you need to know which pair to trade and when to buy and sell. We selected a very simple trading strategy for our project. First, we selected the NEXO/BNB trading pair because it had the highest trading volume. Perhaps you can choose a different trading pair based on your analysis.

For buying and selling, we made decisions based on candlestick counts, using 15-minute candlesticks. If the last three are all red (price drops), buy Nexo; if the last three are all green (price increases), sell Nexo. Once you’ve bought or sold, you wait for the next three consecutive red or green candlesticks. The purchase and sale volume is always 20 Nexo. You can also adjust this based on your analysis.

Let’s Code IT

We have implemented the frontend (Vue.Js) and the backend (Spring Boot) for the Trading Bot, but here I will only go into the backend application as it contains the main logic. As already mentioned, the backend application was created with Spring Boot and Binance Chain Java SDK.

We used a ThreadPoolTaskScheduler for the application. This scheduler runs every 2 seconds and checks the candlesticks. It has to be activated once via the frontend app and is then triggered automatically every 2 seconds.

threadPoolTaskScheduler.scheduleAtFixedRate(task, 2000);
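A minimal sketch of how such a scheduler could be wired up (the pool size and passing execute() as the task are assumptions for illustration):

ThreadPoolTaskScheduler threadPoolTaskScheduler = new ThreadPoolTaskScheduler();
threadPoolTaskScheduler.setPoolSize(1);
threadPoolTaskScheduler.initialize();
// run the trading check every 2 seconds
threadPoolTaskScheduler.scheduleAtFixedRate(this::execute, 2000);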

Based on the scheduler, the execute() method is triggered every two seconds. This method first collects the previous 15-minute candlesticks, takes the last three closed ones and checks whether they are all green or all red. Based on this, it will buy or sell.

private double quantity = 20.0;
private String symbol = "NEXO-A84_BNB";

public void execute() {
        List<Candlestick> candleSticks = binanceDexApiRestClient.getCandleStickBars(this.symbol, CandlestickInterval.FIFTEEN_MINUTES);
        // take the last three closed candlesticks, skipping the current (still open) one
        List<Candlestick> lastThreeElements = candleSticks.subList(candleSticks.size() - 4, candleSticks.size() - 1);
        // check if last three candlesticks are all red (close - open is negative)
        boolean allRed = lastThreeElements.stream()
                .filter(cs -> Double.parseDouble(cs.getClose()) - Double.parseDouble(cs.getOpen()) < 0.0d).count() == 3;
        // check if last three candlesticks are all green (close - open is positive)
        boolean allGreen = lastThreeElements.stream()
                .filter(cs -> Double.parseDouble(cs.getOpen()) - Double.parseDouble(cs.getClose()) < 0.0d).count() == 3;
        Wallet wallet = new Wallet(privateKey, binanceDexEnvironment);

        // open and closed orders required to check last order creation time
        OrderList closedOrders = binanceDexApiRestClient.getClosedOrders(wallet.getAddress());
        OrderList openOrders = binanceDexApiRestClient.getOpenOrders(wallet.getAddress());

        // order book required for buying and selling price
        OrderBook orderBook = binanceDexApiRestClient.getOrderBook(symbol, 5);
        Account account = binanceDexApiRestClient.getAccount(wallet.getAddress());

        if ((openOrders.getOrder().isEmpty() || openOrders.getOrder().get(0).getOrderCreateTime().plusMinutes(45).isBeforeNow()) && (closedOrders.getOrder().isEmpty() || closedOrders.getOrder().get(0).getOrderCreateTime().plusMinutes(45).isBeforeNow())) {
            if (allRed) {
                if (Double.parseDouble(account.getBalances().stream().filter(b -> b.getSymbol().equals(symbol.split("_")[1])).findFirst().get().getFree()) >= (quantity * Double.parseDouble(orderBook.getBids().get(0).getPrice()))) {
                    order(wallet, symbol, OrderSide.BUY, orderBook.getBids().get(0).getPrice());
                    System.out.println("Buy Order Placed  Quantity:" + quantity + "  Symbol:" + symbol + "  Price:" + orderBook.getAsks().get(0).getPrice());
                    
                } else {
                    System.out.println("do not have enough Token: " + symbol + " in wallet for buy");
                }

            } else if (allGreen) {
                if (Double.parseDouble(account.getBalances().stream().filter(b -> b.getSymbol().equals(symbol.split("_")[0])).findFirst().get().getFree()) >= quantity) {
                    order(wallet, symbol, OrderSide.SELL, orderBook.getAsks().get(0).getPrice());
                    System.out.println("Sell Order Placed  Quantity:" + quantity + "  Symbol:" + symbol + "  Price:" + orderBook.getAsks().get(0).getPrice());
                } else {
                    System.out.println("do not have enough Token:" + symbol + " in wallet for sell");
                }

            } else System.out.println("do nothing");
        } else System.out.println("do nothing");

    }

    private void order(Wallet wallet, String symbol, OrderSide orderSide, String price) {
        NewOrder no = new NewOrder();
        no.setTimeInForce(TimeInForce.GTE);
        no.setOrderType(OrderType.LIMIT);
        no.setSide(orderSide);
        no.setPrice(price);
        no.setQuantity(String.valueOf(quantity));
        no.setSymbol(symbol);

        TransactionOption options = TransactionOption.DEFAULT_INSTANCE;

        try {
            List<TransactionMetadata> resp = binanceDexApiRestClient.newOrder(no, wallet, options, true);
            log.info("TransactionMetadata: {}", resp);
        } catch (Exception e) {
            log.error("Error occurred while order", e);
        }
    }

At first glance, the strategy looks really simple, I agree. After this initial setup, however, it’s easy to add more complex logic with some AI.

Result

This bot has been running on Google Cloud since 12th December 2019 and made 1130 transactions (buy/sell) up to 14th April 2020. Initially, I started this bot with 2.6 BNB. On 7th February 2020, the balance was 2.1 BNB in the wallet, but while writing this blog on 14th April 2020, it looks like the bot has recovered the loss and the balance is 2.59 BNB. Hopefully, in the future it will make some profit💰🙂.

Let me know your suggestions about this bot in a comment, and I would also be happy to answer any questions you have on this topic. Thanks for your time.

JHipster with Google App Engine and Cloud MySQL

Reading Time: 5 minutes

How does it sound to set up a complete spring application, with front-end and database? With all the models, repositories and controllers? Even with Unit and Integration tests, with mocked data? All within a few hours? Your solution is JHipster!

JHipster

JHipster or “Java Hipster” is a handy application generator, a development platform, used to develop and deploy web applications. JHipster has become popular in a short time, and it has been featured at many conferences all around the globe – Montreal, Omaha, Taipei, Richmond, Frankfurt, Paris, London. It supports:

  • Spring Boot (Back-end)
  • Angular/React/Vue (Front-end)
  • Spring microservices

JHipster is used for generating complete applications: it will create for you a high-quality Spring Boot and Angular/React/Vue application with most things pre-configured, using Java as the back-end technology and an extensive set of Spring technologies – Spring Security, Spring Boot, Spring MVC (providing a framework for web-sockets, REST and MVC), Spring Data, etc. – together with an Angular/React/Vue front-end and a suite of pre-configured development tools like Yeoman, Maven, Gradle, Grunt, Gulp.js and Bower.

JHipster gives you a head start in creating a Spring Boot application with a set of pre-defined screens for user management, monitoring and logging. The generated Spring Boot application is specifically tailored to make working with Angular/React/Vue a smoother experience. On top of all that, JHipster also gives you the tools to update, manage and package the resulting application.

By now you may think it sounds too good to be true… but that is not everything JHipster offers. If you are a web developer, you probably have a lot of questions by now. 🙂
One important question we will answer in this blog post: is it supported by today’s cloud solutions, is it compatible with all of them? The answer is yes, it is compatible with the popular cloud solutions from Google, Amazon, Microsoft and Heroku. Let’s see what it takes to make a complete integration with Google’s cloud platform, the App Engine.


Google App Engine

Google App Engine is a cloud solution provided by Google, a platform for developing and hosting web applications in data centres managed by Google; Platform as a Service (PaaS). Applications are sandboxed and run across multiple servers. The App Engine supports Java or Python, uses the Google query language and stores data in Google BigTable.

It is free up to a certain amount of resource usage. After exceeding the limits for storage, CPU resources, requests, number of API calls or concurrent requests, the user can pay for more of these resources.

It is fully compatible with JHipster-generated projects. All it takes to host your application is to follow the official how-to guide from the Google App Engine documentation, as for a normal Spring Boot application. To make things easier, Google offers a database which works closely with Google App Engine: Cloud SQL.

Cloud SQL

Cloud SQL is a fully-managed database service offered by Google for their cloud solutions that makes it easy to configure, manage, maintain and operate your relational databases on Google Cloud Platform.

It offers three database options to integrate with:

  • MySQL
  • PostgreSQL
  • SQL Server

Let’s get into details of integrating with Cloud SQL for MySQL:

  1. The first step is to create a Cloud SQL instance on the Google Cloud Platform, which requires a few things like instance ID, password etc. to be set, and it gives you the option to choose the MySQL database version.
  2. The following step is to create the database in the newly created instance. It is possible to have more databases in one instance.
  3. Now, for our application to be able to communicate with Cloud SQL without any permission blockers, we need to register the application in Cloud SQL and manually configure the service account roles.
  4. The final step is connecting your application to the created Cloud SQL instance. It is done through JDBC; all the required properties can be found in the overview of the Cloud SQL instance (instance connection name, credentials etc.), as sketched below.
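As an illustration, the datasource part of the configuration could look roughly like this (a sketch using the Cloud SQL JDBC socket factory; all placeholders are assumptions to be replaced with your own values):

spring:
  datasource:
    url: jdbc:mysql://google/<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.mysql.SocketFactory
    username: <DB_USER>
    password: <DB_PASSWORD>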

So the conclusion: don’t be afraid to invest some time in new technologies, be curious, you never know where they may lead you. Thank you for reading. 🙂

JHipster, is it worth it?

Reading Time: 7 minutes

JHipster is an open-source platform to generate, develop and deploy Spring Boot + Angular / React / Vue web applications. And with over 15,000 stars on GitHub, it is the most popular code generation framework for Spring Boot. But is it worth the hype, or is the generated code too difficult to maintain and not production-ready?

How does it work?

The first thing to note is that JHipster is not a separate framework by itself. It uses Yeoman and .jdl files in order to generate code: Spring Boot for the backend and Angular, React or Vue for the frontend. After the initial generation of the project, you have the option to use the generated code without ever running JHipster commands again, or to keep using JHipster in order to incrementally grow the project and develop new features.

What exactly is JDL?

JDL is a JHipster-specific domain language where you can describe all your applications, deployments, entities and their relationships in a single file (or more than one) with a user-friendly syntax.

You can use the online JDL-Studio or one of the JHipster IDE plugins/extensions, which support working with JDL files.

Example of simple JDL file for Blog application:

entity Blog {
  name String required minlength(3)
  handle String required minlength(2)
}

entity Post {
  title String required
  content TextBlob required
  date Instant required
}

entity Tag {
  name String required minlength(2)
}

relationship ManyToOne {
  Blog{user(login)} to User
  Post{blog(name)} to Blog
}

relationship ManyToMany {
  Post{tag(name)} to Tag{entry}
}

paginate Post, Tag with infinite-scroll

Which technologies are used?

On the backend we have the following technologies:

  • Spring Boot as the primary backend framework
  • Maven or Gradle for configuration
  • Spring Security as a Security framework
  • Spring MVC REST + Jackson for REST communication
  • Spring Data JPA + Bean Validation for Object Relational Mapping
  • Liquibase for Database updates
  • MySQL, PostgreSQL, Oracle, MsSQL or MariaDB as SQL databases
  • MongoDB, Couchbase or Cassandra as NoSQL databases
  • Thymeleaf as a templating engine
  • Optional Elasticsearch support if you want to have search capabilities on top of your database
  • Optional Spring WebSockets for Web Socket communication
  • Optional Kafka support as a publish-subscribe messaging system

On the frontend side these technologies are used:

  • Angular or React or Vue as a primary frontend framework
  • Responsive Web Design with Twitter Bootstrap
  • HTML5 Boilerplate compatible with modern browsers
  • Full internationalization support
  • Installation of new JavaScript libraries with NPM
  • Build, optimization and live reload with Webpack
  • Testing with Jest and Protractor
  • Optional Sass support for CSS design

How to get started?

  1. Pre-requirements: Java, Git and Node.js.
  2. Install JHipster npm install -g generator-jhipster
  3. Create a new directory and go into it mkdir myApp && cd myApp
  4. Run JHipster and follow instructions on the screen jhipster
  5. Model your entities with JDL Studio and download the resulting jhipster-jdl.jh file
  6. Generate your entities with jhipster import-jdl jhipster-jdl.jh
  7. Run ./mvnw to start generated backend
  8. Run npm start to start generated frontend with live reload support

What do the generated code and application look like?

In case you only want to see a sample generated application without starting the whole framework you can check this official Github repo for the latest up-to-date sample code: https://github.com/jhipster/jhipster-sample-app.

Following are some screens from my up-and-running JHipster application:

Welcome screen – the initial screen when you open your JHipster app

Create a user screen – with this form you can create a new user in the app

View all users screen – in this screen you have the option to manage all your existing users

Monitoring screen – monitoring of JVM metrics, as well as HTTP request statistics

What are the pros and cons?

The important thing to remember is that JHipster is not a “magic bullet” that will solve all your problems, and it is not an optimal solution for all new projects. As a good software engineer, you will have to weigh the pros and cons of this platform and decide when it makes sense to use it and when it’s better to go with a different approach. Having used JHipster for production projects, these are some of the pros and cons that I’ve experienced:

Pros

  • Easy bootstrap of a new project with a lot of technologies preconfigured
  • JHipster almost always follows best practices and latest trends in backend and frontend development
  • Login, register, management of users and monitoring comes out-of-the-box
  • Wizard for generating your project, only the technologies that you select are included in the project
  • After defining your own JDL file, all of the model, repository, service and controller classes for your entities are generated, together with integration tests. This saves a lot of time at the beginning of the project, when you want to get to feature development as soon as possible

Cons

  • If you are not familiar with the technologies used in the generated project, it can be overwhelming and it’s easy to get lost in this mix of lots of different technologies
  • Using JHipster after the initial project generation is not a smooth experience. Classes and Liquibase scripts are overwritten and you have to be very careful with changing the initial JDL model. Alternatively, you can decide to continue without JHipster after the initial generation of the project
  • REST responses that are returned from endpoints will not always correspond to business requirements; very often you will have to manually modify your initial JHipster REST responses
  • Not all of the available options are at the same level of quality; some technologies that JHipster uses and configures are more polished than others. This is especially true if you decide to use community modules

What kind of projects are a good fit?

Having said all of this, it’s important to understand that there are projects which can benefit a lot from JHipster and projects that are better without using this platform.

In my experience, a good candidate is a greenfield project where it’s expected to deliver a lot of features fast. JHipster will help a lot to be productive from day one and to cut down on the boilerplate code that you need to write, so you will be able to begin feature development really fast. This works well for new projects with tight deadlines, proofs of concept, internal projects, hackathons, and startups.

On the other hand, a not-so-ideal situation is if you have an already started and up-and-running project; there is not much JHipster can do in this case. Another case would be if the application has a lot of specific business logic and is not a simple CRUD application – for example, an AI project, a chatbot or a legacy ecosystem where these new technologies are not suitable or supported.

JHipster, is it worth it?

There is only one sure way to decide if JHipster is worth it for your next project or not, and that is to try it out yourself and play around with the different features and configuration that JHipster offers.

At best, you will find a new framework for your next project and save a lot of effort next time you have to start a project. At worst, you will get to know the latest trends in both backend and frontend and learn some of the best practices from a very large community.

Reactive Spring with WebFlux and SQL Databases

Reading Time: 6 minutes

Since Spring Boot 2, Spring WebFlux has been available so we can create reactive web applications. This was great and worked fine with NoSQL databases, but when it came to relational databases there was an issue: JDBC database operations are blocking by nature, and this will prevent you from creating a totally non-blocking application. In order to have an asynchronous and non-blocking application, we need to cover every layer of the application. The hero that solved this is R2DBC – Reactive Relational Database Connectivity – which makes it possible to issue non-blocking calls to relational databases.

The combination of WebFlux and R2DBC is enough to cover every layer in our application that we are going to build. As a relational database, we are going to use H2. So on to the coding!

Go to the spring initializr page from where we are going to build our application and select the following configuration:

  • Group: com.north47 (or your package name)
  • Artifact: spring-r2dbc
  • Dependencies: Spring Reactive Web, Spring Data R2DBC [Experimental], H2 Database, Lombok

(You won’t be able to see Lombok in this picture, but there it is! If for some reason Lombok is causing you issues, you might need to install a plugin. To do this in IntelliJ go to File -> Settings -> Plugins, search for Lombok, install it and restart your IDE. If you can’t manage to do it, just go the old way: remove the annotations @Data, @AllArgsConstructor and @NoArgsConstructor from the Book.java class and create your own setters, getters and constructors.)

Now click on Generate, unzip the application and open it via your IDE.

Let’s first create a SQL script that will create our table. Go to src -> main -> resources and right-click on it and select New -> File. Name the file: schema.sql and enter there the following code:

CREATE TABLE BOOK (
ID INTEGER IDENTITY PRIMARY KEY ,
NAME VARCHAR(255) NOT NULL,
AUTHOR VARCHAR (255) NOT NULL
);

This will create a table with name ‘Book’ and the following columns: ID, NAME and AUTHOR.

We will create an additional script that inserts some data into our database. Repeat the previous procedure, this time naming the file data.sql, and add the following code:

INSERT INTO BOOK (ID,NAME,AUTHOR) VALUES (1,'Angels and Demons','Dan Brown');
INSERT INTO BOOK (ID,NAME, AUTHOR) VALUES (2,'The Matarese Circle', 'Robert Ludlum');
INSERT INTO BOOK (ID,NAME,AUTHOR) VALUES (3,'Name of the Rose', 'Umberto Eco');

This will put some data into our database.

In resources, delete the application.properties file and create a new application.yml file instead, where we are going to add the following:

logging:
  level:
    org.springframework.data.r2dbc: DEBUG
spring:
  r2dbc:
    url: r2dbc:h2:mem:///test?options=DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    name: sa
    password:


Now that we have defined the r2dbc URL and enabled the DEBUG logging level for r2dbc, let’s go on to create our Java classes.

Create a new package domain under the ‘com.north47.springr2dbc’ and create a new class Book. This will be our database model:

package com.north47.springr2dbc.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.springframework.data.annotation.Id;
import org.springframework.data.relational.core.mapping.Column;
import org.springframework.data.relational.core.mapping.Table;

@Table("book")
@Data
@AllArgsConstructor
@NoArgsConstructor
public class Book {

    @Id
    private Long id;

    @Column(value = "name")
    private String name;

    @Column(value = "author")
    private String author;

}

Now, to create our repository, first create a new package named ‘repository’ under ‘com.north47.springr2dbc’. In there create an interface named BookRepository. This interface will extend R2dbcRepository:

package com.north47.springr2dbc.repository;

import com.north47.springr2dbc.domain.Book;
import org.springframework.data.r2dbc.repository.R2dbcRepository;

public interface BookRepository extends R2dbcRepository<Book, Long> {
}

As you may notice, we are not extending the JpaRepository as usual. The R2dbcRepository will provide us with methods that work with reactive types like Flux, Mono etc.
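If you need a hand-written query, Spring Data R2DBC also supports a @Query annotation. A minimal sketch (the findByAuthor method is an assumption, not part of the project above; depending on your Spring Data R2DBC version the Query annotation may live in a slightly different package):

package com.north47.springr2dbc.repository;

import com.north47.springr2dbc.domain.Book;
import org.springframework.data.r2dbc.repository.Query;
import org.springframework.data.r2dbc.repository.R2dbcRepository;
import reactor.core.publisher.Flux;

public interface BookRepository extends R2dbcRepository<Book, Long> {

    // runs the given SQL and streams the matching rows reactively
    @Query("SELECT * FROM book WHERE author = :author")
    Flux<Book> findByAuthor(String author);
}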

After this, we will create endpoints from where we can access the previously inserted data or create new, modify it or delete it.

Create a new package ‘resource’ under the ‘com.north47.springr2dbc’ package and in there we will create our BookResource:

package com.north47.springr2dbc.resource;

import com.north47.springr2dbc.domain.Book;
import com.north47.springr2dbc.repository.BookRepository;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
@RequestMapping(value = "/books")
public class BookResource {

    private final BookRepository bookRepository;

    public BookResource(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Book> getAllBooks() {
        return bookRepository.findAll();
    }

    @GetMapping(value = "/{id}")
    public Mono<Book> findById(@PathVariable Long id) {
        return bookRepository.findById(id);
    }

    @PostMapping(consumes = MediaType.APPLICATION_JSON_VALUE)
    public Mono<Book> save(@RequestBody Book book) {
        return bookRepository.save(book);
    }

    @DeleteMapping(value = "/{id}")
    public Mono<Void> delete(@PathVariable Long id) {
        return bookRepository.deleteById(id);
    }
}

And there we have endpoints from where we can access our data and modify it.

On to Postman so we can test our application – but of course, first, let’s start it. When you run the application you can see in the console that your server has started:

Netty started on port(s): 8080

Also, since we enabled the DEBUG log level, you should be able to see all the SQL queries that are executed from the scripts that we wrote previously.

In Postman, set up a GET method with the URL localhost:8080/books. In the Headers add the key ‘Content-Type’ with value ‘application/json’.

Press that send button and there it is you will get the data:

data:{"id":1,"name":"Angels and Demons","author":"Dan Brown"}

data:{"id":2,"name":"The Matarese Circle","author":"Robert Ludlum"}

data:{"id":3,"name":"Name of the Rose","author":"Umberto Eco"}

You can also test the other endpoints, for example getting a book by id, just by changing the URL to localhost:8080/books/1. The result will be:

{
    "id": 1,
    "name": "Angels and Demons",
    "author": "Dan Brown"
}

Now you can test the remaining endpoints, for example creating a new book by sending a POST request to localhost:8080/books, or deleting a book by sending a DELETE request to localhost:8080/books/{id}.
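For example, the POST body for creating a new book could look like this (a sketch; the id is generated by the database):

{
    "name": "The Da Vinci Code",
    "author": "Dan Brown"
}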

Here you can find the whole code:

Spring-R2DBC

Hope you enjoyed it!

A simple way of using Micrometer, Prometheus and Grafana (Spring Boot 2)

Reading Time: 7 minutes

When we run any Java application, we are running a JVM. That JVM uses resources like memory, processor etc. The same happens when we run any Spring application too; it runs and uses our hardware resources. Monitoring and measuring these parameters is crucial when we are in production or when we want to test the performance of our application. With Spring, it is easy: we just include the Spring Boot Actuator and it gives us access to almost all the measurements that we need, like:

"jvm.memory.max",
"jvm.threads.states",
"jvm.gc.memory.promoted",
"jvm.memory.used",
"jvm.gc.max.data.size",
"jvm.gc.pause",
"jvm.memory.committed",
"system.cpu.count",
"logback.events",
…

To set up Spring Boot Actuator, add the following dependency to our project:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

and on the following endpoint:

<host/context-path>/actuator

we will have basic links to additional features of the application and monitoring:

{
    "_links": {
        "self": {
            "href": "http://localhost:8080/actuator",
            "templated": false
        },
        "health": {
            "href": "http://localhost:8080/actuator/health",
            "templated": false
        },
        "health-path": {
            "href": "http://localhost:8080/actuator/health/{*path}",
            "templated": true
        },
        "info": {
            "href": "http://localhost:8080/actuator/info",
            "templated": false
        }
    }
}

If this basic information is not enough, we can extend it by adding the following parameter in the application configuration file:

management.endpoints.web.exposure.include=*

By following any of these links, we will access the details. For our use it will be http://localhost:8080/actuator/metrics, from which we are going to access the metrics of our application.

Now we have almost everything we need to monitor how our application performs: requests, JVM memory, cache, threads etc…

Micrometer

However, if we have some more logic in our code and we need more precise metrics, for example metrics for our own code, we will need some other way to get them. Spring Boot 2 Actuator enriches all these already existing metrics through the Micrometer data provider.

Micrometer is a dimensional-first metrics collection facade whose aim is to allow you to time, count, and gauge your code with a vendor-neutral API.

Moreover, Micrometer is a vendor-neutral data provider and exposes application metrics to external monitoring systems like Prometheus, AWS CloudWatch etc…

Micrometer provides a set of Meter primitives, including Timer, Counter, Gauge, DistributionSummary, LongTaskTimer, FunctionCounter, FunctionTimer and TimeGauge. Here we should be aware that each meter type results in a different number of time-series metrics: a gauge has a single metric, but a timer has both a count of timed events and the total time of all timed events.

If we write something like this in our code:

List<Integer> gaugeList = registry.gauge("dummy.gauge.list", Collections.emptyList(), someList, List::size);
List<Integer> gaugeCollectionsSizeList = registry.gaugeCollectionSize("dummy.size.list", Tags.empty(), someList);
Map<Integer, Integer> gaugeMapSize = registry.gaugeMapSize("dummy.gauge.map", Tags.empty(), someMap);

registry.timer("dummy.timer", Tags.empty()).record(() -> {
    slowDummyMethod();
});

We will get three time series for the Timer (dummy_timer_seconds_count, dummy_timer_seconds_max, dummy_timer_seconds_sum) and three gauges (dummy_gauge_list, dummy_size_list, dummy_gauge_map).
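The Counter primitive mentioned above works in a similar way. A minimal sketch (the dummy.counter name is an assumption for illustration):

// a counter only ever goes up; increment it whenever the event of interest happens
Counter dummyCounter = registry.counter("dummy.counter", Tags.empty());
dummyCounter.increment();

In Prometheus this would show up as dummy_counter_total.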

All this data can be used by many monitoring systems like Netflix Atlas, CloudWatch, Datadog, Ganglia etc… In our case, we will use Prometheus.

Prometheus

Including Prometheus in our project is easy: just add the Maven dependency:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

This will create a new endpoint in the actuator: <host/context-path>/actuator/prometheus. If we access this URL we will get the metrics from Micrometer.

To see this data in a graphical UI we have to start a Prometheus server. We can do that directly by downloading the Prometheus server and running it:

https://prometheus.io/download/

The configuration is in the prometheus.yml file.

Basic parameters that we should set up here are:

global:
  scrape_interval:     10s # Scrape interval to every 10 seconds. Default value is every 1 minute.

and

scrape_configs:
  - job_name: 'spring_micrometer'

    metrics_path: '/micromexample/actuator/prometheus' # Path to the prometheus end point in our application. “micromexample” is the context and “actuator/prometheus” is default path for prometheus in our application
    static_configs:
    - targets: ['localhost:8080'] # host where our application is deployed

Another way to get a Prometheus server is to run a Docker image which contains Prometheus. We can do that with the following command:

docker run -d -p 9090:9090 -v <yours-prometheus-config-file.yml>:/etc/prometheus/prometheus.yml prom/prometheus

“9090” – the port where our Prometheus will listen, this value is the default port

<yours-prometheus-config-file.yml> – our configuration file for Prometheus

“prom/prometheus” – docker image with Prometheus

After we run our Spring Boot application with Prometheus included and start the Prometheus server, we should be able to see the metrics in a basic view from Prometheus:

http://localhost:9090/graph

this is what we should get from our service:

For this graph, we wrote the following code (to have something to be sure that everything works)

registry.timer("dummy.timer ", Tags.empty()).record(() -> {
    slowDummyMethod();
});

Grafana

If we want a rich graphical UI where it is easy to browse through the metrics data, with dashboard editing and cloud monitoring compatibility, then it is a good idea to use Grafana.

Setting up Grafana is similar to Prometheus, we will need a Grafana server.

Again, we can download and install it locally; this way, we will have it as a service in our OS:

https://grafana.com/get

Or run docker image with Grafana in it:

docker run -d -p 3000:3000 grafana/grafana

“3000” – port for grafana

“grafana/grafana” – docker image with grafana

Default user and password are admin/admin. On the first login, you will be asked to add a new password.

After we log in, we should add a data source from which Grafana will read the metrics. Go to the left menu: Configuration -> Data Sources, choose the “Data Sources” tab and add a new data source with “Add data source”.

Since we decided to go with Prometheus, we will select a Prometheus source. On the new page (Configuration), because we did not set up any authentication or anything else in Prometheus – everything is default – we just need to set the HTTP -> URL field. For our case, it will be “http://localhost:9090”. If everything is OK, by clicking “Save and test” we should get a green bar confirming that Grafana is connected to Prometheus and we can access the metrics from it.

Let’s see our first metrics from the timer that we added in our application. For this one we will create our own new dashboard:

Choose “Add Query” and in the new window add the following key in “Metrics”: “dummy_timer_seconds_count”. This will add one metric to our graph.

In the same graph, we can add the second one from the timer, “dummy_timer_seconds_max”. With this, we will have both metrics in the same graph.

There are other parameters that you can set, but for basic setup default values are fine.

With this, we have set up everything we need for monitoring our application. Next is to add more graphs for metrics that we want to monitor.

Securing your microservices with OAuth 2.0. Building Authorization and Resource server

Reading Time: 8 minutes

We live in a world of microservices. They give us an easy opportunity to scale our application. But as we scale our application, it becomes more and more vulnerable, so we need to think about how to protect our services and how to keep the wrong people from accessing protected resources. One way to do that is by enabling user authorization and authentication. With authorization and authentication, we need a way to manage credentials, check the access of the requester and make sure people are doing what they are supposed to.

When we speak about Spring (Cloud) Security, we are talking about service authorization powered by OAuth 2.0. This is how exactly it works:


The actors in this OAuth 2.0 scenario that we are going to discuss are:

  • Resource Owner – Entity that grants access to a resource, usually you!
  • Resource Server – Server hosting the protected resource
  • Client – App making protected resource requests on behalf of a resource owner
  • Authorization server – server issuing access tokens to clients

The client asks the resource owner to authorize it. When the resource owner provides an authorization grant, the client sends a request with it to the authorization server. The authorization server replies by sending an access token to the client. Now that the client has an access token, it puts it in the header and asks the resource server for the protected resource. And finally, the client gets the protected data.

Now that everything is clear about how the general OAuth 2.0 flow is working, let’s get our hands dirty and start writing our resource and authorization server!

Building OAuth2.0 Authorization server

Let’s start by creating our authorization server using the Spring Initializr. Create a project with the following configuration:

  • Project: Maven Project
  • Artefact: auth-server
  • Dependencies: Spring Web, Cloud Security, Cloud OAuth2

Download the project, copy it into your workspace and open it via your IDE. Go to your main class and add the @EnableAuthorizationServer annotation.

@SpringBootApplication
@EnableAuthorizationServer
public class AuthServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(AuthServerApplication.class, args);
    }

}

Go to the application.properties file and make the following modification:

  • Change the server port to 8083
  • Set the context path to be “/api/auth”
  • Set the client id to “north47”
  • Set the client secret to “north47secret”
  • Enable all authorized grant types
  • Set the client scope to read and write

server.port=8083

server.servlet.context-path=/api/auth

security.oauth2.client.client-id=north47
security.oauth2.client.client-secret=north47secret
security.oauth2.client.authorized-grant-types=authorization_code,password,refresh_token,client_credentials
security.oauth2.client.scope=read,write

The client id is a public identifier for applications. The way we used it here is not good practice for a production environment; it is usually a 32-character hex string so that it won’t be easily guessable.

Let’s add some users to our application. We are going to use in-memory users, and we will achieve that by creating a new class, ServiceConfig. Create a package called “config” with the following path: com.north47.authserver.config and in there create the above-mentioned class:

@Configuration
public class ServiceConfig extends GlobalAuthenticationConfigurerAdapter {

    @Override
    public void init(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
                .withUser("filip")
                .password(passwordEncoder().encode("1234"))
                .roles("ADMIN");
    }

    @Bean
    public BCryptPasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}

With this we are defining one user with username ‘filip’ and password ‘1234’ with the role ADMIN. We are defining the BCryptPasswordEncoder bean so we can encode our password.

In order to authenticate the users that arrive from another service, we are going to add another class called UserResource into a newly created package resource (com.north47.authserver.resource):

@RestController
public class UserResource {

    @RequestMapping("/user")
    public Principal user(Principal user) {
        return user;
    }
}

When users from other services send a token for validation, the user will also be validated with this method.

And that’s it! Now we have our authorization server! The authorization server is providing some default endpoints which we are going to see when we will be testing the resource server.

Building Resource Server

Now let’s build our resource server where we are going to keep our secure data. We will do that with the help of the Spring Initializr. Create a project with the following configuration:

  • Project: Maven Project
  • Artefact: resource-server
  • Dependencies: Spring Web, Cloud Security, Cloud OAuth2

Download the project and copy it into your workspace. First, we are going to create our entity called Train. Create a new package called domain under com.north47.resourceserver and create the class there.

public class Train {

    private int trainId;
    private boolean express;
    private int numOfSeats;

    public Train(int trainId, boolean express, int numOfSeats) {
        this.trainId = trainId;
        this.express = express;
        this.numOfSeats = numOfSeats;
    }

   public int getTrainId() {
        return trainId;
    }

    public void setTrainId(int trainId) {
        this.trainId = trainId;
    }

    public boolean isExpress() {
        return express;
    }

    public void setExpress(boolean express) {
        this.express = express;
    }

    public int getNumOfSeats() {
        return numOfSeats;
    }

    public void setNumOfSeats(int numOfSeats) {
        this.numOfSeats = numOfSeats;
    }

}

Let’s create a resource that will expose an endpoint from which we can get the protected data. Create a new package called resource and there create a class TrainResource. We will have only one method that exposes an endpoint behind which we can get the protected data.

@RestController
@RequestMapping("/train")
public class TrainResource {


    @GetMapping
    public List<Train> getTrainData() {

        return Arrays.asList(new Train(1, true, 100),
                new Train(2, false, 80),
                new Train(3, true, 90));
    }
}

Let’s start the application and send a GET request to http://localhost:8082/api/services/train. You will be asked to enter a username and password. The username is user and the password can be found in the console where the application was started. Entering these credentials will give you the protected data.

Now let’s change the application to be a resource server by going to the main class, ResourceServerApplication, and adding the annotation @EnableResourceServer.

@SpringBootApplication
@EnableResourceServer
public class ResourceServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ResourceServerApplication.class, args);
    }

}

Go to the application properties file and do the following changes:

server.port=8082
server.servlet.context-path=/api/services
security.oauth2.resource.user-info-uri=http://localhost:8083/api/auth/user 

What we have done here is:

  • Changed our server port to 8082
  • Set context path: /api/services
  • Gave the user info URI where the user will be validated when a token is passed

Now if you try to get the protected data by sending a GET request to http://localhost:8082/api/services/train, the server will return a message that you are unauthorized and that full authentication is required. That means that without a token you won’t be able to access the resource.

So we need a fresh new token in order to get the data. We will ask the authorization server to give us a token for the user that we previously created. Our client in this scenario will be Postman. The authorization server that we previously created exposes some endpoints out of the box. To ask the authorization server for a fresh new token, send a POST request to the following URL: localhost:8083/api/auth/oauth/token.

As said previously, Postman in this scenario is the client accessing the resource, so it needs to send the client credentials to the authorization server: the client id and the client secret. Go to the authorization tab and add the client id (north47) as the username and the client secret (north47secret) as the password. The picture below shows how to set up the request:

What is left is to supply the username and password of the user. Open the body tab, select x-www-form-urlencoded and add the following values:

  • key: ‘grant_type’, value: ‘password’
  • key: ‘client_id’, value: ‘north47’
  • key: ‘username’, value: ‘filip’
  • key: ‘password’, value: ‘1234’

Press send and you will get a response with the access_token:

{
    "access_token": "ae27c519-b3da-4da8-bacd-2ffc98450b18",
    "token_type": "bearer",
    "refresh_token": "d97c9d2d-31e7-456d-baa2-c2526fc71a5a",
    "expires_in": 43199,
    "scope": "read write"
}

Now that we have the access token we can call our protected resource by inserting the token into the header of the request. Open postman again and send a GET request to localhost:8082/api/services/train. Open the header tab and here is the place where we will insert the access token. For a key add “Authorization” and for value add “Bearer ae27c519-b3da-4da8-bacd-2ffc98450b18”.
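
The same flow can also be scripted instead of clicking through Postman. The following is a sketch with the Java 11 HttpClient; the access token value is a placeholder that you would extract from your own token response:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TokenFlowDemo {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // The client credentials (client id and secret) go into a Basic auth header
        String clientCredentials = Base64.getEncoder()
                .encodeToString("north47:north47secret".getBytes(StandardCharsets.UTF_8));

        // Password grant: the user's credentials go into the form body
        HttpRequest tokenRequest = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/api/auth/oauth/token"))
                .header("Authorization", "Basic " + clientCredentials)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "grant_type=password&username=filip&password=1234"))
                .build();

        // The response body is the JSON shown above, containing the access_token
        String tokenJson = client.send(tokenRequest, HttpResponse.BodyHandlers.ofString()).body();
        System.out.println(tokenJson);

        // Placeholder: parse the access_token out of tokenJson with your JSON library of choice
        String accessToken = "<access_token from the response above>";

        // Call the protected resource with a Bearer header
        HttpRequest trainRequest = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8082/api/services/train"))
                .header("Authorization", "Bearer " + accessToken)
                .build();

        System.out.println(client.send(trainRequest, HttpResponse.BodyHandlers.ofString()).body());
    }
}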


And there it is! You have authorized yourself, got a new token and used it to access the protected data.

You can find the projects in our repository:

And that’s it! Hope you enjoyed it!

JMeter

Reading Time: 4 minutes

Today, we are gonna take a look at JMeter. You can embed it in your application as a library and make an external integration testing solution. You don’t have to use it for load testing; it could simply send one request, check the return status code, check the return value and move on. There is an argument that JMeter may be overkill for that, but it provides an easy way to verify the response, allows you to set everything up using the JMeter desktop app, and then lets you move into testing latency under load.
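
To give an idea of what embedding looks like, here is a minimal sketch that runs an existing .jmx file in-process, following JMeter’s documented embedding pattern (the paths are placeholders for your own installation and test plan):

import java.io.File;

import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jmeter.save.SaveService;
import org.apache.jmeter.util.JMeterUtils;
import org.apache.jorphan.collections.HashTree;

public class EmbeddedJMeterRunner {

    public static void main(String[] args) throws Exception {
        // Point JMeter at its home directory and properties (placeholder paths)
        JMeterUtils.setJMeterHome("/home/apache-jmeter-5.1.1");
        JMeterUtils.loadJMeterProperties("/home/apache-jmeter-5.1.1/bin/jmeter.properties");
        JMeterUtils.initLocale();

        // Load the test plan that was saved from the JMeter desktop app
        HashTree testPlanTree = SaveService.loadTree(new File("src/test/resources/jmeter/test.jmx"));

        // Run it in the current JVM
        StandardJMeterEngine jmeter = new StandardJMeterEngine();
        jmeter.configure(testPlanTree);
        jmeter.run();
    }
}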

First, we need to create a test file that will be put later in our spring boot application. The steps for creating the .jmx file are as follows:

1 – Open the JMeter window by clicking on /home/apache-jmeter-5.1.1/bin/jmeter.bat. The next step you want to do with every JMeter Test Plan is to add a thread group element. Set the “Loop Count” parameter equal to 1, as shown below:

2 – The next step is to create a While Controller. The purpose of the While Controller is to repeat a given set of actions until the condition is broken. While is a basic programming concept for running actions when the number of iterations to run is unknown or varying.

3 – Create an HTTP request as shown in the figure below:

4 – Afterwards, we are gonna create a CSV Data Set Config. This step refers to the CSV file from which the partner users will be read and substituted into the HTTP request.

5 – After running our test, we want to see the results, e.g. which calls have been done and which ones have failed. This is done through Listeners, a recording mechanism that shows results, including logging and debugging.

The View Results Tree is the most common Listener.

Right-click – Add->Listener->View Results Tree

6 – At the end, it should be something like the figure below:

Now click ‘Save’. Your test will be saved as a .jmx file. To run the test, click the green arrow on top. After the test completes running, you can view the results on the Listener as in the figure below. In this example, you can see the tests were successful because they’re green. On the right, you can see more detailed results, like load time, connect time, errors, the request data, the response data, etc. You can also save the results if you want to.

JMeter can also be used for Maven testing through a plugin and works quite nicely with variables, prerequisites etc. Integrating Performance Testing in your projects has many benefits:

  • It provides a constant check of the performances of the application
  • Secures continuous delivery in production
  • Allows early detection of performance problems or performance regressions
  • Automating the process means less manual work, allowing your team to focus on more valuable tasks like performance analysis and optimisation

First of all, you need to add the plugin to your project. So go to the Maven project directory (jmeter-testproject in this case) and edit the pom.xml file. Here you must add the plugin. You can find the basic configuration here. You just need to copy the configuration text and paste it in your pom.xml file.

Finally, the plugin section of your pom.xml file looks like this:

<plugin>
<plugin>
   <groupId>com.lazerycode.jmeter</groupId>
   <artifactId>jmeter-maven-plugin</artifactId>
   <version>2.9.0</version>
   <executions>
      <execution>
         <id>jmeter-tests</id>
         <phase>test</phase>
         <goals>
            <goal>jmeter</goal>
         </goals>
      </execution>
      <execution>
         <id>jmeter-check-results</id>
         <phase>test</phase>
         <goals>
            <goal>results</goal>
         </goals>
      </execution>
   </executions>
   <configuration>
      <testFilesDirectory>src/test/resources/jmeter</testFilesDirectory>
      <ignoreResultFailures>false</ignoreResultFailures>
   </configuration>
</plugin>

In the code above we will be running 2 goals:

  • jmeter in phase test: this goal runs the load test and generates the HTML report
  • results in phase test: this goal runs verification on the error rate and fails the build if there are failed requests (since ignoreResultFailures is set to false)

Testing Spring Boot application with examples

Reading Time: 7 minutes

Why bother writing tests is already a well-discussed topic in software engineering. I won’t go into much detail on this topic, but I will mention some of the main benefits.

In my opinion, testing your software is the only way to achieve confidence that your code will work in the production environment. Another huge benefit is that it allows you to refactor your code without fear of breaking some existing features.

Risk of bugs vs the number of tests

In the Java world, one of the most popular frameworks is Spring Boot, and part of the popularity and success of Spring Boot is exactly the topic of this blog – testing. Spring Boot and the Spring framework offer out-of-the-box support for testing, and new features are being added constantly. When the Spring framework appeared on the Java scene in 2005, one of the reasons for its success was exactly this: ease of writing and maintaining tests, as opposed to JavaEE, where writing integration tests requires additional libraries like Arquillian.

In the following, I will go over different types of tests in Spring Boot, when to use them and give a short example.

Testing pyramid

We can roughly group all automated tests into 3 groups:

  • Unit tests
  • Service (integration) tests
  • UI (end to end) tests

As we go from the bottom of the pyramid to the top, tests become slower to execute: unit tests run in a few milliseconds, service tests in hundreds of milliseconds and UI tests in seconds. If we measure the scope of tests, unit tests, as the name suggests, test small units of code; service tests cover the whole service or a slice of it that involves multiple units; and UI tests have the largest scope, exercising multiple different services. In the following sections, I will go over some examples of how we can unit test and service test a Spring Boot application. UI testing can be achieved using external tools like Selenium and Protractor, but they are not related to Spring Boot.

Unit testing

In my opinion, unit tests make the most sense when you have some kind of validators, algorithms or other code that has lots of different inputs and outputs and executing integration tests would take too much time. Let’s see how we can test a validator with Spring Boot.

Validator class for emails

public class Validators {

    private static final String EMAIL_REGEX = "(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|\"(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21\\x23-\\x5b\\x5d-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])*\")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21-\\x5a\\x53-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])+)\\])";

    public static boolean isEmailValid(String email) {
        return email.matches(EMAIL_REGEX);
    }
}

Unit tests for email validator with Spring Boot

@RunWith(MockitoJUnitRunner.class)
public class ValidatorsTest {
    @Test
    public void testEmailValidator() {
        assertThat(isEmailValid("valid@north-47.com")).isTrue();
        assertThat(isEmailValid("invalidnorth-47.com")).isFalse();
        assertThat(isEmailValid("invalid@47")).isFalse();
    }
}

MockitoJUnitRunner enables the use of Mockito in tests and the detection of @Mock annotations. In this case, we are testing the email validator as a separate unit from the rest of the application. MockitoJUnitRunner is not specific to Spring Boot, so this way of writing unit tests can be used with other frameworks as well.

Integration testing of the whole application

If we have to choose only one type of test in Spring Boot, then using the integration test to test the whole application makes the most sense. We will not be able to cover all the scenarios, but we will significantly reduce the risk. In order to do integration testing, we need to start the application context. In Spring Boot 2, this is achieved with the following annotations: @RunWith(SpringRunner.class) and @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT). This will start the application on some random port and we can inject beans into our tests and do REST calls on application endpoints.
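
The examples below work with a Book entity and a Spring Data repository that are not listed in this post. A minimal sketch consistent with how the tests use them (the field names are inferred from the constructor calls and are therefore assumptions) could look like this:

@Entity
public class Book {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String author;
    private String name;
    private int pages;

    protected Book() {
        // default constructor required by JPA
    }

    public Book(Long id, String author, String name, int pages) {
        this.id = id;
        this.author = author;
        this.name = name;
        this.pages = pages;
    }

    public Long getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    // remaining getters and setters omitted for brevity
}

public interface BookRepository extends JpaRepository<Book, Long> {
}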

In the following is an example code for testing the book endpoints. For making REST API calls we are using Spring’s TestRestTemplate, which is more suitable for integration tests compared to RestTemplate.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class SpringBootTestingApplicationTests {

    @Autowired
    private TestRestTemplate restTemplate;

    @Autowired
    private BookRepository bookRepository;

    private Book defaultBook;

    @Before
    public void setup() {
        defaultBook = new Book(null, "Asimov", "Foundation", 350);
    }

    @Test
    public void testShouldReturnCreatedWhenValidBook() {
        ResponseEntity<Book> bookResponseEntity = this.restTemplate.postForEntity("/books", defaultBook, Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.CREATED);
        assertThat(bookResponseEntity.getBody().getId()).isNotNull();
        assertThat(bookRepository.findById(bookResponseEntity.getBody().getId())).isPresent();
    }

    @Test
    public void testShouldFindBooksWhenExists() throws Exception {
        Book savedBook = bookRepository.save(defaultBook);

        ResponseEntity<Book> bookResponseEntity = this.restTemplate.getForEntity("/books/" + savedBook.getId(), Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(bookResponseEntity.getBody().getId()).isEqualTo(savedBook.getId());
    }

    @Test
    public void testShouldReturn404WhenBookMissing() throws Exception {
        Long nonExistingId = 999L;
        ResponseEntity<Book> bookResponseEntity = this.restTemplate.getForEntity("/books/" + nonExistingId, Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.NOT_FOUND);
    }
}

Integration testing of web layer (controllers)

Spring Boot offers the ability to test layers in isolation, starting only the beans that are required for the test. From Spring Boot v1.4 on there is a very convenient annotation, @WebMvcTest, that starts only the components required for a typical web layer test, like controllers, Jackson converters and similar, without starting the full application context, avoiding the startup of unnecessary components such as the database layer. When we are using this annotation, we make the REST calls with the MockMvc class.

Following is an example of testing the same endpoints as in the above example, but in this case we are only testing whether the web layer works as expected, mocking the database layer with the @MockBean annotation, which is also available starting from Spring Boot v1.4. Using these annotations, only BookController is loaded into the application context, while the database layer is mocked.

@RunWith(SpringRunner.class)
@WebMvcTest(BookController.class)
public class BookControllerTest {
    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private BookRepository repository;

    @Autowired
    private ObjectMapper objectMapper;

    private static final Book DEFAULT_BOOK = new Book(null, "Asimov", "Foundation", 350);

    @Test
    public void testShouldReturnCreatedWhenValidBook() throws Exception {
        when(repository.save(Mockito.any())).thenReturn(DEFAULT_BOOK);

        this.mockMvc.perform(post("/books")
                .content(objectMapper.writeValueAsString(DEFAULT_BOOK))
                .contentType(MediaType.APPLICATION_JSON)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isCreated())
                .andExpect(MockMvcResultMatchers.jsonPath("$.name").value(DEFAULT_BOOK.getName()));
    }

    @Test
    public void testShouldFindBooksWhenExists() throws Exception {
        Long id = 1L;
        when(repository.findById(id)).thenReturn(Optional.of(DEFAULT_BOOK));

        this.mockMvc.perform(get("/books/" + id)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isOk())
                .andExpect(MockMvcResultMatchers.content().string(Matchers.is(objectMapper.writeValueAsString(DEFAULT_BOOK))));
    }

    @Test
    public void testShouldReturn404WhenBookMissing() throws Exception {
        Long id = 1L;
        when(repository.findById(id)).thenReturn(Optional.empty());

        this.mockMvc.perform(get("/books/" + id)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isNotFound());
    }
}
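
For completeness, the BookController exercised by these tests is not shown in the post either. A minimal sketch that would satisfy the assertions above (a guess at the original, not its exact code) could be:

@RestController
@RequestMapping("/books")
public class BookController {

    private final BookRepository repository;

    public BookController(BookRepository repository) {
        this.repository = repository;
    }

    @PostMapping
    @ResponseStatus(HttpStatus.CREATED)
    public Book create(@RequestBody Book book) {
        return repository.save(book);
    }

    @GetMapping("/{id}")
    public ResponseEntity<Book> findById(@PathVariable Long id) {
        // 404 when the book is missing, matching testShouldReturn404WhenBookMissing
        return repository.findById(id)
                .map(ResponseEntity::ok)
                .orElse(ResponseEntity.notFound().build());
    }
}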

Integration testing of database layer (repositories)

Similarly to the way we tested the web layer, we can test the database layer in isolation, without starting the web layer. This kind of testing in Spring Boot is achieved using the annotation @DataJpaTest. This annotation applies only the auto-configuration related to the JPA layer and by default uses an in-memory database, because it is fastest to start up and for most integration tests will do just fine. We also get access to TestEntityManager, which is an EntityManager with supporting features for integration tests of JPA.

Following is an example of testing the database layer of the above application. With these tests we are only checking whether the database layer works as expected: we are not making any REST calls, and we verify the results from BookRepository by using the provided TestEntityManager.

@RunWith(SpringRunner.class)
@DataJpaTest
public class BookRepositoryTest {
    @Autowired
    private TestEntityManager entityManager;

    @Autowired
    private BookRepository repository;

    private Book defaultBook;

    @Before
    public void setup() {
        defaultBook = new Book(null, "Asimov", "Foundation", 350);
    }

    @Test
    public void testShouldPersistBooks() {
        Book savedBook = repository.save(defaultBook);

        assertThat(savedBook.getId()).isNotNull();
        assertThat(entityManager.find(Book.class, savedBook.getId())).isNotNull();
    }

    @Test
    public void testShouldFindByIdWhenBookExists() {
        Book savedBook = entityManager.persistAndFlush(defaultBook);

        assertThat(repository.findById(savedBook.getId())).isEqualTo(Optional.of(savedBook));
    }

    @Test
    public void testFindByIdShouldReturnEmptyWhenBookNotFound() {
        long nonExistingID = 47L;
        
        assertThat(repository.findById(nonExistingID)).isEqualTo(Optional.empty());
    }
}

Conclusion

You can find a working example with all of these tests on the following repo: https://gitlab.com/47northlabs/public/spring-boot-testing.

In the following table, I’m showing the execution times, including startup, of the different types of tests that I’ve used as examples. We can clearly see that unit tests, as mentioned in the beginning, are the fastest ones and that separating integration tests into layered testing leads to faster execution times.

Type of test          | Execution time with startup
Unit test             | 80 ms
Integration test      | 620 ms
Web layer test        | 190 ms
Database layer test   | 220 ms