Scaling Microservices with Spring Cloud Netflix

Reading Time: 10 minutes

If you need to build large distributed systems, then this is the place to be. We are going to talk about some of the components that the Spring and Netflix stack provides and how easy they are to use. If you follow this article to the end, you will have created a complete application with service discovery, client-side load balancing, Feign clients and much more.

Before we start, let’s explain some of the terms that we are going to use in this article:

  • Eureka – a service discovery service, where every client will register itself
  • Ribbon – a client-side load balancer
  • Feign client – declarative web service client which provides communication between microservices

The picture above presents what we are going to build. We will create two user-oriented microservices, one called Supplier and the other called Order. The user will be able to place an order with a supplier, and the Supplier, in order to perform the order, will call the Order microservice. For the communication between Supplier and Order, we will use a Feign client in combination with service discovery enabled by Eureka. In the end, we are going to scale the Order microservice and see how the Ribbon load balancer behaves when we have more instances.

Let’s start by creating the Eureka service discovery microservice.

The easiest way is to go to the Spring Initializr and create your microservice with the following properties, as you can see on the picture below:

The only dependency required for our service discovery service is Eureka Server.
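If you ever need to add it by hand, this is what the dependency looks like in the pom.xml (assuming a Maven project, which is what the Initializr generates by default):

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>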

Once you are done with this, click on Generate and your project will be downloaded. Open it via your favourite IDE (I will be using IntelliJ); there are just two more things that you need to do. In your main class, you should add the @EnableEurekaServer annotation:

package com.north;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class EurekaApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaApplication.class, args);
    }

}

One more thing that we need to change is our configuration file. By default, an application.properties file is created; if that is the case, rename it to application.yml and add the following code:

server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

With this, we set the server port and the service URL. And there we have our first service discovery service. Start the application, go to your browser and enter the following link: http://localhost:8761. We should now be able to see the Eureka homepage:

As you can see, there are no instances registered at the moment. So let’s create our first instance.

We will start by creating the Order microservice. Go to the Spring Initializr and create a project with the following properties:

And we will add the following dependencies (in our case Eureka Discovery Client, Spring Web and Lombok):

Let’s start by setting the name and the port of the application. Rename application.properties to application.yml and add the following code:

spring:
  application:
    name: order

server:
  port: 8082

Now the name of the application is set to order and the application will run on port 8082. If this port is taken on your machine, feel free to change it. We will not depend on this port, but as you will see, we will depend on the application name when we want to communicate with the service.

In order to enable this instance to be discovered by Eureka we need to add the following annotation to the main class:

package com.north.order;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@EnableDiscoveryClient
@SpringBootApplication
public class OrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }

}

Now if we start the application and go back to the Eureka homepage in our browser (http://localhost:8761), we should be able to see that this instance is registered with Eureka.

Since we confirmed that this instance is registered with Eureka, we can now create an endpoint from which an order can be placed. First, let’s create an Order entity:

package com.north.order.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Order {

    private String productName;
    private int quantity;
}

It is a simple entity that will contain the name of the product and how many pieces of it we want to order. The REST controller should contain the following logic:

package com.north.order.ctrl;

import com.north.order.domain.Order;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Slf4j
public class OrderController {

    @PostMapping(value = "/order")
    ResponseEntity<Void> placeOrder(@RequestBody Order order) {
        log.info("Placing an order for product: {} with quantity: {}", order.getProductName(), order.getQuantity());
        return ResponseEntity.ok().body(null);
    }
}

You can test this endpoint using Postman or some similar tool, but what we really want is for the Supplier microservice to call it.

Now that we are done with the Order microservice, let’s build the Supplier. Again, we will open the Spring Initializr and create a project with the following properties:

And we will have the following dependencies (in our case Eureka Discovery Client, OpenFeign, Spring Web and Lombok):

Generate the project and import it into your IDE. First, rename the application.properties file to application.yml and add the following code:

spring:
  application:
    name: supplier
server:
  servlet:
    context-path: /supplier

With this, we have set the application name and a context-path. Since we didn’t change the port here, the default 8080 will be used. In order to register this instance with Eureka and to be able to use Feign clients, we need to add the following two annotations to our main class:

package com.north.supplier;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class SupplierApplication {

    public static void main(String[] args) {
        SpringApplication.run(SupplierApplication.class, args);
    }

}

The next thing is to create the same Order entity as in the previous microservice:

package com.north.supplier.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Order {

    private String productName;
    private int quantity;
}

For communication with the Order microservice we will create a feign client called OrderClient:

package com.north.supplier.service;

import com.north.supplier.domain.Order;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;

@FeignClient("order")
public interface OrderClient {

    @PostMapping("/order")
    void performOrder(@RequestBody Order order);
}

As the value in the @FeignClient annotation, we need to use the application name of the microservice that we want to communicate with, in our case order. The method written here will call the endpoint we previously exposed in the Order microservice. Let’s create a service that will use this Feign client and execute an order:

package com.north.supplier.service;

import com.north.supplier.domain.Order;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

@Slf4j
@Service
@RequiredArgsConstructor
public class OrderService {

    private final OrderClient orderClient;

    public void placeOrder(Order order) {
        log.info("Requesting order ms to place an order");
        orderClient.performOrder(order);
    }
}

At the end we will expose one endpoint that we can use to test this scenario:

package com.north.supplier.ctrl;

import com.north.supplier.domain.Order;
import com.north.supplier.service.OrderService;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequiredArgsConstructor
@Slf4j
public class OrderController {

    private final OrderService orderService;

    @RequestMapping(value = "/order")
    public ResponseEntity<Void> placeOrder(@RequestBody Order order) {
        orderService.placeOrder(order);
        return ResponseEntity.ok().body(null);
    }
}

Now that we are done, let’s start the application. First, if we check the Eureka homepage, we should be able to see that this instance is registered as well. You can also see this in the console where the Supplier is being started:

2020-09-20 20:02:43.907  INFO 7956 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_SUPPLIER/host.docker.internal:supplier: registering service...
2020-09-20 20:02:43.911  INFO 7956 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_SUPPLIER/host.docker.internal:supplier - registration status: 204

To test this complete scenario, make sure that all three applications are started and that the Order and Supplier are registered with Eureka. Using Postman, I will send a POST request to the endpoint on the Supplier microservice, and I should be able to see the order being placed in the Order microservice:
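The request looks roughly like this (the URL reflects the /supplier context path we configured, and the body values match the logs below):

POST http://localhost:8080/supplier/order
Content-Type: application/json

{
    "productName": "bread",
    "quantity": 300
}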

Just make sure that you have added a header with key Content-Type and value application/json in your Headers tab. If we execute this request, in the Supplier microservice console we should see the following log:

2020-09-20 20:20:36.674  INFO 19420 --- [nio-8080-exec-4] com.north.supplier.service.OrderService  : Requesting order ms to place an order

In the Order microservice console we should see:

2020-09-20 20:20:36.678  INFO 17800 --- [io-8082-exec-10] com.north.order.ctrl.OrderController     : Placing an order for product: bread with quantity: 300

At this point, we have managed to create three microservices: two user-oriented ones and one for service discovery. We used a Feign client for the communication between the microservices. If at some point we decide to grow this application, there are too many orders to execute and we add some complex logic to our Order microservice, we will reach a point where the Order microservice can no longer execute all the orders. Let’s see what happens if we scale our Order microservice.

First, stop the Order microservice from your IDE. Just be sure that Eureka and the Supplier are still running. Now go to the root directory of the Order project (something like …\Documents\blog\order) and open three command prompt windows in that location. In the first one, type the following command:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8084"

In the second:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8085"

In the third:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8086"

It should be something like this:

Three instances of the application should now be up and running, on the ports that we previously specified. If you open the Eureka homepage again, you should be able to see all three instances registered. Now go back to Postman and execute the same POST call to the Supplier as we did previously, and repeat it several times, as fast as possible. If you take a look at the command prompt windows that we opened, you should be able to see that a different instance of the Order microservice is called every time. This is provided by Ribbon, which works out of the box on the client side (in this case the Supplier microservice), without adding any additional code. As we have mentioned before, we are not dependent on the port; the Supplier uses the application name to send requests to Order.

To summarize, our Supplier microservice became aware of all the instances, and it now sends every request to a different instance of Order, so the load is balanced.

Hope that you enjoyed this article and had fun with it. The code is available on our N47 Bitbucket:

iOS Unit Tests – My story

Reading Time: 5 minutes

In my last job interview – almost 2 years ago – I received a question about writing unit tests in iOS code. With the confidence of a developer with 8 years of experience in iOS, my answer and my opinion at that moment were that if the developers are high-profile and write good code, writing unit tests is not necessary. My answer continued with a conclusion: if the company has a testing team, why should we (iOS developers) take their job? Looking back at that answer from today’s perspective, my feelings about the topic are mixed: first of all, I’ve learned more about the importance of unit tests, and secondly, I’ve continued to work in an environment where we are not obligated to write tests.

Some basics

We should consider our code as pieces of code – called UNITS. The purpose of a unit test is to validate our code, which allows us to be sure that the code meets the design and fulfils its goal.

In Xcode and iOS, unit tests are written in a separate target. The most important thing is the XCTest framework. The basic rule is that only the methods whose name starts with the word test will be considered unit tests by the test framework. Here is an example:

func testFormatForCard() {
    let formatter = DateFormatter()
    formatter.dateFormat = "ddMMyyyy"
    let date = formatter.date(from: "28061978")!
    XCTAssertEqual(date.formatForCard(), "28.06.1978")
}

Once a test method is written with the proper semantics, it gets a rhombus sign on its left side:

Unit Tests with rhombus sign

We can run the tests to see if our code is good or if something is not working. If the test passes, the empty rhombus sign changes into a green checkmark:

Figure 2: Test passed

In case of a failing test, we get an assertion failure and a red sign:

Figure 3: Failed unit test

We can see in Figure 3 that the programmer intentionally entered the wrong output date, 29.06.1978. That’s the way we should think when writing unit tests: first we write the failing state, and after that we enter the expected correct output. If the test passes in the second case, then we have created a useful unit test for that piece of code (unit). The general idea is that if we change something in our code and unintentionally make a mistake, the test should fail and warn us to fix the code.

Unit Test in practice

1. Code Coverage

There is a built-in option in Xcode for checking the code coverage of the tests. The ideal scenario is that 100% of our code is covered by unit tests. But is this really necessary? Will we be better programmers if our entire codebase is backed by tests? Could we spend the time needed to cover the tiniest pieces of code with proper tests in a better way?

In my opinion, writing unit tests is a philosophy, and knowing the principles leads us to write quality code. That is, of course, our goal as programmers. Covering the most important parts of the code, especially the parts that change often like networking managers and parsers, is a better option than trying to be perfect and aiming for 100% coverage.

2. Test Driven Development

The popular programming blog Ray Wenderlich emphasizes the FIRST principles as a good guideline to follow when writing unit tests. The basics of these principles are that a test should be fast, autonomous and repeatable, the output should be either “pass“ or “fail”, and ideally, the tests should be written before coding – Test Driven Development (TDD).

TDD recommends writing tests before starting to fix a bug in the code. My take on this topic is also a mixed approach: depending on the time you have, if the deadline is not close on the horizon, you can write tests before coding or before starting to fix bugs, but that’s not always possible, and you won’t make a mistake if you sometimes skip this step.

Conclusion

I can say that writing unit tests is good for every programmer on their way to becoming great. The quality of your code can dramatically improve using unit tests. The philosophy of writing tests, and of thinking about how the code should be structured to allow the tests to pass, will make you write cleaner and better structured code, use more protocols and create reusable classes… As in strategy games, you shouldn’t always be a slave to the principles – the most important thing is to fulfil the goals in a quality manner. If you have enough time and no strict project deadline, you can aim for bigger code coverage, but something around 75% is good enough.

The practical guide – Part 2: MVP -> MVVM

Reading Time: 5 minutes

The practical guide – Part 1: Refactor android application to follow the MVP design pattern

This is the second part of the practical guide. In the first part, we talked about refactoring an Android application to MVP. Now, instead of refactoring the base code to MVVM, I would like to refactor the MVP application to MVVM. That way, we will learn the differences between MVP and MVVM.

Why MVVM?
First, I should tell you that Google accepted MVVM as the preferred design pattern for building Android applications, and they have built tools that help us follow this pattern. This is a great reason to learn and use it, but why would Google choose MVVM over MVP (or other design patterns)? Well, they know best, but my opinion is that they chose MVVM because it has one dependency less, due to the difference in the communication between the ViewModel and the View. I will explain this in the next section.

Difference between MVP and MVVM
As we know, MVP stands for Model-View-Presenter, while MVVM stands for Model-View-ViewModel. The Model and the View in MVVM are the same as in the MVP pattern, so the only difference remains between the Presenter and the ViewModel. More precisely, the difference is in the communication between the View and the ViewModel/Presenter.

As you can see in the diagrams, the only difference is the arrow from the Presenter to the View. What does that mean? It means that in MVP you have an instance of the Presenter in the View and an instance of the View in the Presenter, hence the double arrow in the diagram. In MVVM you only have an instance of the ViewModel in the View. But how does the ViewModel communicate with the View? How can the View know when the ViewModel has made changes to the Model? For that, we need the Observer pattern. We have observables (subjects) in the ViewModel, and the View subscribes to these observables. So, when an observable changes, the View is automatically informed about the change and can update its views.

For a practical implementation of this Observer pattern, we either have to get help from some external library like RxJava, or we can use the new Architecture Components from Android. We will use the latter in this example.

Refactoring

MVP classes
MVVM Classes

First, we can get rid of the MainPresenter and MainView interfaces. We can have just one new class, MainViewModel, that replaces the Presenter. We extend MainViewModel from androidx.lifecycle.ViewModel. This is the first class that we use from the Android lifecycle components; it helps us deal with the lifecycle of the view model. It survives configuration changes, so it is a nice place for storing UI-related data. Next, we add the quoteDao and quotesApi fields. We initialize them with setters instead of the constructor, because the creation of a ViewModel is a little bit different. We don’t have the MainView anymore, and we also don’t need the bindView() and dropView() methods.

Next, we will create the observable objects. These are the objects that we want to display in the MainActivity, wrapped with androidx.lifecycle.MutableLiveData or some other implementation of LiveData. This class helps us with the implementation of the observer pattern. We create the objects in the MainViewModel, and we observe them in the MainActivity. We want to display a list of Quote objects, so we will create a MutableLiveData<List<Quote>> object – MutableLiveData because we want to change the value of the object manually.
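Putting these pieces together, the skeleton of the MainViewModel could look roughly like this (a sketch; QuoteDao and QuotesApi are the types from Part 1):

import androidx.lifecycle.MutableLiveData;
import androidx.lifecycle.ViewModel;

import java.util.List;

public class MainViewModel extends ViewModel {

    // the observable that the MainActivity will subscribe to
    public MutableLiveData<List<Quote>> quotesLiveData = new MutableLiveData<>();

    private QuoteDao quoteDao;
    private QuotesApi quotesApi;

    // setters instead of constructor parameters, because the ViewModel
    // is instantiated by the ViewModelProvider
    public void setQuoteDao(QuoteDao quoteDao) {
        this.quoteDao = quoteDao;
    }

    public void setQuotesApi(QuotesApi quotesApi) {
        this.quotesApi = quotesApi;
    }
}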

getAllQuotes() will be very similar to the one in the Presenter, minus the interaction with the MainView. So, instead of:

if (view != null) {
  view.displayQuotes(response.body());
}

we will have:

quotesLiveData.setValue(response.body());

We will also change the implementation of the DatabaseQuotesAsyncTask: instead of passing in the MainView, we will create a new interface that hands us the quotes from the async task, and we will pass an implementation of this interface instead. In that implementation, we update quotesLiveData, same as above.
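The callback interface itself can be as small as this (a sketch; the name is hypothetical):

public interface QuotesCallback {
    void onQuotesFetched(List<Quote> quotes);
}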

In the MainActivity, we remove the implementation of the MainView. We won’t need to override the onDestroy() method anymore. We replace the MainPresenter with the MainViewModel, which we create as follows:

viewModel = new ViewModelProvider(this, new ViewModelProvider.NewInstanceFactory()).get(MainViewModel.class);
viewModel.setQuoteDao(quoteDao);
viewModel.setQuotesApi(quotesApi);

Then we observe the quotesLiveData observable and display the list:

viewModel.quotesLiveData.observe(this, quotes -> {
    QuotesAdapter adapter = new QuotesAdapter(quotes);
    rvQuotes.setAdapter(adapter);
});

And in the end, we call viewModel.getAllQuotes() to fetch the quotes.

And that’s it! Now we have an application that follows the MVVM design pattern. You can check the full code here.

Automate Processes with Camunda

Reading Time: 5 minutes

Overview

Camunda BPM is a lightweight, open-source platform for Business Process Management. It ships with tools for creating workflow and decision models, operating deployed models in production, and allowing users to execute the workflow tasks assigned to them. It is developed in Java and released as open-source software under the terms of the Apache License.

Modeling your first process

To show how Camunda works and what it looks like, I will use this simple process. Let us imagine that you want to introduce a review process on your Twitter account and have every tweet go through it.

One way to manage this is to make a web application from scratch for this scenario. But we can also model this process with Camunda Modeler and utilize Camunda for this workflow.

The following image shows one way to model this process as a standard BPMN model using the Camunda Modeler:

Business Process Model and Notation (BPMN) for the above process

In this diagram, the process starts when someone writes a new tweet. After that, we have a human task where someone has to review the tweet and decide its approval status. Then we have two possible options: if the tweet is approved, a service task is called that publishes it on Twitter; if it is rejected, we again call a service task, but this time we notify the user that the tweet was rejected.

I will go through all of these steps in more detail.

Starting the process

Camunda processes can be started programmatically using one of the supported SDKs like Java, or by using the Camunda Tasklist GUI that comes out of the box. In this case, I will use the Camunda Tasklist to start a new tweet.
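For completeness, this is roughly what starting a process instance from Java code looks like (a sketch; runtimeService is an injected org.camunda.bpm.engine.RuntimeService, and the process key "tweet-review" is hypothetical, it must match the id of your BPMN model):

Map<String, Object> variables = new HashMap<>();
variables.put("content", "My first reviewed tweet"); // process variable read later by the delegates
runtimeService.startProcessInstanceByKey("tweet-review", variables);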

Working on the human task

Human tasks are tasks that must be completed manually by some user. This can be something as simple as filling in a form, or something like actually shipping an item somewhere. They are visible in the Camunda Tasklist GUI, where users can assign a certain task to themselves and complete it.
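Besides the Tasklist GUI, a human task can also be queried and completed programmatically (a sketch; taskService is an injected org.camunda.bpm.engine.TaskService, and the task key and the approved variable are hypothetical names):

Task task = taskService.createTaskQuery()
        .taskDefinitionKey("reviewTweet") // hypothetical id of the human task in the BPMN model
        .singleResult();

Map<String, Object> variables = new HashMap<>();
variables.put("approved", true); // hypothetical variable evaluated by the conditional flow
taskService.complete(task.getId(), variables);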

In our Camunda BPMN model, the next step in the process is a human task, in which we want to review the tweet. The following image shows what human tasks look like by default in the Camunda Tasklist:

Automating service tasks

A service task is used to invoke some service; this can be some Java code or some asynchronous external worker.

After the tweet is reviewed, we have a ‘conditional flow’ in Camunda which, depending on whether the tweet was approved or not, decides how the flow should continue. In both cases, our flow continues with a service task.

In our case, we have two separate service tasks. One is called when a tweet is rejected and will send a notification, while the other one is used when the tweet is approved and will publish it on Twitter.

First, let us take a look at the service task for sending the rejection notification:

import lombok.extern.slf4j.Slf4j;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;
import org.springframework.stereotype.Service;

@Slf4j
@Service("emailAdapter")
public class RejectionNotificationDelegate implements JavaDelegate {

  @Override
  public void execute(DelegateExecution execution) throws Exception {
    String content = (String) execution.getVariable("content");
    String comments = (String) execution.getVariable("comments");

    log.info("Hi!\n\n"
           + "Unfortunately your tweet has been rejected.\n\n"
           + "Original content: {}\n\n"
           + "Comment: {}\n\n"
           + "Sorry, please try with better content the next time :-)", content, comments);
  }
}

In this code, we obtain process variables like the tweet content and the rejection comments, and we log them to the console. This logic can, of course, be extended to send actual emails; the important thing here is that in order to implement a Camunda service task we only need to implement the JavaDelegate interface and override the execute method.

In the next code snippet, we have the delegate for publishing the tweet:

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;
import twitter4j.Twitter;
import twitter4j.TwitterFactory;
import twitter4j.auth.AccessToken;

public class TweetContentDelegate implements JavaDelegate {

  @Override
  public void execute(DelegateExecution execution) throws Exception {
    String content = (String) execution.getVariable("content");

    AccessToken accessToken = new AccessToken("token", "secret");
    Twitter twitter = new TwitterFactory().getInstance();
    twitter.setOAuthConsumer("consumerKey", "consumerSecret");
    twitter.setOAuthAccessToken(accessToken);

    twitter.updateStatus(content);
  }
}

As in the previous code, we implement the JavaDelegate interface and override the execute method (note that twitter4j’s setOAuthConsumer takes both a consumer key and a consumer secret).

More Camunda examples can be found on their official GitHub repository: https://github.com/camunda/camunda-bpm-examples

Conclusion

In the diagram above, we have only seen one example of a process, but Camunda offers a lot more features for modelling business processes and a lot of out-of-the-box implementations that save a lot of time. Also, almost everything is customizable.

If your company has a lot of processes that can be modeled as a BPMN process or processes that require human intervention then Camunda can be the right tool for the job.

In my opinion, it’s definitely worth having a basic understanding of how Camunda works in order to be able to spot a use-case for this tool.

RECOMMENDATION SYSTEMS AND COLLABORATIVE ALGORITHM

Reading Time: 8 minutes

WHY DO WE NEED RECOMMENDATION SYSTEMS?

Keeping pace with the rapid growth of technology nowadays is a huge challenge for humanity. Software systems are creating a dynamic world which undoubtedly facilitates human life and keeps improving it in the digital age.

Many mobile and web systems offer easy usage and search on the internet. They are a necessary segment of education, health, employment, trade and, of course, fun. In such a fast and dynamic life, it is necessary to have more and more systems that help us find relevant information quickly, all in order to save us time. Recommendation systems are usually built on collaborative filtering algorithms or content-based methods.

RECOMMENDATION SYSTEMS IN REAL LIFE

In real life, people are overwhelmed with making a lot of decisions, no matter their importance, minor or major. Understanding human choices is a field studied by cognitive psychology.

One of the most important factors influencing the decisions an individual makes is past experience: the decisions a person made in the past affect the ones they will make in the future.

Human actions also depend on the experiences gained in interactions with other people. The opinion of others affects our lives without us being aware of it. Relationships with friends affect which neighbourhood we will live in, which place we will visit during our vacation, in which bar we will have a drink, etc.

A real life recommendation system

If one person has positive experiences with another one, then that person has gained trust and authority with the particular individual, who is more likely to follow their advice and to choose the decisions that person chose when they were in a similar situation.

RECOMMENDATION SYSTEMS IN THE DIGITAL WORLD

All large companies and complex systems use collaborative filtering. One example is the social network “Facebook” with the phrase “People you may know”. Facebook is a hugely complex system with a massive database, which is why they need to optimize the user data set so that they can provide precise recommendations. They also have collaborative systems for the news feed, as well as for the games, fun pages, groups and event sections.

Another well-known technology and media service provider which uses such collaborative systems is Netflix, with the phrase “Because you watched”. Netflix uses algorithms and machine learning, probably based on genres, watch history, ratings and the overall number of ratings from users that have a similar content taste to ours.

Then there is Amazon, the multinational technology company, which uses these algorithms to recommend products to its clients. Amazon uses the item-to-item approach for its recommendations.


Last but not least is the most successful business social network, LinkedIn, which uses phrases such as “People in the Information Technology & Services industry you may know”, “People you may know from Faculty XXX”, “Trending pages in your network”, “Online events for you” and a number of others.

I did some research on the collaborative filtering algorithm, so I will explain in depth how it works; please read the analysis in the sections below.

RECOMMENDATION SYSTEM AND COLLABORATIVE FILTERING

Based on the selected data processing algorithm, recommendation systems use different techniques:

  • Content-based systems – “people who liked this also like that”
  • Collaborative filtering – analyzing a huge amount of information
  • Hybrid recommendation systems – a combination of the two approaches above

COLLABORATIVE FILTERING – DETAILED ANALYSIS

On a coordinate system, we can show the popularity of products, as well as the number of orders.

The X-axis presents the product curve, which shows the popularity of a variety of products. The most popular products are on the left part, at the head of the tail, and the less popular ones are on the right. By popularity, I mean how many times the product has been ordered and viewed by others.

The Y-axis represents the number of orders and product views over a certain time interval.

By analyzing the curve, it is noticeable that the frequently ordered products are usually considered the most popular, and those that have not been ordered recently are omitted. That is what the collaborative filtering algorithm offers.

A measure of similarity expresses how similar two data objects are to each other. The measure of similarity in a dataset is usually described as a distance with dimensions which represent characteristics of the objects being compared. If the distance is small, the degree of similarity is large, and vice versa. Similarities are very subjective and highly dependent on the domain of the system.

The similarities are in the range of 0 to 1 [0, 1], with two boundary cases:

  • Similarity = 1 if X = Y
  • Similarity = 0 if X != Y

Collaborative filtering computes the similarity of the data we have with the help of several similarity measures, such as cosine similarity, Euclidean distance, Manhattan distance etc.
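As an illustration, a minimal cosine similarity implementation over two rating vectors could look like this (a sketch in Java; missing ratings are assumed to be stored as zeros):

public static double cosineSimilarity(double[] a, double[] b) {
    double dotProduct = 0;
    double normA = 0;
    double normB = 0;
    for (int i = 0; i < a.length; i++) {
        dotProduct += a[i] * b[i];   // sum of Ai * Bi
        normA += a[i] * a[i];        // sum of Ai^2
        normB += b[i] * b[i];        // sum of Bi^2
    }
    return dotProduct / (Math.sqrt(normA) * Math.sqrt(normB));
}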

COLLABORATIVE FILTERING – COSINE SIMILARITY

To begin with, we need a database and the characteristics of the items.

For a cosine similarity implementation, we need a similarity matrix derived from the user database. The rows of the matrix are the users and the columns are the products, so the matrix has the format users × products. Each field of the matrix holds the grade/rating that user i has given to product j.

Therefore, we can imagine that we have users from 1 to n {1, …, n} and grades/ratings on the products {1, …, 10}. Every row represents a different user, and every column represents one product. Every field of the matrix holds the product grade/rating that the user has entered. With this matrix generated, we can use the formula for finding the similarity between two users:

STEP 1:

Similarity(UserN, User1) = cos(θ) = (UserN · User1) / (‖UserN‖ ‖User1‖), i.e. the dot product of the two users’ rating vectors divided by the product of their magnitudes.

STEP 2:

In step 1, we can see that User N has the most similarities with User 2, but we can also see that some product ratings are missing from the data, so we should compute the priority of the products that User N has not rated. For that, we need the values of the users most similar to User N, and those are User 2 and User 4. The following formula should be used:

Priority (product) = User2 (value*similarity) + User4 (value*similarity).

Example:

Priority(product3) = 8 * 0.66 = 5.28

Priority(product4) = 8 * 0.71 = 5.68

Priority(product5) = 7 * 0.71 + 8 * 0.66 = 10.25

STEP 3:

If we want to recommend two products to User N, these will be product5 and product4.

CONCLUSION:

Similarity measures have their advantages and disadvantages, depending on the data set they are applied to. From the above analysis, we can conclude that if the data contains zero values and is sparsely distributed, we use cosine similarity, which handles the non-zero values well. Otherwise, if the data is densely distributed, mostly non-zero, and we care about differences between users/products rather than plain similarity, we use measures like Euclidean distance. Such systems are under constant pressure from the large volumes of data in their databases and will face even more challenges due to the daily increase in data volume. Therefore, there is a growing need for new technologies that will dramatically improve the scalability of recommendation systems.

QUESTION: WHAT WILL HAPPEN IN THE FUTURE?
ANSWER: ONLY TIME WILL TELL.

4 steps to start building apps with Flutter

Reading Time: 2 minutes

As a front-end developer, I was always curious about mobile apps and wanted to build one. Over the last years, I tested multiple frameworks, from Ionic to React Native, and to be honest, I was never satisfied. Until one day, by accident, I tried FLUTTER and this happened:

Flutter is Google’s UI toolkit for building beautiful, natively compiled applications for mobile, web, and desktop from a single codebase.

From Flutter website

Just reading this sentence blew my mind and I was in. After two months of playing around with the framework, I would say it’s the one that will take over in the next years for sure. Let’s jump in and see how to start with it.

1 – Download the Flutter SDK

Download the stable version and add it to your PATH environment variable. The download link is here.

2 – Run Flutter Doctor

flutter doctor

This command is the most important one, as it checks your environment and displays a report of the status of your Flutter installation. Do not forget to check the output carefully to know what is still missing.

3 – Start Coding

import 'package:flutter/material.dart';

void main() async {
  runApp(
    MaterialApp(
      debugShowCheckedModeBanner: false,
      home: Scaffold(
        body: MyApp(),
      ),
    ),
  );
}

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  @override
  Widget build(BuildContext context) {
    return Text(
      'Click me!',
      style: TextStyle(
        fontSize: 60.0,
        fontWeight: FontWeight.bold,
      ),
    );
  }
}

4 – Enjoy it

Multitenancy with Spring Boot

Reading Time: 7 minutes

Why should you consider implementing multitenancy in your project?

  • Cost: Multi-tenant architecture allows the sharing of resources, databases, and the application itself, thus the cost to run the system is fixed.
  • Maintenance: Users do not have to pay a considerable amount of fees to keep the software up to date. This reduces the overall cost of maintenance for each tenant.
  • Performance: Easier to assess and optimize speed, utilization, response time across the entire system, and even update the technology stack when needed.

In this blog we will implement multitenancy in our Spring Boot project.

Let’s create a simple Spring Boot project from start.spring.io, with only basic dependencies (Spring Web, Spring Data JPA, Spring Configuration Processor, MySQL Driver).

The good thing for implementing multitenancy is that we do not need additional dependencies.
We will split this example into two parts. In the first one, we will explain the idea/logic behind the approach, split it into 7 configuration steps and explain every step. In the second part, we will see how it’s implemented in real life and test the solution.

1. Let’s start with creating Tenant Storage. We will use it for keeping the tenant value while the request is executing.

public class TenantStorage {

    private static ThreadLocal<String> currentTenant = new ThreadLocal<>();

    public static void setCurrentTenant(String tenantId) {
        currentTenant.set(tenantId);
    }

    public static String getCurrentTenant() {
        return currentTenant.get();
    }

    public static void clear() {
        currentTenant.remove();
    }
}

2. Next, we will create the Tenant Interceptor. For every request, we will set the tenant value at the beginning and clear it at the end. As you can see in the Tenant Interceptor, I decided for this demo to fetch the value of the tenant from a request header (X-Tenant); this is up to you. Just keep an eye on data security when using this in production: maybe you want to fetch it from a cookie or some other header name.

@Component
public class TenantInterceptor implements WebRequestInterceptor {

    private static final String TENANT_HEADER = "X-Tenant";

    @Override
    public void preHandle(WebRequest request) {
        TenantStorage.setCurrentTenant(request.getHeader(TENANT_HEADER));
    }

    @Override
    public void postHandle(WebRequest webRequest, ModelMap modelMap) {
        TenantStorage.clear();
    }

    @Override
    public void afterCompletion(WebRequest webRequest, Exception e) {

    }
}

3. The next thing is to add the Tenant Interceptor to the interceptor registry. For that purpose, I will create a WebConfiguration class that implements WebMvcConfigurer.

@Configuration
public class WebConfiguration implements WebMvcConfigurer {

    private final TenantInterceptor tenantInterceptor;

    public WebConfiguration(TenantInterceptor tenantInterceptor) {
        this.tenantInterceptor = tenantInterceptor;
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addWebRequestInterceptor(tenantInterceptor);
    }
}

4. Now, let’s update the application.yml file with some properties for the database connections.

tenants:
  datasources:
    n47schema1:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema1?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
    n47schema2:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema2?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
spring:
  jpa:
    database-platform: org.hibernate.dialect.MySQL5Dialect

5. Following, we will collect the tenants’ values into a map (key = tenant name, value = data source) in DataSourceProperties.

@ConfigurationProperties(prefix = "tenants")
@Component
public class DataSourceProperties {

    private Map<Object, Object> dataSources = new LinkedHashMap<>();

    public Map<Object, Object> getDataSources() {
        return dataSources;
    }

    public void setDataSources(Map<String, Map<String, String>> datasources) {
        datasources.forEach((key, value) -> this.dataSources.put(key, convert(value)));
    }

    public DataSource convert(Map<String, String> source) {
        return DataSourceBuilder.create()
                .url(source.get("jdbcUrl"))
                .driverClassName(source.get("driverClassName"))
                .username(source.get("username"))
                .password(source.get("password"))
                .build();
    }
}

6. Afterwards, we should create the DataSource bean, and for that purpose, I will create DataSourceConfig.

@Configuration
public class DataSourceConfig {

    private final DataSourceProperties dataSourceProperties;

    public DataSourceConfig(DataSourceProperties dataSourceProperties) {
        this.dataSourceProperties = dataSourceProperties;
    }

    @Bean
    public DataSource dataSource() {
        TenantRoutingDataSource customDataSource = new TenantRoutingDataSource();
        customDataSource.setTargetDataSources(dataSourceProperties.getDataSources());
        return customDataSource;
    }
}

7. At last, we will extend the AbstractRoutingDataSource and implement our lookup key.

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return TenantStorage.getCurrentTenant();
    }

}

And we are done with the first part.

Let’s see how it looks in the real world:

For this example, we will use two schemas from the same database instance; we will create a user and fetch all users. Also, I will show you how you can implement Flyway and test the solution.

First, let’s configure our databases. In my local instance of MySQL server, we will create two schemas: n47schema1 and n47schema2.

The next step is to execute this CREATE statement on both schemas:

CREATE TABLE `users` (
	`id` INT(11) NOT NULL AUTO_INCREMENT,
	`name` VARCHAR(64) NOT NULL DEFAULT '' COLLATE 'utf8_general_ci',
	PRIMARY KEY (`id`)
);

Then, we will create two APIs, one for creating a user, and the other one to fetch all users.

@RestController
@RequestMapping("/users")
public class UserController {

    private final UserRepository userRepository;

    public UserController(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @PostMapping
    public UserDomain addUser(@RequestBody UserRequestBody userRequestBody) {
        UserDomain userDomain = new UserDomain(userRequestBody.getName());
        return userRepository.save(userDomain);
    }

    @GetMapping
    public List<UserDomain> getAll() {
        return userRepository.findAll();
    }
}

We also need to create UserDomain, UserRepository and UserRequestBody:

@Entity
@Table(name = "users")
public class UserDomain {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    public UserDomain() {
    }

    public UserDomain(String name) {
        this.name = name;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

public interface UserRepository extends JpaRepository<UserDomain, Long> {
}

public class UserRequestBody {
    private String name;

    public UserRequestBody() {
    }

    public UserRequestBody(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

And we are done with coding.

We can run our application and start making a request.

First, let’s create some users with a POST request to http://localhost:8080/users. The most important thing not to forget is that we need to provide header X-Tenant with the value n47schema1 or n47schema2.

We will create two users for tenant n47schema1: Antonie and John. Example:
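The raw request looks roughly like this (any REST client will do):

POST http://localhost:8080/users
X-Tenant: n47schema1
Content-Type: application/json

{
    "name": "Antonie"
}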

After that, we will change the X-Tenant header value to n47schema2 and create two users: William and Joseph.

You will notice that the ids returned in the response start from 1 again, just like for the first tenant, since each schema has its own auto-increment counter. Now let’s fetch the users through the API.

When you make a GET request to http://localhost:8080/users with header X-Tenant having value n47schema1 you will fetch the users from the n47schema1 schema, and when you make a request with a header value n47schema2 you will fetch from the n47schema2 schema.

You can also check the data in the database to be sure that it is stored correctly.

You can always set a fallback if the X-Tenant header is not provided or has a wrong value.
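For example, AbstractRoutingDataSource has a built-in default target that is used whenever the lookup key is null or unknown. A sketch, extending the dataSource() bean from step 6 (falling back to n47schema1 is an arbitrary choice for this demo):

@Bean
public DataSource dataSource() {
    TenantRoutingDataSource customDataSource = new TenantRoutingDataSource();
    customDataSource.setTargetDataSources(dataSourceProperties.getDataSources());
    // used whenever determineCurrentLookupKey() returns null or an unknown tenant
    customDataSource.setDefaultTargetDataSource(dataSourceProperties.getDataSources().get("n47schema1"));
    return customDataSource;
}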

As the last thing, I will show you how you can implement Flyway with multitenancy. First, you need to add Flyway as a dependency and disable it in the application.yml:

spring:
  flyway:
    enabled: false

Then add a PostConstruct method in the DataSourceConfig configuration that runs the migrations against every configured data source (by default, Flyway picks up the migration scripts from classpath:db/migration):

@PostConstruct
public void migrate() {
    for (Object dataSource : dataSourceProperties.getDataSources().values()) {
        DataSource source = (DataSource) dataSource;
        Flyway flyway = Flyway.configure().dataSource(source).load();
        flyway.migrate();
    }
}

And we are done! I hope this blog helps you understand what multitenancy is and how it’s implemented in a Spring Boot project.

Download the source code

The project is freely available on our Bitbucket repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.

https://bitbucket.org/n47/spring-boot-multitenancy/src/master/

Server-side rendering with Nuxt.js

Reading Time: 5 minutes

What is server-side rendering

By default, modern JS frameworks produce and manipulate the DOM in the browser as their output. However, it is possible to render the same codebase into HTML strings on the server, send them to the browser, and finally compile the static markup into a fully working application on the client side. A server-rendered application can also be considered isomorphic or universal, in the sense that the majority of your code runs on both the server and the client.

Trade-offs when using SSR

  • Development constraints
  • Build setup, deployment requirements
  • Server-side load

Advantages when using SSR

  • Better SEO
  • Faster time to content

Nuxt.js

Nuxt is a framework based on Vue.js, Node.js, Webpack and Babel. It is free and open-source, and we can use it to create various applications, from static landing pages to complex enterprise-ready web solutions.

It supports 3 modes of working:

  1. Server-Rendered (Universal SSR)
  2. Single Page Applications (SPA)
  3. Static-Generated (Pre Rendering)

Some of the features Nuxt provides:

  • Write Vue Files (*.vue)
  • Automatic Code Splitting
  • Server-Side Rendering
  • Powerful Routing System with Asynchronous Data
  • Static File Serving
  • ES2015+ Transpilation
  • Bundling and minifying of your JS & CSS
  • Managing the <head> element
  • Hot module replacement in Development
  • Pre-processor: Sass, Less, Stylus, etc.
  • HTTP/2 push headers ready
  • Extending with Modular architecture

Project structure

Here is the project structure that Nuxt provides out of the box. The pages, middleware, plugins and layouts directories are framework-specific, and we’ll briefly explain their purpose.

The Nuxt community has added great README.md files into each directory with links to the documentation.

The layouts directory defines all of the layouts that our application can use. This is a great place to add shared global components that are used across the application, like the header and footer, for example. By default, the template used for the .vue files in the pages directory is default.vue, which injects all of the page’s components, text, assets, and data.

Pages is the only required directory. All Vue components in this directory are automatically added to the vue-router based on their filenames and the directory structure. We can have dynamic routes by adding an underscore (_) to a directory or a .vue file.

/pages
---/categories
------/_category_id
------/products
---------/_product_id

// The structure above generates a router config that will provide 
// a route for /categories/1/products/3 for example

Middleware is a function that can be executed before rendering a page or layout. There is a variety of reasons why we may want to do so. Route guarding is a popular use case, where we could check the Vuex store for a valid login or validate some params (instead of using the validate method on the component itself). Another use for middleware can be generating dynamic breadcrumbs based on the route and params. These functions can be asynchronous, meaning nothing will be shown to the user until the middleware is resolved.
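A simple route-guard middleware could look roughly like this (a sketch; the authenticated flag in the Vuex store is a hypothetical example):

// middleware/auth.js
export default function ({ store, redirect }) {
  // redirect to the login page if there is no valid login in the store
  if (!store.state.authenticated) {
    return redirect('/login')
  }
}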

The plugins directory allows us to register Vue plugins before the application is created. This allows the plugin to be shared throughout our app on the Vue instance and to be accessible in any component. Most major plugins have a Nuxt version that can easily be registered to the Vue instance by following their docs. However, there will be circumstances when we have to develop a plugin ourselves or need to adapt an existing one.

Nuxt’s supercharged components

Nuxt’s page components have extra methods attached to them that we can use to provide additional functionality. The main ones we would use in a project are the asyncData and fetch methods. Both are very similar in concept: they run asynchronously before the component is generated, and they can be used to populate the data of a component and the store. They also enable the page to be fully rendered on the server before it is sent to the client, even when we have to wait for a database or API call.
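As a quick illustration, asyncData on a page component could look like this (a sketch; the endpoint is hypothetical and axios is assumed to be installed):

// inside the <script> block of pages/categories/_category_id/products/_product_id.vue
import axios from 'axios'

export default {
  async asyncData({ params }) {
    // params.product_id comes from the dynamic route segment
    const { data } = await axios.get(`https://api.example.com/products/${params.product_id}`)
    // the returned object is merged into the component's data before rendering
    return { product: data }
  }
}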

Summary

There’s a lot to cover with Nuxt and server-side rendering. This post aims to provide a general overview of the framework, of why server-side rendering is important, and of how we can scaffold our next web application using Nuxt.

Before using SSR though, we should ask whether we actually need it. Generally, it depends on how important time to content is for the application. If we are building an internal dashboard where an extra few hundred milliseconds on initial load isn’t an issue, SSR wouldn’t be needed. However, in applications where time to content is critical, SSR can help us achieve the best possible performance.

This is a starter Nuxt application which showcases examples of the features mentioned in the post; if you have any comments or feedback, let us know:
nuxt-ssr-example

Thanks for reading!

Lifecycle hooks in Vue.js

Reading Time: 3 minutes

In this post we’ll be doing an overview of lifecycle hooks in Vue.js.

Each Vue application first creates a Vue instance, with the Vue function:

var vm = new Vue({
  router,
  render: function (createElement) {
    return createElement(App)
  }
}).$mount('#app')

During its initialization, this Vue instance goes through several phases, and it exposes some properties and methods in each phase. This is an example of Vue using the template method behavioural design pattern.

The methods which run by default in this process of creating and updating the DOM are called lifecycle hooks, and using them properly allows easy access to a behind-the-scenes overview of how the library works.

Below we can see a simple diagram showcasing all methods in one instance.

We can see that we have 8 methods that we can split into 4 phases of an application’s lifecycle:

  • Creation or initialization hooks (beforeCreate, created)
  • Mounting or DOM insertion hooks (beforeMount, mounted)
  • Updating hooks (beforeUpdate, updated)
  • Destroying hooks (beforeDestroy, destroyed)

Creation hooks

beforeCreate is the first hook that gets called in a Vue component. It has no access to the component’s reactive data and events, as they haven’t been initialized yet. It’s good to use this hook for non-reactive data.
The created hook has access to the component’s events and reactive data and state. The DOM and $el properties are still not available. This hook is usually used to perform API calls and store the response.
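For example, a typical created hook that performs an API call could look like this (a sketch with a hypothetical endpoint):

created() {
  // reactive data is available here, but the DOM is not mounted yet
  fetch('https://api.example.com/items')
    .then(response => response.json())
    .then(items => { this.items = items })
}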

Mounting hooks

The next lifecycle hook to be called is beforeMount, and it happens right before the component is mounted on the DOM. Here is our last opportunity to perform operations before the DOM gets rendered.
mounted is called right after, and now the DOM is available. This is a good place to add third-party integrations like charts, Google Maps etc., which interact directly with the DOM.

mounted() {
  console.log('This is the DOM instance', this.$el)
  this.$nextTick(function() {
    console.log('Child components have now been loaded')
  })
}

Updating hooks

beforeUpdate and updated are the two hooks that are called each time a reactive property is changed. The data in beforeUpdate holds the previous values of the property, while after updated runs, the instance has finished re-rendering.

beforeUpdate () {
  console.log('before update called')
},
updated () {
  console.log('update finished')
}

Destroying hooks

When beforeDestroy is called, we can still mutate the state and the instance is still functional. Here we can do some last-moment data mutation before the instance is destroyed.
When destroyed is called, the Vue instance does not exist anymore. All directives and event listeners have been removed, and child components have been destroyed.

beforeDestroy () {
  console.log('before destroy called')
},
destroyed () {
  console.log('destroyed')
}

I hope this provides a good overview of the lifecycle hooks in a Vue application. For more information, refer to the official docs here.

JavaScript loop and object iteration (optimization)

Reading Time: 3 minutes

Nowadays, it’s interesting how loops have become part of our life as developers; we use them at least once a day. Because of that, one day I decided to investigate and go deeper into JavaScript loops, where I found very interesting things, and if I didn’t share them with you, I would feel guilty.

Before you continue reading, I would strongly recommend that you read my previous blog, which I believe you will find very useful for creating a full picture of the loops. So, go on and read it.

Object properties iteration

Let’s first analyze object iteration and suppose that we have an object, something like:

var obj = {
    property1: 1,
    property2: 2,
    …
}

First, what comes to mind is to iterate over them with the standard for…in loop:

for (var prop in obj) {
    console.log(prop);
}

In this case, we are going to iterate through the object properties, but is it the correct way? The answer is yes and no, depending on your needs. Another way is to exclude all inherited properties, which in some cases we do not need; we can exclude them by using the JavaScript method hasOwnProperty(). You can find an explanation of the in operator and hasOwnProperty() in my previous blog.
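For example, the same iteration with inherited properties filtered out looks like this:

for (var prop in obj) {
    if (obj.hasOwnProperty(prop)) {
        console.log(prop);
    }
}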

Since we have learned some object optimizations/improvements/usage, the question now is: can we really do an optimization? The answer is yes. Before I show you how, let’s spend some time on the loops.

Loop iteration

To continue the previous example, I will keep explaining the loops with object iteration (of course, you can test it with a list of integers like in the speed test examples, or anything you want).

To accomplish that, we will need the JavaScript method Object.keys().

  • Object.keys() method returns an array of a given object but only the own properties of the object

Let’s write the standard for loop:

var keys = Object.keys(obj)
for (var i = 0, length = keys.length; i < length; i++) {
    console.log(obj[keys[i]]);
}

Now we have a solution where we decreased the iteration time by caching keys.length in a variable, so it is evaluated once (O(1)) instead of on every pass (O(N)), which is a big time saving when iterating over big arrays.
So, during development, if you are not limited (by some best practices you need to apply, etc.), you can add another optimization by using a while loop.

var i = keys.length;
while (i--) {
    console.log(obj[keys[i]]);
}

In this case, we do not declare a new loop variable, we do not execute extra comparison operations, and the while loop automatically stops when i-- evaluates to 0 (leaving i at -1).

Speed testing:

Since new browsers like Chrome are very fast and optimized, in order to see the biggest differences I would suggest executing the loops in IE, where you will be able to see a real speed difference between the loops.

var arr = new Array(10000);

Example speed test 1

console.time();
for (var i = 0; i < arr.length; i++) {
    // operations...
    var sum = i * i;
}
console.timeEnd();

Execution 1: 4.4ms
Execution 2: 5.5ms
Execution 3: 5ms
Execution 4: 4.6ms
Execution 5: 5ms

Example speed test 2

var i = arr.length;
console.time();
while (i--) {
    // operations...
    var sum = i * i;
}
console.timeEnd();

Execution 1: 3.7ms
Execution 2: 4.8ms
Execution 3: 3.9ms
Execution 4: 3.8ms
Execution 5: 4.2ms

Thank you for reading and I would appreciate any feedback.