Microservice architecture: Using Java thread locals and Tomcat/Spring capabilities for automated information propagation

Reading Time: 13 minutes

Inter-microservice communication has always brought questions and challenges to software architects. For example, when it comes to propagating certain information (via HTTP headers, for instance) through a whole chain of calls in the scope of one transaction, we want this to happen outside of the microservices' business logic. We do not want to tackle and work with these headers in the presentation or service layers of the application, especially if they are not important to the microservice for completing some business logic task. I would like to show you how you can automate this process using Java thread locals and Tomcat/Spring capabilities, illustrated on a simple microservice architecture.

Architecture overview


This is the sample architecture we will be looking at. We have a Zuul Proxy Server that will act as a gateway towards our two microservices: the Licensing Microservice and the Organization Microservice. Those three will be the main focus of this article. Let's say that a single License belongs to a single Organization and a single Organization can deal with multiple Licenses. Additionally, our services are registered to a Eureka Service Discovery and they pull their application config from a separate Configuration Server.

Simple enough, right? Now, what is the goal we want to achieve?

Let's say that we have some sort of HTTP headers related to authentication or tracking of the chain of calls the application goes through. These headers arrive at the proxy along with each request from the client-side and they need to be propagated towards each microservice participating in the action of completing the user's request. For simplicity's sake, let's introduce two made-up HTTP headers that we need to send: correlation-id and authentication-token. You may say: "Well, the Zuul proxy gateway will automatically propagate those to the corresponding microservices, if not stated otherwise in its config". And you are correct, because this is a gateway that has an out-of-the-box feature for achieving that. But what happens when we have inter-microservice communication, for example, between the Licensing Microservice and the Organization Microservice? The Licensing Microservice needs to make a call to the Organization Microservice in order to complete some task. The Organization Microservice needs to have the headers sent to it somehow. The "go-to, technical debt" solution would be to read these headers in the controller/presentation layer of our application, then pass them down to the business logic in the service layer, which in turn is going to pass them to our configured HTTP client, which in the end is going to send them to the Organization Microservice. Ugly, right? What if we have dozens of microservices and need to do this in each and every single one of them? Luckily, there is a much prettier solution that involves a neat Java feature: ThreadLocal.

Java thread locals

The Java ThreadLocal class provides us with thread-local variables. What does this mean? Simply put, it enables us to attach a context (all kinds of objects) to a certain thread, which can later be accessed no matter where we are in the application, as long as we access it from the thread that set it up initially. Let's look at an example:

public class Main {

    public static final ThreadLocal<String> threadLocalContext = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        threadLocalContext.set("Hello from parent thread!");
        Thread childThread = new Thread(() -> {
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints null
        });
        childThread.start();
        childThread.join(); // Waiting for the child thread to finish
        System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
    }
}

We have a single static final ThreadLocal<String> reference that we use for setting some information to the thread (in this case, the string “Hello from parent thread!” to the main thread). Accessing this variable via threadLocalContext.get() (no matter in which class we are, as long as we are on the same main thread) produces the expected string we have set previously. Accessing it in a child thread produces a null result. What if we set some context to the child thread as well:

threadLocalContext.set("Hello from parent thread!");
Thread childThread = new Thread(() -> {
    threadLocalContext.set("Hello from child thread!");
    System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from child thread!"
});
childThread.start();
childThread.join(); // Waiting for the child thread to finish
System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"

We can notice that the two threads have completely separate contexts. Even though they access the same threadLocalContext reference, in the background, the context is relative to the calling thread. What if we wanted the child thread to inherit its parent context:

public class Main {

    private static final ThreadLocal<String> threadLocalContext = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        threadLocalContext.set("Hello from parent thread!");
        Thread childThread = new Thread(() -> {
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
            threadLocalContext.set("Hello from child thread!");
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from child thread!"
        });
        childThread.start();
        childThread.join(); // Waiting for the child thread to finish
        System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
    }
}

We only changed the ThreadLocal to an InheritableThreadLocal in order to achieve that. We can notice that the first print inside the child thread does not render null anymore. The moment we set another context to the child thread, the two contexts become disconnected and the parent keeps its old one. Note that by using the InheritableThreadLocal, the reference to the parent context gets copied to the child, meaning: this is not a deep copy, but two references pointing to the same object (in this case, the string "Hello from parent thread!"). If, for example, we used InheritableThreadLocal<SomeCustomObject> and directly modified some of the properties of the object inside the child thread (threadLocalContext.get().setSomeProperty("some value")), then this would also be reflected in the parent thread and vice versa. If we want to disconnect the two contexts completely, we just call .set(new SomeCustomObject()) on one of the threads, which will point that thread's local reference to the newly created object.
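
To make the shared-reference behaviour concrete, here is a small sketch (the SomeCustomObject class is only for illustration and not part of the sample project):

public class Main {

    static class SomeCustomObject {
        private String someProperty;

        String getSomeProperty() { return someProperty; }
        void setSomeProperty(String someProperty) { this.someProperty = someProperty; }
    }

    private static final ThreadLocal<SomeCustomObject> threadLocalContext = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        SomeCustomObject parentObject = new SomeCustomObject();
        parentObject.setSomeProperty("set by parent");
        threadLocalContext.set(parentObject);

        Thread childThread = new Thread(() -> {
            // The child inherits the same object reference, not a copy
            threadLocalContext.get().setSomeProperty("set by child");
        });
        childThread.start();
        childThread.join();

        // Prints "set by child": the mutation inside the child is visible to the parent as well
        System.out.println(threadLocalContext.get().getSomeProperty());
    }
}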

Now, you may be wondering: "What does this have to do with automatically propagating headers to a microservice?". Well, by using servlet containers such as Tomcat (which Spring Boot embeds by default), we handle each new HTTP request (whether we like/know it or not :-)) in a separate thread. The servlet container picks an idle thread from its dedicated thread pool each time a new call is made. This thread is then used by Spring Boot throughout the processing of the request and the return of the response. Now, it is only a matter of setting up Spring filters and HTTP client interceptors that will set and get the thread-local context containing the HTTP headers.

Solution

First off, let’s create a simple POJO class that is going to contain both of the headers that need propagating:

@Getter
@Setter
@ToString
@AllArgsConstructor
@NoArgsConstructor
public class RequestHeadersContext {

    public static final String CORRELATION_ID = "correlation-id";
    public static final String AUTHENTICATION_TOKEN = "authentication-token";

    private String correlationId;
    private String authenticationToken;
}

Next, we will create a utility class for setting and retrieving the thread-local context:

public final class RequestHeadersContextHolder {

    private static final ThreadLocal<RequestHeadersContext> requestHeaderContext = new ThreadLocal<>();

    public static void clearContext() {
        requestHeaderContext.remove();
    }

    public static RequestHeadersContext getContext() {
        RequestHeadersContext context = requestHeaderContext.get();
        if (context == null) {
            context = createEmptyContext();
            requestHeaderContext.set(context);
        }
        return context;
    }

    public static void setContext(RequestHeadersContext context) {
        Assert.notNull(context, "Only not-null RequestHeadersContext instances are permitted");
        requestHeaderContext.set(context);
    }

    public static RequestHeadersContext createEmptyContext() {
        return new RequestHeadersContext();
    }
}

The idea is to have a Spring filter, that is going to read the HTTP headers from the incoming request and place them in the RequestHeadersContextHolder:

@Configuration
public class RequestHeadersServiceConfiguration {

    @Bean
    public Filter getFilter() {
        return new RequestHeadersContextFilter();
    }

    private static class RequestHeadersContextFilter implements Filter {

        @Override
        public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
            HttpServletRequest httpServletRequest = (HttpServletRequest) servletRequest;
            RequestHeadersContext context = new RequestHeadersContext(
                    httpServletRequest.getHeader(RequestHeadersContext.CORRELATION_ID),
                    httpServletRequest.getHeader(RequestHeadersContext.AUTHENTICATION_TOKEN)
            );
            RequestHeadersContextHolder.setContext(context);
            filterChain.doFilter(servletRequest, servletResponse);
        }
    }
}

We created a RequestHeadersServiceConfiguration class which, at the moment, has a single Spring filter bean defined. This filter is going to read the needed headers from the incoming request and set them in the RequestHeadersContextHolder (we will need to propagate those later when we make an outgoing request to another microservice). Afterwards, it will resume the processing of the request and will give control to the other filters that might be present in the filter chain. Keep in mind that, all the while, this code executes within the boundaries of the dedicated Tomcat thread, which the container had assigned to us.

Next, we need to define an HTTP client interceptor which we are going to link to a RestTemplate client, which in turn is going to execute the interceptor’s code each time before it makes a request to an outer microservice. We can add this new RestTemplate bean inside the same configuration file:

@Configuration
public class RequestHeadersServiceConfiguration {
    
    // .....

    @LoadBalanced
    @Bean
    public RestTemplate getRestTemplate() {
        RestTemplate restTemplate = new RestTemplate();
        List<ClientHttpRequestInterceptor> interceptors = restTemplate.getInterceptors();
        interceptors.add(new RequestHeadersContextInterceptor());
        return restTemplate;
    }

    private static class RequestHeadersContextInterceptor implements ClientHttpRequestInterceptor {

        @Override
        @NonNull
        public ClientHttpResponse intercept(@NonNull HttpRequest httpRequest,
                                            @NonNull byte[] body,
                                            @NonNull ClientHttpRequestExecution clientHttpRequestExecution) throws IOException {
            RequestHeadersContext context = RequestHeadersContextHolder.getContext();
            HttpHeaders headers = httpRequest.getHeaders();
            headers.add(RequestHeadersContext.CORRELATION_ID, context.getCorrelationId());
            headers.add(RequestHeadersContext.AUTHENTICATION_TOKEN, context.getAuthenticationToken());
            return clientHttpRequestExecution.execute(httpRequest, body);
        }
    }
}

As you might have guessed, the interceptor reads the header values from the thread-local context and sets them up for the outgoing request. The RestTemplate just adds this interceptor to the list of its already existing ones.

A good thing to have is to eventually clear/remove the thread-local variables from the thread. When we have an embedded Tomcat container, missing this point will not pose a problem, since the Tomcat container dies along with the Spring application. This means that all of the threads will be destroyed altogether and the thread-local memory released. However, if we happen to have a separate servlet container and we deploy our app as a .war instead of a standalone .jar, not clearing the context might introduce memory leaks. Imagine having multiple applications on our standalone servlet container and each of them messing around with thread locals. The container shares its threads with all of the applications. When one of the applications is shut down, the container continues to run and the threads it lent to the application will not cease to exist. Hence, the thread-local variables will not be garbage collected, since there are still references to them. That is why we are going to define and add an interceptor to the Spring interceptor registry, which will clear the context after a request finishes and the thread can be assigned to other tasks:

@Configuration
public class WebMvcInterceptorsConfiguration implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new RequestHeadersContextClearInterceptor()).addPathPatterns("/**");
    }

    private static class RequestHeadersContextClearInterceptor implements HandlerInterceptor {

        @Override
        public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception exception) {
            RequestHeadersContextHolder.clearContext();
        }
    }
}

All we need to do now is wire these configurations into our microservices. We can create a separate library extracting the config (and maybe upload it to an online repository, such as Maven Central, or our own Nexus) so that we do not need to copy-paste all of the code into each of our microservices. Whatever the case, it is good to make this library easy to use. That is why we are going to create a custom annotation for enabling it:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Import({RequestHeadersServiceConfiguration.class, WebMvcInterceptorsConfiguration.class})
public @interface EnableRequestHeadersService {
}

Usage

Let’s see how we can leverage and use this library from inside a microservice. Only a couple of things are needed.

First, we need to annotate our application with the @EnableRequestHeadersService:

@SpringBootApplication
@EnableRequestHeadersService
public class LicensingServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(LicensingServiceApplication.class, args);
    }
}

Second, we need to inject the already defined RestTemplate in our microservice and use it as given:

@Component
public class OrganizationRestTemplateClient {

    private final RestTemplate restTemplate;

    public OrganizationRestTemplateClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public Organization getOrganization(String organizationId) {
        ResponseEntity<Organization> restExchange = restTemplate.exchange(
                "http://organizationservice/v1/organizations/{organizationId}",
                HttpMethod.GET,
                null,
                Organization.class,
                organizationId
        );
        return restExchange.getBody();
    }
}

We can notice that the getOrganization(String organizationId) method does not handle any HTTP headers whatsoever. It just passes the URL and the HTTP method and lets the imported configuration do its magic. As simple as that! We can now call the getOrganization method wherever we like, without having any sort of knowledge about the headers that are being sent in the background. If we have the need to read them somewhere in our code, or even change them, then we can use the RequestHeadersContextHolder.getContext()/setContext() static methods wherever we like in our microservice, without the need to parse them from the request object.
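
For illustration, reading or overriding the propagated values from the service layer could look like this (the LicenseService class and method are hypothetical, not part of the sample project):

@Service
public class LicenseService {

    public void doSomeBusinessLogic() {
        // Read the propagated headers without ever touching the HttpServletRequest
        RequestHeadersContext context = RequestHeadersContextHolder.getContext();
        String correlationId = context.getCorrelationId();
        // ... use correlationId in the business logic if needed ...

        // Override a value for all outgoing calls made later on this same thread
        context.setAuthenticationToken("some-new-token");
    }
}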

Feign HTTP Client

If we want to leverage a more declarative type of coding we can always use the Feign HTTP Client. There are ways to configure interceptors here as well, so, using the RestTemplate is not strictly required. We can add the new interceptor configuration to the already existing RequestHeadersServiceConfiguration class:

@Configuration
public class RequestHeadersServiceConfiguration {

    // .....

    @Bean
    public RequestInterceptor getFeignRequestInterceptor() {
        return new RequestHeadersContextFeignInterceptor();
    }

    private static class RequestHeadersContextFeignInterceptor implements RequestInterceptor {

        @Override
        public void apply(RequestTemplate requestTemplate) {
            RequestHeadersContext context = RequestHeadersContextHolder.getContext();
            requestTemplate.header(RequestHeadersContext.CORRELATION_ID, context.getCorrelationId());
            requestTemplate.header(RequestHeadersContext.AUTHENTICATION_TOKEN, context.getAuthenticationToken());
        }
    }
}

The new bean we created is going to automatically be wired as a new Feign interceptor for our client.

Next, in our microservice, we can annotate our application class with @EnableFeignClients and then create our Feign client:

@FeignClient("organizationservice")
public interface OrganizationFeignClient {

    @GetMapping(value = "/v1/organizations/{organizationId}")
    Organization getOrganization(@PathVariable("organizationId") String organizationId);
}

All that we need to do now is inject our new client anywhere in our services and use it from there. In comparison to the RestTemplate, this is a more concise way of making HTTP calls.
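
For example, injecting the Feign client into a service could look something like this (the LicenseEnrichmentService class is illustrative):

@Service
public class LicenseEnrichmentService {

    private final OrganizationFeignClient organizationFeignClient;

    public LicenseEnrichmentService(OrganizationFeignClient organizationFeignClient) {
        this.organizationFeignClient = organizationFeignClient;
    }

    public Organization findOrganization(String organizationId) {
        // The Feign request interceptor adds the propagated headers behind the scenes
        return organizationFeignClient.getOrganization(organizationId);
    }
}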

Asynchronous HTTP requests

What if we do not want to wait for the request to the Organization Microservice to finish and want to execute it asynchronously and concurrently (using the @EnableAsync and @Async annotations from Spring, for example). How are we going to access the headers that need to be propagated in this case? You might have guessed it: by using InheritableThreadLocal instead of ThreadLocal. As mentioned earlier above, the child threads we create separately (aside from the Tomcat ones which will be the parents) can inherit their parent’s context. That way we can send header-populated requests in an asynchronous manner. There is no need to clear the context for the child threads (side note: clearing it will not affect the parent thread, and clearing the parent thread will not affect the child thread; it will only set the current thread’s local context reference to null), since these will be created from a separate thread pool that has nothing to do with the container’s one. The child threads’ memory will be cleared after execution or after the Spring application exits because eventually, they die off.
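
As a rough sketch, assuming the RequestHeadersContextHolder is switched to use an InheritableThreadLocal and @EnableAsync is present on a configuration class, an asynchronous call could look like this (the class and method names are illustrative):

@Component
public class AsyncOrganizationClient {

    private final OrganizationRestTemplateClient organizationClient;

    public AsyncOrganizationClient(OrganizationRestTemplateClient organizationClient) {
        this.organizationClient = organizationClient;
    }

    @Async
    public CompletableFuture<Organization> getOrganizationAsync(String organizationId) {
        // Runs on a thread from Spring's async executor; with an InheritableThreadLocal-backed
        // context holder, the headers set on the request thread are visible here and the
        // RestTemplate interceptor propagates them as usual
        return CompletableFuture.completedFuture(organizationClient.getOrganization(organizationId));
    }
}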

Summary

I hope you will find this neat little trick useful while refactoring your microservices. A lot of Spring's functionality is actually based on thread locals. Looking into its source code, you will find many concepts similar to the ones mentioned above. Spring Security is one example, Zuul Proxy is another.
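
For instance, Spring Security's SecurityContextHolder is backed by a thread-local storage strategy by default, which is why the current principal can be read from anywhere on the request thread:

// SecurityContextHolder uses a ThreadLocal-based strategy by default
Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
String username = authentication.getName();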

The full code for this article can be found here.

References

Spring Microservices In Action by John Carnell

Automated process with Bitbucket Pipelines for quick and easy creation of custom Docker images

Reading Time: 9 minutes

Since its first release back in 2013, Docker and its images have kept growing every day. More and more images are created and more containers are used, so we need something that helps us complete repetitive tasks quickly and easily. Every second we can save is a plus. For that purpose, I wanted to bring this article to you and save you some time, so you can focus on and work on new things. In this article, I will present a process that I find very useful when working with Docker: how we can easily speed up the creation of Docker images for our custom usage.

For this walkthrough we need several prerequisites: Docker installed on your machine, Google Artifact Registry, the appropriate Google Cloud accounts set up (authenticated and ready for use), and Bitbucket Pipelines. Following this article can cost money, so please first take a look at the pricing of these resources.

The process

First, we have to create the folder structure from which we will build and push the Docker images. Create a folder like Docker_Images and inside it create two subfolders: Base_Images and Extension_Images. In the Base_Images folder we will store the base layer images that will be used by the Extension Docker Images, so we can take a base image, add new features and get a new custom Docker image fast. With this approach, we can have several base images and always create new ones easily by adding new features on top of the base layers.

We will create one Base Docker Image and one Extension Docker Image. First, we have to create the Base Image. For that, create a new subdirectory in the Base_Images directory with an appropriate name; I will use linux_dind_openjdk11 for my base image. Once we have created the subdirectory, we have to create the Dockerfile. The Dockerfile for this base image will contain the following:

# you can use a different image here if you need a specific one
FROM docker:19.03-dind
USER root
# in ENV we add the packages that we want to be installed
ENV \
    RUNTIME_DEPS="tar unzip curl openjdk11 bash docker-compose"
# with RUN we add the package repo and install the requested packages
RUN \
    echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && \
    apk update && \
    apk add --no-cache $RUNTIME_DEPS

As you can see, the Dockerfile consists of several parts: FROM, USER, ENV and RUN, and each of them does a specific job inside the Dockerfile. A Dockerfile can have additional instructions, but in our case, these are the only ones we need. With this, we have created our base image. You can find more info about the Dockerfile structure on this link: https://docs.docker.com/engine/reference/builder/.

The next step is to build and push the image to the specific Google Artifact Registry where we will store the images, so that every member of our company or team can use them.

docker build -t us-central1-docker.pkg.dev/gcp-team-platform/docker-registry-name/linux_dind_openjdk11:1.0.0 .
docker push us-central1-docker.pkg.dev/gcp-team-platform/docker-registry-name/linux_dind_openjdk11:1.0.0

When these two commands execute successfully, our base image is pushed to the Google Artifact Registry and can be found there. In the Google Cloud console, it can be found by simply typing Artifact Registry in the search bar and selecting the registry, where we will see the image that we have just pushed.

After this step, we can continue to our new Extension Docker Image, using the base image as a starting point. Now we have to create a new subfolder in the extension_images folder with a similar pattern name, e.g. linux_dind_openjdk11_maven3_gradle7; this new folder also has to have a Dockerfile inside. This Docker image will add new features such as Maven and Gradle. The Dockerfile will look like below:

# the Base Image that we have created previously
FROM us-central1-docker.pkg.dev/gcp-team-platform/docker-registry-name/linux_dind_openjdk11:1.0.0
USER root
# add the new packages on top of the base image
ENV \
    RUNTIME_DEPS="maven gradle"
RUN \
    echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && \
    apk update && \
    apk add --no-cache $RUNTIME_DEPS

When we build and push this Dockerfile we will have the previous (Base Image) and the new packages in our Extension Image, which means Maven and Gradle will be installed together with openjdk11, DinD (Docker in Docker) and Linux.

This process can be repeated for every Docker image variation we need: the process is simply creating Base Images in the appropriate folder and upgrading them in the Extension_Images folder, where we create new Docker images using the base layers. But as you can see, this is a repetitive task, and we can automate parts of it.

For the automation part, I use Bitbucket Pipelines, which gives us an easy, fast and secure way of performing these actions. You can read more here: https://bitbucket.org/product/features/pipelines.

Bitbucket Pipelines

Bitbucket Pipelines is a CI/CD tool that also works with Docker: every build we run is executed inside a Docker container.

In our example we need one Bitbucket repo where we will store our files, so please create one and push the files there. For the pipeline to work we also need a bitbucket-pipelines.yml file created in the root of our working directory. The name must be exactly bitbucket-pipelines.yml, so that the pipeline can recognize the file. Inside this file we have the option to automate the steps needed to build and push the images.

The Bitbucket pipeline file has several parts we have to take care of. The first one is the image at the top of the document. It pulls the Google Cloud SDK image, so we can quickly execute gcloud commands.

image: gcr.io/google.com/cloudsdktool/cloud-sdk:latest

The next part is the Git/Bitbucket clone section, where we tell Bitbucket to do a full clone so we get the complete history and file structure of the repository.

clone:
  depth: full

Then comes the definitions part, where we define the script that will automate the steps:

definitions:
  scripts:
    - script: &buildDockerImage
      - echo $SERVICE_ACCOUNT_KEY | base64 -d > key.json
      - gcloud auth activate-service-account $SERVICE_ACCOUNT_EMAIL --key-file=key.json
      - gcloud auth configure-docker $DOCKER_REGISTRY_LOCATION --quiet
      - chmod +rx build_push_docker_images_script.sh
      - ./build_push_docker_images_script.sh

Here we can see the name of the script. The next command decodes the Service Account Key; as I mentioned, we need a service account already created for this task, and here we provide its key from Google. The next two commands, starting with gcloud, connect us to the GCP platform: first we authenticate the service account, and then we configure the connection and authentication with the Docker repository on Google. Our custom script then takes care of entering the extension_images folder, where we have to build and push our new Docker images.

The last two commands give execute permission to, and run, our custom script build_push_docker_images_script.sh. This script goes into every folder of the extension_images directory, builds the new images with their name and tag, and pushes them to the Artifact Registry if everything is correct.

The Bash script is a simple one: it loops through every directory and executes several commands, see the code below:

#!/bin/sh
# Resolve the extension-images directory relative to this script's location
extension_path="$(dirname "$(realpath "$0")")/extension-images"
# Loop through every extension image folder, build the image and push it to the registry
for dir in "$extension_path"/*; do
  if [ -d "$dir" ]; then
    cd "$dir"
    folder_name=$(basename "$dir")
    docker build -t "$DOCKER_REGISTRY/$folder_name:1.0.0" .
    docker push "$DOCKER_REGISTRY/$folder_name:1.0.0"
  fi
done

With the for loop, we go through every folder in the extension_images directory, enter each of them and execute the docker build and docker push commands.

The last part of the Bitbucket pipeline file is the section where we define how the pipeline is triggered. There are several different ways of triggering a pipeline; one of them is using branches, so a push to a specific branch triggers the pipeline. In our case that branch is master.

pipelines:
  branches:
    master:
      - step:
          name: Build and Deploy Docker Images
          deployment: Dev
          script: *buildDockerImage
          services:
            - docker

In this code you can see that we have a step again; in Bitbucket Pipelines every step runs in its own Docker container. We give the step a name so it is easy to recognize what it does, the deployment (Dev) is the environment in which this pipeline will execute our script, and the services part spins up a separate docker service container needed for the build. In our case, we have chosen docker.

You can find more about Bitbucket Pipeline Triggers on this link https://support.atlassian.com/bitbucket-cloud/docs/pipeline-triggers/.

Summary

In short, here you can find the steps to automate the creation of Docker images, using Bitbucket Pipelines as a fast way to get Docker images ready for use.

  • First, the manual creation of the Base Docker Image: how to create the folder structure and one base image that can later be reused as many times as we need.
  • The Extension Docker Image: reusing the Base Image to create a new custom image, and how to use this concept for fast image creation.
  • Bitbucket Pipelines: a tool that gives us different ways of running fast and reliable pipelines, new features and automated triggers, and a great option for this kind of task.
  • Shell scripting for the automated execution of repetitive commands.

I hope this helped you understand Docker image creation, the Base and Extension Image concept explained in short here, and the automation with Bitbucket Pipelines.

Note!!!

Please keep in mind that some variables in this article are specific to my setup, so if you are running this yourself, be aware that you have to change them to the correct values for everything to work on your end.

Multi Instance Process in Camunda

Reading Time: 10 minutes

As a developer, there is a 90% chance that the client or the business will come to you one day and say "look we are doing this the old way, but we want software now for this". And there is a big chance that they will have some procedure with a lot of if-else scenarios. There also might be an option for a part of the procedure to be repeated for every customer that they have, which means you will have to execute the same logic multiple times. If that happens, don't be scared; Camunda is here to help, and I hope that this blog will help you too. We are going to dive into Camunda processes and how we can handle a part of them being repeated multiple times. In order to follow along, some basic knowledge of Camunda, Java and Spring Boot can be of help.

Let’s look at the following scenario:
We have a bank and in it, there is one guy responsible for the whole process of giving loans to people. He is sitting there and a guy named John comes in wanting to get a loan. So the bank guy first gives an application to John. Once John is done with the application and returns it to the bank guy, the bank guy requests some documents from John's place of work so they can be sure that he is employed full-time. Once that is done, the bank guy needs to send all of this to corporate so that they can make the final decision.

If we make a diagram of it, it will look something like this:

This is great and everything if you have one customer, but if you have around 100 customers per day the bank guy will have no clue which guy gave him the work information or filled the application.

So let’s make software for this guy and make his life easier, let’s open our Intellij and write some code!

I have already created the project (you can download it from here) and will go through some of the more important things.

First, let’s take a look at the pom file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.6.1</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>north.com</groupId>
    <artifactId>multi-process-instance</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>multi-process-instance</name>
    <description>multi-process-instance</description>
    <properties>
        <java.version>11</java.version>
    </properties>
    <dependencies>
        <!-- Spring -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>

        <!-- Camunda -->
        <dependency>
            <groupId>org.camunda.bpm.springboot</groupId>
            <artifactId>camunda-bpm-spring-boot-starter-rest</artifactId>
            <version>7.14.0</version>
        </dependency>
        <dependency>
            <groupId>org.camunda.bpm.springboot</groupId>
            <artifactId>camunda-bpm-spring-boot-starter-webapp</artifactId>
            <version>7.14.0</version>
        </dependency>
        <dependency>
            <groupId>org.camunda.bpm</groupId>
            <artifactId>camunda-engine-plugin-spin</artifactId>
            <version>7.14.0</version>
        </dependency>
        <dependency>
            <groupId>org.camunda.spin</groupId>
            <artifactId>camunda-spin-dataformat-json-jackson</artifactId>
            <version>1.10.0</version>
        </dependency>

        <!-- Database -->
        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
            <scope>runtime</scope>
        </dependency>
        <!-- Lombok -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

I have added the Camunda dependencies because we are going to create Camunda processes, and an H2 database as well. Even though we are not going to write code that accesses the database or creates tables, Camunda needs a database to create its own tables.

Next, let’s look at the application.yml

spring:
  datasource:
    driver-class-name: org.h2.Driver
    url: jdbc:h2:mem:testdb
    username: sa
  jpa:
    database-platform: org.hibernate.dialect.H2Dialect
    hibernate.ddl-auto: update

camunda:
  bpm:
    authorization:
      enabled: false
    default-serialization-format: application/json
    admin-user:
      id: admin
    database:
      type: h2
    generic-properties.properties:
      telemetry-reporter-activate: false
    deployment-resource-pattern:
      - classpath*:processes/*.bpmn

Above we are setting some database properties and some Camunda properties. Mainly, we are enabling the admin user for Camunda so we can open its cockpit with the user admin, and we are specifying where the processes will be located and what kind of database Camunda will use.

Now, having said all that, let's create the whole scenario as a process in Camunda. The first thing you will need is the Camunda modeller, which you can download from here. Once you have the modeller up and running, go to File -> New File -> BPMN Diagram. What we need to build (or you can just open the existing BPMN diagram from the application) is something that will look like this:

As a first step, we will generate some credit loaners (we are doing this just to have some data to play with) and then set this collection of credit loaners as the value of a variable called creditLoanerList. This will be achieved by using a service task. If you click on the Generate Credit Loaners task, you will see that the implementation is set to Java class and that the Java class points to north.com.multiprocessinstance.camunda.task.GenerateCreditLoanersTask.java:

package north.com.multiprocessinstance.camunda.task;

import north.com.multiprocessinstance.entity.CreditLoaner;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

import java.util.Arrays;
import java.util.List;

public class GenerateCreditLoanersTask implements JavaDelegate {
    @Override
    public void execute(DelegateExecution delegateExecution) throws Exception {
        List<CreditLoaner> creditLoaners = Arrays.asList(new CreditLoaner("Sarah", "Cox"),
                new CreditLoaner("John", "Doe"),
                new CreditLoaner("Filip", "Trajkovski"));
        delegateExecution.setVariable("creditLoanerList", creditLoaners);
    }
}

If you click on the subprocess (it is the largest rectangle, the one with the three dashes) you will see the reason why we set the list of credit loaners as the value of the creditLoanerList variable:

In order to pass all the credit loaners and start a subprocess for each of them, we need to provide a collection of credit loaners. That is why we are setting the Collection field to creditLoanerList, and in order for every credit loaner to be accessible in its own subprocess we are setting an Element Variable that we will call creditLoaner. What will happen is that we will have this subprocess once for Filip, once for John and once for Sarah.
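
In the underlying BPMN XML, the modeller stores this as multi-instance loop characteristics on the subprocess, roughly like this (element ids and namespace prefixes depend on your diagram):

<bpmn:subProcess id="creditLoanerSubProcess">
  <bpmn:multiInstanceLoopCharacteristics
      camunda:collection="creditLoanerList"
      camunda:elementVariable="creditLoaner" />
  <!-- Fulfill Application, Get work recommendation, ... -->
</bpmn:subProcess>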

Let's look at the tasks in our subprocess. The first one, Fulfill Application, is a user task in which we want the credit loaner to fill out an application. For the next step, Get work recommendation, to be completed, the credit loaner needs to provide some documents proving that he is employed and has a regular salary (this is not completely implemented; we are describing a possible scenario and currently only form fields of type string are defined). In the third step, a service task will be executed, and in it the NotifyCorporate.java class will be run:

package north.com.multiprocessinstance.camunda.task;

import lombok.extern.slf4j.Slf4j;
import north.com.multiprocessinstance.entity.CreditLoaner;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

@Slf4j
public class NotifyCorporate implements JavaDelegate {
    @Override
    public void execute(DelegateExecution delegateExecution) throws Exception {
        final CreditLoaner creditLoaner = (CreditLoaner) delegateExecution.getVariable("creditLoaner");
        log.info("Notify corporate that person {} wants to get credit", creditLoaner.getFirstName());
        delegateExecution.setVariable("creditLoanerFirstName", creditLoaner.getFirstName());
        log.info("This is the user: {}", creditLoaner);
    }
}

Here we will notify corporate about the credit loaner. To get the information about the credit loaner, we use the variable that was previously set as an Element Variable, which is creditLoaner (look at the image of the subprocess details). One additional thing that we do here is set one more variable called creditLoanerFirstName; the reason for that will be revealed when we take a look at the next task: Wait for feedback from corporate.

In this Wait for feedback from corporate step, one option is that the bank guy gets a mail from corporate saying that the credit loaner is OK and can get his loan, so the bank guy completes this step and continues with the next one. For the other option, we are providing a boundary event where corporate will send a STOP message and the request of that credit loaner will be rejected. But in our scenario, we have three subprocesses, one for each credit loaner. When sending this STOP message, we need to somehow specify for which credit loaner it is sent. That's why in the previous step we set the creditLoanerFirstName variable (in real cases don't use the first name but something like an email or another unique value), and one last thing that we need to do is set an input parameter in the Wait for feedback from corporate task.

In the Camunda modeller, if you click on the task with the name Wait for feedback from corporate and go to the Input/Output tab, you should be able to see the input parameter, which is of type text and whose value is the variable that we set previously, called creditLoanerFirstName.

This input parameter provides us with an option to specify for which credit loaner we want to STOP the procedure and decline his loan request.

Now let's start our application and then start our bank process. Once you have started the application, you can open the Camunda cockpit on the following URL: http://localhost:8080/camunda/app/tasklist/default/. The username and password are admin and admin. Once you have entered the credentials, you should be able to start a process from the top right by clicking on Start Process, and the BankProcess will be offered (we previously needed to make sure that the BPMN file is saved under resources/processes in the application, since we defined in our application.yml that this is the place where we keep the processes).

Let’s take a look at the Camunda cockpit:

Camunda cockpit

Once you have started the process, you should be able to see the task list (if for some reason the tasks are not visible, just click on Add simple filter and they should be displayed). If you click on one of the tasks, the task details will be displayed and from there we are able to complete it or to open the process details:

Here we have a visual presentation of the step the process is currently at. Let's complete all three Fulfill Application tasks by claiming and completing them, and let's do the same for Get work recommendation. Now we are at the Wait for feedback from corporate step and we can test the STOP boundary event that we added.

There is already an exposed post endpoint in the ProcessController for which we need to provide the process instance id and a MessageEventRequest. In this MessageEventRequest we need to provide the message name, which in our scenario is STOP and a list of correlation keys. The correlation key is where we will specify the credit loaner whose loan request we want to decline. The key will be set to creditLoanerFirstName and let’s decline Sarah’s request (before executing it, make sure that there is a task: Wait for feedback from corporate for Sarah) so we will set the value to Sarah. What we need to do is execute a post request towards: localhost:8080/{processInstanceId}/messageEvent with the following request body:

{
    "messageName":"STOP",
    "correlationKeys":[{
        "key":"creditLoanerFirstName",
        "value":"Sarah"
    }]
}
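
Under the hood, an endpoint like this typically delegates to Camunda's RuntimeService message correlation API. A minimal sketch of how such a controller might look (the request classes and the exact correlation strategy are assumptions, not copied from the project):

@RestController
public class ProcessController {

    private final RuntimeService runtimeService;

    public ProcessController(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    @PostMapping("/{processInstanceId}/messageEvent")
    public void sendMessageEvent(@PathVariable String processInstanceId,
                                 @RequestBody MessageEventRequest request) {
        MessageCorrelationBuilder correlation = runtimeService
                .createMessageCorrelation(request.getMessageName())
                .processInstanceId(processInstanceId);
        // Narrow the correlation down to the subprocess instance of the given credit loaner
        request.getCorrelationKeys().forEach(correlationKey ->
                correlation.localVariableEquals(correlationKey.getKey(), correlationKey.getValue()));
        correlation.correlateAll();
    }
}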

As I mentioned previously, you can figure out the process instance id and monitor the whole flow of the tasks using the Camunda cockpit, available at the following URL: http://localhost:8080/camunda/app/tasklist/default/.

So we started a whole process, started three subprocesses, and finished a service task and a user task. I hope that you enjoyed it, and you can also find the code on the following link.

Async await in Swift explained

Reading Time: 5 minutes

Async await?

Async await is part of the new structured concurrency changes that were launched in Swift 5.5 during WWDC 2021. As we already know, concurrency in Swift means allowing multiple pieces of code to run at the same time, which is very important for the performance of our apps. With the new async await, we can define methods that perform work asynchronously.

Now that “Async await” is finally here, we can simplify our code with async methods and await statements and make our asynchronous code easier to read.

Async is?

Async means asynchronous and can be seen as a method attribute indicating that a method performs asynchronous work. As an example we can use the method below:

func getProducts() async throws -> [Product] {
    // ... perform here data request
}

The getProducts method is defined as async throwing, which means that it's performing a failable asynchronous job. The method will return an array of custom Product objects if everything goes well, or throw an error if something goes wrong.

How async can replace completion callbacks

Async methods replace the often seen completion callbacks. Completion callbacks (closures) were common in Swift to return from an asynchronous task, often combined with a Result type parameter. The above method would have been written as follows:

func getProducts(completion: (Result<[Product], Error>) -> Void) {
    // ... perform here data request
}

Creating a method with a completion closure is still possible in Swift, but it has a few disadvantages which are solved by using async instead:

  • The developer has to make sure to call the completion closure in each possible method exit. Not doing so will possibly result in an app waiting for a result infinitely.
  • Callbacks (closures) are harder to read. It's not as easy to reason about the order of execution as it is with structured concurrency.
  • Retain cycles have to be avoided by using weak references.
  • Implementers have to switch over the result to get the outcome; it's also not possible to use try catch statements.

These disadvantages are based on the closure version using the relatively new Result enum. It’s likely that a lot of projects still make use of completion callbacks without this enumeration:

func getProducts(completion: ([Product]?, Error?) -> Void) {
    // .. perform here data request
}

Defining a method like this makes it even harder to reason about the outcome on the caller's side. Here both the value and the error are optional, which requires us to unwrap them in any case. Unwrapping these optional results brings more code clutter and does not help readability.

How await works?

Await is the keyword used for calling async methods. Usually, we see them as best friends in Swift, as one will never go without the other. You can basically say:

Await is awaiting a callback from its buddy async

Even though this sounds childish, it's not a lie! Let's take a look at an example by calling our earlier defined async throwing getProducts method:

do {
    let products = try await getProducts()
    print("Got \(products.count) products.")
} catch {
    print("Getting products failed with error \(error)")
}

We can note that the above code example is performing an asynchronous task. Using the await keyword, we tell our program to await a result from the getProducts method and only continue after a result arrives. This could either be an array of products or an error if anything went wrong while fetching the products.

What is structured concurrency?

Structured concurrency with async-await method calls makes it easier to understand the order of execution. Methods are linearly executed, one by one, without going back and forth like you would with closures.

To understand this better, we will take a look at how we would call the above code example before structured concurrency arrived:

// 1. Call the method
getProducts { result in
    // 3. The asynchronous method returns
    switch result {
    case .success(let products):
        print("Got \(products.count) products.")
    case .failure(let error):
        print("Getting products failed with error \(error)")
    }
}
// 2. The calling method exits

As you can see, the calling method returns before the products are fetched. In case a result is received, we go back into our flow within the completion callback. This is an unstructured order of execution and could be hard to understand. This is especially true if we would perform another asynchronous method within our completion callback which would add another closure callback:

// 1. Call the method
getProducts { result in
    // 3. The asynchronous method return
    switch result {
    case .success(let products):
        print("Got \(products.count) products.")
        
        // 4. Call the placed method
        placedProducts(products) { result in
            // 6. Placed method returns
            switch result {
            case .success(let products):
                print("Decoded \(products) products.")
            case .failure(let error):
                print("Decoding products failed with error \(error)")
            }
        }
        // 5. Fetch products method returns
    case .failure(let error):
        print("Getting products failed with error \(error)")
    }
}
// 2. The calling method exits

Each completion callback (closure) adds another level of indentation, which makes it harder to follow the order of execution.

If we rewrite the above code using the async-await syntax, we get a more readable piece of code, which also best explains what structured concurrency does:

do {
    // 1. Call the method
    let products = try await getProducts()
    // 2. Fetch products method returns
    
    // 3. Call the placed method
    let placedProducts = try await placedProducts(products)
    // 4. Placed method returns
    
    print("Got \(products.count) products.")
} catch {
    print("Getting products failed with error \(error)")
}
// 5. The calling method exits

The order of execution is linear, easy to follow and easy to reason about. Asynchronous calls will be easier to understand while we’re still performing sometimes complex asynchronous tasks.

Async/Await in Javascript

Reading Time: 7 minutes

Flow control in JS is hard. First of all, I'm going to go through a quick review of promises, because they are the foundation of async + await.

Promises

Promises in JS represent something that is going to happen at some point in the future; this could be

  • access to a user’s webcam
  • ajax call
  • resizing an image
  • or something else

All of these take time, and with promises, we kick off the process and move along; we come back when we need to deal with the data.

Let’s say we wanted to do a few things:

  • Learn Javascript
  • Write Javascript
  • Learn Vue
  • Write Vue

Would it make sense to first finish learning JavaScript before you even start learning Vue? Would it make sense to wait until you have learned JavaScript before you start writing Vue? No, that doesn't make sense.

We want to start one thing and come back to it once it’s done, and deal with the result correspondingly!

Promises allowed us to start writing code like this:

// start both things, one after another
const jsLearnPromise = learnJs();
const vueLearnPromise = learnVue();

// then once each are done, deal with them
jsLearnPromise.then(js => {
  writeJs();
})

vueLearnPromise.then(([vuex, vueRouter])=>{
  writeVue(vuex, vueRouter);
})


// you can also wait until both are done
Promise
 .all([jsLearnPromise, vueLearnPromise])
 .then(([js, vue]) => {
   codeWith(js, vue);
});

Most of the new browser APIs are built on promises, so we have got fetch(), where you can fetch your data then convert it into JSON and then finally deal with the data.

// fetch
// fetch
fetch('http://learnprograming.org')
  .then(data => data.json())
  .then(vueFramework => learn(vueFramework));

We can use a library called AXIOS, which has really good built-in defaults, so we don't have to have that second .then() in the chain, like in the following example:

//AXIOS fetch
axios.get('http://learnprograming.org')
  .then(vueFramework => learn(vueFramework));

There are many, many more browser APIs: payment requests dealing with credit card data in the browser, getting user media and access to the webcam, web animations… All of these are being built on standard promises, and it's easy to make your own with promises as well. Here we have a function called sleep:

function sleep(amount) {
  return new Promise((resolve, reject)=> {
    if(amount <=300) {
      return reject('That is too fast, cool it down')
    }
    setTimeout(() => resolve(`Slept for ${amount}`), amount)
  });
}

It takes an amount, and the way the promise works is that you immediately return a promise, and then what you do inside of that promise is:

  • you either resolve it when things went well
    or
  • you reject it when things didn’t go well

In this case, after 500 milliseconds we're going to resolve it with some data, or if the amount is 300 milliseconds or less we're going to reject it because that's too fast. What that allows us to do is write our code and then chain .then().then().then() on it, by returning a new promise from each one.

sleep(500)
  .then((result)=> {
  console.log(result)
  return sleep(1000);
})
  .then((result)=> {
  console.log(result)
  return sleep(750);
})
  .then((result)=> {
  console.log(result)
  console.log('done');
})

So promises are really, really great, but what's the deal with .then()? It is still kind of callback-y, and any code that needs to come after the promise still needs to be in the final .then() callback; it can't just be top-level in your current function. This is where ASYNC + AWAIT comes in.

Async/Await

Async await is still based on promises, but it's a really nice syntax to work with.
JavaScript is almost entirely asynchronous/non-blocking.
Async await gives us synchronous-looking code without the downsides of actually writing synchronous code! So how does it work? The first thing we need to do is mark our function as async. You still keep your regular promise functions; nothing changes with your functions that return a promise. What we now do is create an async function by writing the word "async" in front of it:

async function go(){
//.....
}

Then, when you are inside an async function, you simply await things inside it. You can either just await the sleep function, which will wait until the promise resolves, or, if you care about what's coming back from that promise (maybe it's some data from an API), you can store the result in a variable.

async function go() {
  // just wait
  await sleep(1000);
  // or capture the returned value
  const response = await sleep(750);
  console.log(response);
}

Let's take a look at another example; in this case, I'm capturing the returned value, and this is another way you can write an async function.

const getDetails = async function() {
  const dave = await axios.get('https://api.github.com/users/dave');
  const john = await axios.get('https://api.github.com/users/john');
  const html = `
    <h1>${dave.data.name}</h1>
    <h1>${john.data.name}</h1>
  `;
}

Here I await axios.get(), and when that comes back I await the second axios call. That is kind of slow; we don't want to do that. What we can do instead is simply await Promise.all(): by passing the two promises to Promise.all(), we sort of make one big mega promise that we can await until both of them are finished.
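
Rewriting the getDetails example from above with Promise.all(), it could look something like this:

const getDetails = async function() {
  // Start both requests at the same time and wait for both to finish
  const [dave, john] = await Promise.all([
    axios.get('https://api.github.com/users/dave'),
    axios.get('https://api.github.com/users/john')
  ]);
  const html = `
    <h1>${dave.data.name}</h1>
    <h1>${john.data.name}</h1>
  `;
}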

That was great, but if you have seen any examples online, the error handling starts to ugly it up. So let's look at a couple of options that we can use for error handling.

Error handling

  • TRY/CATCH – this is probably what you have seen online: just wrap everything in a try/catch and you are going to be nice and safe. The way it looks is:
async function displayData() {
  try {
    const dave = await axios.get('https://api.github.com/users/dave');
    console.log(dave); // Work with Data
  } catch (err) {
    console.log(err); // Handle Error
  }
}

You have an async function, you give yourself a try, write all your code inside that TRY, and then if anything happens inside that TRY, you catch the error in your catch(err) and you deal with that accordingly.

  • HIGHER-ORDER FUNCTION where you can chain a .catch() on async functions. This is a little bit more complicated, so let’s walk through an example:
// Create a function without any error handling
async function displayData() {
    //do something that errors out
    const dave = await axios.get('https://nothing.com');
}

You’ve got a function displayData() that doesn’t care about error handling: it assumes that everything works correctly. Then it awaits something that maybe gives back a 404, and it is going to break because no data came back – this could be any error that axios might throw at you.

Now you create a higher-order function called handleError(fn) that takes the actual function as an argument, and from that you return a new function: basically the same function, but with a .catch() attached.

// make a function to handle that error
function handleError(fn) {
  return function(...params) {
    return fn(...params).catch(function(err) {
      // do something with the error!
      console.error('Error', err);                           
    });
  }
}

The same function in one line using ES6:

const handleError = fn => (...params) => fn(...params).catch(console.error);

And then you just pass your unsafe function displayData to handleError(displayData):

// Wrap it in a HOC
const safeDisplayData = handleError(displayData);
safeDisplayData();

  • HANDLE THE ERROR WHEN YOU CALL IT – sometimes you need to handle the error when you call the function, because it is a special case: “if there’s an error here, I need you to handle it in a different way”. It’s pretty simple: you write your async function loadData(), and when you call it, you just chain .catch() on the end and deal with the error:
async function loadData() {
  const dave = await axios.get('....');
}
loadData().catch(dealWithErrors);

Summary

In JavaScript, it is much cleaner to use async/await than promises with .then() and .catch(). Async/await is a great way to deal with asynchronous behaviour and an excellent option if you find yourself writing long, complicated waterfalls of .then() statements.

The practical guide – Part 4: Dependency injection with Hilt

Reading Time: 9 minutes

The practical guide – Part 1: Refactor android application to follow the MVP design pattern
The practical guide – Part 2: MVP -> MVVM
The practical guide – Part 3: Clean Architecture

Developing an application with clean architecture and design patterns is a path to success. But if we don’t know how to handle the dependencies that we created, this path is not going to be straightforward. It is important to understand what dependency injection is and how to handle dependencies properly, so we don’t end up with a mess. In the previous article, we created a DependencyProvider object where we provided the dependencies. Now, we will use the Hilt framework, which will provide the dependencies for us. But let’s start with the basics:

Dependency Injection

Before going into dependency injection, let’s define what a dependency is. When Class A uses some methods of Class B, we say that A depends on B. So we have the dependency A -> B. Now, imagine that A creates an instance of B itself: whenever we create A, we don’t need to supply B, because A automatically creates B for itself.

This is not good. Why? Mostly because we cannot “inject” or provide another implementation of B. Why do we need other implementations of B? Well, the most common use case is testing. If we want to test A, it is very helpful to be able to provide a mock or test implementation of B instead of the real one. So, what is the fix? The fix is very obvious: instead of letting A create B, we pass B through the constructor (or a method/parameter) of A. That means that whenever we want to create an instance of A, we must provide an instance of B first. So, we will have:
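Sketched in Kotlin (using the A and B names from the text; the method names are made up purely for illustration), the difference looks like this:

class B {
    fun doSomething() = println("doing something")
}

// Before: A creates B for itself, so we cannot swap B out
class TightlyCoupledA {
    private val b = B()
    fun doWork() = b.doSomething()
}

// After: B is passed in (injected) through the constructor,
// so whoever creates A decides which B it gets (real, mock, fake...)
class A(private val b: B) {
    fun doWork() = b.doSomething()
}

fun main() {
    val a = A(B()) // the caller supplies the dependency
    a.doWork()
}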

This little tweak that we just made has a fancy name: Inversion of Control, the general principle behind the other fancy term, Dependency Injection. For the difference between these two terms, you can check this SO answer.

Too many dependencies

In the example above, A has only one dependency, and B doesn’t have any. What if B has dependencies of its own, and those dependencies have their own dependencies, and so on? Then, whenever we want to create an instance of A, we have to create all of those dependencies. And what if we use A in many places? You see where I am going, right?

How to fix this? One thing you can think of is creating a class where you handle all the dependencies (like our DependencyProvider class). And that is OK, you can do it by yourself. But you can also use a framework that helps you.

Hilt

As you may suspect, Hilt is a library that helps us with handling the dependencies. It is a Google library made specifically for Android, built on top of Dagger 2. It is the most popular dependency injection library for Android development and is much easier to use than the more general Dagger 2 library.

Before going to the code, we have to check the architecture of the Hilt implementation, and the concept it uses. The three main concepts are:

  • Module – a class in which we provide the dependencies. Here we create methods that return the actual implementation of the dependency.
  • Component – an interface or an abstract class that connects dependencies from the Module and the class where we use those dependencies (In Dagger 2, we had to create these components, but Hilt creates most of them for us).
  • Scope – annotation which connects the lifecycle of the objects that we provide in the module, and the component’s lifecycle. (Hilt also has the most common scopes created for us)

There are a lot of other things that we have to learn, but we will do it with the implementation.

Implementing Hilt

First, we have to add the library to our project. In the project’s root build.gradle file, we have to add hilt-android-gradle-plugin:

buildscript {
    // ...
    dependencies {
        // ...
        classpath 'com.google.dagger:hilt-android-gradle-plugin:2.38.1'
    }
}

Then, we have to apply the Gradle plugin and add the dependencies:

plugins {
  id 'kotlin-kapt'
  id 'dagger.hilt.android.plugin'
}

dependencies {
    implementation "com.google.dagger:hilt-android:2.38.1"
    kapt "com.google.dagger:hilt-compiler:2.38.1"
}

Next, we’ll annotate our application class with @HiltAndroidApp. If you don’t have an application class, create one (Don’t forget to add the class in the AndroidManifest file).

@HiltAndroidApp
class QuotesApp: Application() {
}

Once we annotate our application class, we can provide dependencies to other Android classes by annotating them with @AndroidEntryPoint. Hilt supports these Android classes: Application (by using @HiltAndroidApp), ViewModel (by using @HiltViewModel), Activity, Fragment, View, Service and BroadcastReceiver. So, in our case, we will annotate our ViewModel with @HiltViewModel and move the getQuotesUseCase property in the constructor:

@HiltViewModel
class MainViewModel @Inject constructor(private val getQuotesUseCase: GetQuotesUseCase) : ViewModel() {
…
}

Next, we will annotate our MainActivity with @AndroidEntryPoint and we will remove every usage of DependencyProvider.

@AndroidEntryPoint
public class MainActivity extends AppCompatActivity {
    ...
}

Now, we told Hilt that it should provide us with an instance of GetQuotesUseCase, but it doesn’t know how to do that yet. Because GetQuotesUseCase is our class, we can use constructor injection to tell Hilt how to create GetQuotesUseCase instances.

class GetQuotesUseCase @Inject constructor(private val quotesRepository: QuotesRepository) {
    ...
}

We can do this for every class that we want to inject: QuotesRepositoryImplementation, LocalDataSourceImplementation and RemoteDataSourceImplementation. This is cool, but we still haven’t told Hilt how to provide the interfaces (QuotesRepository, LocalDataSource, …).

Hilt modules

For interfaces, or classes that we cannot constructor-inject (such as classes from an outside library), we have to create a Hilt module, where we tell Hilt how to provide instances of those types. In our case, we will create a few modules: RepositoryModule, DataSourceModule, NetworkModule and DatabaseModule, where we will provide all of the dependencies that cannot be constructor-injected.

@Module
@InstallIn(SingletonComponent::class)
abstract class RepositoryModule{
}

We have to annotate the class with @Module and we have to tell Hilt in which component this module will be installed. Hilt has already created the components that we can use, and every component determines where the dependencies provided by the module are available. We installed RepositoryModule in SingletonComponent, which means that the dependencies in this module are available application-wide. Here are the components that Hilt has predefined:

Hilt component – Injector for:

  • SingletonComponent – Application
  • ActivityRetainedComponent – N/A
  • ViewModelComponent – ViewModel
  • ActivityComponent – Activity
  • FragmentComponent – Fragment
  • ViewComponent – View
  • ViewWithFragmentComponent – View annotated with @WithFragmentBindings
  • ServiceComponent – Service

There are two ways to provide dependencies in a module: with @Binds and with @Provides. When using @Binds, the return type of the function tells Hilt which interface we are providing, and the parameter of the function specifies which implementation of that interface should be used.

@Module
@InstallIn(SingletonComponent::class)
abstract class RepositoryModule {
    @Binds
    abstract fun bindQuotesRepository(impl: QuotesRepositoryImplementation): QuotesRepository
}

@Module
@InstallIn(SingletonComponent::class)
abstract class DataSourceModule {
    @Binds
    abstract fun bindLocalDataSource(impl: LocalDataSourceImplementation): LocalDataSource

    @Binds
    abstract fun bindRemoteDataSource(impl: RemoteDataSourceImplementation): RemoteDataSource
}

The second way is with @Provides. A function annotated with @Provides supplies the following information for Hilt:

  • The function return type tells Hilt what type the function provides instances of.
  • The function parameters tell Hilt the dependencies of the corresponding type.
  • The function body tells Hilt how to provide an instance of the corresponding type. Hilt executes the function body every time it needs to provide an instance of that type.
@Module
@InstallIn(SingletonComponent::class)
class DatabaseModule {
    @Provides
    @Singleton
    fun provideQuotesDao(quoteDatabase: QuoteDatabase): QuoteDao {
        return quoteDatabase.quoteDao()
    }

    @Provides
    @Singleton
    fun provideQuoteDatabase(@ApplicationContext context: Context): QuoteDatabase {
        return Room.databaseBuilder(
            context,
            QuoteDatabase::class.java,
            "quotes_db"
        ).build()
    }
}

@Module
@InstallIn(SingletonComponent::class)
class NetworkModule {
    @Provides
    fun provideQuotesApi(): QuotesApi {
        return RetrofitClient.getRetrofit().create(QuotesApi::class.java)
    }
}

In provideQuotesDao() we are asking for a QuoteDatabase, and with that we can create our DAO instance. Because we cannot constructor-inject QuoteDatabase, we have to provide it too. For that, we need the application context, which we can ask for as a parameter annotated with the qualifier @ApplicationContext. Let’s see what qualifiers are:

Qualifiers

In some cases, we need multiple bindings for the same type. For instance, we might need two bindings for the Retrofit client: one with authentication and another without. In order to implement that, we need to create two qualifiers:

@Qualifier
@Retention(AnnotationRetention.BINARY)
annotation class AuthenticatedRetrofitClient

@Qualifier
@Retention(AnnotationRetention.BINARY)
annotation class NotAuthenticatedRetrofitClient

And then, we will have to annotate the binding where we bind/provide it, and where we inject (use) it. We’ll have to change our NetworkModule:

@Module
@InstallIn(SingletonComponent::class)
class NetworkModule {
    @Provides
    @Singleton
    fun provideQuotesApi(@NotAuthenticatedRetrofitClient retrofit: Retrofit): QuotesApi {
        return retrofit.create(QuotesApi::class.java)
    }

    @Provides
    @Singleton
    @NotAuthenticatedRetrofitClient
    fun provideNotAuthenticatedRetrofitClient(): Retrofit {
        return Retrofit.Builder()
            .baseUrl("https://programming-quotes-api.herokuapp.com/")
            .addConverterFactory(GsonConverterFactory.create())
            .build()
    }
}

We don’t use an authenticated Retrofit client in our case, but it is a nice example. There are some predefined qualifiers provided by Hilt: one example is the @ApplicationContext that we used, there is also @ActivityContext, and you can find more here.

One last thing that I want to mention is the Component Scopes. By default, all bindings in Hilt are unscoped. This means that each time your app requests the binding, Hilt creates a new instance of the needed type. Hilt allows us to scope a binding to a particular component. This means that the same instance of the binding will be used during the lifetime of the component. For every component that Hilt has predefined, it also has a scope for it. For instance, SingletonComponent has @Singleton scope, ActivityComponent has @ActivityScoped etc…
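For example, scoping the repository binding from earlier to the application lifetime is just a matter of adding the scope annotation that matches the component. A sketch, based on the RepositoryModule shown above:

@Module
@InstallIn(SingletonComponent::class)
abstract class RepositoryModule {

    @Binds
    @Singleton // one shared instance for the whole app, matching SingletonComponent
    abstract fun bindQuotesRepository(impl: QuotesRepositoryImplementation): QuotesRepository
}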

For our application, that’s it. We can get rid of the DependencyProvider file and we have successfully implemented Dagger Hilt. You can check out the code here.

Elevate @AssistedInject to new heights in your Android project

Reading Time: 4 minutes

Almost everybody has used a form of DI in their projects, and Android developers have used Dagger 2/Hilt at least once in their careers.

Throughout this experience, your life was simplified and a single @Inject saved you from writing boilerplate code. But this wasn’t always enough: sometimes you need to add something dynamically to one of your dependencies, and this is where @AssistedInject comes into play.

Assisted injection is a dependency injection (DI) pattern that is used to construct an object where some parameters may be provided by the DI framework and others must be passed in at creation time (a.k.a. “assisted”) by the developer (you).

The assisted injection uses a factory to provide your assisted dependency, the steps are as follows:

  1. Annotate your dependency with @AssistedInject
  2. Provide the dependencies that can be automatically wired by the DI library
  3. Annotate your dynamically added dependencies with @Assisted and provide them with a name if needed
  4. Create a factory for your dependency annotated with @AssistedFactory and a function that creates and returns your assisted dependency

To put the aforementioned steps into practice, in this blog post you’ll build a reusable one-shot SharedPreferences dependency.

In order to have our “OneTimePreference”, we create a common contract so that each dependency that implements it will behave as agreed.

interface OneTimePrefContract {
    val isOneTimeShown: Boolean
    fun setOneTimeShown()
    val oneTimePrefs: SharedPreferences
}

The real implementation comes in a form of an “assisted” dependency that implements the contract and is provided from a factory.

class OneTimePref @AssistedInject constructor(
    @ApplicationContext private val context: Context,
    @Assisted(PREFS_TAG_KEY) private val prefsTag: String,
    @Assisted(PREFS_BOOLEAN_KEY) private val prefsBooleanKey: String
) : OneTimePrefContract {
    
    private companion object {
        private const val PREFS_TAG_KEY = "prefsTag"
        private const val PREFS_BOOLEAN_KEY = "prefsBoolean"
    }
    
    @AssistedFactory
    interface OneTimePrefFactory {
        fun create(
            @Assisted(PREFS_TAG_KEY) prefsTag: String,
            @Assisted(PREFS_BOOLEAN_KEY) prefsBooleanKey: String
        ): OneTimePref
    }
    
    override val oneTimePrefs: SharedPreferences
        get() = context.getSharedPreferences(
            prefsTag,
            Context.MODE_PRIVATE
        )
    override val isOneTimeShown get() = oneTimePrefs.getBoolean(prefsBooleanKey, false)
    override fun setOneTimeShown() = oneTimePrefs.edit { putBoolean(prefsBooleanKey, true) }
}

As you can see, the factory exposes the same assisted parameters that are needed for the assisted injection to happen, while the other dependencies (such as the application context) are still provided from the outside by the DI framework.

You can now reuse it anywhere.

@AndroidEntryPoint
class WalkThroughFragment : Fragment (){
	
	@Inject
	lateinit var oneTimePrefFactory : OneTimePref.OneTimePrefFactory

	private val walkThroughPreferences : OneTimePref by lazy {
		oneTimePrefFactory.create("walkthrough-prefs", "walkthrough-isShown") // consider using constants, this is for demonstration purposes only
	}
}

Congratulations, you’ve learned @AssistedInject!

There is one limitation imposed by the DI framework and one big issue with this code:

  • @AssistedInject dependencies can’t be scoped
  • This is a lot of boilerplate to write

In order to write less boilerplate, Kotlin’s delegation is a powerful tool to know, and we also want our dependency to be scoped to the lifecycle of a Fragment (for demonstration purposes).

@FragmentScoped
class WalkThroughPrefsProvider @Inject constructor(
    private val oneTimePrefFactory: OneTimePref.OneTimePrefFactory
) : OneTimePrefContract by oneTimePrefFactory.create(
    WALK_THROUGH_PREFS,
    WALK_THROUGH_PREFS_SHOWN_KEY
) {
    
    private companion object {
        private const val WALK_THROUGH_PREFS = "walkThrough"
        private const val WALK_THROUGH_PREFS_SHOWN_KEY = "walkThroughKey"
    }
}

Now you can inject your WalkThroughPrefsProvider wherever you need it and have more readable code.

@AndroidEntryPoint
class WalkThroughFragment : Fragment (){
	
	@Inject
	lateinit var walkThroughPrefs : WalkThroughPrefsProvider
}

The code is publicly available as a Gist.

Bundletool and how to utilize Android App Bundle

Reading Time: 6 minutes

The App Bundle is the new official publishing format that app developers use to publish their apps on Google Play, as opposed to the APK, the traditional format that was used for over 10 years of Android’s history.

Splitting the Publishing from the Serving format

In the past, app developers uploaded monolithic APKs to Google Play, and the channel acted like a dumb pipe that did the distribution to the users.

But with the App Bundle, a conscious decision was made to split the publishing format and the serving format. Developers now put everything in this publishing format, and then Google Play processes it and generates optimised APKs to serve the best possible APK to the end device.

The contents of an Android App Bundle with one base module, two dynamic feature modules, and two asset packs.
(source android-developers)

Testing Android App Bundle

This is where the bundletool comes in. It is actually the underlying tool that Google Play and the Android Gradle plugin use to build .aab files. It is also available as a command-line tool, so developers can test their app bundles locally, emulate Google Play’s server-side build, and try out Play App Delivery or Asset Delivery flows before uploading the artefact.

The tool has a few different responsibilities that help developers manipulate Android App Bundles:

  • Build an Android App Bundle from pre-compiled modules of a project.
  • Generate an APK Set archive containing APKs for all possible devices.
  • Extract APK(s) from the APK Set compatible with a given device.
  • Install APK(s) from the APK Set compatible with a connected device.
  • Extract device spec from a device as a JSON file.
  • Add code transparency to an Android App Bundle. Code transparency is an optional code signing mechanism.
  • Verify code transparency inside an Android App Bundle, APK files or an application installed on a connected device.

How to build APK sets from App Bundle

The command build-apks is used to build an APK set for the bundle. It will contain all the APKs for all the modules in the project.

bundletool build-apks \
--bundle=/ExampleApp/example_app.aab \
--output=/ExampleApp/example_app.apks

If the APKs are going to be installed on a device, they need to be signed with a private key; that way, all the APKs contained in the APK set will be signed and installable. Note that if no signing information is specified, bundletool will attempt to sign the APKs with a debug key.

bundletool build-apks \
--bundle=/ExampleApp/example_app.aab \
--output=/ExampleApp/example_app.apks \
--ks=/ExampleApp/keystore.jks \
--ks-pass=file:/ExampleApp/keystore.pwd \
--ks-key-alias=ExampleKeyAlias \
--key-pass=file:/ExampleApp/key.pwd

Generating APK sets for devices

You can target a device by specifying its configuration in a JSON file. This file can be created manually by specifying information about the supported Application Binary Interfaces (ABIs), locales, device features, OpenGL implementations, device screen density and the SDK version. Or you can simply delegate this work to the useful bundletool command get-device-spec.
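For example, dumping the spec of the currently connected device into a JSON file looks like this (the file name matches the spec used below):

bundletool get-device-spec \
--output=pixel4a.json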

{
  "supportedAbis": ["arm64-v8a", "armeabi-v7a", "armeabi"],
  "supportedLocales": ["en-US", "de-DE", "mk-MK"],
  "deviceFeatures": ["android.hardware.bluetooth","android.hardware.camera",    "android.hardware.microphone","android.hardware.nfc"...],
  "glExtensions": ["GL_OES_EGL_image","GL_OES_EGL_image_external","GL_OES_EGL_sync"...],
  "screenDensity": 440,
  "sdkVersion": 30
}

pixel4a.json

bundletool build-apks \
--bundle=/ExampleApp/example_app.aab \
--output=/ExampleApp/example_app.apks \
--device-spec=pixel4a.json

When passing the --connected-device flag, bundletool will create APKs just for the device that is currently connected; when multiple devices are connected, the serial ID can be specified with the --device-id flag.

bundletool build-apks \
--bundle=/ExampleApp/example_app.aab \
--output=/ExampleApp/example_app.apks \
--connected-device

If the device flags are omitted when building the APKs, bundletool will generate an APK set containing APKs for all possible devices.

Universal APK

bundletool can also create a universal APK by passing the --universal flag. This APK will contain all the files for all device configurations and, because of that, it can be installed on any device. It will not in any way represent what the user gets when the app is installed from the Play Store; however, it is a convenient way to pass an APK to users when you don’t know what device they are using.

bundletool build-apks \
--bundle=/ExampleApp/example_app.aab \
--output=/ExampleApp/example_app.apks \
--ks=/ExampleApp/keystore.jks \
--ks-pass=file:/ExampleApp/keystore.pwd \
--ks-key-alias=ExampleKeyAlias \
--key-pass=file:/ExampleApp/key.pwd \
--universal

Testing Dynamic Feature Modules installation

When building a universal APK while testing feature modules, bundletool will include only the modules that specify <dist:fusing dist:include="true"/> in their manifest. If the attribute is set to false, the definitions of the activities will still be merged into the AndroidManifest.xml of the base module; however, the DEX files, resources, assets and native libraries will not be.

A more convenient way of trying out dynamic feature modules is by adding the --local-testing flag to the build-apks command, which adds special metadata that lets us test feature module installation locally.

bundletool build-apks \
--local-testing \
--bundle=/ExampleApp/example_app.aab \
--output=/ExampleApp/example_app.apks

Deploying APKs to a connected device or an emulator

After an APK set is created, bundletool can extract and install the right combination of APKs from that set onto a connected device or an emulator, using the install-apks command.

Note that when using the --local-testing flag, bundletool will read the metadata from the previous step and push all optional modules to the device’s local storage.

bundletool install-apks \
--apks=example_app.apks

Get APK size from an APK set

By providing the path to the APK set and adding the command get-size total, bundletool can estimate the download sizes as they would be served compressed over the wire. Adding the --modules flag gives the size of the base module plus the specified module or modules.

bundletool get-size total \
--apks=/ExampleApp/example_app.apks \
--modules=module1

Support for Code Transparency for App Bundles

As of version 1.7.0, bundletool has added support for adding and checking code transparency, which helps ensure the app’s integrity. To be precise, SHA-256 hashes are created for each DEX file and each .so file that is part of the App Bundle. Along with a signed code transparency file, this allows the matching hashes to be verified using a signing key that is private to the developer. You can read more here about how code transparency works and how to use bundletool to add and verify it for an App Bundle or APK set.

Android App Bundle is Open Sourced through bundletool

With APK sizes getting as small as possible, and functionalities like on-demand feature installation, the future of the bundle format is looking bright in the Android ecosystem.

By open-sourcing bundletool, Google has indirectly open-sourced the App Bundle, meaning that if any other distribution channel wants to get in on the action and implement support for bundles, it can, with the help of bundletool.


This tool has given app developers the means to manipulate the App Bundle. If you have not tried it yet, download this command-line tool from the Github repository and give it a go. Cheers.

How to securely install apps on Debian based Linux distros

Reading Time: 7 minutes

Every newcomer (me included), after painstakingly (at least in the early days) installing the Linux operating system, had only one question in mind: OK, now what?!

Well now, it is time to install some apps!

For this blog I will be using Pop!_OS, which is a free and open-source operating system based on Ubuntu (itself a variant of Debian).

In newer versions of Linux distros, there are a lot of GUI tools for installing new applications. For example, Pop!_OS uses the so-called Pop!_Shop and Ubuntu uses the Ubuntu Software Center, from which you can install applications with a single click, LITERALLY! Those GUI tools use the distribution’s repositories, so anything you install from there should be safe. If we need some app that is not included there, what do we do? The first thing we would think of is to do a little search using the apt tool, and that is also safe, for now. But what if our beloved app is not included in the preconfigured repositories either? Then the only option left is to install it from external sources. Because time is passing by and impatience starts to nest, we can get tempted to copy-paste the first piped command that appears in a Google search. Well, that is where the trouble starts.
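For example, searching the preconfigured repositories with apt and installing from them is as simple as this (the package name here is just an example):

$ apt search vlc
$ sudo apt install vlc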

Pop!_Shop

Try to understand commands you find online before executing them

In blogs and forums, people sometimes mistype a command (I do this often :)). Sometimes a command can work well on some distributions and not on others, or it can be devastating. If you are not sure about a certain command, you can always check its manual page or just read the distribution’s wiki page.

$ man tee

example of manual for the tee command

A curious case is when websites tell us to “pipe” wget or curl into bash in order to install their app. Sometimes they even tell us to ignore certificates as well.

$ wget -O - http://example.com/install.sh | sudo sh

 

example of “piping” wget into bash without using https

Imagine that the .sh file we are “piping” has a line in it similar to this one:

rm -rf /$TMP_DIR

What would happen if the connection closes midstream and all left from the script we are downloading is this:

rm -rf /

If that script gets executed, and it will because it is a perfectly legal script, it’s going to hurt badly. That is why some maintainers started wrapping code snippets into shell functions:

docleanup() { 
  rm -rf /$TMP_DIR
}

docleanup

A script like this, if some part of it is missing, will simply result in an error when executed. Recent versions of rm also have a safeguard against rm -rf / (--preserve-root is the default), but either way: check the script before executing it, or don’t “pipe” into bash at all.
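If you still want to use such an install script, one safer habit is to download it first, read it, and only then execute it (the URL and file name here are just placeholders):

$ wget -O install.sh https://example.com/install.sh
$ less install.sh   # read what the script actually does
$ sh install.sh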

Install apps from sources you trust

If you need some famous app like IntelliJ or Slack, go to its website and you will find all the information on how to install it. Lately, it is not unusual for software maintainers to provide a .deb file. When there is no other option and you trust the source, go for it; otherwise, prefer to install the app the old-fashioned way by downloading a .tar.gz file, which you can extract wherever you want and later remove completely with less effort.

Another thing Linux users usually do is install extensions. I, on the other hand, like the original look and feel of most Linux distributions, so I rarely visit those kinds of websites. By installing extensions from unofficial sources, you can become a victim of the second biggest virus to hit Linux users in 2019, specifically those running the GNOME desktop environment, dubbed the “Evil GNOME”: rightly so, because it runs as an extension in (you guessed it) the GNOME desktop environment. And yes, the Linux world is not virus-free, because every software made by man can be broken by man. This virus can monitor your audio, keep track of newly created files, and even has a key-logger. If you need GNOME extensions, you should at least visit the official website for them.

If you use browser extensions, check if they are on the browser’s recommended list or have a public GitHub repository, in which case you can check them out yourself. Some browser extensions can keep track of your browsing history and even the things you type, if they have the required rights. The bottom line is: install extensions from official and trusted websites, or don’t install them at all.

Don’t use apt-key anymore

Some websites offer us their GPG key and repository in order to install and update their software, which will look something like:

$ wget -qO - https://<example.com>/<repo-key-pub>.gpg | sudo apt-key add -

$ echo "deb https://<example.com>/ <apt/stable/>" | sudo tee /etc/apt/sources.list.d/<my-repository>.list

$ sudo apt-get update

$ sudo apt-get install <my-package>

<repo-key-pub> will be replaced with their public key
<example.com> will be replaced with their website or download path
<apt/stable/> will be replaced with their repository path
<my-repository> will be replaced with their repository name
<my-package> will be replaced with their package name, can be similar to the repository name

But when we try to execute this command on newer Debian-based distributions we will likely see the following output: “Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8))”. This message doesn’t mean that the developer decided he has better things to do and closed the project, but it has something to do with security.

The reason for this change is that when an OpenPGP key is added to /etc/apt/trusted.gpg or /etc/apt/trusted.gpg.d, the key is trusted by apt. But that is not all! That key is trusted by apt for all other repositories configured on the system that don’t have a signed-by option (we will see it later in this blog), even the official Debian/Ubuntu repositories. As a result, any unofficial apt repository which has its key in one of those 2 locations can replace any package on the system. It is hardly likely that a source you trust will do that, but even trusted sources can get hacked. It’s also important to know that, while the apt deprecation message says to “manage keyring files in trusted.gpg.d instead“, the Debian wiki states otherwise. That is because adding keys to /etc/apt/trusted.gpg and /etc/apt/trusted.gpg.d is equally insecure, as we mentioned above.

Only 2 steps are required for the correct (secure) procedure:

STEP 1: Download the key into /usr/share/keyrings directory.

$ wget -O- https://<example.com>/<repo-key-pub>.gpg | gpg --dearmor | sudo tee /usr/share/keyrings/<myrepository>-archive-keyring.gpg

gpg --dearmor is piped in if the file is ASCII armored (encoded)
you would probably need to replace all text within <>
note that the key extension can be .gpg, .asc, .key, and probably others

There is nothing special to this location, it is just a directory. Convention states that directory /usr is for all the programs and support files used by regular users, /usr/share contains shared data among programs, and /usr/share/keyrings is just a descriptive name.

STEP 2: add the repository sources.list entry. Previously, a sources.list file from the /etc/apt/sources.list.d directory would look like this (just a simple file with the following content, nothing special):

deb https://repository.example.com/debian/ stable main

However, to be able to use the key added under step 1, the sources.list entry must now look like this (/etc/apt/sources.list.d/<myrepository.list>):

deb [signed-by=/usr/share/keyrings/<myrepository>-archive-keyring.gpg] https://repository.example.com/debian/ stable main

As you can see there is an additional signed-by option (as we mentioned somewhere above) which points to the exact location of the key. And we are almost done.

After this step you can update the package list and install your app:

$ sudo apt update && sudo apt install <my-package> -y

 

Using Hystrix as a fault-tolerant strategy

Reading Time: 5 minutes

As everybody knows by now, a microservice architecture represents a collection of multiple services. Each service contains its own business logic, in contrast to a monolithic architecture, which contains everything in one place. This means we usually have to maintain multiple services at the same time.

The microservices communicate with each other in order to fulfil their needs. As usual, when you need it the most, an instance of one of the microservices can go down or respond with a delay, which is what we usually call an unreachable service. The chances of failure need to be taken into consideration and handled in an appropriate way.

Why is taking care of latency important in a microservice architecture?

Increased latency can occur when one of the microservices is:

  • Reading/writing to database
  • Synchronously calling another service
  • Hitting the timeout of asynchronous communication

If we consider the following scenario:
We have 5 microservices that communicate with each other. If microservice 5 goes down, all the other services that depend on it can be affected.

In this type of scenario, the solution is a fault-tolerance strategy.

Circuit breaker

A circuit breaker is a pattern that can help achieve fault tolerance. The circuit breaker detects when an external service fails and, in that case, opens the circuit. All the incoming requests to the unhealthy service are then rejected and errors are returned, instead of trying to reach out to the unhealthy service over and over again. For this, we can use Hystrix.

What is Hystrix?

Hystrix is a library which implements the fault-tolerance strategy. It is used for isolating access to remote services and increasing resilience, in order to prevent cascading failures and offer the ability to recover quickly from disaster.

So, how does Hystrix actually work?

Let’s take again the architecture from above. Suppose that there are multiple user requests from microservice One that require a piece of information from microservice Five. In this situation, the possibility of microservice One being blocked is very obvious, since it might keep waiting for responses from microservice Five. Microservice Five can also be overloaded with requests, and the outcome would be blocking the whole service. This is where Hystrix kicks in and helps avoid the problem.

The external requests to the service in microservice Five are wrapped in a HystrixCommand, which defines the behaviour of the requests. Part of that behaviour is the number of threads available to handle the requests. In our example, the service in microservice Five can be given ten threads for handling external requests. By wrapping the service in a HystrixCommand, we are limiting the number of concurrent requests the service is supposed to get.

By default, Hystrix uses a pool of ten threads. If there are more concurrent requests than available threads, the rest of the requests are rejected and redirected to the fallback method.

Using Hystrix in Spring boot

First thing, adding the dependency in pom.xml:

<dependency> 
   <groupId>org.springframework.cloud</groupId> 
   <artifactId>spring-cloud-starter-hystrix</artifactId> 
   <version>1.4.7.RELEASE</version> 
</dependency>

<dependencyManagement> 
   <dependencies> 
      <dependency> 
         <groupId>org.springframework.cloud</groupId> 
         <artifactId>spring-cloud-dependencies</artifactId> 
         <version>Hoxton.SR8</version> 
         <type>pom</type> 
         <scope>import</scope> 
      </dependency> 
   </dependencies> 
</dependencyManagement>

Add the @EnableHystrix annotation to the main class:

@SpringBootApplication 
@EnableHystrix 
public class HystrixApplication { 

   public static void main(String[] args) { 
      SpringApplication.run(HystrixApplication.class, args); 
   } 
}

The next step is to define the fallback method for HystrixCommand:

@Service 
@Slf4j 
public class HystrixService { 
 
    @HystrixCommand(fallbackMethod="fallbackHystrix", 
    commandProperties = {@HystrixProperty(name = 
    "execution.isolation.thread.timeoutInMilliseconds", value = "2000")}) 
    public String testHystrix(String message) throws InterruptedException { 
        Thread.sleep(4000); 
        return message != null ? message : "Message is null"; 
    } 
 
    public String fallbackHystrix(String message) { 
        log.error("Request took to long. Timeout limit: 2000ms."); 
        return "Request took to long. Timeout limit: 2000ms. Message: " + message; 
    } 
 }

This is just a simple example of how the library can be used.

In order to simulate a timeout, Thread.sleep(4000) (milliseconds) is added, and the timeout for the response is set to 2000 milliseconds as a HystrixProperty in the @HystrixCommand annotation, after which the call should end up in the fallback method.

Now we can test the implementation by executing the following request:

http://localhost:8080/hystrix-example?message=hello
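The request above assumes a small controller sitting in front of the service; a minimal sketch (the endpoint name and wiring are assumptions, not part of the original example) could look like this:

import lombok.RequiredArgsConstructor;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequiredArgsConstructor
public class HystrixController {

    private final HystrixService hystrixService;

    // delegates to the Hystrix-wrapped service method shown above
    @GetMapping("/hystrix-example")
    public String hystrixExample(@RequestParam String message) throws InterruptedException {
        return hystrixService.testHystrix(message);
    }
}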

If we want to change the default thread pool size of HystrixCommand, we can add the following thread pool properties:

@Service 
@Slf4j 
public class HystrixService { 
 
    @HystrixCommand(fallbackMethod = "fallbackHystrix", 
            commandProperties = {@HystrixProperty(name = 
            "execution.isolation.thread.timeoutInMilliseconds", value = "2000")}, 
            threadPoolProperties = {@HystrixProperty(name = "coreSize", value = "3")}) 
    public String testHystrix(String message) throws InterruptedException { 
        return message != null ? message : "Message is null"; 
    } 
 
    public String fallbackHystrix(String message) { 
       log.error("Request took to long. Timeout limit: 2000ms."); 
       return "Request took to long. Timeout limit: 2000ms. Message: " + message; 
   } 
 }

The fallback method is called when some fault occurs.
An important thing to notice here is that the signature of the fallback method should be the same as that of the method on which the @HystrixCommand annotation is defined.

The working example of the above exercise can be found here hystrix-example

Summary

Moving away from a monolithic architecture to microservices usually comes with quite a few challenges.
In this blog post, we took a look at one of them, but we just scratched the surface.
As more challenges are coming down the pipeline, stay tuned, and hope to see you in one of the next posts.

Angular provider scopes explained

Reading Time: 8 minutes

Services are one of the building blocks of Angular you will see in every Angular application. Their main purpose is to increase modularity and reusability or in other words to separate a component’s view-related logic from any other kind of processing. Usually, components are delegating various tasks to services like fetching data from a server, validating user input, logging, etc. By defining these kinds of tasks in injectable services, we are making them available to any component.

A service is just a TypeScript class that has the @Injectable decorator attached. This decorator makes the service available to Angular’s Dependency Injection (DI) mechanism, which is built into the Angular framework.

import { Injectable } from '@angular/core';

@Injectable()
export class ExampleService {

  constructor() { }
}

When a consumer such as a component references a service, Angular DI provides an instance of that service.

import { Component } from '@angular/core';
import {ExampleService} from "./services/example.service";

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  // Reference to ExampleService
  constructor(private exampleService: ExampleService) {
  }
}

How does Angular DI know how many instances of a service to provide? Is it always just one? How can we tell Angular how many instances to create?

The answer is: provider scopes.

Before we dive into provider scopes and the ways of providing services in Angular, let’s define the module injector (ModuleInjector) hierarchy.

Visualization of module injector hierarchy

In the image above we can see the ModuleInjector hierarchy:

  • Platform Module injector – the top module injector, usually used for special things like DomSanitizer
  • Root Module injector – main injector and it is the place for all eagerly loaded module providers
  • Lazy Module injectors – each lazy-loaded module creates a separate child injector from the root injector

These module injectors are used by Angular Dependency Injection when creating instances of services.

Now it’s time to dive into provider scopes and how everything works in practice.

There are five provider scopes in Angular:

  • Module scope
  • Component scope
  • Root scope
  • Platform scope
  • ‘any’ injector

Module scope provider

When a service is registered in the providers array of a @NgModule, we say the service is provided in that module.

...
import {ExampleService} from "./services/example.service";

@NgModule({
  ...
  providers: [ExampleService]
})
export class AppModule { }

There are two different scenarios to cover when one service is provided in Angular Module:

  • Service is provided in the root module or in an eagerly loaded module – The Angular DI mechanism will use a Root Module Injector and will create one service instance which will be shared between the root module and eagerly loaded modules (all providers from all imported modules are merged into the root injector)
  • Service is provided in a lazy-loaded module – The Angular DI mechanism will use Lazy Module (child injector) and will create one instance for every different lazy-loaded module where it is provided

Component scope provider

A service can also be provided in a @Component; the registration is again made in the providers array. This means that if the service is provided and referenced in a particular component, then one separate instance of the service will be created for that component and all of its children, independent of other providers. This applies per instance of the component, which in practice means that if there are 3 instances of that component, 3 instances of the service will be created.

import { Component } from '@angular/core';
import {ExampleService} from "./services/example.service";

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss'],
  providers: [ExampleService]
})
export class AppComponent {
  // Reference to ExampleService
  constructor(private exampleService: ExampleService) {
  }
}

The component scope provider is the most specific one: Angular starts searching for providers at the component itself and then goes up the hierarchy until it finds one, or until it reaches the Platform Module Injector.

Root scope provider

Starting from Angular version 6, it is possible to provide a service without registering it in a @NgModule or @Component; instead, the providing info is placed inside the @Injectable decorator with the providedIn: ‘root’ option.

import { Injectable } from '@angular/core';

@Injectable({
  providedIn: 'root'
})
export class ExampleService {

  constructor() { }
}

With this way of providing services, they are provided in the Root Module Injector (of the module typically called AppModule), and Angular DI creates a single, shared instance of the service and injects that same instance at every reference. The service effectively acts like a Singleton (Singleton pattern). This holds even if it is referenced in both lazy and non-lazy modules: they all receive the same instance.

Visualization where services with providedIn: ‘root’ option will be provided

The other benefit of this kind of registration is tree-shaking, which optimizes the app bundle size: if it turns out that the service is not referenced anywhere, Angular DI does not register the service in the root injector at all.

Platform scope provider

Starting from Angular 9, one of the two new ways of providing services is providedIn: ‘platform’ option.

import { Injectable } from '@angular/core';

@Injectable({
  providedIn: 'platform'
})
export class ExampleService {

  constructor() { }
}

When a service is defined with providedIn: ‘platform’, it is provided in the Platform Module Injector and acts like a Singleton for all applications; all lazy modules also use the instance from the platform.

Visualization where services with providedIn: ‘platform’ option will be provided

This kind of provider looks similar to ‘root’, but the key difference shows up only when we are running multiple Angular applications in the same window. Every application running in the window has a separate Root Module Injector, but they all share the Platform Module Injector. In practice this means we are sharing services across application boundaries.

‘any’ Injector

This is the second newly added option, available starting from Angular version 9; the syntax is providedIn: ‘any’.

import { Injectable } from '@angular/core';

@Injectable({
  providedIn: 'any'
})
export class ExampleService {

  constructor() { }
}

Providing a service like this means the service will be provided in every module injector where it is used, so one service can have more than one instance, depending on its usage. The rule here is that every lazy-loaded module will have its own instance, and all eagerly loaded modules will share one instance provided by the Root Module Injector.

Visualization where services with providedIn: ‘any’ option will be provided

Summary

When Angular finds a service referenced in some component, in order to create an instance or reuse an existing one, it starts looking for a provider in the following order:

  1. Component providers
  2. Module providers
  3. Root providers
  4. Looks for providedIn: ‘any’ injector and decides the number of instances
  5. Platform providers

The component scope provider option will mostly be used in edge cases, for example when we have dynamically created tabs that all use the same root component but where every tab should have its own state: we add the service to the providers list of the tab root component, and every tab gets its own instance.

The module scope provider option was the default way of providing services before Angular 6. Nowadays it is still widely used when we want one instance per lazy-loaded module, but as we saw previously, starting from Angular 9 this can also be achieved with the providedIn: ‘any’ option.

providedIn: ‘root’ is the default option when we create a new injectable service in Angular; most of the time we need tree-shakable singleton services within an application, and this option gives us exactly that.

providedIn: ‘any’ is a very helpful option if we want to make sure that a service is a singleton within module boundaries.

providedIn: ‘platform’ will mostly be used when we want a singleton service shared between several Angular applications that run in the same window.

Spring Boot REST API with OpenAPI (SwaggerUI) Codegen

Reading Time: 5 minutes

When working with microservices architecture, one of the most important aspects is inter-service communication. Usually, each microservice stores data in its own database, and if we follow the MVC design pattern, we probably have model classes that map the relational database to object models, and components that contain methods for performing CRUD operations. These components are exposed by controller endpoints.

For one microservice to call another, the caller needs to know the exact request and response model classes. This article shows a simple example of how to generate such models with SpringDoc OpenAPI.

I will create two services that will provide basic CRUD operations. For demonstrating purposes I chose to store data about vehicles:

  • vehicle-manager – the microservice that provides vehicles’ data to the client
  • vehicle-manager-client – the client microservice that requests vehicles’ data

For the purpose of this tutorial, I created empty Spring Boot projects via SpringInitializr.

In order to use the OpenAPI in our Spring Boot project, we need to add the following Maven dependency in our pom file:

<dependency>
  <groupId>org.springdoc</groupId>
  <artifactId>springdoc-openapi-ui</artifactId>
  <version>1.5.5</version>
</dependency>

In the vehicle-manager microservice I created a Vehicle class that looks like this:

@Data
@Builder
@Schema(name = "Vehicle", description = "Example vehicle schema")
public class Vehicle {
    private VehicleType vehicleType;
    private String registrationPlate;
    private int seatsCount;
    private Category category;
    private double price;
    private Currency currency;
    private boolean available;
}

And a controller:

package com.n47.vehiclemanager.ctrl;

import com.n47.vehiclemanager.model.Vehicle;
import com.n47.vehiclemanager.service.VehicleService;
import io.swagger.v3.oas.annotations.tags.Tag;
import lombok.RequiredArgsConstructor;
import org.springframework.web.bind.annotation.*;

import javax.validation.Valid;

@Tag(name = "vehicle", description = "Vehicle controller API")
@RestController
@RequiredArgsConstructor
@RequestMapping(path = "/vehicle")
public class VehicleCtrl {

    private final VehicleService vehicleService;

    @PostMapping(path = "/add")
    public void addVehicle(@RequestBody @Valid Vehicle vehicle) {
        vehicleService.addVehicle(vehicle);
    }

    @GetMapping(path = "/get")
    public Vehicle getVehicle(@RequestParam String registrationPlate) throws Exception {
        return vehicleService.getVehicle(registrationPlate);
    }
}

The important OpenAPI annotations here are @Schema and @Tag. The former defines the actual class that should be included in the API documentation. The latter is used for grouping operations, such as all the endpoints of one controller.

The swagger documentation interface for Vehiclemanager microservice is shown on Figure 1, and can be accessed on the following links:

If we open http://localhost:8080/api-docs in our browser (or any other port we set our Spring boot app to run on), we can get the entire documentation for the Vehiclemanager microservice. The important part for the model generation is right under components/schemas, while the controller endpoints are under paths.

{
   "openapi":"3.0.1",
   "info":{
      "title":"OpenAPI definition",
      "version":"v0"
   },
   "servers":[
      {
         "url":"http://localhost:8080",
         "description":"Generated server url"
      }
   ],
   "tags":[
      {
         "name":"vehicle",
         "description":"Vehicle controller API"
      }
   ],
   "paths":{
      "/vehicle/add":{
         "post":{
            "tags":[
               "vehicle"
            ],
            "operationId":"addVehicle",
            "requestBody":{
               "content":{
                  "application/json":{
                     "schema":{
                        "$ref":"#/components/schemas/Vehicle"
                     }
                  }
               },
               "required":true
            },
            "responses":{
               "200":{
                  "description":"OK"
               }
            }
         }
      },
      "/vehicle/get":{
         "get":{
            "tags":[
               "vehicle"
            ],
            "operationId":"getVehicle",
            "parameters":[
               {
                  "name":"registrationPlate",
                  "in":"query",
                  "required":true,
                  "schema":{
                     "type":"string"
                  }
               }
            ],
            "responses":{
               "200":{
                  "description":"OK",
                  "content":{
                     "*/*":{
                        "schema":{
                           "$ref":"#/components/schemas/Vehicle"
                        }
                     }
                  }
               }
            }
         }
      }
   },
   "components":{
      "schemas":{
         "Vehicle":{
            "type":"object",
            "properties":{
               "vehicleType":{
                  "type":"string",
                  "enum":[
                     "MOTORBIKE",
                     "CAR",
                     "VAN",
                     "BUS",
                     "TRUCK"
                  ]
               },
               "registrationPlate":{
                  "type":"string"
               },
               "seatsCount":{
                  "type":"integer",
                  "format":"int32"
               },
               "category":{
                  "type":"string",
                  "enum":[
                     "A",
                     "B",
                     "C",
                     "D",
                     "E"
                  ]
               },
               "price":{
                  "type":"number",
                  "format":"double"
               },
               "currency":{
                  "type":"string",
                  "enum":[
                     "EUR",
                     "USD",
                     "CHF",
                     "MKD"
                  ]
               },
               "available":{
                  "type":"boolean"
               }
            },
            "description":"Example vehicle schema"
         }
      }
   }
}

I am going to create a Vehiclemanager-client service, running on port 8082, that will get vehicle information for a given registration plate by calling the Vehiclemanager microservice. In order to do so, we need to generate the Vehicle model class defined in the original Vehiclemanager microservice. We can generate it by adding the swagger codegen plugin to the pom’s plugins section of the new demo service, like this:

<profiles>
  <profile>
    <id>generateModels</id>
    <build>
      <plugins>
        <plugin>
          <groupId>io.swagger.codegen.v3</groupId>
          <artifactId>swagger-codegen-maven-plugin</artifactId>
          <version>3.0.11</version>
          <configuration>
            <output>${project.basedir}</output>
            <inputSpec>default-config</inputSpec>
            <language>java</language>
            <generateModels>true</generateModels>
            <generateModelDocumentation>false</generateModelDocumentation>
            <generateApis>false</generateApis>
            <generateApiTests>false</generateApiTests>
            <generateModelTests>false</generateModelTests>
            <generateSupportingFiles>false</generateSupportingFiles>
            <configOptions>
              <sourceFolder>src/main/java</sourceFolder>
              <hideGenerationTimestamp>true</hideGenerationTimestamp>
              <sortParamsByRequiredFlag>true</sortParamsByRequiredFlag>
              <checkDuplicatedModelName>true</checkDuplicatedModelName>
              <useBeanValidation>true</useBeanValidation>
              <library>feign</library>
              <dateLibrary>java8-localdatetime</dateLibrary>
            </configOptions>
          </configuration>
          <executions>
            <execution>
              <id>generate-vehiclemanager-classes</id>
              <goals>
                <goal>generate</goal>
              </goals>
              <configuration>
                <inputSpec>http://localhost:8080/api-docs</inputSpec>
                <language>java</language>
                <modelPackage>com.n47.domain.external.model</modelPackage>
                <modelsToGenerate>Vehicle</modelsToGenerate>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

After running the corresponding maven profile with:

> mvn clean compile -P generateModels

the models defined in the <modelsToGenerate> tag will be created under the package specified in the <modelPackage> tag.

Codegen generates the entire model class for us, including all the classes that are defined inside it.
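
For orientation, the generated Vehicle class has roughly the following shape. This is only a simplified sketch based on the schema shown above; the real codegen output also contains Jackson annotations, bean-validation constraints and equals/hashCode:

package com.n47.domain.external.model;

public class Vehicle {

    // enums derived from the OpenAPI schema shown earlier
    public enum VehicleTypeEnum { MOTORBIKE, CAR, VAN, BUS, TRUCK }
    public enum CategoryEnum { A, B, C, D, E }
    public enum CurrencyEnum { EUR, USD, CHF, MKD }

    private VehicleTypeEnum vehicleType;
    private String registrationPlate;
    private Integer seatsCount;
    private CategoryEnum category;
    private Double price;
    private CurrencyEnum currency;
    private Boolean available;

    // getters and setters omitted for brevity
}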

It is important to note that we can have models generated from different services. In each <execution> block in the XML snippet above, we can define the corresponding API documentation link in its <inputSpec> tag.

To demo the data transfer from the Vehiclemanager to the Vehiclemanager-client microservice, we can send a simple request via Postman. The request I am going to use is a GET request that accepts a registrationPlate parameter, which is used to query the vehicles stored in the Vehiclemanager microservice. The response is shown in Figure 3: a JSON containing the vehicle data that I hardcoded in the Vehiclemanager microservice.
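
For completeness, here is a minimal sketch of how the Vehiclemanager-client could call the Vehiclemanager microservice using the generated model, assuming Spring Cloud OpenFeign is on the classpath; the endpoint path and parameter name are my assumptions and are not taken from the original API:

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;

import com.n47.domain.external.model.Vehicle;

// Hypothetical client interface; enable it with @EnableFeignClients in the application class.
@FeignClient(name = "vehiclemanager", url = "http://localhost:8080")
public interface VehicleManagerClient {

    // Assumed endpoint: returns the vehicle stored in Vehiclemanager for the given registration plate.
    @GetMapping("/vehicles")
    Vehicle getByRegistrationPlate(@RequestParam("registrationPlate") String registrationPlate);
}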

Using OpenAPI helps us get rid of copy-paste and boilerplate code, and more importantly, we have an automated mechanism that generates the latest models from other microservices on each Maven clean compile.

You can find the full code example microservices in the links below:

Feel free to download and run them yourself, and leave a comment or feedback.

Rest assure your API

Reading Time: 4 minutes


Most, if not all, of today's applications expose some API for interaction, either for customers or for other applications. An application programming interface, or API, is a software mediator that allows two applications to communicate. Each time we use Facebook, YouTube or some other app, we are essentially using an API. An API is a set of HTTP endpoints that are used to send and retrieve data in some form, JSON or XML. Making sure those HTTP endpoints send and retrieve correct data, and thus work according to the specification, is a vital requirement. Testing APIs belongs to the last (E2E) layer of the testing pyramid, for which you may find more information in my previous blog.

Introduction to Rest Assured

Rest Assured is an open-source Java library that is used for testing RESTful web services. It allows us to write tests using the BDD pattern. Rest Assured is a headless client for accessing REST web services. The library is highly customizable, allowing us to create a wide variety of request combinations to exercise different combinations of the application's core business logic.

High customizability also comes in handy when we want to verify the responses from the server, where we can verify the status code, status message, body, headers, etc. This makes Rest Assured a versatile library that is often used for API testing.

Rest Assured

Pseudo Syntax:

Given(). 
        param("a", "b"). 
        header("c", "d").
when().
Method().
Then(). 
        statusCode(XXX).
        body("x.y", equalTo("z"));

The syntax of Rest Assured is the most interesting part: it uses BDD-style syntax and is very easy to understand.

Explanation:

  • Given() – used to set up the preconditions/context; here you pass the request headers, query and path parameters, body and cookies. This is optional if these items are not needed in the request
  • When() – marks the premise of the test
  • Method() – specifies the HTTP method to be executed (POST, GET, PUT, PATCH, DELETE)
  • Then() – specifies the result/outcome and is used for assertions

Let’s create automation tests

Create a new project in IntelliJ

Add the following dependency:

        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>RELEASE</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>io.rest-assured</groupId>
            <artifactId>rest-assured</artifactId>
            <version>4.4.0</version>
            <scope>test</scope>
        </dependency>

Create new Test class

Simple test example:

import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;

public class HelloYouTubeRestAssured {

    @Test
    public void greetingsYouTube() {
        given().when()
                .get("http://youtube.com/")
                .then()
                .statusCode(200);
    }
}

The simple test connects to YouTube, performing a GET call and making sure that the server responds with a success status code of 200.

Another test verifies the Users API:

import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.startsWith;

public class UsersApiTest {

    @Test
    public void checkUsers() {
        given()
                .baseUri("https://jsonplaceholder.typicode.com")
                .when()
                .get("/users")
                .then()
                .statusCode(200)
                .statusLine("HTTP/1.1 200 OK")
                .body("id", hasSize(10))
                .body("name[0]", equalTo("Leanne Graham"))
                .body("username[0]", equalTo("Bret"))
                .body("email[0]", equalTo("Sincere@april.biz"))
                .body("address[0].city", equalTo("Gwenborough"))
                .body("phone[0]", startsWith("1-770-736-8031"))
                .body("website[0]", equalTo("hildegard.org"))
                .body("company[0].name", equalTo("Romaguera-Crona"));
    }
}

As we can see from the above examples, the tests are self-contained in the sense that a single call is performed to the server and only a single response is evaluated. The above test navigates to the Users API of the application and then verifies the response from the server. The verification first checks that the status code and status line are OK, then that the response has 10 items, and finally that the first item contains the expected data. We are also able to assert nested data of the user object, such as address[0].city and company[0].name. The assertions we use come from org.hamcrest, which is incorporated into Rest Assured.
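
If we also want to check response headers or reuse the response body, the same call can be extended. The following is only an illustrative sketch (the test class name is hypothetical and not part of the original example):

import io.restassured.response.Response;
import org.junit.jupiter.api.Test;

import java.util.List;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.containsString;

public class UsersApiExtractTest {

    @Test
    public void checkUsersAndExtract() {
        // verify a response header and keep the response for further processing
        Response response = given()
                .baseUri("https://jsonplaceholder.typicode.com")
                .when()
                .get("/users")
                .then()
                .statusCode(200)
                .header("Content-Type", containsString("application/json"))
                .extract().response();

        // the extracted body can be queried with JsonPath
        List<String> names = response.jsonPath().getList("name");
        System.out.println(names.size());
    }
}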

Conclusion

Even though we have only scratched the surface here, I hope that you now have a better understanding of Rest Assured. You can find a working example with the tests in this repository.
Also, you can find more about Rest Assured usage here.

WTF are NFTs?!

Reading Time: 6 minutes

Before I start to explain what an NFT is, let's have a look at some examples. I tried to gather different styles of NFTs in the high price segment. There are, of course, also NFTs for $100 and less.

Sold for $210’000

LeBron James: Dunk, From the Top (Series 1)

Source: https://nbatopshot.com/moment/bigdog_brothers+2499f572-8280-4057-ac27-5603971de95d

Sold for $888’888

Hairy: Musician, fashion designer, and entrepreneur Steve Aoki recently collaborated with 3D illustrator Antoni Tudisco to produce the eclectic piece known simply as ‘Hairy’ (A blue bespectacled creature bopping to one of Aoki’s beats in a 36-second clip).

Source: https://niftygateway.com/itemdetail/primary/0xbeccd9e4a80d4b7b642760275f60b62608d464f7/1

Sold for $2.9 million

First Twitter Tweet: First tweet posted by Twitter founder and CEO Jack Dorsey

Source: https://v.cent.co/tweet/20

Sold for $69 million

EVERYDAYS: THE FIRST 5000 DAYS: The artwork, created by famed digital artist Mike “Beeple” Winkelmann represents a collage of 5,000 of Beeple’s earlier artworks

Source: https://onlineonly.christies.com/s/beeple-first-5000-days/beeple-b-1981-1/112924

What is an NFT?

As you saw in the few examples, NFTs can be anything. It could be a tweet, a digital painting, a video clip, an animation, music, a 3D model, a picture, a GIF or even virtual land in a blockchain-based game. To further explain what NFT exactly means, it’s easier to split the word and have a closer look at Non-Fungible and Token.

NFT = Non-fungible token

Non-fungible

The official definition of fungible is “to be substituted for something of equal value or utility; interchangeable, exchangeable, replaceable”. For now, let’s replace the word fungible with replaceable. So non-fungible means non-replaceable. Let’s make some examples:

Physical fungible (replaceable)

  • CHF Coins and CHF Notes (my 10 Swiss Franc note has the same value as 10 x 1 Swiss Franc coins)
  • Precious metals like gold and silver (my 1kg gold bar has the same value as your 1kg gold bar)

Virtual fungible (replaceable)

  • Bitcoins and other crypto currencies (my 0.00001 BTC has the same value as your 0.00001 BTC)

Physical non-fungible (non replaceable)

  • Historic Coins (the first 1 Swiss Franc coin, or a limited special edition coin. The value is not really defined. Also, the first 1 Swiss Franc coin does not have the same value as the current 1 Swiss Franc coin)
  • Art (like paintings from Banksy. It’s unique and the value is only defined by the potential buyers. If a painting is destroyed, it’s not replaceable by another “similar” one)

Virtual non-fungible (non replaceable)

  • Tweets
  • NBA Dunks
  • Art (image, video, 3D model, music)

Token

The token certifies a digital asset to be unique and therefore not interchangeable. It’s proof of ownership that is stored on the blockchain (in this case: Ethereum). While someone can sell an NFT representing his work, the buyer does not necessarily receive copyright privileges if ownership of the NFT changes, allowing the original owner to create further NFTs of the same work. An NFT is merely proof of ownership separate from copyright.

NFT Properties

  • Unique – each NFT has different properties that are usually stored in the token's metadata.
  • Provably scarce – there is usually a limited number of NFTs, with an extreme example of having only 1 copy; the number of tokens can be verified on the blockchain, hence its provability.
  • Indivisible – most NFTs cannot be split into smaller denominations, so you cannot buy or transfer a fraction of your NFT.

The dark side of NFTs

NFTs are stored on the ETH (Ethereum) blockchain, which is currently using 51 TWh/year. That's 51'000'000'000 kWh/year (comparable to the power consumption of the whole of Portugal). If we calculate the carbon footprint, it's 30'000'000 tons of CO2/year. You could drive 121'000'000'000 km with a car to have the same emissions as ETH has in one year.

Source: https://digiconomist.net/ethereum-energy-consumption/

Create your own NFT

The process is very simple and creating an NFT at OpenSea is done in a few minutes. I decided to go for OpenSea because it’s the most popular marketplace and easiest to use.

Create account, collection and item

  1. Create digital art (image, video, audio, or 3D model) with your favourite tools. I did it with Adobe Photoshop and Adobe Premiere.
  2. Create an account at OpenSea. I used MetaMask as my wallet. You can also choose other wallets. If using MetaMask, it’s the easiest to have it installed as a Chrome/Firefox addon.
  3. Create a collection and add a new item to it.
  4. Upload your art (file types supported: JPG, PNG, GIF, SVG, MP4, WEBM, MP3, WAV, OGG, GLB, GLTF. Max size: 40 MB) and give it a name. That’s all you need for now!

Sell your item

After you created your item inside your collection, it is ready for selling. You will have the option to sell for a fixed price or create an open auction (highest bid). I decided to go for an auction and I will let it run until the end of the year.

  1. Click on your item
  2. Click on the top right corner button “Sell”
  3. Select “Highest Bid” to create an open auction
  4. All other settings are customizable, like minimum bid, the reserve price and expiration date. I set the minimum bid to 0, the reserve price to 1 and the expiration date to 31.12.2021 (end of the year)
  5. Post the listing with the big blue button. Posting something the first time requires a gas fee. Depending on the time and day, this will vary. See https://ycharts.com/indicators/ethereum_average_gas_price
  6. Congratz! You created and listed your first NFT!

Our N47 NFT is up for sale!

Of course, I had to create an N47 NFT too! Our sale ends at the end of the year (December 31, 2021, at 12:00 am CEST). Check the listing at OpenSea and make a bid! It will be a great investment 🤑

https://opensea.io/assets/0x495f947276749ce646f68ac8c248420045cb7b5e/84521541564558765496637908089370856586310363315177326824411334291304117960705

Spring Boot password encryption with Jasypt

Reading Time: 5 minutes

Securing sensitive data is extremely important. In the following tutorial, we will go through the process of encrypting sensitive data in a Spring Boot application. We will take an easy approach to this very common procedure, which takes place in almost any software project: the setup and usage of the given high-security Java library are simple, without the need for deep, in-depth understanding of cryptography, encryption capabilities and encryption algorithms – just a simple setup with a few configuration steps. It is recommended to rely on the secure default configuration, but Jasypt also offers quite some customization if one needs it.

Jasypt (Java Simplified Encryption) provides utilities for encrypting sensitive user information, such as DB passwords, servers' credentials, or other sensitive personal data. This information is key to our users' privacy, so we as developers need to make sure that no one gets the right to access it; regardless of where it is stored, it always needs to be encrypted. Never store sensitive data in plain mode. It's common sense we need to follow, and it's also something we need to honour if we want to gain our users' trust. For this tutorial, we will use a specific library, Jasypt Spring Boot Starter, widely used across the Spring Boot community.

Jasypt setup steps

  1. Add jasypt-spring-boot-starter maven dependency in the pom.xml of the Spring Boot project
  2. Select a secret key to be used for encryption and decryption
  3. Generate Encrypted Key
  4. Add the Encrypted key in the config file
  5. Run the application

Let’s go into details in all of these steps:

Step 1. Adding maven dependency

<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot-starter</artifactId>
    <version>3.0.3</version>
</dependency>

This comes as a regular entry in the pom.xml file list of dependencies.

We should try to use the latest stable version. This version is the latest one at this moment and it offers better support for newer versions of Spring Boot, starting from 2.1.x and upwards. I would also advise using it because it comes with a more secure encryption algorithm by default, PBEWITHHMACSHA512ANDAES_256.
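
If you ever need to deviate from these defaults, jasypt-spring-boot also lets you provide your own encryptor by exposing a bean named jasyptStringEncryptor. The following is only a sketch of that customization option and is not required for this tutorial:

import org.jasypt.encryption.StringEncryptor;
import org.jasypt.encryption.pbe.PooledPBEStringEncryptor;
import org.jasypt.encryption.pbe.config.SimpleStringPBEConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JasyptEncryptorConfig {

    // custom encryptor picked up by jasypt-spring-boot instead of the default one
    @Bean("jasyptStringEncryptor")
    public StringEncryptor stringEncryptor() {
        SimpleStringPBEConfig config = new SimpleStringPBEConfig();
        config.setPassword("salting"); // better: read it from an environment variable
        config.setAlgorithm("PBEWITHHMACSHA512ANDAES_256");
        config.setPoolSize("1");
        config.setIvGeneratorClassName("org.jasypt.iv.RandomIvGenerator");

        PooledPBEStringEncryptor encryptor = new PooledPBEStringEncryptor();
        encryptor.setConfig(config);
        return encryptor;
    }
}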

Step 2. Select a secret key to be used for encryption and decryption

This secret key will be used to encrypt and decrypt the data. You can think of it as a safeguard to further improve security. What it does is add a random string to the beginning or end of the input text prior to hashing or encrypting the value. This secret key goes in the property file, application.yml/application.properties, in the Spring Boot project itself.

jasypt:
     encryptor:
           password: salting

Step 3. Generate Encrypted Key

Jasypt supplies a lot of command-line (CLI) tools. In order to use them, you should download the distribution zip file (named jasypt-$VERSION-dist.zip) and unzip it. There will be an appropriate .bat or .sh file for the needed operation: digest/encrypt/decrypt.

Example for encryption

$ ./encrypt.sh input="This is my message to be encrypted" password=MYPASSWORD

Example for decryption

$ ./decrypt.sh input="8fsdfdsafdsa9ffsad0fdsa0fdsfdsa3231x" password=MYPASSWORD

Another way of using Jasypt for encrypting your data is by using some online tools that provide Jasypt operations.

The simplest and most convenient way is the Maven plugin. Not only can you use it for a single value, it also offers the capability to encrypt all sensitive data with a single command, meaning all placeholders will be updated in one step.

<build>
  <plugins>
    <plugin>
      <groupId>com.github.ulisesbocchio</groupId>
      <artifactId>jasypt-maven-plugin</artifactId>
      <version>3.0.3</version>
    </plugin>
  </plugins>
</build>

This jasypt-maven-plugin, by default, will check for configuration files under ./src/main/resources, or the regular Spring Boot resource folders. Environment variables can also be used to supply the master password: instead of exposing the password “salting” inside the project itself, an environment variable such as ENCRYPTION_MASTER_PASSWORD can be created and then referenced in the config file as password: ${ENCRYPTION_MASTER_PASSWORD}.

Example for encrypting a single value from a terminal.

This example uses the encryption password as an argument. Important: the terminal session needs to be opened in the directory where the pom.xml file with the Maven plugin is located.

mvn jasypt:encrypt-value -Djasypt.encryptor.password=salting -Djasypt.plugin.value="secureDataWeNeedToEncrypt"

Example for encrypting all strings within projects property file.

The last argument is optional since Jasypt will scan that location anyway. What is important is that sensitive placeholders in the application property file MUST be wrapped in DEC() parentheses, for example Activedirectory: password: DEC(supersecret) and OracleDB: password: DEC(alsosupersecret).

mvn jasypt:encrypt -Djasypt.encryptor.password=salting -Djasypt.plugin.path="file:src/main/resources/application.yml"

If the previous command completed successfully, all sensitive data should be updated with its encrypted value. The updated properties should look something like Activedirectory: password: ENC(sFJDfdsfjjA8saT7YC65bsf71d0) and OracleDB: password: ENC(34jjfsdfds+fds/fsd7Hs).

Step 4. Add the encrypted key in the config file

If you have been using the last approach, then the application.properties/application.yaml files have already been updated with the newly encrypted values. All sensitive data wrapped with DEC() is now encrypted, and all other strings in the configuration remain unchanged. If one of the other approaches was chosen, going one placeholder at a time or using the CLI, then we need to update the configuration file entries one by one. The properties still need to be wrapped in ENC() parentheses, since the output of the CLI is only the encrypted value.
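
To illustrate how such an encrypted entry is consumed at runtime, here is a minimal sketch; the property name activedirectory.password is just a hypothetical example matching the placeholders above:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class ActiveDirectoryClient {

    // Jasypt decrypts the ENC(...) value transparently at startup,
    // so the field receives the plain-text password.
    @Value("${activedirectory.password}")
    private String password;
}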

For the reverse process, it’s vice-versa, the first argument of the statement is: decrypt and all placeholders must be wrapped in ENC() parenthesis before execution.

Step 5. Run the application

That's it. Your Spring Boot project will automatically decrypt all sensitive data when you start the application; no additional configuration is needed. Let me know in the comments section how your experience was. Was it smooth, or are there some ongoing issues?

Deploy microservice on Kubernetes, step by step

Reading Time: 11 minutes

In this tutorial, I will try to explain step by step how you can set up Kubernetes, deploy your microservice on it, and check the result via the Kubernetes dashboard. All other things will be kept “as simple as possible”. As a cloud platform, Gcloud will be used. We will cover the following aspects of the problem:

  1. Create microservice to be deployed
  2. Place the application in your docker container
  3. What is Kubernetes and how to install it?
  4. Create a new Kubernetes project
  5. Create new Cluster
  6. Allow access from your local machine
  7. Create service account
  8. Activate service account
  9. Connect to cluster
  10. Gcloud initialization
  11. Generate access token
  12. Deploy and start Kubernetes dashboard
  13. Deploy microservice

Step 1: Create microservice to be deployed

Traditionally, in the programming world, everything starts with “Hello World”. So, as mentioned previously, to keep things simple, create a microservice that returns just “Hello World”. You can use https://start.spring.io/ for this goal. Create HelloController like this:

package com.example.demojooq.controllers;


import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/v1")
public class HelloController {

    @GetMapping("/say-hello")
    public String sayHello() {
        return "Hello world";
    }
}

Step 2: Place the application in your docker container

We have a microservice; now we need to put it in a Docker container and upload it to Kubernetes. From that point, Kubernetes will orchestrate the container according to your settings. Let's create the first image from the microservice. The build file, as you might guess, is called Dockerfile (without any extension), and its content is:

Dockerfile

FROM adoptopenjdk/openjdk11:jre-11.0.8_10-debianslim
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","app.jar"]

The next step is to create the docker-compose file, which will call the Dockerfile to build the image. You can do that manually, but the best way is from the docker-compose file, as you keep a permanent record of the setup. This is a .yaml file (shown below).

docker-compose.yaml

version: "3"
services:
  hello-world:
    build: .
    ports:
      - "8099:8080"

After starting Docker, go to the folder where the docker-compose file is located and execute the command “docker-compose up”. The expectation is to reach this microservice on port 8099. If everything is OK, your Docker will show something like this:

Check the microservice's Docker installation with Postman by calling http://localhost:8099/api/v1/say-hello. In the response, you get “Hello world”.

Step 3: What is Kubernetes and how to install it?

What is Kubernetes?

Kubernetes is an open-source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications. What happens when you use Docker and your container fails? Probably the first thing to do is to restart your container. You do that manually (if you don't have Kubernetes). Here comes Kubernetes: it observes that the container is down and starts a new container automatically. This is just a basic use case. Please read more on the internet, there is a bunch of information about this.

How to install Kubernetes?

Ok, until now you are sure that Kubernetes is needed, but where to find it, what are the costs, and how to install it? First of all, try “download Kubernetes” on Google… Pick the site https://kubernetes.io/docs/tasks/tools/… Options for Windows, Mac, Linux appear… Different installations like kind, minikube, kubeadm… So, is it worth spending so much time setting up Kubernetes properly? You do not have to ask me, I agree with you, it is too much time. Fortunately, we can take a shortcut and skip all that: go to Gcloud, where Kubernetes is offered as a service, and just use it. Somebody else takes care of the setup, so we can focus on the business logic in our microservice and use the out-of-the-box Kubernetes installation from Gcloud. Sounds good, doesn't it? The last and most important question: money. Is it for free? No, it is not. You have to pay for the Gcloud services and here is the price list: https://cloud.google.com/kubernetes-engine/pricing. But for ordinary people like you and me, Gcloud offers a free account for 3 months with up to $300 of credit, which seems fair. It is enough time to learn about deploying microservices on Kubernetes. For any professional use in the future, the company should stay behind this. Here is the link where you can create your free cloud account: https://cloud.google.com/. One more thing: during the creation of a free account, Google will ask for your bank account, to be able to charge you automatically. But do not worry, you are safe for the first three months and below $300, and you will be asked for permission before any charging… So, until now my personal experience is positive, as Google is keeping the promise made when you create an account. But the final decision is up to you.

Step 4: Create new Kubernetes project

Open up your Google account, sign in and go to the console.

Create a new project from the main dashboard; the name of the new project is “hello-world”. From now on, this is your active project.

Step 5: Create new cluster

Create a new cluster (name it cluster2). Accept the default values for the other fields.

Step 6: Allow access from your local machine

Now, we must allow access from our local machine to Kubernetes, via kubectl. For that purpose, we need to follow these steps:

  1. Click on cluster2
  2. Find your local IP address and add it here (in CIDR notation) under Edit control plane authorized networks

Step 7: Create service account

Give the new account the role “Owner”. Accept the default values for the other fields. After the service account is created, you should have something like this:

Generate keys for this service account with key type JSON. When the key is downloaded, it has some random name like hello-world-315318-ab0c74d58a70.json. Keep this file in a safe place, we will need it later.

Now, install Google Cloud SDK Shell on your machine according to your OS. Let’s do the configuration so kubectl can reach cluster2. Copy the file hello-world-315318-ab0c74d58a70.json and put it in the CLOUD SDK folder. For the Windows environment, it looks like this:

Step 8: Activate service account

The first thing to do is to activate the service account with the command: gcloud auth activate-service-account hello-world-service-account@hello-world-315318.iam.gserviceaccount.com --key-file=hello-world-315318-ab0c74d58a70.json

Step 9: Connect to cluster

Now go to cluster2 again and find the connection string to connect to the new cluster

Execute this connection string in Google Cloud Shell: gcloud container clusters get-credentials cluster2 --zone us-central1-c --project hello-world-315318

Step 10: Gcloud initialization

The next command to execute is gcloud init, to initialize connection with the new project. Here is the complete code on how to do that from the Gcloud Shell:

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>gcloud init
Welcome! This command will take you through the configuration of gcloud.

Settings from your current configuration [dev] are:
accessibility:
  screen_reader: 'False'
compute:
  region: europe-west3
  zone: europe-west3-a
core:
  account: hello-world-service-account@hello-world-315318.iam.gserviceaccount.com
  disable_usage_reporting: 'True'
  project: dops-containers

Pick configuration to use:
 [1] Re-initialize this configuration [dev] with new settings
 [2] Create a new configuration
 [3] Switch to and re-initialize existing configuration: [database-connection]
 [4] Switch to and re-initialize existing configuration: [default]
Please enter your numeric choice:  2

Enter configuration name. Names start with a lower case letter and
contain only lower case letters a-z, digits 0-9, and hyphens '-':  hello-world
Your current configuration has been set to: [hello-world]

You can skip diagnostics next time by using the following flag:
  gcloud init --skip-diagnostics

Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).

Choose the account you would like to use to perform operations for
this configuration:
 [1] cicd-worker@devops-platform-n47.iam.gserviceaccount.com
 [2] d.trifunov74@gmail.com
 [3] dimche.trifunov@north-47.com
 [4] dtrifunov@lunar-sled-314616.iam.gserviceaccount.com
 [5] hello-world-service-account@hello-world-315318.iam.gserviceaccount.com
 [6] service-account-demo-dime@blissful-epoch-305214.iam.gserviceaccount.com
 [7] Log in with a new account
Please enter your numeric choice:  5

You are logged in as: [hello-world-service-account@hello-world-315318.iam.gserviceaccount.com].

API [cloudresourcemanager.googleapis.com] not enabled on project
[580325979968]. Would you like to enable and retry (this will take a
few minutes)? (y/N)?  y

Enabling service [cloudresourcemanager.googleapis.com] on project [580325979968]...
Operation "operations/acf.p2-580325979968-f1bf2515-deea-49d5-ae35-a0adfef9973e" finished successfully.
Pick cloud project to use:
 [1] hello-world-315318
 [2] Create a new project
Please enter numeric choice or text value (must exactly match list
item):  1

Your current project has been set to: [hello-world-315318].

Do you want to configure a default Compute Region and Zone? (Y/n)?  n

Error creating a default .boto configuration file. Please run [gsutil config -n] if you would like to create this file.
Your Google Cloud SDK is configured and ready to use!

* Commands that require authentication will use hello-world-service-account@hello-world-315318.iam.gserviceaccount.com by default
* Commands will reference project `hello-world-315318` by default
Run `gcloud help config` to learn how to change individual settings

This gcloud configuration is called [hello-world]. You can create additional configurations if you work with multiple accounts and/or projects.
Run `gcloud topic configurations` to learn more.

Some things to try next:

* Run `gcloud --help` to see the Cloud Platform services you can interact with. And run `gcloud help COMMAND` to get help on any gcloud command.
* Run `gcloud topic --help` to learn about advanced features of the SDK like arg files and output formatting

Step 11: Generate access token

Type kubectl get namespace; the access token is generated in the .kube folder (in your home folder), in the config file:

If you open this config file, you will find your access token. You will need this later.

Step 12: Deploy and start Kubernetes dashboard

Now, deploy Kubernetes dashboard with the next command: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>kubectl proxy
Starting to serve on 127.0.0.1:8001

Start the dashboard with the kubectl proxy command. Now open the dashboard from the link: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default

In front of you, this screen will appear:

Now, you need the token from the config file that we spoke about a moment ago. Open the config file with Notepad (on Windows), find your access token, copy it from there and paste it into the Enter token* field. Be careful when copying the token from the config file as there might be several tokens. You must choose yours (image below).

Finally, the stage is prepared to deploy the microservice.

Step 13: Deploy microservice

  1. Build the docker image from the Dockerfile with the command: docker build -t docker2222/dimac:latest . (note the trailing dot, which is the build context; docker2222/dimac is my public docker repository)
  2. Push the image to Docker Hub with the command: docker image push docker2222/dimac:latest
  3. Execute kubectl apply -f k8s.yaml, where k8s.yaml is the file below:

---

apiVersion: v1
kind: Namespace
metadata:
  name: hello

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: hello
  annotations:
    buildNumber: "1.0"
spec:
  selector:
    matchLabels:
      app: hello-world
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
      annotations:
        buildNumber: "1.0"
    spec:
      containers:
        - name: hello-world
          image: docker2222/dimac:latest
          readinessProbe:
            httpGet:
              path: "/actuator/health/readiness"
              port: 8080
            initialDelaySeconds: 5
          ports:
            - containerPort: 8080
          env:
            - name: APPLICATION_VERSION
              value: "1.0"
---


apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: hello
spec:
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---

Last but not least, open the Kubernetes dashboard. If everything is OK, you should see your service.

ERC20 Token Smart Contract for Ethereum Blockchain

Reading Time: 5 minutes

Since the inception of blockchain technology, Bitcoin, Ethereum and other cryptocurrencies have been hot topics buzzing around the world. Many startups based on blockchain technologies are using cryptocurrencies, in other words crypto tokens, for the utilization of their products. These crypto tokens can be deployed on many blockchains like Ethereum, Cardano, Binance, Polkadot, etc. Which blockchain such a token should be implemented on is another topic of discussion, but as Ethereum was the first market mover, this blog post explains how you can create such a token on the Ethereum blockchain.

Before creating an Ethereum-based token (ERC20 token), let's first understand the basics of smart contracts and their native programming language, Solidity.

Smart Contract

A smart contract is simply a set of rules that contains the business logic or a protocol according to which all the transactions on a blockchain should happen. The general purpose of a smart contract is to satisfy common contractual conditions like creating its own token, performing arbitrary computations, sending and receiving tokens, and storing the states of transactions.

Solidity

Solidity is an object-oriented, high-level smart-contract programming language, which is developed on top of the Ethereum Virtual Machine (EVM). The Solidity compiler converts smart-contract code into EVM bytecode, which is sent to the Ethereum network as a deployment transaction. It is best to have a good understanding of the Solidity programming language to efficiently write an Ethereum smart contract and build an application on top of it.

Coding example of smart-contract

This section contains the example of a smart-contract code written using the Solidity programming language.

Prerequisite

Integrated development environment (IDE)

We will use Remix as the IDE. It is a web-based IDE with built-in static analysis and a testnet EVM. Remix provides the possibility to compile the contract and deploy it to an Ethereum testnet with MetaMask. Here is a good blog post about it.

There are also other web-based IDEs available, like EthFiddle. For more information related to IDEs, please visit here.

Programming Language

Solidity

ERC20 Token Info

  • Symbol – N47
  • Name – N47Token
  • Decimals – 0
  • Total Supply – 1000000

Smart-contract Code

// SPDX-License-Identifier: unlicensed
pragma solidity 0.8.4;
// ----------------------------------------------------------------------------
// Safe maths
// ----------------------------------------------------------------------------
contract SafeMath {
    function safeAdd(uint a, uint b) public pure returns (uint c) {
        c = a + b;
        require(c >= a);
    }
    function safeSub(uint a, uint b) public pure returns (uint c) {
        require(b <= a);
        c = a - b;
    }
}
// ----------------------------------------------------------------------------
// ERC Token Standard #20 Interface
// https://github.com/ethereum/EIPs/blob/master/EIPS/eip-20.md
// ----------------------------------------------------------------------------
abstract contract ERC20Interface {
    function totalSupply() virtual public view returns (uint);
    function balanceOf(address tokenOwner) virtual public view returns (uint balance);
    function allowance(address tokenOwner, address spender) virtual public view returns (uint remaining);
    function transfer(address to, uint tokens) virtual public returns (bool success);
    function approve(address spender, uint tokens) virtual public returns (bool success);
    function transferFrom(address from, address to, uint tokens) virtual public returns (bool success);
    event Transfer(address indexed from, address indexed to, uint tokens);
    event Approval(address indexed tokenOwner, address indexed spender, uint tokens);
}
// ----------------------------------------------------------------------------
// ERC20 Token, with the addition of symbol, name and decimals
// assisted token transfers
// ----------------------------------------------------------------------------
contract N47Token is ERC20Interface, SafeMath {
    string public symbol;
    string public  name;
    uint8 public decimals;
    uint public _totalSupply;
    mapping(address => uint) balances;
    mapping(address => mapping(address => uint)) allowed;
    // ------------------------------------------------------------------------
    // Constructor
    // ------------------------------------------------------------------------
    constructor() {
        symbol = "N47";
        name = "N47Token";
        decimals = 0;
        _totalSupply = 1000000;
        balances[msg.sender] = _totalSupply;
        emit Transfer(address(0), msg.sender, _totalSupply);
    }
    // ------------------------------------------------------------------------
    // Total supply
    // ------------------------------------------------------------------------
    function totalSupply() public override view returns (uint) {
        return _totalSupply - balances[address(0)];
    }
    // ------------------------------------------------------------------------
    // Get the token balance for account tokenOwner
    // ------------------------------------------------------------------------
    function balanceOf(address tokenOwner) public override view returns (uint balance) {
        return balances[tokenOwner];
    }
    // ------------------------------------------------------------------------
    // Transfer the balance from token owner's account to receiver account
    // - Owner's account must have sufficient balance to transfer
    // - 0 value transfers are allowed
    // ------------------------------------------------------------------------
    function transfer(address receiver, uint tokens) public override returns (bool success) {
        balances[msg.sender] = safeSub(balances[msg.sender], tokens);
        balances[receiver] = safeAdd(balances[receiver], tokens);
        emit Transfer(msg.sender, receiver, tokens);
        return true;
    }
    // ------------------------------------------------------------------------
    // Token owner can approve for spender to transferFrom(...) tokens
    // from the token owner's account
    //
    // https://github.com/ethereum/EIPs/blob/master/EIPS/eip-20.md
    // recommends that there are no checks for the approval double-spend attack
    // as this should be implemented in user interfaces 
    // ------------------------------------------------------------------------
    function approve(address spender, uint tokens) public override returns (bool success) {
        allowed[msg.sender][spender] = tokens;
        emit Approval(msg.sender, spender, tokens);
        return true;
    }
    // ------------------------------------------------------------------------
    // Transfer tokens from sender account to receiver account
    // 
    // The calling account must already have sufficient tokens approve(...)-d
    // for spending from sender account and
    // - From account must have sufficient balance to transfer
    // - Spender must have sufficient allowance to transfer
    // - 0 value transfers are allowed
    // ------------------------------------------------------------------------
    function transferFrom(address sender, address receiver, uint tokens) public override returns (bool success) {
        balances[sender] = safeSub(balances[sender], tokens);
        allowed[sender][msg.sender] = safeSub(allowed[sender][msg.sender], tokens);
        balances[receiver] = safeAdd(balances[receiver], tokens);
        emit Transfer(sender, receiver, tokens);
        return true;
    }
    // ------------------------------------------------------------------------
    // Returns the amount of tokens approved by the owner that can be
    // transferred to the spender's account
    // ------------------------------------------------------------------------
    function allowance(address tokenOwner, address spender) public override view returns (uint remaining) {
        return allowed[tokenOwner][spender];
    }
}

Using the above code, the smart contract can be deployed on the Ethereum Mainnet or a Testnet. Deploying a smart contract is technically a transaction that needs to pay gas (fees) in ETH (the native token of the Ethereum network), in the same way gas is paid for a simple ETH transfer. However, gas costs for contract deployment are far higher.

** To create another token, simply change the values set in the constructor of the smart contract (symbol, name, decimals and _totalSupply).

Summary

Blockchain technology is way deeper than tokens and smart contracts. There are many technical aspects like consensus, blocks, wallets, transactions, decentralization, mining, etc. The goal of this article was just to provide an overview of smart-contract creation. Feel free to write down your valuable comments.

Infinite UITableView Scroll – Special Case

Reading Time: 6 minutes

When we are working with loading large data sets, if we load all the available items and try to display everything at once, it will cause a big delay and the app will not work smoothly. The solution in cases like this is a combination of a back-end and an in-app solution. We should adjust the BE to work with paginated responses: the BE should give us only chunks of data, and we should control the size of these chunks from the app by sending the batch size. I researched this topic on the net and there are solutions, but all of them go in one direction: using pagination, but every time starting from page 1 and loading the next pages after that. One of the best things was discovering the iOS Prefetching Protocol, which I had never used before. This protocol is a piece of this solution.

Prerequisites

From the BE we need at least two APIs:

The first one is an API that will return all the necessary data: the starting page, optionally which element from the starting page should be focused, and the total number of elements in the database. Why is the total number of elements needed? Because if we don't know it, we won't know how many rows we should present. Things will become clearer when we start coding. The suggested JSON response should look like this:

{
	"total_elements": 480,
	"data": [
		{
			"id": 1,
			"name": "Test1"
		},
		{
			"id": 3,
			"name": "Test1"
		}
		//elements from 60..89

	],
	"first_page": 3,
	"per_page": 30
}

The second API is the one that will return the data for the page number we send as an argument. The example JSON response is provided below:

{
	"data": [
		{
			"id": 1,
			"name": "Test1"
		},
		{
			"id": 3,
			"name": "Test1"
		}
		//elements from 1..29

	],
	"page_number": 1,
	"per_page": 30
}

Solution explained

I read a lot of articles about infinite scrolling UITableViews, but none of them solves my special case – an option to start in the middle and, optionally, to focus on a particular row from the table inside that page. Here is how I solved this issue:

First, I'm defining a few variables in the code: some static integer values – the batch size (number of elements per page) – and two values that will be fetched from the BE – the start page and the total number of elements:

In my example, I will work with a list of integer values instead of using some objects, but this could be easily adjusted with any kind of objects/models.

Also, I will use a list of loaded pages to keep track of the pages already fetched from the BE. There is also one useful boolean flag, “isNewDataLoading”, which will prevent us from calling the BE if the previous call is not finished.

    let batchSize: Int = 30
    let totalElements: Int = 480
    var startPage: Int = 5
    var elements: [Int?] = []
    var isNewDataLoading: Bool = false
    var loadedPages = [Int]()

The first call is the initial load method. Here, I will call the BE API that will return all the necessary data to pre-populate the code variables:

    func initialLoad() {
                
        for _ in 0..<totalElements {
            elements.append(.none)
        }
        
        loadedPages.append(startPage)
        
        for value in startPage*batchSize..<(startPage+1)*batchSize {
            elements[value] = value
        }
        
        mainTableView.reloadData()
        let toIndex = startPage*batchSize + ((startPage+1)*batchSize - startPage*batchSize)/2
        mainTableView.scrollToRow(at: IndexPath(row: toIndex, section: 0), at: .middle, animated: true)
    }

After the initial loading is done we have to explain the UITableView data source methods.

The “cellForRow” method has simple logic. If we haven't fetched the value for one of the cells, the cell will show a spinner (UIActivityIndicator); if the value for the cell is already loaded, we show it in a text label (UILabel):

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath) as! TestTableViewCell
        if isLoadingCell(for: indexPath) {
            cell.configure(with: .none)
        } else {
            cell.configure(with: elements[indexPath.row])
        }
        return cell
    }

The Magic

Historically all of the older solutions use the UIScrollView delegate methods and inspect the current y-offset of the table. If the user reaches the maximum y-offset the API is called with the next page.

I did some research on the topic and realized that the Prefetching Protocol should be useful in this situation. Some of the solutions on the net use the prefetch protocol, but it needs some modifications if we want to make our code work with different starting pages. Let's see how it looks in the code:

    func tableView(_ tableView: UITableView, prefetchRowsAt indexPaths: [IndexPath]) {
        if indexPaths.contains(where: isLoadingCell) {
            
            let index = indexPaths[0]
            let page = index.row/batchSize
            if !loadedPages.contains(page) {
                //fetch new
                DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 2.0) {
                    self.getNewData(page: page)
                }
            }
        }
    }

Since iOS 10 Apple introduced the API for prefetching data in UITableView and UICollectionView. More details about the prefetching protocol can be found on the Apple Developer site: https://developer.apple.com/documentation/uikit/uitableviewdatasourceprefetching

A little explanation of the code above: if the IndexPath of the cell that should be prefetched points to an element that is not yet downloaded, we use the row value of the IndexPath to determine the page to which that index path belongs. If this page is not downloaded, we will download it. The code for downloading a new page is shown below:

    func getNewData(page: Int) {
                        
        if isNewDataLoading {
            return
        }
        
        isNewDataLoading = true
        
        var temp: [Int] = [Int]()
        for num in page*batchSize..<(page+1)*batchSize {
            elements[num] = num
            temp.append(num)
        }
        
        loadedPages.append(page)
        
        let indexes: [IndexPath] = temp.map {
            return IndexPath(row: $0, section: 0)
        }
        
        mainTableView.reloadRows(at: indexes, with: .none)
        isNewDataLoading = false
    }

What is important in this method? The most important thing is to add the page to the list of already loaded pages, and the second is to reload only the rows in the table that belong to the actual page.

The full code can be downloaded by clicking on the “Download” button below:

Feel free to add your comments or suggestion.

COVID-19's effects on the future of conferences

Reading Time: 3 minutes

First, I would like to talk about a personal experience that I faced last year. I was supposed to attend a conference in Zurich in March called Voxxed Days, due to the fact that the company I work for was a sponsor. But by the time spring rolled around, the organizers postponed the conference to September 2020. Later, we got an email that the conference was cancelled due to the pandemic. COVID-19 has provided an opportunity to rethink virtual conferences. I feel now that virtual meetings have become the norm, and more meetings are moving online – a trend likely to continue even after the pandemic fades. These virtual meetings or conferences have some advantages and disadvantages that we are going to talk about.

Disadvantages

Online meetings might lack many of the benefits of an in-person conference: conversations over dinner; the after-conference party; face-to-face networking; fresh perspectives that can come from simply leaving one’s home ground.

We all know that networking plays a big role at physical conferences, as does the ability to speak to the speakers and attendees. Although some organisers are trying to replicate as much as they can through online events, that intangible element of being energised around others is much harder to capture when people aren't physically gathered.

Online benefits

Meeting online, whether it's for a conference, study section, or worldwide lab gathering, works better than expected, and it's more convenient and affordable, partly because the in-person extras – dinners, drinks – which affect the conference ticket price, are missing. Another benefit of meeting virtually is how many more people can access the conferences. High travel costs and border restrictions are eliminated from the equation when it comes to virtual meetings or conferences. Not everyone can travel, and certainly not everyone should; many of us, particularly in the Global North, need to travel much less.

N47 webinars

Being and working together, particularly in smaller workshop settings, is an invaluable way to generate new ideas and connections in many fields and professional settings. For this purpose, our company N47 started to organize webinars. The first one was held on the 10th of December 2020.

AWS Landing Zone – Best practices for multi-account AWS environment

The second one will be held on the 11th of March 2021. Have a look at these webinars and stay tuned for new ones.

Infrastructure as code with Terraform, Bitbucket and Azure – How N47 deploys their current projects to a productive Azure environment


Conclusion

In the end, I would like to share some thoughts regarding this topic. I think these changes that were made due to COVID will stay with us. However, I do not expect online conferences to run perfectly, especially when the conversion to online was unexpected and hastily planned. The future of conferences and meetings has evolved, becoming more accessible and more inclusive. But what about physical conferences, when they return? For me, I hope that physical conferences return as fast as possible, because of the many things mentioned in the disadvantages of virtual conferences.

Spring Boot Internationalization using Resource Bundles

Reading Time: 3 minutes

Implementing Spring Boot internationalization can be easily achieved using Resource Bundles. I will show you a code example of how you can implement it in your projects.

Let’s create a simple Spring Boot application from start.spring.io.

The first step is to create a resource bundle (a set of properties files with the same base name and language suffix) in the resources package.

I will create properties files with the base name texts and only one key, greetings:

  • texts_en.properties
  • texts_de.properties
  • texts_it.properties
  • texts_fr.properties

In all of those files I will add the value “Hello World !!!” and its translations. I was using Google Translate, so please do not judge me if something is wrong :).

After that, I will add some simple YML configuration in the application.yml file, which I will use later.

server:
  port: 7000

application:
  translation:
    properties:
      baseName: texts
      defaultLocale: de

Now, let's create the configuration. I will create two beans: LocaleResolver and ResourceBundleMessageSource. Let's explain both of them.

With the LocaleResolver interface, we are defining which implementation we are going to use. For this example, I chose the AcceptHeaderLocaleResolver implementation. It means that the language value must be provided via the Accept-Language header.

@Bean
    public LocaleResolver localeResolver() {
        AcceptHeaderLocaleResolver acceptHeaderLocaleResolver = new AcceptHeaderLocaleResolver();
        acceptHeaderLocaleResolver.setDefaultLocale(new Locale(defaultLocale));
        return acceptHeaderLocaleResolver;
    }

With ResourceBundleMessageSource we are defining which bundle we are going to use in the Translator component (I will create it later 🙂 ).

@Bean(name = "textsResourceBundleMessageSource")
    public ResourceBundleMessageSource messageSource() {
        ResourceBundleMessageSource rs = new ResourceBundleMessageSource();
        rs.setBasename(propertiesBasename);
        rs.setDefaultEncoding("UTF-8");
        rs.setUseCodeAsDefaultMessage(true);
        return rs;
    }

Now, let's create the Translator component. In this component, I will create only one method, toLocale. In that method, I will fetch the Locale from the LocaleContextHolder and take the translation from the resource bundle.

@Component
public class Translator {

    private static ResourceBundleMessageSource messageSource;

    public Translator(@Qualifier("textsResourceBundleMessageSource") ResourceBundleMessageSource messageSource) {
        this.messageSource = messageSource;
    }

    public static String toLocale(String code) {
        Locale locale = LocaleContextHolder.getLocale();
        return messageSource.getMessage(code, null, locale);
    }
}

That's all the configuration we need for this feature. Now, let's create the Controller, Service and TranslatorCode util classes so we can test the APIs.

@RestController
@RequestMapping("/index")
public class IndexController {

    private final TranslationService translationService;

    public IndexController(TranslationService translationService) {
        this.translationService = translationService;
    }

    @GetMapping("/translate")
    public ResponseEntity<String> getTranslation() {
        String translation = translationService.translate();
        return ResponseEntity.ok(translation);
    }
}
@Service
public class TranslationService {

    // assumes static imports of Translator.toLocale and TranslatorCode.GREETINGS
    public String translate() {
        return toLocale(GREETINGS);
    }
}
public class TranslatorCode {
    public static final String GREETINGS = "greetings";
}

Now, you can start the application. After the application is started successfully, you can start making API calls.

Here is an example of an API call that you can use as a cURL command.

curl --location --request GET "localhost:7000/index/translate" --header "Accept-Language: en"

These are some of the responses from the calls I made:

You can change the default behaviour, add some protection, or add multiple resource bundles – you are not limited when using this feature.

Download the source code

This project is available on our BitBucket repository. Feel free to fix any mistakes and to leave a comment here if you have any questions or feedback.

https://bitbucket.org/n47/spring-boot-internationalization/src/master/

The practical guide – Part 3: Clean Architecture

Reading Time: 7 minutes

The practical guide – Part 1: Refactor android application to follow the MVP design pattern
The practical guide – Part 2: MVP -> MVVM

We know what design patterns are and how to implement them. But, if we want to have a more robust, scalable, maintainable, and testable application we have to do more. Today we will learn how to implement Clean Architecture proposed by Robert C. Martin (Uncle Bob).

Why is Architecture important?

Architecture is important for managing the complexity of the project. So, for a small project, we may not need one, but for big ones, it is a lifesaver. It makes the project easier to maintain, scale and test. It also makes the project more organized, so everyone can understand what the project is about just by looking at the classes.

Clean Architecture Introduction

Design patterns told us how to separate the presentation from the data manipulation. Clean architecture (like any other architecture) goes a little deeper and shows us how we should separate the data manipulation itself. Clean architecture has only a few layers, and each layer has its own responsibilities.

All credit for the image goes to Uncle Bob.

You have probably seen this image. It tells us how the layers are organized. As we can see, there are outer layers and inner layers. That is important because there are a few rules that we have to follow:

  • Every layer can communicate only with the inner layers. So, the layers don’t know anything about the outer layers. The dependencies are provided by outer layers with Dependency Injection (hopefully I will make a post about this).
  • The most inner layer is the most abstract, and the most outer layer is the most concrete. This means that inner layers should contain business logic and outer layers should contain the implementation.

You may have noticed that I mentioned a few layers and not an exact number. That is because the clean architecture doesn’t define an exact number of layers. You can adapt the number of layers to your needs.

The flow

  • View – The responsibility of the view stays the same as specified in the design patterns. Its only responsibilities are to display the data and react to user actions.
  • View Model/Presenter – Also specified in the design patterns, their responsibility is to pass the data between the view and the model. But, instead of making the network/database calls, they use the Use Cases for it. These classes shouldn’t know where the data comes from or where it goes; they just pass the data between the Use Cases and the Views.
  • Use Case (or Interactor) – These are the actions that the users can trigger. A use case holds the data that the action needs to be completed and calls the repository to do the action. It can decide on which thread the action should be done, and on which thread the result should be posted.
  • Repository – The responsibility of the repository is to decide which data source should handle the data. For every Use Case, there should be a method in the repository.
  • Data Source – There may be a few data sources per application, like Network, Database, Cache etc. They contain the actual implementation of the data access.

Between the layers, we can have Mapper classes, since the data can differ between layers. For example, the data we want to store in the database may differ from the data we get from the network.

The implementation

Let’s start with the data sources. We will create two data source interfaces: RemoteDataSource and LocalDataSource.

interface RemoteDataSource {
  fun getQuotes(): Call<List<Quote>>
}

class RemoteDataSourceImplementation(private val api: QuotesApi) : RemoteDataSource {
  override fun getQuotes(): Call<List<Quote>> = api.quotes
}
interface LocalDataSource {
  fun insertQuotes(quotes: List<Quote>)
  fun getQuotes(): List<Quote>
}

class LocalDataSourceImplementation(private val quoteDao: QuoteDao) : LocalDataSource {
  override fun insertQuotes(quotes: List<Quote>) {
    quoteDao.insertQuotes(quotes)
  }

  override fun getQuotes(): List<Quote> = quoteDao.allQuotes
}

As you can see, we added only one method in RemoteDataSource, just for getting the quotes, but there are two methods in LocalDataSource since we have to store the quotes in the database after getting them from remote. The very important thing to notice here is that we are not creating the DAO and API objects ourselves, but asking for them to be provided in the constructor (Dependency Injection). This will enable us to easily switch to different network or database libraries without having to change anything here.

Let’s continue with the repository. We said that its responsibility is to decide where the data will come from. In our case, we want to return the quotes from the network, but if something fails we want to display the quotes from the database.

interface QuotesRepository {
  fun getQuotes(): List<Quote>
}

class QuotesRepositoryImplementation(
  private val localDataSource: LocalDataSource,
  private val remoteDataSource: RemoteDataSource
) : QuotesRepository {
  override fun getQuotes(): List<Quote> {
    return try {
      val response = remoteDataSource.getQuotes().execute()
      if (response.isSuccessful) {
        val quotes = response.body() ?: return localDataSource.getQuotes()
        localDataSource.insertQuotes(quotes)
        return quotes
      }
      localDataSource.getQuotes()
    } catch (e: Exception) {
      localDataSource.getQuotes()
    }
  }
}

The logic is deliberately simple, just for the sake of the example. It is also very important that we are depending on the interfaces, not on the actual implementations.

Let’s move to the use case. Here we will use the repository and we will switch between threads. We will use Kotlin coroutines, but you can use anything. If you are working with RxJava, here you will specify the Schedulers.

class GetQuotesUseCase(private val quotesRepository: QuotesRepository) {
  fun getQuotes(onResult: (List<Quote>) -> Unit = {}) {
    GlobalScope.launch(Dispatchers.IO) {
      val response = quotesRepository.getQuotes()
      GlobalScope.launch(Dispatchers.Main.immediate) {
        onResult(response)
      }
    }
  }
}

Usually, there is a base UseCase class, where you abstract the logic for threading. For simplicity, I skipped the error handling.

Last, we can clean up the ViewModel. I also converted it to Kotlin, and now it looks like this:

class MainViewModel : ViewModel() {
  lateinit var getQuotesUseCase: GetQuotesUseCase

  var quotesLiveData = MutableLiveData<List<Quote>>()

  fun getAllQuotes() {
    getQuotesUseCase.getQuotes { quotes: List<Quote> ->
      quotesLiveData.postValue(quotes)
    }
  }
}

I won’t explain anything here, I will just let you admire. Just kidding! You must be asking how we create getQuotesUseCase. But that is a story for another time. For now, I will create a class DependencyProvider, and I will provide everything there. You don’t have to worry about this for now.

And that’s it. Now we have an application that follows Clean Architecture guidelines. Here is the link for the project.

Final Notes

Now that our project follows Clean Architecture guidelines we can do many things. We can easily change frameworks and libraries with as few changes as possible (just changes in the DependencyProvider and everything will work). We can organize the application better with many repositories and many data sources. The project will be easy to understand, just by looking at the use cases. Testing of the application will be very easy (hopefully I will make a post about that, too).

I hope that this post will help you understand how Clean Architecture works in practice. If you have any questions or need any help don’t hesitate to ask. Happy Coding!

Create an admin panel with Node.js and AdminBro

Reading Time: 4 minutes

What’s great about Node.js is the huge ecosystem of useful packages. For example, AdminBro is a package for creating admin interfaces that can be plugged into your application.

You provide database models or schemas (like blog posts, user comments, etc.) and AdminBro generates the user interface for you. You can manage content through this user interface and talk straight to your database.

What is AdminBro?

AdminBro is an open-source package from Software Brothers that adds an auto-generated admin panel to your Node.js application.

You can connect your various databases to the admin interface and perform standard CRUD operations (create, read, update, delete) on the records. This greatly simplifies and extends your ability to find, monitor, and update your app data across multiple sources.

Creating the admin panel

First of all, we need to create a new folder that will hold our app. Then we will open the terminal in that folder and run:

npm init

Go through all the steps to initialize the Node.js application, and a package.json file will be created:

Then, we need to install some dependencies: the express and express-formidable packages. express-formidable is a peer dependency of AdminBro:

npm install express express-formidable

Then we can install the AdminBro and the AdminBro express plug-in:

npm install admin-bro @admin-bro/express

Now we will create a new file index.js and inside we can create an express router that will handle all AdminBro routes:

const AdminBro = require('admin-bro')
const AdminBroExpress = require('@admin-bro/express')

const express = require('express')
const app = express()

const adminBro = new AdminBro({
    databases: [],
    rootPath: '/admin',
})

const router = AdminBroExpress.buildRouter(adminBro)

The next step is to set up the router as middleware using the Express.js app object:

app.use(adminBro.options.rootPath, router)
  
app.listen(3000, ()=> {
  console.log('Application is up and running under localhost:3000/admin')
})

And that’s it! You successfully set up the dashboard! Run:

node index.js

Go ahead and head over to the http://localhost:3000/admin path. The dashboard should be ready and working.

Installing the Database Adapter and Adding Resources

AdminBro can be connected to many different types of resources; at the time of writing, adapters are available for ORMs/ODMs such as Mongoose, Sequelize and TypeORM.

To add resources to AdminBro, you first have to register an adapter for the resource you want to use. Let’s go with the mongoose solution for now and install the required dependencies:

npm install mongoose @admin-bro/mongoose

Then we register the adapter so that it can be used in our project:

const AdminBroMongoose = require('@admin-bro/mongoose')

AdminBro.registerAdapter(AdminBroMongoose)

Now we can make a connection to the database and pass the resources:

const mongoose = require('mongoose')

const connection = await mongoose.connect('mongodb://localhost:27017/users', {useNewUrlParser: true, useUnifiedTopology: true})

const User = mongoose.model('User', { name: String, email: String, surname: String })

const adminBro = new AdminBro({
  databases: [connection],
  rootPath: '/admin',
  resources: [User]
})

Finishing up

Now let’s put it all together; our index.js should look like this:

const AdminBro = require('admin-bro')
const AdminBroExpress = require('@admin-bro/express')
const AdminBroMongoose = require('@admin-bro/mongoose')

const express = require('express')
const app = express()

const mongoose = require('mongoose')

AdminBro.registerAdapter(AdminBroMongoose)

const run = async () => {
  const connection = await mongoose.connect('mongodb://localhost:27017/users', {useNewUrlParser: true, useUnifiedTopology: true})

  const User = mongoose.model('User', { name: String, email: String, surname: String })

  const adminBro = new AdminBro({
    databases: [connection],
    rootPath: '/admin',
    resources: [User]
  })
  const router = AdminBroExpress.buildRouter(adminBro)
  app.use(adminBro.options.rootPath, router)
    
  app.listen(3000, ()=> {
    console.log('Application is up and running under localhost:3000/admin')
  })
}

run()

At this point, we have basically built the admin interface. To verify that we have done everything correctly, first make sure your database is up and then re-run the server:

node index.js

Go to http://localhost:3000/admin and on the left side you can see your first model:

Summary

These are the basic steps to create an admin panel from scratch with Node.js and AdminBro. You can go deeper, you can customize your panel resources and widgets, add validation to the fields, configure role-based access control and much more. Any questions? You can check out the AdminBro docs for more details.

Network printing with CUPS from Docker

Reading Time: 7 minutes

Quite often, there is a need to automate a specific process. In this case, a client had a manual process in place where people printed specific types of documents at certain periods. There was some room for human error: people forgetting to print something, not being able to print everything on time, printing the same documents twice, etc. The human task of manually printing documents can be automated at the application level, by creating a scheduled task that prints documents on a network printer. In order to achieve that, we came up with this…

The solution is a containerized CUPS server with the appropriate drivers and printer configuration. We had to create a new Docker image with CUPS, which serves as the CUPS server, then get the correct drivers for the printer (since we are going to use a network printer) and make the appropriate printer configuration. Let’s get to know CUPS a bit better before we go into the actual implementation.

What is CUPS?

CUPS is a modular printing system for Unix-like computer operating systems which allows a computer to act as a print server. A computer running CUPS is a host which can accept print jobs from client computers, process them, and send them to the appropriate printer. CUPS uses the Internet Printing Protocol (IPP) as the basis for managing print jobs and queues. CUPS is free software, provided under the Apache License.

How does it work?

The initial step requires a queue that keeps track of the printer’s status. When you print to a printer, CUPS creates a queue for tracking the printer status and any pages you have printed. A queue can point to a printer connected to a local USB port, but it can also point to a network printer or even many printers on the internet. Where the printer resides doesn’t matter; the queue is independent of that and looks the same in any printer environment.

Every time you print something, CUPS creates a print job which consists of the destination queue where the documents are sent, the names of those documents, and their page descriptions. Jobs are numbered queue-1, queue-2, etc., so you can track a job as it is printed, or cancel it. CUPS is deterministic: when it gets a job for printing, it determines the best programs (filters, printer drivers, port monitors, and backends) to convert the pages into a printable format and then runs them to actually print the job. After the print job is completed, it is removed from the queue and CUPS moves on to the next one. Notifications are also available when the job is finished or when errors occur during printing; there are multiple ways to get notified about the outcome.

Ready, steady, Docker run

Let’s containerize first. The initial step is to choose the base Docker image. For this Dockerfile, we decided to go with the CentOS Linux distribution (RHEL-based), since it provides the cups packages from the regular repository. Other distributions might require premium repositories for the cups packages to be available. The entry instruction specifies the base image:

FROM centos:8

The next and more important step is installing the packages cups and cups-filters. The first one, cups, provides the actual printing system backend, filters and other software, whereas cups-filters is required for using printer drivers. With dnf (Dandified YUM) we update and install the necessary dependencies:

RUN dnf update -y && \
	dnf install -y cups cups-filters openssl

# Install OpenJDK java 11
RUN dnf install -y java-11-openjdk && \
	dnf clean all && \
    rm -rf /var/cache/dnf

RUN java -version

ENV JAVA_HOME="/usr/lib/jvm/jre" \
    JAVA_VENDOR="openjdk" \
    JAVA_VERSION="11.0"

With that, the JDK is available and we can confirm it by running java -version.

Next follows the configuration of the CUPS server. This is done in the file named cupsd.conf, which resides in the /etc/cups directory of the image. A good practice here would be to create a copy of the original file. In the cupsd.conf file, each line can be a configuration directive, a blank line or a comment. Directive names and values are case-insensitive; comments start with a # character.

The patching we did: the top-level directive DefaultEncryption is set to IfRequested, to only enable encryption if it is requested. The other directive, Listen, gets the value 0.0.0.0:631 in order to allow all incoming connections.

RUN sed -e '0,/^</s//DefaultEncryption IfRequested\n&/' -i /etc/cups/cupsd.conf
RUN sed -i 's/Listen.*/Listen 0.0.0.0:631/g' /etc/cups/cupsd.conf

Allow the cups service to be reachable:

RUN /usr/sbin/cupsd \
  && while [ ! -f /var/run/cups/cupsd.pid ]; do sleep 1; done \
  && cupsctl --remote-admin --remote-any --share-printers \
  && kill $(cat /var/run/cups/cupsd.pid)

After the service setup is done, the configuration of the network printer and its drivers follows. In our scenario, we used a Ricoh C5500 printer. A good resource for finding appropriate driver files for printers is: https://www.openprinting.org/

COPY conf/printers.conf /etc/cups/printers.conf
COPY conf/ricoh-c5500-postscript.ppd /etc/cups/ppd/ricoh-printer.ppd
COPY examples/accident-report.pdf /tmp/accident-report.pdf

A bit more general info on printer drivers: a PostScript printer driver consists of a PostScript Printer Description (PPD) file that describes the features and capabilities of the device, filter programs that prepare print data for the device, and support files for colour management, links to online help, etc. These PPD files include references to all of the filters and support files used by the driver, meaning they hold details on all features that the driver provides. Every time a user prints something, the scheduler program (the cupsd service) first determines the format of the print job and the programs required to convert that job into something the printer can understand and perform. CUPS also includes filter programs for many common formats, for example, to convert PDF files into device-dependent/independent PostScript. All printer-specific configuration, such as the IP address of the printer, should be done in the printers.conf file.

Last but not least, we need to start the CUPS service:

CMD ["/usr/sbin/cupsd", "-f"]

Now everything is in place on the Docker side, but the print job still needs to be triggered somehow. That brings us to the final step: creating a client in the application mid-layer which sets off a print job, and the CUPS server takes care of the rest.

CUPS4J

For our solution, we used cups4j, a Java library available in the Maven central repository. Basic usage of cups4j requires:

  • Setting up a CupsClient
  • Fetching an actual file
  • Creating a print job for that file
  • Printing (triggers the print job)

We also implemented a scheduler which triggers this job weekly, meaning all documents are run through the print queue once a week (a sketch of such a scheduler follows after the snippet below). If we want to specify a custom host, we need to provide the IP address of that host and the appropriate port number.

CupsClient cupsClient = new CupsClient("127.0.0.1", 631);
CupsPrinter cupsPrinter = cupsClient.getDefaultPrinter();
InputStream inputStream = new FileInputStream("test-file.pdf");
PrintJob printJob = new PrintJob.Builder(inputStream).build();
PrintRequestResult printRequestResult = cupsPrinter.print(printJob);
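The weekly scheduler mentioned above could, for example, be a Spring @Scheduled job wrapping this snippet. Below is a minimal sketch under that assumption; the class name, cron expression and file path are illustrative and not taken from the actual project (scheduling must also be enabled, e.g. with @EnableScheduling):

import org.cups4j.CupsClient;
import org.cups4j.CupsPrinter;
import org.cups4j.PrintJob;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.io.FileInputStream;
import java.io.InputStream;

@Component
public class WeeklyPrintJob {

    // runs every Monday at 08:00 - adjust the cron expression to your needs
    @Scheduled(cron = "0 0 8 * * MON")
    public void printWeeklyDocuments() {
        try (InputStream inputStream = new FileInputStream("/tmp/accident-report.pdf")) {
            CupsClient cupsClient = new CupsClient("127.0.0.1", 631);
            CupsPrinter cupsPrinter = cupsClient.getDefaultPrinter();
            PrintJob printJob = new PrintJob.Builder(inputStream).build();
            // the returned PrintRequestResult can be inspected to verify that CUPS accepted the job
            cupsPrinter.print(printJob);
        } catch (Exception e) {
            // in a real setup, log the failure and notify the responsible person
        }
    }
}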

Summary

We managed to create a dockerized solution step by step. First, we created an image that runs a CUPS server, configured for a specific network printer. The server then waits for a print job to be triggered by a client. As the client, we created a simple cups4j client which raises the print job. This means all CUPS-related configuration is done in Docker and the client only triggers the print job.

Core features of next-generation JavaScript

Reading Time: 4 minutes

Since we are working with modern frameworks and libraries, it is strongly recommended to use the next-generation JS features. Let’s take an overview of the most used features together.

As you know, we no longer use var; instead, we use let or const, depending on the scenario.

Let

let declares a block-scoped local variable, which limits the variable to the scope of the enclosing block statement.

Example:

function varTest() {
  var x = 1;
  {
    var x = 2;  // same variable!
    console.log(x);  // 2
  }
  console.log(x);  // 2
}

function letTest() {
  let x = 1;
  {
    let x = 2;  // different variable
    console.log(x);  // 2
  }
  console.log(x);  // 1
}

Const

Similar to let, const is also block-scoped, but the difference is that a constant cannot be reassigned or redeclared, so the binding is read-only.
What is interesting is that the value itself can still be mutable: direct reassignment is not allowed, but changing object properties is.

const ctest = 1;
ctest = 2;	// results with error

const cobj = {
    name: "Dimitar"
}
cobj.name = "Test"; // no errors

Arrow functions

Arrow functions are a replacement for the standard (normal) functions, giving us a shorter syntax and a different behaviour of this:

  • When calling a normal function, this refers to the object that calls the function
  • When calling an arrow function, this refers to the enclosing scope in which the arrow function was defined
function thisTest() {
    let that = this
    this.prop1 = 0;

    setInterval(function growUp() {
        this.prop1++;
		that.prop1++;
        console.log(this.prop1)
        console.log(that.prop1)
    }, 1000)
}

function thisTest1() {
    let that = this
    this.prop1 = 0;

    setInterval(() => {
        console.log(this.prop1)
        console.log(that.prop1)
        this.prop1++;
		that.prop1++;
    }, 1000)
}

let thisNormal = new thisTest();
/* Prints:
undefined 0
NaN 1
NaN 2
NaN 3
...
*/
let thisArrow = new thisTest1();
/* Prints:
0 0
2 2
4 4
6 6
...
*/

Exports & Imports

In every modern project, we split the code into different modules, which keep the code focused and manageable. The communication between the modules is maintained with imports (used to get access to a module) and exports (used to make a module available to others).

We can export or import everything, from constants to classes, with an unlimited number of exports and imports.

Exports:
export let numbers = [1, 2, 3, 4, 5]
export class User {
	constructor(name) {
		this.name = name
	}
}

Imports:
import { User } from './…'

Classes

…are used to replace constructor functions and provide better readability.

class P {
    constructor() {
        this.name = "Dime"
    }
}

const p = new P();
console.log(p.name);


replaced with

class P2 {
     constructor () {
         this.name = 'Max';
     }
}

const p2 = new P2();
console.log(p2.name);

class Human {
    species = 'human';
}

class Person extends Human {
    name = 'Max';
    printMyName = () => {
        console.log(this.name);
    }
}

const person = new Person();
person.printMyName();
console.log(person.species);

Spread operator

The spread operator allows us to pull elements out of an array or pull the properties out of an object. It is very useful because it is often used to clone arrays and objects so that the copy has a different reference.

const arr1 = [1, 2, 3];
const arr2 = [...arr1, 4, 5];

const obj1 = {
    name: "Dime"
}

const obj2 = {
    ...obj1,
    age: 26
}

const obj3 = {...obj1};
const obj4 = obj1;
obj1 === obj3;  // false
obj1 === obj4; // true

Destructuring

The last next-generation feature I am going to cover is destructuring. Destructuring allows us to easily access the values of arrays and objects and assign them to variables. It is mostly used with function arguments, when we call a function with a whole object but only need some of its properties.

const arr = [1, 2, 3];
const [a1, a2] = arr;

const obj = {
    name: "Dime",
    age: 26
}

const {name} = obj;

const printValue = (obj) => {
    console.log(obj.name);
}

const printValue1 = ({name}) => {
    console.log(name);
}

printValue({name: "Dime", age: 26});
printValue1({name: "Dime", age: 26});

Conclusion

As we can see, the next generation is already here, and I strongly believe that in the near future non-next-generation JavaScript code won’t be tolerated in modern projects.
So don’t hesitate: just try it and enjoy the next-generation JavaScript features.

Improve your performance using JPA Entity Graph

Reading Time: 7 minutes

If you are a web developer, you have probably developed an endpoint with a slow response time. The cause might be that you are calling some 3rd party API, doing file processing, or it might be how your entities are retrieved from the database.

In this article, we are going to take a look at how the Entity Graph might help us to improve our query performance when using JPA and Spring Boot.

Let’s discuss the following scenario:

We want to build an application where we can keep track of buildings, how many apartments every building has and how many tenants every apartment has. I have already created a simple application that can be downloaded from here.

In order to achieve the previously mentioned scenario, we will need to have the following entities:

package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Building {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String buildingName;

    @OneToMany(mappedBy = "building", cascade = CascadeType.ALL)
    private List<Apartment> apartments;

    public void addApartment(Apartment apartment) {
        if (apartments == null) {
            apartments = new ArrayList<>();
        }
        apartments.add(apartment);
        apartment.setBuilding(this);
    }

}
package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Apartment {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String type;

    @JoinColumn(name = "building_id")
    @ManyToOne
    private Building building;

    @OneToMany(mappedBy = "apartment", cascade = CascadeType.ALL)
    private List<Tenant> tenants;

    public void addTenant(Tenant tenant) {
        if (tenants == null) {
            tenants = new ArrayList<>();
        }
        tenants.add(tenant);
        tenant.setApartment(this);
    }

}
package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Tenant {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;

    private String lastName;

    @JoinColumn(name = "apartment_id")
    @ManyToOne
    private Apartment apartment;
}

We want to observe what happens when we retrieve all the entities. For that purpose, a service method called iterate is created in BuildingService that gets all the buildings and loops through all the related entities. For this method to be visible to the outer world, a BuildingController is created that exposes a GET endpoint from which we can access the iterate method in BuildingService. In order to have some data in our database, there is an SQL script data.sql that inserts some data and is executed on startup. I would strongly suggest starting your application in debug mode and stepping through the iterate method.
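For reference, the BuildingController that exposes the iterate method could look roughly like this (a minimal sketch based on the URLs used below; the package name is assumed and the exact class is in the sample project):

package com.north47.entitygraphdemo.controller;

import com.north47.entitygraphdemo.service.BuildingService;
import lombok.RequiredArgsConstructor;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/building")
@RequiredArgsConstructor
public class BuildingController {

    private final BuildingService buildingService;

    // exposes the iterate service method as GET /building/iterate
    @GetMapping("/iterate")
    public ResponseEntity<Void> iterate() {
        buildingService.iterate();
        return new ResponseEntity<>(HttpStatus.OK);
    }
}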

If you have already started your application, insert the following URL: http://localhost:8080/building/iterate in your browser or some API tool (Postman, for example) and execute this GET request. This will execute the iterate method that was created previously.

Let’s see the content of the iterate service method we are calling with this endpoint and observe the console while executing it:

package com.north47.entitygraphdemo.service;

import com.north47.entitygraphdemo.repository.BuildingRepository;
import com.north47.entitygraphdemo.repository.model.Apartment;
import com.north47.entitygraphdemo.repository.model.Building;
import com.north47.entitygraphdemo.repository.model.Tenant;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

import java.util.List;

@Slf4j
@Service
@RequiredArgsConstructor
public class BuildingService {

    private final BuildingRepository buildingRepository;

    public void iterate() {
        log.debug("Iteration started");
        log.debug("Get all buildings");
        final List<Building> buildings = buildingRepository.findAll();
        buildings.forEach(building -> {
            log.debug("Get all apartments for building with id: {}", building.getId());
            final List<Apartment> apartments = building.getApartments();
            apartments.forEach(apartment -> {
                log.debug("Get all tenants for apartment with id: {}", apartment.getId());
                final List<Tenant> tenants = apartment.getTenants();
                log.debug("Apartment with id : {} has {} tenants", apartment.getId(), tenants.size());
            });
        });
    }
}

If you are in debug mode you may notice that after buildingRepository.findAll() is executed we can see the following log in the console:

Hibernate: select building0_.id as id1_1_, building0_.building_name as building2_1_ from building building0_

Let’s continue with executing the rest of the code. What will appear in the console is the following:

Hibernate: select apartments0_.building_id as building3_0_0_, apartments0_.id as id1_0_0_, apartments0_.id as id1_0_1_, apartments0_.building_id as building3_0_1_, apartments0_.type as type2_0_1_ from apartment apartments0_ where apartments0_.building_id=?
Hibernate: select tenants0_.apartment_id as apartmen4_2_0_, tenants0_.id as id1_2_0_, tenants0_.id as id1_2_1_, tenants0_.apartment_id as apartmen4_2_1_, tenants0_.last_name as last_nam2_2_1_, tenants0_.name as name3_2_1_ from tenant tenants0_ where tenants0_.apartment_id=?
Hibernate: select tenants0_.apartment_id as apartmen4_2_0_, tenants0_.id as id1_2_0_, tenants0_.id as id1_2_1_, tenants0_.apartment_id as apartmen4_2_1_, tenants0_.last_name as last_nam2_2_1_, tenants0_.name as name3_2_1_ from tenant tenants0_ where tenants0_.apartment_id=?

Even though we are not calling any additional repository methods, SQL queries are executed. This happens because the fetch type is not specified in the entities, and the default for @OneToMany relationships is LAZY. This means that whenever we try to access the collections annotated with @OneToMany (in our case by calling getApartments on Building and getTenants on Apartment), an additional query is executed. Imagine having a lot of data and executing similar logic: this would trigger many more queries and cause significant latency (the classic N+1 select problem). One option is to switch the fetch type to EAGER, but that means these collections will always be loaded and we won’t be able to change that at runtime.
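For illustration, eager fetching would be just a change in the mapping, but, as noted, it cannot be turned off per query (a sketch of the relevant field only):

// loads the apartments on every query for a Building - convenient, but inflexible
@OneToMany(mappedBy = "building", cascade = CascadeType.ALL, fetch = FetchType.EAGER)
private List<Apartment> apartments;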

One of the solutions can be the JPA Entity Graph. Let’s see how it can make our life easier. We will do the following changes in our domain class Building:

package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;
import java.util.HashSet;
import java.util.Set;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
@NamedEntityGraph(
        name = "Building.List",
        attributeNodes = {@NamedAttributeNode(value = "apartments", subgraph = "Building.Apartment")},
        subgraphs = {
                @NamedSubgraph(name = "Building.Apartment",
                        attributeNodes = @NamedAttributeNode(value = "tenants"))
        }
)
public class Building {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String buildingName;

    @OneToMany(mappedBy = "building", cascade = CascadeType.ALL)
    private Set<Apartment> apartments;

    public void addApartment(Apartment apartment) {
        if (apartments == null) {
            apartments = new HashSet<>();
        }
        apartments.add(apartment);
        apartment.setBuilding(this);
    }

}

So what happened here? We defined an entity graph named Building.List. With attributeNodes we specify which collections should be loaded. Since we also want to get the tenants, we defined a subgraph called Building.Apartment, and in the subgraphs we say to load all the tenants for every apartment. In order for this entity graph to be used, we need to create a method in our BuildingRepository on which we specify this entity graph:

package com.north47.entitygraphdemo.repository;

import com.north47.entitygraphdemo.repository.model.Building;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

import java.util.List;

public interface BuildingRepository extends JpaRepository<Building, Long> {


    @Override
    List<Building> findAll();

    @EntityGraph(value = "Building.List")
    @Query("select b from Building as b")
    List<Building> findAllWithEntityGraph();
}

And of course, we will provide a service method with the same logic, but this time findAllWithEntityGraph() will be called:

public void iterateWithEntityGraph() {
        log.debug("Iteration with entity started");
        log.debug("Get all buildings");
        final List<Building> buildings = buildingRepository.findAllWithEntityGraph();
        buildings.forEach(building -> {
            log.debug("Get all apartments for building with id: {}", building.getId());
            final Set<Apartment> apartments = building.getApartments();
            apartments.forEach(apartment -> {
                log.debug("Get all tenants for apartment with id: {}", apartment.getId());
                final Set<Tenant> tenants = apartment.getTenants();
                log.debug("Apartment with id : {} has {} tenants", apartment.getId(), tenants.size());
            });
        });
    }

What remains is to expose this method via the BuildingController so we can test the new functionality:

@GetMapping(value = "/iterate/entityGraph")
    public ResponseEntity<Void> iterateWithEntityGraph() {
        buildingService.iterateWithEntityGraph();
        return new ResponseEntity<>(HttpStatus.OK);
    }

Now if we put the following URL http://localhost:8080/building/iterate/entityGraph in our browser and observe our console we can see that only one query is executed:

Hibernate: select building0_.id as id1_1_0_, apartments1_.id as id1_0_1_, tenants2_.id as id1_2_2_, building0_.building_name as building2_1_0_, apartments1_.building_id as building3_0_1_, apartments1_.type as type2_0_1_, apartments1_.building_id as building3_0_0__, apartments1_.id as id1_0_0__, tenants2_.apartment_id as apartmen4_2_2_, tenants2_.last_name as last_nam2_2_2_, tenants2_.name as name3_2_2_, tenants2_.apartment_id as apartmen4_2_1__, tenants2_.id as id1_2_1__ from building building0_ left outer join apartment apartments1_ on building0_.id=apartments1_.building_id left outer join tenant tenants2_ on apartments1_.id=tenants2_.apartment_id

So we managed to reduce the number of queries from 4 to 1, and we still have the option to call the findAll() method in the BuildingRepository when we don’t want to load the apartments or the tenants. In a real-world scenario, you can define as many entity graphs as you want and specify for each one which collections should be loaded.
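As a side note, with Spring Data JPA you can also declare an ad hoc entity graph directly on a repository method via attributePaths, without a @NamedEntityGraph on the entity. A minimal sketch (the method name is illustrative and not part of the sample project):

@EntityGraph(attributePaths = {"apartments"})
@Query("select b from Building as b")
List<Building> findAllWithApartments();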

Hope you had fun, you can find the code in our repository.

Providing Accessibility In Your iOS App (Basic Overview)

Reading Time: 6 minutes

As a developer, the most important aspect of your app is the user experience. No matter the type of software you are creating, it is very important that it is easy to use and accessible to everyone, especially if the app you are developing is meant to be used by the elderly or users with disabilities.

This post won’t contain any detailed technicalities on how to achieve good accessibility, since there are already countless blogs and websites that cover this topic in great detail. Its purpose is to cover the various topics that are important for achieving proper accessibility within your app. And with that, let’s start with the very basics.

What is accessibility in iOS?

In its most basic form, accessibility in software products means giving everyone proper access and options to use your software, no matter their disability. This covers visual and auditory impairments, as well as physical disabilities. It is important to leave the flow unobstructed and have every part of the app reachable, just as it is intended for the default experience.

Out of the box, iOS already offers great functionality to assist with achieving proper accessibility. As a developer, the important part is to make your app compatible with that functionality. In some cases, it is also required by law to make your app accessible if the target audience of the app is a group of people that could have certain disabilities.

Additionally, Xcode offers the Accessibility Inspector which you can use to test out all of the UI elements and see if they are compatible with the necessary accessibility options.

Let’s first cover what iOS offers for accessibility that we would need to pay attention to.

Voice Over

The functionality of VoiceOver is to offer users certain gestures and voice-assisted tools to navigate through an app. These are easy-to-learn features that can greatly enhance the experience for users with disabilities.

Display & Text Size

Display & Text Size helps with scaling up the UI of the app to make the text larger and easier to read. Both of these features are very important to complete the full experience for all users.

Now, let’s cover the basic topics to make your app accessible

Scalability

As previously mentioned, some of the native functions help with adjusting the look of the app so its elements are larger and easier to look at. One of the steps to ensure that is making sure that almost every screen in the app has an embedded scroll view. Even if a certain screen only contains a single label or a button or two, the scalability options can push the sizes so far that the content needs the extra space to fit.

As you can see in the example above, the difference is substantial. This is why, when dealing with accessibility, it’s important to avoid using fixed widths and heights on your UI elements (with minor exceptions). Otherwise, you will end up with truncated text and cut-off content.

Design

Considering the latest trends in app design, everyone strives for simplicity. While this is also true for accessible apps, it is also important to either make some exceptions or simplify even further, depending on the content of the app.

It is especially important not to clutter the screen with too many labels/buttons on the horizontal axis. Since the UI needs to scale up, you need to provide enough room for everything to be displayed.

One thing to note is that sometimes the scalability might break some design rules that you have about the app, but that’s completely fine. As long as everything can be accessible at all possible points, there can be some exceptions regarding that.

Text

The default font provided by Xcode is already capable of scaling within the app. It’s also a good reminder to make sure that the font of choice in your app is easy to read no matter the size.

Also, it’s advised to use the regular and/or heavier font weights rather than Thin, Ultrathin and Light, which can be harder to see.

Additionally, if there are any visuals in the app that contain text within images, it’s better, if possible, to adjust the design so that all of the text is in native labels which VoiceOver can read out for the user.

Colours

On this topic, the most important part is to use a combination of colours that contrast each other to ensure that the content is properly viewable for everyone. In its most basic form, this is usually text against a background. In some cases, you might have to pay attention to colour choices for users with colour blindness. There are various tools to compare the necessary contrast.

Do you see the number 74?

Accessibility hints and labels

All of the UI elements in Xcode that users can interact with can have some sort of accessibility info added to them. This is important so the users can know what those elements convey, especially on images, where you can provide all the details about what the image contains. You can also add hints on buttons, so the user knows what action a certain button will perform prior to pressing it.

Conclusion

In general, the topic of providing access to your apps is not an easy thing to achieve. It requires proper setup with consideration of many aspects to ensure that all users can freely use your app without any hiccups. Luckily, Xcode provides all of the necessary functionalities to create and test all of the accessibility options. As mentioned at the start, there are plenty of tutorials that cover this topic in great detail with all of the technicalities. So hopefully this will guide you in the right way to achieve that!

What is CI? Continuous Integration Explained

Reading Time: 5 minutes

Continuous Integration (CI) is a software development practice that requires members of a team, to frequently integrate their code changes into a central repository (master branch), preferably several times a day.

Each merge is then verified by automatically generating a build, and running automated tests against that build.

By integrating regularly, you can detect errors quickly, as well as locate and fix them easier.

Why is Continuous Integration Needed?

Back in the days, BCI – Before Continuous Integration, developers from a single team might have worked in isolation for a longer period of time, and they merged their code changes only when they finished working on a particular feature or bug fix.

This caused the well-known merge hell (integration hell): a lot of code conflicts, newly introduced bugs, lots of time invested in analysis, as well as frustrated developers and project managers.

All these ingredients made it harder to deliver updates and value to the customers on time.

How does Continuous Integration Work?

Continuous Integration as a software development practice entails two components: an automation component and a cultural one.

The cultural component focuses on the principle of frequent integrations of your code changes to the mainline of the central repository, using a version control system such as Git, Mercurial or Subversion.

By applying the cultural component, you will drastically lower the frustration and the time wasted merging code because, in reality, you are merging small changes all the time.

As a matter of fact, you can practice Continuous Integration using only this principle, but with adding the automation component into your CI process you can exploit the full potential of the Continuous Integration principle.

Continuous Integration Image

As shown in the picture above, this includes a CI server that will generate builds automatically, run automated tests against those builds and notify (or alert) the team members of the results.

By leveraging the automation component you will immediately be aware of any errors, thus allowing the team to fix them fast and without too much time spent analysing.

There are plenty of CI tools out there that you can choose from, but the most common are: Jenkins, CircleCI, GitHub Actions, Bitbucket Pipelines etc.

Continuous Integration Best Practices and Benefits

Everyone should commit to the mainline daily

By doing frequent commits and integrations, developers let other developers know about the changes they’ve done, so passive communication is being maintained.

Other benefits that come with developers integrating multiple times a day:

  • integration hell is drastically reduced
  • conflicts are easily resolved as not much has changed in the meantime
  • errors are quickly detected

The builds should be automated and fast

Given that several integrations will be done daily, automating the CI pipeline is crucial to improving developer productivity, as it leads to less manual work and faster detection of errors and bugs.

Another important aspect of the automated build is optimising its execution speed and making it as fast as possible, as this enables faster feedback and leads to more satisfied developers and customers.

Everyone should know what’s happening with the system

Given Continuous Integration is all about communication, a good practice is to inform each team member of the status of the repository.

In other words, whenever a merge is made, thus a build is triggered, each team member should be notified of the merge as well as the results of the build itself.

To notify all team members or stakeholders, use your imagination: email is the most common channel, but you can also leverage SMS or integrate your CI server with communication platforms like Slack, Microsoft Teams, Webex, etc.

Test Driven Development

Test Driven Development (TDD) is a software development approach relying on the principle of writing tests before writing the actual code. What TDD offers as a value in general, is improved test coverage and an even better understanding of the system requirements.

But, put those together, Continuous Integration and TDD, and you will get a lot more trust and comfort in the CI Pipelines as every new feature or bug fix will be shipped with even better test coverage.

Test Driven Development also inspires a cultural change in the team and even the whole organisation, by motivating the developers to write even better and more robust test cases.

Pull requests and code review

A big portion of software development teams nowadays practise a pull request and code review workflow.

A pull request is typically created whenever a developer is ready to merge new code changes into the mainline, making the pull request perfect for triggering the CI Pipeline.

Usually, additional manual approval is required after a successful build, where other developers review the new code, make suggestions and approve or deny the pull request. This final step brings additional value such as knowledge sharing and an additional layer of communication between the team members.

Summary

Building software solutions in a multi-developer team is as complex as it was five, ten or even twenty years ago if you are not using the right tools and exercising the right practices and principles, and Continuous Integration is definitely one of them.


I hope you enjoyed this article and you are not leaving empty-handed.
Feel free to leave a comment. 😀

Follow N47 on Instagram, Twitter, LinkedIn and Facebook for any updates.

Implementation of Product Importer in AEM

Reading Time: 5 minutes

Nowadays, every serious company has a way to present its products to potential customers on its website. When we talk about big companies, with a lot of products and huge traffic, AEM is one of the best solutions. But how are the products imported into AEM, and where are they placed in the AEM repository? That is what you can learn here. We will cover the following aspects of the problem:

  • Fetch the products from the server
  • Convert server response (JSON String) into Java object
  • Where to place products in AEM repository
  • Product node structure
  • Save products in CRX
  • How to start product importer and follow the process

How to fetch the products from the server

Typically, large companies keep their products on a separate, dedicated server. With the API that they provide to you, a connection to the server is established and you can fetch the products. In most cases, an OSGi service is created that keeps the configuration data for connecting to the remote server. Typically, we get the response as a JSON String. Below is just one idea of how to get a response. The URL parameter and the access token are provided by the client, and usually we keep them in the OSGi service configuration.

private String getAPIResponse(String url) {
		String accessToken = getAccessToken();
		CloseableHttpClient httpclient = HttpClients.createDefault();
		String authorizationString = "Bearer " + accessToken;
		HttpGet request = new HttpGet(URI.create(url));
		request.addHeader("Authorization", authorizationString);
		try {
			HttpResponse response = httpclient.execute(request);
			return getResponseBodyAsString(response);
		} catch (IOException e) {
			LOGGER.error("Failed httpclient.execute method", e);
		}
		return null;
	}
private String getResponseBodyAsString(HttpResponse response) {
		try (Scanner sc = new Scanner(response.getEntity().getContent())) {
			StringBuilder sb = new StringBuilder();
			while (sc.hasNext()) {
				sb.append(sc.nextLine());
			}
			return sb.toString();
		} catch (IOException e) {
			LOGGER.error("Failed to retreive response body", e);
		}
		return null;
	}

How to convert server response into Java object (APIModel)

Once we get a response as a JSON String, the biggest challenge is converting the response from the server (String apiResponse) into a Java class (APIModel). For that purpose we use the com.google.gson.Gson class. Sometimes it is unpredictable how Gson will de-serialize apiResponse into Java objects. As a piece of advice: if something goes wrong in the mapping, just declare the value as Object, and later, when debugging, you can check how Gson actually maps that value.

public APIModel convertIntoAPIModel(String apiResponse) {
	try {
		Gson gson = new Gson();
		return gson.fromJson(apiResponse, APIModel.class);
	} catch (RuntimeException e) {
		LOGGER.error("Error while converting into APIModel", e);
		throw e;
	}
}
@Model(adaptables = Resource.class)
public class APIModel {
	private List<ProductImportedModel> results;

	public List<ProductImportedModel> getResults() {
		return results;
	}
}
public class ProductImportedModel {
     private String nameOfProduct;
     private Date lastModifiedAt;
     private Date createdAt;
     ........
     ........
}

Where to place products in AEM repository

First, let’s look at the very base of this commerce part. We will look at the repository level in CRX to see the location and structure of the products in AEM. For that purpose, the most appropriate example is the out-of-the-box we-retail solution, which is part of the AEM installation. Products are stored in /var/commerce/products/your-company-name.

Product node structure

Let’s check the structure of one product in we-retail (in the image above, “eqbisucos”). The product consists of one “master” product which contains the general properties of the product. These properties can be anything, including price, rating, origin… The most important properties are these two, which mark it as a product:

  • cq:commerceType String product
  • sling:resourceType String commerce/components/product

Under this master node, there are subnodes such as “image” and the variants of the product. Regarding variants, it is important to mention that the difference from the product is that the property cq:commerceType has the value ‘variant’.

In the image above, we can see different variants such as size-s, size-m and size-l.

Now, when we know the structure of the out of the box commerce product, let’s see how we can use our APIModel and transform it into node structure under /var/commerce.

The product node does not have a strictly defined structure. It depends on the concrete situation and the data for the product that we need to store. However, there are some rules to take into consideration:

  • define the master product node
  • create variants as sub-nodes of the master node. It can happen that both (master and variant) have very similar properties with only a small difference, and that is acceptable; at least one property must be different
  • a product should have an “image” sub-node with an image. This is good practice, but not mandatory. There could be just one “image” node for the master or, furthermore, every variant can have its own “image” node
  • it is possible to have other sub-nodes with different information for the product or its variants. The number of “other nodes” is not limited; they can keep any information
  • every product can have a different node structure for the master or the variants. Some sub-nodes could be missing for some masters or variants

Persist products

Once we have determined the node structure of the product, it is time to create that node structure and store the values as node properties. First, for that purpose, we need a service user with write permissions on the /var/commerce part. The best approach is to use the Sling API, with all its methods for creating resources. Here is one example of creating a product node with properties.

Map<String, Object> properties = new HashMap<>();
properties.put("sling:resourceType", "commerce/components/product");
properties.put("cq:commerceType", commerceType);
properties.put("jcr:primaryType", "nt:unstructured");
properties.put("price", 1000);
properties.put("color", "green");
Resource newProduct = resolver.create(productsResource, "myFirstProduct", properties);
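The resolver used above is typically obtained from the ResourceResolverFactory with that service user. Below is a minimal sketch, assuming the usual Sling imports, an injected ResourceResolverFactory and a subservice named product-importer mapped to the service user (both names are illustrative):

ResourceResolver resolver = null;
try {
	Map<String, Object> authInfo = Collections.singletonMap(
			ResourceResolverFactory.SUBSERVICE, "product-importer");
	resolver = resourceResolverFactory.getServiceResourceResolver(authInfo);
	Resource productsResource = resolver.getResource("/var/commerce/products/your-company-name");
	// ... create the product nodes as shown above ...
	resolver.commit();
} catch (LoginException | PersistenceException e) {
	LOGGER.error("Failed to persist products", e);
} finally {
	if (resolver != null) {
		resolver.close();
	}
}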

Start product importer and follow the process

The AEM JMX console is the place where we trigger the product importer. Possible options are to trigger it manually or periodically as a cron job. The import runs in a separate thread that does not return a result, so it is very hard to follow the process without the server logs, and those are available only to developers. So, any solution?

The most suitable solution is to send an email to the responsible person with the time and place where the error happened.

How we deploy with Terraform and BitBucket to Azure Kubernetes

Reading Time: 6 minutes

N47 implemented a set of back-office web applications for Prestige, a real estate management company located in Zurich, Switzerland. One application is a tool for displaying construction projects near properties managed by Prestige, and a second example is a tool for creating and assigning orders to craftsmen. However, the following examples aren’t specific to those use cases.

Screenshot of the Construction Project tool.

An Overview

The project entails one frontend application with multiple microservices whereby each service has its own database schema.

The application consumes data from Prestige’s main ERP system Abacus and third-party applications.

N47 is responsible for setting up and maintaining the full Kubernetes stack, MySQL Database, Azure Application Gateway and Azure Active Directory applications.

Another company is responsible for the networking and the Abacus part.

Architectural Overview

Involved Technologies

Our application uses following technologies:

  • Database: MySQL 8
  • Microservices: Java 11, Spring Boot 2.3, Flyway for database schema updates
  • Frontend: Vue.js 2.5 and Vuetify 2.3
  • API Gateway: nginx

The CI/CD technology stack includes:

  • Source code: BitBucket (Git)
  • Pipelines: BitBucket Pipelines
  • Static code analysis: SonarCloud
  • Infrastructure: Terraform
  • Cloud provider: Azure

We’ll focus on the second list of technologies.

Infrastructure as Code (IaC) with Terraform and BitBucket Pipelines

One thing I really like when using IaC is having the definition of the involved services and resources of the whole project in source code. That enables us to track the changes over time in the Git log and of course, it makes it far easier to set up a stage and deploy safely to production.

Please read more about Terraform in our blog post Build your own Cloud Infrastructure using Terraform. The Terraform website is of course as well a good resource.

Storage of Terraform State

One important thing when dealing with Terraform is storing the state in an appropriate place. We’ve chosen to create an Azure Storage Account and use Azure Blob Storage like this:

terraform {
  backend "azurerm" {
    storage_account_name = "prestigetoolsterraform"
    container_name = "prestige-tools-dev-tfstate"
    key = "prestige-tools-dev"
  }
}

The required access_key is passed as an argument to terraform within the pipeline (more details later). You can find more details in the official tutorial Store Terraform state in Azure Storage by Microsoft.

Another important point is not to run pipelines in parallel, as this could result in conflicts with locks.

Used Terraform Resources

We provide the needed resources on Azure via BitBucket + Terraform. Selection of important resources:

Structure of Terraform Project

We created an entry point for each stage (local, dev, test and prod). Each entry point is relatively small and mainly aggregates the modules with some environment-specific configuration.

The configurations, credentials and other data are stored as variables in the BitBucket pipelines.

/environments
  /local
  /dev
  /test
  /prod
/modules
  /azure_active_directory
  /azure_application_gateway
  /azure_aplication_insights
    /_variables.tf
    /_output.tf
    /main.tf
  /azure_mysql
  /azure_kubernetes_cluster
  /...

The modules themselves always contain the files _variables.tf, main.tf and _output.tf to keep a clean separation of input, logic and output.


Example source code of the azure_aplication_insights module (please note that some of the text has been shortened in order to have enough space to display it properly)

_variables.tf

variable "name" {
  type = string
}

variable "location" {
  type = string
}

variable "resource_group_name" {
  type = string
}

main.tf

resource "azurerm_application_insights" "ai" {
  name                = var.name
  location            = var.location
  resource_group_name = var.resource_group_name
  application_type    = "web"
}

_output.tf

output "instrumentation_key" {
  value = azurerm_application_insights.ai.instrumentation_key
}

BitBucket Pipeline

The BitBucket pipeline drives Terraform and includes the init, plan and apply steps. We decided to trigger the apply step manually in the beginning.

image: hashicorp/terraform:0.12.26

pipelines:
  default:
      - step:
          name: Plan DEV
          script:
            - cd environments/dev
            - terraform init -backend-config="access_key=$DEV_TF_CONFIG_ACCESS_KEY"
            - terraform plan -out out-overall.plan
          artifacts:
            - environments/dev/out-overall.plan

  branches:
    develop:
      - step:
          name: Plan DEV
          script:
            - cd environments/dev
            - terraform init -backend-config="access_key=$DEV_TF_CONFIG_ACCESS_KEY"
            - terraform plan -out out-overall.plan
          artifacts:
            - environments/dev/out-overall.plan
            - environments/dev/.terraform/**
      - step:
          name: Apply DEV
          trigger: manual
          deployment: dev
          script:
            - cd environments/dev
            - terraform apply out-overall.plan

    master:
      # PRESTIGE TEST
      - step:
          name: Plan TEST
          script:
            - cd environments/test
            - terraform init -backend-config="access_key=$PRESTIGE_TF_CONFIG_ACCESS_KEY"
            - terraform plan -out out-overall.plan
          artifacts:
            - environments/test/out-overall.plan
            - environments/test/.terraform/**
      - step:
          name: Apply TEST
          trigger: manual
          deployment: test
          script:
            - cd environments/test
            - terraform apply out-overall.plan

      # PRESTIGE PROD ...

Needed Steps for Deploying to Production

1. Create feature branch with some changes

2. Push to Git (the BitBucket pipeline with the step Plan DEV will run). All the details about the changes can be found in the output of the Terraform plan command

3. Create a pull request and merge the feature branch into develop. This will start another pipeline with the two steps (plan + apply)

4. Check the output of the plan step before triggering the deploy on dev

5. Now the dev stage is updated. If everything works as you wish, create another pull request to merge develop into master and repeat the same plan/apply steps for the test and production stages

We have just deployed an infrastructure change to production without logging into any system except BitBucket. Time for celebration.

people watching concert
Symbol picture of N47 production deployment party (from unsplash)

Is Really Everything That Shiny?

Well, everything is a big word.

We found issues, for example with cross-module dependencies, which aren’t just solvable with a depends_on. Luckily, there are some alternatives:

network module:

output "id" {
  description = "The Azure assigned ID generated after the Virtual Network resource is created and available."
  value = azurerm_virtual_network.virtual_network.id
}

kubernetes cluster module, which depends on network:

variable "subnet_depends_on" {
  description = "Variable to force module to wait for the Virtual Network creation to finish"
}

and the usage of those two modules in environments/dev/main.tf

module "network" {
  source = "../../modules/azure_network"
}

module "kubernetes_cluster" {
  source = "../../modules/azure_kubernetes_cluster"
  subnet_depends_on = module.network.id
}

Once things are set up, it is a real joy to wipe out a stage and provision everything from scratch just by running a BitBucket pipeline.


Create a CI/CD pipeline with GitHub Actions

Reading Time: 7 minutes

A CI/CD pipeline helps in automating your software delivery process. The pipeline builds the code, runs tests and deploys a new version of the application.

Not long ago GitHub announced GitHub Actions, meaning that they have built in support for CI/CD. Developers can now use GitHub Actions to create a CI/CD pipeline.

With Actions, GitHub now allows developers not just to host the code on the platform, but also to run it.

Let’s create a CI/CD pipeline using GitHub Actions, the pipeline will deploy a spring boot app to AWS Elastic Beanstalk.

First of all, let’s find a project

For this purpose, I will be using this project which I have forked:
Once it is forked, we open the project. Upon opening, we will see the section for GitHub Actions.

GitHub Actions Tool

Add predefined Java with Maven Workflow

Get started with GitHub Actions

By clicking on Actions we are provided with a set of predefined workflows. Since our project is Maven based we will be using the Java with Maven workflow.

By clicking "Start commit", GitHub will add a commit with the workflow; the commit can be found here.

Let’s take a look at the predefined workflow:

name: Java CI with Maven

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 1.8
      uses: actions/setup-java@v1
      with:
        java-version: 1.8
    - name: Build with Maven
      run: mvn -B package --file pom.xml

name: Java CI with Maven
This is just specifying the name for the workflow

on: push,pull_request
The on keyword specifies the events that will trigger the workflow. In this case, those events are push and pull_request on the master branch

job:
A job is a set of steps that execute on the same runner

runs-on: ubuntu-latest
The runs-on keyword specifies the underlying OS we want our workflow to run on; here we are using the latest version of Ubuntu

steps:
A step is an individual task that can run commands (known as actions). Each step in a job executes on the same runner, allowing the actions in that job to share data with each other

actions:
Actions are the smallest portable building block of a workflow which are combined into steps to create a job. We can create our own actions, or use actions created by the GitHub community

Our steps actually set up Java and execute the Maven commands needed to build the project.

Since we added the workflow by creating a commit from the GUI, the pipeline automatically started and verified the commit, which we can see in the following image:

Pipeline report

Create an application in AWS Elastic Beanstalk

The next thing that we need to do is to create an app on Elastic Beanstalk where our application is going to be deployed. For that purpose, an AWS account is needed.

AWS Elastic Beanstalk service

Upon opening the Elastic Beanstalk service we need to choose the application name:

Application name

For the platform choose Java8.

Choose platform

For the application code, choose Sample application and click Create application.
Elastic Beanstalk will create and initialize an environment with a sample application.

Let’s continue working on our pipeline

We are going to use an action created from the GitHub community for deploying an application on Elastic Beanstalk. The action is einaregilsson/beanstalk-deploy.
This action requires additional configuration, which is added using the with keyword:

    - name: Deploy to EB
      uses: einaregilsson/beanstalk-deploy@v13
      with:
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        application_name: {change this with aws application name}
        environment_name: {change this with aws environment name}
        version_label: ${{github.SHA}}
        region: {change this with aws region}
        deployment_package: target/spring-petclinic-rest-2.2.5.jar

Add variables

We need to provide values for the Elastic Beanstalk application_name and environment_name, the AWS region and the AWS API keys.

Go to AWS Elastic Beanstalk and copy the previously created Environment name and Application name.
Go to AWS IAM and, under your user in the Security credentials section, either create a new AWS access key or use an existing one.
The AWS Access Key and AWS Secret Access Key should be added to the GitHub project settings under the Secrets tab, which looks like this:

GitHub Project Secrets

The complete pipeline should look like this:

name: Java CI with Maven

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 1.8
      uses: actions/setup-java@v1
      with:
        java-version: 1.8
    - name: Build
      run: mvn -B package --file pom.xml
    - name: Deploy to EB
      uses: einaregilsson/beanstalk-deploy@v13
      with:
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        application_name: pet-clinic
        environment_name: PetClinic-env
        version_label: ${{github.SHA}}
        region: us-east-1
        deployment_package: target/spring-petclinic-rest-2.2.5.jar

Lastly, let’s modify the existing app

In order to be considered a healthy instance by Elastic Beanstalk, the deployed application has to return an OK response when accessed by the load balancer that sits in front of Elastic Beanstalk. The load balancer accesses the application on the root path. The forked application, when accessed on the root path, forwards the request to swagger-ui.html, so we need to remove that forwarding.

Change RootController.class:

@RequestMapping(value = "/", method = RequestMethod.GET, produces = "application/json")
    public ResponseEntity<Void> getRoot() {
        return new ResponseEntity<>(HttpStatus.OK);
    }

Change application.properties server port to 5000 since, by default, Spring Boot applications will listen on port 8080. Elastic Beanstalk assumes that the application will listen on port 5000.

server.port=5000

And remove the server.servlet.context-path=/petclinic/.

The successful commit which deployed our app on AWS Elastic Beanstalk can be seen here:

Pipeline build

And the Elastic Beanstalk with a green environment:

Elastic Beanstalk green environment

Voila, there we have it: a CI/CD pipeline with GitHub Actions and deployment to AWS Elastic Beanstalk. You can find the forked project here.

Pet Clinic Swagger UI

CloudFormation: Passing values and parameters to nested stacks

Reading Time: 7 minutes

Why CloudFormation?

CloudFormation allows provisioning and managing AWS resources with simple configuration files, which let us spend less time managing those resources and have more time to focus on our applications that run on AWS instead.

We can simply write a configuration template (YAML/JSON file) that describes the resources we need in our application (like EC2 instances, Dynamo DB tables, or having the entire app monitoring automated in CloudWatch). We do not need to manually create and configure individual AWS resources and figure out what is dependent on what, and more importantly, it is scalable so we can re-use the same template, with a bunch of parameters, and have the entire infrastructure replicated in different stages/environments.

Another important aspect of CloudFormation is that we have our infrastructure as code, which can be version controlled, reviewed and easily maintained.

Nested stacks

CloudFormation nested stacks diagram

As our infrastructure grows, common patterns emerge that can be separated into dedicated templates and re-used later in other templates; good examples are load balancers and VPC networks. There is another reason that may look unimportant: CloudFormation stacks have a limit of 200 resources per stack, which can easily be reached as our application grows. That is why nested stacks can be really useful.

A nested stack is simply a stack resource of type AWS::CloudFormation::Stack. Nested stacks can themselves contain other nested stacks, resulting in a hierarchy of stacks, as shown in the diagram on the right-hand side. There must be only one root stack, which is called the parent.

Passing parameters to the nested stacks

One of the biggest challenges when having nested stacks is parameters exchange between stacks. Without parameters, it would be impossible to have robust and dynamic stacks, that are scalable and flexible.

The simplest example would be deploying the same CloudFormation stack to multiple stages, like beta, gamma and prod (dev, pre-prod, prod, or any other naming convention you prefer).

Depending on which stage you deploy your application, you may want to set different properties to certain resources. For example, in the development stage, you will not have the same traffic as prod, therefore you can fine-grain the resources for your needs, and prevent spending extra money for unused resources.

Another example is when an application is deployed to various regions that have different traffic patterns and spikes. For instance, an application may have 1 million users in Europe but only 100,000 in Asia. Using stack parameters allows you to reduce the resources provisioned in the latter region, which can significantly reduce costs.

Below is a code snippet showing a simple use case where a DynamoDB table is created in a nested stack that receives the Stage parameter from the parent stack. Depending on the stage, we set a different read and write capacity on our table resource at deploy time.

Root stack

In the parent stack, we define the Stage parameter in the Parameters section and later pass it to the nested stack under the Properties section of the stack resource. The nested stack is created from the template child_stack.yml, stored in an S3 bucket.

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Root stack
Parameters:
  Stage:
    Type: String
    Default: beta
    AllowedValues:
        - beta
        - gamma
        - prod
  TestRegion:
    Type: String
Resources:
    DynamoDBTablesStack:
      Type: AWS::CloudFormation::Stack
      Properties:
        TemplateURL: https://n47-cloudformation.s3.eu-central-1.amazonaws.com/child_stack.yml
        Parameters:
            Stage:
                Ref: Stage

Child stack

In the nested stack, we define the Stage parameter, just like we did in the parent. If we do not define it here as well, the creation will fail because the parameter passed from the parent is not recognized. Whatever parameters we pass to the nested stack have to be defined in its template's Parameters section.

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Nested stack
Parameters:
  Stage:
    Type: String
    Default: beta
    AllowedValues:
        - beta
        - gamma
        - prod
Mappings:
    UsersDDBReadWriteCapacityPerStage:
        beta:
            ReadCapacityUnits: 10
            WriteCapacityUnits: 10
        gamma:
            ReadCapacityUnits: 50
            WriteCapacityUnits: 50
        prod:
            ReadCapacityUnits: 500
            WriteCapacityUnits: 1000
Resources:
    UserTable:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: user_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: user_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: !FindInMap [UsersDDBReadWriteCapacityPerStage, !Ref Stage, ReadCapacityUnits]
                WriteCapacityUnits: !FindInMap [UsersDDBReadWriteCapacityPerStage, !Ref Stage, WriteCapacityUnits]
            TableName: Users

The Mappings section in the child template is used for fetching the corresponding read/write capacity values at deploy time, when the actual value of the Stage parameter is available. More about Mappings can be found in the official documentation.

Output resources from nested stacks

Having many nested stacks usually implies cross-stack communication. This encourages more template code reuse.

We will illustrate this by extracting the name of the DynamoDB table we created in the nested stack before and passing it to a second nested stack, both as a parameter and as an exported value.

In order to expose resources from a stack, we need to define them in the Outputs section of the template. We start by adding an output resource, in the child stack, with logical id UsersDDBTableName, and an export named UsersDDBTableExport.

Outputs:
    UsersDDBTableName:
        # extract the table name from the arn
        Value: !Select [1, !Split ['/', !GetAtt UserTable.Arn]] 
        Export:
            Name: UsersDDBTableExport

Note: For each AWS account, Export names must be unique within a region.

Then we create a second nested stack, which will contain two DynamoDB tables, one named UsersWithParameter and the second one UsersWithImportValue. The former is created by passing the table name from the first child stack as a parameter, and the latter by importing the exported value UsersDDBTableExport.

(Note that this is just an example to showcase the two options for accessing resources between stacks, not a real-world scenario.)

For that, we added this stack definition in the root’s stack resources:

SecondChild:
    Type: AWS::CloudFormation::Stack
    Properties:
        TemplateURL: https://n47-cloudformation.s3.eu-central-1.amazonaws.com/child_stack_2.yml
        Parameters:
            TableName:
                Fn::GetAtt:
                  - DynamoDBTablesStack
                  - Outputs.UsersDDBTableName

Below is the entire content of the second child stack:

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Nested stack
Parameters:
    TableName:
        Type: String
        
Resources:
    UserTableWithParameter:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: customer_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: customer_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: 1
                WriteCapacityUnits: 1
            TableName: !Join ['', [!Ref TableName, 'WithParameter'] ]
    UserTableWithImportValue:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: customer_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: customer_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: 1
                WriteCapacityUnits: 1
            TableName: !Join ['', [!ImportValue UsersDDBTableExport, 'WithImportValue'] ]

Even though we achieved the same thing using nested stack outputs and exported values, there is a difference between them. An exported value is accessible to any external stack within the same region; nested stack outputs, on the other hand, can only be passed as parameters to other nested stacks within the same parent.

Notes:

  • Cross-stack references across regions cannot be created. You can use the intrinsic function Fn::ImportValue to import only values that have been exported within the same region
  • You cannot delete a stack if another stack references one of its outputs
  • You cannot modify or remove an output value that is referenced by another stack

Below are some screenshots from the AWS console, illustrating the created stacks, from the code snippets shared above:

Figure 1: root stack containing two nested stacks

Figure 2: first nested stack containing Users DynamoDB table

Figure 3: second nested stack containing UsersWithImportValue and UsersWithParameter DynamoDB tables

You can download the source templates here.


If you have any questions or feedback, feel free to comment here.

Taiko, useful toy for automation testing

Reading Time: 6 minutes

Every day we implement new features and client requirements. On every release, we want those changes to be correct and previous features to keep working; in other words, we want a stable application. That is why it is necessary for both the backend and the frontend to have tests (unit and integration tests).

The best way is to have end-to-end regression automation tests, but it is not always fun to write them. Sometimes it is complex and time-consuming, so we avoid writing them. If the workload is larger, it may require a dedicated QA team to cover all this work, follow all changes, adapt existing tests and so on.

There are a few tools that make all this work easier: browser robots that record actions on web pages and frameworks that offer good, easy ways of writing automation tests. But they are either too difficult to learn, sometimes hard to use, or not free.

That is why I chose Taiko, a free and open source browser test automation framework that makes all this work easy to do. A few features that are crucial for writing end to end automated tests in my opinion are:

  • Easy setup
  • Interactive recording
  • Smart selectors
  • Easy integration with Gauge

The best way to present all this is to go through some simple examples. I’ll use http://saucedemo.com/ to write a simple test for adding items in the cart.

I want almost everyone to be able to write tests; setup should not be a complex procedure. Taiko is a free, open-source Node.js library and it works with Chromium-based browsers. Tests are written in JavaScript or any language that compiles to JavaScript (e.g. TypeScript).

This means that to start writing tests with Taiko we need Node.js pre-installed. It is a straightforward setup (https://nodejs.org/en/download/).

For the given example I used PowerShell on Windows, but you can use any terminal application you are familiar with. The command to install Taiko is:

npm install -g taiko

After a successful installation of Taiko, we run the REPL prompt:

npx taiko

Here are two important features:

  • Interactive recorder: Taiko will record (archive) all the successful commands that we write here
  • And the second one is the use of Taiko’s API. We can list all available APIs with command
.api

or

.api <api>

All these API references are online too: https://docs.taiko.dev/api/reference.

Simple example

Let’s write one basic test for http://saucedemo.com/. By writing the following commands in the prompt, we will verify that the saucedemo login, adding a product to the cart and the basket all work:

await openBrowser();

// opens a new browser; I had Chromium and it opened without any other setup because Taiko uses the Chrome DevTools Protocol instead of WebDriver

await goto("saucedemo.com");

// navigates to / opens the web page that we want to test

var passwords = await text("_", below("Password for")).text();

In this line of code we have a few key commands:

  • var passwords – we take the data (the list of passwords) from the page and will use it to log in
  • text – a selector which identifies an element on the page; it looks for a text element matching the given text, in our example “_”
  • below – a proximity selector which performs a relative HTML element search; it searches for elements below “Password for” on the page

var usernames = await text("_", below("usernames")).text();
console.log(usernames);

// as this is JS we can use this command too; it will also be recorded. I used it to check the values; it can be removed from the final script

var username = usernames.split("\n")[1];
console.log(username);
var password = passwords.split("\n")[1];
console.log(password);
await write(username, into(textBox({id: "user-name"})));

After the username and password are read from the page, we log in:

  • write – command that types given text into the given or focused element
  • into  – selector for the element to write text into
  • textBox – selector for a text field for input, selecting it with some attribute. In our case, it will be id, but it can be any attribute too

await write(password, into(textBox({id: "password"})));
await click("LOGIN");

// again smart selector, it automatically looks for and clicks button login

Since there are multiple products and we want to test a specific one, we will use the proximity selector to add that product to the cart. If we don’t add “toRightOf”, it will click the first component with the “ADD TO CART” label on it.

await click("ADD TO CART", toRightOf("$9.99"));
await click("ADD TO CART", toRightOf("$15.99"));

To verify that the ADD TO CART functionality works, we will check whether the wanted products are in the basket:

await click(link({class: "shopping_cart_link"}));

Every command that looks for an element implicitly asserts its presence. For example, the command

await click("ADD TO CART", toRightOf("$9.998"));

will throw an error

[FAIL] Error: Element with text $9.998 not found, run `.trace` for more info.

but if we want to make a custom check, we can use any Node.js assertions:

assert.strictEqual(await text("9.99").exists(), true);
assert.strictEqual(await text("15.99").exists(), true);
await click("menu");
await click("Logout");

With all these commands, we created one basic test scenario. They are already recorded, and we can write them to a JS file to execute this test anytime:

.code testAddCart.js

And exit the recording session:

.exit

Running our previous test with:

npx taiko testAddCart.js

Other possibilities

Tests can be grouped and run with test runners. Three are supported: Gauge, Mocha and Jest. Try it with Gauge; it is an easy, straightforward procedure to set up. That way, by using Gauge, we can integrate these tests into a build pipeline in Jenkins.

Conclusion

The setup is simple, very easy and fast.

The interactive way of writing tests, seeing the result of every command in real time, is very good for learning the library. You don’t have to go through a write-build-run cycle; just write the command in the REPL and you see the results.

But selecting elements on the page was not so satisfying. Smart selectors are not so smart when there are multiple similar elements; you have to fall back to XPath or class selectors and inspect the page code for attributes and values.

Using the Vue3’s composition API in Vue2

Reading Time: 4 minutes

Vue 3 has been officially out since September 18 and along with it comes the Composition API, which is replacing the old Options API. The new Composition API is defined as a set of additive, function-based APIs that allow the flexible composition of component logic.

The Composition API was introduced because, with the old Options API, readability suffered as components grew and the code started to get messy, the code reuse patterns had some drawbacks, the support for TypeScript was limited, etc.
A visual comparison of both APIs looks something like this:

First, we must install @vue/composition-api and register it via Vue.use() before using other APIs.
Import the VueCompositionApi:

import Vue from 'vue'
import VueCompositionApi from '@vue/composition-api'

Register the plugin:

Vue.use(VueCompositionApi)

And then we are ready to start.

So, a component using the new API must contain the setup() function, which serves as the entry point for using the Composition API inside components. setup() is called before the beforeCreate hook. The component would look like this:

<template>
    <div> {{ note }} </div>
    <div> {{ data.count }} </div>
</template>

<script>
import { ref, computed, reactive, toRefs, onMounted } from '@vue/composition-api'

export default {
  setup() {
    const notes = ref([])
    async function getData() {
      notes.value = await DataService.getNotesData()
    }

    const data = reactive({
      count: 0,
      actions: ['firstAction', 'secondAction', 'thirdAction'],
      object: {
        foo: 'bar'
      }
    })
    const computedData = computed(() => notes.value)
    
    onMounted(() => {
      getData()
    })

    return {
      ...toRefs(data),
      notes,
      computedData
    }
  }
}
</script>

In the code above we can see the structure of the new API. We have the setup() function, which is exported in the script tag.
Inside the setup function we can see several familiar properties.

const notes = ref([])

This initializes a property inside the setup function scope. We must wrap it in ref if we want to make it reactive; if we don’t, changes to that variable won’t be reflected in the DOM. This is the same as initializing a variable in data() in Vue 2:

data() {
  return {
    notes: []
  }
}

As we can see, we do not have the methods section for creating functions; instead, we define the functions inside setup(). The function is then used in the mounted hook, which looks a bit different than the one in the Options API.

Some of the hooks were removed, but almost all of them are available in a similar form.

  • beforeCreate -> use setup()
  • created -> use setup()
  • beforeMount -> onBeforeMount
  • mounted -> onMounted
  • beforeUpdate -> onBeforeUpdate
  • updated -> onUpdated
  • beforeDestroy -> onBeforeUnmount
  • destroyed -> onUnmounted
  • activated -> onActivated
  • deactivated -> onDeactivated
  • errorCaptured -> onErrorCaptured

In addition, two new debug hooks were added to the composition API:

  • onRenderTracked
  • onRenderTriggered

Computed and watch are still available. In the code above we can see how computed is used to return the notes values.

reactive() is similar to ref(): if we want to create a reactive object we can still use ref(), but under the hood it just calls reactive(). On the other hand, reactive() does not work with primitive values; it takes an object and returns a reactive proxy of the original. The big difference shows when you want to access data defined using reactive(). For example, if we want to use count, we must do it like:

<div> {{ data.count }} </div>

One very important thing is to convert the reactive object into plain refs using toRefs when returning the data from the component:

return {
  ...toRefs(data)
}

This is just a small piece of the cake, something to begin with when using the Composition API in a Vue 2 application. For more, you can visit the documentation at the following link.

JavaScript Best Practices for Readable and Maintainable Code

Reading Time: 4 minutes

Let’s have a look at some coding standards that can help with:

  • Keep the code consistent
  • Easier to read and understand
  • Easier to maintain
  • Easier to refactor

These coding standards are my own personal opinion, based on what I have learned and experienced while developing and reviewing other developers’ code, and they can help with the points above.

Variables

Always use ‘const’ & ‘let’ over ‘var’

Using const helps readability as developers know it can’t be reassigned.
var and let are both used for variable declaration in JavaScript, but the difference between them is that var is function-scoped and let is block-scoped. It is too much to get into detail here; maybe I will write another post about that.

Avoid using global variables

Minimize the use of global variables. Global variables are a bad idea because they can be overwritten by other scripts.

Naming variables

Always try to come up with names that make sense and are not too long. Naming variables may be the hardest thing in coding.
Variables declared with let should be camelCase. Constants declared with const should be SNAKE_CASE (all caps) if they are at the top of the file, and camelCase otherwise.

API Calls

Pick a method and stick with it

By method I mean one of the below:

  • XMLHttpRequest
  • fetch
  • Axios
  • jQuery

So far, Axios and fetch are the preferred ways to go. The benefit of using Axios is that it has wide browser support; even IE11 can run it.

Make the calls reusable

Instead of having the calls scattered everywhere in the code, it is good to have modules for your API calls. This way it becomes easier to refactor: if something changes in the API, you only have to change it in one place.

Dom Manipulation

Use CSS classes over adding styles

For example, we have a basic form here:

<form>
  <input type='text' required>
  <button type='submit'>
    Submit
  </button>
</form>

Along with the following JavaScript code:

const input = document.querySelector('input');
const form = document.querySelector('form');
form.onsubmit = (e) => {
  e.preventDefault();
}
input.oninvalid = () => {
  input.style.borderColor = 'red';
  input.style.borderStyle = 'solid';
  input.style.borderWidth = '1px';
}

Instead of adding inline style like the above example, it is much cleaner to add CSS class to the input field like in the example below:

const input = document.querySelector('input');
const form = document.querySelector('form');
form.onsubmit = (e) => {
  e.preventDefault();
}
input.oninvalid = () => {
  input.className = 'invalid';
}
// CSS
.invalid {
  border: 1px solid red;
}

Accessing the DOM tree

Accessing the DOM tree is quite an expensive operation; it is the bottleneck of JavaScript in terms of performance. Therefore, we must strive to minimize the number of DOM accesses.

Example:

// BAD
let padding = document.getElementById("content").style.padding
let margin = document.getElementById("content").style.margin;

//GOOD
let style = document.getElementById("content").style
let padding = style.padding
let margin = style.margin

Functions

Use ES6 arrow functions where possible

Arrow functions are a more concise syntax for writing function expressions. They are anonymous and change the way this binds in functions.

//BAD
var multiply = function(a, b) {
  return a* b;
}

//GOOD
const multiply = (a, b) => a * b

Naming functions

Functions should be camelCase and should have descriptive names:

//BAD
const sMail = () => {
  //...
};
const sendmail = () => {
  //...
};

//GOOD
const sendMail = () => {
  //...
};

Functions should only do one thing

This is one of the most important rules in programming. When your function does more than one thing, it is harder to test and read. When you isolate a function to just one action, it can be refactored easily and your code reads much cleaner.

//BAD
const notifyListeners = listeners => {
  listeners.forEach(listener => {
    const listenerRecord = database.lookup(listener);
    if (listenerRecord.isActive()) {
      notify(listener);
    }
  });
}
//GOOD
const notifyActiveListeners = listeners => {
  listeners.filter(isListenerActive).forEach(notify);
}

function isListenerActive(listener) {
  const listenerRecord = database.lookup(listener);
  return listenerRecord.isActive();
}

Conclusion

Coding standards in any language can really help with the readability and the maintainability of an application. The main point is to be consistent as they really help scale up an application in a clean way.

If you take this advice, it will bring the readability and maintainability of your code to the next level. The next time you need to address an issue or implement a feature request, your journey will be fast and seamless.

Follis: A movement-based 3D input device for gymnastic ball usage

Reading Time: 7 minutes

Follis is a movement-based 3D input device/system for object manipulation in the virtual environment with the help of gymnastic balls. The system development is based on the four main aspects of UX Life-Cycle: analysis, design, prototyping and evaluation.

The number of employees doing their daily tasks on a computer is increasing, which creates several problems for them. Employers offer various countermeasures, such as standing up while on calls, walking over to colleagues’ desks and getting up every 30-45 minutes.

The main purpose of the system is to solve the immobility problems of people with prolonged sitting times through the combination of virtual reality and a gymnastic ball.

The secondary purpose is to provide a new locomotion technique by combining both artificial and physical locomotion.

During the system development, we carried out appropriate training together with a gymnastic ball trainer and a physiotherapist.

The reliability of the system was rated by 17 sports scientists and one gymnastic ball expert. The results show that the “Follis” system can provide suitable gymnastic ball exercise sessions for those who work in an office. The system can be a new solution to prolonged sitting problems and a new technology for virtual locomotion.

Background

Computer use in the office environment mostly requires sitting at a desk and working with a desktop computer. Employees are at risk of various musculoskeletal disorders, obesity, low blood flow, muscle pain, etc. due to inactivity.

The aim is to solve these problems by combining a gymnastic ball and virtual reality (VR), or at least by encouraging employees through VR. In addition, the ergonomic structure and footprint of the gymnastic ball, comparable to a chair, allow it to be used in an office setting.

The proposed solution gets users to perform gymnastic ball movements in the virtual environment; it is based on tracking users’ body movements using the gyroscope of a smartphone. We wanted to get real-time input from users and let them manipulate objects in a 3D environment. Thus the system provides locomotion in virtual environments (VEs). The purpose of the system is simply to encourage office workers to do gymnastic ball movements during the day. If office workers do a certain amount of gymnastics, it will certainly help protect them from the harm of prolonged sitting. Based on this point, the system provides gymnastic movements while playing VR games.

Story Board

Narrative Story Board

Design Solution

To design the new system as a technical solution, we agreed to track the user with a smartphone and program virtual reality games that require basic gymnastic ball movements. These virtual games require basic pelvic movements, arm extension, leg movements and adequate balance. Thus, the user takes part in gymnastic ball training without knowing it. The total training lasts 15 minutes for both experienced and inexperienced users.

Technical Flow Diagram

We designed 5 different games. Each game lasts 3 minutes. The games are designed as endless-runner games.

User Needs And Requirements

Requirements were elicited and negotiated through interviews and observation. The results of the analysis are the technical and functional requirements of a design solution to improve sitting behaviour.

Technical Requirements

  1. The system must provide an exercise pillow.
  2. The system offers sufficient space in one room.
  3. The system must provide 15 minutes of gymnastic ball training.
  4. The system should provide basic warm-up movements for pelvic tilts, hopping, and arm extensions
  5. The system must have energetic music.
  6. The system must offer a safe training period.

Functional Requirements

  1. The system must provide a gymnastic ball.
  2. The system must provide a smartphone.
  3. The system must provide a gymnastic ball cover.
  4. The system must provide a slot on the cover for the smartphone.
  5. The system must provide a gymnastic cushion.
  6. The system must provide a long-range lightning cable for smartphones.
  7. The system must provide an HTC Vive with all setup.
  8. The system must work on Unity Remote 5.
  9. The system must use Unity 3D.

Games

The mini-games have been designed and grouped to provide the different types of movements that we defined for the gymnastic ball exercises. The number of these movements (pelvic movements, arm extensions and hopping) can change depending on the game. Some of these games are primarily designed for pelvic tilts, others are combinations of all movements.

GUI

The user can simply select games from the main menu by using the HMDs controller.

Minigames

The first game, Training Center, was designed for practising ball control and getting used to the VR environment. It requires ten repetitions of front/back and left/right pelvic tilts.

Training Center

The second game – Candy Train – was designed to allow hard stretching/hopping and soft pelvic tilts

The third game – Cafe Racer – was developed to enable predominantly hard pelvic inclinations and less soft stretching/jumping movements. The game is one of the most fluid games in the system.

The fourth game – Wild West – is designed to meet all movement requirements and perform the high-intensity gym ball workout. Stretching, jumping, and pelvic tilt movements are urgently required to complete the game.

The fifth game – Tarzan – is designed to achieve the same amount of arm stretching/jumping and pelvic movements.

DEMO VIDEO

Test Results

The results were analyzed and documented. According to the analysis, the system is adapted to the needs of users to a large extent. It can provide a proper gym ball workout with tilts, hops and pelvic stretching. The overall SUS score is 70.33, which puts the system above average with a grade of “B”. The system can be used without having to learn too much beforehand. The gym ball as an input device for the 3D environment can effectively provide artificial VR locomotion. Additionally, the combination of virtual reality and gym ball use could be an excellent solution to stimulate physical gym ball training.

How to integrate GraphQL in the Microservice

Reading Time: 4 minutes

GraphQL is a query language for your APIs and a runtime for fulfilling those queries with existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more. GraphQL is designed to make APIs fast, flexible, and developer-friendly.

GraphQL SPQR

GraphQL SPQR (GraphQL Schema Publisher & Query Resolver, pronounced like speaker) is a simple-to-use library for rapid development of GraphQL APIs in Java. It works by dynamically generating a GraphQL schema from Java code.

In this tutorial, we are going to explain the simple steps needed to integrate GraphQL into your microservice.

  • Include dependencies in pom.xml
<!-- GraphQL -->
<dependency>
    <groupId>io.leangen.graphql</groupId>
    <artifactId>spqr</artifactId>
    <version>${graphql-spqr.version}</version>
</dependency>
<dependency>
    <groupId>com.graphql-java-kickstart</groupId>
    <artifactId>graphql-spring-boot-autoconfigure</artifactId>
    <version>${graphql-spring-boot-autoconfigure.version}</version>
</dependency>
  • Spring Boot Java Configuration class:
@Configuration
public class GraphQLConfiguration {
    @Bean
    public GraphQLSchema schema(GraphQLRootQuery graphQLRootQuery,
                                GraphQLRootMutation graphQLRootMutation,
                                GraphQLRootSubscription graphQLRootSubscription,
                                GraphQLResolvers graphQLResolvers) {
        GraphQLSchema schema = new GraphQLSchemaGenerator()
            .withBasePackages("com.myproject.microservices")
            .withOperationsFromSingletons(graphQLRootQuery, graphQLRootMutation, graphQLRootSubscription, graphQLResolvers)
            .generate();
        return schema;
    }

    @Bean
    public GraphQLResolvers resolvers(MyOtherMicroserviceClient myOtherMicroserviceClient) {
        return new GraphQLResolvers(myOtherMicroserviceClient);
    }

    @Bean
    public GraphQLRootQuery query(MyOtherMicroserviceClient myOtherMicroserviceClient) {
        return new GraphQLRootQuery(myOtherMicroserviceClient);
    }

    @Bean
    public GraphQLRootMutation mutation(MyOtherMicroserviceClient myOtherMicroserviceClient) {
        return new GraphQLRootMutation(myOtherMicroserviceClient);
    }

    // define your own scalar types (custom data type) if you need to.
    @Bean
    public GraphQLEnumProperty graphQLEnumProperty() {
        return new GraphQLEnumProperty();
    }

    @Bean
    public JsonScalar jsonScalar() {
        return new JsonScalar();
    }

    /* Add your own custom error handler if you need to.
    This is needed if you want to propagate any custom error messages to the client. */
    @Bean
    public GraphQLErrorHandler errorHandler() {
        ....
    }

}
  • GraphQL class for query operations:
public class GraphQLRootQuery {

    @GraphQLQuery(description = "Retrieve list of your attributes by search criteria")
    public List<AttributeDTO> getMyAttributes(@GraphQLId @GraphQLArgument(name = "id", description = "Id of your attribute") String id,
                                              @GraphQLArgument(name = "myQueryParam", description = "…") String myQueryParam) {
        return …;
    }
}
  • GraphQL class for mutation operations:
public class GraphQLRootMutation {

    @GraphQLMutation(description = "Update attribute")
    public AttributeDTO updateAttribute(@GraphQLId @GraphQLNonNull @GraphQLArgument(name = "id", description = "Id of your attribute") String id,
                                        @GraphQLArgument(name = "payload", description = "Payload for update") UpdateRequest payload) {
        return …
    }
}
  • GraphQL resolvers:
public class GraphQLResolvers {

    @GraphQLQuery(description = "Retrieve additional information")
    public List<AdditionalInfoDTO> getAdditionalInfo(@GraphQLContext AttributesDTO attributesDTO) {
        return …
    }
}

Note: All the Java classes (AdditionalInfoDTO, AttributesDTO, UpdateRequest) are just example data transfer objects and request types that need to be replaced with your own classes for the code to compile and run.

  • How to use GraphQL from client side?

Finally, let’s have a look at how GraphQL looks from the front-end side. We are using a tool called GraphiQL (https://www.electronjs.org/apps/graphiql) to test it.

  • GraphQL Endpoint: URL of your service, defaults to /graphql
  • Method: it is always POST
  • HTTP Header: You can include authorization tokens with the request
  • Left pane: the query (sent to the server wrapped in a JSON request body)
  • Right pane: response from the server, always JSON
  • Note: you get what you request; only the attributes you ask for are returned

Simple examples for query and mutation:
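
As a minimal, illustrative sketch only: assuming the query operation defined earlier is exposed as a field named myAttributes and the service runs locally, the same kind of request can be sent programmatically with the Java 11 HTTP client (the field names, the id value and the endpoint are assumptions, not part of the original setup):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GraphQLClientExample {

    public static void main(String[] args) throws Exception {
        // The GraphQL query is wrapped in a JSON object under the "query" key
        String body = "{\"query\":\"{ myAttributes(id: \\\"1\\\") { id } }\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/graphql")) // assumed endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Only the requested attributes come back in the JSON response
        System.out.println(response.body());
    }
}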

In this tutorial, you learned how to create your GraphQL API in Java with Spring Boot. But you are not limited to Spring Boot for that; you can use GraphQL SPQR in pretty much any Java environment.

Scaling Microservices with Spring Cloud Netflix

Reading Time: 10 minutes

If you need to build large distributed systems, then this is the place to be. We are going to talk about some of the components that the solution from Spring and Netflix provides and how easy it is to use them. If you follow this article, in the end, you will create a complete application with service discovery, client-side load balancing, feign clients and much more.

Before we start, let’s explain some of the terms that we are going to use in this article:

  • Eureka – a service discovery service, where every client will register itself
  • Ribbon – a client-side load balancer
  • Feign client – declarative web service client which provides communication between microservices

The picture above presents what we are going to build. We will create two user-oriented microservices, one called Supplier and the other called Order. The user will be able to place an order with a supplier, but the Supplier, in order to fulfil the order, will call the Order microservice. For the communication between Supplier and Order, we will use a Feign client in combination with service discovery enabled by Eureka. In the end, we are going to scale the Order microservice and see how the Ribbon load balancer behaves when we have more instances.

Let’s start by creating the Eureka service discovery microservice.

The easiest way is to go to the Spring Initializer and create your microservice with the following properties as you can see on the picture below:

The required dependencies for our service discovery service are only the Eureka Server.

Once you are done with this, click on generate and your project will be downloaded. Open it via your favourite IDE (I will be using IntelliJ) and there are just two more things that you need to do. In your main class you should add the following annotation @EnableEurekaServer:

package com.north;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class EurekaApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaApplication.class, args);
    }

}

One more thing we need to change is our application.yml file. By default an application.properties file is created; if that is the case, rename it to application.yml and add the following code:

server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

With this, we set the server port and the service URL, and there we have our service discovery server. Start the application, go to your browser and enter the following link: http://localhost:8761. Now we should be able to see the Eureka homepage:

As you can see, there are no instances registered at the moment. So let’s create our first instance.

We will start by creating the Order microservice. Go to the Spring Initializer and create a project with the following properties:

And we will add the following dependencies:

Let’s start by setting the name and the port of the application. Change your application.properties to application.yml and add the following code:

spring:
  application:
    name: order

server:
  port: 8082

Now the name of the application is set to order and it will run on port 8082. If this port is taken on your machine, feel free to change it. We are not dependent on this port, but as you will see, we are dependent on the application name when we want to communicate with the service.

In order to enable this instance to be discovered by Eureka we need to add the following annotation to the main class:

package com.north.order;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@EnableDiscoveryClient
@SpringBootApplication
public class OrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }

}

Now if we start the application and go back to the homepage of eureka by going to our browser and entering the following link: http://localhost:8761 we should be able to see that this instance is registered to Eureka.

Since we confirmed that this instance is registered to Eureka we can now create an endpoint from where an order can be placed. First, let’s create an entity Order:

package com.north.order.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Order {

    private String productName;
    private int quantity;
}

It is a simple entity that contains the name of the product and how many pieces of it we want to order. The rest controller should contain the following logic:

package com.north.order.ctrl;

import com.north.order.domain.Order;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Slf4j
public class OrderController {

    @PostMapping(value = "/order")
    ResponseEntity<Void> placeOrder(@RequestBody Order order) {
        log.info("Placing an order for product: {} with quantity: {}", order.getProductName(), order.getQuantity());
        return ResponseEntity.ok().body(null);
    }
}

You can test this endpoint by using Postman or a similar tool, but ultimately we want the Supplier microservice to call it.
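
For a quick manual check, a minimal sketch like the one below can also be used; it assumes the Order entity shown above, the default port 8082 and the spring-web dependency the project already uses (purely illustrative, not part of the original setup):

import com.north.order.domain.Order;

import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class OrderEndpointSmokeTest {

    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();

        // Order is the entity defined above: product name and quantity
        Order order = new Order("bread", 300);

        ResponseEntity<Void> response =
                restTemplate.postForEntity("http://localhost:8082/order", order, Void.class);

        // Expect 200 OK plus the "Placing an order ..." log line in the Order console
        System.out.println(response.getStatusCode());
    }
}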

Now that we are done with the Order microservice, let’s build the Supplier. Again we will open the Spring Initializer and create a project with the following properties:

And we will have the following dependencies:

Generate the project and import it into your IDE. First, let’s change the application.properties file by changing its extension to yml and adding the following code:

spring:
  application:
    name: supplier
server:
  servlet:
    context-path: /supplier

With this, we have set the application name and a context path. Since we didn’t change the port here, the default 8080 will be used. In order to register this instance with Eureka and to be able to use Feign clients, we need to add the following two annotations to our main class:

package com.north.supplier;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class SupplierApplication {

    public static void main(String[] args) {
        SpringApplication.run(SupplierApplication.class, args);
    }

}

The next thing is to create the same Order entity as in the previous microservice.

package com.north.supplier.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Order {

    private String productName;
    private int quantity;
}

For communication with the Order microservice we will create a feign client called OrderClient:

package com.north.supplier.service;

import com.north.supplier.domain.Order;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;

@FeignClient("order")
public interface OrderClient {

    @PostMapping("/order")
    void performOrder(@RequestBody Order order);
}

As a value in the @FeignClient annotation, we need to use the application name of the microservice that we want to communicate with, in our case Order. The method written here is the one that will call the previously exposed endpoint in the Order microservice. Let’s create a service that will use this feign client and execute an order:

package com.north.supplier.service;

import com.north.supplier.domain.Order;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

@Slf4j
@Service
@RequiredArgsConstructor
public class OrderService {

    private final OrderClient orderClient;

    public void placeOrder(Order order) {
        log.info("Requesting order ms to place an order");
        orderClient.performOrder(order);
    }
}

Finally, we will expose one endpoint that we can use to test this scenario:

package com.north.supplier.ctrl;

import com.north.supplier.domain.Order;
import com.north.supplier.service.OrderService;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequiredArgsConstructor
@Slf4j
public class OrderController {

    private final OrderService orderService;

    @RequestMapping(value = "/order")
    public ResponseEntity<Void> placeOrder(@RequestBody Order order) {
        orderService.placeOrder(order);
        return ResponseEntity.ok().body(null);
    }
}

Now that we are done, let’s start the application. First, if we check the Eureka homepage, we should be able to see this instance registered as well. You can also see this in the console where the Supplier is started:

2020-09-20 20:02:43.907  INFO 7956 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_SUPPLIER/host.docker.internal:supplier: registering service...
2020-09-20 20:02:43.911  INFO 7956 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_SUPPLIER/host.docker.internal:supplier - registration status: 204

To test the complete scenario, make sure that all three applications are started and that the Order and Supplier are registered with Eureka. Using Postman, I will send a POST request to the endpoint on the Supplier microservice, and I should be able to see the order being placed in the Order microservice:

Just make sure that you have added a header with key Content-Type and value application/json in your Headers tab. What should happen if we execute this request? In the Supplier microservice console we should see the following log:

2020-09-20 20:20:36.674  INFO 19420 --- [nio-8080-exec-4] com.north.supplier.service.OrderService  : Requesting order ms to place an order

In the Order microservice console we should see:

2020-09-20 20:20:36.678  INFO 17800 --- [io-8082-exec-10] com.north.order.ctrl.OrderController     : Placing an order for product: bread with quantity: 300

At this point, we have managed to create three microservices: two for business purposes and one for service discovery. We used a Feign client for the communication between the microservices. At some point, if we decide to grow this application, there are too many orders to be executed and we add some complex logic to our Order microservice, we will reach a point where a single Order instance won’t be able to execute all the orders. Let’s see what will happen if we scale our Order microservice.

First, stop the Order microservice from your IDE. Just make sure that Eureka and the Supplier are still running. Now go to the folder of the Order project (something like …\Documents\blog\order) and open three command prompt windows at that location. In the first one, we will type the following command:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8084"

in the second:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8085"

in the third:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8086"

It should be something like this:

Three instances of the application should now be up and running on the ports that we previously specified. If you open the Eureka home page again, you should be able to see all three instances registered. Now go back to Postman and execute the same POST call to the Supplier as we did previously, and do this several times as fast as possible. If you take a look at the command prompt windows that we opened, you should be able to see that a different instance of the Order microservice is called every time. This is provided out of the box by Ribbon on the client-side (in this case the Supplier microservice), without adding any additional code. As mentioned before, we do not depend on the port but use the application name for the Supplier to send requests to the Order.

To summarize, our Supplier microservice became aware of all the instances and now sends each request to a different instance of the Order, so the load is balanced.

Hope that you enjoyed this article and had fun with it. The code is available on our N47 Bitbucket:

iOS Unit Tests – My story

Reading Time: 5 minutes

In my last job interview – almost 2 years ago – I received a question about writing unit tests in iOS code. With the confidence of a developer with 8 years of experience in the iOS branch, my answer and my opinion at that moment were that if the developers are highly skilled and write good code, writing unit tests is not necessary. My answer continued with a conclusion: if the company has a testing team, why should we (iOS developers) take over their job? Looking back at that answer from today’s perspective, my feelings about the topic are mixed: first of all, I’ve learned more about the importance of unit tests, and secondly, I’ve continued to work in an environment where we are not obligated to write tests.

Some basics

We should consider our code as pieces of code – called UNITS. The purpose of a unit test is to validate our code, which allows us to be sure that our code meets the design and fulfils the goal.

In Xcode and iOS, unit tests are written in a separate target. The most important thing is the XCTest framework. The basic rule is that only the methods whose name starts with the word test will be treated as unit tests by the test runner. Here is an example:

func testFormatForCard() {
    let formatter = DateFormatter()
    formatter.dateFormat = "ddMMyyyy"
    let date = formatter.date(from: "28061978")!
    XCTAssertEqual(date.formatForCard(), "28.06.1978")
}

Once a test method is written with the proper semantics, the method is marked with a rhombus sign on the left side:

Unit Tests with rhombus sign

We can run the tests to see if our code is good or if something is not working. If the test passes, the empty rhombus sign is replaced with a green checkmark sign:

Figure 2: Test passed

In case of a failing test, we have an assertion failure and a red sign:

Figure 3: Failed unit test

We can see in image 3 that the programmer intentionally entered the wrong output date 29.06.1978. That’s the way we should think when writing unit tests. First, we have to write the failing state, and after that, we should enter the expected correct output. If the test passes in the second case, then we have created a useful unit test for that piece of code (unit). The general idea behind this is that if we change something in our code and unintentionally make a mistake, the test should fail and warn us to fix the code.

Unit Test in practice

1. Code Coverage

There is a built-in option inside Xcode for checking the code coverage of the tests. The ideal scenario is that 100% of our code is covered by unit tests. But is this really necessary? Will we be better programmers if our entire codebase is backed by tests? Could the time spent covering the tiniest pieces of code with tests be better spent elsewhere?

In my opinion, writing unit tests is a philosophy, and knowing the principles leads us to write quality code. That is, of course, our goal as programmers. Covering the most important parts of the code, especially the parts that change often, like networking managers and parsers, is a better option than trying to be perfect and aiming for 100% coverage.

2. Test Driven Development

The popular blog website for programmers Ray Wenderlich emphasizes the FIRST principles as a good guideline to follow when writing unit tests. The basics of these principles are that a test should be fast, autonomous and repeatable, the output should be either “pass“ or “fail”, and ideally, the tests should be written before coding – Test Driven Development (TDD).

TDD recommends writing tests before starting to fix a bug in the code. My opinion on this topic is also a mixed approach. Depending on the time you have, if the deadline is not close on the horizon, you can write tests before coding or before starting to fix bugs in the code, but that’s not always possible, and you won’t make a mistake if you skip this step sometimes.

Conclusion

I can say that writing unit tests is good for every programmer on their way to becoming great. The quality of the code can improve dramatically by using unit tests. The philosophy of writing tests, and thinking about how the code should be structured to allow the tests to pass, will make you write cleaner, better-structured code, use more protocols and create reusable classes… As in strategy games, you shouldn’t always be a slave to the principles – the most important thing is to fulfil the goals in a quality manner. If you have enough time and no strict project deadline, you can aim for bigger code coverage, but something around 75% is good enough.

The practical guide – Part 2: MVP -> MVVM

Reading Time: 5 minutes

The practical guide – Part 1: Refactor android application to follow the MVP design pattern

This is the second part of the practical guide. In the first part, we talked about refactoring an Android application to MVP. Now, instead of refactoring the base code to MVVM, I would like to refactor the MVP application to MVVM. That way, we will learn the differences between MVP and MVVM.

Why MVVM?
First, I should tell you that Google accepted MVVM as the preferred design pattern for building Android applications, and they have built tools that help us follow this pattern. This is a great reason to learn and use this pattern, but why would Google choose MVVM over MVP (or other design patterns)? Well, they know best, but my opinion is that they chose MVVM because it has one less dependency, due to the difference in communication between the ViewModel and the View. I will explain this in the next section.

Difference between MVP and MVVM
As we know, MVP stands for Model-View-Presenter, while MVVM stands for Model-View-ViewModel. So, the Model and the View in MVVM are the same as in the MVP pattern. The only difference remains between the Presenter and the ViewModel. More precisely, the difference is in the communication between the View and the ViewModel/Presenter.

As you can see in the diagrams, the only difference is the arrow from the Presenter to the View. What does it mean? It means that in MVP you have an instance of the Presenter in the View, and an instance of the View in the Presenter, hence the double arrow in the diagram. In MVVM you only have an instance of the ViewModel in the View. But how does the ViewModel communicate with the View? How can the View know when the ViewModel has made changes to the Model? For that, we need the Observer pattern. We have observables (subjects) in the ViewModel, and the View subscribes to these observables. So, when an observable changes, the View is automatically informed about that change and can update its views.

For a practical implementation of this Observer pattern, we have to get help either from an external library like RxJava, or we can use the new Architecture Components from Android. We will use the latter in this example.

Refactoring

MVP classes
MVVM Classes

First, we can get rid of the MainPresenter and MainView interfaces. We can have only one new class, MainViewModel, that replaces the Presenter. We then extend MainViewModel from androidx.lifecycle.ViewModel. This is the first class that we will use from the Android lifecycle components; it helps us deal with the lifecycle of the view model. It survives configuration changes, so it is a nice place for storing UI-related data. Next, we will add quoteDao and quotesApi fields. We will initialize them with setters instead of the constructor, because the creation of the ViewModel is a little bit different. We don’t have the MainView anymore, and we also don’t need the bindView() and dropView() methods.

Next, we will create the observable objects. These are the objects that we want to display in the MainActivity, wrapped with androidx.lifecycle.LiveData or some other implementation of LiveData. This class helps us with the implementation of the observer pattern. We will create the objects in the MainViewModel, and we will observe them in the MainActivity. We want to display a list of Quote objects, so we will create a MutableLiveData<List<Quote>> object – MutableLiveData because we want to change the value of the object manually.
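To make this more concrete, here is a rough sketch of how the skeleton of such a MainViewModel could look (QuoteDao, QuotesApi and Quote are the types from Part 1; the exact names and visibility in your code may differ):

import java.util.List;

import androidx.lifecycle.MutableLiveData;
import androidx.lifecycle.ViewModel;

public class MainViewModel extends ViewModel {

    // Observable state: the MainActivity subscribes to this and re-renders the list on every change
    public final MutableLiveData<List<Quote>> quotesLiveData = new MutableLiveData<>();

    private QuoteDao quoteDao;
    private QuotesApi quotesApi;

    // Setters instead of constructor injection, because the ViewModel is created by a ViewModelProvider
    public void setQuoteDao(QuoteDao quoteDao) {
        this.quoteDao = quoteDao;
    }

    public void setQuotesApi(QuotesApi quotesApi) {
        this.quotesApi = quotesApi;
    }
}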

getAllQuotes() will be very similar to the one in the Presenter, minus the interaction with the MainView. So, instead of:

if (view != null) {
  view.displayQuotes(response.body());
}

we will have:

quotesLiveData.setValue(response.body());

We will also change the implementation of DatabaseQuotesAsyncTask: instead of passing in the MainView, we will create a new interface that gives us the quotes from the async task, and we will pass an implementation of this interface instead. In that implementation, we will update quotesLiveData, same as above.
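As a rough sketch, such a callback interface could look like the following (the name OnQuotesLoadedListener and the way it is wired into DatabaseQuotesAsyncTask are made up for illustration):

import java.util.List;

// Hypothetical callback interface, passed into DatabaseQuotesAsyncTask instead of the MainView
public interface OnQuotesLoadedListener {

    void onQuotesLoaded(List<Quote> quotes);
}

In the MainViewModel, the implementation of this listener would simply call quotesLiveData.setValue(quotes), exactly like in the Retrofit callback above.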

In the MainActivity, we remove the implementation of the MainView. We won’t need to override the onDestroy() method anymore. We can replace the MainPresenter with the MainViewModel, which we create as follows:

viewModel = new ViewModelProvider(this, new ViewModelProvider.NewInstanceFactory()).get(MainViewModel.class);
viewModel.setQuoteDao(quoteDao);
viewModel.setQuotesApi(quotesApi);

Then we will observe the quotesLiveData observable and display the list:

viewModel.quotesLiveData.observe(this, quotes -> {
    QuotesAdapter adapter = new QuotesAdapter(quotes);
    rvQuotes.setAdapter(adapter);
});

And in the end, we call viewModel.getAllQuotes() to fetch the quotes.

And that’s it! Now we have an application that follows the MVVM design pattern. You can check the full code here.

Automate Processes with Camunda

Reading Time: 5 minutes

Overview

Camunda BPM is a lightweight, open-source platform for Business Process Management. It ships with tools for creating workflow and decision models, operating deployed models in production, and allowing users to execute the workflow tasks assigned to them. It is developed in Java and released as open-source software under the terms of the Apache License.

Modeling your first process

To show how Camunda works and what it looks like, I will use this simple process: let us imagine that you want to introduce a review process on your Twitter account and have every tweet go through it.

One way to manage this is to build a web application from scratch for this scenario. But we can also model this process with the Camunda Modeler and utilize Camunda for the workflow.

The following image shows one way to model this process as a standard BPMN model using the Camunda Modeler:

Business Process Model and Notation (BPMN) for the above process

In this diagram, the process is started when someone writes a new tweet. After that, we have a human task where someone has to review the tweet and decide its approval status. Then we have two possible options: if the tweet is approved, a service task is called that will publish it on Twitter; if it is rejected, we again call a service task, but this time we notify the user that their tweet was rejected.

I will go through all of these steps in more detail.

Starting the process

Camunda processes can be started programmatically using one of the supported SDKs, like the Java API, or by using the Camunda Tasklist GUI that comes out of the box. In this case, I will use the Camunda Tasklist to start a new tweet.
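For completeness, here is a rough sketch of how the same process could also be started programmatically through the Java API, assuming the Camunda Spring Boot starter is on the classpath so that RuntimeService can be injected. The process key "tweetApproval" is an assumption and has to match the id defined in the BPMN model:

import java.util.HashMap;
import java.util.Map;

import org.camunda.bpm.engine.RuntimeService;
import org.springframework.stereotype.Service;

@Service
public class TweetProcessStarter {

    private final RuntimeService runtimeService;

    public TweetProcessStarter(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    public void startTweetReview(String content) {
        // The tweet content becomes a process variable, read later by the delegates
        Map<String, Object> variables = new HashMap<>();
        variables.put("content", content);
        // "tweetApproval" is a hypothetical key - use the process id from your own BPMN model
        runtimeService.startProcessInstanceByKey("tweetApproval", variables);
    }
}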

Working on the human task

Human tasks are tasks that must be completed manually by some user. This can be something as simple as filling in a form, or something like actually shipping an item somewhere. They are visible in the Camunda Tasklist GUI, and users can assign a certain task to themselves and complete it.

In our Camunda BPMN model, the next step in the process is a human task, in which we want to review the tweet. The following image shows how human tasks look by default in the Camunda Tasklist:

Automating service tasks

A service task is used to invoke some service; this can be some Java code or an asynchronous external worker.

After the tweet is reviewed, we have a ‘conditional flow’ in Camunda which, depending on whether the tweet was approved or not, decides how the flow should continue. In both cases, our flow continues with a service task.

In our case, we have two separate service tasks. One is called when a tweet is rejected and will send a notification, while the other one is used when the tweet is approved and will publish it on Twitter.

First, let us take a look at the service tasks for sending rejection notification:

import lombok.extern.slf4j.Slf4j;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;
import org.springframework.stereotype.Service;

@Slf4j
@Service("emailAdapter")
public class RejectionNotificationDelegate implements JavaDelegate {

  @Override
  public void execute(DelegateExecution execution) throws Exception {
    // Read the process variables set earlier in the flow
    String content = (String) execution.getVariable("content");
    String comments = (String) execution.getVariable("comments");

    log.info("Hi!\n\n"
           + "Unfortunately your tweet has been rejected.\n\n"
           + "Original content: {}\n\n"
           + "Comment: {}\n\n"
           + "Sorry, please try with better content the next time :-)", content, comments);
  }
}

In this code, we obtain process variables like the tweet content and the rejection comments, and we log them to the console. This logic can of course be extended to send actual emails; the important thing here is that in order to implement a Camunda service task in Java, we only need to implement the JavaDelegate interface and override the execute method.

The next code snippet is the one for publishing the tweet:

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;
import twitter4j.Twitter;
import twitter4j.TwitterFactory;
import twitter4j.auth.AccessToken;

public class TweetContentDelegate implements JavaDelegate {

  @Override
  public void execute(DelegateExecution execution) throws Exception {
    String content = (String) execution.getVariable("content");

    // Placeholder credentials - replace them with your real Twitter API keys
    AccessToken accessToken = new AccessToken("token", "secret");
    Twitter twitter = new TwitterFactory().getInstance();
    twitter.setOAuthConsumer("consumer-key", "consumer-secret");
    twitter.setOAuthAccessToken(accessToken);

    twitter.updateStatus(content);
  }
}

As in the previous code, we also have to implement JavaDelegate and override execute method.

More Camunda examples can be found on their official GitHub repository: https://github.com/camunda/camunda-bpm-examples

Conclusion

In the above diagram, we have only seen one example of a process, but Camunda offers a lot more features for modeling business processes and a lot of out-of-the-box implementations that save a lot of time. Also, almost everything is customizable.

If your company has a lot of processes that can be modeled as a BPMN process or processes that require human intervention then Camunda can be the right tool for the job.

In my opinion, it’s definitely worth having a basic understanding of how Camunda works in order to be able to spot a use-case for this tool.

RECOMMENDATION SYSTEMS AND COLLABORATIVE ALGORITHM

Reading Time: 8 minutes

WHY DO WE NEED RECOMMENDATION SYSTEMS?

Keeping up with technology, which is growing rapidly nowadays, represents a huge challenge for humanity. Software systems are creating a dynamic world which undoubtedly facilitates human life and enables its improvement to the highest point of a digital existence.

Many mobile and web systems offer easy usage and search through the internet. They are a necessary segment of education, health, employment, trade and, of course, fun. In such a fast and dynamic life, we need more and more systems that help us quickly find relevant recommendations and information, all in order to save us time. Recommendation systems are usually built on collaborative filtering algorithms or content-based methods.

RECOMMENDATION SYSTEMS IN REAL LIFE

In real life, people are overwhelmed with making a lot of decisions, regardless of their importance, minor or major. Understanding human choices is a field studied by cognitive psychology.

One of the most important factors influencing the decisions an individual makes is past experience, i.e. the decisions a person made in the past affect those they will make in the future.

Human actions also depend on the experience gained in interactions with other people. The opinion of others affects our lives without us being aware of it. Relationships with friends affect which neighbourhood we will live in, which place we will visit during our vacation, in which bar we will have a drink, etc.

A real life recommendation system

If one person has positive experiences with another, then that person has gained their trust and authority, and they are more likely to follow their advice and choose the decisions that person chose when they were in a similar situation.

RECOMMENDATION SYSTEMS IN THE DIGITAL WORLD

All large companies and complex systems use collaborative filtering; an example of this is the social network Facebook with the phrase “People you may know”. Facebook is a hugely complex system with a massive database, which is why it needs to optimize the user data set so that it can provide precise recommendations. They also have collaborative systems for the news feed, as well as for the games, fun pages, groups and events sections.

Another well-known technology and media service provider which uses such collaborative systems is Netflix, with the phrase “Because you watched”. Netflix uses algorithms and machine learning, probably based on genres, the history of watched movies, ratings, and the ratings of users that have a similar content taste to ours.

Then there is Amazon, the multinational technology company, which uses these algorithms for product recommendations for its clients. They use the item-to-item approach for the recommendation.

Hint: Click on the picture if you want to know more about Item-to-Item Collaborative Filtering

Last but not least is the most successful business social network, LinkedIn, which uses phrases such as “People in the Information Technology & Services industry you may know”, “People you may know from Faculty XXX”, “Trending pages in your network”, “Online events for you” and a number of others.

I did some research on the collaborative filtering algorithm, so I will explain in depth how this algorithm works; please read the analysis in the sections below.

RECOMMENDATION SYSTEM AND COLLABORATIVE FILTERING

Based on the selected data processing algorithm, the systems use different recommendation techniques.

  • Content-based systems – people who liked this also like that as well
  • Collaborative filtering – analyzing a huge amount of information
  • Hybrid recommendation systems

COLLABORATIVE FILTERING – DETAILED ANALYSIS

On a coordinate system, we can show the popularity of products, as well as the number of orders.

The X-axis presents the product curve, which shows the popularity of a variety of products. The most popular products are in the left part – at the head of the curve – and the less popular ones are in the right part, the long tail. By popularity, I mean how many times the product has been ordered and viewed by others.

The Y-axis represents the number of orders and product views over a certain time interval.

By analyzing the curve, it is noticeable that frequently ordered products are usually considered the most popular, and those that have not been ordered recently are omitted. That is where the collaborative filtering algorithm comes in.

A measure of similarity describes how similar two data objects are to each other. The measure of similarity in a dataset is usually described as a distance with dimensions which represent the characteristics of the objects being compared. If the distance is small, the degree of similarity is large, and vice versa. Similarities are very subjective and highly dependent on the domain of the system.

The similarities are in the range of 0 to 1 [0, 1].

Two main similarities:

  • Similarity = 1 if X = Y
  • Similarity = 0 if X != Y

Collaborative filtering processes the similarity of the data we have with the help of several measures, such as cosine similarity, Euclidean distance, Manhattan distance, etc.
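For reference, for two rating vectors A and B with n components, these distance measures have the standard definitions:

d_{\text{Euclidean}}(A, B) = \sqrt{\sum_{i=1}^{n} (A_i - B_i)^2}

d_{\text{Manhattan}}(A, B) = \sum_{i=1}^{n} \lvert A_i - B_i \rvert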

COLLABORATIVE FILTERING – COSINE SIMILARITY

In the beginning, we need to have a database and characteristics of the items.

For the cosine similarity implementation, we need a similarity matrix built from the user database. In this matrix, vector A represents the products and vector B the users, so the matrix has the format A×B. The fields of the matrix hold the grade/rating that a given user has given to a given product.

Therefore, we can imagine that we have users from 1 to n {1, …, n} and grades/ratings on the products in {1, …, 10}. Every row represents a different user, and every column represents one product. Every field of the matrix holds the grade/rating that the user has entered for that product. Now, with this generated matrix, we can use the formula for finding the similarity between the users:

STEP 1:

Similarity (UserN, User1) is computed with the standard cosine similarity formula applied to the two users’ rating vectors.
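In LaTeX notation, the cosine similarity between the rating vectors of User N and User 1 over the n products is:

\text{Similarity}(U_N, U_1) = \cos(\theta) = \frac{U_N \cdot U_1}{\lVert U_N \rVert \, \lVert U_1 \rVert} = \frac{\sum_{i=1}^{n} U_{N,i}\, U_{1,i}}{\sqrt{\sum_{i=1}^{n} U_{N,i}^{2}} \; \sqrt{\sum_{i=1}^{n} U_{1,i}^{2}}}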

 STEP 2:

In step 1, we can see that User N has the most similarities with User 2, but we can also see that some product ratings are missing in the data, so we should compute the priority of the products that User N has not rated. For that we need the values of the users most similar to User N, and those are User 2 and User 4. The following formula should be used:

Priority (product) = User2 (value*similarity) + User4 (value*similarity).

Example:

Priority(product3) = 8 * 0.66 = 5.28

Priority(product4) = 8 * 0.71 = 5.68

Priority(product5) = 7 * 0.71 + 8 * 0.66 = 10.25

STEP 3:

If we want to recommend two products to User N, these will be product5 and product4.

CONCLUSION:

Similarity theorems have their advantages and disadvantages, depending on the data set they are applied to. From the above analysis, we can conclude that if the data contains zero values and is sparsely distributed, we use cosine similarity, since it works on the non-zero values. Otherwise, if the data is densely distributed, contains non-zero values, and we care about diversity rather than pure similarity of users/products, then we use measures such as Euclidean distance. Such systems are under constant pressure from the large volumes of data in their databases and will face even more challenges due to the daily increasing volume of data. Therefore, there is a growing need for new technologies that will dramatically improve the scalability of recommendation systems.

QUESTION: WHAT WILL HAPPEN IN THE FUTURE?
ANSWER: ONLY TIME WILL TELL.

4 steps to start building apps with Flutter

Reading Time: 2 minutes

As a front-end developer, I was always curious about mobile apps and wanted to build one. Over the last years, I tested multiple frameworks, from Ionic to React Native, and to be honest, I was never satisfied. Until one day, by accident, I tried FLUTTER and this happened:

Flutter is Google’s UI toolkit for building beautiful, natively compiled applications for mobile, web, and desktop from a single codebase.

From Flutter website

Just reading this sentence blew my mind and I was in. After two months of playing around with the framework, I would say it’s the one that will take over in the next years for sure. Let’s jump in and see how to start with it.

1 – Download the Flutter SDK

Download the stable version and add it to your PATH environment variable. The download link is here.

2 – Run Flutter Doctor

flutter doctor

This command is the most important one, as it checks your environment and displays a report of the status of your Flutter installation. Do not forget to check the output carefully to know what is still missing.

3 – Start Coding

import 'package:flutter/material.dart';

void main() async {
  runApp(
    MaterialApp(
      debugShowCheckedModeBanner: false,
      home: Scaffold(
        body: MyApp(),
      ),
    ),
  );
}

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  @override
  Widget build(BuildContext context) {
    return Text(
           'Click me!',
           style: TextStyle(
                  fontSize: 60.0,
                  fontWeight: FontWeight.bold,
                ),
           );
  }
}

4 – Enjoy it

Multitenancy with Spring Boot

Reading Time: 7 minutes

Why should you consider implementing multitenancy in your project?

  • Cost: Multi-tenant architecture allows the sharing of resources, databases, and the application itself, thus the cost to run the system is fixed.
  • Maintenance: Users do not have to pay a considerable amount of fees to keep the software up to date. This reduces the overall cost of maintenance for each tenant.
  • Performance: Easier to assess and optimize speed, utilization, response time across the entire system, and even update the technology stack when needed.

In this blog we will implement multitenancy in our Spring Boot project.

Let’s create a simple Spring Boot project from start.spring.io, with only basic dependencies (Spring Web, Spring Data JPA, Spring Configuration Processor, MySQL Driver).

The good thing about implementing multitenancy is that we do not need additional dependencies.
We will split this example into two parts. In the first one, we will explain the idea/logic behind it, split the approach into 7 configuration steps and explain every step. In the second part, we will see how it’s implemented in real life and we will test the solution.

1. Let’s start with creating Tenant Storage. We will use it for keeping the tenant value while the request is executing.

public class TenantStorage {

    private static ThreadLocal<String> currentTenant = new ThreadLocal<>();

    public static void setCurrentTenant(String tenantId) {
        currentTenant.set(tenantId);
    }

    public static String getCurrentTenant() {
        return currentTenant.get();
    }

    public static void clear() {
        currentTenant.remove();
    }
}

2. Next, we will create the Tenant Interceptor. For every request, we will set the value at the beginning and clear it at the end. As you can see in the Tenant Interceptor, I decided for this demo to fetch the value of the tenant from a request header (X-Tenant); this is up to you. Just keep an eye on data security when using this in production. Maybe you want to fetch it from a cookie or some other header name.

@Component
public class TenantInterceptor implements WebRequestInterceptor {

    private static final String TENANT_HEADER = "X-Tenant";

    @Override
    public void preHandle(WebRequest request) {
        TenantStorage.setCurrentTenant(request.getHeader(TENANT_HEADER));
    }

    @Override
    public void postHandle(WebRequest webRequest, ModelMap modelMap) {
        TenantStorage.clear();
    }

    @Override
    public void afterCompletion(WebRequest webRequest, Exception e) {

    }
}

3. Next thing is to add the tenant Interceptor in the interceptor registry. For that purpose, I will create WebConfiguration that will implement WebMvcConfigurer.

@Configuration
public class WebConfiguration implements WebMvcConfigurer {

    private final TenantInterceptor tenantInterceptor;

    public WebConfiguration(TenantInterceptor tenantInterceptor) {
        this.tenantInterceptor = tenantInterceptor;
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addWebRequestInterceptor(tenantInterceptor);
    }
}

4. Now, let’s update the application.yml file with some properties for the database connections.

tenants:
  datasources:
    n47schema1:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema1?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
    n47schema2:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema2?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
spring:
  jpa:
    database-platform: org.hibernate.dialect.MySQL5Dialect

5. Next, we will wrap the tenants’ values into a map with key = tenant name and value = data source, in DataSourceProperties.

@ConfigurationProperties(prefix = "tenants")
@Component
public class DataSourceProperties {

    private Map<Object, Object> dataSources = new LinkedHashMap<>();

    public Map<Object, Object> getDataSources() {
        return dataSources;
    }

    public void setDataSources(Map<String, Map<String, String>> datasources) {
        datasources.forEach((key, value) -> this.dataSources.put(key, convert(value)));
    }

    public DataSource convert(Map<String, String> source) {
        return DataSourceBuilder.create()
                .url(source.get("jdbcUrl"))
                .driverClassName(source.get("driverClassName"))
                .username(source.get("username"))
                .password(source.get("password"))
                .build();
    }
}

6. Afterwards, we should create DataSource Bean, and for that purpose, I will create DataSourceConfig.

@Configuration
public class DataSourceConfig {

    private final DataSourceProperties dataSourceProperties;

    public DataSourceConfig(DataSourceProperties dataSourceProperties) {
        this.dataSourceProperties = dataSourceProperties;
    }

    @Bean
    public DataSource dataSource() {
        TenantRoutingDataSource customDataSource = new TenantRoutingDataSource();
        customDataSource.setTargetDataSources(dataSourceProperties.getDataSources());
        return customDataSource;
    }
}

7. At last, we will extend the AbstractRoutingDataSource and implement our lookup key.

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return TenantStorage.getCurrentTenant();
    }

}

And we are done with the first part.

Let’s see how it looks in the real world:

For this example, we will use two schemas from the same database instance, we will create a user and get all users. Also, I will show you how you can implement Flyway and test the solution.

First, let’s configure our databases. In my local instance of MySQL server, we will create two schemas: n47schema1 and n47schema2.

Next step is to execute this CREATE statement on both schemas:

CREATE TABLE `users` (
	`id` INT(11) NOT NULL AUTO_INCREMENT,
	`name` VARCHAR(64) NOT NULL DEFAULT '' COLLATE 'utf8_general_ci',
	PRIMARY KEY (`id`)
);

Then, we will create two APIs, one for creating a user, and the other one to fetch all users.

@RestController
@RequestMapping("/users")
public class UserController {

    private final UserRepository userRepository;

    public UserController(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @PostMapping
    public UserDomain addUser(@RequestBody UserRequestBody userRequestBody) {
        UserDomain userDomain = new UserDomain(userRequestBody.getName());
        return userRepository.save(userDomain);
    }

    @GetMapping
    public List<UserDomain> getAll() {
        return userRepository.findAll();
    }
}

Also we need to create UserDomain, UserRepository and UserRequestBody.

@Entity
@Table(name = "users")
public class UserDomain {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    public UserDomain() {
    }

    public UserDomain(String name) {
        this.name = name;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

public interface UserRepository extends JpaRepository<UserDomain, Long> {
}

public class UserRequestBody {
    private String name;

    public UserRequestBody() {
    }

    public UserRequestBody(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

And we are done with coding.

We can run our application and start making a request.

First, let’s create some users with a POST request to http://localhost:8080/users. The most important thing is not to forget that we need to provide the header X-Tenant with the value n47schema1 or n47schema2.

We will create two users for tenant n47schema1: Antonie and John. Example:

After that, we will change the X-Tenant header value to n47schema2 and create two users: William and Joseph.

You will notice that the IDs returned in the response are the same as for the first tenant, since each schema maintains its own users table and auto-increment counter. Now let’s fetch the users through the API.

When you make a GET request to http://localhost:8080/users with header X-Tenant having value n47schema1 you will fetch the users from the n47schema1 schema, and when you make a request with a header value n47schema2 you will fetch from the n47schema2 schema.

You can also check the data in the database to be sure that it is stored correctly.

You can always set a fallback if the X-Tenant header is not provided or has a wrong value.
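One possible way to do that – just a sketch, assuming falling back to n47schema1 is acceptable – is to register a default target data source when building the routing data source in DataSourceConfig; AbstractRoutingDataSource uses it whenever the lookup key is missing or unknown:

@Bean
public DataSource dataSource() {
    TenantRoutingDataSource customDataSource = new TenantRoutingDataSource();
    customDataSource.setTargetDataSources(dataSourceProperties.getDataSources());
    // Hypothetical fallback: requests without a (known) X-Tenant header are routed to n47schema1
    customDataSource.setDefaultTargetDataSource(dataSourceProperties.getDataSources().get("n47schema1"));
    return customDataSource;
}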

As the last thing, I will show you how you can implement Flyway with multitenancy. First, you need to add Flyway as a dependency and disable it in the application.yml:

spring:
  flyway:
    enabled: false

Then add a @PostConstruct method in the DataSourceConfig configuration:

@PostConstruct
public void migrate() {
    for (Object dataSource : dataSourceProperties.getDataSources().values()) {
        DataSource source = (DataSource) dataSource;
        Flyway flyway = Flyway.configure().dataSource(source).load();
        flyway.migrate();
    }
}

And we are done. I hope this blog helps you understand what multitenancy is and how it’s implemented in a Spring Boot project.

Download the source code

The project is freely available on our Bitbucket repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.

https://bitbucket.org/n47/spring-boot-multitenancy/src/master/

Server-side rendering with Nuxt.js

Reading Time: 5 minutes

What is server-side rendering

By default, modern JS frameworks produce and manipulate DOM in the browser as an output. But, it is possible to render the same codebase into HTML strings on the server and send them to the browser and finally compile the static markup into a fully working application on the client-side. A server-rendered application can also be considered isomorphic or universal, in the sense that the majority of your code runs on both the server and the client.

Trade-offs when using SSR

  • Development constraints
  • Build setup, deployment requirements
  • Server-side load

Advantages when using SSR

  • Better SEO
  • Faster time to content

Nuxt.js

Nuxt is a framework based on Vue.js, Node.js, Webpack and Babel. It is free and open-source and we can use it to create various applications from static landing pages to complex enterprise-ready web solutions.

Supports 3 modes of working:

  1. Server-Rendered (Universal SSR)
  2. Single Page Applications (SPA)
  3. Static-Generated (Pre Rendering)

Some of the features Nuxt provides:

  • Write Vue Files (*.vue)
  • Automatic Code Splitting
  • Server-Side Rendering
  • Powerful Routing System with Asynchronous Data
  • Static File Serving
  • ES2015+ Transpilation
  • Bundling and minifying of your JS & CSS
  • Managing the <head> element
  • Hot module replacement in Development
  • Pre-processor: Sass, Less, Stylus, etc.
  • HTTP/2 push headers ready
  • Extending with Modular architecture

Project structure

Here we have the project structure that Nuxt provides out of the box. The pages, middleware, plugins and layouts directories are framework-specific and we’ll briefly explain their purpose.

The Nuxt community has added great README.md files into each directory with links to the documentation.

The layouts directory defines all of the layouts that our application can use. This is a great place to add shared global components that are used across the application, like the header and footer, for example. By default, the template used for .vue files in the pages directory is default.vue. It is where all of the page’s components, text, assets, and data are injected.

Pages is the only required directory. All Vue components in this directory are automatically added to the vue-router based on their filenames and the directory structure. We can have dynamic routes by adding an underscore (_) to a directory or a .vue file name.

/pages
---/categories
------/_category_id
------/products
---------/_product_id

// The structure above generates a router config that will provide 
// a route for /categories/1/products/3 for example

Middleware is a function that can be executed before rendering a page or layout. There is a variety of reasons why we may want to do so. Route guarding is a popular use case, where we could check the Vuex store for a valid login or validate some params (instead of using the validate method on the component itself). Another use for middleware can be to generate dynamic breadcrumbs based on the route and params. These functions can be asynchronous, meaning nothing will be shown to the user until the middleware is resolved.

The plugins directory allows us to register Vue plugins before the application is created. This allows the plugin to be shared throughout our app on the Vue instance and be accessible in any component. Most major plugins have a Nuxt version that can easily be registered to the Vue instance by following their docs. However, there will be circumstances when we have to develop a plugin ourselves or need to adapt an existing one.

Nuxt’s supercharged components

Nuxt’s page components have extra methods attached to them that we can use to provide additional functionality. The main ones we would use in a project are the asyncData and fetch methods. Both are very similar in concept: they run asynchronously before the component is generated, and they can be used to populate the data of a component and the store. They also enable the page to be fully rendered on the server before sending it to the client, even when we have to wait for some database or API call.

Summary

There’s a lot to cover with Nuxt and server-side rendering; this post aims to provide a general overview of the framework, why server-side rendering is important, and how we can scaffold our next web application using Nuxt.

Before using SSR though, we should ask whether we actually need it. Generally, it depends on how important time to content is for the application. If we are building an internal dashboard where an extra few hundred milliseconds on initial load isn’t an issue, SSR wouldn’t be needed. However, in applications where time to content is critical, SSR can help us achieve the best possible performance.

This is a starter Nuxt application which showcases examples of the features mentioned in the post. If you have any comments or feedback, let us know.
nuxt-ssr-example

Thanks for reading!

Lifecycle hooks in Vue.js

Reading Time: 3 minutes

In this post we’ll be doing an overview of lifecycle hooks in Vue.js.

Each Vue application first creates a Vue instance, with the Vue function:

var vm = new Vue({
  router,
  render: function (createElement) {
    return createElement(App)
  }
}).$mount('#app')

During its initialization, this Vue instance goes through several phases, and it exposes some properties and methods in each phase. This is an example of Vue using the template method behavioural design pattern.

The methods which run by default in this process of creating and updating the DOM are called lifecycle hooks, and using them properly allows easy access to a behind-the-scenes overview of how the library works.

Below we can see a simple diagram showcasing all methods in one instance.

We can see that we have 8 methods that we can split into 4 phases in an application’s lifecycle.

  • Creation or initialization hooks (beforeCreate, created)
  • Mounting or DOM insertion hooks (beforeMount, mounted)
  • Updating hooks (beforeUpdate, updated)
  • Destroying hooks (beforeDestroy, destroyed)

Creation hooks

beforeCreate is the first hook that gets called in a Vue component. It has no access to the component’s reactive data and events, as they haven’t been initialized yet. It’s good to use this hook for non-reactive data.
The created hook has access to the component’s events, reactive data and state. The DOM and $el properties are still not available. This hook is usually used to perform API calls and store the response.

Mounting hooks

The next lifecycle hook to be called is beforeMount, and it happens right before the component is mounted on the DOM. Here is our last opportunity to perform operations before the DOM gets rendered.
mounted is called right after, and now the DOM is available. This is a good place to add third-party integrations like charts, Google Maps etc., which interact directly with the DOM.

mounted() {
  console.log('This is the DOM instance', this.$el)
  this.$nextTick(function() {
    console.log('Child components have now been loaded')
  })
}

Updating hooks

beforeUpdate and updated are the two hooks that are called each time a reactive property is changed. The data in beforeUpdate holds the previous values of the property, while after updated runs, the instance has finished re-rendering.

beforeUpdate () {
  console.log('before update called')
},
updated () {
  console.log('update finished')
}

Destroying hooks

When beforeDestroy is called, we can still mutate the state and the instance is still functional. Here we can do some last-moment data mutation before the instance is destroyed.
When destroyed is called, the Vue instance does not exist anymore. All directives and event listeners have been removed, and child components have been destroyed.

beforeDestroy () {
  console.log('before destroy called')
},
destroyed () {
  console.log('destroyed')
}

I hope this provides a good overview of the lifecycle hooks in a Vue application. For more information, refer to the official docs here.

JavaScript loop and object iteration (optimization)

Reading Time: 3 minutes

Nowadays, it’s interesting how loops have become part of our daily life as developers – we use them at least once a day. Because of that, one day I decided to investigate and go deeper into JavaScript loops, where I found very interesting things, and if I did not share them with you, I would feel guilty.

Before you continue reading, I would strongly recommend you read my previous blog, which I believe you will find very useful for creating a full picture of the loops. So, go on and read it.

Object properties iteration

Let’s first analyze object iteration and suppose that we have an object, something like:

var obj = {
    property1: 1,
    property2: 2,
    …
}

First, what comes to mind is to iterate over the properties with the standard for…in loop:

for (var prop in obj) {
    console.log(prop);
}

In this case, we are going to iterate through the object properties, but is it the correct way? The answer is yes and no, depending on your needs. Another consideration is to exclude all inherited properties, which in some cases we do not need. We can exclude them by using the JavaScript method hasOwnProperty(). You can find an explanation about the in operator and hasOwnProperty() in my previous blog.

Since we have learned some object optimizations/improvements/usages, now the question is, can we really do an optimization?
The answer is yes. Before I show you how we can do that, let’s spend some time on the loops.

Loop iteration

To continue the previous example, I will keep explaining the loops with object iteration (of course, you can test it with a list of integers like in the speed test examples, or anything you want).

To accomplish that, we will need the JavaScript method Object.keys().

  • The Object.keys() method returns an array containing only the object’s own property names

Let’s write the standard for loop:

var keys = Object.keys(obj)
for (var i = 0; i < keys.length; i++) {
    console.log(obj[keys[i]]);
}

Now we have a solution where we have decreased the iteration time by reducing the cost of evaluating `keys.length` from O(N) to O(1), which is a big time saving if we iterate over big arrays.
So, during development, if you are not limited (like applying some best practices, …), you can add another optimization by using a while loop.

var i = keys.length;
while (i--) {
    console.log(obj[keys[i]]);
}

In this case, we do not declare a new variable, we don’t execute extra operations, and the while loop stops automatically once i reaches 0 (leaving i at -1).

Speed testing:

Since new browsers like Chrome are very fast and optimized, in order to see the biggest speed differences, I would suggest executing the loops in IE, where you will be able to see a real speed difference between the loops.

var arr = new Array(10000);

Example speed test 1

console.time();
for (var i = 0; i < arr.length; i++) {
    // operations...
    var sum = i * i;
}
console.timeEnd();

Execution 1: 4.4ms
Execution 2: 5.5ms
Execution 3: 5ms
Execution 4: 4.6ms
Execution 5: 5ms

Example speed test 2

var i = arr.length;
console.time();
while (i--) {
    // operations...
    var sum = i * i;
}
console.timeEnd();

Execution 1: 3.7ms
Execution 2: 4.8ms
Execution 3: 3.9ms
Execution 4: 3.8ms
Execution 5: 4.2ms

Thank you for reading and I would appreciate any feedback.

What is White Labeling in Software Development? How to implement it for Android?

Reading Time: 3 minutes

What is White Label?

A white-labelled product is basically a software product or service that is developed by one company, and other companies can buy it and rebrand it to their needs, so the users of the product don’t know about its real creator, only the brand. To explain it better, assume that there is a white-label company that makes an app and sells it to companies A and B; they then rename the app to A and B and change its content to match their products, so the application becomes the service of companies A and B.

Why to use White Label product?

White-label products come in handy in certain situations. It is especially attractive to go with a white-label solution when you want to enter a market with minimal cost and time. For instance, if you want to start a startup project and don’t want to invest much in the beginning, then a white-label solution is a good choice. Some advantages and disadvantages of using a white-label solution are listed below:

Advantages:

  • Less time to market
  • Cost-efficient (time, money)
  • No developer hiring needed

Disadvantages:

  • Fewer customization capabilities
  • No control over the quality of the software
  • Dependency on external sources (developer)

Important points to consider

There are some important aspects which should be considered by a company which makes a white-label solution and by a client of this product.

  • Technical documentation – complete technical documentation of the software depending on the agreement between both sides.
  • Scope of customization – both sides must know which parts of the product can be customized, what kind of new functionalities can be added, what kind of limitations might occur, etc.
  • Maintenance & Support – for how long and what kind of maintenance and support should a client expect.

A simple explanation of how to implement a White Label for Android application

In Android, white-label implementation is simple thanks to “productFlavors” and “flavorDimensions”. By means of these two, it is possible to have different resources for different applications, such as different themes, colours, logos, application names and so on. Additionally, using the Gradle file we can also create some configurations to enable or disable certain functionalities of the app based on the needs of the customer. In the end, when we build the application, only the resources which belong to the selected flavour and dimension will be included in the APK file.

Conclusion

To conclude the blog post, I would emphasize the two reasons which I think are the most important for using a white-label product in your projects. The first one is that it requires less financial investment (saving money). The second is less time to market (quick launch), since you don’t need to do everything from scratch. These reasons sound good, but it is better to always do your own analysis and comparisons before you decide what is best for your scenario.

Thanks for reading!

Below I have listed some links which I think are worth checking if you are going to implement a white label for an Android project:

Making Swift networking code more readable

Reading Time: 3 minutes

With Swift 5 a new type got introduced:

@frozen public enum Result<Success, Failure> where Failure : Error {

    /// A success, storing a `Success` value.
    case success(Success)

    /// A failure, storing a `Failure` value.
    case failure(Failure)
}
The Result type is an enum consisting of 2 cases: the success and the failure case. Each of them can hold a generic value; the failure case, however, is limited to types conforming to Error.

Not a big deal? Sure, but it’s the little things which add up and make a difference in the long run.

Lately, I was migrating from SwiftyJSON to native JSON parsing. Each network call was implemented in the following way:

func fetchSomething(completion: @escaping (SomeReturnValue?, SomeError?) -> Void) {
    NetworkingTool.request { (response) in
        guard response.isValid
            else { completion(nil, .somethingBad); return }
        do {
            let returnValue = try SomeReturnValue(response: response)
            completion(returnValue, nil)
        } catch {
            completion(nil, .scarry)
        }
    }
}

Looks okayish. Good. So let’s use it:

fetchSomething { (result, error) in
    guard error == nil
        else { handleError(error: error); return }
    doSomething(result: result)
}

Ok. But how to implement the doSomething? With an optional? This can’t be right, right? Force unwrap the result? And what about the error case? Force unwrap it? Oh and wait, what about the case where neither a result nor an error is returned? Is this even a thing? Ok, let me look up the implementation…

So a tiny bit of ambiguity paired with different people working on different parts of the network stack for different features can cause a real heterogeneous system. (Which does not imply that this is a bad system!)

If the company you’re working for is in favour of code ownership, you may not encounter this one. But so far no company I worked for was about code ownership. It’s usually your code is my code is our code, comrade. Period. There are simply too many trucks outside.

As long as code ownership isn’t a thing, and you do not want to spend time on endless syntax and architectural discussions with little benefit, or enforce a (new) best practice on all of your colleagues (again), it comes in really handy to have a built-in Result type which is reasonably unambiguous.

And since we all know that we spend more time reading code than writing it, this saves us all valuable time.

Tool Showcase: Node-RED

Reading Time: 5 minutes

Node-RED is a flow-oriented tool to wire together hardware devices, APIs and online services. It mainly targets the IoT market but can be used for a lot of other things as well. Thanks to its easy-to-use browser-based UI and drag-and-drop programming system, it’s really beginner-friendly with a quick learning curve.

Even though it was developed by IBM in 2013, it’s not really known to most of the IT community – at least none of my colleagues knew it. That makes it worth writing this tool showcase. Enjoy reading!

Getting started

Instead of just reading along, I encourage everybody to just start Node-RED and try it yourself. If docker is installed, this is just a matter of seconds. Use the following command to start a Node-RED instance locally:

docker run -it -p 1880:1880 --name mynodered nodered/node-red

That’s it. You are ready to go! Open your browser and go to localhost:1880 to access the Node-RED UI.

One of the simplest flows is the following one:

Drag an “http in”, “template” and “http out” node into the flow and connect them. After clicking the deploy button you can access localhost:1880/<whateverPathYouConfiguredInYourHttpInNode> to see whatever you’ve configured in your template node. Congratulations, you have just created your first Node-RED flow!

Of course, rendering static content on an endpoint is not the most exciting thing to do, but between the HTTP in and out nodes, you’re free to do whatever you want. Nodes to make HTTP calls to different URLs, reading and writing files and much more are provided by Node-RED by default.

The power of the community

Node-RED uses Node.js for its nodes (yes, the terminology “node” is overused in the Node-RED context 🙂 ). This has a big advantage: new nodes can be added easily from the node package manager (npm). You can find these nodes by searching for “node-red-contrib” in the npm repository. An even simpler option is to install new nodes using the “Manage Palette” option in the UI. There you can install them with a single click.

There are nodes for almost everything. Need support for slack? Yep, there’s a node for that. Tired of pressing light switches in your house to turn off and on your Philips Hue lights? Yep, there’s a node for that as well. Whatever you can imagine, there’s a node for it.

A slightly more advanced flow

To test some Node-RED features, I tried to come up with a slightly more complicated example. I own some Philips Hue lamps and a LaMetric Time. After searching some nodes for these devices, I was really surprised that somebody already built some for these two devices (I was especially surprised about the support for the not so well-known LaMetric Time).

My use case was pretty straight forward. Turn on the lights when it gets dark and display a message on my LaMetric near my TV. At midnight, turn off the lights and display a different message. Additionally, I wanted some web endpoints that I could call to trigger both actions manually.

After only a few minutes, I had the following flow:

And it works! I found a node that sends an event as soon as the sun goes down at my particular location. Very cool. All the other nodes (integration for Philips Hue and LaMetric) can also easily be added with the “Manage Palette” option in the GUI. All in all, the implementation of my example use-case was pretty straight forward and required no programming know-how.

Expandability

Even though there are almost 3000 community-contributed nodes available to use, you might have some hardware or API that does not (yet) have some pre-made nodes. In that case, you can implement your own nodes pretty easily. The only thing required is a text editor and some node.js know-how.

The Node-RED documentation provides a good guide on how to create custom nodes: https://nodered.org/docs/creating-nodes/first-node

It is highly recommended to push your custom nodes to the npm repository to be used by the community.

Additional Resources

There are a whole lot more features that are not described in this blogpost.

  • Flows are just .json files and can easily be imported or exported or added to a git repository
  • Flows can be converted to subflows and used like nodes in other flows
  • Multiple flows can run in parallel and trigger each other
  • There are special nodes for error handling or low-level TCP communication
  • There are keyboard shortcuts for everything
  • … and much more!

Feel free to have a look yourself:

Thanks for reading!

AEM 6.5 and SSL

Reading Time: 4 minutes

Today, almost all websites are delivered to the client via HTTPS, but HTTP is still frequently used for backend communication. To increase security Adobe has simplified the SSL configuration with AEM 6.3 and provides it as a feature called SSL By Default. This is intended to ensure that the internal connection to AEM instances is exclusively performed in an encrypted and authenticated way. This blog post describes a simple way to secure a local AEM instance using self-signed certificates for testing purposes.

Prerequisites

The following steps were evaluated on a Windows platform using cmd. It should be possible to perform them in the same way on another environment, but there may be small differences in the syntax of some commands. First, ensure that the tools below are installed:

  • a local AEM 6.5 instance
  • OpenSSL
  • curl

Create self-signed certificate

AEM requires a private key/certificate pair in DER format for SSL setup. It can be created using the OpenSSL command-line tool.

1. Generate a private key of length 4096 bits.

openssl genrsa -out localhostprivate.key 4096

2. Create the certificate request with common name localhost from the private key.

openssl req -sha256 -new -key localhostprivate.key -out localhost.csr -subj "/CN=localhost"

3. Generate the SSL certificate and sign it with the private key (this is why it is called self-signed certificate). The certificate will be valid for one year.

openssl x509 -req -days 365 -in localhost.csr -signkey localhostprivate.key -out localhost.crt

4. Convert the private key to DER format.

openssl pkcs8 -topk8 -inform PEM -outform DER -in localhostprivate.key -out localhostprivate.der -nocrypt

Install SSL configuration via curl

Execute the following command from the directory where the private key/certificate was created.

curl -u admin:admin -F "keystorePassword=password" -F "keystorePasswordConfirm=password" -F "truststorePassword=password" -F "truststorePasswordConfirm=password" -F "privatekeyFile=@localhostprivate.der" -F "certificateFile=@localhost.crt" -F "httpsHostname=localhost" -F "httpsPort=8443" http://localhost:4502/libs/granite/security/post/sslSetup.html

Here AEM runs on port 4502 with credentials admin:admin. HTTPS is set up on port 8443. The keystore/truststore password has been set to password (Use a stronger secret in production).

If the command was successful you will get the following response:

Now you should be able to access the AEM instance on port 8443 over HTTPS: https://localhost:8443/

Note that the browser will most likely show a “Not Secure” warning because self-signed certificates are not trusted by default.

Disable HTTP

Currently, the AEM instance is still accessible via http://localhost:4502. In AEM 6.5 there is no option to deactivate HTTP by configuration. Be careful: The OSGi configuration Apache Felix Jetty Based Http Service contains a checkbox to disable HTTP. It has been deprecated but is still active. You may not be able to access your instance anymore if you change something on this dialogue. The new SSL configuration options can be found on Adobe Granite SSL Connector Factory.

Adobe proposes a different approach to deactivate HTTP. A sling mapping can be added to redirect incoming HTTP requests to the HTTPS port. Open CRX DE https://localhost:8443/crx/de/index.jsp and create a new node below /etc/map/http:

  • Name: localhost.4502
  • Type: sling:Mapping

Then add a new property to this node:

  • Name: sling:redirect
  • Type: String
  • Value: https://localhost:8443

Click on Save All. Try to open http://localhost:4502 on the browser. You should now be redirected to https://localhost:8443/.

Conclusion

With the SSL By Default feature, Adobe provides a fast and easy way to set up SSL on an AEM instance. Self-signed certificates can be used for testing purposes on local/test instances. However, many organizations use their own PKI which manages the certificates. In this case, you should avoid self-signed certificates and use certificates signed by the certification authority of your organization.

Typescript/ES7 Decorators to make Vuex modules a breeze

Reading Time: 5 minutes

Overview

Who does not like to use Vuex in a VueJS App? I think no one 🙂

Today I would like to show you a very useful tool written in TypeScript that can boost your productivity with Vuex: vuex-module-decorators. Lately, it’s getting more popular. Below you can see the weekly downloads of the package, which are constantly rising:

Weekly downloads on npmjs.com

But what does it exactly do and which benefit does it provide?

  • TypeScript classes with strict type safety
    Create modules where nothing can go wrong. The type check at compile time ensures that you cannot mutate data that is not part of the module or access unavailable fields.
  • Decorators for declarative code
    Annotate your functions with @Action or @Mutation to automatically turn them into Vuex module methods.
  • Autocomplete for actions and mutations
    The shape of modules is fully typed, so you can access action and mutation functions with type-safety and get autocomplete help.

In short, with this library, you can write Vuex module in this format:

import { VuexModule, Module, Mutation, Action } from 'vuex-module-decorators'
import { get } from 'axios'

@Module
export default class Posts extends VuexModule {
    posts: PostEntity[] = [] // initialise empty for now

    get totalComments (): number {
        return this.posts.filter((post) => {
            // Take those posts that have comments
            return post.comments && post.comments.length
        }).reduce((sum, post) => {
            // Sum all the lengths of comments arrays
            return sum + post.comments.length
        }, 0)
    }

    @Mutation
    updatePosts(posts: PostEntity[]) {
        this.posts = posts
    }

    @Action({commit: 'updatePosts'})
    async fetchPosts() {
        return get('https://jsonplaceholder.typicode.com/posts')
    }
}

As you can see, thanks to this package, we are able to write a Vuex module by writing a class which provides Mutations, Actions and Getters. Everything in one single file. How cool is that?

Benefits of type-safety

Instead of using the usual way to dispatch and commit…

store.commit('updatePosts', posts)
await store.dispatch('fetchPosts')

…which offers no type safety for the payload and no help with automatic completion in IDEs, with the getModule accessor you can now use a more type-safe mechanism:

import { getModule } from 'vuex-module-decorators'
import Posts from '~/store/posts.js'

const postsModule = getModule(Posts)

// access posts
const posts = postsModule.posts

// use getters
const commentCount = postsModule.totalComments

// commit mutation
postsModule.updatePosts(newPostsArray)

// dispatch action
await postsModule.fetchPosts()

Core concepts

State

All properties of the class are converted into state props. For example, the following code

import { Module, VuexModule } from 'vuex-module-decorators'

@Module
export default class Vehicle extends VuexModule {
  wheels = 2
}

is equivalent to this:

export default {
  state: {
    wheels: 2
  }
}

Mutations

All functions decorated with @Mutation are converted into Vuex mutations. For example, the following code

import { Module, VuexModule, Mutation } from 'vuex-module-decorators'

@Module
export default class Vehicle extends VuexModule {
  wheels = 2

  @Mutation
  puncture(n: number) {
    this.wheels = this.wheels - n
  }
}

is equivalent to this:

export default {
  state: {
    wheels: 2
  },
  mutations: {
    puncture: (state, payload) => {
      state.wheels = state.wheels - payload
    }
  }
}

Actions

All functions that are decorated with @Action are converted into Vuex actions.

For example this code

import { Module, VuexModule, Mutation, Action } from 'vuex-module-decorators'
import { get } from 'request'

@Module
export default class Vehicle extends VuexModule {
  wheels = 2

  @Mutation
  addWheel(n: number) {
    this.wheels = this.wheels + n
  }

  @Action
  async fetchNewWheels(wheelStore: string) {
    const wheels = await get(wheelStore)
    this.context.commit('addWheel', wheels)
  }
}

is equivalent to this:

const request = require('request')

export default {
  state: {
    wheels: 2
  },
  mutations: {
    addWheel: (state, payload) => {
      state.wheels = state.wheels + payload
    }
  },
  actions: {
    fetchNewWheels: async (context, payload) => {
      const wheels = await request.get(payload)
      context.commit('addWheel', wheels)
    }
  }
}

Advanced concepts

Namespaced Modules

If you intend to use your module in a namespaced way, then you need to specify so in the @Module decorator:

@Module({ namespaced: true, name: 'mm' })
class MyModule extends VuexModule {
  wheels = 2

  @Mutation
  incrWheels(extra: number) {
    this.wheels += extra
  }

  get axles() {
    return this.wheels / 2
  }
}

const store = new Vuex.Store({
  modules: {
    mm: MyModule
  }
})

Registering global actions inside namespaced modules

In order to register actions of namespaced modules globally, you can add a parameter root: true to @Action and @MutationAction decorated methods:

@Module({ namespaced: true, name: 'mm' })
class MyModule extends VuexModule {
  wheels = 2

  @Mutation
  setWheels(wheels: number) {
    this.wheels = wheels
  }
  
  @Action({ root: true, commit: 'setWheels' })
  clear() {
    return 0
  }

  get axles() {
    return this.wheels / 2
  }
}

const store = new Vuex.Store({
  modules: {
    mm: MyModule
  }
})

This way the @Action clear of MyModule will be called by dispatching clear although being in the namespaced module mm. The same thing works for @MutationAction by just passing { root: true } to the decorator-options.

Dynamic Modules

Modules can be registered dynamically simply by passing a few properties into the @Module decorator, but an important part of the process is that we first create the store and then pass it to the module:

import store from '@/store'
import {Module, VuexModule} from 'vuex-module-decorators'

@Module({dynamic: true, store, name: 'mm'})
export default class MyModule extends VuexModule {
  /*
  Your module definition as usual
  */
}

Installation

The installation of the package is quite simple and does not require many steps:

Download the package

npm install vuex-module-decorators
# or
yarn add vuex-module-decorators

Vue configuration

// vue.config.js
module.exports = {
  // ... your other options
  transpileDependencies: [
    'vuex-module-decorators'
  ]
}

For more details, you can check the plugin’s official documentation.

Conclusion

I personally think this package can ramp up your productivity because it embraces the “modularisation” pattern, making your app more scalable. Another big advantage is that you get type-checking thanks to TypeScript. If you have a Vue.js TypeScript application, I strongly recommend this package.

Spring Cloud OpenFeign

Reading Time: 3 minutes

Choosing the microservice architecture and Spring Boot means that you’ll need to pick the cleanest possible way for your services to communicate between themselves. Feign Client is one of the best solutions for this issue. It is a declarative Java web service client initially developed by Netflix. It’s an abstraction over REST-based calls allowing your microservices to communicate cleanly without the need to know REST details happening underneath. The main idea behind Feign Client is to create an interface with method definitions representing your service call. Even if you need some customization on requests or responses, you can do it in a declarative way. In this article, we will learn about integrating Feign in a Spring Boot application with an example for REST-based HTTP calls. An example will be given, in which two microservices will communicate with each other to transfer some data. But, first, let’s get familiar with feign.

What is Feign?

Feign is a declarative web service client that makes writing web service clients easier. To use Feign, you create an interface and annotate it, using annotations provided by the Spring framework such as @RequestMapping and @PathVariable. It has pluggable annotation support, including Feign annotations and JAX-RS annotations, and it also supports pluggable encoders and decoders. Spring Cloud adds support for Spring MVC annotations and for using the same HttpMessageConverters used by default in Spring Web. Spring Cloud integrates Ribbon and Eureka to provide a load-balanced HTTP client when using Feign.

Example Management API simulator

In the following code section, you can see a Feign Client resource example. The interface extends the origin API interface to declare the @FeignClient. The @FeignClient declares that a REST client should be created with this interface.

Setup pom.xml

The following dependency will be added:

    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-openfeign</artifactId>
    </dependency>

Enable Feign Client

Now enable Feign by using the @EnableFeignClients annotation in the main Spring Boot application class, which is also annotated with the @SpringBootApplication annotation.

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;
@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class FeignClientApplication {
  public static void main(String[] args) {
    SpringApplication.run(FeignClientApplication.class, args);
  }
}

Use a Circuit Breaker with a Feign Client

If you want to use the Spring Cloud OpenFeign support for Hystrix circuit breakers, you must set the feign.hystrix.enabled property to true. In our application, this property is configured in application.yml:

feign:
  hystrix:
    enabled: true

The Feign client itself is a simple interface annotated with @FeignClient, pointing to the Validations microservice:

@FeignClient(name = "Validations", url = "${validations.host}")
public interface ValidationsClient {

    @GetMapping(value = "/validate-phone")
    InfoMessageResponse<PhoneNumber> validatePhoneNumber(@RequestParam("phoneNumber") String phoneNumber);

}

In the application.yml file, we will store the URL of the microservice with which we need to communicate:

validations:
  host: "http://localhost:9080/validations"

We will need to add a config for Feign as follows:

package com.demo;

import feign.Contract;
import org.springframework.cloud.openfeign.support.SpringMvcContract;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FeignClientConfiguration {
    @Bean
    public Contract feignContract() {
        return new SpringMvcContract();
    }
}
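
To round the example off, here is a minimal sketch of how the Feign client could be consumed from a service class. The PhoneNumberService name is made up for illustration, and the imports for the article’s InfoMessageResponse and PhoneNumber types are omitted; the point is that the Feign-generated proxy is simply injected and called like a local method:

package com.demo;

import org.springframework.stereotype.Service;

@Service
public class PhoneNumberService {

    private final ValidationsClient validationsClient;

    // Spring injects the proxy that Feign generates for the ValidationsClient interface
    public PhoneNumberService(ValidationsClient validationsClient) {
        this.validationsClient = validationsClient;
    }

    public InfoMessageResponse<PhoneNumber> validate(String phoneNumber) {
        // Looks like a plain method call, but Feign translates it into an
        // HTTP GET to ${validations.host}/validate-phone?phoneNumber=...
        return validationsClient.validatePhoneNumber(phoneNumber);
    }
}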

Congrats! You just managed to run your Feign Client application by which you can easily locate and consume the REST service.

Summary

In this article, we have built an example of two microservices that communicate with each other. This article should be treated as an introduction to the subject of Feign Client and a discussion of integration with other important components of the microservice architecture.

Did we forget the Immutable classes in Java?

Reading Time: 5 minutes

Well, I did. In everyday work, we can hear discussions about microservices, containers, beans, entities etc., but it is very rare to hear any talk about immutable or mutable classes. Why is that?

Let’s first refresh our memories of what an Immutable class is.

An immutable class means that once an object is initialized, we cannot change its content.

To be more clear, let’s see how we can write Immutable classes in Java.

Basic rules to write some immutable classes are:
  1. Don’t provide “setter” methods — methods that modify fields or objects referred to by fields.
  2. Make all fields final and private.
  3. Don’t allow subclasses to override methods. The simplest way to do this is to declare the class as final. A more sophisticated approach is to make the constructor private and construct instances in factory methods.
  4. If the instance fields include references to mutable objects, don’t allow those objects to be changed:
    • Don’t provide methods that modify the mutable objects.
    • Don’t share references to the mutable objects. Never store references to external, mutable objects passed to the constructor; if necessary, create copies, and store references to the copies. Similarly, create copies of your internal mutable objects when necessary to avoid returning the originals in your methods.
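
A minimal sketch of rule 4, using a hypothetical Subscription class that wraps a mutable java.util.Date: both the constructor and the getter copy the value, so the internal state can never be modified from outside:

import java.util.Date;

public final class Subscription {

    private final Date validUntil;

    public Subscription(Date validUntil) {
        // store a copy, not the caller's mutable instance
        this.validUntil = new Date(validUntil.getTime());
    }

    public Date getValidUntil() {
        // return a copy, so callers cannot mutate our internal state
        return new Date(validUntil.getTime());
    }
}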

How to make Immutable classes

After defining these basic rules there are several different solutions to write Immutable classes.

Basic one without using external libraries:
final public class ImmutableBasicExample {

   final private Long accountNumber;
   final private String accountCurrency;

   private void check(String accountCurrency, Long accountNumber) {
       // Some constructor rules
       // throw new IllegalArgumentException()
   }

   public ImmutableBasicExample(
           String accountCurrency, Long accountNumber) {
       check(accountCurrency, accountNumber);
       this.accountCurrency = accountCurrency;
       this.accountNumber = accountNumber;
   }

   public String getAccountCurrency() {
       return accountCurrency;
   }

   public Long getAccountNumber() {
       return accountNumber;
   }
}

Use a final class, make the fields final and set them in the constructor. Don’t write setters for the fields, just getters.

We can use Lombok:
import lombok.Value;

@Value
public class LombokImmutable {
   Long accountNumber;
   String accountCurrency;
}

Much shorter: with just one annotation, we have done our job. @Value is the immutable variant of @Data: all fields are made private and final, the class is made final, a getter is generated for each field, and some basic methods like toString(), equals() and hashCode() are generated as well.

Or the newest way, using a record (a Java 14 preview feature):
public record RecordRequestBody(
       Long accountNumber,
       String accountCurrency) {
}

By now, this is the most sophisticated way to make an immutable class: small, readable and useful code, without using external libraries. The compiler auto-generates the following: a private final field and a public read accessor for each component, a public constructor whose signature matches the state description, and implementations of toString(), hashCode() and equals().

I’m sure that there are other ways to write Immutable classes, but these are enough to understand how much code and effort we need to write one.

Use of Immutable classes

We already use immutable classes every day in our work. All primitive wrapper classes are immutable. Here is one everyday practical example:

Integer integerExample = 6;
System.out.println(integerExample);
changeInteger(integerExample);
System.out.println(integerExample);

where changeInteger is:

private void changeInteger(Integer integerExample) {
   integerExample = integerExample + 1;
}

The output will be:

6
6

It is because the line

integerExample = integerExample + 1;

creates a new reference to the new object for integerExample and the integerExample in the main code is still referencing the old Integer object, which is not changed.

This also applies to all other primitive wrappers that are immutable: Byte, Short, Integer, Long, Float, Double, Character, Boolean. Additionally BigDecimal, BigInteger and LocalDate are immutable.

Another interesting immutable class in Java is String. If we write the following code:

String string1 = "first string";
string1 += "concatenation";

Concatenation of two Strings with “+” will produce a new String. That is fine if we do it with two or a few Strings, but if we build one String with many concatenations, we will initialize many more objects. (That is why it is better to use StringBuilder here, as sketched below.)
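
For comparison, here is the same concatenation done with a single StringBuilder:

StringBuilder builder = new StringBuilder("first string");
builder.append("concatenation");
String string1 = builder.toString(); // one mutable buffer is reused, only one final String is created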

We can create and use immutable classes for the RequestBody (DTO) in our REST controllers. It is a good idea because the state will be validated once, when the request is created, and will remain valid after that. We will not have any need to change the state of the request. If we are changing the request, then we are doing something wrong.

Another scenario where we can use them is when we need to have some business classes (where we will process some data) where the state should be unchanged. We can find a few examples for this, using them for Currency, Account information etc…

How often should we use them?

Well, we have seen that there are some benefits of using them:

  • They are thread-safe
  • They are safer because their state cannot be changed
  • They are simple to construct, test and use
  • It is good to understand and use them if you work more with functional and concurrent programming

But there are disadvantages too:

First, you can’t change the fields in them. To do that, you have to create a copy with the changed values, which means more objects initialized in the VM and some extra code to copy those objects.
To be sure that you can make a class immutable, you have to be sure that the state of its objects will never need to change. These days, developers don’t spend much time analyzing whether a class will or will not be changed.

There is one general concept from Effective Java, which describes the use of immutability:

Classes should be immutable unless there’s a very good reason to make them mutable… If a class cannot be made immutable, limit its mutability as much as possible.

Hibernate techniques for mapping sets, lists and enumerations

Reading Time: 4 minutes

As we all know, Hibernate is an Object Relational Mapping (ORM) framework for the Java programming language. This blog post will teach you how to use advanced hibernate techniques for mapping sets, lists and enums in simple and easy steps.

Mapping sets

A Set is a collection of objects in which duplicate values are not allowed and the order of the objects is not important. Hibernate uses the following annotations for mapping sets:

  • @ElementCollection – Declares an element collection mapping. The data for the collection is stored in a separate table.
  • @CollectionTable – Specifies the name of a table that will hold the collection. Also provides the join column to refer to the primary table.
  • @Column – The name of the column to map in the collection table.

@ElementCollection is used to define the following relationships: One-to-many relationship to an @Embeddable object and One-to-many relationship to a Basic object, such as Java primitives (wrappers): int, Integer, Double, Date, String, etc…

Now you’re probably asking yourself: Hmmm… How does this compare to @OneToMany?

@ElementCollection is similar to @OneToMany except that the target object is not an @Entity. These annotations give you an еasy way to define a collection with simple/basic objects. But, you can’t query, persist or merge target objects independently of their parent object. ElementCollection does not support a cascade option, so target objects are ALWAYS persisted, merged, removed with their parent object.
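
Here is a minimal sketch of such a mapping; the Person entity with a set of nicknames and the table/column names are made up for illustration:

import javax.persistence.*;
import java.util.HashSet;
import java.util.Set;

@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    // Each nickname is stored as a row in the PERSON_NICKNAME table,
    // joined back to its Person via the PERSON_ID column
    @ElementCollection
    @CollectionTable(name = "PERSON_NICKNAME", joinColumns = @JoinColumn(name = "PERSON_ID"))
    @Column(name = "NICKNAME")
    private Set<String> nicknames = new HashSet<>();
}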

Mapping lists

Lists are used when we need to keep track of the order position and duplicates of the elements are allowed. The additional annotation that we are going to use here is @OrderColumn, which specifies the name of the column used to track the element order/position (the name defaults to <property>_ORDER):
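
A minimal sketch of a list mapping, assuming a hypothetical Course entity that keeps an ordered list of topics:

import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
public class Course {

    @Id
    @GeneratedValue
    private Long id;

    // TOPIC_ORDER stores the position of each element,
    // so the list is read back in the order it was persisted
    @ElementCollection
    @CollectionTable(name = "COURSE_TOPIC", joinColumns = @JoinColumn(name = "COURSE_ID"))
    @OrderColumn(name = "TOPIC_ORDER")
    @Column(name = "TOPIC")
    private List<String> topics = new ArrayList<>();
}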

Mapping maps

When you want to access data via a key rather than an integer index, you should probably decide to use maps. The additional annotation used for maps is @MapKeyColumn, which helps us define the name of the key column for the map (the name defaults to <property>_KEY):
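
A minimal sketch of a map mapping, assuming a hypothetical Product entity with descriptions keyed by language code:

import javax.persistence.*;
import java.util.HashMap;
import java.util.Map;

@Entity
public class Product {

    @Id
    @GeneratedValue
    private Long id;

    // LANGUAGE is the key column, DESCRIPTION the value column
    @ElementCollection
    @CollectionTable(name = "PRODUCT_DESCRIPTION", joinColumns = @JoinColumn(name = "PRODUCT_ID"))
    @MapKeyColumn(name = "LANGUAGE")
    @Column(name = "DESCRIPTION")
    private Map<String, String> descriptions = new HashMap<>();
}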

Mapping sorted sets

As we mentioned before, the set is an unsorted collection with no duplicates. But what if we don’t need duplicates and the order of retrieval is also important? In that case, we can use @OrderBy and specify the ordering of the elements when a collection is retrieved.

Syntax: @OrderBy(“[field name or property name] [ASC |DESC]”)
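
A minimal sketch with made-up Team and Player entities, where the players are always retrieved ordered by their name:

import javax.persistence.*;
import java.util.LinkedHashSet;
import java.util.Set;

@Entity
public class Team {

    @Id
    @GeneratedValue
    private Long id;

    // the set is loaded ordered by the player's name, ascending
    @OneToMany(mappedBy = "team")
    @OrderBy("name ASC")
    private Set<Player> players = new LinkedHashSet<>();
}

@Entity
class Player { // in a real project this would live in its own file

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    @ManyToOne
    private Team team;
}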

Mapping sorted maps

@OrderBy can be also used in maps. In that case, the default value is a key column, ascending.

Mapping Enums

By default, Hibernate maps an enum to a number. This mapping is very efficient, but there is a high risk that adding or removing a value from your enum will change the ordinals of the remaining values. Because of that, you should map the enum value to a String with the @Enumerated annotation. This annotation is used to reference an enum type and save the field in the database as a String.
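
A minimal sketch, assuming a hypothetical Order entity with an OrderStatus enum:

import javax.persistence.*;

enum OrderStatus {
    NEW, PAID, SHIPPED
}

@Entity
@Table(name = "ORDERS") // "ORDER" is a reserved SQL keyword
public class Order {

    @Id
    @GeneratedValue
    private Long id;

    // stored as the literal "NEW", "PAID" or "SHIPPED" instead of an ordinal,
    // so reordering or extending the enum does not break existing rows
    @Enumerated(EnumType.STRING)
    @Column(name = "STATUS")
    private OrderStatus status;
}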

Conclusion

In this article, we have taken a look at simple techniques for mapping sets, lists, maps and enumerations when using Hibernate. I hope you enjoyed reading it and have found it helpful.

GraphQL! 😍😍

Reading Time: 3 minutes

As excited as I am to talk about GraphQL, I don’t have many words to say.

Last year, we had a Front end conference in Konstanz, which was so amazing. Thanks to GraphQL Day Bodensee.

The conference promised to focus on adopting GraphQL and to get the most out of it in production. To learn from a lineup of thought leaders and connect with other forward-thinking local developers and technical leaders.

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

https://graphql.org/

Just reading this paragraph made me more interested than ever.

How was the conference?

The conference delivered everything that it promised. The talks were short but very informative and helpful. People came from almost every country in Europe and America, and they were super friendly. So now let’s dive in, so that you all have an idea about this awesome query language.

Just to be clear, GraphQL can be used with most of the frameworks out there.

Steps to use GraphQL?

1. Create a GraphQL service.

First, we define types and fields on those types

type Query {
  me: User
}

type User {
  id: ID
  name: String
}

Then, we create functions for each field on each type

function Query_me(request) {
  return request.auth.user;
}

function User_name(user) {
  return user.getName();
}

When the GraphQL service is running, e.g. on a web service, we can send GraphQL queries to validate and execute. First, the received query is checked to ensure it only refers to the types and fields defined, and then the provided functions are run to produce a result.

2. Send the Query

{
  me {
    name
  }
}

3. Get the JSON results

{
  "me": {
    "name": "Luke Skywalker"
  }
}

Pros +++

  • Good fit for complex systems and microservices
  • Fetching data with a single API call
  • No over- and under-fetching problems
  • Validation and type checking out-of-the-box

Cons – – –

  • Performance issues with complex queries
  • Overkill for small applications
  • Web caching complexity
  • Takes a while to understand

GraphQL vs Rest

Putting it together

GraphQL is an awesome query language, but as you can see from the pros & cons, it doesn’t make sense to use it in every situation. For small projects it’s overkill, but in bigger ones it can make the communication between backend and frontend much, much easier.

Mobile App Marketing

Reading Time: 7 minutes

Almost everyone uses mobile devices nowadays. The market for mobile applications is really huge. The site https://42matters.com/ has statistics that show that currently over 6.1 million applications exist, from more than 1.9 million publishers. If you have a cool idea and you finally create a mobile application, the first goal is to get your application downloaded out of that pile of apps in the stores. Your app may be exceptional, well coded, without bugs, well designed, but still, the download numbers can be poor. Where is the problem? Why?

The answer is simple: You need a great marketing strategy to succeed.

This blog post will give you some theoretical knowledge of Mobile App Marketing, an overview of the main stages of the marketing funnel, the goals and metrics you should measure, and a brief overview of some popular tools.

Mobile App Markets

If we enter a mobile app world we will probably go to one of the most popular ones:

  • Android Play Store (> 2.8 million live applications)
  • Apple App Store (> 2.2 million live applications)
  • Samsung Galaxy Apps
  • LG SmartWorld
  • Huawei App Store
  • Amazon App Store

These numbers show us that the competition is strong and probably our app must compete with at least 50 or 100 other apps on the same topic as ours.

Mobile App Marketing (MAM)

There are many definitions on the internet about mobile app marketing…

“Mobile app marketing is the process of creating marketing campaigns to reach your users at every stage of the marketing funnel. Learn key mobile app marketing tactics for every stage of user engagement with your app.”

The difference between mobile app marketing and mobile marketing:

MAM creates complete campaigns for a mobile application, it follows the complete cycle – from downloading the app, first engagements, becoming a regular user, and using many in-app purchases.

On the other hand, mobile marketing is every marketing that happens on mobile devices, including advertisements on websites, the banners presented on responsive web pages, email marketing, etc.

Like the general conclusion, we can say MAM is a subset of Mobile Marketing.

Mobile App Marketing Funnel

In the terminology of marketing, we often meet the word marketing funnel. Here is one definition of what it is:

“The marketing funnel is a visualization for understanding the process of turning leads into customers, as understood from a marketing (and sales) perspective.”

An example of a Mobile App Marketing funnel

A typical marketing funnel consists following stages:

  • Awareness
  • Consideration
  • Conversion
  • Customer Relationship
  • Retention

Let’s see a little deeper the three most important stages: awareness, conversion and retention.

Awareness

Even before we make our first release of the application we have to make awareness for our app. It’s not enough to start marketing after the release. The goal is to attract targeted users to your app before the production phase.

Here are a few different ways to attract new app users and raise awareness about your app:

  • Using social media
  • Launching paid advertising
  • App store
  • Websites for reviewing apps
  • Using QR codes

Conversion

Conversion is maybe the keyword if we see it from a business perspective. Conversion is every step that leads to financial benefit for our application. Conversion is every paid download of our application, every user that creates a profile and completes the onboarding process, every in-app purchase inside the app. As a conversion, we could count a regular use of our app.

The strategies we could use for a better conversion include the following:

  • Providing an easy registration process, with clear and not confusing steps for finishing the onboarding process
  • App free of bugs
  • Clear UI with great UX
  • Using push notifications and inside app messages to keep users engaged and informed about new things

Retention

Have you ever had this happen to you? Downloaded an app and deleted it in the next few minutes? For me personally yes – multiple times. I can witness that the reasons were difficulties in the registration process, mandatory registration, or apps with only landing page visible and paid strategy to see all other stuff inside.

The user is our KING and we have to make him happy. To make the users happy we must find what their needs are. Understand their desires, the way they want to use the app, what time of the day they are most active, are they happy if we are sending messages in the mornings or later in the night?

It’s 5-25 times more expensive to acquire a new customer/user than to keep an existing one.

Some of the strategies to improve retention are push notifications, in-app messaging, taking surveys, or starting loyalty programs.

Goals and Metrics

There are different tactics and strategies in Mobile App Marketing, but two things in general are present everywhere: setting up goals and measuring the key metrics to know if your strategy is working properly. The goals are necessary to know what we want to achieve with our marketing strategy. For example, our strategy could be increasing the number of downloads by improving the quality of the keywords used in the store.

There are many different app marketing metrics but I will describe only the most important ones:

Churn rate

This is the percentage of users that stop using the app. Statistics show that almost 70 per cent of users stop using an app immediately after the installation, and around 98 per cent after 3 months.

Session Length

This metric measures the time that the user spends in the application, from opening it to closing it. Different applications should measure and consider this metric in different ways. For example, for an application like a health and fitness app, where the user is tracking steps or time spent in the gym, 1-2 minutes could be enough. On the other side, if we measure the session length for a newsfeed application or a complex game, 2 minutes is not enough.

Retention Rate

This is the percentage of users that come back to the application during a period of time and use the app regularly. Knowing the statistics of the people who use the app is a big benefit because we can target those profiles of users in our marketing channels.

Lifetime Value (LTV)

This metric is related to application revenue. It represents the financial value of our app and how much each customer is worth for us.

Tools

On the market for MAM tools, the offer is huge. There are many online tools. The span of services they offer ranges from App Store Optimization (ASO) and push notifications to keyword improvement strategies and A/B testing tools.

Some of the tools offer great user interfaces, a lot of beneficial statistics that can help us build our mobile app marketing strategy.

Here is a list of the most popular ones:

  • Firebase
  • Optimizely
  • App Annie
  • AppRadar
  • Google AdMob
  • Leanplum
  • Airship
  • DeepLink

Personally, I have experience with a few of these, like Firebase and Airship, which offer a ton of services. Firebase is a very powerful tool, but I will go into more detail and compare some of these tools in my next post.

Because of the big competition, we should be aware that a top-quality application is not enough. A good mobile app marketing strategy is a must!

What is next?

This post will be continued with a comparison of some of the most popular MAM tools. We will see some great examples of how these tools help big companies improve their marketing. I will make a comparative analysis of the prices and the plans they offer. Also, at the end, I will give some suggestions on how we can use these tools to improve our mobile strategy.

Crypto Trading Bot

Reading Time: 6 minutes

Every year, N47 as a tech family celebrates a tech festival as Hackdays at the end of the year. In December 2019 we were in Budapest, Hungary for Hackdays. There were five different teams and each team had created some cool projects in a short time. I was also part of a team and we implemented a simple Trading Bot for Crypto. In this blog post, I want to share my experiences.

Trading Platform

To create a Trading Bot, you first need to find the right trading platform. We selected Binance DEX, which offers good volume for selected trading pairs and a testnet for test purposes, and is a Decentralized EXchange (DEX). Thanks to DEX, we can connect the wallet directly and use the credit directly from it.

Binance Chain is a new blockchain and peer-to-peer system developed by Binance and the community. Binance DEX is a secure, native marketplace that is based on the Binance Chain and enables the exchange of digital assets that are issued and listed in the DEX. Reconciliation takes place within the blockchain nodes, and all transactions are recorded in the chain, creating a complete, verifiable activity book. BNB is the native token in the Binance Chain, so users are charged the BNB for sending transactions.

Trading fees are subject to complex logic, which can lead to individual transactions not being charged exactly at the published rates, but somewhere in between. This is due to the block-based matching engine used in the DEX. The difference between Binance Chain and Ethereum is that there is no concept of gas. As a result, the fees for transactions are fixed. There are no fees for placing a new order.

The testnet is a test environment for Binance Chain network, run by the Binance Chain development community, which is open to developers. The validators on the testnet are from the development team. There is also a web wallet that can directly interact with the DEX testnet. It also provides 200 testnet BNB so that you can interact with the Binance DEX testnet.

For developers, Binance DEX has also provided the REST API for testnet and main net. It also provides different Binance Chain SDKs for different languages like GoLang, Javascript, Java etc. We used Java SDK for the Trading Bot with Spring Boot.

Trading Strategy

To implement a Trading Bot, you need to know which pair to trade and when to buy and sell it. We selected a very simple trading strategy for our project. First, we selected the Nexo / Binance Coin trading pair (NEXO/BNB) because this pair has the highest trading volume. Perhaps you can choose a different trading pair based on your analysis.

For the purchase and sale, we made the decision based on candlestick counts, using 15-minute candlesticks. If the last three candlesticks are all red (price drops), buy Nexo; if the last three are all green (price increases), sell Nexo. Once you have bought or sold, you wait for the next three consecutive red or green candlesticks. The purchase and sale volume is always 20 Nexo. You can also choose this based on your analysis.

Let’s Code IT

We have implemented the frontend (Vue.Js) and the backend (Spring Boot) for the Trading Bot, but here I will only go into the backend application as it contains the main logic. As already mentioned, the backend application was created with Spring Boot and Binance Chain Java SDK.

We used ThreadPoolTaskScheduler for the application. This scheduler runs every 2 seconds and checks Candlestick. This scheduler has to be activated once via the frontend app and is then triggered automatically every 2 seconds.

threadPoolTaskScheduler.scheduleAtFixedRate(task, 2000);

Based on the scheduler, the execute() method is triggered every two seconds. This method first collects the previous 15-minute candlesticks and counts how many of the last ones are green or red. Based on this, it will buy or sell.

private double quantity = 20.0;
private String symbol = "NEXO-A84_BNB";
public void execute() {
        List<Candlestick> candleSticks = binanceDexApiRestClient.getCandleStickBars(this.symbol, CandlestickInterval.FIFTEEN_MINUTES);
        List<Candlestick> lastThreeElements = candleSticks.subList(candleSticks.size() - 4, candleSticks.size() - 1);
        // check if last three candlesticks are all red (close - open is negative)
        boolean allRed = lastThreeElements.stream()
                .filter(cs -> Double.parseDouble(cs.getClose()) - Double.parseDouble(cs.getOpen()) < 0.0d).count() == 3;
        // check if last three candlesticks are all green (close - open is positive)
        boolean allGreen = lastThreeElements.stream()
                .filter(cs -> Double.parseDouble(cs.getOpen()) - Double.parseDouble(cs.getClose()) < 0.0d).count() == 3;
        Wallet wallet = new Wallet(privateKey, binanceDexEnvironment);

        // open and closed orders required to check last order creation time
        OrderList closedOrders = binanceDexApiRestClient.getClosedOrders(wallet.getAddress());
        OrderList openOrders = binanceDexApiRestClient.getOpenOrders(wallet.getAddress());

        // order book required for buying and selling price
        OrderBook orderBook = binanceDexApiRestClient.getOrderBook(symbol, 5);
        Account account = binanceDexApiRestClient.getAccount(wallet.getAddress());

        if ((openOrders.getOrder().isEmpty() || openOrders.getOrder().get(0).getOrderCreateTime().plusMinutes(45).isBeforeNow()) && (closedOrders.getOrder().isEmpty() || closedOrders.getOrder().get(0).getOrderCreateTime().plusMinutes(45).isBeforeNow())) {
            if (allRed) {
                if (Double.parseDouble(account.getBalances().stream().filter(b -> b.getSymbol().equals(symbol.split("_")[1])).findFirst().get().getFree()) >= (quantity * Double.parseDouble(orderBook.getBids().get(0).getPrice()))) {
                    order(wallet, symbol, OrderSide.BUY, orderBook.getBids().get(0).getPrice());
                    System.out.println("Buy Order Placed  Quantity:" + quantity + "  Symbol:" + symbol + "  Price:" + orderBook.getAsks().get(0).getPrice());
                    
                } else {
                    System.out.println("do not have enough Token: " + symbol + " in wallet for buy");
                }

            } else if (allGreen) {
                if (Double.parseDouble(account.getBalances().stream().filter(b -> b.getSymbol().equals(symbol.split("_")[0])).findFirst().get().getFree()) >= quantity) {
                    order(wallet, symbol, OrderSide.SELL, orderBook.getAsks().get(0).getPrice());
                    System.out.println("Sell Order Placed  Quantity:" + quantity + "  Symbol:" + symbol + "  Price:" + orderBook.getAsks().get(0).getPrice());
                } else {
                    System.out.println("do not have enough Token:" + symbol + " in wallet for sell");
                }

            } else System.out.println("do nothing");
        } else System.out.println("do nothing");

    }

    private void order(Wallet wallet, String symbol, OrderSide orderSide, String price) {
        NewOrder no = new NewOrder();
        no.setTimeInForce(TimeInForce.GTE);
        no.setOrderType(OrderType.LIMIT);
        no.setSide(orderSide);
        no.setPrice(price);
        no.setQuantity(String.valueOf(quantity));
        no.setSymbol(symbol);

        TransactionOption options = TransactionOption.DEFAULT_INSTANCE;

        try {
            List<TransactionMetadata> resp = binanceDexApiRestClient.newOrder(no, wallet, options, true);
            log.info("TransactionMetadata", resp);
        } catch (Exception e) {
            log.error("Error occurred while order", e);
        }
    }

At first glance, the strategy looks really simple, I agree. After this initial setup, however, it’s easy to add more complex logic with some AI.

Result

Since 12th December 2019, this bot is running on Google Cloud and did 1130 transactions (buy/sell) until 14th April 2020. Initially, I started this bot with 2.6 BNB. On 7th February 2020, the balance was 2.1 BNB in the wallet, but while writing this blog on 14th April 2020, it looks like the bot has recovered the loss and the balance is 2.59 BNB. Hopefully, in future it will make some profit💰🙂.

Let me know your suggestions in a comment on this bot and I would also like to answer your questions if you have anything on this topic. Thanks for the time.

JHipster with Google App Engine and Cloud MySQL

Reading Time: 5 minutes

How does it sound to set up a complete spring application, with front-end and database? With all the models, repositories and controllers? Even with Unit and Integration tests, with mocked data? All within a few hours? Your solution is JHipster!

JHipster

JHipster or “Java Hipster” is a handy application generator, a development platform, to develop and deploy web applications. JHipster has become popular in a short time, and it has been featured in many conferences all around the globe – Montreal, Omaha, Taipei, Richmond, Frankfurt, Paris, London. It supports:

  • Spring Boot (Back-end)
  • Angular/React/Vue (Front-end)
  • Spring microservices

JHipster is used for generating complete applications, it will create for you a Spring Boot and Angular/React/Vue application, high-quality application with most of the things pre-configured, using Java as back-end technology and an extensive set of Spring technologies: Spring Security, Spring Boot, Spring MVC (providing a framework for web-sockets, REST and MVC), Spring Data, etc. and Angular/React/Vue front-end and a suite of pre-configured development tools like Yeoman, Maven, Gradle, Grunt, Gulp.js and Bower.

JHipster gives you a head start in creating Spring Boot application with a set of pre-defined screens for user management, monitoring, and logging. The generated Spring Boot application is specifically tailored to make working with Angular/React/Vue a smoother experience. At the top of all that, JHipster also gives you the tools to update, manage and package the resulting application.

By now you may think it sounds too good to be true… But it is not everything that JHipster offers. If you are a web developer, by now probably you have a lot of questions. 🙂
One important question we will answer in this blog post: is it supported by today’s cloud solutions, is it compatible with all of them? The answer is yes, it is compatible with the popular cloud solutions from Google, Amazon, Microsoft, and Heroku. Let’s see what it takes to make a complete integration in Google’s cloud platform, the app engine.


Google App Engine

Google App Engine is a cloud solution provided by Google, a platform for developing and hosting web applications in data centres managed by Google; Platform as a Service (PaaS). Applications are sandboxed and run across multiple servers. The App Engine supports Java or Python, uses the Google query language and stores data in Google BigTable.

It is free to use up to a certain amount of resource consumption. After exceeding the limited usage rates for storage, CPU resources, requests or number of API calls and concurrent requests, the user can pay for more of these resources.

It is fully compatible with JHipster generated projects. All it takes to host your application is to follow the official how-to guide from the Google App Engine documentation, as for a normal Spring Boot application. To make things easier, Google offers a database which works closely with the Google App Engine, the Cloud SQL.

Cloud SQL

Cloud SQL is a fully-managed database service offered by Google for their cloud solutions, which makes it easy to configure, manage, maintain, and operate your relational databases on Google Cloud Platform.

It offers three database options to integrate with:

  • MySQL
  • PostgreSQL
  • SQL Server

Let’s get into details of integrating with Cloud SQL for MySQL:

  1. The first step is to create a Cloud SQL instance on the Google Cloud Platform, which requires a few things to be set, like the instance ID and the password, and lets you choose the MySQL database version.
  2. The following step is to create the database in the newly created instance. It is possible to have more databases in one instance.
  3. Now, for our application to be able to communicate with Cloud SQL without any permission blockers, we need to register the application in Cloud SQL and manually configure the service account roles.
  4. The final step is connecting your application to the created Cloud SQL instance. This is done through JDBC. All the required properties, like the instance connection name and the credentials, can be found in the overview of the Cloud SQL instance (a rough configuration sketch is shown below).
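
As a rough sketch of that last step, assuming the Cloud SQL JDBC Socket Factory for MySQL is on the classpath and that the database name, credentials and instance connection name are the placeholders taken from the Cloud SQL overview:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import javax.sql.DataSource;

@Configuration
public class CloudSqlDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql:///my_database");   // database created in step 2 (placeholder)
        config.setUsername("my_user");                    // credentials from the Cloud SQL instance (placeholders)
        config.setPassword("my_password");
        // route the connection through the Cloud SQL socket factory
        config.addDataSourceProperty("socketFactory", "com.google.cloud.sql.mysql.SocketFactory");
        config.addDataSourceProperty("cloudSqlInstance", "my-project:europe-west1:my-instance");
        return new HikariDataSource(config);
    }
}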

So the conclusion: don’t be afraid to invest some time in new technologies, be curious, you never know where they may lead you. Thank you for reading. 🙂

Starting with unit testing using Vue.js 2 and Jest

Reading Time: 4 minutes

As a FrontEnd developer, you may know a lot of FrontEnd technologies and frameworks but in time you need to upgrade yourself as a developer. A good skill to strengthen your knowledge is to learn unit testing.

Since I have been working with Vue.js for several years, we are going to look at some of the basics of testing Vue components using the Jest JavaScript testing framework.

To start, first, we need a Vue.js 2 project created via the Vue CLI. After that we need to add the Jest framework to the project:

# jest unit testing
vue add @vue/unit-jest

I’ll make a simple component that will increase a number on click of a button:

// testComponent.js
export default {
  template: `
    <div>
      <span class="count">{{ count }}</span>
      <button @click="increase">Increase</button>
    </div>
  `,

  data() {
    return {
      count: 0
    }
  },

  methods: {
    increase () {
      this.count++
    }
  }
}

The way of testing is by mounting the components in isolation; after that comes mocking the needed inputs like injections, props and user events. In the end comes the confirmation of the outputs: the rendered result, emitted events, etc.

After that, the components are returned inside a wrapper. A wrapper is an object that contains a mounted component or a VNode and methods to test them.

Let’s create a wrapper using the mount method:

// jestTest.js

// first we import the mount method
import { mount } from '@vue/test-utils'
import TestComponent from './testComponent'

// then we mount (wrap) the component
const wrapper = mount(TestComponent)

// this way you can access the Vue instance
const vm = wrapper.vm

// you can inspect the wrapper by logging it into the console
console.log(wrapper)

The next step after wrapping is to verify that the rendered HTML output of the component matches the expectations.

import { mount } from '@vue/test-utils'
import TestComponent from './testComponent'

describe('TestComponent', () => {
  // Now mount the component and you have the wrapper
  const wrapper = mount(TestComponent)

  it('renders the correct markup', () => {
    expect(wrapper.html()).toContain('<span class="count">0</span>')
  })

  // it's also easy to check for the existence of elements
  it('has a button', () => {
    expect(wrapper.contains('button')).toBe(true)
  })
})

Then run the tests with npm test and see them pass.

The code in testComponent.js should increment the number on button click so next step is to simulate the user interaction. For this, we need the wrapper’s method wrapper.find() to get the wrapper for the button and then simulate the click event by calling the method trigger().

it('simulation of button click should increment the count by 2', () => {
  expect(wrapper.vm.count).toBe(0)
  const button = wrapper.find('button')
  button.trigger('click')
  button.trigger('click')
  expect(wrapper.vm.count).toBe(2)
})

For asynchronous updates, we use the Vue.nextTick() method (which receives a function as a parameter), which comes from Vue. With this method, we wait for the DOM update and after that we execute the code (the code in the function parameter).

// this will not be caught

it('time out', done => {
  Vue.nextTick(() => {
    expect(true).toBe(false)
    done()
  })
})


// the three following tests will work as expected 
// (1)

it('catch the error using done method', done => {
  Vue.config.errorHandler = done
  Vue.nextTick(() => {
    expect(true).toBe(false)
    done()
  })
})

// (2)
it('catch the error using a promise', () => {
  return Vue.nextTick()
    .then(function() {
      expect(true).toBe(false)
    })
})

it('catch the error using async/await', async () => {
  await Vue.nextTick()
  expect(true).toBe(false)
})

Using nextTick can be tricky for errors because errors thrown inside it might not be caught by the test runner. That happens as a consequence of using promises internally. To fix this, we can set the done callback as Vue’s global error handler (1), or we can use the nextTick method without a parameter and return it as a promise (2), like we did earlier.

This article is a guide on how to set up the environment and start writing unit tests using Jest. For more information about testing with Vue and using Jest, you can visit the official site for Vue test utils.

JHipster, is it worth it?

Reading Time: 7 minutes

JHipster is an open-source platform to generate, develop and deploy Spring Boot + Angular / React / Vue web applications. And with over 15 000 stars on Github, it is the most popular code generation framework for Spring Boot. But is it worth the hype or is the generated code too difficult to maintain and not production-ready?

How does it work?

The first thing to note is that JHipster is not a separate framework by itself. It uses Yeoman and .jdl files in order to generate code in Spring Boot for the backend and Angular, React or Vue for the frontend. After the initial generation of the project, you have the option to use the generated code without ever running JHipster commands again, or to keep using JHipster to incrementally grow the project and develop new features.

What exactly is JDL?

JDL is a JHipster-specific domain language where you can describe all your applications, deployments, entities and their relationships in a single file (or more than one) with a user-friendly syntax.

You can use the online JDL-Studio or one of the JHipster IDE plugins/extensions, which support working with JDL files.

Example of a simple JDL file for a Blog application:

entity Blog {
  name String required minlength(3)
  handle String required minlength(2)
}

entity Post {
  title String required
  content TextBlob required
  date Instant required
}

entity Tag {
  name String required minlength(2)
}

relationship ManyToOne {
  Blog{user(login)} to User
  Post{blog(name)} to Blog
}

relationship ManyToMany {
  Post{tag(name)} to Tag{entry}
}

paginate Post, Tag with infinite-scroll

Which technologies are used?

On the backend we have the following technologies:

  • Spring Boot as the primary backend framework
  • Maven or Gradle for configuration
  • Spring Security as a Security framework
  • Spring MVC REST + Jackson for REST communication
  • Spring Data JPA + Bean Validation for Object Relational Mapping
  • Liquibase for Database updates
  • MySQL, PostgreSQL, Oracle, MsSQL or MariaDB as SQL databases
  • MongoDB, Couchbase or Cassandra as NoSQL databases
  • Thymeleaf as a templating engine
  • Optional Elasticsearch support if you want to have search capabilities on top of your database
  • Optional Spring WebSockets for Web Socket communication
  • Optional Kafka support as a publish-subscribe messaging system

On the frontend side these technologies are used:

  • Angular or React or Vue as a primary frontend framework
  • Responsive Web Design with Twitter Bootstrap
  • HTML5 Boilerplate compatible with modern browsers
  • Full internationalization support
  • Installation of new JavaScript libraries with NPM
  • Build, optimization and live reload with Webpack
  • Testing with Jest and Protractor
  • Optional Sass support for CSS design

How to get started?

  1. Pre-requirements: Java, Git and Node.js
  2. Install JHipster npm install -g generator-jhipster
  3. Create a new directory and go into it mkdir myApp && cd myApp
  4. Run JHipster and follow instructions on the screen jhipster
  5. Model your entities with JDL Studio and download the resulting jhipster-jdl.jh file
  6. Generate your entities with jhipster import-jdl jhipster-jdl.jh
  7. Run ./mvnw to start generated backend
  8. Run npm start to start generated frontend with live reload support

What do the generated code and application look like?

In case you only want to see a sample generated application without starting the whole framework you can check this official Github repo for the latest up-to-date sample code: https://github.com/jhipster/jhipster-sample-app.

Following are some screens from my up-and-running JHipster application:

Welcome screen: the initial screen when you open your JHipster app

Create a user screen: with this form you can create a new user in the app

View all users screen: here you have the option to manage all your existing users

Monitoring screen: monitoring of JVM metrics, as well as HTTP request statistics

What are the pros and cons?

The important thing to remember is that JHipster is not a “magic bullet” that will solve all your problems, and it is not an optimal solution for every new project. As a good software engineer, you will have to weigh the pros and cons of this platform and decide when it makes sense to use it and when it’s better to go with a different approach. Having used JHipster for production projects, these are some of the pros and cons that I’ve experienced:

Pros

  • Easy bootstrap of a new project with a lot of technologies preconfigured
  • JHipster almost always follows best practices and latest trends in backend and frontend development
  • Login, register, management of users and monitoring comes out-of-the-box
  • Wizard for generating your project, only the technologies that you select are included in the project
  • After defining your own JDL file, all of the model, repository, service and controller classes for your entities are generated, together with integration tests. This saves a lot of time at the beginning of the project when you want to get to feature development as soon as possible

Cons

  • If you are not familiar with the technologies used in the generated project, it can be overwhelming and it’s easy to get lost in this mix of lots of different technologies
  • Using JHipster after the initial generation is not a smooth experience. Classes and Liquibase scripts get overwritten and you have to be very careful with changing the initial JDL model. Alternatively, you can decide to continue without using JHipster after the initial generation of the project
  • REST responses that are returned from endpoints will not always correspond to business requirements; very often you will have to manually modify the initial JHipster REST responses
  • Not all of the available options are at the same level of quality; some technologies that JHipster uses and configures are more polished than others. This is especially true if you decide to use community modules

What kind of projects are a good fit?

Having said all of this, it’s important to understand that there are projects which can benefit a lot from JHipster and projects that are better without using this platform.

In my experience, a good candidate is a greenfield project where it’s expected to deliver a lot of features fast. JHipster will help a lot to be productive from day one and to cut down on the boilerplate code that you need to write, so you will be able to begin with feature development really fast. This works well with new projects with tight deadlines, proof of concepts, internal projects, hackathons, and startups.

On the other hand, a not so ideal situation is if you have an already started and up-and-running project; there is not much JHipster can do in this case. Another case would be if the application has a lot of specific business logic and it’s not a simple CRUD application, for example, an AI project, a chatbot or a legacy ecosystem where these new technologies are not suitable or supported.

JHipster, is it worth it?

There is only one sure way to decide if JHipster is worth it for your next project or not and that is to try it out yourself and play around with the different features and configuration that JHipster offers.

At best, you will find a new framework for your next project and save a lot of effort next time you have to start a project. At worst, you will get to know the latest trends in both backend and frontend and learn some of the best practices from a very large community.

JavaScript objects: Why? How? Compared with switch-case and if?

Reading Time: 4 minutes

During development, we use objects all the time, yet we are often unsure which approach is best. That was the main reason why I decided to write a blog about it, so I really hope that in the near future most of your obstacles will be resolved and you will be more confident in choosing the best approach.

In this blog, I will explain some of the object features and I will make some comparisons, so let’s start!

Switch vs object literals

We all know what a switch-case statement is and we have all used one at least once, no matter the programming language. But since we are talking about JavaScript, have you ever asked yourself whether it is clever to use it?

Well, of course, the answer is NO. Now you are asking yourself: then it must be if-else statements? But still, the answer is NO. The best approach is to use objects.

Let’s see why…

Problems:

  • switch-case
    • Hard to maintain which leads you to difficult debugging and testing
    • You are manually forced to use break within each case
    • Nested errors
    • Restrictions, like not being allowed to use the same constant in two different cases…
    • In JavaScript, everything is based on curly braces, but not switch
    • Evaluates every case until it finds the right one
  • If-else statements
    • Hard to maintain which leads you to difficult debugging and testing
    • Hard to understand when there is complex logic
    • Hard to test
    • Evaluates every statement until it finds the right one if you don’t end it explicitly

Given these problems, the best solution is objects. The reason for that is the advantages that objects offer us, like:

  • You are not forced to do anything
  • You can use functions inside the objects which means you are much more flexible
  • You can use closure benefits
  • You are using the standard JavaScript objects, which makes the code friendlier
  • Gives you better readability and maintainability
  • Since the object approach works like a hash table lookup, the performance is better than the average cost of a switch-case
  • All these advantages lead us to the conclusion that objects are a more natural fit and are part of many design patterns in JavaScript, whereas switch-case is an old way of coding

Let’s see an example of what this looks like with objects:

<!DOCTYPE html>
<html> 
<body>
<script>
      function getById (id) {
            var ids = {
                  'id1': function () {
                        return 'Id 1';
                  },
                  'id2': function () {
                        return 'Id 2';
                  },
                  'default': function () {
                        return 'Default';
                  }
            }
            return (ids[id] || ids['default'])();
      };

      var ref = getById('id1')
      console.log(ref)          // Id 1

      var ref1 = getById()
      console.log(ref1)         // Default

      var ref2 = getById('noExistingId')
      console.log(ref2)         // Default
</script>
</body>
</html>

hasOwnProperty(key) vs in

hasOwnProperty() – a method which returns a Boolean; it is true only if the object contains that property as its own property.

in – an operator which returns a Boolean; it is true if the object contains that property as its own property or anywhere in its prototype chain.

function TestObj(){
      this.name = 'TestName';
}
TestObj.prototype.gender = 'male';

var obj = new TestObj();

console.log(obj.hasOwnProperty('name'));       // true
console.log('name' in obj);                    // true
console.log(obj.hasOwnProperty('gender'));     // false 
console.log('gender' in obj);                  // true

Object properties

There are two ways to access object properties: dot notation and bracket notation. Developers often ask themselves which approach they should use, or whether there is any difference at all. In most cases both solutions work fine, so they pick one without knowing why and without paying attention to the differences. Let’s make an overview.

Dot notation:

  • Property identifiers can only be alphanumeric, with the two special characters “_” and “$” also allowed
  • Property identifiers can’t start with a number and can’t be variables

Bracket notation:

  • Property identifiers must be a String or a variable that references a String
  • Property identifiers can contain any character in their name

var obj = {
      '$test': 'DolarValue',
      '%test': 'PercentageValue'
      …
}

console.log(obj["$test"])                      // 'DolarValue'
console.log(obj.$test)                          // 'DolarValue'

These both will give the same result because both support `$` in their name, but what happens if we use another special character like `%`?

console.log(obj["%test"])    // 'PercentageValue'
console.log(obj.%test)       // Uncaught SyntaxError: Unexpected token '%'

In the next blog, I will cover loops optimization using the object properties and testing code speed.

The practical guide: Refactor an Android application to follow the MVP design pattern

Reading Time: 8 minutes

At the beginning of my career, I was very lucky to start a few new projects at my company. And every story was short and the same: This is going to be the best project ever -> Oops, it got a little messy -> Oooops, everything is a mess, there is no going back. So, I realized that it isn’t enough that I can do anything the PO wants; I have to find the right way of doing it.

I was thinking: OK, I will learn unit testing, my code will be tested, and everything will be fine. So, I got into testing and started to read everything about it, especially unit testing. I started to understand it a little bit, so I decided to write a few unit tests in my existing (messy) projects. I created test classes for some fragment and started to type some test methods, thinking I would easily figure out what to test, and when I had to type the name of the test method my mind froze.

I had no idea what to test. I started to divide the code into methods, but it wasn’t helping at all. After a ton of research and frustration, I realized that I can’t test anything until I learn some design patterns and/or architectures.

Design patterns and architectures

In theory, there is a small difference between design patterns and architectures. Basically, they are both terms that define the organization of code, but the difference is the layer: architecture is a more general term than design pattern. There are many design patterns and architectures, and you can’t say that one is the best. Today we will look at the MVP design pattern, and we will try to refactor really bad code into better code. Then, hopefully, we will try to refactor the better code into an even better one, with a clean-like architecture.

Application example

I created an application that fetches some programming quotes, saves them into a Room database and displays them in a RecyclerView. If the network call fails, we display the quotes from the database. It is a common use case that complicates our lives a little bit. You can check out the no_pattern branch and run the app.

public class MainActivity extends AppCompatActivity {

    private QuoteDao quoteDao;
    private RecyclerView rvQuotes;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        quoteDao = QuoteDatabase.getQuoteDatabase(this).quoteDao();

        rvQuotes = findViewById(R.id.rvQuotes);
        rvQuotes.setLayoutManager(new LinearLayoutManager(this));

        QuotesApi quotesApi = RetrofitClient.getRetrofit().create(QuotesApi.class);
        Call<List<Quote>> quotesCall = quotesApi.getQuotes();
        quotesCall.enqueue(new Callback<List<Quote>>() {
            @Override
            public void onResponse(Call<List<Quote>> call, Response<List<Quote>> response) {
                if (response.isSuccessful()) {

                    new Thread(() -> quoteDao.insertQuotes(response.body())).start();

                    QuotesAdapter adapter = new QuotesAdapter(response.body());
                    rvQuotes.setAdapter(adapter);
                }
            }

            @Override
            public void onFailure(Call<List<Quote>> call, Throwable t) {
                DatabaseQuotesAsyncTask asyncTask = new DatabaseQuotesAsyncTask(quoteDao, rvQuotes);
                asyncTask.execute();
            }
        });
    }

    static class DatabaseQuotesAsyncTask extends AsyncTask<Void, Void, List<Quote>> {

        private QuoteDao quoteDao;
        private WeakReference<RecyclerView> rvQuotes;

        public DatabaseQuotesAsyncTask(QuoteDao quoteDao, RecyclerView rvQuotes) {
            this.quoteDao = quoteDao;
            this.rvQuotes = new WeakReference<>(rvQuotes);
        }

        @Override
        protected List<Quote> doInBackground(Void... voids) {
            return quoteDao.getAllQuotes();
        }

        @Override
        protected void onPostExecute(List<Quote> quotes) {
            QuotesAdapter adapter = new QuotesAdapter(quotes);
            rvQuotes.get().setAdapter(adapter);
        }
    }
}

This code is terrible. I mean, it is doing its job, but it is not testable, not scalable and not maintainable. We can’t test this code. MainActivity is 82 lines of code, just for displaying a list of quotes. Imagine if we add more calls here, more features on this screen (and usually there are more features); this code will easily become a mess.

How to fix this? We will start with a design pattern. We will refactor this code to follow the MVP pattern. Now, what is the MVP design pattern and how to implement it? MVP pattern is one of the most common design patterns. It is very close to MVC and MVVM. All of these design patterns (and others) share the idea that we should define and divide the responsibility of the classes. All of the mentioned design patterns above have 3 types of classes:

  • Model – data layer, used for managing business model classes
  • View – UI layer, used for displaying the data
  • Controller/Presenter/ViewModel – logic layer, intercepts the actions from the UI layer, updates the data and tells the UI layer to update itself

As you can see, the Model and View classes are the same across all three patterns (though they may have different responsibilities); the difference is in the remaining class.

Let’s talk about MVP strictly, and try to convert our app to follow the MVP pattern. What classes belong to which type?

  • Model – Here belongs just the Quote class, so it stays the same
  • View – Here belong Activities and Fragments. In our case MainActivity
  • Presenter – We should create a presenter for every Activity/Fragment

So, in the data layer (Model) we have just our Quote class and it stays the same. The View and the Presenter are left. Let’s create interfaces for the Views and Presenters. That way we will create the communication between them.

public interface BaseView<P> {
}

public interface BasePresenter<V> {
    void bindView(V view);
    void dropView();
}

Every Activity/Fragment should implement an interface extending BaseView, and every presenter should implement an interface extending BasePresenter. In MVP, the communication between the View and the Presenter goes both ways (View <—> Presenter). This means that our Activity/Fragment should have an instance of the presenter and the presenter should have an instance of the view. Also, the Presenter should not contain any Android components (easy check: you shouldn’t have any android.* imports in the presenter). So, let’s create the View and the Presenter for MainActivity. I will organize the packages by feature because that works better when we follow a design pattern.

Now, we have MainActivity that implements MainView, and MainPresenterImpl that implements MainPresenter. We said that MainActivity should have an instance of MainPresenter and MainPresenterImpl should have an instance of MainView.

public interface MainPresenter extends BasePresenter<MainView> {
}

public class MainPresenterImpl implements MainPresenter {
    private MainView view;

    @Override
    public void bindView(MainView view) {
        this.view = view;
    }

    @Override
    public void dropView() {
        this.view = null;
    }
}

public interface MainView extends BaseView<MainPresenter> {
}

public class MainActivity extends AppCompatActivity implements MainView {
    //...
    private MainPresenter presenter;
   // ...
}

You will notice the methods bindView() and dropView() in the presenter. That is because the view is responsible for its lifecycle, and it should inform the presenter about its presence/absence. In the lifecycle methods of the Activity/Fragment, we should call these methods. This should be done in one of these three pairs: call bindView() in onCreate/onStart/onResume and dropView() in onDestroy/onStop/onPause. Use one of the pairs, but do not mix them. For example: don’t call bindView() in onCreate and dropView() in onPause. Let’s call these methods in the MainActivity:

public class MainActivity extends AppCompatActivity implements MainView {
    //...
    private MainPresenter presenter;
    
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        presenter = new MainPresenterImpl();
        presenter.bindView(this);

        //....
    }

    @Override
    protected void onDestroy() {
        presenter.dropView();
        super.onDestroy();
    }
}

Next, we should define methods in MainView and MainPresenter. In MainPresenter we want to get the quotes (it doesn’t matter to the view whether the presenter gets them from the network or from the database), so we’ll create a method getAllQuotes(), and in MainView we want to display the quotes, so we’ll create a method displayQuotes(List<Quote> quotes). After adding these methods to the interfaces, we will get compiler errors in MainActivity and MainPresenterImpl, so we need to implement them. In MainActivity we’ll just create a new adapter with the quotes and pass the adapter to the RecyclerView. In the Presenter, it gets a little trickier. We will move the network and database code from MainActivity to MainPresenterImpl, and wherever we used to create an adapter and set it on the RecyclerView, we now call the displayQuotes() method of MainView instead.

public class MainActivity extends AppCompatActivity implements MainView {
    //...
    @Override
    public void displayQuotes(List<Quote> quotes) {
        QuotesAdapter adapter = new QuotesAdapter(quotes);
        rvQuotes.setAdapter(adapter);
    }
}

// in presenter when we get the quotes
if (view != null) {
    view.displayQuotes(quotes);
}

Moreover, because QuoteDatabase requires a Context and we can’t have a Context in the Presenter, instead of creating QuoteDao inside the Presenter we create it in MainActivity and pass it into the Presenter via the constructor. Finally, in the onCreate() method of the Activity, we call presenter.getAllQuotes(). You can check out the mvp_basic branch.
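
For reference, below is a minimal sketch of how MainPresenterImpl could end up looking after this step. It is only an illustration of the idea (the real class lives in the mvp_basic branch and may differ); it reuses the QuotesApi, RetrofitClient, Quote and QuoteDao types from the code above, and getAllQuotes() is the method we added to the MainPresenter interface.

import java.util.List;

import retrofit2.Call;
import retrofit2.Callback;
import retrofit2.Response;

public class MainPresenterImpl implements MainPresenter {

    private MainView view;
    private final QuoteDao quoteDao;

    // QuoteDao is created in MainActivity (which has a Context) and injected here
    public MainPresenterImpl(QuoteDao quoteDao) {
        this.quoteDao = quoteDao;
    }

    @Override
    public void bindView(MainView view) {
        this.view = view;
    }

    @Override
    public void dropView() {
        this.view = null;
    }

    @Override
    public void getAllQuotes() {
        QuotesApi quotesApi = RetrofitClient.getRetrofit().create(QuotesApi.class);
        quotesApi.getQuotes().enqueue(new Callback<List<Quote>>() {
            @Override
            public void onResponse(Call<List<Quote>> call, Response<List<Quote>> response) {
                if (response.isSuccessful()) {
                    // persist in the background, then tell the view to render
                    new Thread(() -> quoteDao.insertQuotes(response.body())).start();
                    if (view != null) {
                        view.displayQuotes(response.body());
                    }
                }
            }

            @Override
            public void onFailure(Call<List<Quote>> call, Throwable t) {
                // fall back to the database, e.g. by reusing the AsyncTask from before
                // and calling view.displayQuotes(quotes) in onPostExecute()
            }
        });
    }
}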

What have we done now? We refactored MainActivity to follow the MVP design pattern. Now, MainActivity has only one responsibility: displaying the quotes. It doesn’t need to be unit tested, because it doesn’t contain any logic. We moved the logic into the Presenter. Unfortunately, the Presenter is still hard to test. We will try to make it better in the next article.

Testing asynchronous code in a concise and easy to read manner

Reading Time: 7 minutes

We live in a fast-paced world where agile is the standard project delivery strategy, or at least the direction people tend to follow. If you have been part of an agile software delivery practice, then somewhere in your coding career you have met some form of tests, whether unit, integration (system) or some form of E2E tests.

You might be familiar with the testing pyramid and with the benefits and scopes of the different types of tests presented in the pyramid.

Let’s take a quick look at the pyramid:

Unit

As shown in the image above, the tests we write are grouped into layers from which the pyramid is built. The foundation layer is the biggest, which reflects the quantity: we need the most of these tests in our application. They are called unit tests because of the scope they are testing: a small unit, e.g. an if clause.

Integration/System

The tests belonging to the middle layer are called Integration tests and their purpose is to test integration between one or more elements inside an application and in quantitative representation we need fewer tests of this type than Unit tests.

UI/E2E

The last layer is the smallest one, meaning that the quantity of these tests should be the smallest. These tests are also called UI or E2E tests. Here a test has the biggest scope, meaning it checks more interconnected parts of your application, e.g. a whole registration scenario from the UI perspective.

As we go from the bottom to the top, maintenance costs increase and execution speed decreases. Confidence is also a crucial part: if a test higher in the pyramid passes, we are more confident that our application, or at least some part of it, works.

Our focus is on the middle layer, where the so-called integration tests live. As we mentioned above, those are the tests that check the interconnection between one or more modules inside an application, e.g. tests which check that a user can be registered by pinging an endpoint. The scope of such a test is to prepare data, send a request to the corresponding endpoint and check whether the user has been successfully created in the underlying datastore. It tests the integration between the controller and the repository layer, hence the name “integration test”.
In my opinion, tests are a must-have for every application.

Therefore we are writing integration tests for asynchronous code.

With multi-threaded data processing systems and the increased popularity of reactive programming in Java, we are often puzzled over how to write proficient tests for asynchronous code.
Writing high-value tests is hard, but writing high-value tests for asynchronous code is harder.

Problem

Let’s take a look at this example, where we have a small system that exposes several endpoints for updating a person. We have created several tests, each updating a person with a different name. When a test runs, it tries to update a person by sending a request to an endpoint. The system receives the request and returns an OK status. In the meantime, it spawns a different thread for the actual person update. On the side of the tests, we don’t know how much time the update is going to take, so the naive approach is to wait for a specific time, after which we verify whether the actual update has happened.

We have several tests, each pinging a different endpoint. The endpoints differ in the wait time needed to process each request:

  • updatePersonWith1SecondDelay
  • updatePersonWith2SecondDelay
  • updatePersonWith3SecondDelay
  • updatePersonWithDelayFrom1To5Seconds

In order for our tests to pass, I used the naive approach of adding a function waitForCompletion(), which is nothing more than sleeping the test thread (Thread.sleep() in Java).
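
A minimal sketch of that naive helper could look like the snippet below (the 5-second value is just an assumed worst case for the slowest endpoint):

    // naive approach: block the test thread long enough for the async update to (hopefully) finish
    private void waitForCompletion() {
        try {
            Thread.sleep(5000); // assumed worst-case delay of the endpoints under test
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }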

Example

The first execution of tests with a timeout of 1 second. The total execution is 4 seconds but not all tests have passed.

The second execution of tests with a timeout of 3 seconds. The total execution is 12 seconds but not all tests have passed.

Third execution of tests with a timeout of 5 seconds. The total execution is 20 seconds where all tests have passed.

But in order for all the tests to pass, we need a sleep of up to 5 seconds, executed after each test. This way we guarantee that every test will pass. However, we add an unnecessary wait of 4 seconds for the first test and, respectively, extra wait time for the other tests. This results in increased execution time, and the optimum wait time is still not guaranteed.

Solution

As stated in the official documentation, Awaitility is a small Java library for synchronizing asynchronous operations. It helps express expectations in a concise and easy to read manner, which makes it a smart option for checking the outcome of an async operation.
It’s fairly easy to incorporate this library into your codebase.

You just need to add the library into pom.xml:

		<dependency>
			<groupId>org.awaitility</groupId>
			<artifactId>awaitility</artifactId>
			<version>3.0.0</version>
			<scope>test</scope>
		</dependency>

And add the import in your test:
import static org.awaitility.Awaitility.await;

Let’s take a look at an example before using this library:

@Test
    public void testDelay1Second() throws Exception {
        Person person = new Person();
        person.setName("Yara");
        person.setAddress("New York");
        person.setAge("23");
        personRepository.save(person);

        ObjectMapper mapper = new ObjectMapper();

        person.setName("Daenerys");

        this.mockMvc.perform(put("/api/endpoint1/" + person.getId())
                .contentType(APPLICATION_JSON)
                .content(mapper.writeValueAsBytes(person)))
                .andExpect(status().isOk())
                .andExpect(content().string(containsString("Request received")));

        waitForCompletion();
        assertThat(personRepository.findById(person.getId()).get().getName())
                .isEqualTo("Daenerys");
    }

An example with Awaitility:

@Test
    public void testDelay1Second() throws Exception {
        Person person = new Person();
        person.setName("Yara");
        person.setAddress("New York");
        person.setAge("23");
        personRepository.save(person);

        ObjectMapper mapper = new ObjectMapper();

        person.setName("Daenerys");

        this.mockMvc.perform(put("/api/endpoint1/" + person.getId())
                .contentType(APPLICATION_JSON)
                .content(mapper.writeValueAsBytes(person)))
                .andExpect(status().isOk())
                .andExpect(content().string(containsString("Request received")));

        await().atMost(Duration.FIVE_SECONDS).untilAsserted(() -> assertThat(personRepository.findById(person.getId()).get().getName())
                .isEqualTo("Daenerys"));
    }

Example of the executed test suite with the library:

As we can see, the execution time is greatly reduced: from 20 seconds for all tests to pass down to just under 10 seconds.
As you can spot, the function waitForCompletion() is removed and a new wait from the library is introduced as await().atMost(Duration.FIVE_SECONDS).untilAsserted().

You can also configure the library using static methods from the Awaitility class:
Awaitility.setDefaultPollInterval(10, TimeUnit.MILLISECONDS);
Awaitility.setDefaultPollDelay(Duration.ZERO);
Awaitility.setDefaultTimeout(Duration.ONE_MINUTE);

Conclusion

In this article, we have taken a look at how to improve tests when dealing with asynchronous code using an interesting library. I hope this post benefits you and adds to your knowledge. You can find a working example with all of the tests, with and without the Awaitility library, in this repository.
Also, you can find more about the library here.

Reactive Spring with WebFlux and SQL Databases

Reading Time: 6 minutes

Since Spring Boot 2, Spring WebFlux has been available so that we can create reactive web applications. This was great and it worked fine with NoSQL databases, but when it came to relational databases there was an issue. JDBC database operations are blocking by nature, and this prevents you from creating a totally non-blocking application. In order to have an asynchronous and non-blocking application, we need to cover every layer of the application. The hero that solved this is R2DBC – Reactive Relational Database Connectivity, which makes it possible to make non-blocking calls to relational databases.

The combination of WebFlux and R2DBC is enough to cover every layer in our application that we are going to build. As a relational database, we are going to use H2. So on to the coding!

Go to the spring initializr page from where we are going to build our application and select the following configuration:

  • Group: com.north47 (or your package name)
  • Artifact: spring-r2dbc
  • Dependencies: Spring Reactive Web, Spring Data R2DBC [Experimental], H2 Database, Lombok

(You won’t be able to see Lombok in this picture, but there it is! If for some reason Lombok is causing you issues, you might need to install a plugin. To do this in IntelliJ, go to File -> Settings -> Plugins, search for Lombok, install it and restart your IDE. If you can’t manage to do it, just go the old way: remove the annotations @Data, @AllArgsConstructor and @NoArgsConstructor from the Book.java class and create your own setters, getters and constructors.)

Now click on Generate, unzip the application and open it via your IDE.

Let’s first create a SQL script that will create our table. Go to src -> main -> resources and right-click on it and select New -> File. Name the file: schema.sql and enter there the following code:

CREATE TABLE BOOK (
ID INTEGER IDENTITY PRIMARY KEY ,
NAME VARCHAR(255) NOT NULL,
AUTHOR VARCHAR (255) NOT NULL
);

This will create a table named BOOK with the following columns: ID, NAME and AUTHOR.

We will create an additional script that will insert some data into our database. Repeat the previous procedure, this time naming the file data.sql, and add the following code:

INSERT INTO BOOK (ID,NAME,AUTHOR) VALUES (1,'Angels and Demons','Dan Brown');
INSERT INTO BOOK (ID,NAME, AUTHOR) VALUES (2,'The Matarese Circle', 'Robert Ludlum');
INSERT INTO BOOK (ID,NAME,AUTHOR) VALUES (3,'Name of the Rose', 'Umberto Eco');

This will put some data into our database.

In resources, delete the application.properties file and create a new application.yml file with the following content:

logging:
  level:
    org.springframework.data.r2dbc: DEBUG
spring:
  r2dbc:
    url: r2dbc:h2:mem:///test?options=DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    name: sa
    password:


Now that we have defined the R2DBC URL and enabled the DEBUG logging level for R2DBC, let’s create our Java classes.

Create a new package domain under the ‘com.north47.springr2dbc’ and create a new class Book. This will be our database model:

package com.north47.springr2dbc.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.springframework.data.annotation.Id;
import org.springframework.data.relational.core.mapping.Column;
import org.springframework.data.relational.core.mapping.Table;

@Table("book")
@Data
@AllArgsConstructor
@NoArgsConstructor
public class Book {

    @Id
    private Long id;

    @Column(value = "name")
    private String name;

    @Column(value = "author")
    private String author;

}

Now to create our repository, first create a new package named ‘repository’ under ‘com.north47.springr2dbc’. In there, create an interface named BookRepository. This interface will extend R2dbcRepository:

package com.north47.springr2dbc.repository;

import com.north47.springr2dbc.domain.Book;
import org.springframework.data.r2dbc.repository.R2dbcRepository;

public interface BookRepository extends R2dbcRepository<Book, Long> {
}

As you may notice, we are not extending the JpaRepository as usual. The R2dbcRepository provides us with methods that work with reactive types like Flux, Mono etc…
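
For illustration only, a custom finder returning a reactive type could be added like this. Note that findByAuthor is hypothetical (it is not part of this sample project) and derived queries require a newer Spring Data R2DBC version; with the experimental version used here you may need the @Query annotation instead:

package com.north47.springr2dbc.repository;

import com.north47.springr2dbc.domain.Book;
import org.springframework.data.r2dbc.repository.R2dbcRepository;
import reactor.core.publisher.Flux;

public interface BookRepository extends R2dbcRepository<Book, Long> {

    // hypothetical derived query: emits every book written by the given author
    Flux<Book> findByAuthor(String author);
}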

After this, we will create endpoints from where we can access the previously inserted data or create new, modify it or delete it.

Create a new package ‘resource’ under the ‘com.north47.springr2dbc’ package and in there we will create our BookResource:

package com.north47.springr2dbc.resource;

import com.north47.springr2dbc.domain.Book;
import com.north47.springr2dbc.repository.BookRepository;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
@RequestMapping(value = "/books")
public class BookResource {

    private final BookRepository bookRepository;

    public BookResource(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Book> getAllBooks() {
        return bookRepository.findAll();
    }

    @GetMapping(value = "/{id}")
    public Mono<Book> findById(@PathVariable Long id) {
        return bookRepository.findById(id);
    }

    @PostMapping(consumes = MediaType.APPLICATION_JSON_VALUE)
    public Mono<Book> save(@RequestBody Book book) {
        return bookRepository.save(book);
    }

    @DeleteMapping(value = "/{id}")
    public Mono<Void> delete(@PathVariable Long id) {
        return bookRepository.deleteById(id);
    }
}

And there we have endpoints from where we can access our data and modify it.

On to Postman so we can test our application, but of course, first, let’s start it. When you run the application you can see in the console that your server has started:

Netty started on port(s): 8080

Also, since we enabled the DEBUG log level, you should be able to see all the SQL queries that are executed from the scripts that we wrote previously.

In Postman, set a GET method and the URL: localhost:8080/books. In the Headers add the key ‘Content-Type’ with the value ‘application/json’.

Press the Send button and there it is, you will get the data:

data:{"id":1,"name":"Angels and Demons","author":"Dan Brown"}

data:{"id":2,"name":"The Matarese Circle","author":"Robert Ludlum"}

data:{"id":3,"name":"Name of the Rose","author":"Umberto Eco"}

You can test also the other endpoints, for example, getting a book by id just by changing the URL to localhost:8080/books/1. The result will be:

{
    "id": 1,
    "name": "Angels and Demons",
    "author": "Dan Brown"
}

Now you can test the other endpoints, for example by creating a new Book with a POST request to localhost:8080/books or deleting a book with a DELETE request to localhost:8080/books/{id}.

Here you can find the whole code:

Spring-R2DBC

Hope you enjoyed it!

Dependency injection and how I use it in Vaccination iOS app

Reading Time: 5 minutes

In programming, dependency injection is a technique where one object supplies the dependencies of another object. The concept is that instead of the client object deciding what kind of service it will use, another object tells the client what service it has to use.

We can see dependency injection as a software pattern. The foundation of this pattern is passing the service or object to the client, instead of allowing the client to find or build the service on its own. What is the advantage of using this pattern? The main pros of this pattern are code readability and code reusability.

Dependency injection – Injector example

Dependency injection is one form of the broader technique of inversion of control. The client delegates the responsibility of providing its dependencies to external code (the injector) (Figure 1). The client is not allowed to call the injector code; it is the injecting code that constructs the services and calls the client to inject them. This means the client code does not need to know about the injecting code, how to construct the services or even which actual services it is using; the client only needs to know about the intrinsic interfaces of the services, because these define how the client may use them. This separates the responsibilities of use and construction.

Types of DI

There are 3 types of dependency injection:

  1. Constructor injection: the dependencies are provided through a class constructor
  2. Setter injection: the client exposes a setter method that the injector uses to inject the dependency
  3. Interface injection: the dependency provides an injector method that will inject the dependency into any client passed to it. Clients must implement an interface that exposes a setter method that accepts the dependency

Vaccination app example

In the iOS world, constructor injection is known as initializer-based injection. This concept is realized by injecting the dependency object (or service) during the initialization of the client class, and this dependency stays consistent/unchangeable during the life cycle of the client object.

In the previous few months, I’ve worked on the vaccination iOS application for N47 and I’ve decided to use the popular MVVM pattern. At the core of this pattern is dependency injection. The components of the pattern are Model, View, and ViewModel, and each component is responsible for a different thing in the app. The point is to make the code more modular and easy to test.

The ViewModel (VM) component is a structure that contains only the data needed by the View component. The View component presents the data injected by the ViewModel. The ViewModel, on the other side, is created by injecting a dependency from the Model component. The main advantage of MVVM is that we create views that have only one goal: presenting data. The view itself is not aware of other tasks like fetching, persisting, etc.

We can see the initializer-based injection in action with a real example used in the Vaccination Demo App of N47. Let’s first see what the details ViewModel looks like:

struct VaccineDetailsViewModel {
    let title: String
    let description: String
    let date: String?
}

The vaccine details view only needs the title, description, and date for the vaccine. It doesn’t need more information. On the other hand, the vaccine model can contain more details about the vaccine, but this information is useless for the View. Inside the view controller (View component) we define a view model property and set it via the controller initializer. We can see this in the code snippet below:

 var vacineViewModel: VaccineDetailsViewModel?

class func createController(viewModel: VaccineDetailsViewModel?) -> VaccinesDetailViewController {
     
        let controller = VaccinesDetailViewController(nibName: "VaccinesDetailViewController", bundle: nil)
        controller.vacineViewModel = viewModel

        return controller
    }

This type of injection is preferable because it keeps us safe from creating incomplete objects and, with that, helps us avoid coding mistakes.
So when I want to create a controller that will present the details for the vaccines and the scheduled vaccines, I’m using injection via the initializer in this way:

let details = VaccinesDetailViewController.createController(viewModel: vaccination.createModel())

Other DI types in action…

In the Vaccination App, I’m also using dependency injection via setter when creating the UITableView cells.

var vaccinationData: Vaccination? = nil {
        didSet {
            guard let vaccineId = vaccinationData?.vaccineId else { return }
            guard let vaccine = VaccineManager.sharedInstance.getVaccineById(vaccineId: vaccineId) else { return }
            let language = ModuleSharedPreferences.shared.language.rawValue
            let translation = vaccine.translations[language]
            
            vaccineTitleLabel.text = translation?.name
            vaccineApplyDateLabel.text = vaccinationData?.date
        }
    }

The code snippet above shows the vaccination data property that should be set via the setter if we want the cell to be populated with data. Here is the code that will do the magic:

        let cell = tableView.dequeueReusableCell(withIdentifier: VaccinesTableViewCell.cellIdentifier, for: indexPath) as! VaccinesTableViewCell
        
        let vaccination = vaccinationList[indexPath.row]
        
        cell.vaccinationData = vaccination

Conclusion

Dependency injection is a powerful technique. Our code becomes more readable, reusable and easy to test. We were able to see this technique in action in a real project, used within the popular MVVM design pattern. Using this technique, we make sure that our components/services are complete and fully created before we start to use them.

Difference between Normal and Arrow Functions

Reading Time: 3 minutes

Arrow functions were introduced in ECMAScript 2015. They are very powerful and simple. Many ES5-based projects have adopted this feature to refactor their code. However, arrow functions aren’t the same as the normal functions you’ve come to know. In this blog, I will explain the difference between normal and arrow functions.

This Keyword

A normal function’s this binding is determined by who calls the function:

var a = 10; // var, so that a becomes a property of the global object (window.a)
let obj = { 
  hello,
  a: 20
};

function hello() {
  console.log(this.a);
}

hello(); // outputs 10
obj.hello(); //outputs 20

The method hello gives a different result depending on how it’s called. This is because a normal function’s this is bound to the object that calls the function.

In contrast to a normal function, an arrow function’s this is always bound to the this of the outer scope that surrounds it:

var a = 10;
let hello = () => { 
  console.log(this.a);
};

let obj = {
  hello,
  a: 20
}

hello(); // outputs 10
obj.hello(); //outputs 10

In this example, hello is an arrow function. It’s a property of obj. Even though obj calls hello, it still prints 10, because an arrow function’s this always refers to the outer environment’s this. Here the outer this is the global object (window), so this.a points to window.a, which is 10.

Arguments

A normal function has a special property called arguments:

function hello () {
  console.log(arguments.length)
};

hello(1, 2, 3); // outputs 3

hello(10); // outputs 1

An arrow function doesn’t have an arguments property:

let hello = () => {
  console.log(arguments.length)
};

hello(1, 2, 3); // throws ReferenceError: arguments is not defined

Binds

Function.prototype.bind is a method you can use to change the this of a function:

var car = 'Volvo'

function whatCar() {
  console.log(this.car)
};

whatCar(); // outputs 'Volvo'
whatCar.bind({car: 'Nissan'})() // outputs 'Nissan'

whatCar prints the value depending on the assigned this.

But an arrow function doesn’t work with Function.prototype.bind, because it doesn’t have a local this binding; it just looks at the outer environment’s this:

var car = 'Volvo'

let whatCar = () => {
  console.log(this.car)
};

whatCar(); // outputs 'Volvo'
whatCar.bind({car: 'Nissan'})() // outputs 'Volvo'

Constructor

A normal function is both constructible and callable, while an arrow function is only callable and not constructible; arrow functions can never be invoked with the new keyword:

function hello () {};
let hi = () => {};
new hello(); // works
new hi();    // throws TypeError: hi is not a constructor

Arrow functions are handier and more stylish, but there are differences between arrow functions and normal functions. An arrow function won’t always be the best option to use; which function to choose depends on each situation.

ReactiveX in Android with an example – RxJava

Reading Time: 5 minutes

What is Reactive Programming?

Reactive programming is programming with asynchronous data streams. It enables creating streams of anything: events, failures, variables, messages, etc. By using reactive programming in your application, you are able to create streams on which you can then perform actions as the data is emitted by them.

Observer Pattern

The observer pattern is a software design pattern which defines a one-to-many relationship between objects. It means that if the value/state of the observed object changes, the other objects which are observing it get notified and updated.
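
As a rough, plain-Java sketch (not RxJava yet, and not tied to the example app), the pattern boils down to something like this:

import java.util.ArrayList;
import java.util.List;

interface Observer {
    void update(String newState);
}

class Subject {
    private final List<Observer> observers = new ArrayList<>();

    void addObserver(Observer observer) {
        observers.add(observer);
    }

    // every registered observer gets notified when the state changes
    void setState(String newState) {
        for (Observer observer : observers) {
            observer.update(newState);
        }
    }
}

public class ObserverPatternDemo {
    public static void main(String[] args) {
        Subject subject = new Subject();
        subject.addObserver(state -> System.out.println("Observer 1 got: " + state));
        subject.addObserver(state -> System.out.println("Observer 2 got: " + state));
        subject.setState("new value"); // both observers print the new value
    }
}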

ReactiveX

ReactiveX is a polyglot implementation of reactive programming which extends the observer pattern and provides a bunch of data-manipulation operators and threading abilities.

RxJava

RxJava is the JVM implementation of ReactiveX. Its main building blocks are listed below, followed by a minimal snippet that ties them together:

  • Observable – is a stream which emits the data
  • Observer – receives the emitted data from the observable
    • onSubscribe() – called when subscription is made
    • onNext() – called each time observable emits
    • onError() – called when an error occurs
    • onComplete() – called when the observable completes the emission of all items
  • Subscription – when the observer subscribes to observable to receive the emitted data. An observable can be subscribed by many observers
  • Scheduler – defines the thread where the observable emits and the observer receives it (for instance: background, UI thread)
    • subscribeOn(Schedulers.io())
    • observeOn(AndroidSchedulers.mainThread())
  • Operators – enable manipulation of the streamed data before the observer receives it
    • map()
    • flatMap()
    • concatMap() etc.
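
Before the Android example, here is a minimal plain-Java RxJava 2 snippet (assuming the io.reactivex artifacts) that ties these pieces together; on Android you would normally add observeOn(AndroidSchedulers.mainThread()) before subscribe(), as shown later:

import io.reactivex.Observable;
import io.reactivex.schedulers.Schedulers;

public class RxMinimalExample {
    public static void main(String[] args) throws InterruptedException {
        Observable.just("first", "second", "third")    // the observable emits the data
                .subscribeOn(Schedulers.io())          // emission happens on a background thread
                .map(String::toUpperCase)              // an operator transforms each item
                .subscribe(
                        item -> System.out.println("onNext: " + item),
                        error -> System.err.println("onError: " + error),
                        () -> System.out.println("onComplete"));

        Thread.sleep(500); // keep the main thread alive so the background emission can finish
    }
}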

Example usage on Android

Tools, libraries, services used in the example:

  • Libraries:
    • ButterKnife – simplifying binding for android views
    • RxJava, RxAndroid – for reactive libraries
    • Retrofit2 – for network calls
  • Fake online rest API:
  • Java object generator from JSON file

What we want to achieve is to fetch users from the fake REST API, show them in a RecyclerView, and load each user’s todo list to show the number of todos in the same RecyclerView, without blocking the UI.

Here we define our endpoints. Retrofit2 supports RxJava Observable as a return type for network calls.

    @GET("/users")
    Observable<List<User>> getUsers();

    @GET("/users/{id}/todos")
    Observable<List<Todo>> getTodosByUserID(@Path("id") int id);

    @GET("/todos")
    Observable<List<Todo>> getTodos();

Let’s fetch users:

  • .getUsers() – returns an observable of a list of users
  • .subscribeOn(Schedulers.io()) – makes getUsers() perform on a background thread
  • .observeOn(AndroidSchedulers.mainThread()) – we switch to the UI thread
  • flatMap – we set the data on the RecyclerView and return an Observable of users, which will be needed for fetching the todo lists
    private Observable<User> getUsersObservable() {
        return ServicesProvider.getDummyApi()
                .getUsers()
                .subscribeOn(Schedulers.io())
                .observeOn(AndroidSchedulers.mainThread())
                .flatMap((Function<List<User>, ObservableSource<User>>) users -> {
                    adapterRV.setData(users);
                    return Observable.fromIterable(users);
                });
    }

Now, fetch the todo list of each user using the 2nd endpoint.

Since we are not going to make another call inside the mapping, we don’t need to return an Observable from it. So, here we use map() instead of flatMap() and we return the User type.

    private Observable<User> getTodoListByUserId(User user) {
        return ServicesProvider.getDummyApi()
                .getTodosByUserID(user.getId())
                .subscribeOn(Schedulers.io())
                .map(todoList -> {
                    sleep();
                    user.setTodoList(todoList);
                    return user;
                });
    }

Now, fetch the todo list of users using the 3rd endpoint.

The difference from the 2nd endpoint is that this one returns a list of todos for all users. Here we can see the usage of the filter() operator.

    private Observable<User> getAllTodo(User user) {
        return ServicesProvider.getDummyApi()
                .getTodos()
                .subscribeOn(Schedulers.io())
                .flatMapIterable((Function<List<Todo>, Iterable<Todo>>) todoList -> todoList)
                .filter(todo -> todo.getUserId().equals(user.getId()) && todo.getCompleted())
                .toList().toObservable()
                .map(todoList -> {
                    sleep();
                    user.setTodoList(todoList);
                    return user;
                });
    }
  • .flatMapIterable() – is used to convert Observable<List<T>> to Observable<T>, which is needed to filter each item in the list
  • .filter() – we filter the todos to get each user’s completed todo list
  • .toList().toObservable() – for converting back to Observable<List<T>>
  • .map() – we set the filtered list on the user object, which will be used in the next code snippet

Now, the last step, we call the methods:

        getUsersObservable()
                .subscribeOn(Schedulers.io())
                .concatMap((Function<User, ObservableSource<User>>) this::getTodoListByUserId) // operator can be concatMap()
                .observeOn(AndroidSchedulers.mainThread())
                .subscribe(new Observer<User>() {
                    @Override
                    public void onSubscribe(Disposable d) {
                        disposables.add(d);
                    }

                    @Override
                    public void onNext(User user) {
                        adapterRV.updateData(user);
                    }

                    @Override
                    public void onError(Throwable e) {
                        Log.e(TAG, e.getMessage());
                    }

                    @Override
                    public void onComplete() {
                        Log.d(TAG, "completed!");
                    }
                });
  • subscribeOn() – makes the upstream work run on a background thread
  • concatMap() – here we call one of our methods, getTodoListByUserId() or getAllTodo()
  • .observeOn(), .subscribe() – every time a user’s todo list is fetched from the API on a background thread, the data is emitted and onNext() is triggered, so we update the RecyclerView on the UI thread
  • Left (screenshot): getTodoListByUserId() with flatMap()
  • Right (screenshot): getAllTodo() with concatMap() – filter usage

The difference between flatMap and concatMap is that the former merges the inner results in an arbitrary order, while the latter preserves the order of the source.
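
A tiny standalone RxJava 2 sketch (not part of the sample app) makes this visible: each value is delayed by its own number of seconds, so flatMap emits in completion order while concatMap keeps the source order:

import java.util.concurrent.TimeUnit;

import io.reactivex.Observable;

public class FlatVsConcat {
    public static void main(String[] args) {
        Observable<Integer> source = Observable.just(3, 2, 1);

        // flatMap: inner observables run concurrently, results arrive as they complete -> 1, 2, 3
        source.flatMap(i -> Observable.just(i).delay(i, TimeUnit.SECONDS))
                .blockingForEach(i -> System.out.println("flatMap:   " + i));

        // concatMap: waits for each inner observable to complete -> 3, 2, 1
        source.concatMap(i -> Observable.just(i).delay(i, TimeUnit.SECONDS))
                .blockingForEach(i -> System.out.println("concatMap: " + i));
    }
}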

Disposable

When an observer subscribes to an observable, a Disposable object is provided in the onSubscribe() method. It can later be used to terminate the background work, so a callback does not return to a dead activity.

private CompositeDisposable disposables = new CompositeDisposable();

observableobject.subscribe(new Observer<User>() {
    @Override
    public void onSubscribe(Disposable d) {
        disposables.add(d);
    }

    // onNext(), onError() and onComplete() omitted for brevity
});

@Override
protected void onDestroy() {
    super.onDestroy();
    disposables.dispose();
}

Summary

In this post, I tried to give brief information about reactive programming, observer pattern, ReactiveX library and a simple example on android.

Why should you use RxJava in your projects?

  • less boilerplate code
  • easy thread management
  • thread-safety
  • easy error handling

Gitlab Repository

Example sourcecode: https://gitlab.com/47northlabs/public/android-rxjava

Links