Spring Boot REST API with OpenAPI (SwaggerUI) Codegen

Reading Time: 5 minutes

When working with microservices architecture, one of the most important aspects is inter-service communication. Usually, each microservice stores data in its own database, and if we follow the MVC design pattern, we probably have model classes that map the relational database to object models, and components that contain methods for performing CRUD operations. These components are exposed by controller endpoints.

For one microservice to call another, the caller needs to know the exact request and response model classes. This article shows a simple example of how to generate such models with SpringDoc OpenAPI.

I will create two services that provide basic CRUD operations. For demonstration purposes, I chose to store data about vehicles:

  • vehicle-manager – the microservice that provides vehicle data to the client
  • vehicle-manager-client – the client microservice that requests vehicle data

For the purpose of this tutorial, I created empty Spring Boot projects via Spring Initializr.

In order to use the OpenAPI in our Spring Boot project, we need to add the following Maven dependency in our pom file:

<dependency>
  <groupId>org.springdoc</groupId>
  <artifactId>springdoc-openapi-ui</artifactId>
  <version>1.5.5</version>
</dependency>

In the vehicle-manager microservice I created a Vehicle class that looks like this:

@Data
@Builder
@Schema(name = "Vehicle", description = "Example vehicle schema")
public class Vehicle {
    private VehicleType vehicleType;
    private String registrationPlate;
    private int seatsCount;
    private Category category;
    private double price;
    private Currency currency;
    private boolean available;
}
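
The enum types referenced by the model (VehicleType, Category, Currency) are not shown in the original snippet. Based on the generated OpenAPI schema further below, they might look roughly like this (a sketch; the exact definitions are an assumption):

public enum VehicleType { MOTORBIKE, CAR, VAN, BUS, TRUCK }

public enum Category { A, B, C, D, E }

public enum Currency { EUR, USD, CHF, MKD }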

And a controller:

package com.n47.vehiclemanager.ctrl;

import com.n47.vehiclemanager.model.Vehicle;
import com.n47.vehiclemanager.service.VehicleService;
import io.swagger.v3.oas.annotations.tags.Tag;
import lombok.RequiredArgsConstructor;
import org.springframework.web.bind.annotation.*;

import javax.validation.Valid;

@Tag(name = "vehicle", description = "Vehicle controller API")
@RestController
@RequiredArgsConstructor
@RequestMapping(path = "/vehicle")
public class VehicleCtrl {

    private final VehicleService vehicleService;

    @PostMapping(path = "/add")
    public void addVehicle(@RequestBody @Valid Vehicle vehicle) {
        vehicleService.addVehicle(vehicle);
    }

    @GetMapping(path = "/get")
    public Vehicle getVehicle(@RequestParam String registrationPlate) throws Exception {
        return vehicleService.getVehicle(registrationPlate);
    }
}

The important OpenAPI annotations here are @Schema and @Tag. The former defines the model class that should be included in the API documentation. The latter is used for grouping operations, for example all methods under one controller.
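
The VehicleService injected into the controller is not part of the original post. For a runnable example it could be as simple as an in-memory store keyed by registration plate (a sketch under that assumption; the article later mentions hardcoded data):

package com.n47.vehiclemanager.service;

import com.n47.vehiclemanager.model.Vehicle;
import org.springframework.stereotype.Service;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

@Service
public class VehicleService {

    // simple in-memory storage, keyed by registration plate
    private final Map<String, Vehicle> vehicles = new ConcurrentHashMap<>();

    public void addVehicle(Vehicle vehicle) {
        vehicles.put(vehicle.getRegistrationPlate(), vehicle);
    }

    public Vehicle getVehicle(String registrationPlate) throws Exception {
        Vehicle vehicle = vehicles.get(registrationPlate);
        if (vehicle == null) {
            throw new Exception("No vehicle found for registration plate: " + registrationPlate);
        }
        return vehicle;
    }
}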

The Swagger documentation interface for the vehicle-manager microservice is shown in Figure 1 and can be accessed at the following links:

If we open http://localhost:8080/api-docs in our browser (or on whichever port we set our Spring Boot app to run), we get the entire documentation for the vehicle-manager microservice. The part important for model generation is right under components/schemas, while the controller endpoints are under paths.

{
   "openapi":"3.0.1",
   "info":{
      "title":"OpenAPI definition",
      "version":"v0"
   },
   "servers":[
      {
         "url":"http://localhost:8080",
         "description":"Generated server url"
      }
   ],
   "tags":[
      {
         "name":"vehicle",
         "description":"Vehicle controller API"
      }
   ],
   "paths":{
      "/vehicle/add":{
         "post":{
            "tags":[
               "vehicle"
            ],
            "operationId":"addVehicle",
            "requestBody":{
               "content":{
                  "application/json":{
                     "schema":{
                        "$ref":"#/components/schemas/Vehicle"
                     }
                  }
               },
               "required":true
            },
            "responses":{
               "200":{
                  "description":"OK"
               }
            }
         }
      },
      "/vehicle/get":{
         "get":{
            "tags":[
               "vehicle"
            ],
            "operationId":"getVehicle",
            "parameters":[
               {
                  "name":"registrationPlate",
                  "in":"query",
                  "required":true,
                  "schema":{
                     "type":"string"
                  }
               }
            ],
            "responses":{
               "200":{
                  "description":"OK",
                  "content":{
                     "*/*":{
                        "schema":{
                           "$ref":"#/components/schemas/Vehicle"
                        }
                     }
                  }
               }
            }
         }
      }
   },
   "components":{
      "schemas":{
         "Vehicle":{
            "type":"object",
            "properties":{
               "vehicleType":{
                  "type":"string",
                  "enum":[
                     "MOTORBIKE",
                     "CAR",
                     "VAN",
                     "BUS",
                     "TRUCK"
                  ]
               },
               "registrationPlate":{
                  "type":"string"
               },
               "seatsCount":{
                  "type":"integer",
                  "format":"int32"
               },
               "category":{
                  "type":"string",
                  "enum":[
                     "A",
                     "B",
                     "C",
                     "D",
                     "E"
                  ]
               },
               "price":{
                  "type":"number",
                  "format":"double"
               },
               "currency":{
                  "type":"string",
                  "enum":[
                     "EUR",
                     "USD",
                     "CHF",
                     "MKD"
                  ]
               },
               "available":{
                  "type":"boolean"
               }
            },
            "description":"Example vehicle schema"
         }
      }
   }
}

I am going to create a vehicle-manager-client service, running on port 8082, that will get vehicle information for a given registration plate by calling the vehicle-manager microservice. To do so, we need to generate the Vehicle model class defined in the original vehicle-manager microservice. We can generate it by adding the Swagger codegen plugin to the plugins section of the new client service's pom, like this:

<profiles>
  <profile>
    <id>generateModels</id>
    <build>
      <plugins>
        <plugin>
          <groupId>io.swagger.codegen.v3</groupId>
          <artifactId>swagger-codegen-maven-plugin</artifactId>
          <version>3.0.11</version>
          <configuration>
            <output>${project.basedir}</output>
            <inputSpec>default-config</inputSpec>
            <language>java</language>
            <generateModels>true</generateModels>
            <generateModelDocumentation>false</generateModelDocumentation>
            <generateApis>false</generateApis>
            <generateApiTests>false</generateApiTests>
            <generateModelTests>false</generateModelTests>
            <generateSupportingFiles>false</generateSupportingFiles>
            <configOptions>
              <sourceFolder>src/main/java</sourceFolder>
              <hideGenerationTimestamp>true</hideGenerationTimestamp>
              <sortParamsByRequiredFlag>true</sortParamsByRequiredFlag>
              <checkDuplicatedModelName>true</checkDuplicatedModelName>
              <useBeanValidation>true</useBeanValidation>
              <library>feign</library>
              <dateLibrary>java8-localdatetime</dateLibrary>
            </configOptions>
          </configuration>
          <executions>
            <execution>
              <id>generate-vehiclemanager-classes</id>
              <goals>
                <goal>generate</goal>
              </goals>
              <configuration>
                <inputSpec>http://localhost:8080/api-docs</inputSpec>
                <language>java</language>
                <modelPackage>com.n47.domain.external.model</modelPackage>
                <modelsToGenerate>Vehicle</modelsToGenerate>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

After running the corresponding Maven profile with:

> mvn clean compile -P generateModels

the models defined in the <modelsToGenerate> tag will be created under the package specified in the <modelPackage> tag.

Codegen generates the entire model class for us, together with all classes defined inside it.
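
The exact output depends on the codegen version and the options above, but the generated class typically looks roughly like the following trimmed sketch (an assumption for illustration; getters, setters and the nested enum types generated from the schema are omitted):

package com.n47.domain.external.model;

import com.fasterxml.jackson.annotation.JsonProperty;

public class Vehicle {

    @JsonProperty("vehicleType")
    private VehicleType vehicleType;

    @JsonProperty("registrationPlate")
    private String registrationPlate;

    @JsonProperty("seatsCount")
    private Integer seatsCount;

    @JsonProperty("category")
    private Category category;

    @JsonProperty("price")
    private Double price;

    @JsonProperty("currency")
    private Currency currency;

    @JsonProperty("available")
    private Boolean available;

    // generated getters, setters, equals/hashCode/toString omitted
}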

It is important to note that we can have models generated from different services. In each <execution> block of the plugin configuration, we can point the corresponding <inputSpec> tag to that service's API documentation link.

To demonstrate the data transfer from the vehicle-manager to the vehicle-manager-client microservice, we can send a simple request via Postman. The request is a GET request that accepts a registrationPlate parameter, which is used to query the vehicles stored in the vehicle-manager microservice. The response, shown in Figure 3, is a JSON containing the vehicle data that I hardcoded in the vehicle-manager microservice.
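
On the client side, the generated model can then be used directly when calling the vehicle-manager service. A minimal sketch of such a call (the class name, endpoint and use of RestTemplate are assumptions, not part of the original code):

package com.n47.vehiclemanagerclient.ctrl;

import com.n47.domain.external.model.Vehicle;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class VehicleClientCtrl {

    private final RestTemplate restTemplate = new RestTemplate();

    // forwards the registration plate to vehicle-manager and maps the response
    // onto the generated Vehicle model
    @GetMapping("/client/vehicle")
    public Vehicle getVehicle(@RequestParam String registrationPlate) {
        return restTemplate.getForObject(
                "http://localhost:8080/vehicle/get?registrationPlate={plate}",
                Vehicle.class,
                registrationPlate);
    }
}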

Using OpenAPI helps us get rid of copy-pasted boilerplate code, and more importantly, it gives us an automated mechanism that generates the latest models from the other microservices on each Maven clean compile.

You can find the full code example microservices in the links below:

Feel free to download and run them yourself, and leave a comment or feedback.

Rest assure your API

Reading Time: 4 minutes


Most, if not all, of today's applications expose some API for interaction, either for customers or for other applications. An application programming interface, or API, is a software mediator that allows two applications to communicate. Each time we use Facebook, YouTube or some other app, we are essentially using an API. An API is a set of HTTP endpoints used to send and retrieve data in some form, JSON or XML. Making sure those HTTP endpoints send and retrieve correct data, and thus work according to the specification, is a vital requirement. Testing APIs belongs to the last (E2E) layer of the testing pyramid, about which you can find more information in my previous blog.

Introduction to Rest Assured

Rest Assured is an open-source Java library used for testing RESTful web services. It allows us to write tests using the BDD pattern. Rest Assured is a headless client for accessing REST web services. The library is highly customizable, allowing us to create a wide variety of request combinations to test different combinations of the application's core business logic.

High customizability also comes in handy when we want to verify the responses from the server, where we can check the status code, status message, body, headers, etc. This makes Rest Assured a versatile library that is often used for API testing.

Rest Assured

Pseudo Syntax:

Given(). 
        param("a", "b"). 
        header("c", "d").
when().
Method().
Then(). 
        statusCode(XXX).
        body("x.y", equalTo("z"));

The syntax of Rest Assured is the most interesting part: it uses the BDD style and is very easy to understand.

Explanation:

  • Given() – used to set up the pre-conditions/context; here you pass the request headers, query and path parameters, body and cookies. This is optional if these items are not needed in the request
  • When() – marks the premise of the test
  • Method() – specifies the HTTP method to perform (POST, GET, PUT, PATCH, DELETE)
  • Then() – specifies the result/outcome and is used for assertions

Let’s create automation tests

Create a new project in IntelliJ

Add the following dependency:

        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>RELEASE</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>io.rest-assured</groupId>
            <artifactId>rest-assured</artifactId>
            <version>4.4.0</version>
            <scope>test</scope>
        </dependency>

Create a new test class

Simple test example:

import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;

public class HelloYouTubeRestAssured {

    @Test
    public void greetingsYouTube() {
        given().when()
                .get("http://youtube.com/")
                .then()
                .statusCode(200);
    }
}

The simple test connects to YouTube, performing a GET call and making sure that the server responds with a success status code of 200.

Another test verifies the Users API:

import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.startsWith;

public class UsersApiTest {

    @Test
    public void checkUsers() {
        given()
                .baseUri("https://jsonplaceholder.typicode.com")
                .when()
                .get("/users")
                .then()
                .statusCode(200)
                .statusLine("HTTP/1.1 200 OK")
                .body("id", hasSize(10))
                .body("name[0]", equalTo("Leanne Graham"))
                .body("username[0]", equalTo("Bret"))
                .body("email[0]", equalTo("Sincere@april.biz"))
                .body("address[0].city", equalTo("Gwenborough"))
                .body("phone[0]", startsWith("1-770-736-8031"))
                .body("website[0]", equalTo("hildegard.org"))
                .body("company[0].name", equalTo("Romaguera-Crona"));
    }
}

As we can see from the above examples, the tests are self-contained: a single call is performed to the server and only a single response is evaluated. The test navigates to the Users API of the application and then verifies the response from the server. The verification first checks that the status code of the response is 200 OK. Then we verify that the response has 10 items, and after that we verify that the first item contains the expected data. We can also assert nested data of the user object, such as address[0].city and company[0].name. The assertions we use come from org.hamcrest, which is incorporated into Rest Assured.
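
As mentioned earlier, the verification is not limited to the body: headers and the status line can be asserted in the same chain. A small sketch against the same placeholder API (the header assertion is an assumption about its usual response):

import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.containsString;

public class ResponseMetadataTest {

    @Test
    public void checkStatusLineAndHeaders() {
        given()
                .baseUri("https://jsonplaceholder.typicode.com")
                .when()
                .get("/users/1")
                .then()
                .statusCode(200)                                              // status code
                .statusLine(containsString("OK"))                             // status message
                .header("Content-Type", containsString("application/json"))  // header
                .body("username", containsString("Bret"));                    // body
    }
}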

Conclusion

Even though here we have scratched the surface, I hope that you now have a better understanding of Rest-Assured. You can find a working example with the tests on this repository.
Also, you can find more about the Rest-assured usage here.

WTF are NFTs?!

Reading Time: 6 minutes

Before I start to explain what an NFT is, let's have a look at some examples. I tried to gather different styles of NFTs in the high price segment. There are, of course, also NFTs for $100 and less.

Sold for $210’000

LeBron James: Dunk, From the Top (Series 1)

Source: https://nbatopshot.com/moment/bigdog_brothers+2499f572-8280-4057-ac27-5603971de95d

Sold for $888’888

Hairy: Musician, fashion designer, and entrepreneur Steve Aoki recently collaborated with 3D illustrator Antoni Tudisco to produce the eclectic piece known simply as ‘Hairy’ (A blue bespectacled creature bopping to one of Aoki’s beats in a 36-second clip).

Source: https://niftygateway.com/itemdetail/primary/0xbeccd9e4a80d4b7b642760275f60b62608d464f7/1

Sold for $2.9 million

First Twitter Tweet: First tweet posted by Twitter founder and CEO Jack Dorsey

Source: https://v.cent.co/tweet/20

Sold for $69 million

EVERYDAYS: THE FIRST 5000 DAYS: The artwork, created by famed digital artist Mike “Beeple” Winkelmann represents a collage of 5,000 of Beeple’s earlier artworks

Source: https://onlineonly.christies.com/s/beeple-first-5000-days/beeple-b-1981-1/112924

What is an NFT?

As you saw in the few examples, NFTs can be anything. It could be a tweet, a digital painting, a video clip, an animation, music, a 3D model, a picture, a GIF or even virtual land in a blockchain-based game. To further explain what NFT exactly means, it’s easier to split the word and have a closer look at Non-Fungible and Token.

NFT = Non-fungible token

Non-fungible

The official definition of fungible is “to be substituted for something of equal value or utility; interchangeable, exchangeable, replaceable”. For now, let's replace the word fungible with replaceable. So non-fungible means non-replaceable. Let's look at some examples:

Physical fungible (replaceable)

  • CHF Coins and CHF Notes (my 10 Swiss Franc note has the same value as 10 x 1 Swiss Franc coins)
  • Precious metals like gold and silver (my 1kg gold bar has the same value as your 1kg gold bar)

Virtual fungible (replaceable)

  • Bitcoins and other crypto currencies (my 0.00001 BTC has the same value as your 0.00001 BTC)

Physical non-fungible (non replaceable)

  • Historic coins (the first 1 Swiss Franc coin, or a limited special edition coin. The value is not really defined. Also, the first 1 Swiss Franc coin does not have the same value as the current 1 Swiss Franc coin)
  • Art (like paintings from Banksy. It’s unique and the value is only defined by the potential buyers. If a painting is destroyed, it’s not replaceable by another “similar” one)

Virtual non-fungible (non replaceable)

  • Tweets
  • NBA Dunks
  • Art (image, video, 3D model, music)

Token

The token certifies a digital asset to be unique and therefore not interchangeable. It is proof of ownership that is stored on the blockchain (in this case: Ethereum). While someone can sell an NFT representing their work, the buyer does not necessarily receive copyright privileges when ownership of the NFT changes, which allows the original creator to create further NFTs of the same work. An NFT is merely proof of ownership, separate from copyright.

NFT Properties

  • Unique – Each NFT has different properties that are usually stored in the token's metadata.
  • Provably scarce – There is usually a limited number of NFTs, with an extreme example of having only 1 copy; the number of tokens can be verified on the blockchain, hence its provability.
  • Indivisible – Most NFTs cannot be split into smaller denominations, so you cannot buy or transfer a fraction of your NFT.

The dark side of NFTs

NFTs are stored on ETH (Ethereum) Blockchain, which is currently using 51 TWh/year. That’s 51’000’000’000 kWh/year (comparable to the power consumption of the whole of Portugal). If we calculate the carbon footprint, it’s 30’000’000 tons of CO2/year. You could drive 121’000’000’000 km with a car to have the same emissions as ETH has in one year.

Source: https://digiconomist.net/ethereum-energy-consumption/

Create your own NFT

The process is very simple and creating an NFT at OpenSea is done in a few minutes. I decided to go for OpenSea because it’s the most popular marketplace and easiest to use.

Create account, collection and item

  1. Create digital art (image, video, audio, or 3D model) with your favourite tools. I did it with Adobe Photoshop and Adobe Premiere.
  2. Create an account at OpenSea. I used MetaMask as my wallet. You can also choose other wallets. If using MetaMask, it’s the easiest to have it installed as a Chrome/Firefox addon.
  3. Create a collection and add a new item to it.
  4. Upload your art (file types supported: JPG, PNG, GIF, SVG, MP4, WEBM, MP3, WAV, OGG, GLB, GLTF. Max size: 40 MB) and give it a name. That’s all you need for now!

Sell your item

After you created your item inside your collection, it is ready for selling. You will have the option to sell for a fixed price or create an open auction (highest bid). I decided to go for an auction and I will let it run until the end of the year.

  1. Click on your item
  2. Click on the top right corner button “Sell”
  3. Select “Highest Bid” to create an open auction
  4. All other settings are customizable, like minimum bid, the reserve price and expiration date. I set the minimum bid to 0, the reserve price to 1 and the expiration date to 31.12.2021 (end of the year)
  5. Post the listing with the big blue button. Posting something the first time requires a gas fee. Depending on the time and day, this will vary. See https://ycharts.com/indicators/ethereum_average_gas_price
  6. Congratz! You created and listed your first NFT!

Our N47 NFT is up for sale!

Of course, I had to create an N47 NFT too! Our sale ends at the end of the year (December 31, 2021, at 12:00 am CEST). Check the listing at OpenSea and make a bid! It will be a great investment 🤑

https://opensea.io/assets/0x495f947276749ce646f68ac8c248420045cb7b5e/84521541564558765496637908089370856586310363315177326824411334291304117960705

Spring Boot password encryption with Jasypt

Reading Time: 5 minutes

Securing sensitive data is extremely important. In the following tutorial, we will go through the process of encrypting sensitive data in a Spring Boot application. We will take an easy approach to this very common procedure, which takes place in almost any software project. It will be easy in terms of setting up and using the given high-security Java library, without the need for an in-depth understanding of cryptography, encryption capabilities and encryption algorithms: just a simple setup with a few configuration steps. It is recommended to rely on the secure default configuration, but Jasypt also offers quite some customization if one needs it.

Jasypt (Java Simplified Encryption) provides utilities for encrypting sensitive user information, such as DB passwords, server credentials, or other sensitive personal data. This information is key to users' privacy, so we as developers need to make sure that no one gets the right to access it: irrespective of where it is stored, it always needs to be encrypted. Never store sensitive data in plain text. It's common sense we need to follow, and it's also something we need to honour if we want to gain our users' trust. For this tutorial, we will use a specific library, Jasypt Spring Boot Starter, widely used across the Spring Boot community.

Jasypt setup steps

  1. Add jasypt-spring-boot-starter maven dependency in the pom.xml of the Spring Boot project
  2. Select a secret key to be used for encryption and decryption
  3. Generate Encrypted Key
  4. Add the Encrypted key in the config file
  5. Run the application

Let’s go into details in all of these steps:

Step 1. Adding maven dependency

<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot-starter</artifactId>
    <version>3.0.3</version>
</dependency>

This comes as a regular entry in the pom.xml file list of dependencies.

We should try to use the latest stable version. This version is the latest one at the moment and it offers better support for newer versions of Spring Boot, starting from 2.1.x and upwards. I would also advise using it because it comes with a more secure encryption algorithm by default, "PBEWITHHMACSHA512ANDAES_256".

Step 2. Select a secret key to be used for encryption and decryption

This secret key will be used to encrypt and decrypt the data. You can think of it as a safeguard to further improve security. What it does is add a random string to the beginning or end of the input text prior to hashing or encrypting the value. This secret key goes into the property file, application.yml/application.properties, of the Spring Boot project itself.

jasypt:
     encryptor:
           password: salting

Step 3. Generate Encrypted Key

Jasypt supplies a number of command-line (CLI) tools. In order to use them, you should download the distribution zip file (named jasypt-$VERSION-dist.zip) and unzip it. There is an appropriate .bat or .sh file for each of the operations: digest/encrypt/decrypt.

Example for encryption

$ ./encrypt.sh input="This is my message to be encrypted" password=MYPASSWORD

Example for decryption

$ ./decrypt.sh input="8fsdfdsafdsa9ffsad0fdsa0fdsfdsa3231x" password=MYPASSWORD

Another way of using Jasypt for encrypting your data is by using some online tools that provide Jasypt operations.

The simplest and most convenient way is the Maven plugin. Not only can you use it for a single value, it also offers the capability to encrypt all sensitive data with a single command, meaning all placeholders are updated in one step.

<build>
  <plugins>
    <plugin>
      <groupId>com.github.ulisesbocchio</groupId>
      <artifactId>jasypt-maven-plugin</artifactId>
      <version>3.0.3</version>
    </plugin>
  </plugins>
</build>

This jasypt-maven-plugin, by default, will check for configuration files under ./src/main/resources, i.e. the regular Spring Boot resource folders. Environment variables can also be used to supply the master password: instead of exposing the password "salting" inside the project itself, an environment variable such as ENCRYPTION_MASTER_PASSWORD can be created and then referenced in the config file as password: ${ENCRYPTION_MASTER_PASSWORD}.

Example for encrypting a single value from a terminal.

This example uses the encryption password as an argument. Important: the terminal session needs to be opened in the folder where the pom.xml file with the Maven plugin is located.

mvn jasypt:encrypt-value -Djasypt.encryptor.password=salting -Djasypt.plugin.value="secureDataWeNeedToEncrypt"

Example for encrypting all strings within the project's property file.

The last argument is optional since Jasypt will scan that location anyway. What is important is that the sensitive placeholders in the application property file MUST be wrapped in DEC() parentheses, for example:

Activedirectory:
  password: DEC(supersecret)
OracleDB:
  password: DEC(alsosupersecret)

mvn jasypt:encrypt -Djasypt.encryptor.password=salting -Djasypt.plugin.path="file:src/main/resources/application.yml"

If the previous command completed successfully, then all sensitive data has been updated with its encrypted value. The updated properties should look something like:

Activedirectory:
  password: ENC(sFJDfdsfjjA8saT7YC65bsf71d0)
OracleDB:
  password: ENC(34jjfsdfds+fds/fsd7Hs)

Step 4. Add the encrypted key in the config file

If you have been using the latest approach, then the application.properties/application.yaml files have already been updated with the newly encrypted values. All sensitive data wrapped in DEC() is now encrypted, and all other strings in the configuration remain unchanged. If one of the other approaches was chosen, going one placeholder at a time or using the CLI, then we need to update the configuration file entries one by one. Either way, the properties need to be wrapped in ENC() parentheses, since the output of the CLI is only the encrypted value.

For the reverse process it's vice-versa: the first argument of the command is decrypt, and all placeholders must be wrapped in ENC() parentheses before execution.
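
Once the values are wrapped in ENC(), the application code consumes them exactly like plain properties; Jasypt decrypts them transparently when the property is resolved. A minimal sketch (the property key and class name are assumptions for illustration):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class OracleDbConfig {

    // resolved to the decrypted value of e.g. OracleDB.password: ENC(...)
    @Value("${OracleDB.password}")
    private String oracleDbPassword;

    public String getOracleDbPassword() {
        return oracleDbPassword;
    }
}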

Step 5. Run the application

That's it. Your Spring Boot project will automatically decrypt all sensitive data when you start the application; no additional configuration is needed. Let me know in the comments section how your experience was. Was it smooth, or are there some ongoing issues?

Deploy microservice on Kubernetes, step by step

Reading Time: 11 minutes

In this tutorial, I will try to explain step by step how you can set up Kubernetes, deploy your microservice on it, and check the result via the Kubernetes dashboard. All other things will be kept "as simple as possible". Google Cloud (gcloud) will be used as the cloud platform. We will cover the following aspects of the problem:

  1. Create microservice to be deployed
  2. Place the application in a Docker container
  3. What is Kubernetes and how to install it?
  4. Create a new Kubernetes project
  5. Create new Cluster
  6. Allow access from your local machine
  7. Create service account
  8. Activate service account
  9. Connect to cluster
  10. Gcloud initialization
  11. Generate access token
  12. Deploy and start Kubernetes dashboard
  13. Deploy microservice

Step 1: Create microservice to be deployed

Traditionally, in the programming world, everything starts with “Hello World”. So, as mentioned previously, to keep things simple, create a microservice that returns just “Hello World”. You can use https://start.spring.io/ for this goal. Create HelloController like this:

package com.example.demojooq.controllers;


import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/v1")
public class HelloController {

    @GetMapping("/say-hello")
    public String sayHello() {
        return "Hello world";
    }
}

Step 2: Place the application in a Docker container

We have a microservice; now we need to put it in a Docker container and upload it to Kubernetes. From that point, Kubernetes will orchestrate the container according to your settings. Let's create the first image from the microservice. As you might guess, the build file is called Dockerfile (without any extension), and its content is:

Dockerfile

FROM adoptopenjdk/openjdk11:jre-11.0.8_10-debianslim
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","app.jar"]

The next step is to create the docker-compose file, which will call the Dockerfile to build the image. You could build the image manually, but the best way is from the docker-compose file, as you then have a permanent record of the setup. This is a .yaml file (shown below).

docker-compose.yaml

version: "3"
services:
  hello-world:
    build: .
    ports:
      - "8099:8080"

After starting Docker, go to the folder where docker-compose.yaml is located and execute the command "docker-compose up". The expectation is to reach this microservice on port 8099. If everything is ok, your Docker will show something like this:

Check the dockerized microservice with Postman by calling http://localhost:8099/api/v1/say-hello. In the response, you get "Hello world".

Step 3: What is Kubernetes and how to install it?

What is Kubernetes?

Kubernetes is an open-source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications. What happens when you use Docker and your container fails? Probably the first thing you do is restart the container manually (if you don't have Kubernetes). Here comes Kubernetes: it observes that the container is down and automatically starts a new one. This is just a basic use case; please read more on the internet, there is plenty of information about this.

How to install Kubernetes?

Ok, until now you are sure that Kubernetes is needed, but where do you find it, what are the costs, and how do you install it? First of all, try "download Kubernetes" on Google… Pick the site https://kubernetes.io/docs/tasks/tools/… Options for Windows, Mac and Linux appear… different installations like kind, minikube, kubeadm… So, is it worth spending so much time setting up Kubernetes properly? You do not have to ask me; I agree with you, it is too much time. Fortunately, we can work around all that: go to Gcloud, where Kubernetes is offered as a service, and just use it. Somebody else takes care of the setup, and we can focus on the business logic in our microservice and use the out-of-the-box Kubernetes installation from Gcloud. Sounds good, doesn't it?

The last and most important question: money. Is it free? No, it is not. You have to pay for the Gcloud services, and here is the price list: https://cloud.google.com/kubernetes-engine/pricing. But for ordinary people like you and me, Gcloud offers a free account for 3 months and up to $300, which seems fair. It is enough time to learn about deploying microservices on Kubernetes; for any professional use in the future, the company should stay behind this. Here is the link where you can create your free cloud account: https://cloud.google.com/. One more thing: during the creation of the free account, Google will ask for your bank details in order to charge you automatically. But do not worry, you are safe for the first three months and below $300, and you will be asked for permission before any charge… So far my personal experience is positive, as Google keeps this promise. But the final decision is up to you.

Step 4: Create new Kubernetes project

Open up your Google account, sign in and go to the console.

Create a new project from the main dashboard; the name of the new project is “hello-world”. From now on, this is your active project.

Step 5: Create new cluster

Create a new cluster (name it cluster2). Accept the default values for the other fields.

Step 6: Allow access from your local machine

Now, we must allow access from our local machine to Kubernetes, via kubectl. For that purpose, we need to follow these steps:

  1. Click on cluster2
  2. Find your local IP address and add it here according to the CIDR standard in the Edit control plane authorized networks

Step 7: Create service account

Give the new service account the role "Owner". Accept the default values for the other fields. After the service account is created, you should have something like this:

Generate keys for this service account with key type JSON. When the key is downloaded, it has some random name like hello-world-315318-ab0c74d58a70.json. Keep this file in a safe place, we will need it later.

Now, install Google Cloud SDK Shell on your machine according to your OS. Let’s do the configuration so kubectl can reach cluster2. Copy the file hello-world-315318-ab0c74d58a70.json and put it in the CLOUD SDK folder. For the Windows environment, it looks like this:

Step 8: Activate service account

The first thing to do is to activate the service account with the command: gcloud auth activate-service-account hello-world-service-account@hello-world-315318.iam.gserviceaccount.com --key-file=hello-world-315318-ab0c74d58a70.json

Step 9: Connect to cluster

Now go to cluster2 again and find the connection string to connect to the new cluster

Execute this connection string in Google Cloud Shell: gcloud container clusters get-credentials cluster2 --zone us-central1-c --project hello-world-315318

Step 10: Gcloud initialization

The next command to execute is gcloud init, to initialize connection with the new project. Here is the complete code on how to do that from the Gcloud Shell:

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>gcloud init
Welcome! This command will take you through the configuration of gcloud.

Settings from your current configuration [dev] are:
accessibility:
  screen_reader: 'False'
compute:
  region: europe-west3
  zone: europe-west3-a
core:
  account: hello-world-service-account@hello-world-315318.iam.gserviceaccount.com
  disable_usage_reporting: 'True'
  project: dops-containers

Pick configuration to use:
 [1] Re-initialize this configuration [dev] with new settings
 [2] Create a new configuration
 [3] Switch to and re-initialize existing configuration: [database-connection]
 [4] Switch to and re-initialize existing configuration: [default]
Please enter your numeric choice:  2

Enter configuration name. Names start with a lower case letter and
contain only lower case letters a-z, digits 0-9, and hyphens '-':  hello-world
Your current configuration has been set to: [hello-world]

You can skip diagnostics next time by using the following flag:
  gcloud init --skip-diagnostics

Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).

Choose the account you would like to use to perform operations for
this configuration:
 [1] cicd-worker@devops-platform-n47.iam.gserviceaccount.com
 [2] d.trifunov74@gmail.com
 [3] dimche.trifunov@north-47.com
 [4] dtrifunov@lunar-sled-314616.iam.gserviceaccount.com
 [5] hello-world-service-account@hello-world-315318.iam.gserviceaccount.com
 [6] service-account-demo-dime@blissful-epoch-305214.iam.gserviceaccount.com
 [7] Log in with a new account
Please enter your numeric choice:  5

You are logged in as: [hello-world-service-account@hello-world-315318.iam.gserviceaccount.com].

API [cloudresourcemanager.googleapis.com] not enabled on project
[580325979968]. Would you like to enable and retry (this will take a
few minutes)? (y/N)?  y

Enabling service [cloudresourcemanager.googleapis.com] on project [580325979968]...
Operation "operations/acf.p2-580325979968-f1bf2515-deea-49d5-ae35-a0adfef9973e" finished successfully.
Pick cloud project to use:
 [1] hello-world-315318
 [2] Create a new project
Please enter numeric choice or text value (must exactly match list
item):  1

Your current project has been set to: [hello-world-315318].

Do you want to configure a default Compute Region and Zone? (Y/n)?  n

Error creating a default .boto configuration file. Please run [gsutil config -n] if you would like to create this file.
Your Google Cloud SDK is configured and ready to use!

* Commands that require authentication will use hello-world-service-account@hello-world-315318.iam.gserviceaccount.com by default
* Commands will reference project `hello-world-315318` by default
Run `gcloud help config` to learn how to change individual settings

This gcloud configuration is called [hello-world]. You can create additional configurations if you work with multiple accounts and/or projects.
Run `gcloud topic configurations` to learn more.

Some things to try next:

* Run `gcloud --help` to see the Cloud Platform services you can interact with. And run `gcloud help COMMAND` to get help on any gcloud command.
* Run `gcloud topic --help` to learn about advanced features of the SDK like arg files and output formatting

Step 11: Generate access token

Type kubectl get namespace; the access token is generated in the .kube folder (in your home folder), in the config file:

If you open this config file, you will find your access token. You will need this later.

Step 12: Deploy and start Kubernetes dashboard

Now, deploy Kubernetes dashboard with the next command: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>kubectl proxy
Starting to serve on 127.0.0.1:8001

Start the dashboard with kubectl proxy command. Now open the dashboard from the link: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default

In front of you, this screen will appear:

Now, you need the token from the config file that we spoke about a moment ago. Open the config file with Notepad (on Windows), find your access token, and copy from there and paste it in the Enter token* field. Be careful when you are copying token from the config file as there might be several tokens. You must choose yours (image below).

Finally, the stage is prepared to deploy microservice.

Step 13: Deploy microservice

Build the Docker image from the Dockerfile with the command: docker build -t docker2222/dimac:latest . (docker2222/dimac is my public Docker repository).
Push the image to Docker Hub with the command: docker image push docker2222/dimac:latest.
Execute kubectl apply -f k8s.yaml, where k8s.yaml is the file below:

---

apiVersion: v1
kind: Namespace
metadata:
  name: hello

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: hello
  annotations:
    buildNumber: "1.0"
spec:
  selector:
    matchLabels:
      app: hello-world
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
      annotations:
        buildNumber: "1.0"
    spec:
      containers:
        - name: hello-world
          image: docker2222/dimac:latest
          readinessProbe:
            httpGet:
              path: "/actuator/health/readiness"
              port: 8080
            initialDelaySeconds: 5
          ports:
            - containerPort: 8080
          env:
            - name: APPLICATION_VERSION
              value: "1.0"
---


apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: hello
spec:
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---

Last but not least, open the Kubernetes dashboard. If everything is OK, you should see your service.

ERC20 Token Smart Contract for Ethereum Blockchain

Reading Time: 5 minutes

Since the inception of blockchain technology, Bitcoin, Ethereum and other cryptocurrencies have been hot topics buzzing around the world. Many startups based on blockchain technologies use cryptocurrencies, in other words crypto tokens, for the utilization of their products. These crypto tokens can be deployed on many blockchains like Ethereum, Cardano, Binance, Polkadot, etc. On which blockchain such a token should be implemented is another topic of discussion, but as Ethereum was the first market mover, this blog post explains how you can create such a token on the Ethereum blockchain.

Before creating an Ethereum-based token (ERC20 token), let's first understand the basics of smart contracts and their native programming language, Solidity.

Smart Contract

A smart contract is simply a set of rules that contains the business logic or a protocol according to which all the transactions on a blockchain should happen. The general purpose of a smart contract is to satisfy common contractual conditions, like creating its own token, performing arbitrary computations, providing functions to send and receive tokens, and storing the states of transactions.

Solidity

Solidity is an object-oriented, high-level smart-contract programming language that targets the Ethereum Virtual Machine (EVM). The Solidity compiler converts smart-contract code into EVM bytecode, which is sent to the Ethereum network as a deployment transaction. A good understanding of the Solidity programming language is needed to write an Ethereum smart contract efficiently and to build an application on top of it.

Coding example of smart-contract

This section contains the example of a smart-contract code written using the Solidity programming language.

Prerequisite

Integrated development environment (IDE)

Remix is used as the IDE. It is a web-based IDE with built-in static analysis and a testnet EVM. Remix provides the possibility to compile the contract and deploy it to an Ethereum testnet with MetaMask. Here is a good blog post about it.

There are also other web-based IDEs available, like EthFiddle. For more information related to IDEs, please visit here.

Programming Language

Solidity

ERC20 Token Info
  • Symbol – N47
  • Name – N47Token
  • Decimals – 0
  • Total Supply – 1000000
Smart-contract Code
// SPDX-License-Identifier: unlicensed
pragma solidity 0.8.4;
// ----------------------------------------------------------------------------
// Safe maths
// ----------------------------------------------------------------------------
contract SafeMath {
    function safeAdd(uint a, uint b) public pure returns (uint c) {
        c = a + b;
        require(c >= a);
    }
    function safeSub(uint a, uint b) public pure returns (uint c) {
        require(b <= a);
        c = a - b;
    }
}
// ----------------------------------------------------------------------------
// ERC Token Standard #20 Interface
// https://github.com/ethereum/EIPs/blob/master/EIPS/eip-20.md
// ----------------------------------------------------------------------------
abstract contract ERC20Interface {
    function totalSupply() virtual public view returns (uint);
    function balanceOf(address tokenOwner) virtual public view returns (uint balance);
    function allowance(address tokenOwner, address spender) virtual public view returns (uint remaining);
    function transfer(address to, uint tokens) virtual public returns (bool success);
    function approve(address spender, uint tokens) virtual public returns (bool success);
    function transferFrom(address from, address to, uint tokens) virtual public returns (bool success);
    event Transfer(address indexed from, address indexed to, uint tokens);
    event Approval(address indexed tokenOwner, address indexed spender, uint tokens);
}
// ----------------------------------------------------------------------------
// ERC20 Token, with the addition of symbol, name and decimals
// assisted token transfers
// ----------------------------------------------------------------------------
contract N47Token is ERC20Interface, SafeMath {
    string public symbol;
    string public  name;
    uint8 public decimals;
    uint public _totalSupply;
    mapping(address => uint) balances;
    mapping(address => mapping(address => uint)) allowed;
    // ------------------------------------------------------------------------
    // Constructor
    // ------------------------------------------------------------------------
    constructor() {
        symbol = "N47";
        name = "N47Token";
        decimals = 0;
        _totalSupply = 1000000;
        balances[msg.sender] = _totalSupply;
        emit Transfer(address(0), msg.sender, _totalSupply);
    }
    // ------------------------------------------------------------------------
    // Total supply
    // ------------------------------------------------------------------------
    function totalSupply() public override view returns (uint) {
        return _totalSupply - balances[address(0)];
    }
    // ------------------------------------------------------------------------
    // Get the token balance for account tokenOwner
    // ------------------------------------------------------------------------
    function balanceOf(address tokenOwner) public override view returns (uint balance) {
        return balances[tokenOwner];
    }
    // ------------------------------------------------------------------------
    // Transfer the balance from token owner's account to receiver account
    // - Owner's account must have sufficient balance to transfer
    // - 0 value transfers are allowed
    // ------------------------------------------------------------------------
    function transfer(address receiver, uint tokens) public override returns (bool success) {
        balances[msg.sender] = safeSub(balances[msg.sender], tokens);
        balances[receiver] = safeAdd(balances[receiver], tokens);
        emit Transfer(msg.sender, receiver, tokens);
        return true;
    }
    // ------------------------------------------------------------------------
    // Token owner can approve for spender to transferFrom(...) tokens
    // from the token owner's account
    //
    // https://github.com/ethereum/EIPs/blob/master/EIPS/eip-20.md
    // recommends that there are no checks for the approval double-spend attack
    // as this should be implemented in user interfaces 
    // ------------------------------------------------------------------------
    function approve(address spender, uint tokens) public override returns (bool success) {
        allowed[msg.sender][spender] = tokens;
        emit Approval(msg.sender, spender, tokens);
        return true;
    }
    // ------------------------------------------------------------------------
    // Transfer tokens from sender account to receiver account
    // 
    // The calling account must already have sufficient tokens approve(...)-d
    // for spending from sender account and
    // - From account must have sufficient balance to transfer
    // - Spender must have sufficient allowance to transfer
    // - 0 value transfers are allowed
    // ------------------------------------------------------------------------
    function transferFrom(address sender, address receiver, uint tokens) public override returns (bool success) {
        balances[sender] = safeSub(balances[sender], tokens);
        allowed[sender][msg.sender] = safeSub(allowed[sender][msg.sender], tokens);
        balances[receiver] = safeAdd(balances[receiver], tokens);
        emit Transfer(sender, receiver, tokens);
        return true;
    }
    // ------------------------------------------------------------------------
    // Returns the amount of tokens approved by the owner that can be
    // transferred to the spender's account
    // ------------------------------------------------------------------------
    function allowance(address tokenOwner, address spender) public override view returns (uint remaining) {
        return allowed[tokenOwner][spender];
    }
}

Using the above code, the smart contract can be deployed on the Ethereum mainnet or a testnet. Deploying a smart contract is technically a transaction, for which you need to pay gas (fees) in ETH (the native token of the Ethereum network), in the same way you pay gas for a simple ETH transfer. However, gas costs for contract deployment are far higher.

** To create another token, simply change the values set in the constructor of the smart contract (symbol, name, decimals, and total supply).

Summary

Blockchain technology goes far deeper than tokens and smart contracts. There are many technical aspects like consensus, blocks, wallets, transactions, decentralization, mining, etc. The goal of this article was just to provide an overview of smart-contract creation. Feel free to write down your valuable comments.

Infinite UITableView Scroll – Special Case

Reading Time: 6 minutes

When we are working with large data sets, loading all the available items and trying to display everything at once causes a big delay and the app does not work smoothly. The solution in cases like this is a combination of a back-end and an in-app solution. We should adjust the BE to work with paginated responses: the BE should give us only chunks of data, and we should control the size of these chunks from the app by sending the batch size. I researched this topic on the net and there are solutions, but all of them go in one direction: using pagination, but every time starting from page 1 and loading the next pages after that. One of the best things was discovering the iOS prefetching protocol, which I had never used before. This protocol is a piece of this solution.

Prerequisites

From the BE we need at least two APIs:

The first one is an API that returns all the necessary data: the starting page, optionally which element from the starting page should be focused, and the total number of elements in the database. Why is the total number of elements needed? Because if we don't know it, we won't know how many rows to present. Things will become clearer once we start coding. The suggested JSON response should look like this:

{
	"total_elements": 480,
	"data": [
		{
			"id": 1,
			"name": "Test1"
		},
		{
			"id": 3,
			"name": "Test1"
		}
		//elements from 60..89

	],
	"first_page": 3,
	"per_page": 30
}

The second API returns the data for a page number that we send as an argument. An example JSON response is provided below:

{
	"data": [
		{
			"id": 1,
			"name": "Test1"
		},
		{
			"id": 3,
			"name": "Test1"
		}
		//elements from 1..29

	],
	"page_number": 1,
	"per_page": 30
}

Solution explained

I read a lot of articles about infinite scrolling UITableViews, but none of them solves my special case: an option to start in the middle and, optionally, to focus on a particular row of the table inside that page. Here is how I solved this issue:

First, I define a few variables in the code: some static integer values, namely the batch size (number of elements per page); the start page value, which is variable and will be fetched from the BE; and the total number of elements, a variable that will also be fetched from the BE:

In my example, I will work with a list of integer values instead of using some objects, but this could be easily adjusted with any kind of objects/models.

Also, I will use a list of used pages to keep track of the pages already fetched from the BE. There is also one useful boolean flag, "isNewDataLoading", which will prevent us from calling the BE if the previous BE call has not finished.

    let batchSize: Int = 30
    let totalElements: Int = 480
    var startPage: Int = 5
    var elements: [Int?] = []
    var isNewDataLoading: Bool = false
    var loadedPages = [Int]()

The first call is the initial load method. Here, I will call the BE API that will return all the necessary data to pre-populate the code variables:

    func initialLoad() {
                
        for _ in 0..<totalElements {
            elements.append(.none)
        }
        
        loadedPages.append(startPage)
        
        for value in startPage*batchSize..<(startPage+1)*batchSize {
            elements[value] = value
        }
        
        mainTableView.reloadData()
        let toIndex = startPage*batchSize + ((startPage+1)*batchSize - startPage*batchSize)/2
        mainTableView.scrollToRow(at: IndexPath(row: toIndex, section: 0), at: .middle, animated: true)
    }

After the initial loading is done we have to explain the UITableView data source methods.

The "cellForRow" method has simple logic: if the value for a cell has not been fetched yet, the cell shows a spinner (UIActivityIndicator); if the value for the cell is already loaded, we show it in a text label (UILabel):

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath) as! TestTableViewCell
        if isLoadingCell(for: indexPath) {
            cell.configure(with: .none)
        } else {
            cell.configure(with: elements[indexPath.row])
        }
        return cell
    }

    // Helper used here and in the prefetching delegate below. It is not shown in the
    // original post; an assumed implementation is that a cell is still "loading" when
    // its value has not been fetched from the BE yet.
    func isLoadingCell(for indexPath: IndexPath) -> Bool {
        return elements[indexPath.row] == nil
    }

The Magic

Historically all of the older solutions use the UIScrollView delegate methods and inspect the current y-offset of the table. If the user reaches the maximum y-offset the API is called with the next page.

I did some research on the topic and realized that the prefetching protocol should be useful in this situation. Some of the solutions on the net use the prefetch protocol, but it needs some modifications if we want our code to work with different starting pages. Let's see how it looks in the code:

    func tableView(_ tableView: UITableView, prefetchRowsAt indexPaths: [IndexPath]) {
        if indexPaths.contains(where: isLoadingCell) {
            
            let index = indexPaths[0]
            let page = index.row/batchSize
            if !loadedPages.contains(page) {
                //fetch new
                DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 2.0) {
                    self.getNewData(page: page)
                }
            }
        }
    }

Since iOS 10 Apple introduced the API for prefetching data in UITableView and UICollectionView. More details about the prefetching protocol can be found on the Apple Developer site: https://developer.apple.com/documentation/uikit/uitableviewdatasourceprefetching

A little explanation of the code above: if the IndexPath of the cell that should be prefetched points to an element that is not yet downloaded, we use the row value of the IndexPath to determine the page that the index path belongs to. If this page is not downloaded yet, we download it. The code for downloading a new page is shown below:

    func getNewData(page: Int) {
                        
        if isNewDataLoading {
            return
        }
        
        isNewDataLoading = true
        
        var temp: [Int] = [Int]()
        for num in page*batchSize..<(page+1)*batchSize {
            elements[num] = num
            temp.append(num)
        }
        
        loadedPages.append(page)
        
        let indexes: [IndexPath] = temp.map {
            return IndexPath(row: $0, section: 0)
        }
        
        mainTableView.reloadRows(at: indexes, with: .none)
        isNewDataLoading = false
    }

What is important in this method? The most important thing is to add the page to the list of already loaded pages; the second is to reload only the rows in the table that belong to the actual page.

The full code can be downloaded by clicking on the “Download” button below:

Feel free to add your comments or suggestions.

COVID-19's effects on the future of conferences

Reading Time: 3 minutes

First, I would like to share a personal experience from last year. I was supposed to attend a conference called Voxxed Days in Zurich in March, because the company I work for was a sponsor. By the time spring rolled around, the organizers had postponed the conference to September 2020. Later, we got an email that the conference was cancelled due to the pandemic. COVID-19 has provided an opportunity to rethink conferences: virtual meetings have become the norm, and more and more meetings are moving online – a trend likely to continue even after the pandemic fades. These virtual meetings and conferences have some advantages and disadvantages that we are going to talk about.

Disadvantages

Online meetings might lack many of the benefits of an in-person conference: conversations over dinner; the after-conference party; face-to-face networking; fresh perspectives that can come from simply leaving one’s home ground.

We all know that networking plays a big role at physical conferences: the ability to speak directly to the speakers or other attendees. Although some organisers are trying to replicate as much as they can through online events, that intangible element of being energised around others is much harder to capture when people aren't physically gathered.

Online benefits

Meeting online, whether it's for a conference, study section, or worldwide lab gathering, works better than expected, and it's more convenient and affordable, partly because the in-person extras such as dinners and drinks, which drive up the ticket price, are missing. Another benefit of meeting virtually is how many more people can access the conference. High travel costs and border restrictions are eliminated from the equation when it comes to virtual meetings or conferences. Not everyone can travel, and certainly not everyone should; many of us, particularly in the Global North, need to travel much less.

N47 webinars

Being and working together, particularly in smaller workshop settings, is an invaluable way to generate new ideas and connections in many fields and professional settings. For this purpose, our company N47 started to organize webinars. The first one was held on the 10th of December 2020.

AWS Landing Zone – Best practices for multi-account AWS environment

The second one will be held on the 11th of March 2021. Have a look at these webinars and stay tuned for new ones.

Infrastructure as code with Terraform, Bitbucket and Azure – How N47 deploys their current projects to a productive Azure environment


Conclusion

In the end, I would like to share some thoughts on this topic. I think the changes brought on by COVID are here to stay. However, I do not expect online conferences to run perfectly, especially since the shift online was unexpected and hastily planned. The future of conferences and meetings has evolved, becoming more accessible and more inclusive. But what about physical conferences once they return? Personally, I hope they come back as soon as possible, for all the reasons mentioned in the disadvantages of virtual conferences.

Spring Boot Internationalization using Resource Bundles

Reading Time: 3 minutes

Implementing Spring Boot internationalization can be easily achieved using Resource Bundles. I will show you a code example of how you can implement it in your projects.

Let’s create a simple Spring Boot application from start.spring.io.

The first step is to create a resource bundle (a set of properties files with the same base name and language suffix) in the resources package.

I will create properties files with base name texts and only one key greeting:

  • texts_en.properties
  • texts_de.properties
  • texts_it.properties
  • texts_fr.properties

In each of those files I will add the value "Hello World !!!" and its translation into the respective language. I was using Google Translate, so please do not judge me if something is wrong :).

After that, I will add some simple YAML configuration in the application.yml file, which I will use later.

server:
  port: 7000

application:
  translation:
    properties:
      baseName: texts
      defaultLocale: de

Now, let's create the configuration. I will create two beans, a LocaleResolver and a ResourceBundleMessageSource. Let's explain both of them.

With the LocaleResolver interface we define which implementation we are going to use. For this example I chose the AcceptHeaderLocaleResolver implementation, which means that the language value must be provided via the Accept-Language header.

@Bean
    public LocaleResolver localeResolver() {
        AcceptHeaderLocaleResolver acceptHeaderLocaleResolver = new AcceptHeaderLocaleResolver();
        acceptHeaderLocaleResolver.setDefaultLocale(new Locale(defaultLocale));
        return acceptHeaderLocaleResolver;
    }

With ResourceBundleMessageSource we are defining which bundle we are going to use in the Translator component (I will create it later 🙂 ).

@Bean(name = "textsResourceBundleMessageSource")
    public ResourceBundleMessageSource messageSource() {
        ResourceBundleMessageSource rs = new ResourceBundleMessageSource();
        rs.setBasename(propertiesBasename);
        rs.setDefaultEncoding("UTF-8");
        rs.setUseCodeAsDefaultMessage(true);
        return rs;
    }
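
Both beans reference the propertiesBasename and defaultLocale fields. One way to wire them from the application.yml shown earlier is with @Value in the configuration class. Here is a minimal sketch (the class name and the exact wiring are illustrative, not taken from the article):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TranslationConfiguration {

    // Values come from the application.yml shown above
    @Value("${application.translation.properties.baseName}")
    private String propertiesBasename;

    @Value("${application.translation.properties.defaultLocale}")
    private String defaultLocale;

    // the localeResolver() and messageSource() beans from above live in this class
}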

Now, let's create the Translator component. In this component, I will create only one method, toLocale. In that method, I will fetch the Locale from the LocaleContextHolder and take the translation from the resource bundle.

@Component
public class Translator {

    private static ResourceBundleMessageSource messageSource;

    public Translator(@Qualifier("textsResourceBundleMessageSource") ResourceBundleMessageSource messageSource) {
        this.messageSource = messageSource;
    }

    public static String toLocale(String code) {
        Locale locale = LocaleContextHolder.getLocale();
        return messageSource.getMessage(code, null, locale);
    }
}

That’s all the configuration we need for this feature. Now, let’s create Controller, Service and TranslatorCodes Util classes so we can test the APIs.

@RestController
@RequestMapping("/index")
public class IndexController {

    private final TranslationService translationService;

    public IndexController(TranslationService translationService) {
        this.translationService = translationService;
    }

    @GetMapping("/translate")
    public ResponseEntity<String> getTranslation() {
        String translation = translationService.translate();
        return ResponseEntity.ok(translation);
    }
}
@Service
public class TranslationService {

    public String translate() {
        return toLocale(GREETINGS);
    }
}
public class TranslatorCode {
    public static final String GREETINGS = "greetings";
}

Now, you can start the application. After the application is started successfully, you can start making API calls.

Here is an example of an API call that you can run as a cURL command:

curl --location --request GET "localhost:7000/index/translate" --header "Accept-Language: en"

These are some of the responses from the calls I made:

You can change the default behaviour, add some protection, or add multiple resource bundles; you are not limited in how you use this feature.

Download the source code

This project is available on our BitBucket repository. Feel free to fix any mistakes and to leave a comment here if you have any questions or feedback.

https://bitbucket.org/n47/spring-boot-internationalization/src/master/

The practical guide – Part 3: Clean Architecture

Reading Time: 7 minutes

The practical guide – Part 1: Refactor android application to follow the MVP design pattern
The practical guide – Part 2: MVP -> MVVM

We know what design patterns are and how to implement them. But, if we want to have a more robust, scalable, maintainable, and testable application we have to do more. Today we will learn how to implement Clean Architecture proposed by Robert C. Martin (Uncle Bob).

Why is Architecture important?

Architecture is important for managing the complexity of the project. So, for a small project we may not need one, but for big ones it is a lifesaver. It makes the project easier to maintain, scale and test. It also makes the project more organized, so everyone can understand what the project is about just by looking at the classes.

Clean Architecture Introduction

Design patterns tell us how to separate the presentation from the manipulation of the data. Clean Architecture (like any other architecture) goes a little deeper and shows us how we should separate the manipulation of the data itself. Clean Architecture has only a few layers, and each layer has its own responsibilities.

All credit for the image goes to Uncle Bob.

You have probably seen this image. It tells us how the layers are organized. As we can see, there are outer layers and inner layers. That is important because there are a few rules that we have to follow:

  • Every layer can communicate only with the inner layers. So, the layers don’t know anything about the outer layers. The dependencies are provided by outer layers with Dependency Injection (hopefully I will make a post about this).
  • The most inner layer is the most abstract, and the most outer layer is the most concrete. This means that inner layers should contain business logic and outer layers should contain the implementation.

You may have noticed that I mentioned a few layers and not an exact number. That is because the clean architecture doesn’t define an exact number of layers. You can adapt the number of layers to your needs.

The flow

  • View – The responsibility for the view stays the same as specified in the design patterns. Its only responsibilities are to display the data and react to user actions.
  • View Model/Presenter – Also, specified in the design patterns, their responsibility is to pass the data between the view and the model. But, instead of making the network/database calls, it uses the Use Cases for it. So, these classes shouldn’t know, where the data comes from, or where it goes. It just passes the data between the Use Cases and the Views.
  • Use Case (or Interactor) – These are the actions that the users can trigger. It contains the data that the action needs for it to be completed, and calls the repository to do the action. It can decide on which thread the action should be done, and on which thread the result should be posted.
  • Repository – The responsibility of the repository is to decide which data source should handle the data. For every Use Case, there should be a method in the repository.
  • Data Source – There may be a few data sources per application, like Network, Database, Cache etc. Each one contains the actual implementation of that data source.

Between every layer, we can have Mapper classes since the data can differ between layers. For example, we would like to store different data in a database from the one that we get from the network.

The implementation

Let’s start with the data sources. We will create two data source interfaces: RemoteDataSource and LocalDataSource.

interface RemoteDataSource {
  fun getQuotes(): Call<List<Quote>>
}

class RemoteDataSourceImplementation(private val api: QuotesApi) : RemoteDataSource {
  override fun getQuotes(): Call<List<Quote>> = api.quotes
}
interface LocalDataSource {
  fun insertQuotes(quotes: List<Quote>)
  fun getQuotes(): List<Quote>
}

class LocalDataSourceImplementation(private val quoteDao: QuoteDao) : LocalDataSource {
  override fun insertQuotes(quotes: List<Quote>) {
    quoteDao.insertQuotes(quotes)
  }

  override fun getQuotes(): List<Quote> = quoteDao.allQuotes
}

As you can see, we added only one method in RemoteDataSource, just for getting the quotes, but there are two methods in LocalDataSource since we have to store the quotes in the database after getting them from the remote source. The very important thing to notice here is that we are not creating the DAO and API objects ourselves; we are asking for them to be provided in the constructor (Dependency Injection). This will enable us to easily switch to different network or database libraries without having to change anything here.

Let’s continue with the repository. We said that its responsibility is to decide where the data will come from. In our case, we want to return the quotes from the network, but if something fails we want to display the quotes from the database.

interface QuotesRepository {
  fun getQuotes(): List<Quote>
}

class QuotesRepositoryImplementation(
  private val localDataSource: LocalDataSource,
  private val remoteDataSource: RemoteDataSource
) : QuotesRepository {
  override fun getQuotes(): List<Quote> {
    return try {
      val response = remoteDataSource.getQuotes().execute()
      if (response.isSuccessful) {
        val quotes = response.body() ?: return localDataSource.getQuotes()
        localDataSource.insertQuotes(quotes)
        return quotes
      }
      localDataSource.getQuotes()
    } catch (e: Exception) {
      localDataSource.getQuotes()
    }
  }
}

The logic is deliberately simple, just for the sake of the example. Also very important: we are depending on the interfaces, not on the actual implementations.

Let’s move to the use case. Here we will use the repository and we will switch between threads. We will use Kotlin coroutines, but you can use anything. If you are working with RxJava, here you will specify the Schedulers.

class GetQuotesUseCase(private val quotesRepository: QuotesRepository) {
  fun getQuotes(onResult: (List<Quote>) -> Unit = {}) {
    GlobalScope.launch(Dispatchers.IO) {
      val response = quotesRepository.getQuotes()
      GlobalScope.launch(Dispatchers.Main.immediate) {
        onResult(response)
      }
    }
  }
}

Usually, there is a base UseCase class, where you abstract the logic for threading. For simplicity, I skipped the error handling.

Last, we can clean up the ViewModel. I also converted it to Kotlin, and now it looks like this:

class MainViewModel : ViewModel() {
  lateinit var getQuotesUseCase: GetQuotesUseCase

  var quotesLiveData = MutableLiveData<List<Quote>>()

  fun getAllQuotes() {
    getQuotesUseCase.getQuotes { quotes: List<Quote> ->
      quotesLiveData.postValue(quotes)
    }
  }
}

I won’t explain anything here, I will just let you admire. Just kidding! You must be asking how we create getQuotesUseCase. But that is a story for another time. For now, I will create a class DependencyProvider, and I will provide everything there. You don’t have to worry about this for now.

And that’s it. Now we have an application that follows Clean Architecture guidelines. Here is the link for the project.

Final Notes

Now that our project follows Clean Architecture guidelines we can do many things. We can easily change frameworks and libraries with as few changes as possible (just changes in the DependencyProvider and everything will work). We can organize the application better with many repositories and many data sources. The project will be easy to understand, just by looking at the use cases. Testing of the application will be very easy (hopefully I will make a post about that, too).

I hope that this post will help you understand how Clean Architecture works in practice. If you have any questions or need any help don’t hesitate to ask. Happy Coding!

Create an admin panel with Node.js and AdminBro

Reading Time: 4 minutes

What's great about Node.js is the huge ecosystem of useful packages. For example, AdminBro is a package for creating admin interfaces that can be plugged into your application.

You provide database models or schemas (like blog posts, user comments, etc.) and AdminBro generates the user interface for you. You can manage content through this user interface and talk straight to your database.

What is AdminBro?

AdminBro is an open-source package from Software Brothers that adds an auto-generated admin panel to your Node.js application.

You can connect your various databases to the admin interface and perform standard CRUD operations (create, read, update, delete) on the records. This greatly simplifies and extends your ability to find, monitor, and update your app data across multiple sources.

Creating the admin panel

First of all, we need to create a new folder that will hold our app. Then we will open the terminal in that folder and run:

npm init

Go through all the steps to initialize the Node.js application, and a package.json file will be created:

Then, we need to install some dependencies: the express and express-formidable packages. express-formidable is a peer dependency of AdminBro:

npm install express express-formidable

Then we can install the AdminBro and the AdminBro express plug-in:

npm install admin-bro @admin-bro/express

Now we will create a new file index.js and inside we can create an express router that will handle all AdminBro routes:

const AdminBro = require('admin-bro')
const AdminBroExpress = require('@admin-bro/express')

const express = require('express')
const app = express()

const adminBro = new AdminBro({
    databases: [],
    rootPath: '/admin',
})

const router = AdminBroExpress.buildRouter (adminBro)

The next step is to set up the router as middleware using the Express.js app object:

app.use(adminBro.options.rootPath, router)
  
app.listen(3000, ()=> {
  console.log('Application is up and running under localhost:3000/admin')
})

And that’s it! You successfully set up the dashboard! Run:

node index.js

Go ahead and head over to http://localhost:3000/admin. The dashboard should be ready and working.

Installing the Database Adapter and Adding Resources

AdminBro can be connected to many different types of resources. To add resources to AdminBro, you first have to register an adapter for the resource you want to use. Let's go with the mongoose solution for now and install the required dependencies:

npm install mongoose @admin-bro/mongoose

Then we register the adapter so that it can be used in our project:

const AdminBroMongoose = require('@admin-bro/mongoose')

AdminBro.registerAdapter(AdminBroMongoose)

Now we can make a connection to the database and pass the resources:

const mongoose = require('mongoose')

const connection = await mongoose.connect('mongodb://localhost:27017/users', {useNewUrlParser: true, useUnifiedTopology: true})

const User = mongoose.model('User', { name: String, email: String, surname: String })

const adminBro = new AdminBro({
  databases: [connection],
  rootPath: '/admin',
  resources: [User]
})

Finishing up

Now let's put it all together; our index.js should look like this:

const AdminBro = require('admin-bro')
const AdminBroExpress = require('@admin-bro/express')
const AdminBroMongoose = require('@admin-bro/mongoose')

const express = require('express')
const app = express()

const mongoose = require('mongoose')

AdminBro.registerAdapter(AdminBroMongoose)

const run = async () => {
  const connection = await mongoose.connect('mongodb://localhost:27017/users', {useNewUrlParser: true, useUnifiedTopology: true})

  const User = mongoose.model('User', { name: String, email: String, surname: String })

  const adminBro = new AdminBro({
    databases: [connection],
    rootPath: '/admin',
    resources: [User]
  })
  const router = AdminBroExpress.buildRouter(adminBro)
  app.use(adminBro.options.rootPath, router)
    
  app.listen(3000, ()=> {
    console.log('Application is up and running under localhost:3000/admin')
  })
}

run()

At this point, we have basically built the admin interface. To verify that we have done everything correctly, first make sure your database is up and then re-run the server:

node index.js

Go to http://localhost:3000/admin and on the left side you can see your first model:

Summary

These are the basic steps to create an admin panel from scratch with Node.js and AdminBro. You can go deeper, you can customize your panel resources and widgets, add validation to the fields, configure role-based access control and much more. Any questions? You can check out the AdminBro docs for more details.

Network printing with CUPS from Docker

Reading Time: 7 minutes

Quite often, there is a need to automate a specific process. In this case, a client had a manual process in place where people printed specific types of documents at certain intervals. There was some room for human error: people forgetting to print something, people not being able to print everything on time, people printing the same documents twice, etc. The human task of manually printing documents can be automated on the application level, by creating a scheduled task that prints documents on a network printer. In order to achieve that, we came up with this…

The solution is a containerized CUPS server with the appropriate drivers and printer configuration. We had to create a new Docker image with CUPS, which serves as the CUPS server, then get the correct drivers for the printer (since we are going to use a network printer) and make the appropriate printer configuration. Let's get to know CUPS a bit better before we go into the actual implementation.

What is CUPS?

CUPS is a modular printing system for Unix-like computer operating systems which allows a computer to act as a print server. A computer running CUPS is a host which can accept print jobs from client computers, process them, and send them to the appropriate printer. CUPS uses the Internet Printing Protocol (IPP) as the basis for managing print jobs and queues. CUPS is free software, provided under the Apache License.

How does it work?

The initial step requires a queue that keeps track of the printer's status. When you print to a printer, CUPS creates a queue for tracking the printer status and any pages you have printed. A queue can point to a local printer connected via USB, but it can also point to a network printer or maybe even many printers on the internet. Where the printer resides doesn't matter; the queue is independent of this fact and looks the same in any given printer environment.

Every time you print something, CUPS creates a print job, which consists of the destination queue, the names of the documents and their page descriptions. Jobs are numbered (queue-1, queue-2, etc.) so you can track a job as it is printed, or cancel it. CUPS is deterministic: when it gets a job for printing, it determines the best filters, printer drivers, port monitors and backends to convert the pages into a printable format, and then runs them to actually print the job. After the print job is completed, the job is removed from the queue and CUPS moves on to the next one. Notifications are also available when the job is finished or when errors occur during printing; there are multiple ways to get notified about the outcome.

Ready, steady, Docker run

Let's containerize first. The initial step is to set the base Docker image. For this Dockerfile we decided to go with the CentOS Linux distribution (by Red Hat), since it provides the cups packages from the regular repository. Other distributions might require premium repositories for the cups packages to be available. The entry instruction specifies the base image:

FROM centos:8

The next and more important step is installing the packages cups and cups-filters. The first one, cups, provides the actual printing system backend, filters and other software, whereas cups-filters is a required package for using printer drivers. With dnf (the "dandified yum") we update the system and install the necessary dependencies:

RUN dnf update -y && \
	dnf install -y cups cups-filters openssl

# Install OpenJDK java 11
RUN dnf install -y java-11-openjdk && \
	dnf clean all && \
    rm -rf /var/cache/dnf

RUN java -version

ENV JAVA_HOME="/usr/lib/jvm/jre" \
    JAVA_VENDOR="openjdk" \
    JAVA_VERSION="11.0"

With that, the JDK is available, and we can confirm it by running java -version.

Next follows the configuration of the cups server. This is done in the file named cupsd.conf, which resides in the /etc/cups directory of the image. A good practice here would be to create a copy of the original file. In the cupsd.conf file, each line can be a configuration directive, a blank line or a comment. Directive names and values are case-insensitive; comments start with a # character.

We patched two top-level directives: DefaultEncryption is set to IfRequested, so encryption is only enabled when it is requested, and Listen gets the value 0.0.0.0:631 in order to allow all incoming connections.

RUN sed -e '0,/^</s//DefaultEncryption IfRequested\n&/' -i /etc/cups/cupsd.conf
RUN sed -i 's/Listen.*/Listen 0.0.0.0:631/g' /etc/cups/cupsd.conf

Allow the cups service to be reachable:

RUN /usr/sbin/cupsd \
  && while [ ! -f /var/run/cups/cupsd.pid ]; do sleep 1; done \
  && cupsctl --remote-admin --remote-any --share-printers \
  && kill $(cat /var/run/cups/cupsd.pid)

After the service setup is done, the network printer and its driver configuration follow. In our scenario, we used a Ricoh C5500 printer. A good resource for finding appropriate driver files for printers is: https://www.openprinting.org/

COPY conf/printers.conf /etc/cups/printers.conf
COPY conf/ricoh-c5500-postscript.ppd /etc/cups/ppd/ricoh-printer.ppd
COPY examples/accident-report.pdf /tmp/accident-report.pdf

A bit more general info on printer drivers: a PostScript printer driver consists of a PostScript Printer Description (PPD) file that describes the features and capabilities of the device, filter programs that prepare print data for the device, and support files for colour management, online help, etc. These PPD files reference all of the filters and support files used by the driver, meaning they describe all of the features the driver provides. Every time a user prints something, the scheduler program (the cupsd service) first determines the format of the print job and the programs required to convert that job into something the printer can understand and execute. CUPS also includes filter programs for many common formats, for example to convert PDF files into device-dependent/independent PostScript. All printer-specific configuration, such as the IP address of the printer, should be done in the printers.conf file.

Last but not least, we need to start the CUPS service:

CMD ["/usr/sbin/cupsd", "-f"]

Now everything is in place on the Docker side. But the print job still needs to be triggered somehow. That brings us to the final step: creating a client in the application mid-layer which sets off a print job, while the CUPS server takes care of the rest.

CUPS4J

For our solution, we used cups4j, a Java library available in the Maven Central repository. Basic usage of cups4j requires:

  • Setting up a CupsClient
  • Fetching an actual file
  • Creating a print job for that file
  • Printing (triggers the print job)

We also implemented a scheduler which triggers this job weekly, meaning all documents are sent to the print queue once a week (a sketch of such a scheduled trigger is shown after the snippet below). If we want to specify a custom host, we need to provide its IP address and the appropriate port number.

CupsClient cupsClient = new CupsClient("127.0.0.1", 631);
CupsPrinter cupsPrinter = cupsClient.getDefaultPrinter();
InputStream inputStream = new FileInputStream("test-file.pdf");
PrintJob printJob = new PrintJob.Builder(inputStream).build();
PrintRequestResult printRequestResult = cupsPrinter.print(printJob);
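
A weekly trigger like the one mentioned above could be sketched with Spring's @Scheduled annotation. This is only an illustration: the cron expression, the file path and the assumption that scheduling is enabled via @EnableScheduling are not taken from the actual project.

import org.cups4j.CupsClient;
import org.cups4j.CupsPrinter;
import org.cups4j.PrintJob;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

@Component
public class WeeklyPrintJob {

    // Hypothetical schedule: every Monday at 06:00
    @Scheduled(cron = "0 0 6 * * MON")
    public void printWeeklyDocuments() throws Exception {
        CupsClient cupsClient = new CupsClient("127.0.0.1", 631);
        CupsPrinter cupsPrinter = cupsClient.getDefaultPrinter();
        try (InputStream inputStream = Files.newInputStream(Paths.get("/tmp/accident-report.pdf"))) {
            PrintJob printJob = new PrintJob.Builder(inputStream).build();
            cupsPrinter.print(printJob);
        }
    }
}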

Summary

We managed to create a dockerized solution step by step. First, we created an image that runs a CUPS server, configured for a specific network printer. The printer then waits for a print job to be triggered by a client. As the client, we created a simple cups4j client which raises the print job, meaning all CUPS-related configuration is done in Docker and the client only triggers the print job.

Core features of next-generation JavaScript

Reading Time: 4 minutes

Since we are working with modern frameworks and libraries, it is strongly recommended to use the next-generation JavaScript features. Let's take an overview of the most used features together.

As you know, we no longer use var; instead we use let or const, depending on the scenario.

Let

let declares a block-scoped local variable, which limits the variable to the scope of the enclosing block statement.

Example:

function varTest() {
  var x = 1;
  {
    var x = 2;  // same variable!
    console.log(x);  // 2
  }
  console.log(x);  // 2
}

function letTest() {
  let x = 1;
  {
    let x = 2;  // different variable
    console.log(x);  // 2
  }
  console.log(x);  // 1
}

Const

Similar to let, const is also block-scoped, but the difference is that a constant cannot be reassigned or redeclared, so we can think of it as a read-only binding. What is interesting is that the value itself can still be mutable: direct reassignment is not allowed, but changing object properties is.

const ctest = 1;
ctest = 2;	// results with error

const cobj = {
    name: "Dimitar"
}
cobj.name = "Test"; // no errors

Arrow functions

Arrow functions are an alternative to the standard, normal functions, giving us a shorter syntax and a different behaviour of this:

  • When calling normal functions, this refers to the object that calls the function
  • When calling an arrow function, this is taken from the outer (enclosing) scope in which the arrow function was defined
function thisTest() {
    let that = this
    this.prop1 = 0;

    setInterval(function growUp() {
        this.prop1++;
		that.prop1++;
        console.log(this.prop1)
        console.log(that.prop1)
    }, 1000)
}

function thisTest1() {
    let that = this
    this.prop1 = 0;

    setInterval(() => {
        console.log(this.prop1)
        console.log(that.prop1)
        this.prop1++;
		that.prop1++;
    }, 1000)
}

let thisNormal = new thisTest();
/* Prints:
undefined 0
NaN 1
NaN 2
NaN 3
...
*/
let thisArrow = new thisTest1();
/* Prints:
0 0
2 2
4 4
6 6
...
*/

Exports & Imports

In every modern project we split the code into different modules, which keep the code focused and manageable. The communication between modules is maintained with imports (used to get access to a module) and exports (used to make it available to other modules).

We can export or import everything, from constants to classes, with an unlimited number of exports and imports.

Exports:
export let numbers = [1, 2, 3 , 4, 5]
export class User {
    constructor(name) {
        this.name = name
    }
}

Imports:
import { User } from './…'

Classes

…are used to replace constructor functions and provide better readability.

function P() {
    this.name = "Dime";
}

const p = new P();
console.log(p.name);


replaced with

class P2 {
     constructor () {
         this.name = 'Max';
     }
}

const p2 = new P2();
console.log(p2.name);

class Human {
    species = 'human';
}

class Person extends Human {
    name = 'Max';
    printMyName = () => {
        console.log(this.name);
    }
}

const person = new Person();
person.printMyName();
console.log(person.species);

Spread operator

The spread operator allows us to pull elements out of an array or pull the properties out of an object. It is very useful because it is often used to clone arrays and objects into a new reference.

const arr1 = [1, 2, 3];
const arr2 = [...arr1, 4, 5];

const obj1 = {
    name: "Dime"
}

const obj2 = {
    ...obj1,
    age: 26
}

const obj3 = {...obj1};
const obj4 = obj1;
obj1 === obj3;  // false
obj1 === obj4; // true

Destructuring

The last next-generation feature I am going to cover is destructuring. Destructuring allows us to easily access the values of arrays and objects and assign them to variables. It is mostly used with function arguments, when a whole object is passed to the function but only some of its properties are needed.

const arr = [1, 2, 3];
const [a1, a2] = arr;

const obj = {
    name: "Dime",
    age: 26
}

const {name} = obj;

const printValue = (obj) => {
    console.log(obj.name);
}

const printValue1 = ({name}) => {
    console.log(name);
}

printValue({name: "Dime", age: 26});
printValue1({name: "Dime", age: 26});

Conclusion

As we can see, the next generation is already here, and I strongly believe that in the near future non-next-generation JavaScript code won't be tolerated in modern projects.
So don't hesitate, just try it out and enjoy the next-generation JavaScript features.

Improve your performance using JPA Entity Graph

Reading Time: 7 minutes

If you are a web developer, you have probably developed an endpoint that has a slow response time. The cause might be that you are calling a 3rd-party API, doing file processing, or it might be how your entities are retrieved from the database.

In this article, we are going to take a look at how the Entity Graph might help us to improve our query performance when using JPA and Spring Boot.

Let’s discuss the following scenario:

We want to build an application where we can keep track of buildings, how many apartments every building has and how many tenants every apartment has. I have already created a simple application that can be downloaded from here.

In order to achieve the previously mentioned scenario, we will need to have the following entities:

package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Building {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String buildingName;

    @OneToMany(mappedBy = "building", cascade = CascadeType.ALL)
    private List<Apartment> apartments;

    public void addApartment(Apartment apartment) {
        if (apartments == null) {
            apartments = new ArrayList<>();
        }
        apartments.add(apartment);
        apartment.setBuilding(this);
    }

}
package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Apartment {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String type;

    @JoinColumn(name = "building_id")
    @ManyToOne
    private Building building;

    @OneToMany(mappedBy = "apartment", cascade = CascadeType.ALL)
    private List<Tenant> tenants;

    public void addTenant(Tenant tenant) {
        if (tenants == null) {
            tenants = new ArrayList<>();
        }
        tenants.add(tenant);
        tenant.setApartment(this);
    }

}
package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Tenant {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;

    private String lastName;

    @JoinColumn(name = "apartment_id")
    @ManyToOne
    private Apartment apartment;
}

We want to observe what will happen when we retrieve all the entities. For that purpose, a service method called iterate is created in BuildingService that gets all the buildings and loops through all of the related entities. For this method to be visible to the outside world, a BuildingController is created that exposes a GET endpoint from which we can access the iterate method in BuildingService. In order to have some data in our database, there is an SQL script data.sql that inserts some data and is executed on startup. I would strongly suggest starting your application in debug mode and stepping through the iterate method.
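
The exposed endpoint could look like the sketch below (imports are omitted, as in the other snippets in this article; the actual controller is part of the linked demo project):

@RestController
@RequestMapping("/building")
@RequiredArgsConstructor
public class BuildingController {

    private final BuildingService buildingService;

    @GetMapping(value = "/iterate")
    public ResponseEntity<Void> iterate() {
        buildingService.iterate();
        return new ResponseEntity<>(HttpStatus.OK);
    }
}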

If you have already started your application, open the following URL: http://localhost:8080/building/iterate in your browser or in an API client (Postman, for example) and execute the GET request. This will execute the iterate method that was created previously.

Let’s see the content of the iterate service method we are calling with this endpoint and observe the console while executing it:

package com.north47.entitygraphdemo.service;

import com.north47.entitygraphdemo.repository.BuildingRepository;
import com.north47.entitygraphdemo.repository.model.Apartment;
import com.north47.entitygraphdemo.repository.model.Building;
import com.north47.entitygraphdemo.repository.model.Tenant;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

import java.util.List;

@Slf4j
@Service
@RequiredArgsConstructor
public class BuildingService {

    private final BuildingRepository buildingRepository;

    public void iterate() {
        log.debug("Iteration started");
        log.debug("Get all buildings");
        final List<Building> buildings = buildingRepository.findAll();
        buildings.forEach(building -> {
            log.debug("Get all apartments for building with id: {}", building.getId());
            final List<Apartment> apartments = building.getApartments();
            apartments.forEach(apartment -> {
                log.debug("Get all tenants for apartment with id: {}", apartment.getId());
                final List<Tenant> tenants = apartment.getTenants();
                log.debug("Apartment with id : {} has {} tenants", apartment.getId(), tenants.size());
            });
        });
    }
}

If you are in debug mode you may notice that after buildingRepository.findAll() is executed we can see the following log in the console:

Hibernate: select building0_.id as id1_1_, building0_.building_name as building2_1_ from building building0_

Let’s continue with executing the rest of the code. What will appear in the console is the following:

Hibernate: select apartments0_.building_id as building3_0_0_, apartments0_.id as id1_0_0_, apartments0_.id as id1_0_1_, apartments0_.building_id as building3_0_1_, apartments0_.type as type2_0_1_ from apartment apartments0_ where apartments0_.building_id=?
Hibernate: select tenants0_.apartment_id as apartmen4_2_0_, tenants0_.id as id1_2_0_, tenants0_.id as id1_2_1_, tenants0_.apartment_id as apartmen4_2_1_, tenants0_.last_name as last_nam2_2_1_, tenants0_.name as name3_2_1_ from tenant tenants0_ where tenants0_.apartment_id=?
Hibernate: select tenants0_.apartment_id as apartmen4_2_0_, tenants0_.id as id1_2_0_, tenants0_.id as id1_2_1_, tenants0_.apartment_id as apartmen4_2_1_, tenants0_.last_name as last_nam2_2_1_, tenants0_.name as name3_2_1_ from tenant tenants0_ where tenants0_.apartment_id=?

Even though we are not calling any additional repository methods, SQL queries are executed. This happens because the fetch type is not specified on the entities, and the default for @OneToMany relationships is LAZY. This means that whenever we access collections annotated with @OneToMany (in our case, by calling getApartments in Building and getTenants in Apartment), an additional query is executed. Imagine having lots of data and executing similar logic: this would cause many more additional queries and thus a huge latency. One solution is to switch the fetch type to EAGER, but that means these collections would always be loaded, and we wouldn't be able to change that at runtime.
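
For completeness, switching to eager fetching would only mean changing the mapping, for example in Building (shown purely for illustration; we won't do this here):

@OneToMany(mappedBy = "building", cascade = CascadeType.ALL, fetch = FetchType.EAGER)
private List<Apartment> apartments;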

One of the better solutions is the JPA Entity Graph. Let's see how it can make our life easier. We will do the following changes in our domain class Building (the apartments collection becomes a Set; the same change applies to the tenants collection in Apartment):

package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;
import java.util.HashSet;
import java.util.Set;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
@NamedEntityGraph(
        name = "Building.List",
        attributeNodes = {@NamedAttributeNode(value = "apartments", subgraph = "Building.Apartment")},
        subgraphs = {
                @NamedSubgraph(name = "Building.Apartment",
                        attributeNodes = @NamedAttributeNode(value = "tenants"))
        }
)
public class Building {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String buildingName;

    @OneToMany(mappedBy = "building", cascade = CascadeType.ALL)
    private Set<Apartment> apartments;

    public void addApartment(Apartment apartment) {
        if (apartments == null) {
            apartments = new HashSet<>();
        }
        apartments.add(apartment);
        apartment.setBuilding(this);
    }

}

So what happened here? We have defined an entity graph named Building.List. With the attributeNodes we specify which collections should be loaded. Since we also want to get the tenants, we defined a subgraph called Building.Apartment, and in the subgraphs we say to load all the tenants for every apartment. In order for this entity graph to be used, we need to create a method in our BuildingRepository and specify that it should use this entity graph:

package com.north47.entitygraphdemo.repository;

import com.north47.entitygraphdemo.repository.model.Building;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

import java.util.List;

public interface BuildingRepository extends JpaRepository<Building, Long> {


    @Override
    List<Building> findAll();

    @EntityGraph(value = "Building.List")
    @Query("select b from Building as b")
    List<Building> findAllWithEntityGraph();
}

And of course, we will provide a service method that has the same logic, but this time findAllWithEntityGraph() is called:

public void iterateWithEntityGraph() {
        log.debug("Iteration with entity started");
        log.debug("Get all buildings");
        final List<Building> buildings = buildingRepository.findAllWithEntityGraph();
        buildings.forEach(building -> {
            log.debug("Get all apartments for building with id: {}", building.getId());
            final Set<Apartment> apartments = building.getApartments();
            apartments.forEach(apartment -> {
                log.debug("Get all tenants for apartment with id: {}", apartment.getId());
                final Set<Tenant> tenants = apartment.getTenants();
                log.debug("Apartment with id : {} has {} tenants", apartment.getId(), tenants.size());
            });
        });
    }

What remains is to expose this method through the BuildingController so we can test our new functionality:

@GetMapping(value = "/iterate/entityGraph")
    public ResponseEntity<Void> iterateWithEntityGraph() {
        buildingService.iterateWithEntityGraph();
        return new ResponseEntity<>(HttpStatus.OK);
    }

Now if we put the following URL http://localhost:8080/building/iterate/entityGraph in our browser and observe our console we can see that only one query is executed:

Hibernate: select building0_.id as id1_1_0_, apartments1_.id as id1_0_1_, tenants2_.id as id1_2_2_, building0_.building_name as building2_1_0_, apartments1_.building_id as building3_0_1_, apartments1_.type as type2_0_1_, apartments1_.building_id as building3_0_0__, apartments1_.id as id1_0_0__, tenants2_.apartment_id as apartmen4_2_2_, tenants2_.last_name as last_nam2_2_2_, tenants2_.name as name3_2_2_, tenants2_.apartment_id as apartmen4_2_1__, tenants2_.id as id1_2_1__ from building building0_ left outer join apartment apartments1_ on building0_.id=apartments1_.building_id left outer join tenant tenants2_ on apartments1_.id=tenants2_.apartment_id

So we managed to reduce the number of queries from 4 to 1, and we still have the possibility to call the findAll() method in the BuildingRepository, which won't load all the apartments or the tenants. In a real-case scenario you can define as many entity graphs as you want and specify for each one which collections should be loaded.
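
As a side note, if you prefer not to declare a named entity graph on the entity itself, Spring Data JPA also supports ad-hoc entity graphs defined directly on the repository method via attributePaths. A small illustrative variant (not part of the demo project) could look like this:

@EntityGraph(attributePaths = {"apartments", "apartments.tenants"})
@Query("select b from Building as b")
List<Building> findAllWithAdHocEntityGraph();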

Hope you had fun, you can find the code in our repository.

Providing Accessibility In Your iOS App (Basic Overview)

Reading Time: 6 minutes

As a developer, the most important aspect of your app is the user experience. No matter the type of software you are creating, it is very important that it is easy to use and accessible to everyone, especially if the app you are developing is meant to be used by the elderly or by users with disabilities.

This post won't contain any detailed technicalities on how to achieve good accessibility, since there are already countless blogs and websites that cover this topic in great detail. Its purpose is to cover the various topics that are important for achieving proper accessibility within your app. And with that, let's start with the very basics.

What is accessibility in iOS?

In its most basic form, accessibility in software products means giving everyone proper access and options to use your software, no matter their disability. This covers visual and auditory impairments as well as physical disabilities. It is important to leave the flow unobstructed and have every part of the app reachable, just as it is intended for the default experience.

iOS already offers great out-of-the-box functionality to assist with achieving proper accessibility. As a developer, the important part is to make your app compatible with that functionality. In some cases it is also required by law to make your app accessible, if the target audience of the app is a group of people that could have certain disabilities.

Additionally, Xcode offers the Accessibility Inspector which you can use to test out all of the UI elements and see if they are compatible with the necessary accessibility options.

Let’s first cover what iOS offers for accessibility that we would need to pay attention to.

Voice Over

The functionality of VoiceOver is to offer users certain gestures and voice-assisted tools to navigate through an app. These are easy to learn features that can greatly enhance the experience for users with disabilities.

Display & Text Size

Display & Text Size helps with scaling up the UI of the app to make the text larger and easier to read. Both of these features are very important to complete the full experience for all users.

Now, let’s cover the basic topics to make your app accessible

Scalability

As previously mentioned, some of the native functions help with adjusting the look of the app so its elements are larger and easier to look at. One step to ensure that is making sure that almost every screen in the app has an embedded scroll view. Even if a certain screen only contains a single label or a button or two, the scalability options can push the boundaries so far that the content needs that extra space to be displayed.

As you can see in the example above, the difference is substantial. This is why, when dealing with accessibility, it's important to avoid using fixed widths and heights on your UI elements (with minor exceptions), because otherwise you will experience truncated text and cut-off content.

Design

Considering the latest trends in app design, everyone strives for simplicity. While this is also true for accessible apps, it is important to either make some exceptions or bring the simplicity down even further, depending on the content of the app.

It is especially important not to clutter the screen with too many labels/buttons on a horizontal scale. Since the UI needs to scale up, you need to provide enough room for everything to be displayed.

One thing to note is that sometimes the scalability might break some design rules that you have about the app, but that’s completely fine. As long as everything can be accessible at all possible points, there can be some exceptions regarding that.

Text

The default font provided by Xcode already works with scalability within the app. It's also a good reminder to make sure that the font of choice within your app is easy to read no matter the size.

Also, it's advised to use the regular and/or larger font weights, compared to Thin, Ultrathin and Light, which can be harder to see.

Additionally, if there are any visuals in the app that contain text inside images, it's better, where possible, to adjust the design so that all of the text lives in native labels, so VoiceOver can read it out for the user.

Colours

On this topic, the most important part is to use a combination of colours that contrast each other to ensure that the content is properly viewable for everyone. In its most basic form, this is usually text against a background. In some cases, you might have to pay attention to colour choices for users with colour blindness. There are various tools to compare the necessary contrast.

Do you see the number 74?

Accessibility hints and labels

All of the interactive UI elements available in Xcode can have some sort of accessibility info added to them. This is important so that users know what those elements convey. This is especially true for images, where you can provide all the details about what the image contains. You can also add hints to buttons, so the user knows what action a certain button will perform prior to pressing it.

Conclusion

In general, providing accessibility in your apps is not an easy thing to achieve. It requires a proper setup with consideration of many aspects to ensure that all users can freely use your app without any hiccups. Luckily, Xcode provides all of the necessary functionality to create and test all of the accessibility options. As mentioned at the start, there are plenty of tutorials that cover this topic in great detail with all of the technicalities. So hopefully this will guide you in the right direction to achieve that!

What is CI? Continuous Integration Explained

Reading Time: 5 minutes

Continuous Integration (CI) is a software development practice that requires members of a team, to frequently integrate their code changes into a central repository (master branch), preferably several times a day.

Each merge is then verified by automatically generating a build, and running automated tests against that build.

By integrating regularly, you can detect errors quickly, as well as locate and fix them easier.

Why is Continuous Integration Needed?

Back in the day, BCI (Before Continuous Integration), developers from a single team might have worked in isolation for longer periods of time and merged their code changes only when they finished working on a particular feature or bug fix.

This caused the well-known merge hell (integration hell) or in other words a lot of code conflicts, bugs introduced, lots of time invested into the analysis, as well as frustrated developers and project managers.

All these ingredients made it harder to deliver updates and value to the customers on time.

How does Continuous Integration Work?

Continuous Integration as a software development practice entails two components: an automation component and a cultural one.

The cultural component focuses on the principle of frequent integrations of your code changes to the mainline of the central repository, using a version control system such as Git, Mercurial or Subversion.

By applying the cultural component you will drastically lower the frustration and time wasted on merging code, because in reality you are merging small changes all the time.

As a matter of fact, you can practice Continuous Integration using only this principle, but with adding the automation component into your CI process you can exploit the full potential of the Continuous Integration principle.

Continuous Integration workflow (image)

As shown in the picture above, this includes a CI server that will generate builds automatically, run automated tests against those builds and notify (or alert) the team members of the results.

By leveraging the automation component you will immediately be aware of any errors, thus allowing the team to fix them fast and without too much time spent analysing.

There are plenty of CI tools out there that you can choose from, but the most common are: Jenkins, CircleCI, GitHub Actions, Bitbucket Pipelines etc.

Continuous Integration Best Practices and Benefits

Everyone should commit to the mainline daily

By doing frequent commits and integrations, developers let other developers know about the changes they’ve done, so passive communication is being maintained.

Other benefits that come with developers integrating multiple times a day:

  • integration hell is drastically reduced
  • conflicts are easily resolved as not much has changed in the meantime
  • errors are quickly detected

The builds should be automated and fast

Given the fact several integrations will be done daily, automating the CI Pipeline is crucial to improving developer productivity as it leads to less manual work and faster detection of errors and bugs.

Another important aspect of the automated build is optimising its execution speed and making it as fast as possible, as this enables faster feedback and leads to more satisfied developers and customers.

Everyone should know what’s happening with the system

Given Continuous Integration is all about communication, a good practice is to inform each team member of the status of the repository.

In other words, whenever a merge is made, thus a build is triggered, each team member should be notified of the merge as well as the results of the build itself.

To notify all team members or stakeholders, use your imagination: email is the most common channel, but you can also leverage SMS or integrate your CI server with communication platforms like Slack, Microsoft Teams, Webex etc.

Test Driven Development

Test Driven Development (TDD) is a software development approach relying on the principle of writing tests before writing the actual code. What TDD offers as a value in general, is improved test coverage and an even better understanding of the system requirements.

But, put those together, Continuous Integration and TDD, and you will get a lot more trust and comfort in the CI Pipelines as every new feature or bug fix will be shipped with even better test coverage.

Test Driven Development also inspires a cultural change into the team and even the whole organisation, by motivating the developers to write even better and more robust test cases.
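
To make the test-first idea concrete, here is a tiny JUnit 5 sketch in which the test is written before the production code exists. PriceCalculator and applyDiscount are made-up names used only for illustration:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    // Written first: this test fails until PriceCalculator.applyDiscount is implemented
    @Test
    void applyDiscountReducesPriceByTenPercent() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.applyDiscount(100.0, 10), 0.001);
    }
}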

Pull requests and code review

A big portion of the software development teams nowadays, practice pull request and code review workflow.

A pull request is typically created whenever a developer is ready to merge new code changes into the mainline, making the pull request perfect for triggering the CI Pipeline.

Usually, additional manual approval is required after a successful build, where other developers review the new code, make suggestions and approve or deny the pull request. This final step brings additional value such as knowledge sharing and an additional layer of communication between the team members.

Summary

Building software solutions in a multi-developer team is as complex as it was five, ten or even twenty years ago if you are not using the right tools and exercising the right practices and principles, and Continuous Integration is definitely one of them.


I hope you enjoyed this article and you are not leaving empty-handed.
Feel free to leave a comment. 😀


How we deploy with Terraform and BitBucket to Azure Kubernetes

Reading Time: 6 minutes

N47 implemented a set of back-office web applications for Prestige, a real estate management company located in Zurich, Switzerland. One application is a tool for displaying construction projects near properties managed by Prestige, and a second example is a tool for creating and assigning orders to craftsmen. The following examples, however, aren't specific to those use cases.

Screenshot of the Construction Project tool.

An Overview

The project entails one frontend application with multiple microservices whereby each service has its own database schema.

The application consumes data from Prestige’s main ERP system Abacus and third-party applications.

N47 is responsible for setting up and maintaining the full Kubernetes stack, MySQL Database, Azure Application Gateway and Azure Active Directory applications.

Another company is responsible for the networking and the Abacus part.

Architectural Overview

Involved Technologies

Our application uses the following technologies:

  • Database: MySQL 8
  • Microservices: Java 11, Spring Boot 2.3, Flyway for database schema updates
  • Frontend: Vue.js 2.5 and Vuetify 2.3
  • API Gateway: nginx

The CI/CD technology stack includes:

  • Source code: BitBucket (Git)
  • Pipelines: BitBucket Pipelines
  • Static code analysis: SonarCloud
  • Infrastructure: Terraform
  • Cloud provider: Azure

We’ll focus on the second list of technologies.

Infrastructure as Code (IaC) with Terraform and BitBucket Pipelines

One thing I really like when using IaC is having the definition of the involved services and resources of the whole project in source code. That enables us to track the changes over time in the Git log and of course, it makes it far easier to set up a stage and deploy safely to production.

Please read more about Terraform in our blog post Build your own Cloud Infrastructure using Terraform. The Terraform website is of course as well a good resource.

Storage of Terraform State

One important thing when dealing with Terraform is storing the state in an appropriate place. We’ve chosen to create an Azure Storage Account and use Azure Blob Storage like this:

terraform {
  backend "azurerm" {
    storage_account_name = "prestigetoolsterraform"
    container_name = "prestige-tools-dev-tfstate"
    key = "prestige-tools-dev"
  }
}

The required access_key is passed as an argument to terraform within the pipeline (more details later). You can find more details in the official tutorial Store Terraform state in Azure Storage by Microsoft.

Another important point is not to run pipelines in parallel, as this could result in conflicts with locks.

Used Terraform Resources

We provide the needed resources on Azure via BitBucket + Terraform. Selection of important resources:

Structure of Terraform Project

We created an entry point for each stage (local, dev, test and prod) which is relatively small and mainly aggregates the modules with some environment-specific configuration.

The configurations, credentials and other data are stored as variables in the BitBucket pipelines.

/environments
  /local
  /dev
  /test
  /prod
/modules
  /azure_active_directory
  /azure_application_gateway
  /azure_aplication_insights
    /_variables.tf
    /_output.tf
    /main.tf
  /azure_mysql
  /azure_kubernetes_cluster
  /...

The modules themselves always have the files _variables.tf, main.tf and _output.tf, to keep a clean separation of input, logic and output.


Example source code of the azure_aplication_insights module (please note that some of the text has been shortened in order to have enough space to display it properly)

_variables.tf

variable "name" {
  type = string
}

variable "location" {
  type = string
}

variable "resource_group_name" {
  type = string
}

main.tf

resource "azurerm_application_insights" "ai" {
  name                = var.name
  location            = var.location
  resource_group_name = var.resource_group_name
  application_type    = "web"
}

_output.tf

output "instrumentation_key" {
  value = azurerm_application_insights.ai.instrumentation_key
}
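To make the wiring concrete, here is a minimal sketch of how such a module could be consumed from one of the environment entry points; the resource names and values below are assumptions, not the real project configuration:

# environments/dev/main.tf (sketch) - names and values are assumptions
module "application_insights" {
  source              = "../../modules/azure_aplication_insights"
  name                = "prestige-tools-dev-ai"
  location            = "westeurope"
  resource_group_name = "prestige-tools-dev"
}

# the module output can then be passed on to other modules or exposed as a stage output
output "instrumentation_key" {
  value = module.application_insights.instrumentation_key
}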

BitBucket Pipeline

The BitBucket pipeline controls Terraform and includes the init, plan and apply steps. In the beginning, we decided to apply the infrastructure changes manually.

image: hashicorp/terraform:0.12.26

pipelines:
  default:
    - step:
        name: Plan DEV
        script:
          - cd environments/dev
          - terraform init -backend-config="access_key=$DEV_TF_CONFIG_ACCESS_KEY"
          - terraform plan -out out-overall.plan
        artifacts:
          - environments/dev/out-overall.plan

  branches:
    develop:
      - step:
          name: Plan DEV
          script:
            - cd environments/dev
            - terraform init -backend-config="access_key=$DEV_TF_CONFIG_ACCESS_KEY"
            - terraform plan -out out-overall.plan
          artifacts:
            - environments/dev/out-overall.plan
            - environments/dev/.terraform/**
      - step:
          name: Apply DEV
          trigger: manual
          deployment: dev
          script:
            - cd environments/dev
            - terraform apply out-overall.plan

    master:
      # PRESTIGE TEST
      - step:
          name: Plan TEST
          script:
            - cd environments/test
            - terraform init -backend-config="access_key=$PRESTIGE_TF_CONFIG_ACCESS_KEY"
            - terraform plan -out out-overall.plan
          artifacts:
            - environments/test/out-overall.plan
            - environments/test/.terraform/**
      - step:
          name: Apply TEST
          trigger: manual
          deployment: test
          script:
            - cd environments/test
            - terraform apply out-overall.plan

      # PRESTIGE PROD ...

Needed Steps for Deploying to Production

1. Create feature branch with some changes

2. Push to Git (the BitBucket pipeline with the step Plan DEV will run). All the details about the changes can be found in the output of the terraform plan command

3. Create a pull request and merge the feature branch into develop. This will start another pipeline with the two steps (plan + apply)

4. Check the output of the plan step before triggering the deploy on dev

5. Now the dev stage is updated and, if everything is working as you wish, create another pull request to merge from develop to master. Then re-do the same for production or other stages

We have just deployed an infrastructure change to production without logging into any system except BitBucket. Time for celebration.

Symbol picture of N47 production deployment party (from unsplash)

Is Really Everything That Shiny?

Well, everything is a big word.

We found issues, for example with cross-module dependencies, which aren’t just solvable with a depends_on. Luckily, there are some alternatives:

network module:

output "id" {
  description = "The Azure assigned ID generated after the Virtual Network resource is created and available."
  value = azurerm_virtual_network.virtual_network.id
}

kubernetes cluster module, which depends on network:

variable "subnet_depends_on" {
  description = "Variable to force module to wait for the Virtual Network creation to finish"
}

and the usage of those two modules in environments/dev/main.tf

module "network" {
  source = "../../modules/azure_network"
}

module "kubernetes_cluster" {
  source = "../../modules/azure_kubernetes_cluster"
  subnet_depends_on = module.network.id
}

After having things set up, it is a real joy to wipe out a stage and just provision everything from scratch by running a BitBucket pipeline.


Create a CI/CD pipeline with GitHub Actions

Reading Time: 7 minutes

A CI/CD pipeline helps in automating your software delivery process. What the pipeline does is building code, running tests, and deploying a newer version of the application.

Not long ago, GitHub announced GitHub Actions, meaning that they have built in support for CI/CD. This means that developers can use GitHub Actions to create a CI/CD pipeline.

With Actions, GitHub now allows developers not just to host the code on the platform, but also to run it.

Let’s create a CI/CD pipeline using GitHub Actions; the pipeline will deploy a Spring Boot app to AWS Elastic Beanstalk.

First of all, let’s find a project

For this purpose, I will be using this project which I have forked:
Once forked, we need to open the project. Upon opening, we will see the section for GitHub Actions.

GitHub Actions Tool

Add predefined Java with Maven Workflow

Get started with GitHub Actions

By clicking on Actions we are provided with a set of predefined workflows. Since our project is Maven based we will be using the Java with Maven workflow.

By clicking “Start commit”, GitHub will add a commit with the workflow; the commit can be found here.

Let’s take a look at the predefined workflow:

name: Java CI with Maven

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 1.8
      uses: actions/setup-java@v1
      with:
        java-version: 1.8
    - name: Build with Maven
      run: mvn -B package --file pom.xml

name: Java CI with Maven
This is just specifying the name for the workflow

on: push,pull_request
The on keyword is used for specifying the events that will trigger the workflow. In this case, those events are push and pull_request on the master branch

job:
A job is a set of steps that execute on the same runner

runs-on: ubuntu-latest
The runs-on key specifies the underlying OS we want our workflow to run on, for which we are using the latest version of Ubuntu

steps:
A step is an individual task that can run commands (known as actions). Each step in a job executes on the same runner, allowing the actions in that job to share data with each other

actions:
Actions are the smallest portable building blocks of a workflow, which are combined into steps to create a job. We can create our own actions, or use actions created by the GitHub community

Our steps actually set up Java and execute the Maven commands needed to build the project.

Since we added the workflow by creating a commit from the GUI, the pipeline has automatically started and verified the commit – which we can see in the following image:

Pipeline report

Create an application in AWS Elastic Beanstalk

The next thing that we need to do is to create an app on Elastic Beanstalk where our application is going to be deployed. For that purpose, an AWS account is needed.

AWS Elastic Beanstalk service

Upon opening the Elastic Beanstalk service we need to choose the application name:

Application name

For the platform, choose Java 8.

Choose platform

For the application code, choose Sample application and click Create application.
Elastic Beanstalk will create and initialize an environment with a sample application.

Let’s continue working on our pipeline

We are going to use an action created by the GitHub community for deploying an application on Elastic Beanstalk. The action is einaregilsson/beanstalk-deploy.
This action requires additional configuration, which is added using the with keyword:

    - name: Deploy to EB
      uses: einaregilsson/beanstalk-deploy@v13
      with:
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        application_name: {change this with aws application name}
        environment_name: {change this with aws environment name}
        version_label: ${{github.SHA}}
        region: {change this with aws region}
        deployment_package: target/spring-petclinic-rest-2.2.5.jar

Add variables

We need to add values for the AWS Elastic Beanstalk application_name and environment_name, the AWS region, and the AWS API keys.

Go to AWS Elastic Beanstalk and copy the previously created Environment name and Application name.
Go to AWS IAM and, under your user in the security credentials section, either create a new AWS access key or use an existing one.
The AWS Access Key and AWS Secret access key should be added into the GitHub project settings under the secrets tab which looks like this:

GitHub Project Secrets

The complete pipeline should look like this:

name: Java CI with Maven

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 1.8
      uses: actions/setup-java@v1
      with:
        java-version: 1.8
    - name: Build
      run: mvn -B package --file pom.xml
    - name: Deploy to EB
      uses: einaregilsson/beanstalk-deploy@v13
      with:
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        application_name: pet-clinic
        environment_name: PetClinic-env
        version_label: ${{github.SHA}}
        region: us-east-1
        deployment_package: target/spring-petclinic-rest-2.2.5.jar

Lastly, let’s modify the existing app

In order to be considered a healthy instance by Elastic Beanstalk, the deployed application has to return an OK response when accessed by the load balancer standing in front of Elastic Beanstalk. The load balancer accesses the application on the root path. The forked application, when accessed on the root path, forwards the request to swagger-ui.html. Therefore, we need to remove the forwarding.

Change RootController.class:

@RequestMapping(value = "/", method = RequestMethod.GET, produces = "application/json")
public ResponseEntity<Void> getRoot() {
    return new ResponseEntity<>(HttpStatus.OK);
}

Change application.properties server port to 5000 since, by default, Spring Boot applications will listen on port 8080. Elastic Beanstalk assumes that the application will listen on port 5000.

server.port=5000

And remove the server.servlet.context-path=/petclinic/.

The successful commit which deployed our app on AWS Elastic Beanstalk can be seen here:

Pipeline build

And the Elastic Beanstalk with a green environment:

Elastic Beanstalk green environment

Voila, there we have it: a CI/CD pipeline with GitHub Actions and deployment on AWS Elastic Beanstalk. You can find the forked project here.

Pet Clinic Swagger UI

CloudFormation: Passing values and parameters to nested stacks

Reading Time: 7 minutes

Why CloudFormation?

CloudFormation allows provisioning and managing AWS resources with simple configuration files, which let us spend less time managing those resources and have more time to focus on our applications that run on AWS instead.

We can simply write a configuration template (YAML/JSON file) that describes the resources we need in our application (like EC2 instances, Dynamo DB tables, or having the entire app monitoring automated in CloudWatch). We do not need to manually create and configure individual AWS resources and figure out what is dependent on what, and more importantly, it is scalable so we can re-use the same template, with a bunch of parameters, and have the entire infrastructure replicated in different stages/environments.

Another important aspect of CloudFormation is that we have our infrastructure as code, which can be version controlled, reviewed and easily maintained.

Nested stacks

CloudFormation nested stacks diagram

As our infrastructure grows, common patterns can emerge, which can be separated into dedicated templates and re-used later in other templates. Good examples are load balancers and VPC networks. There is another, less obvious reason: CloudFormation stacks have a limit of 200 resources per stack, which can easily be reached as our application grows. That is why nested stacks can be really useful.

A nested stack is a simple stack resource of type AWS::CloudFormation::Stack. Nested stacks can themselves contain other nested stacks, resulting in a hierarchy of stacks, as shown in the diagram. There must be only one root stack, which is called the parent.

Passing parameters to the nested stacks

One of the biggest challenges when having nested stacks is parameters exchange between stacks. Without parameters, it would be impossible to have robust and dynamic stacks, that are scalable and flexible.

The simplest example would be deploying the same CloudFormation stack to multiple stages, like beta, gamma and prod (dev, pre-prod, prod, or any other naming convention you prefer).

Depending on which stage you deploy your application, you may want to set different properties to certain resources. For example, in the development stage, you will not have the same traffic as prod, therefore you can fine-grain the resources for your needs, and prevent spending extra money for unused resources.

Another example is when an application is deployed to various regions that have different traffic consumption and time spikes. For instance, an application may have 1 million users in Europe, but only 100 000 in Asia. Using stack parameters allows you to reduce the resources you use in the latter region, which can significantly impact your finances.

Below is a code snippet, showing a simple use case where a DynamoDB table is created in a nested stack, that receives the stage parameter from the parent stack. Depending on which stage, at deploy time, we set different read and write capacity to our table resource.

Root stack

In the parent stack, we define the Stage parameter under the Parameters section. We later pass the parameters to the nested stack, which is created from a template child_stack.yml, stored in an S3 bucket.

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Root stack
Parameters:
  Stage:
    Type: String
    Default: beta
    AllowedValues:
        - beta
        - gamma
        - prod
  TestRegion:
    Type: String
Resources:
    DynamoDBTablesStack:
      Type: AWS::CloudFormation::Stack
      Properties:
        TemplateURL: https://n47-cloudformation.s3.eu-central-1.amazonaws.com/child_stack.yml
        Parameters:
            Stage:
                Ref: Stage
Child stack

In the nested stack, we define the Stage parameter, just like we did in the parent. If we do not define it here as well, the creation will fail because the passed parameter (from the parent) is not recognized. Whatever parameters we pass to the nested stack have to be defined in its template parameters.

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Nested stack
Parameters:
  Stage:
    Type: String
    Default: beta
    AllowedValues:
        - beta
        - gamma
        - prod
Mappings:
    UsersDDBReadWriteCapacityPerStage:
        beta:
            ReadCapacityUnits: 10
            WriteCapacityUnits: 10
        gamma:
            ReadCapacityUnits: 50
            WriteCapacityUnits: 50
        prod:
            ReadCapacityUnits: 500
            WriteCapacityUnits: 1000
Resources:
    UserTable:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: user_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: user_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: !FindInMap [UsersDDBReadWriteCapacityPerStage, !Ref Stage, ReadCapacityUnits]
                WriteCapacityUnits: !FindInMap [UsersDDBReadWriteCapacityPerStage, !Ref Stage, WriteCapacityUnits]
            TableName: Users

The Mappings section in the child template is used for fetching the corresponding read/write capacity value at deploy time, when the actual value of the Stage parameter is available. More about Mappings can be found in the official documentation.
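The actual Stage value is supplied when the root stack is created or updated. As a rough sketch with the AWS CLI (the stack name and template file name below are assumptions), it could look like this:

# deploy the root stack for the gamma stage (stack name and file name are assumptions)
aws cloudformation deploy \
  --template-file root_stack.yml \
  --stack-name nested-stacks-demo \
  --parameter-overrides Stage=gamma TestRegion=eu-central-1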

Output resources from nested stacks

Having many nested stacks usually implies cross-stack communication. This encourages more template code reuse.

We will do a simple illustration by extracting the DynamoDB table name we created in the nested stack before and passing it as a parameter to a second nested stack, as well as exporting its value.

In order to expose resources from a stack, we need to define them in the Outputs section of the template. We start by adding an output resource, in the child stack, with logical id UsersDDBTableName, and an export named UsersDDBTableExport.

Outputs:
    UsersDDBTableName:
        # extract the table name from the arn
        Value: !Select [1, !Split ['/', !GetAtt UserTable.Arn]] 
        Export:
            Name: UsersDDBTableExport

Note: For each AWS account, Export names must be unique within a region.

Then we create a second nested stack, which will contain two DynamoDB tables, one named UsersWithParameter and the second one UsersWithImportValue. The former is created by passing the table name from the first child stack as a parameter, and the latter by importing the value that has been exported UsersDDBTableExport.

(Note that this is just an example to showcase the two options for accessing resources between stacks, not a real-world scenario.)

For that, we added this stack definition in the root’s stack resources:

SecondChild:
    Type: AWS::CloudFormation::Stack
    Properties:
        TemplateURL: https://n47-cloudformation.s3.eu-central-1.amazonaws.com/child_stack_2.yml
        Parameters:
            TableName:
                Fn::GetAtt:
                  - DynamoDBTablesStack
                  - Outputs.UsersDDBTableName

Below is the entire content of the second child stack:

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Nested stack
Parameters:
    TableName:
        Type: String
        
Resources:
    UserTableWithParameter:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: customer_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: customer_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: 1
                WriteCapacityUnits: 1
            TableName: !Join ['', [!Ref TableName, 'WithParameter'] ]
    UserTableWithImportValue:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: customer_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: customer_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: 1
                WriteCapacityUnits: 1
            TableName: !Join ['', [!ImportValue UsersDDBTableExport, 'WithImportValue'] ]

Even though we achieved the same thing by using nested stack outputs and by exporting values, there is a difference between them. When you export a value, it is accessible to any external stack within the same region; nested stack outputs, on the other hand, can only be passed as parameters to other nested stacks within the same parent.

Notes:

  • Cross-stack references across regions cannot be created. You can use the intrinsic function Fn::ImportValue to import only values that have been exported within the same region
  • You cannot delete a stack if another stack references one of its outputs
  • You cannot modify or remove an output value that is referenced by another stack

Below are some screenshots from the AWS console, illustrating the created stacks, from the code snippets shared above:

Figure 1: root stack containing two nested stacks

Figure 2: first nested stack containing Users DynamoDB table

Figure 3: second nested stack containing UsersWithImportValue and UsersWithParameter DynamoDB tables

You can download the source templates here.


If you have any questions or feedback, feel free to comment here.

Taiko, useful toy for automation testing

Reading Time: 6 minutes

Every day we are implementing new features and client requirements. On every release, we want those changes to be correct and previous features to keep working – in other words, we want a stable application. That is why it’s necessary for the BE and FE to write tests (unit and integration tests).

The best way is to have regression end-to-end automation tests. But it is not always fun to write them: sometimes it is complex and takes time, so we avoid writing them. If the workload is larger, it may require a dedicated QA team to cover all this work, follow all changes and adapt the existing tests.

There are a few tools that make all this work easier: browser robots that record actions on web pages, and frameworks that offer good and easy ways of writing automation tests. But either they are too difficult to learn or hard to use, or they are not free.

That is why I chose Taiko, a free and open source browser test automation framework that makes all this work easy to do. A few features that are crucial for writing end to end automated tests in my opinion are:

  • Easy setup
  • Interactive recording
  • Smart selectors
  • Easy integration with Gauge

The best way to present all this is to go through some simple examples. I’ll use http://saucedemo.com/ to write a simple test for adding items in the cart.

I want almost everyone to be able to write tests; the setup should not be a complex procedure. Taiko is a free, open-source Node.js library and it works with Chromium-based browsers. Tests are written in JavaScript or any language that compiles to JavaScript (TypeScript).

This means that to start writing a test with Taiko we need a pre-installed Node.js. It is a straightforward setup (https://nodejs.org/en/download/).

For the given example I used PowerShell on Windows, but you can write it in any terminal application that you are familiar with. The command to install Taiko is:

npm install -g taiko

After a successful installation of Taiko, we will run the REPL prompt:

npx taiko

Here are two important features:

  • Interactive recorder: Taiko will record every successful command that we write here
  • The second one is the use of Taiko’s API. We can list all available APIs with the command
.api

or

.api <api>

All these API references are online too: https://docs.taiko.dev/api/reference.

Simple example

Let’s write one basic test for http://saucedemo.com/. By writing the following commands in the prompt, we will verify that the saucedemo login, adding a product to the cart, and the basket work:

await openBrowser();

// opens a new browser, I had chromium and it was open without any other setup because it uses Chrome DevTools Protocol instead of WebDriver…

await goto("saucedemo.com");

// navigates to / opens the web page that we want to test

var passwords = await text("_", below("Password for")).text();

In this line of code we have a few key commands:

  • var passwords – we read the passwords data from the page and will use it to log in
  • text – selector – it identifies an element on the page; it will look for a text element matching the given text, in our example “_”
  • below – proximity selector – it performs a relative HTML element search; it will search for elements below “Password for” on the page

var usernames = await text("_", below("usernames")).text();
console.log(usernames);

// as it is JS we can use this command too; it will be recorded. I used it to check the values, it can be removed from the final script

var username = usernames.split("\n")[1];
console.log(username);
var password = passwords.split("\n")[1];
console.log(password);
await write(username, into(textBox({id: "user-name"})));

After the username and password are read from page, we will log in:

  • write – command that types given text into the given or focused element
  • into  – selector for the element to write text into
  • textBox – selector for a text field for input, selecting it with some attribute. In our case, it will be id, but it can be any attribute too
await write(password, into(textBox({id: "password"})));
await click("LOGIN");

// again smart selector, it automatically looks for and clicks button login

Since there are multiple products and we want to test specific ones, we will use a proximity selector to add a specific product to the cart. If we don’t add “toRightOf”, it will click the first element with the “ADD TO CART” label on it.

await click("ADD TO CART", toRightOf("$9.99"));
await click("ADD TO CART", toRightOf("$15.99"));

To verify that the ADD TO CART functionality works, we will check the basket to see if the wanted products are there:

await click(link({class: "shopping_cart_link"}));

Assertions are made implicitly with every command that looks for an element. For example, the command

await click("ADD TO CART", toRightOf("$9.998"));

will throw an error

[FAIL] Error: Element with text $9.998 not found, run `.trace` for more info.

but if we want to make a custom check, we can use any Node.js assertions:

assert.strictEqual(await text("9.99").exists(), true);
assert.strictEqual(await text("15.99").exists(), true);
await click("menu");
await click("Logout");

With all these commands we created one basic test scenario. All these commands are already recorded, and we can write them to a JS file to execute this test anytime:

.code testAddCart.js

And exit the recording session:

.exit

Running our previous test with:

npx taiko testAddCart.js
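For reference, the recorded testAddCart.js looks roughly like the sketch below; the exact output of the .code command may differ slightly:

// testAddCart.js - sketch of the recorded script
const assert = require("assert");
const { openBrowser, goto, text, below, write, into, textBox, click, toRightOf, link, closeBrowser } = require("taiko");

(async () => {
    try {
        await openBrowser();
        await goto("saucedemo.com");
        // read the accepted usernames and passwords from the login page
        var passwords = await text("_", below("Password for")).text();
        var usernames = await text("_", below("usernames")).text();
        var username = usernames.split("\n")[1];
        var password = passwords.split("\n")[1];
        // log in
        await write(username, into(textBox({id: "user-name"})));
        await write(password, into(textBox({id: "password"})));
        await click("LOGIN");
        // add two specific products and open the cart
        await click("ADD TO CART", toRightOf("$9.99"));
        await click("ADD TO CART", toRightOf("$15.99"));
        await click(link({class: "shopping_cart_link"}));
        // custom checks that the products are in the basket
        assert.strictEqual(await text("9.99").exists(), true);
        assert.strictEqual(await text("15.99").exists(), true);
        await click("menu");
        await click("Logout");
    } finally {
        await closeBrowser();
    }
})();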

Other possibilities

Tests can be grouped and run with test runners. Three are supported: Gauge, Mocha and Jest. Try it with Gauge; it is an easy, straightforward procedure to set up. That way, by using Gauge, we can integrate these tests into a build pipeline in Jenkins.

Conclusion

The setup is simple, very easy and fast.

The interactive way of writing tests, seeing the result of every command in real time, is very good for learning the library. You don’t have to go through a write-build-run cycle; just write it in the REPL and that’s it, you see the results.

But selecting elements on the page was not so satisfying. Smart selectors are not so smart when there are multiple similar elements: you have to fall back to XPath or a class, debug the page and inspect the code for attributes and values.

Using the Vue3’s composition API in Vue2

Reading Time: 4 minutes

Vue 3 has officially been out since September 18th, and along with it comes the Composition API, which replaces the old Options API. The new Composition API is defined as a set of additive, function-based APIs that allow the flexible composition of component logic.

The Composition API was introduced because, with the old Options API, the readability of components suffered and the code started to get messy as components grew; the code reuse patterns had some drawbacks, the support for TypeScript was limited, etc.
The visual comparison of both APIs looks something like this:

First, we must install @vue/composition-api from npm and register it via Vue.use() before using other APIs.
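The plugin is installed as a regular npm (or yarn) dependency:

npm install @vue/composition-api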
Import the VueCompositionApi:

import Vue from 'vue'
import VueCompositionApi from '@vue/composition-api'

Register the plugin:

Vue.use(VueCompositionApi)

And then we are ready to start.

So, the new API must contain the setup() function which serves as the entry point for using the composition API inside the components. setup() is called before the beforeCreate hook. The component would look like this:

<template>
  <div>
    <div> {{ notes }} </div>
    <div> {{ count }} </div>
  </div>
</template>

<script>
import { ref, computed, reactive, onMounted, toRefs } from '@vue/composition-api'

export default {
  setup() {
    const notes = ref([])
    async function getData() {
      notes.value = await DataService.getNotesData()
    }

    const data = reactive({
      count: 0,
      actions: ['firstAction', 'secondAction', 'thirdAction'],
      object: {
        foo: 'bar'
      }
    })
    const computedData = computed(() => notes.value)

    onMounted(() => {
      getData()
    })

    return {
      ...toRefs(data),
      notes,
      computedData
    }
  }
}
</script>

In the code above we can see the structure of the new API. We have the setup() function, which is exported in the script tag.
Inside the setup function we can see several familiar properties.

const notes = ref([])

This is initializing a property inside the setup function scope. On this property, we must add ref if we want to make it reactive, which means if we don’t add the ref and we make a change to that variable, the change won’t be reflected in the DOM. This is the same as initializing variable in the data() in Vue2:

data() {
  return {
    notes: []
  }
}

As we can see, we do not have the methods section for creating functions; instead, we define the functions inside setup(). The function is then used in the onMounted hook, which is a bit different from the mounted hook in the Options API.

Some of the hooks were removed, but almost all of them are available in a similar form.

  • beforeCreate -> use setup()
  • created -> use setup()
  • beforeMount -> onBeforeMount
  • mounted -> onMounted
  • beforeUpdate -> onBeforeUpdate
  • updated -> onUpdated
  • beforeDestroy -> onBeforeUnmount
  • destroyed -> onUnmounted
  • activated -> onActivated
  • deactivated -> onDeactivated
  • errorCaptured -> onErrorCaptured

In addition, two new debug hooks were added to the composition API:

  • onRenderTracked
  • onRenderTriggered

Computed and watch are still available. In the code above we can see how computed is used to return the notes values.

reactive() is similar to ref(), and if we want to create a reactive object we can still use ref(), because underneath the hood ref() just calls reactive() for objects. On the other hand, reactive() does not work with primitive values: it takes an object and returns a reactive proxy of the original. The big difference is how you access data defined using reactive(); for example, if we returned the reactive object itself and wanted to use the count, we would have to do it like:

<div> {{ data.count }} </div>

One very important thing: to convert a reactive object into a plain object of refs, use toRefs and return the data like in the component above, which lets the template use count directly:

return {
  ...toRefs(data)
}

This is just a small piece of the cake, only something to begin with using the composition API in Vue2 application. For more, you can visit the documentation on the following link.

JavaScript Best Practices for Readable and Maintainable Code

Reading Time: 4 minutes

Let’s have a look at some coding standards that can help to:

  • Keep the code consistent
  • Make the code easier to read and understand
  • Make the code easier to maintain
  • Make the code easier to refactor

These coding standards are my own personal opinion on what can help with the above points, based on what I have learned and experienced while developing and reviewing other developers’ code.

Variables

Always use ‘const’ & ‘let’ over ‘var’

Using const helps readability, as developers know it can’t be reassigned.
var and let are both used for variable declaration in JavaScript, but the difference between them is that var is function-scoped and let is block-scoped. It is too much to get into detail here; maybe I will write another post about that.

Avoid using global variables

Minimize the use of global variables. Global variables are a terribly bad idea. The problem with global variables and functions is that they can be overwritten by other scripts.

Naming variables

Always try to come up with names that make sense and are not too long. Naming variables may be the hardest thing in coding.
Variables declared with let should be camelCase. For const, if it is at the top of the file, it should be SNAKE_CASE (all caps); if it is not at the top, then it should be camelCase.
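A small sketch of these conventions (the names themselves are made up for illustration):

// top-level constant: SNAKE_CASE (all caps)
const MAX_RETRY_COUNT = 3;

const sendRequest = () => {
  // const inside a function: camelCase
  const requestTimeout = 5000;
  // let: camelCase
  let retryCount = 0;
};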

API Calls

Pick a method and stick with it

By method I mean one of the below:

  • XMLHttpRequest
  • fetch
  • Axios
  • jQuery

So far, Axios and fetch are the most preferred ways to go. The benefit of using Axios is its wide browser support; even IE11 can run Axios.

Make the calls reusable

Instead of just having the calls everywhere in the code, it is good to have modules for your API calls. This way it becomes easier to refactor: if something changes in the API, you will have to change it only once.
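As a minimal sketch of such a module (the base URL and endpoints are made up for illustration, using fetch):

// api/users.js - single place for all user-related API calls
const BASE_URL = 'https://api.example.com';

export const getUser = async (id) => {
  const response = await fetch(`${BASE_URL}/users/${id}`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
};

export const createUser = (payload) =>
  fetch(`${BASE_URL}/users`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  }).then((response) => response.json());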

Dom Manipulation

Use CSS classes over adding styles

For example, we have a basic form here:

<form>
  <input type='text' required>
  <button type='submit'>
    Submit
  </button>
</form>

Along with the following JavaScript code:

const input = document.querySelector('input');
const form = document.querySelector('form');
form.onsubmit = (e) => {
  e.preventDefault();
}
input.oninvalid = () => {
  input.style.borderColor = 'red';
  input.style.borderStyle = 'solid';
  input.style.borderWidth = '1px';
}

Instead of adding inline style like the above example, it is much cleaner to add CSS class to the input field like in the example below:

const input = document.querySelector('input');
const form = document.querySelector('form');
form.onsubmit = (e) => {
  e.preventDefault();
}
input.oninvalid = () => {
  input.className = 'invalid';
}
// CSS
.invalid {
  border: 1px solid red;
}

Accessing the DOM tree

Accessing the DOM tree is quite an expensive operation; it is the bottleneck of JavaScript in terms of performance. Therefore, we must strive to minimize DOM access operations.

Example:

// BAD
let padding = document.getElementById("content").style.padding
let margin = document.getElementById("content").style.margin;

//GOOD
let style = document.getElementById("content").style
let padding = style.padding
let margin = style.margin

Functions

Use ES6 arrow functions where possible

Arrow functions are a more concise syntax for writing function expressions. They are anonymous and change the way this binds in functions.

//BAD
var multiply = function(a, b) {
  return a * b;
}

//GOOD
const multiply = (a, b) => a * b

 Naming functions

Functions should be camelCase and should have descriptive names:

//BAD
const sMail = () => {
  //...
};
const sendmail = () => {
  //...
};

//GOOD
const sendMail = () => {
  //...
};

Functions should only do one thing

This is one of the most important rules in programming. When your function does more than one thing, it is harder to test and read. When you isolate a function to just one action, it can be refactored easily and your code will read much cleaner.

//BAD
const notifyListeners = listeners => {
  listeners.forEach(listener => {
    const listenerRecord = database.lookup(listener);
    if (listenerRecord.isActive()) {
      notify(listener);
    }
  });
}
//GOOD
const notifyActiveListeners = listeners => {
  listeners.filter(isListenerActive).forEach(notify);
}

function isListenerActive(listener) {
  const listenerRecord = database.lookup(listener);
  return listenerRecord.isActive();
}

Conclusion

Coding standards in any language can really help with the readability and the maintainability of an application. The main point is to be consistent as they really help scale up an application in a clean way.

If you take this advice, it will bring your code’s readability and maintainability to the next level. The next time you need to address an issue or implement a feature request, your journey will be fast and seamless.

Follis: A movement-based 3D input device for gymnastic ball usage

Reading Time: 7 minutes

Follis is a movement-based 3D input device/system for object manipulation in the virtual environment with the help of gymnastic balls. The system development is based on the four main aspects of UX Life-Cycle: analysis, design, prototyping and evaluation.

The number of employees doing their daily tasks on the computer is increasing. That creates several problems for them. The authorities offer various activities to remedy these problems, such as standing up while calling, going to colleagues’ tables and getting up every 30-45 minutes.

The main purpose of the system is to solve the immobility problems of people with prolonged sitting times through the combination of virtual reality and a gymnastic ball.

The secondary purpose is to provide a new locomotion technique by combining both artificial and physical locomotion.

During the system development, we carried out appropriate training together with a gymnastic ball trainer and a physiotherapist.

The reliability of the system is rated by 17 sports scientists and one gymnastic ball expert. The results show that the “Follis” system can provide suitable exercise sections for gymnastic balls for those who work in an office. The system can be a new solution to longer sitting problems and a new technology for virtual locomotion.

Background

Computer use in the office environment mostly requires sitting at a desk and working with a desktop computer. Due to this inactivity, employees are at risk of various musculoskeletal disorders, obesity, low blood flow, muscle pain, etc.

The aim is to solve these problems by combining gymnastic ball and virtual reality (VR), at least by encouraging employees through VR. In addition, the ergonomic structure and space of the gymnastic ball, as well as the chair, allows it to be used in an office setting.

The solution sought is to get users to perform gymnastic ball movements in the virtual environment – the solution is based on tracking users’ body movements using the gyroscope technology of a smartphone. We wanted to get real-time input from users and let them manipulate objects in a 3D environment. Thus the system provides locomotion in virtual environments (VEs). The purpose of the system is simply to encourage office workers to do gymnastic ball movements during the day. If the office workers do a certain amount of gymnastics, it will definitely help protect them from the harm of prolonged sitting. Based on this point, the system provides gymnastic movements while playing VR games.

Story Board

Narrative Story Board

Design Solution

To design the new system as a technical solution, we agreed to track the user with a smartphone and program virtual reality games to perform basic gymnastic ball movements. These virtual games require basic pelvic movements, arm extension, leg movements, and adequate balance. Thus, the user will play a role in gymnastic ball training without knowing it. The total training is 15 minutes for experienced and inexperienced users.

Technical Flow Diagram

We designed 5 different games. Each game takes 3 minutes to complete. The games are designed as endless-runner games.

User Needs And Requirements

Requirements were elicited and negotiated through interviews and observation. From the results of the analysis, we derived the technical and functional requirements of a design solution to improve sitting behaviour.

Technical Requirements

  1. The system must provide an exercise pillow.
  2. The system offers sufficient space in one room.
  3. The system must provide 15 minutes of gymnastic ball training.
  4. The system should provide basic warm-up movements for pelvic tilts, hopping, and arm extensions
  5. The system must have energetic music.
  6. The system must offer a safe training period.

Functional Requirements

  1. The system must provide a gymnastic ball.
  2. The system must provide a smartphone.
  3. The system must provide a gymnastic ball cover.
  4. The system must provide a slot on the cover for the smartphone.
  5. The system must provide a gymnastic cushion.
  6. The system must provide a long-range lightning cable for smartphones.
  7. The system must provide an HTC Vive with all setup.
  8. The system must work on Unity Remote 5.
  9. The system must use Unity 3D.

Games

The mini-games have been designed and grouped to provide the different types of movements that we defined for the gymnastic ball exercises. The number of these movements (pelvic movements, arm extensions and hopping) can change depending on the game. Some of these games are primarily designed for pelvic tilts, others are combinations of all movements.

GUI

The user can simply select games from the main menu by using the HMDs controller.

Minigames

The first game – Training Center – was designed for practising ball control and getting used to the VR environment. It requires tilting the pelvis ten times front/back and left/right

Training Center

The second game – Candy Train – was designed to allow hard stretching/hopping and soft pelvic tilts

The third game – Cafe Racer – was developed to enable predominantly hard pelvic inclinations and less soft stretching/jumping movements. The game is one of the most fluid games in the system.

The fourth game – Wild West – is designed to meet all movement requirements and perform the high-intensity gym ball workout. Stretching, jumping, and pelvic tilt movements are urgently required to complete the game.

The fifth game – Tarzan – is designed to achieve the same amount of arm stretching/jumping and pelvic movements.

DEMO VIDEO

Test Results

The results were analyzed and documented. According to the analysis, the system is being adapted to the needs of users to a large extent. It can provide proper gym ball workout with inclines, hops and pelvic stretching. The overall result of the SUS form is 70.33, which means that the system with the grade “B” is above average. The system can be used without learning too many things before using it. The gym ball as an input device for the 3D environment can artificially provide effective VR locomotion. Additionally, the combination of virtual reality and gym ball use could be an excellent solution to stimulate physical gym ball training.

How to integrate GraphQL in the Microservice

Reading Time: 4 minutes

GraphQL is a query language for your APIs and a runtime for fulfilling those queries with existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more. GraphQL is designed to make APIs fast, flexible, and developer-friendly.

GraphQL SPQR

GraphQL SPQR (GraphQL Schema Publisher & Query Resolver, pronounced like speaker) is a simple-to-use library for rapid development of GraphQL APIs in Java. It works by dynamically generating a GraphQL schema from Java code.

In this tutorial, we are going to explain the simple steps for integrating GraphQL into your microservice.

  • Include dependencies in pom.xml
<!-- GraphQL -->
<dependency>
    <groupId>io.leangen.graphql</groupId>
    <artifactId>spqr</artifactId>
    <version>${graphql-spqr.version}</version>
</dependency>
<dependency>
    <groupId>com.graphql-java-kickstart</groupId>
    <artifactId>graphql-spring-boot-autoconfigure</artifactId>
    <version>${graphql-spring-boot-autoconfigure.version}</version>
</dependency>
  • Spring Boot Java Configuration class:
@Configuration
public class GraphQLConfiguration {
    @Bean
    public GraphQLSchema schema(GraphQLRootQuery graphQLRootQuery,
                                GraphQLRootMutation graphQLRootMutation,
                                GraphQLRootSubscription graphQLRootSubscription,
                                GraphQLResolvers graphQLResolvers) {
        GraphQLSchema schema = new GraphQLSchemaGenerator()
            .withBasePackages("com.myproject.microservices")
            .withOperationsFromSingletons(graphQLRootQuery, graphQLRootMutation, graphQLRootSubscription, graphQLResolvers)
            .generate();
        return schema;
    }

    @Bean
    public GraphQLResolvers resolvers(MyOtherMicroserviceClient myOtherMicroserviceClient) {
        return new GraphQLResolvers(myOtherMicroserviceClient);
    }

    @Bean
    public GraphQLRootQuery query(MyOtherMicroserviceClient myOtherMicroserviceClient) {
        return new GraphQLRootQuery(myOtherMicroserviceClient);
    }

    @Bean
    public GraphQLRootMutation mutation(MyOtherMicroserviceClient myOtherMicroserviceClient) {
        return new GraphQLRootMutation(myOtherMicroserviceClient);
    }

    // define your own scalar types (custom data type) if you need to.
    @Bean
    public GraphQLEnumProperty graphQLEnumProperty() {
        return new GraphQLEnumProperty();
    }

    @Bean
    public JsonScalar jsonScalar() {
        return new JsonScalar();
    }

    /* Add your own custom error handler if you need to.
    This is needed if you want custom error messages to be propagated to the client. */
    @Bean
    public GraphQLErrorHandler errorHandler() {
        ....
    }

}
  • GraphQL class for query operations:
public class GraphQLRootQuery {

    @GraphQLQuery(description = "Retrieve list of your attributes by search criteria")
    public List<AttributeDTO> getMyAttributes(@GraphQLId @GraphQLArgument(name = "id", description = "Id of your attribute") String id,
                                              @GraphQLArgument(name = " myQueryParam ", description = "…") String myQueryParam) {
        return …;
    }
}
  • GraphQL class for mutation operations:
public class GraphQLRootMutation {

    @GraphQLMutation(description = "Update attribute")
    public AttributeDTO updateAttribute(@GraphQLId @GraphQLNonNull @GraphQLArgument(name = "id", description = "Id of your attribute") String id,
                                        @GraphQLArgument(name = "payload", description = "Payload for update") UpdateRequest payload) {
        return …
    }
}
  • GraphQL resolvers:
public class GraphQLResolvers {

    @GraphQLQuery(description = "Retrieve additional information")
    public List<AdditionalInfoDTO> getAdditionalInfo(@GraphQLContext AttributesDTO attributesDTO) {
        return …
    }
}

Note: All the Java classes (AdditionalInfoDTO, AttributesDTO, UpdateRequest) are just examples of data transfer objects and requests; they need to be replaced with your custom classes in order for the code to compile and be executable.

  • How to use GraphQL from client side?

Finally, let’s have a look at how GraphQL looks from the frontend side. We are using a tool called GraphiQL (https://www.electronjs.org/apps/graphiql) to test it.

  • GraphQL Endpoint: URL of your service, defaults to /graphql
  • Method: it is always POST
  • HTTP Header: You can include authorization tokens with the request
  • Left pane: the query, written in the GraphQL query language
  • Right pane: the response from the server, always JSON
  • Note: You get what you request; only the attributes you request are returned

Simple examples for query and mutation:
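As a rough sketch, based on the operations defined above, a query and a mutation could look like the following. The operation names assume GraphQL SPQR’s default naming (getter prefixes are stripped), and the selected fields of AttributeDTO and the payload fields of UpdateRequest are assumptions; the nested additionalInfo field assumes the resolver’s @GraphQLContext type matches the query’s return type:

# query - fetch attributes and, via the resolver, their additional information
query {
  myAttributes(id: "1", myQueryParam: "example") {
    id
    additionalInfo {
      description
    }
  }
}

# mutation - update an attribute
mutation {
  updateAttribute(id: "1", payload: { name: "new value" }) {
    id
  }
}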

In this tutorial, you learned how to create your GraphQL API in Java with Spring Boot. But you are not limited to Spring Boot for that. You can use the GraphQL SPQR in pretty much any Java environment.

Scaling Microservices with Spring Cloud Netflix

Reading Time: 10 minutes

If you need to build large distributed systems, then this is the place to be. We are going to talk about some of the components that the solution from Spring and Netflix provides and how easy it is to use them. If you follow this article, in the end, you will create a complete application with service discovery, client-side load balancing, feign clients and much more.

Before we start, let’s explain some of the terms that we are going to use in this article:

  • Eureka – a service discovery service, where every client will register itself
  • Ribbon – a client-side load balancer
  • Feign client – declarative web service client which provides communication between microservices

The picture above presents what we are going to build. We will create two user-oriented microservices, one called Supplier and the other called Order. The user will be able to place an order with a supplier, but the Supplier, in order to perform the order, will call the Order microservice. For the communication between Supplier and Order, we will use a Feign client in combination with service discovery that will be enabled by Eureka. In the end, we are going to scale the Order microservice and we will see how the Ribbon load balancer works when we have more instances.

Let’s start by creating the Eureka service discovery microservice.

The easiest way is to go to the Spring Initializer and create your microservice with the following properties as you can see on the picture below:

The required dependencies for our service discovery service are only the Eureka Server.
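In the generated pom.xml, this corresponds roughly to the following starter (a sketch; the Spring Cloud version itself is managed by the spring-cloud-dependencies BOM that the Initializer adds):

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>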

Once you are done with this, click on generate and your project will be downloaded. Open it via your favourite IDE (I will be using IntelliJ) and there are just two more things that you need to do. In your main class you should add the following annotation @EnableEurekaServer:

package com.north;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class EurekaApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaApplication.class, args);
    }

}

One more thing that we need to change is our application.yml file. By default an application.properties file is created; if this is the case, we will rename it to application.yml and add the following code:

server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

With these, we are setting the server port and the service URL. And there we have our first service discovery. Start the application and go to your browser and enter the following link: http://localhost:8761. Now we should be able to see the eureka homepage:

As you can see, there are no instances registered at the moment. So let’s create our first instance.

We will start by creating the Order microservice. Go to the Spring Initializer and create a project with the following properties:

And we will add the following dependencies:
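As a sketch, the selected starters end up in the pom.xml roughly as follows (Lombok is also used by the code later on; the exact set depends on what you select in the Initializer, and versions are managed by the Spring Boot parent and the Spring Cloud BOM):

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
<dependency>
  <groupId>org.projectlombok</groupId>
  <artifactId>lombok</artifactId>
  <optional>true</optional>
</dependency>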

Let’s start by setting the name and the port of the application. Change your application.properties to application.yml and add the following code:

spring:
  application:
    name: order

server:
  port: 8082

Now the name of the application is set to Order and the application will run on port: 8082. If this port is taken on your machine, feel free to change the port. We are not going to be dependent on this port but you will see that we will be dependent on the application name when we want to communicate with it.

In order to enable this instance to be discovered by Eureka we need to add the following annotation to the main class:

package com.north.order;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@EnableDiscoveryClient
@SpringBootApplication
public class OrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }

}

Now if we start the application and go back to the homepage of eureka by going to our browser and entering the following link: http://localhost:8761 we should be able to see that this instance is registered to Eureka.

Since we confirmed that this instance is registered to Eureka we can now create an endpoint from where an order can be placed. First, let’s create an entity Order:

package com.north.order.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Order {

    private String productName;
    private int quantity;
}

It is a simple entity that contains the name of the product and how many pieces of it we want to order. The REST controller should contain the following logic:

package com.north.order.ctrl;

import com.north.order.domain.Order;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Slf4j
public class OrderController {

    @PostMapping(value = "/order")
    ResponseEntity<Void> placeOrder(@RequestBody Order order) {
        log.info("Placing an order for product: {} with quantity: {}", order.getProductName(), order.getQuantity());
        return ResponseEntity.ok().body(null);
    }
}

You can test this endpoint by using Postman or some similar tool but we want the Supplier microservice to call this endpoint.

Now that we are done with the Order microservice, let’s build the Supplier. Again we will open the Spring Initializer and create a project with the following properties:

And we will have the following dependencies:
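As a sketch, the Supplier needs the same web, Eureka client and Lombok dependencies as the Order microservice, plus OpenFeign for the declarative client:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>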

Generate the project and import it into your IDE. First, let’s change the application.properties file by changing the extension to yml and add the following code:

spring:
  application:
    name: supplier
server:
  servlet:
    context-path: /supplier

With this, we have set the application name and set a context-path. Since here we didn’t change the port, the default 8080 will be taken. In order to register this instance to Eureka and to be able to use Feign Client we need to add the following two annotations in our main class:

package com.north.supplier;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class SupplierApplication {

    public static void main(String[] args) {
        SpringApplication.run(SupplierApplication.class, args);
    }

}

The next thing is to create the same Order entity that we have in the previous microservice.

package com.north.supplier.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Order {

    private String productName;
    private int quantity;
}

For communication with the Order microservice we will create a feign client called OrderClient:

package com.north.supplier.service;

import com.north.supplier.domain.Order;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;

@FeignClient("order")
public interface OrderClient {

    @PostMapping("/order")
    void performOrder(@RequestBody Order order);
}

As a value in the @FeignClient annotation, we need to use the application name of the microservice that we want to communicate with, in our case Order. The method written here is the one that will call the previously exposed endpoint in the Order microservice. Let’s create a service that will use this feign client and execute an order:

package com.north.supplier.service;

import com.north.supplier.domain.Order;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

@Slf4j
@Service
@RequiredArgsConstructor
public class OrderService {

    private final OrderClient orderClient;

    public void placeOrder(Order order) {
        log.info("Requesting order ms to place an order");
        orderClient.performOrder(order);
    }
}

At the end we will expose one endpoint that we can use to test this scenario:

package com.north.supplier.ctrl;

import com.north.supplier.domain.Order;
import com.north.supplier.service.OrderService;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequiredArgsConstructor
@Slf4j
public class OrderController {

    private final OrderService orderService;

    @RequestMapping(value = "/order")
    public ResponseEntity<Void> placeOrder(@RequestBody Order order) {
        orderService.placeOrder(order);
        return ResponseEntity.ok().body(null);
    }
}

Now that we are done, let’s start the application. First, if we check the Eureka homepage, we should see that this instance is registered as well. You can also see this in the console where the Supplier is started:

2020-09-20 20:02:43.907  INFO 7956 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_SUPPLIER/host.docker.internal:supplier: registering service...
2020-09-20 20:02:43.911  INFO 7956 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_SUPPLIER/host.docker.internal:supplier - registration status: 204

To test the complete scenario, make sure that all three applications are started and that the Order and Supplier are registered with Eureka. Using Postman, I will send a POST request to the endpoint on the Supplier microservice, and I should be able to see the order being placed in the Order microservice:

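For illustration, a request body that matches the log output shown below could look like this (any product name and quantity will do):

{
    "productName": "bread",
    "quantity": 300
}
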
Just make sure that in the Headers tab you have added a header with key Content-Type and value application/json. If we execute this request, in the Supplier microservice console we should see the following log:

2020-09-20 20:20:36.674  INFO 19420 --- [nio-8080-exec-4] com.north.supplier.service.OrderService  : Requesting order ms to place an order

In the Order microservice console, we should see:

2020-09-20 20:20:36.678  INFO 17800 --- [io-8082-exec-10] com.north.order.ctrl.OrderController     : Placing an order for product: bread with quantity: 300

At this point, we have managed to create three microservices: two for business purposes and one for service discovery. We used a Feign client for communication between the microservices. At some point, if we decide to grow this application, there are too many orders to execute, and we add some complex logic to the Order microservice, we will reach a point where a single Order instance won't be able to handle all the orders. Let's see what happens if we scale our Order microservice.

First, stop the Order microservice from your IDE. Just make sure that Eureka and the Supplier are still running. Now go to the root directory of the Order project (something like …\Documents\blog\order) and open three command prompt windows in that location. In the first one, run the following command:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8084"

in the second:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8085"

in the third:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8086"

It should be something like this:

Three instances of the application should now be up and running on the ports that we previously specified. If you open the Eureka homepage again, you should see all three instances registered. Now go back to Postman and execute the same POST call to the Supplier as we did previously, and repeat it many times as fast as possible. If you take a look at the command prompt windows that we opened, you should see that each time a different instance of the Order microservice is called. This load balancing is provided by Ribbon, which comes out of the box on the client side (in this case the Supplier microservice), without any additional code. As mentioned before, we do not depend on the port; the Supplier uses the application name to send requests to the Order service.

To summarize, our Supplier microservice became aware of all the instances and now sends each request to a different instance of Order, so the load is balanced.

Hope that you enjoyed this article and had fun with it. The code is available on our N47 Bitbucket:

iOS Unit Tests – My story

Reading Time: 5 minutes

In my last job interview – almost 2 years ago – I received a question about writing unit tests in iOS code. With the confidence of a developer with 8 years of experience in iOS, my answer and my opinion at that moment were that if the developers are highly skilled and write good code, writing unit tests is not necessary. My answer continued with a conclusion: if the company has a testing team, why should we (iOS developers) take their job? Looking back on that answer from today's perspective, my feelings about the topic are mixed: first, I've learned more about the importance of unit tests, and secondly, I've continued to work in an environment where we are not obligated to write tests.

Some basics

We should consider our code as pieces of code – called UNITS. The purpose of a unit test is to validate our code, which allows us to be sure that the code meets the design and fulfils its goal.

In Xcode and iOS, unit tests are written in a separate target. The most important thing is the XCTest framework. The basic rule is that only methods whose names start with the word test will be run as unit tests. Here is an example:

func testFormatForCard() {
    let formatter = DateFormatter()
    formatter.dateFormat = "ddMMyyyy"
    let date = formatter.date(from: "28061978")!
    XCTAssertEqual(date.formatForCard(), "28.06.1978")
}

Once a test method is written with the proper naming, it is marked with a rhombus sign on the left side:

Unit Tests with rhombus sign

We can run the tests to see if our code is good or if something is not working. If the test passes, the empty rhombus sign is replaced with a green checkmark:

Figure 2: Test passed

In case of a failing test we have an assertion and a red sign:

Figure 3: Failed unit test

We can see in Figure 3 that the programmer intentionally entered the wrong expected date, 29.06.1978. That's the way we should think when writing unit tests. First, we write the failing state, and after that, we enter the expected correct output. If the test passes in the second case, then we have created a useful unit test for that piece of code (unit). The general idea is that if we change something in our code and unintentionally make a mistake, the test should fail and warn us to fix the code.

Unit Test in practice

1. Code Coverage

There is a built-in option inside Xcode for checking the code coverage of the tests. The ideal scenario is that 100% of our code is covered by unit tests. But is this really necessary? Will we be better programmers if our full code is covered with tests? Could we spend the time we invest in covering the tiniest pieces of code with tests in a better way?

In my opinion, writing unit tests is a philosophy, and knowing the principles leads us to write quality code. That is, of course, our goal as programmers. Covering the most important parts of the code, especially the parts that change often, like networking managers and parsers, is a better option than trying to be perfect and aiming for 100% coverage.

2. Test Driven Development

The popular programming blog Ray Wenderlich emphasizes the FIRST principles as a good guideline to follow when writing unit tests. The basics of these principles are that the test should be fast, autonomous and repeatable, the output should be either "pass" or "fail", and ideally, the tests should be written before coding – Test Driven Development (TDD).

TDD recommends writing tests before starting to fix a bug in the code. My opinion on this topic is also a mixed approach. Depending on the time you have, if the deadline is not close on the horizon, you can write tests before coding, or before starting to fix bugs in the code, but that's not always possible, and you won't make a mistake if you skip this step sometimes.

Conclusion

I can say that writing unit tests is good for every programmer on the way to becoming great. The quality of code can dramatically improve with unit tests. The philosophy of writing tests, and thinking about how the code should be structured to allow the tests to pass, will make you write cleaner, better-structured code, use more protocols and make reusable classes. As in strategy games, you shouldn't always be a slave to the principles – the most important thing is to fulfil the goals in a quality manner. If you have enough time and the project deadline is not strict, you can aim for bigger code coverage, but something around 75% is good enough.

The practical guide – Part 2: MVP -> MVVM

Reading Time: 5 minutes

The practical guide – Part 1: Refactor android application to follow the MVP design pattern

This is the second part of the practical guide. In the first part, we talked about refactoring android application to MVP. Now, instead of refactoring the base code to MVVM, I would like to refactor the MVP application to MVVM. That way we will learn the differences between MVP and MVVM.

Why MVVM?
First, I should tell you that Google accepted MVVM as the preferred design pattern for building Android applications. So, they have built tools that help us follow this pattern. This is a great reason to learn and use this pattern, but why would Google choose MVVM over MVP (or other design patterns)? Well, they know best, but my opinion is that they chose MVVM because it has one less dependency, due to the difference in communication between the ViewModel and the View. I will explain this in the next section.

Difference between MVP and MVVM
As we know, MVP stands for Model-View-Presenter. On the other hand, MVVM stands for Model-View-ViewModel. So, the Model and the View in MVVM are the same as in the MVP pattern. The only difference is between the Presenter and the ViewModel. More precisely, the difference is in the communication between the View and the ViewModel/Presenter.

As you can see in the diagrams, the only difference is the arrow from the Presenter to the View. What does it mean? It means that in MVP you have an instance of the Presenter in the View, and you have an instance of the View in the Presenter, hence the double arrow in the diagram. In MVVM you only have an instance of the ViewModel in the View. But how do you communicate with the View? How can the View know when the ViewModel has made changes in the Model? For that, we need the Observer pattern. We have observables (subjects) in the ViewModel, and the View subscribes to these observables. So, when an observable changes, the View is automatically informed about that change and can update its views.

For a practical implementation of this Observer pattern, we either have to get help from external libraries like RxJava, or we can use the new architecture components from Android. We will use the latter in this example.

Refactoring

MVP classes
MVVM Classes

First, we can get rid of the MainPresenter and MainView interfaces. We can have only one new class, MainViewModel, that replaces the Presenter. Then we can extend MainViewModel from androidx.lifecycle.ViewModel. This is the first class that we will use from the Android lifecycle components. It helps us deal with the lifecycle of the view model. It survives configuration changes, so it is a nice place for storing UI-related data. Next, we will add the quoteDao and quotesApi fields. We will initialize them with setters instead of the constructor, because the creation of the ViewModel is a little bit different. We don't have the MainView, and we also don't need the bindView() and dropView() methods.

Next, we will create the observable objects. These are the objects that we want to display in the MainActivity, wrapped with androidx.lifecycle.LiveData or some other implementation of LiveData. This class helps us with the implementation of the observer pattern. We will create the objects in the MainViewModel, and we will observe them in the MainActivity. We want to display a list of Quote objects, so we will create a MutableLiveData<List<Quote>> object – MutableLiveData because we want to change the value of the object manually.

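A minimal sketch of how this could look in the MainViewModel – the field name quotesLiveData and the setter-based initialization follow the article, while Quote, QuoteDao and QuotesApi are the existing project classes; the final code in the repository may differ slightly:

import java.util.List;

import androidx.lifecycle.MutableLiveData;
import androidx.lifecycle.ViewModel;

public class MainViewModel extends ViewModel {

    // Observable list of quotes that the MainActivity subscribes to
    final MutableLiveData<List<Quote>> quotesLiveData = new MutableLiveData<>();

    private QuoteDao quoteDao;
    private QuotesApi quotesApi;

    public void setQuoteDao(QuoteDao quoteDao) {
        this.quoteDao = quoteDao;
    }

    public void setQuotesApi(QuotesApi quotesApi) {
        this.quotesApi = quotesApi;
    }
}
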
getAllQuotes() will be very similar to the one in the Presenter, minus the interaction with the MainView. So, instead of:

if (view != null) {
  view.displayQuotes(response.body());
}

we will have:

quotesLiveData.setValue(response.body());

We will also change the implementation of the DatabaseQuotesAsyncTask: instead of passing the MainView, we will create a new interface that returns the quotes from the async task, and we will pass an implementation of this interface to it (see the sketch below). In that implementation, we will update quotesLiveData, same as above.
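
A minimal sketch of such a callback interface – the name QuotesCallback is mine, chosen for illustration; the actual interface in the project may be named differently:

import java.util.List;

// Hypothetical callback passed to DatabaseQuotesAsyncTask instead of the MainView
public interface QuotesCallback {

    void onQuotesLoaded(List<Quote> quotes);
}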

In the MainActivity, we remove the implementation of the MainView. We no longer need to override the onDestroy() method. We can replace the MainPresenter with the MainViewModel. We will create the ViewModel as follows:

viewModel = new ViewModelProvider(this, new ViewModelProvider.NewInstanceFactory()).get(MainViewModel.class);
viewModel.setQuoteDao(quoteDao);
viewModel.setQuotesApi(quotesApi);

Then we will observe the quotesLiveData observable and display the list:

viewModel.quotesLiveData.observe(this, quotes -> {
    QuotesAdapter adapter = new QuotesAdapter(quotes);
    rvQuotes.setAdapter(adapter);
});

And in the end, we call viewModel.getAllQuotes() to fetch the quotes.

And that’s it! Now we have an application that follows the MVVM design pattern. You can check the full code here.

Automate Processes with Camunda

Reading Time: 5 minutes

Overview

Camunda BPM is a lightweight, open-source platform for Business Process Management. It ships with tools for creating workflow and decision models, operating deployed models in production, and allowing users to execute workflow tasks assigned to them. It is developed in Java and released as open-source software under the terms of the Apache License.

Modeling your first process

To show how Camunda works and what it looks like, I will use a simple process. Let us imagine that you want to introduce a review process on your Twitter account and have every tweet go through it.

One way to manage this is to make a web application from scratch for this scenario. But we can also model this process with Camunda Modeler and utilize Camunda for this workflow.

The following image shows one way to model this process as a standard BPMN model using the Camunda Modeler:

Business Process Model and Notation (BPMN) for the above process

In this diagram, the process is started when someone writes a new tweet. After that, we have a human task where someone has to review this tweet and decide its approval status. Then we have two possible options: if the tweet is approved, a service task is called that publishes it on Twitter; if it is rejected, we again call a service task, but this time we notify the user that the tweet was rejected.

I will go through all of these steps in more detail.

Starting the process

Camunda processes can be started programmatically using some of their supported SDKs like Java or by using the Camunda Tasklist GUI that comes out of the box. In this case, I will use the Camunda tasklist to start a new tweet.

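If you prefer to start the process programmatically instead, a minimal sketch using the Camunda Java API could look like the following. It assumes the engine runs embedded in a Spring Boot application; the process key tweetApproval and the variable name content are assumptions made here for illustration:

import java.util.HashMap;
import java.util.Map;

import org.camunda.bpm.engine.RuntimeService;
import org.springframework.stereotype.Service;

@Service
public class TweetProcessStarter {

    private final RuntimeService runtimeService;

    public TweetProcessStarter(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    public void startTweetReview(String tweetContent) {
        Map<String, Object> variables = new HashMap<>();
        variables.put("content", tweetContent);
        // starts a new instance of the BPMN process deployed under the given key
        runtimeService.startProcessInstanceByKey("tweetApproval", variables);
    }
}
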
Working on the human task

Human tasks are tasks that must be completed manually by users. This can be something as simple as filling in a form, or it can be something like actually shipping an item somewhere. They are visible in the Camunda Tasklist GUI, and users can assign a certain task to themselves and complete it.

In our Camunda BPMN model, the next step in the process is a human task. In our process, we want to review the tweet in this task. The following image shows how human tasks look by default in the Camunda Tasklist:

Automating service tasks

A service task is used to invoke some service; this can be some Java code or an asynchronous external worker.

After the tweet is reviewed, we have a 'conditional flow' in Camunda which, depending on whether the tweet was approved or not, decides how the flow should continue. In both cases, our flow continues with a service task.

In our case, we have two separate service tasks. One is called when a tweet is rejected and will send a notification, while the other one is used when the tweet is approved and will publish it on Twitter.

First, let us take a look at the service task for sending a rejection notification:

@Slf4j
@Service("emailAdapter")
public class RejectionNotificationDelegate implements JavaDelegate {

  public void execute(DelegateExecution execution) throws Exception {
    String content = (String) execution.getVariable("content");
    String comments = (String) execution.getVariable("comments");

    log.info("Hi!\n\n"
           + "Unfortunately your tweet has been rejected.\n\n"
           + "Original content: {}\n\n"
           + "Comment: {}\n\n"
           + "Sorry, please try with better content the next time :-)", content, comments);
  }
}

In this code, we obtain process variables like the tweet content and the rejection comments, and we log them to the console. This logic, of course, can be extended to send actual emails. The important thing here is that in order to implement a Camunda service task we only need to implement the JavaDelegate interface and override the execute method.

The next code snippet publishes the tweet:

public class TweetContentDelegate implements JavaDelegate {

    public void execute(DelegateExecution execution) throws Exception {
        String content = (String) execution.getVariable("content");

        // Twitter4J client configured with the app's OAuth credentials
        AccessToken accessToken = new AccessToken("token", "secret");
        Twitter twitter = new TwitterFactory().getInstance();
        twitter.setOAuthConsumer("consumerKey", "consumerSecret");
        twitter.setOAuthAccessToken(accessToken);

        twitter.updateStatus(content);
    }
}

As in the previous code, we also have to implement JavaDelegate and override execute method.

More Camunda examples can be found on their official GitHub repository: https://github.com/camunda/camunda-bpm-examples

Conclusion

In the above diagram, we have seen only one example of a process, but Camunda offers a lot more features for modeling business processes and a lot of out-of-the-box implementations that save time. Also, almost everything is customizable.

If your company has a lot of processes that can be modeled as a BPMN process or processes that require human intervention then Camunda can be the right tool for the job.

In my opinion, it’s definitely worth having a basic understanding of how Camunda works in order to be able to spot a use case for this tool.

RECOMMENDATION SYSTEMS AND COLLABORATIVE ALGORITHM

Reading Time: 8 minutes

WHY DO WE NEED RECOMMENDATION SYSTEMS?

Keeping up with technology, which is growing rapidly nowadays, is a huge challenge for humanity. Software systems are creating a dynamic world, which undoubtedly facilitates human life and enables its improvement.

Many mobile and web systems offer easy usage and search through the internet. They are a necessary segment of education, health, employment, trade and, of course, fun. In such a fast and dynamic life, we need more and more systems that enable fast recommendations when we need to find relevant information, all in order to save us time. Usually, recommendations are generated using collaborative filtering algorithms or content-based methods.

RECOMMENDATION SYSTEMS IN REAL LIFE

In real life, people are overwhelmed with making a lot of decisions, no matter their importance, minor or major. Understanding human choices is a field studied by cognitive psychology.

One of the most important factors influencing the decisions an individual makes is past experience, i.e. the decisions a person made in the past affect those they will make in the future.

Human actions also depend on the experiences gained in interactions with other people. The opinion of others affects our lives without us being aware of it. Relationships with friends affect which neighbourhood we will live in, which place we will visit during our vacation, in which bar we will have a drink, etc.

A real life recommendation system

If one person has positive experiences with another, then that person has gained their trust and authority, and the individual is more likely to follow their advice and choose the decisions that the other person chose when they were in a similar situation.

RECOMMENDATION SYSTEMS IN THE DIGITAL WORLD

Many large companies and complex systems use collaborative filtering; an example is the social network Facebook with its "People you may know" feature. Facebook is a hugely complex system with a massive database, which is why they need to optimize the user data set so that they can provide precise recommendations. They also have collaborative systems for the news feed, as well as for the games, fun pages, groups and event sections.

Another well-known technology and media service provider which uses these collaborative systems is Netflix, with its "Because you watched" phrase. Netflix uses algorithms and machine learning, probably based on genres, the history of watched movies, ratings and the number of ratings of users that have a similar content taste to ours.

Amazon, the multinational technology company, also uses these algorithms for product recommendations for its customers. They use the item-to-item approach for recommendations.

Hint: Click on the picture if you want to know more about Item-to-Item Collaborative Filtering

Last but not least is the most successful business social network, LinkedIn, which uses phrases such as "People in the Information Technology & Services industry you may know", "People you may know from Faculty XXX", "Trending pages in your network", "Online events for you" and a number of others.

I did research on the collaborative filtering algorithm, so I will explain in depth how this algorithm works; please read the analysis in the sections below.

RECOMMENDATION SYSTEM AND COLLABORATIVE FILTERING

Based on the selected data processing algorithm, the systems use different recommendation techniques.

  • Content-based systems – "people who liked this also like that"
  • Collaborative filtering – analyzing a huge amount of information
  • Hybrid recommendation systems

COLLABORATIVE FILTERING – DETAILED ANALYSIS

On a coordinate system, we can show the popularity of products, as well as the number of orders.

The X-axis presents the product curve, which shows the popularity of a variety of products. The most popular products are on the left – at the head of the tail – and the less popular ones are on the right. By popularity, I mean how many times the product has been ordered and viewed by others.

The Y-axis represents the number of orders and product views over a certain time interval.

By analyzing the curve, it is noticeable that frequently ordered products are usually considered the most popular, while those that have not been ordered recently are omitted. That is what the collaborative filtering algorithm offers.

A measure of similarity expresses how similar two data objects are to each other. In a dataset, the measure of similarity is usually described as a distance with dimensions that represent the characteristics of the objects being compared. If the distance is small, the degree of similarity is large, and vice versa. Similarities are quite subjective and highly dependent on the domain of the system.

The similarities are in the range of 0 to 1 [0, 1].

Two main similarities:

  • Similarity = 1 if X = Y
  • Similarity = 0 if X != Y

Collaborative filtering processes the similarity of the data we have with the help of several measures, such as cosine similarity, Euclidean distance, Manhattan distance, etc.

COLLABORATIVE FILTERING – COSINE SIMILARITY

In the beginning, we need to have a database and characteristics of the items.

For the cosine similarity implementation, we need a similarity matrix built from the user database. In this matrix, the rows (vector A) are the users and the columns (vector B) are the products, so the matrix is in the format A×B. Each field of the matrix holds the grade/rating that user Ai has given to product Bj.

Therefore, we can imagine that we have users from 1 to n {1, …, n} and grades/ratings on the products {1, …, 10}. Every row represents a different user, and every column represents one product. Every field of the matrix contains the grade/rating that the user has entered for that product. Now, with this generated matrix, we can use the formula for finding the similarity between the users:

STEP 1:

Similarity(UserN, User1) = cos(θ) = (UserN · User1) / (‖UserN‖ × ‖User1‖), i.e. the dot product of the two users' rating vectors divided by the product of their magnitudes.

 STEP 2:

In step 1, we can see that User N is most similar to User 2. However, some product ratings are missing from the data, so we should compute a priority for the products that User N has not rated. For that we need the values of the users most similar to User N, and those are User 2 and User 4. The following formula should be used:

Priority (product) = User2 (value*similarity) + User4 (value*similarity).

Example:

Priority(product3) = 8 * 0.66 = 5.28

Priority(product4) = 8 * 0.71 = 5.68

Priority(product5) = 7 * 0.71 + 8 * 0.66 = 10.25

STEP 3:

If we want to recommend two products to User N, these will be product5 and product4.
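
To make the similarity computation concrete, here is a minimal Java sketch of the cosine similarity between two rating vectors; the class name and the example ratings are mine, for illustration only, and a rating of 0 stands for "not rated":

public final class CosineSimilarity {

    // cos(theta) = dot(a, b) / (|a| * |b|)
    public static double similarity(double[] a, double[] b) {
        double dot = 0.0;
        double normA = 0.0;
        double normB = 0.0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        if (normA == 0 || normB == 0) {
            return 0.0; // a user with no ratings has no similarity to anyone
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double[] userN = {7, 6, 0, 0, 0};
        double[] user2 = {6, 7, 8, 0, 8};
        // prints the similarity between the two example rating vectors
        System.out.println(similarity(userN, user2));
    }
}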

CONCLUSION:

Similarity measures have their advantages and disadvantages, depending on the data set they are applied to. From the above analysis, we came to the conclusion that if the data contains zero values and is sparsely distributed, we use cosine similarity, a metric that handles such sparse data well. Otherwise, if the data is densely distributed and we care about the diversity rather than the similarity of users/products, we use measures such as Euclidean distance. Such systems are under constant pressure from large volumes of data in databases and will face even more challenges due to the daily increasing volume of data. Therefore, there is a growing need for new technologies that will dramatically improve the scalability of recommendation systems.

QUESTION: WHAT WILL HAPPEN IN THE FUTURE?
ANSWER: ONLY TIME WILL TELL.

4 steps to start building apps with Flutter

Reading Time: 2 minutes

As a front-end developer, I was always curious about mobile apps and wanted to build one. In the last few years, I tested multiple frameworks, from Ionic to React Native, and to be honest, I was never satisfied. Until one day, by accident, I tried FLUTTER and this happened:

Flutter is Google’s UI toolkit for building beautiful, natively compiled applications for mobile, web, and desktop from a single codebase.

From Flutter website

Just reading this sentence blew my mind and I was in. After two months of playing around with the framework, I would say it’s the one that will take over in the next years for sure. Let’s jump in and see how to start with it.

1 – Download the Flutter SDK

Download the stable version and add it to your PATH environment variable. The download link is here.

2 – Run Flutter Doctor

flutter doctor

This command is the most important one, as it checks your environment and displays a report of the status of your Flutter installation. Do not forget to check the output carefully, so you know what is still missing.

3 – Start Coding

import 'package:flutter/material.dart';

void main() async {
  runApp(
    MaterialApp(
      debugShowCheckedModeBanner: false,
      home: Scaffold(
        body: MyApp(),
      ),
    ),
  );
}

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  @override
  Widget build(BuildContext context) {
    return Text(
           'Click me!',
           style: TextStyle(
                  fontSize: 60.0,
                  fontWeight: FontWeight.bold,
                ),
           );
  }
}

4 – Enjoy it

Multitenancy with Spring Boot

Reading Time: 7 minutes

Why should you consider implementing multitenancy in your project?

  • Cost: Multi-tenant architecture allows the sharing of resources, databases, and the application itself, thus the cost to run the system is fixed.
  • Maintenance: Users do not have to pay a considerable amount of fees to keep the software up to date. This reduces the overall cost of maintenance for each tenant.
  • Performance: Easier to assess and optimize speed, utilization, response time across the entire system, and even update the technology stack when needed.

In this blog we will implement multitenancy in our Spring Boot project.

Let’s create a simple Spring Boot project from start.spring.io, with only basic dependencies (Spring Web, Spring Data JPA, Spring Configuration Processor, MySQL Driver).

The good thing for implementing multitenancy is that we do not need additional dependencies.
We will split this example into two parts. In the first one, we will explain the idea/logic behind it and split the approach into 7 configuration steps, and explain every step. In the second part, we will see how it’s implemented in real life and we will test the solution.

1. Let’s start with creating Tenant Storage. We will use it for keeping the tenant value while the request is executing.

public class TenantStorage {

    private static ThreadLocal<String> currentTenant = new ThreadLocal<>();

    public static void setCurrentTenant(String tenantId) {
        currentTenant.set(tenantId);
    }

    public static String getCurrentTenant() {
        return currentTenant.get();
    }

    public static void clear() {
        currentTenant.remove();
    }
}

2. Next, we will create the Tenant Interceptor. For every request, we will set the value at the beginning and clear it at the end. As you can see in the Tenant Interceptor, for this demo I decided to fetch the value of the tenant from the request header (X-Tenant); this is up to you. Just keep an eye on data security when using this in production. Maybe you want to fetch it from a cookie or some other header name.

@Component
public class TenantInterceptor implements WebRequestInterceptor {

    private static final String TENANT_HEADER = "X-Tenant";

    @Override
    public void preHandle(WebRequest request) {
        TenantStorage.setCurrentTenant(request.getHeader(TENANT_HEADER));
    }

    @Override
    public void postHandle(WebRequest webRequest, ModelMap modelMap) {
        TenantStorage.clear();
    }

    @Override
    public void afterCompletion(WebRequest webRequest, Exception e) {

    }
}

3. The next thing is to add the tenant interceptor to the interceptor registry. For that purpose, I will create a WebConfiguration that implements WebMvcConfigurer.

@Configuration
public class WebConfiguration implements WebMvcConfigurer {

    private final TenantInterceptor tenantInterceptor;

    public WebConfiguration(TenantInterceptor tenantInterceptor) {
        this.tenantInterceptor = tenantInterceptor;
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addWebRequestInterceptor(tenantInterceptor);
    }
}

4. Now, let’s update the application.yml file with some properties for the database connections.

tenants:
  datasources:
    n47schema1:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema1?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
    n47schema2:
      jdbcUrl: jdbc:mysql://localhost:3306/n47schema2?verifyServerCertificate=false&useSSL=false&requireSSL=false
      driverClassName: com.mysql.cj.jdbc.Driver
      username: root
      password:
spring:
  jpa:
    database-platform: org.hibernate.dialect.MySQL5Dialect

5. Next, we will wrap the tenants’ values into a map with key = tenant name and value = data source, in DataSourceProperties.

@ConfigurationProperties(prefix = "tenants")
@Component
public class DataSourceProperties {

    private Map<Object, Object> dataSources = new LinkedHashMap<>();

    public Map<Object, Object> getDataSources() {
        return dataSources;
    }

    public void setDataSources(Map<String, Map<String, String>> datasources) {
        datasources.forEach((key, value) -> this.dataSources.put(key, convert(value)));
    }

    public DataSource convert(Map<String, String> source) {
        return DataSourceBuilder.create()
                .url(source.get("jdbcUrl"))
                .driverClassName(source.get("driverClassName"))
                .username(source.get("username"))
                .password(source.get("password"))
                .build();
    }
}

6. Afterwards, we should create the DataSource bean, and for that purpose, I will create DataSourceConfig.

@Configuration
public class DataSourceConfig {

    private final DataSourceProperties dataSourceProperties;

    public DataSourceConfig(DataSourceProperties dataSourceProperties) {
        this.dataSourceProperties = dataSourceProperties;
    }

    @Bean
    public DataSource dataSource() {
        TenantRoutingDataSource customDataSource = new TenantRoutingDataSource();
        customDataSource.setTargetDataSources(dataSourceProperties.getDataSources());
        return customDataSource;
    }
}

7. At last, we will extend the AbstractRoutingDataSource and implement our lookup key.

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return TenantStorage.getCurrentTenant();
    }

}

And we are done with the first part.

Let’s see how it looks in the real world:

For this example, we will use two schemas from the same database instance, we will create a user and get all users. Also, I will show you how you can implement Flyway and test the solution.

First, let’s configure our databases. In my local instance of MySQL server, we will create two schemas: n47schema1 and n47schema2.

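If you prefer plain SQL over a GUI tool, two simple statements are enough to create the schemas:

CREATE SCHEMA n47schema1;
CREATE SCHEMA n47schema2;
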
The next step is to execute this CREATE statement on both schemas:

CREATE TABLE `users` (
	`id` INT(11) NOT NULL AUTO_INCREMENT,
	`name` VARCHAR(64) NOT NULL DEFAULT '' COLLATE 'utf8_general_ci',
	PRIMARY KEY (`id`)
);

Then, we will create two APIs, one for creating a user, and the other one to fetch all users.

@RestController
@RequestMapping("/users")
public class UserController {

    private final UserRepository userRepository;

    public UserController(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @PostMapping
    public UserDomain addUser(@RequestBody UserRequestBody userRequestBody) {
        UserDomain userDomain = new UserDomain(userRequestBody.getName());
        return userRepository.save(userDomain);
    }

    @GetMapping
    public List<UserDomain> getAll() {
        return userRepository.findAll();
    }
}

Also we need to create UserDomain, UserRepository and UserRequestBody.

@Entity
@Table(name = "users")
public class UserDomain {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    public UserDomain() {
    }

    public UserDomain(String name) {
        this.name = name;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

public interface UserRepository extends JpaRepository<UserDomain, Long> {
}

public class UserRequestBody {
    private String name;

    public UserRequestBody() {
    }

    public UserRequestBody(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

And we are done with coding.

We can run our application and start making a request.

First, let’s create some users with a POST request to http://localhost:8080/users. The most important thing not to forget is that we need to provide the X-Tenant header with the value n47schema1 or n47schema2.

We will create two users for tenant n47schema1: Antonie and John. Example:

After that, we will change the X-Tenant header value to n47schema2 and create two users: William and Joseph.

You will notice that the ids returned in the response start from 1 again, the same as for the first tenant. Now let’s fetch the users via the API.

When you make a GET request to http://localhost:8080/users with header X-Tenant having value n47schema1 you will fetch the users from the n47schema1 schema, and when you make a request with a header value n47schema2 you will fetch from the n47schema2 schema.

You can also check the data in the database to be sure that it is stored correctly.

You can always set a fallback to be used if the X-Tenant header is not provided or contains a wrong value, as sketched below.
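
A minimal sketch of such a fallback, assuming we pick one of the configured tenants (here n47schema1) as the default, could extend the dataSource() bean from step 6 like this:

@Bean
public DataSource dataSource() {
    TenantRoutingDataSource customDataSource = new TenantRoutingDataSource();
    customDataSource.setTargetDataSources(dataSourceProperties.getDataSources());
    // fallback: used when the X-Tenant header is missing or does not match any configured tenant
    customDataSource.setDefaultTargetDataSource(
            dataSourceProperties.getDataSources().get("n47schema1"));
    return customDataSource;
}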

As the last thing, I will show you how you can use Flyway with multitenancy. First, you need to add Flyway as a dependency and disable it in application.yml:

spring:
  flyway:
    enabled: false

Then add a @PostConstruct method in the DataSourceConfig configuration:

@PostConstruct
public void migrate() {
    for (Object dataSource : dataSourceProperties.getDataSources().values()) {
        DataSource source = (DataSource) dataSource;
        Flyway flyway = Flyway.configure().dataSource(source).load();
        flyway.migrate();
    }
}

And we are done. I hope this blog helps you to understand what multitenancy is and how it can be implemented in a Spring Boot project.

Download the source code

The project is freely available on our Bitbucket repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.

https://bitbucket.org/n47/spring-boot-multitenancy/src/master/

Server-side rendering with Nuxt.js

Reading Time: 5 minutes

What is server-side rendering

By default, modern JS frameworks produce and manipulate the DOM in the browser as output. However, it is possible to render the same codebase into HTML strings on the server, send them to the browser, and finally compile the static markup into a fully working application on the client side. A server-rendered application can also be considered isomorphic or universal, in the sense that the majority of your code runs on both the server and the client.

Trade-offs when using SSR

  • Development constraints
  • Build setup, deployment requirements
  • Server-side load

Advantages when using SSR

  • Better SEO
  • Faster time to content

Nuxt.js

Nuxt is a framework based on Vue.js, Node.js, Webpack and Babel. It is free and open-source and we can use it to create various applications from static landing pages to complex enterprise-ready web solutions.

Supports 3 modes of working:

  1. Server-Rendered (Universal SSR)
  2. Single Page Applications (SPA)
  3. Static-Generated (Pre Rendering)

Some of the features Nuxt provides:

  • Write Vue Files (*.vue)
  • Automatic Code Splitting
  • Server-Side Rendering
  • Powerful Routing System with Asynchronous Data
  • Static File Serving
  • ES2015+ Transpilation
  • Bundling and minifying of your JS & CSS
  • Managing the <head> element
  • Hot module replacement in Development
  • Pre-processor: Sass, Less, Stylus, etc.
  • HTTP/2 push headers ready
  • Extending with Modular architecture

Project structure

Here we have the project structure that Nuxt provides out of the box. The pages, middleware, plugins and layouts directories are framework-specific and we’ll briefly explain their purpose.

The Nuxt community has added great README.md files into each directory with links to the documentation.

The layouts directory defines all of the layouts that our application can use. This is a great place to add shared global components that are used across the application, like the header and footer for example. By default, the template used for .vue files in the pages directory is default.vue. It is used to inject all of the page’s components, text, assets, and data.

Pages is the only required directory. All Vue components in this directory are automatically added to the vue-router based on their filenames and the directory structure. We can have dynamic routes by adding an underscore (_) to a directory or a .vue file.

/pages
---/categories
------/_category_id
------/products
---------/_product_id

// The structure above generates a router config that will provide 
// a route for /categories/1/products/3 for example

Middleware is a function that can be executed before rendering a page or layout. There is a variety of reasons we may want to do so. Route guarding is a popular use where we could check the Vuex store for a valid login or validate some params (instead of using the validate method on the component itself). Another use for middleware can be to generate dynamic breadcrumbs based on the route and params. These functions can be asynchronous, meaning nothing will be shown to the user until the middleware is resolved.

The plugins directory allows us to register Vue plugins before the application is created. This allows the plugin to be shared throughout our app on the Vue instance and be accessible in any component. Most major plugins have a Nuxt version that can be easily registered to the Vue instance by following their docs. However, there will be circumstances when we will have to develop a plugin or need to adapt to an existing plugin.

Nuxt’s supercharged components

Nuxt’s page components have extra methods attached to them that we can use to provide additional functionality. The main ones we would use in a project will be the asyncData and fetch methods. Both are very similar in concept, they run asynchronously before the component is generated, and they can be used to populate the data of a component and the store. They also enable the page to be fully rendered on the server before sending it to the client even when we have to wait for some database or API call.

Summary

There’s a lot to cover with Nuxt and server-side rendering. This post aims to provide a general overview of the framework, why server-side rendering is important, and how we can scaffold our next web application using Nuxt.

Before using SSR though, we should ask whether we actually need it. Generally, it depends on how important time to content is for the application. If we are building an internal dashboard where an extra few hundred milliseconds on initial load isn’t an issue, SSR wouldn’t be needed. However, in applications where time to content is critical, SSR can help us achieve the best possible performance.

Below is a starter Nuxt application which showcases examples of the features mentioned in this post. If you have any comments or feedback, let us know.
nuxt-ssr-example

Thanks for reading!

Lifecycle hooks in Vue.js

Reading Time: 3 minutes

In this post we’ll be doing an overview of lifecycle hooks in Vue.js.

Each Vue application first creates a Vue instance, with the Vue function:

var vm = new Vue({
  router,
  render: function (createElement) {
    return createElement(App)
  }
}).$mount('#app')

This Vue instance during its initialization goes through several phases and it exposes some properties and methods in each phase. This is an example of Vue using the template method behavioural design pattern.

The methods which run by default in this process of creating and updating the DOM are called lifecycle hooks and using them properly allows easy access to a behind the scenes overview of how the library works.

Below we can see a simple diagram showcasing all methods in one instance.

We can see that we have 8 methods that we can split into 4 phases in an application’s lifecycle.

  • Creation or initialization hooks (beforeCreate, created)
  • Mounting or DOM insertion hooks (beforeMount, mounted)
  • Updating hooks (beforeUpdate, updated)
  • Destroying hooks (beforeDestroy, destroyed)

Creation hooks

beforeCreate is the first hook that gets called in a Vue component; it has no access to the component’s reactive data and events, as they haven’t been initialized yet. It’s good to use this hook for non-reactive data.
The created hook has access to the component’s events, reactive data and state. The DOM and $el properties are still not available. This hook is usually used to perform API calls and store the response.

Mounting hooks

The next lifecycle hook to be called is beforeMount, and it happens right before the component is mounted on the DOM. This is our last opportunity to perform operations before the DOM gets rendered.
mounted is called right after, and now the DOM is available. This is a good place to add third-party integrations like charts, Google Maps etc., which interact directly with the DOM.

mounted() {
  console.log('This is the DOM instance', this.$el)
  this.$nextTick(function() {
    console.log('Child components have now been loaded')
  })
}

Updating hooks

beforeUpdate and updated are the two hooks that are called each time a reactive property is changed. The data in beforeUpdate holds the previous values of the property, while after updated runs, the instance has finished re-rendering.

beforeUpdate () {
  console.log('before update called')
},
updated () {
  console.log('update finished')
}

Destroying hooks

When beforeDestroy is called, we can still mutate the state and the instance is still functional. Here we can do some last-moment data mutation before the instance is destroyed.
When destroyed is called, the Vue instance does not exist anymore. All directives and event listeners have been removed, and child components have been destroyed.

beforeDestroy () {
  console.log('before destroy called')
},
destroyed () {
  console.log('destroyed')
}

I hope this provides a good overview of the lifecycle hooks in a Vue application. For more information, refer to the official docs here.

JavaScript loop and object iteration (optimization)

Reading Time: 3 minutes

Nowadays, loops are part of our life as developers and we use them at least once a day. Because of that, one day I decided to investigate and go deeper into JavaScript loops, where I found very interesting things, and if I did not share them with you, I would feel guilty.

Before you continue reading, I would strongly recommend you read my previous blog, which I believe you will find very useful for creating a full picture of the loops. So, go on and read it.

Object properties iteration

Let’s first analyze object iteration and suppose that we have an object, something like:

var obj = {
    property1: 1,
    property2: 2,
    …
}

First, what comes to mind is to iterate over them with the standard for...in loop:

for (var prop in obj) {
    console.log(prop);
}

In this case, we are going to iterate through the object properties, but is it the correct way? The answer is yes and no; it depends on your needs. Another way to iterate is to exclude all inherited properties, which in some cases we do not need. We can exclude them by using the JavaScript method hasOwnProperty(). You can find an explanation of the in operator and hasOwnProperty() in my previous blog.

Since we have learned some object optimization/improvements/usage, the question now is: can we really do an optimization?
The answer is yes. Before I show you how we can do that, let’s spend some time on the loops.

Loop iteration

To continue the previous example, I will keep explaining the loops with object iteration (of course, you can test it with a list of integers, like in the speed test examples, or anything you want).

To accomplish that, we will need the JavaScript method Object.keys().

  • The Object.keys() method returns an array of a given object’s own property names

Let’s write the standard for loop:

var keys = Object.keys(obj);
for (var i = 0, length = keys.length; i < length; i++) {
    console.log(obj[keys[i]]);
}

Now we have a solution where we iterate only over the object’s own properties and, by caching keys.length in a variable, we avoid re-evaluating it on every iteration, reducing the total cost of the length lookups from O(N) to O(1) – a big time saving if we iterate over big arrays.
So, during development, if you are not limited (like applying some best practices, …), you can add another optimization by using a while loop.

var i = keys.length;
while (i--) {
    console.log(obj[keys[i]]);
}

In this case, we do not declare a separate length variable, we do not execute extra comparison operations, and the while loop stops automatically when the counter reaches 0 (ending at -1 after the final decrement).

Speed testing:

Since newer browsers like Chrome are very fast and optimized, I would suggest executing the loops in IE, where you will be able to see a real speed difference between them.

var arr = new Array(10000);

Example speed test 1:

console.time();
for (var i = 0; i < arr.length; i++) {
    // operations...
    var sum = i * i;
}
console.timeEnd();

Execution 1: 4.4ms
Execution 2: 5.5ms
Execution 3: 5ms
Execution 4: 4.6ms
Execution 5: 5ms

Example speed test 2:

var i = arr.length;
console.time();
while (i--) {
    // operations...
    var sum = i * i;
}
console.timeEnd();

Execution 1: 3.7ms
Execution 2: 4.8ms
Execution 3: 3.9ms
Execution 4: 3.8ms
Execution 5: 4.2ms

Thank you for reading and I would appreciate any feedback.

What is White Labeling in Software Development? How to implement it for Android?

Reading Time: 3 minutes

What is White Label?

A white-labelled product is basically a software product or a service that is developed by one company, which other companies can buy and rebrand to their needs, so the users of the product don’t know about its real creator, only the brand. To explain it better, assume that there is a white-label company that makes an app and sells it to companies A and B; they then rename the app to A and B respectively and change its content to match their products, so the application becomes the service of companies A and B.

Why to use White Label product?

White-label products come in handy in certain situations. It is especially better to go with a white-label solution when you want to enter a market with minimum cost and time. For example, if you want to start a startup project and don’t want to invest much in the beginning, a white-label solution is a good choice. Some advantages and disadvantages of using a white-label solution are listed below:

Advantages:

  • Less time to market
  • Cost-efficient (time, money)
  • No developer hiring needed

Disadvantages:

  • Fewer customization capabilities
  • No control over the quality of the software
  • Dependency on external sources (developer)

Important points to consider

There are some important aspects which should be considered by a company which makes a white-label solution and by a client of this product.

  • Technical documentation – complete technical documentation of the software depending on the agreement between both sides.
  • Scope of customization – both sides must know which parts of the product can be customized, what kind of new functionalities can be added, what kind of limitations might occur, etc.
  • Maintenance & Support – for how long and what kind of maintenance and support should a client expect.

A simple explanation of how to implement a White Label for Android application

In Android, it is simple to implement white labelling thanks to “productFlavors” and “flavorDimensions”. By means of these two mechanisms, it is possible to have different resources for different applications, such as different themes, colours, logos, application names and so on. Additionally, using the Gradle build file we can create configurations to enable or disable certain functionalities of the app based on the needs of the customer. In the end, when we build the application, only the resources which belong to the selected flavour and dimension will be included in the APK file.

Conclusion

To conclude this blog post, I would emphasize the two reasons which I think are the most important for using a white-label product in your projects. The first one is that it requires less financial investment (saving money). The second is less time to market (a quick launch), since you don’t need to do everything from scratch. These reasons sound good, but it is better to always do your own analysis and comparisons before you decide what is best for your scenario.

Thanks for reading!

Below I have listed some links which I think are worth checking if you are going to implement a white label for an Android project:

Making Swift networking code more readable

Reading Time: 3 minutes

With Swift 5 a new type got introduced:

@frozen public enum Result<Success, Failure> where Failure : Error {

    /// A success, storing a `Success` value.
    case success(Success)

    /// A failure, storing a `Failure` value.
    case failure(Failure)
}

The Result type is an enum consisting of 2 cases: the success and the failure case. Each of them can hold a generic value. The failure case, however, is limited to types conforming to the Error protocol.

Not a big deal? Sure, but it’s the little things which add up and make a difference in the long run.

Lately, I was migrating from SwiftyJSON to native JSON parsing. Each network call was implemented in the following way:

func fetchSomething(completion: @escaping (SomeReturnValue?, SomeError?) -> Void) {
    NetworkingTool.request { (response) in
        guard response.isValid
            else { completion(nil, .somethingBad); return }
        do {
            let returnValue = try SomeReturnValue(response: response)
            completion(returnValue, nil)
        } catch {
            completion(nil, .scarry)
        }
    }
}

Looks okayish. Good. So let’s use it:

fetchSomething { (result, error) in
    guard error == nil
        else { handleError(error: error); return }
    doSomething(result: result)
}

Ok. But how to implement the doSomething? With an optional? This can’t be right, right? Force unwrap the result? And what about the error case? Force unwrap it? Oh and wait, what about the case where neither a result nor an error is returned? Is this even a thing? Ok, let me look up the implementation…

So a tiny bit of ambiguity paired with different people working on different parts of the network stack for different features can cause a real heterogeneous system. (Which does not imply that this is a bad system!)

If the company you’re working for is in favour of code ownership, you may not encounter this one. But so far no company I worked for was about code ownership. It’s usually your code is my code is our code, comrade. Period. There are simply too many trucks outside.

As long as code ownership isn’t a thing, and you do not want to spend time on endless syntax and architectural discussions with little benefit, or enforce a (new) best practice on all of your colleagues (again), it comes in really handy to have a built-in Result type which is reasonably unambiguous.

And since we all know that we’re spending more time reading code than writing, this saves us all valuable time.

Tool Showcase: Node-RED

Reading Time: 5 minutes

Node-RED is a flow-oriented tool to wire together hardware devices, APIs and online services. It is mainly targeting the IoT market but can be used for a lot of other things as well. Because of its easy-to-use browser-based UI and drag-and-drop programming system, it’s really beginner-friendly and quick to learn.

Even though it was developed by IBM in 2013, it’s not really known to most of the IT community. At least none of my colleagues knew it. That makes it worth writing this tool showcase. Enjoy reading!

Getting started

Instead of just reading along, I encourage everybody to just start Node-RED and try it yourself. If Docker is installed, this is just a matter of seconds. Use the following command to start a Node-RED instance locally:

docker run -it -p 1880:1880 --name mynodered nodered/node-red

That’s it. You are ready to go! Open your browser and go to localhost:1880 to access the Node-RED UI.

One of the simplest flows is the following one:

Drag an “http in”, “template” and “http out” node into the flow and connect them. After clicking the deploy button you can access localhost:1880/<whateverPathYouConfiguredInYourHttpInNode> to see whatever you’ve configured in your template node. Congratulations, you have just created your first Node-RED flow!

Of course, rendering static content on an endpoint is not the most exciting thing to do, but between the HTTP in and out nodes, you’re free to do whatever you want. Nodes to make HTTP calls to different URLs, reading and writing files and much more are provided by Node-RED by default.

The power of the community

Node-RED uses Node.js for its nodes (yes, the terminology “node” is overused in the Node-RED context 🙂 ). This has a big advantage: new nodes can easily be added from the node package manager (npm). You can find these nodes by searching for “node-red-contrib” in the npm repository. An even simpler option is to install these nodes using the “Manage Palette” option in the UI. There you can install new nodes with a single click.

There are nodes for almost everything. Need support for Slack? Yep, there’s a node for that. Tired of pressing light switches in your house to turn your Philips Hue lights off and on? Yep, there’s a node for that as well. Whatever you can imagine, there’s a node for it.

A slightly more advanced flow

To test some Node-RED features, I tried to come up with a slightly more complicated example. I own some Philips Hue lamps and a LaMetric Time. After searching some nodes for these devices, I was really surprised that somebody already built some for these two devices (I was especially surprised about the support for the not so well-known LaMetric Time).

My use case was pretty straightforward. Turn on the lights when it gets dark and display a message on my LaMetric near my TV. At midnight, turn off the lights and display a different message. Additionally, I wanted some web endpoints that I could call to trigger both actions manually.

After only a few minutes, I had the following flow:

And it works! I found a node that sends an event as soon as the sun goes down at my particular location. Very cool. All the other nodes (integration for Philips Hue and LaMetric) can also easily be added with the “Manage Palette” option in the GUI. All in all, the implementation of my example use-case was pretty straightforward and required no programming know-how.

Expandability

Even though there are almost 3000 community-contributed nodes available to use, you might have some hardware or API that does not (yet) have any pre-made nodes. In that case, you can implement your own nodes pretty easily. The only thing required is a text editor and some Node.js know-how.

The Node-RED documentation provides a good guide on how to create custom nodes: https://nodered.org/docs/creating-nodes/first-node

It is highly recommended to push your custom nodes to the npm repository to be used by the community.

Additional Resources

There are a whole lot more features that are not described in this blog post.

  • Flows are just .json files and can easily be imported, exported or added to a git repository
  • Flows can be converted to subflows and used like nodes in other flows
  • Multiple flows can run in parallel and trigger each other
  • There are special nodes for error handling or low-level TCP communication
  • There are keyboard shortcuts for everything
  • … and much more!

Feel free to have a look yourself:

Thanks for reading!

AEM 6.5 and SSL

Reading Time: 4 minutes

Today, almost all websites are delivered to the client via HTTPS, but HTTP is still frequently used for backend communication. To increase security Adobe has simplified the SSL configuration with AEM 6.3 and provides it as a feature called SSL By Default. This is intended to ensure that the internal connection to AEM instances is exclusively performed in an encrypted and authenticated way. This blog post describes a simple way to secure a local AEM instance using self-signed certificates for testing purposes.

Prerequisites

The following steps were evaluated on a Windows platform using cmd. It should be possible to perform them in the same way on another environment, but there may be small differences in the syntax of some commands. First, ensure that the tools below are installed:

Create self-signed certificate

AEM requires a private key/certificate pair in DER format for SSL setup. It can be created using the OpenSSL command-line tool.

1. Generate a private key of length 4096 bits.

openssl genrsa -out localhostprivate.key 4096

2. Create the certificate request with common name localhost from the private key.

openssl req -sha256 -new -key localhostprivate.key -out localhost.csr -subj "/CN=localhost"

3. Generate the SSL certificate and sign it with the private key (this is why it is called self-signed certificate). The certificate will be valid for one year.

openssl x509 -req -days 365 -in localhost.csr -signkey localhostprivate.key -out localhost.crt

4. Convert the private key to DER format.

openssl pkcs8 -topk8 -inform PEM -outform DER -in localhostprivate.key -out localhostprivate.der -nocrypt

Install SSL configuration via curl

Execute the following command from the directory where the private key/certificate was created.

curl -u admin:admin -F "keystorePassword=password" -F "keystorePasswordConfirm=password" -F "truststorePassword=password" -F "truststorePasswordConfirm=password" -F "privatekeyFile=@localhostprivate.der" -F "certificateFile=@localhost.crt" -F "httpsHostname=localhost" -F "httpsPort=8443" http://localhost:4502/libs/granite/security/post/sslSetup.html

Here AEM runs on port 4502 with credentials admin:admin. HTTPS is set up on port 8443. The keystore/truststore password has been set to password (Use a stronger secret in production).

If the command was successful you will get the following response:

Now you should be able to access the AEM instance on port 8443 over HTTPS: https://localhost:8443/

Note that the browser will most likely show a “Not Secure” warning because self-signed certificates are not trusted by default.

Disable HTTP

Currently, the AEM instance is still accessible via http://localhost:4502. In AEM 6.5 there is no option to deactivate HTTP by configuration. Be careful: the OSGi configuration Apache Felix Jetty Based Http Service contains a checkbox to disable HTTP. It has been deprecated but is still active. You may not be able to access your instance anymore if you change something in this dialog. The new SSL configuration options can be found under Adobe Granite SSL Connector Factory.

Adobe proposes a different approach to deactivate HTTP. A sling mapping can be added to redirect incoming HTTP requests to the HTTPS port. Open CRX DE https://localhost:8443/crx/de/index.jsp and create a new node below /etc/map/http:

  • Name: localhost.4502
  • Type: sling:Mapping

Then add a new property to this node:

  • Name: sling:redirect
  • Type: String
  • Value: https://localhost:8443

Click on Save All. Try to open http://localhost:4502 in the browser. You should now be redirected to https://localhost:8443/.

Conclusion

With the SSL By Default feature, Adobe provides a fast and easy way to set up SSL on an AEM instance. Self-signed certificates can be used for testing purposes on local/test instances. However, many organizations use their own PKI which manages the certificates. In this case, you should avoid self-signed certificates and use certificates signed by the certification authority of your organization.

Typescript/ES7 Decorators to make Vuex modules a breeze

Reading Time: 5 minutes

Overview

Who does not like to use Vuex in a VueJS App? I think no one 🙂

Today I would like to show you a very useful tool written in TypeScript that can boost your productivity with Vuex: vuex-module-decorators. Lately, it’s getting more popular. Below you can see the weekly downloads of the package, which are rising constantly:

Weekly downloads on npmjs.com

But what does it exactly do and which benefit does it provide?

  • TypeScript classes with strict type safety
    Create modules where nothing can go wrong. The type check at compile time ensures that you cannot mutate data that is not part of the module and cannot access unavailable fields.
  • Decorators for declarative code
    Annotate your functions with @Action or @Mutation to automatically turn them into Vuex module methods.
  • Autocomplete for actions and mutations
    The shape of modules is fully typed, so you can access action and mutation functions with type-safety and get autocomplete help.

In short, with this library, you can write a Vuex module in this format:

import { VuexModule, Module, Mutation, Action } from 'vuex-module-decorators'
import { get } from 'axios'

@Module
export default class Posts extends VuexModule {
    posts: PostEntity[] = [] // initialise empty for now

    get totalComments (): number {
        return this.posts.filter((post) => {
            // Take those posts that have comments
            return post.comments && post.comments.length
        }).reduce((sum, post) => {
            // Sum all the lengths of comments arrays
            return sum + post.comments.length
        }, 0)
    }

    @Mutation
    updatePosts(posts: PostEntity[]) {
        this.posts = posts
    }

    @Action({commit: 'updatePosts'})
    async fetchPosts() {
        return get('https://jsonplaceholder.typicode.com/posts')
    }
}

As you can see, thanks to this package, we are able to write a Vuex module by writing a class which provides Mutations, Actions and Getters. Everything in one single file. How cool is that?

Benefits of type-safety

Instead of using the usual way to dispatch and commit…

store.commit('updatePosts', posts)
await store.dispatch('fetchPosts')

…with the getModule accessor you can now use a more type-safe mechanism. The plain string-based approach above offers no type safety for the payload and no help with automatic completion in IDEs.

import { getModule } from 'vuex-module-decorators'
import Posts from '~/store/posts.js'

const postsModule = getModule(Posts)

// access posts
const posts = postsModule.posts

// use getters
const commentCount = postsModule.totalComments

// commit mutation
postsModule.updatePosts(newPostsArray)

// dispatch action
await postsModule.fetchPosts()

Core concepts

State

All properties of the class are converted into state props. For example, the following code

import { Module, VuexModule } from 'vuex-module-decorators'

@Module
export default class Vehicle extends VuexModule {
  wheels = 2
}

is equivalent to this:

export default {
  state: {
    wheels: 2
  }
}

Mutations

All functions decorated with @Mutation are converted into Vuex mutations. For example, the following code

import { Module, VuexModule, Mutation } from 'vuex-module-decorators'

@Module
export default class Vehicle extends VuexModule {
  wheels = 2

  @Mutation
  puncture(n: number) {
    this.wheels = this.wheels - n
  }
}

is equivalent to this:

export default {
  state: {
    wheels: 2
  },
  mutations: {
    puncture: (state, payload) => {
      state.wheels = state.wheels - payload
    }
  }
}

Actions

All functions that are decorated with @Action are converted into Vuex actions.

For example this code

import { Module, VuexModule, Mutation, Action } from 'vuex-module-decorators'
import { get } from 'request'

@Module
export default class Vehicle extends VuexModule {
  wheels = 2

  @Mutation
  addWheel(n: number) {
    this.wheels = this.wheels + n
  }

  @Action
  async fetchNewWheels(wheelStore: string) {
    const wheels = await get(wheelStore)
    this.context.commit('addWheel', wheels)
  }
}

is equivalent to this:

const request = require('request')

export default {
  state: {
    wheels: 2
  },
  mutations: {
    addWheel: (state, payload) => {
      state.wheels = state.wheels + payload
    }
  },
  actions: {
    fetchNewWheels: async (context, payload) => {
      const wheels = await request.get(payload)
      context.commit('addWheel', wheels)
    }
  }
}

Advanced concepts

Namespaced Modules

If you intend to use your module in a namespaced way, then you need to specify so in the @Module decorator:

@Module({ namespaced: true, name: 'mm' })
class MyModule extends VuexModule {
  wheels = 2

  @Mutation
  incrWheels(extra: number) {
    this.wheels += extra
  }

  get axles() {
    return this.wheels / 2
  }
}

const store = new Vuex.Store({
  modules: {
    mm: MyModule
  }
})

Registering global actions inside namespaced modules

In order to register actions of namespaced modules globally, you can add a parameter root: true to @Action and @MutationAction decorated methods:

@Module({ namespaced: true, name: 'mm' })
class MyModule extends VuexModule {
  wheels = 2

  @Mutation
  setWheels(wheels: number) {
    this.wheels = wheels
  }
  
  @Action({ root: true, commit: 'setWheels' })
  clear() {
    return 0
  }

  get axles() {
    return this.wheels / 2
  }
}

const store = new Vuex.Store({
  modules: {
    mm: MyModule
  }
})

This way the @Action clear of MyModule will be called by dispatching clear although being in the namespaced module mm. The same thing works for @MutationAction by just passing { root: true } to the decorator-options.

Dynamic Modules

Modules can be registered dynamically simply by passing a few properties into the @Module decorator, but an important part of the process is that we first create the store, and then pass the store to the module:

import store from '@/store'
import {Module, VuexModule} from 'vuex-module-decorators'

@Module({dynamic: true, store, name: 'mm'})
export default class MyModule extends VuexModule {
  /*
  Your module definition as usual
  */
}

Installation

The installation of the package is quite simple and does not require many steps:

Download the package

npm install vuex-module-decorators
# or
yarn add vuex-module-decorators

Vue configuration

// vue.config.js
module.exports = {
  // ... your other options
  transpileDependencies: [
    'vuex-module-decorators'
  ]
}

For more details, you can check the plugin’s official documentation.

Conclusion

I personally think this package can ramp up your productivity because it embraces the “modularisation” pattern, making your app more scalable. Another big advantage is the fact that you have type-checking thanks to TypeScript. If you have a VueJS TypeScript application, I strongly recommend this package.

Spring Cloud OpenFeign

Reading Time: 3 minutes

Choosing the microservice architecture and Spring Boot means that you’ll need to pick the cleanest possible way for your services to communicate between themselves. Feign Client is one of the best solutions for this issue. It is a declarative Java web service client initially developed by Netflix. It’s an abstraction over REST-based calls, allowing your microservices to communicate cleanly without the need to know the REST details happening underneath. The main idea behind Feign Client is to create an interface with method definitions representing your service call. Even if you need some customization on requests or responses, you can do it in a declarative way. In this article, we will learn about integrating Feign in a Spring Boot application with an example of REST-based HTTP calls, in which two microservices communicate with each other to transfer some data. But first, let’s get familiar with Feign.

What is Feign?

Feign is a declarative web service client that makes writing web service clients easier. We use the various annotations provided by the Spring framework, such as @RequestMapping and @PathVariable, in a plain Java interface. To use Feign, create an interface and annotate it. It has pluggable annotation support, including Feign annotations and JAX-RS annotations. Feign also supports pluggable encoders and decoders. Spring Cloud adds support for Spring MVC annotations and for using the same HttpMessageConverters used by default in Spring Web. Spring Cloud integrates Ribbon and Eureka to provide a load-balanced HTTP client when using Feign.

Example Management API simulator

In the following code sections, you can see a Feign client example. The interface is annotated with @FeignClient, which declares that a REST client should be created from this interface.

Setup pom.xml

The following dependency will be added:

    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-openfeign</artifactId>
    </dependency>

Enable Feign Client

Now enable Feign by using the @EnableFeignClients annotation in the main Spring Boot application class, which is also annotated with the @SpringBootApplication annotation.

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;
@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class FeignClientApplication {
  public static void main(String[] args) {
    SpringApplication.run(FeignClientApplication.class, args);
  }
}

Use a Circuit Breaker with a Feign Client

If you want to use the Spring Cloud OpenFeign support for Hystrix circuit breakers, you must set the feign.hystrix.enabled property to true. In the Feign version of the Agency app, this property is configured in application.yml:

feign:
  hystrix:
    enabled: true

The Feign client itself is an interface annotated with @FeignClient:

@FeignClient(name = "Validations", url = "${validations.host}")
public interface ValidationsClient {

    @GetMapping(value = "/validate-phone")
    InfoMessageResponse<PhoneNumber> validatePhoneNumber(@RequestParam("phoneNumber") String phoneNumber);

}

In the application.yml file, we will store the URL of the microservice with which we need to communicate:

validations:
  host: "http://localhost:9080/validations"

We will need to add a config for Feign as follows:

package com.demo;

import feign.Contract;
import org.springframework.cloud.openfeign.support.SpringMvcContract;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FeignClientConfiguration {
    @Bean
    public Contract feignContract() {
        return new SpringMvcContract();
    }
}
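
To actually call the validations microservice, the generated client can be injected like any other Spring bean. The following service is only a sketch (PhoneValidationService is a made-up name; InfoMessageResponse and PhoneNumber are the types already used by the client interface above):

package com.demo;

import lombok.RequiredArgsConstructor;
import org.springframework.stereotype.Service;

@Service
@RequiredArgsConstructor
public class PhoneValidationService {

    // Feign generates the implementation of ValidationsClient at runtime
    private final ValidationsClient validationsClient;

    public InfoMessageResponse<PhoneNumber> validate(String phoneNumber) {
        // behind this call, Feign performs the HTTP GET to /validate-phone
        return validationsClient.validatePhoneNumber(phoneNumber);
    }
}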

Congrats! You just managed to run your Feign client application, with which you can easily locate and consume the REST service.

Summary

In this article, we have launched an example of microservices that communicate with one another. This article should be treated as an introduction to the subject of Feign Client and a discussion of integration with other important components of the microservice architecture.

Did we forget the Immutable classes in Java?

Reading Time: 5 minutes

Well, I did. In everyday work we can hear discussions about microservices, containers, beans, entities etc., but it is very hard and rare to hear any talk about immutable or mutable classes. Why is that?

Let’s first refresh our memories of what an Immutable class is.

An immutable class means that once an object is initialized, we cannot change its content.

To be more clear, let’s see how we can write Immutable classes in Java.

Basic rules to write some immutable classes are:
  1. Don’t provide “setter” methods — methods that modify fields or objects referred to by fields.
  2. Make all fields final and private.
  3. Don’t allow subclasses to override methods. The simplest way to do this is to declare the class as final. A more sophisticated approach is to make the constructor private and construct instances in factory methods.
  4. If the instance fields include references to mutable objects, don’t allow those objects to be changed:
    • Don’t provide methods that modify the mutable objects.
    • Don’t share references to the mutable objects. Never store references to external, mutable objects passed to the constructor; if necessary, create copies, and store references to the copies. Similarly, create copies of your internal mutable objects when necessary to avoid returning the originals in your methods.

How to make Immutable classes

After defining these basic rules there are several different solutions to write Immutable classes.

Basic one without using external libraries:
final public class ImmutableBasicExample {

   final private Long accountNumber;
   final private String accountCurrency;

   private void check(String accountCurrency, Long accountNumber) {
       // Some constructor rules
       // throw new IllegalArgumentException()
   }

   public ImmutableBasicExample(
           String accountCurrency, Long accountNumber) {
       check(accountCurrency, accountNumber);
       this.accountCurrency = accountCurrency;
       this.accountNumber = accountNumber;
   }

   public String getAccountCurrency() {
       return accountCurrency;
   }

   public Long getAccountNumber() {
       return accountNumber;
   }
}

Use a final class, make the fields final and set them in the constructor. Don’t write setters for the fields, just getters.
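
Rule 4 (defensive copies of mutable fields) deserves its own example. The following class is my own sketch, not part of the original rules; it shows how a mutable List field can be copied on the way in and exposed read-only on the way out:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class ImmutableAccount {

    private final Long accountNumber;
    private final List<String> transactionIds;

    public ImmutableAccount(Long accountNumber, List<String> transactionIds) {
        this.accountNumber = accountNumber;
        // defensive copy: never store the caller's reference directly
        this.transactionIds = new ArrayList<>(transactionIds);
    }

    public Long getAccountNumber() {
        return accountNumber;
    }

    public List<String> getTransactionIds() {
        // return an unmodifiable view so callers cannot change our internal state
        return Collections.unmodifiableList(transactionIds);
    }
}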

We can use Lombok:
import lombok.Value;

@Value
public class LombokImmutable {
   Long accountNumber;
   String accountCurrency;
}

Much shorter: with just one annotation we have done our job. @Value is the immutable variant of @Data: all fields are made private and final, the class is made final, a getter is generated for each field, and in addition some basic methods like toString(), equals() and hashCode() are generated.

Or the newest way, using a record (a Java 14 preview feature):
public record RecordRequstBody(
       Long accountNumber,
       String accountCurrency) {
}

This is by now the most sophisticated way to make an immutable class: small, readable and useful code, without using external libraries. The compiler auto-generates the following: a private final field and a public read accessor for each component, a public constructor whose signature matches the state description, and implementations of the toString(), hashCode() and equals() methods.

I’m sure that there are other ways to write Immutable classes, but these are enough to understand how much code and effort we need to write one.

Use of Immutable classes

We already use immutable classes every day in our work. All primitive wrapper classes are immutable. Here is one everyday practical example:

Integer integerExample = 6;
System.out.println(integerExample);
changeInteger(integerExample);
System.out.println(integerExample);

where changeInteger is:

private void changeInteger(Integer integerExample) {
   integerExample = integerExample + 1;
}

The output will be:

6
6

It is because the line

integerExample = integerExample + 1;

creates a new Integer object and points the local integerExample reference to it, while the integerExample in the main code is still referencing the old Integer object, which is not changed.

This also applies to all other primitive wrappers that are immutable: Byte, Short, Integer, Long, Float, Double, Character, Boolean. Additionally BigDecimal, BigInteger and LocalDate are immutable.

Another interesting immutable class in Java is String. If we write the following code:

String string1 = "first string";
string1 += "concatenation";

Concatenation of two Strings with “+” will produce a new String. It is fine if we do this with two or a few Strings, but if we build one String with many concatenations, then we will initialize many more objects. (That is why it is better to use StringBuilder here.)

We can create and use immutable classes for the request body (DTO) in our REST controllers. It is a good idea because the state will be validated once, when the request is created, and will be valid the whole time after that. We will not have any need to change the state of the request. If we are changing the request, then we are doing something wrong.

Another scenario where we can use them is when we need to have some business classes (where we will process some data) where the state should be unchanged. We can find a few examples for this, using them for Currency, Account information etc…

How often should we use them?

Well, we have seen that there are some benefits to using them:

  • They are thread-safe
  • Safer because their state cannot be changed
  • They are simple to construct, test and use
  • It is good to understand and use them if you work with functional and concurrent programming

But there are disadvantages too:

At first, you can’t change fields in them. To do that you should create a copy of them with changed values. It means that you will have more objects initialized in the VM and for that, you should add some code to copy objects.
To be sure that you will make some classes immutable you should be sure that the state of the object would not be changed. These days developers don’t spend time analyzing where that class will be or will not be changed.

There is one general concept from Effective Java, which describes the use of immutability:

Classes should be immutable unless there’s a very good reason to make them mutable… If a class cannot be made immutable, limit its mutability as much as possible.

Hibernate techniques for mapping sets, lists and enumerations

Reading Time: 4 minutes

As we all know, Hibernate is an Object Relational Mapping (ORM) framework for the Java programming language. This blog post will teach you how to use advanced Hibernate techniques for mapping sets, lists and enums in simple and easy steps.

Mapping sets

A set is a collection of objects in which duplicate values are not allowed and the order of the objects is not important. Hibernate uses the following annotations for mapping sets:

  • @ElementCollection – Declares an element collection mapping. The data for the collection is stored in a separate table.
  • @CollectionTable – Specifies the name of a table that will hold the collection. Also provides the join column to refer to the primary table.
  • @Column – The name of the column to map in the collection table.

@ElementCollection is used to define the following relationships: a one-to-many relationship to an @Embeddable object, and a one-to-many relationship to a basic type, such as the Java primitives and their wrappers: int, Integer, Double, Date, String, etc.

Now you’re probably asking yourself: Hmmm… How does this compare to @OneToMany?

@ElementCollection is similar to @OneToMany except that the target object is not an @Entity. These annotations give you an easy way to define a collection of simple/basic objects. But you can’t query, persist or merge target objects independently of their parent object. ElementCollection does not support a cascade option, so target objects are ALWAYS persisted, merged and removed with their parent object.
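
As a rough sketch (the Person entity and the table/column names are made up for illustration), a set mapping with these annotations could look like this:

import javax.persistence.*;
import java.util.HashSet;
import java.util.Set;

@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    // values are stored in a separate PERSON_NICKNAME table,
    // joined back to the owning PERSON row via PERSON_ID
    @ElementCollection
    @CollectionTable(name = "PERSON_NICKNAME", joinColumns = @JoinColumn(name = "PERSON_ID"))
    @Column(name = "NICKNAME")
    private Set<String> nicknames = new HashSet<>();
}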

Mapping lists

Lists are used when we need to keep track of the order/position of the elements and duplicates are allowed. The additional annotation that we are going to use here is @OrderColumn, which specifies the name of the column used to track the element order/position (the name defaults to <property>_ORDER).
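
A field-level sketch (inside an entity like the Person example above, with made-up names) could look like this:

// the ADDRESS_ORDER column keeps track of each element's position in the list
@ElementCollection
@CollectionTable(name = "PERSON_ADDRESS", joinColumns = @JoinColumn(name = "PERSON_ID"))
@OrderColumn(name = "ADDRESS_ORDER")
@Column(name = "ADDRESS")
private List<String> addresses = new ArrayList<>();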

Mapping maps

When you want to access data via a key rather than an integer index, you should probably decide to use maps. The additional annotation used for maps is @MapKeyColumn, which helps us define the name of the key column for a map (the name defaults to <property>_KEY).
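
Again as a sketch with made-up names, a map of phone numbers keyed by phone type could be mapped like this:

// the key is stored in the PHONE_TYPE column, the value in PHONE_NUMBER
@ElementCollection
@CollectionTable(name = "PERSON_PHONE", joinColumns = @JoinColumn(name = "PERSON_ID"))
@MapKeyColumn(name = "PHONE_TYPE")
@Column(name = "PHONE_NUMBER")
private Map<String, String> phoneNumbers = new HashMap<>();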

Mapping sorted sets

As we mentioned before, the set is an unsorted collection with no duplicates. But what if we don’t need duplicates and the order of retrieval is also important? In that case, we can use @OrderBy and specify the ordering of the elements when a collection is retrieved.

Syntax: @OrderBy(“[field name or property name] [ASC|DESC]”)
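
For illustration (assuming a Book entity with title and author fields, which is not part of this post), @OrderBy on an association could be used like this:

// the set is retrieved ordered by the title property of the associated Book entities
@OneToMany(mappedBy = "author")
@OrderBy("title ASC")
private Set<Book> books = new LinkedHashSet<>();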

Mapping sorted maps

@OrderBy can also be used on maps. In that case, the default ordering is by the key column, ascending.
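
Following that description, a sketch with the same made-up names as above could look like this (an empty @OrderBy falls back to the key column, ascending):

// retrieved ordered by the key column (PHONE_TYPE), ascending
@ElementCollection
@CollectionTable(name = "PERSON_PHONE", joinColumns = @JoinColumn(name = "PERSON_ID"))
@MapKeyColumn(name = "PHONE_TYPE")
@Column(name = "PHONE_NUMBER")
@OrderBy
private Map<String, String> phoneNumbers = new LinkedHashMap<>();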

Mapping Enums

By default, Hibernate maps an enum to a number. This mapping is very efficient, but there is a high risk that adding or removing a value from your enum will change the ordinal of the remaining values. Because of that, you should map the enum value to a String with the @Enumerated annotation. This annotation is used to reference an Enum type and save the field in the database as a String.
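
For example, with an assumed AccountStatus enum:

public enum AccountStatus { ACTIVE, BLOCKED, CLOSED }

@Entity
public class Account {

    @Id
    @GeneratedValue
    private Long id;

    // persisted as the literal string "ACTIVE", "BLOCKED" or "CLOSED"
    // instead of a fragile ordinal number
    @Enumerated(EnumType.STRING)
    private AccountStatus status;
}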

Conclusion

In this article, we have taken a look at simple techniques for mapping sets, lists and enumerations when using Hibernate. I hope you enjoyed reading it and have found it helpful.

GraphQL! 😍😍

Reading Time: 3 minutes

As excited as I am to talk about GraphQL, I don’t have many words to say.

Last year, we had a Front end conference in Konstanz, which was so amazing. Thanks to GraphQL Day Bodensee.

The conference promised to focus on adopting GraphQL and to get the most out of it in production. To learn from a lineup of thought leaders and connect with other forward-thinking local developers and technical leaders.

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

https://graphql.org/

Just reading this paragraph made me more interested than ever.

How was the conference?

The conference delivered everything that it promised. Small talks, but very informative and helpful. People came from almost every country in Europe and America, and they were super friendly. So now let’s dive in, so that you all have an idea about this awesome query language.

Just to be clear, GraphQL can be used with most of the frameworks out there.

Steps to use GraphQL?

1. Create a GraphQL service.

First, we define types and fields on those types

type Query {
  me: User
}

type User {
  id: ID
  name: String
}

Then, we create functions for each field on each type

function Query_me(request) {
  return request.auth.user;
}

function User_name(user) {
  return user.getName();
}

When the GraphQL service is running, e.g. on a web service, we can send GraphQL queries to validate and execute. First, the received query is checked to ensure it only refers to the types and fields defined; then the provided functions are run to produce a result.

2. Send the Query

{
  me {
    name
  }
}

3. Get the JSON results

{
  "me": {
    "name": "Luke Skywalker"
  }
}

Pros +++

  • Good fit for complex systems and microservices
  • Fetching data with a single API call
  • No over- and under-fetching problems
  • Validation and type checking out-of-the-box

Cons – – –

  • Performance issues with complex queries
  • Overkill for small applications
  • Web caching complexity
  • Takes a while to understand

GraphQL vs REST

Putting it together

GraphQL is an awesome query language, but as you can see from the pros & cons, it doesn’t make sense to use it in every situation. For small projects it’s overkill, but in bigger ones it can make the backend-frontend interplay much, much easier.

Mobile App Marketing

Reading Time: 7 minutes

Almost everyone uses mobile devices nowadays. The market for mobile applications is really huge. The site https://42matters.com/ has statistics that show that currently over 6.1 million applications exist, from more than 1.9 million publishers. If you have a cool idea and you finally create a mobile application, the first goal is that your application gets downloaded among the bunch of apps in the stores. Your app may be exceptional, well coded, without bugs, well designed, but still, the download numbers can be poor. Where is the problem? Why?

The answer is simple: You need a great marketing strategy to succeed.

This blog post will give you some theoretical knowledge about mobile app marketing, an overview of the main stages of the marketing funnel, the goals and metrics you should measure, and a brief overview of some popular tools.

Mobile App Markets

If we enter a mobile app world we will probably go to one of the most popular ones:

  • Android Play Store (> 2.8 million live applications)
  • Apple App Store (> 2.2 million live applications)
  • Samsung Galaxy Apps
  • LG SmartWorld
  • Huawei App Store
  • Amazon App Store

These numbers show us that the competition is strong and probably our app must compete with at least 50 or 100 other apps on the same topic as ours.

Mobile App Marketing (MAM)

There are many definitions on the internet about mobile app marketing…

“Mobile app marketing is the process of creating marketing campaigns to reach your users at every stage of the marketing funnel. Learn key mobile app marketing tactics for every stage of user engagement with your app.”

The difference between mobile app marketing and mobile marketing:

MAM creates complete campaigns for a mobile application; it follows the complete cycle – from downloading the app, through the first engagements and becoming a regular user, to using many in-app purchases.

On the other hand, mobile marketing is all marketing that happens on mobile devices, including advertisements on websites, the banners presented on responsive web pages, email marketing, etc.

As a general conclusion, we can say MAM is a subset of mobile marketing.

Mobile App Marketing Funnel

In the terminology of marketing, we often meet the word marketing funnel. Here is one definition of what it is:

“The marketing funnel is a visualization for understanding the process of turning leads into customers, as understood from a marketing (and sales) perspective.”

An example of a Mobile App Marketing funnel

A typical marketing funnel consists of the following stages:

  • Awareness
  • Consideration
  • Conversion
  • Customer Relationship
  • Retention

Let’s look a little deeper at the three most important stages: awareness, conversion and retention.

Awareness

Even before we make the first release of the application, we have to create awareness for our app. It’s not enough to start marketing after the release. The goal is to attract targeted users to your app before the production phase.

Here are a few different ways to attract new app users and raise awareness about your app:

  • Using social media
  • Launching paid advertising
  • App store
  • Websites for reviewing apps
  • Using QR codes

Conversion

Conversion is maybe the keyword if we see it from a business perspective. Conversion is every step that leads to financial benefit for our application: every paid download of our application, every user that creates a profile and completes the onboarding process, every in-app purchase inside the app. We could also count regular use of our app as a conversion.

The strategies we could use for a better conversion include the following:

  • Providing an easy registration process, with clear and not confusing steps for finishing the onboarding process
  • App free of bugs
  • Clear UI with great UX
  • Using push notifications and inside app messages to keep users engaged and informed about new things

Retention

Have you ever had this happen to you? Downloaded an app and deleted it in the next few minutes? For me personally yes – multiple times. I can testify that the reasons were difficulties in the registration process, mandatory registration, or apps where only the landing page is visible and you have to pay to see everything else inside.

The user is our KING and we have to make him happy. To make the users happy we must find out what their needs are: understand their desires, the way they want to use the app, what time of the day they are most active, and whether they are happy if we send messages in the morning or later at night.

It’s 5-25 times more expensive to replace a customer/user than to keep an existing one.

Some of the strategies to improve retention are push notifications, in-app messaging, taking surveys, or starting loyalty programs.

Goals and Metrics

There are different tactics and strategies in mobile app marketing, but two things in general are present everywhere: setting up goals and measuring the key metrics to know whether your strategy is working properly. The goals are necessary to know what we want to achieve with our marketing strategy. For example, our goal could be increasing the number of downloads by improving the quality of the keywords used in the store.

There are many different app marketing metrics but I will describe only the most important ones:

Churn rate

This is the percentage of users that stop using the app. Statistics show that almost 70 per cent of users stop using an app immediately after the installation, and around 98 per cent after 3 months.

Session Length

This metric measures the time that the user spends in the application, from opening to closing it. Different applications should measure and interpret this metric in different ways. For example, if we have something like a health and fitness application where the user is tracking steps or the time spent in the gym, 1-2 minutes could be enough time. On the other side, if we measure the session length of a newsfeed application or a complex game, 2 minutes are not enough.

Retention Rate

This is the percentage of users that come back to the application during a period of time and use the app regularly. Knowing the statistics of the people who use the app is a big benefit because we can target those profiles of users in our marketing channels.

Lifetime Value (LTV)

This metric is related to application revenue. It represents the financial value of our app and how much each customer is worth to us.

Tools

On the market for MAM tools, the offer is huge. There are many online tools. The span of services they offer ranges from App Store Optimization (ASO) and sending push notifications, through keyword improvement strategies, to A/B testing tools.

Some of the tools offer great user interfaces and a lot of useful statistics that can help us build our mobile app marketing strategy.

Here is a list of the most popular ones:

  • Firebase
  • Optimizely
  • App Annie
  • AppRadar
  • Google AdMob
  • Leanplum
  • Airship
  • DeepLink

Personally, I have experience with a few of these, like Firebase and Airship, which offer a ton of services. Firebase is a very powerful tool, but I will go into more detail and compare some of these tools in my next post.

Because of the big competition, we should be aware that a top-quality application is not enough. A good mobile app marketing strategy is a must!

What is next?

This post will be continued with a comparison of some of the most popular MAM tools. We will see some great examples of how these tools help big companies improve their marketing. I will make a comparative analysis of the prices and the plans they offer. At the end, I will also give some suggestions on how we can use these tools to improve our mobile strategy.

Crypto Trading Bot

Reading Time: 6 minutes

Every year, N47 as a tech family celebrates a tech festival called Hackdays at the end of the year. In December 2019 we were in Budapest, Hungary for Hackdays. There were five different teams and each team created some cool projects in a short time. I was also part of a team and we implemented a simple trading bot for crypto. In this blog post, I want to share my experiences.

Trading Platform

To create a trading bot, you first need to find the right trading platform. We selected Binance DEX, which offers good volume for selected trading pairs, a testnet for test purposes, and is a Decentralized EXchange (DEX). Thanks to the DEX, we can connect the wallet directly and use the credit directly from it.

Binance Chain is a new blockchain and peer-to-peer system developed by Binance and the community. Binance DEX is a secure, native marketplace that is based on the Binance Chain and enables the exchange of digital assets that are issued and listed in the DEX. Reconciliation takes place within the blockchain nodes, and all transactions are recorded in the chain, creating a complete, verifiable activity book. BNB is the native token in the Binance Chain, so users are charged the BNB for sending transactions.

Trading fees are subject to a complex logic, which can lead to individual transactions not being calculated exactly at the rates mentioned here, but instead somewhere in between. This is due to the block-based matching engine used in the DEX. The difference between Binance Chain and Ethereum is that there is no concept of gas. As a result, the fees for the remaining transactions are fixed. There are no fees for a new order.

The testnet is a test environment for the Binance Chain network, run by the Binance Chain development community, which is open to developers. The validators on the testnet are from the development team. There is also a web wallet that can directly interact with the DEX testnet. It also provides 200 testnet BNB so that you can interact with the Binance DEX testnet.

For developers, Binance DEX has also provided the REST API for testnet and main net. It also provides different Binance Chain SDKs for different languages like GoLang, Javascript, Java etc. We used Java SDK for the Trading Bot with Spring Boot.

Trading Strategy

To implement a trading bot, you need to know which pair to trade and when to buy and sell crypto for that pair. We selected a very simple trading strategy for our project. First, we selected the NEXO/BNB trading pair on Binance DEX, because this pair has the highest trading volume. Perhaps you can choose a different trading pair based on your analysis.

For the purchase and sale, we made decisions based on candlestick counts. We considered 15-minute candlesticks. If three in a row are red (price drops), buy Nexo, and if three in a row are green (price increases), sell Nexo. Once you’ve bought or sold, you have to wait for the next three consecutive red or green candlesticks. The purchase and sales volume is always 20 Nexo. You can also choose this based on your analysis.

Let’s Code IT

We have implemented the frontend (Vue.Js) and the backend (Spring Boot) for the Trading Bot, but here I will only go into the backend application as it contains the main logic. As already mentioned, the backend application was created with Spring Boot and Binance Chain Java SDK.

We used ThreadPoolTaskScheduler for the application. This scheduler runs every 2 seconds and checks Candlestick. This scheduler has to be activated once via the frontend app and is then triggered automatically every 2 seconds.

threadPoolTaskScheduler.scheduleAtFixedRate(task, 2000);
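
For completeness, a minimal sketch of how such a scheduler could be created and started (the pool size and the wiring are my assumptions, not taken from our project code):

import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

// create the Spring scheduler and trigger the execute() method every 2 seconds
ThreadPoolTaskScheduler threadPoolTaskScheduler = new ThreadPoolTaskScheduler();
threadPoolTaskScheduler.setPoolSize(1);
threadPoolTaskScheduler.initialize();
Runnable task = this::execute;
threadPoolTaskScheduler.scheduleAtFixedRate(task, 2000);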

Based on the scheduler, the execute() method is triggered every two seconds. This method first collects the previous 15-minute candlesticks and counts the green and red ones. Based on this, it will buy or sell.

private double quantity = 20.0;
private String symbol = "NEXO-A84_BNB";
public void execute() {
        List<Candlestick> candleSticks = binanceDexApiRestClient.getCandleStickBars(this.symbol, CandlestickInterval.FIFTEEN_MINUTES);
        List<Candlestick> lastThreeElements = candleSticks.subList(candleSticks.size() - 4, candleSticks.size() - 1);
        // check if last three candlesticks are all red (close - open is negative)
        boolean allRed = lastThreeElements.stream()
                .filter(cs -> Double.parseDouble(cs.getClose()) - Double.parseDouble(cs.getOpen()) < 0.0d).count() == 3;
        // check if last three candlesticks are all green (close - open is positive)
        boolean allGreen = lastThreeElements.stream()
                .filter(cs -> Double.parseDouble(cs.getOpen()) - Double.parseDouble(cs.getClose()) < 0.0d).count() == 3;
        Wallet wallet = new Wallet(privateKey, binanceDexEnvironment);

        // open and closed orders required to check last order creation time
        OrderList closedOrders = binanceDexApiRestClient.getClosedOrders(wallet.getAddress());
        OrderList openOrders = binanceDexApiRestClient.getOpenOrders(wallet.getAddress());

        // order book required for buying and selling price
        OrderBook orderBook = binanceDexApiRestClient.getOrderBook(symbol, 5);
        Account account = binanceDexApiRestClient.getAccount(wallet.getAddress());

        if ((openOrders.getOrder().isEmpty() || openOrders.getOrder().get(0).getOrderCreateTime().plusMinutes(45).isBeforeNow()) && (closedOrders.getOrder().isEmpty() || closedOrders.getOrder().get(0).getOrderCreateTime().plusMinutes(45).isBeforeNow())) {
            if (allRed) {
                if (Double.parseDouble(account.getBalances().stream().filter(b -> b.getSymbol().equals(symbol.split("_")[1])).findFirst().get().getFree()) >= (quantity * Double.parseDouble(orderBook.getBids().get(0).getPrice()))) {
                    order(wallet, symbol, OrderSide.BUY, orderBook.getBids().get(0).getPrice());
                    System.out.println("Buy Order Placed  Quantity:" + quantity + "  Symbol:" + symbol + "  Price:" + orderBook.getAsks().get(0).getPrice());
                    
                } else {
                    System.out.println("do not have enough Token: " + symbol + " in wallet for buy");
                }

            } else if (allGreen) {
                if (Double.parseDouble(account.getBalances().stream().filter(b -> b.getSymbol().equals(symbol.split("_")[0])).findFirst().get().getFree()) >= quantity) {
                    order(wallet, symbol, OrderSide.SELL, orderBook.getAsks().get(0).getPrice());
                    System.out.println("Sell Order Placed  Quantity:" + quantity + "  Symbol:" + symbol + "  Price:" + orderBook.getAsks().get(0).getPrice());
                } else {
                    System.out.println("do not have enough Token:" + symbol + " in wallet for sell");
                }

            } else System.out.println("do nothing");
        } else System.out.println("do nothing");

    }

    private void order(Wallet wallet, String symbol, OrderSide orderSide, String price) {
        NewOrder no = new NewOrder();
        no.setTimeInForce(TimeInForce.GTE);
        no.setOrderType(OrderType.LIMIT);
        no.setSide(orderSide);
        no.setPrice(price);
        no.setQuantity(String.valueOf(quantity));
        no.setSymbol(symbol);

        TransactionOption options = TransactionOption.DEFAULT_INSTANCE;

        try {
            List<TransactionMetadata> resp = binanceDexApiRestClient.newOrder(no, wallet, options, true);
            log.info("TransactionMetadata", resp);
        } catch (Exception e) {
            log.error("Error occurred while order", e);
        }
    }

At first glance, the strategy looks really simple, I agree. After this initial setup, however, it’s easy to add more complex logic with some AI.

Result

Since 12th December 2019, this bot has been running on Google Cloud and did 1130 transactions (buy/sell) until 14th April 2020. Initially, I started this bot with 2.6 BNB. On 7th February 2020, the balance was 2.1 BNB in the wallet, but while writing this blog on 14th April 2020, it looks like the bot has recovered the loss and the balance is 2.59 BNB. Hopefully, in the future it will make some profit💰🙂.

Let me know your suggestions about this bot in a comment, and I would also be happy to answer your questions if you have any on this topic. Thanks for your time.

JHipster with Google App Engine and Cloud MySQL

Reading Time: 5 minutes

How does it sound to set up a complete Spring application, with front-end and database? With all the models, repositories and controllers? Even with unit and integration tests, with mocked data? All within a few hours? Your solution is JHipster!

JHipster

JHipster or “Java Hipster” is a handy application generator, a development platform, to develop and deploy web applications. JHipster has become popular in a short time, and it has been featured in many conferences all around the globe – Montreal, Omaha, Taipei, Richmond, Frankfurt, Paris, London. It supports:

  • Spring Boot (Back-end)
  • Angular/React/Vue (Front-end)
  • Spring microservices

JHipster is used for generating complete applications. It will create for you a high-quality Spring Boot and Angular/React/Vue application with most things pre-configured, using Java as back-end technology and an extensive set of Spring technologies: Spring Security, Spring Boot, Spring MVC (providing a framework for web sockets, REST and MVC), Spring Data, etc., and an Angular/React/Vue front-end with a suite of pre-configured development tools like Yeoman, Maven, Gradle, Grunt, Gulp.js and Bower.

JHipster gives you a head start in creating Spring Boot application with a set of pre-defined screens for user management, monitoring, and logging. The generated Spring Boot application is specifically tailored to make working with Angular/React/Vue a smoother experience. At the top of all that, JHipster also gives you the tools to update, manage and package the resulting application.

By now you may think it sounds too good to be true… But that is not everything JHipster offers. If you are a web developer, you probably have a lot of questions by now. 🙂
One important question we will answer in this blog post: is it supported by today’s cloud solutions, is it compatible with all of them? The answer is yes, it is compatible with the popular cloud solutions from Google, Amazon, Microsoft, and Heroku. Let’s see what it takes to make a complete integration with Google’s cloud platform, the App Engine.

Google App Engine

Google App Engine is a cloud solution provided by Google, a platform for developing and hosting web applications in data centres managed by Google; Platform as a Service (PaaS). Applications are sandboxed and run across multiple servers. The App Engine supports Java or Python, uses the Google query language and stores data in Google BigTable.

It is free to use up to a certain amount of resource usage. After exceeding the limited usage rates for storage, CPU resources, requests, number of API calls or concurrent requests, the user can pay for more of these resources.

It is fully compatible with JHipster-generated projects. All it takes to host your application is to follow the official how-to guide from the Google App Engine documentation, as for a normal Spring Boot application. To make things easier, Google offers a database which works closely with the Google App Engine, Cloud SQL.

Cloud SQL

Cloud SQL is a fully-managed database service offered by Google for their cloud solutions that makes it easy to configure, manage, maintain, and operate your relational databases on Google Cloud Platform.

It offers three database options to integrate with:

  • MySQL
  • PostgreSQL
  • SQL Server

Let’s get into details of integrating with Cloud SQL for MySQL:

  1. The first step is to create a Cloud SQL instance on the Google Cloud Platform, which requires a few things like instance ID, password, etc. to be set, and gives you the option to choose the MySQL database version.
  2. The following step is to create the database in the newly created instance. It is possible to have more databases in one instance.
  3. Now, for our application to be able to communicate with Cloud SQL without any permission blockers, we need to register the application in Cloud SQL and manually configure the service account roles.
  4. The final step is connecting your application to the created Cloud SQL instance. It is done through JDBC. All the required properties (instance connection name, credentials, etc.) can be found in the overview of the Cloud SQL instance; a rough sketch of such a configuration follows below.
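
As an illustration only (the project ID, region, instance name, database name and credentials below are placeholders, and the Cloud SQL MySQL Socket Factory dependency is assumed to be on the classpath), such a JDBC connection could be configured like this:

package com.example.config;

import javax.sql.DataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CloudSqlConfig {

    @Bean
    public DataSource dataSource() {
        // the socketFactory parameter lets the driver reach the Cloud SQL instance
        // identified by cloudSqlInstance (project:region:instance) without an IP allowlist
        return DataSourceBuilder.create()
                .driverClassName("com.mysql.cj.jdbc.Driver")
                .url("jdbc:mysql:///my_database"
                        + "?cloudSqlInstance=my-project:europe-west1:my-instance"
                        + "&socketFactory=com.google.cloud.sql.mysql.SocketFactory")
                .username("db_user")
                .password("db_password")
                .build();
    }
}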

So the conclusion: don’t be afraid to invest some time in new technologies, be curious, you never know where they may lead you. Thank you for reading. 🙂

Starting with unit testing using Vue.js 2 and Jest

Reading Time: 4 minutes

As a front-end developer, you may know a lot of front-end technologies and frameworks, but over time you need to upgrade yourself as a developer. A good way to strengthen your knowledge is to learn unit testing.

Since I have been working with Vue.js for several years, we are going to look at some of the basics of testing Vue components using the Jest JavaScript testing framework.

To start, first, we need a Vue.js 2 project created via the Vue CLI. After that we need to add the Jest framework to the project:

# jest unit testing
vue add @vue/unit-jest

I’ll make a simple component that will increase a number on click of a button:

// testComponent.js
export default {
  template: `
    <div>
      <span class="count">{{ count }}</span>
      <button @click="increase">Increase</button>
    </div>
  `,

  data() {
    return {
      count: 0
    }
  },

  methods: {
    increase () {
      this.count++
    }
  }
}

The way of testing is by mounting the components in isolation; after that comes mocking the needed inputs like injections, props and user events. In the end comes the confirmation of the outputs: the rendered result, emitted events, etc.

The mounted component is returned inside a wrapper. A wrapper is an object that contains a mounted component or a VNode and methods to test them.

Let’s create a wrapper using the mount method:

// jestTest.js

// first we import the mount method
import { mount } from '@vue/test-utils'
import Calculate from './calculate'

// then we mount (wrap) the component
const wrapper = mount(Calculate)

// this way you can access the Vue instance
const vm = wrapper.vm

// you can inspect the wrapper by logging it into the console
console.log(wrapper)

The next step, after we do the wrapping, is to verify whether the rendered HTML output of the component matches the expectations.

import { mount } from '@vue/test-utils'
import Calculate from './calculate'

describe('Calculate', () => {
  // Now mount the component and you have the wrapper
  const wrapper = mount(Calculate)

  it('renders the correct markup', () => {
    expect(wrapper.html()).toContain('<span class="count">0</span>')
  })

  // it's also easy to check for the existence of elements
  it('has a button', () => {
    expect(wrapper.contains('button')).toBe(true)
  })
})

Then run the tests with npm test and see them pass.

The code in testComponent.js should increment the number on button click, so the next step is to simulate the user interaction. For this, we need the wrapper’s method wrapper.find() to get the wrapper for the button and then simulate the click event by calling the method trigger().

it('simulation of button click should increment the count by 2', () => {
  expect(wrapper.vm.count).toBe(0)
  const button = wrapper.find('button')
  button.trigger('click')
  button.trigger('click')
  expect(wrapper.vm.count).toBe(2)
})
})

For asynchronous updates, we use the Vue.nextTick() method (it can receive a function as a parameter), which comes from Vue. With this method, we wait for the DOM update and after that we execute the code (the code in the function parameter).

// this will not be caught

it('time out', done => {
  Vue.nextTick(() => {
    expect(true).toBe(false)
    done()
  })
})


// the three following tests will work as expected 
// (1)

it('catch the error using done method', done => {
  Vue.config.errorHandler = done
  Vue.nextTick(() => {
    expect(true).toBe(false)
    done()
  })
})

// (2)
it('catch the error using a promise', () => {
  return Vue.nextTick()
    .then(function() {
      expect(true).toBe(false)
    })
})

// (3)
it('catch the error using async/await', async () => {
  await Vue.nextTick()
  expect(true).toBe(false)
})

Using nextTick can be tricky for errors, because errors thrown inside it might not be caught by the test runner. That happens as a consequence of it using promises internally. To fix this we can set the done callback as Vue’s global error handler (1), call nextTick without a parameter and return it as a promise (2), or use async/await (3), as we did above.

This article is a guide on how to set up the environment and start writing unit tests using Jest. For more information about testing with Vue and using Jest, you can visit the official site for Vue test utils.

JHipster, is it worth it?

Reading Time: 7 minutes

JHipster is an open-source platform to generate, develop and deploy Spring Boot + Angular / React / Vue web applications. And with over 15 000 stars on Github, it is the most popular code generation framework for Spring Boot. But is it worth the hype or is the generated code too difficult to maintain and not production-ready?

How does it work?

The first thing to note is that JHipster is not a separate framework by itself. It uses Yeoman and .jdl files to generate Spring Boot code for the backend and Angular, React or Vue for the frontend. After the initial generation of the project, you can either use the generated code without ever running JHipster commands again, or keep using JHipster to incrementally grow the project and develop new features.

What exactly is JDL?

JDL is a JHipster-specific domain language where you can describe all your applications, deployments, entities and their relationships in a single file (or more than one) with a user-friendly syntax.

You can use the online JDL-Studio or one of the JHipster IDE plugins/extensions, which support working with JDL files.

Example of simple JDL file for Blog application:

entity Blog {
  name String required minlength(3)
  handle String required minlength(2)
}

entity Post {
  title String required
  content TextBlob required
  date Instant required
}

entity Tag {
  name String required minlength(2)
}

relationship ManyToOne {
  Blog{user(login)} to User
  Post{blog(name)} to Blog
}

relationship ManyToMany {
  Post{tag(name)} to Tag{entry}
}

paginate Post, Tag with infinite-scroll

Which technologies are used?

On the backend we have the following technologies:

  • Spring Boot as the primary backend framework
  • Maven or Gradle for configuration
  • Spring Security as a Security framework
  • Spring MVC REST + Jackson for REST communication
  • Spring Data JPA + Bean Validation for Object Relational Mapping
  • Liquibase for Database updates
  • MySQL, PostgreSQL, Oracle, MsSQL or MariaDB as SQL databases
  • MongoDB, Couchbase or Cassandra as NoSQL databases
  • Thymeleaf as a templating engine
  • Optional Elasticsearch support if you want to have search capabilities on top of your database
  • Optional Spring WebSockets for Web Socket communication
  • Optional Kafka support as a publish-subscribe messaging system

On the frontend side these technologies are used:

  • Angular or React or Vue as a primary frontend framework
  • Responsive Web Design with Twitter Bootstrap
  • HTML5 Boilerplate compatible with modern browsers
  • Full internationalization support
  • Installation of new JavaScript libraries with NPM
  • Build, optimization and live reload with Webpack
  • Testing with Jest and Protractor
  • Optional Sass support for CSS design

How to get started?

  1. Pre-requirements: Java, Git and Node.js.
  2. Install JHipster npm install -g generator-jhipster
  3. Create a new directory and go into it mkdir myApp && cd myApp
  4. Run JHipster and follow instructions on the screen jhipster
  5. Model your entities with JDL Studio and download the resulting jhipster-jdl.jh file
  6. Generate your entities with jhipster import-jdl jhipster-jdl.jh
  7. Run ./mvnw to start generated backend
  8. Run npm start to start generated frontend with live reload support

What do the generated code and application look like?

In case you only want to see a sample generated application without starting the whole framework you can check this official Github repo for the latest up-to-date sample code: https://github.com/jhipster/jhipster-sample-app.

The following are some screens from my up and running JHipster application:

Welcome screen: the initial screen when you open your JHipster app

Create a user screen: with this form you can create a new user in the app

View all users screen: in this screen you have the option to manage all your existing users

Monitoring screen: monitoring of JVM metrics, as well as HTTP request statistics

What are the pros and cons

The important thing to remember is that JHipster is not a “magic bullet” that will solve all your problems and is not an optimal solution for all the new projects. As a good software engineer, you will have to weigh in the pros and cons of this platform and decide when it makes sense to use and when it’s better to go with a different approach. Having used JHipster for production projects these are some of the pros and cons that I’ve experienced:

Pros

  • Easy bootstrap of a new project with a lot of technologies preconfigured
  • JHipster almost always follows best practices and latest trends in backend and frontend development
  • Login, register, management of users and monitoring comes out-of-the-box
  • Wizard for generating your project, only the technologies that you select are included in the project
  • After defining your own JDL file, all of the model, repository, service and controller classes for your entities are generated, together with integration tests. This saves a lot of time at the beginning of the project, when you want to get to feature development as soon as possible

Cons

  • If you are not familiar with the technologies being used in the generated project, it can be overwhelming and it’s easy to get lost in this mix of many different technologies
  • Using JHipster after the initial project generation is not a smooth experience. Classes and Liquibase scripts get overwritten and you have to be very careful with changing the initial JDL model. Alternatively, you can decide to continue without JHipster after the initial generation of the project
  • REST responses that are returned from the endpoints will not always correspond to the business requirements; very often you will have to manually modify the initial JHipster REST responses
  • Not all of the available options are at the same level; some technologies that JHipster uses and configures are more polished than others. This is especially true if you decide to use community modules

What kind of projects are a good fit?

Having said all of this, it’s important to understand that there are projects which can benefit a lot from JHipster and projects that are better without using this platform.

In my experience, a good candidate is a greenfield project where you are expected to deliver a lot of features fast. JHipster will help a lot to be productive from day one and to cut down on the boilerplate code that you need to write, so you will be able to begin with feature development really fast. This works well for new projects with tight deadlines, proofs of concept, internal projects, hackathons, and startups.

On the other hand, a not so ideal situation is if you have an already started and up and running project; there is not much JHipster can do in this case. Another case would be if the application has a lot of specific business logic and it’s not a simple CRUD application, for example, an AI project, a chatbot or a legacy ecosystem where these new technologies are not suitable or supported.

JHipster, is it worth it?

There is only one sure way to decide if JHipster is worth it for your next project or not and that is to try it out yourself and play around with the different features and configuration that JHipster offers.

At best, you will find a new framework for your next project and save a lot of effort next time you have to start a project. At worst, you will get to know the latest trends in both backend and frontend and learn some of the best practices from a very large community.

JavaScript objects: Why? How? Compared with switch-case and if?

Reading Time: 4 minutes

During development, we use objects all the time and we are often unsure which approach is the best. That was the main reason why I decided to write a blog about it, so I really hope that in the near future most of your obstacles will be resolved and you will be more confident in choosing the best approach.

In this blog, I will explain some of the object features and I will make some comparisons, so let’s start!

Switch vs object literals

We all know what a switch-case statement is and we have all used one at least once, no matter the programming language. But since we are talking about JavaScript, have you ever asked yourself if it is clever to use it?

Well, of course, the answer is NO. Now you are asking yourself, then it must be if-else statements, but still, the answer is NO. The best approach is to use objects.

Let’s see why…

Problems:

  • switch-case
    • Hard to maintain which leads you to difficult debugging and testing
    • You are manually forced to use break within each case
    • Nested errors
    • Restrictions, like not allowing to use the same constant in two different cases…
    • In JavaScript, everything is based on curly braces, but not switch
    • Evaluates every case until it finds the right one
  • If-else statements
    • Hard to maintain which leads you to difficult debugging and testing
    • Hard to understand when there is complex logic
    • Hard to test
    • Evaluates every statement until it finds the right one if you don’t end it explicitly

According to these problems, the best solution is objects. The reason for that is the advantages that objects are offering us, like:

  • You are not forced to do anything
  • You can use functions inside the objects which means you are much more flexible
  • You can use closure benefits
  • You are using the standard JavaScript objects, which makes the code friendlier
  • Gives you better readability and maintainability
  • Since the objects approach works like a hash table, the lookup performance is better than the average cost of a switch-case
  • All these advantages lead us to the conclusion that objects are more natural and are part of many design patterns in JavaScript where switch-case is an old way of coding

Let’s see an example of what this looks like with objects:

<!DOCTYPE html>
<html> 
<body>
<script>
      function getById (id) {
            var ids = {
                  'id1': function () {
                        return 'Id 1';
                  },
                  'id2': function () {
                        return 'Id 2';
                  },
                  'default': function () {
                        return 'Default';
                  }
            }
            return (ids[id] || ids['default'])();
      };

      var ref = getById('id1')
      console.log(ref)         // Id 1

      var ref1 = getById()
      console.log(ref1)        // Default

      var ref2 = getById('noExistingId')
      console.log(ref2)        // Default
</script>
</body>
</html>

hasOwnProperty(key) vs in

hasOwnProperty() – a method which returns a Boolean; it returns true only if the object contains that property as its own property.

in – an operator which returns a Boolean; it returns true if the object contains that property as its own property or anywhere in its prototype chain.

function TestObj(){
      this.name = 'TestName';
}
TestObj.prototype.gender = 'male';

var obj = new TestObj();

console.log(obj.hasOwnProperty('name'));       // true
console.log('name' in obj);                    // true
console.log(obj.hasOwnProperty('gender'));     // false 
console.log('gender' in obj);                  // true

Object properties

There are two ways to access object properties: dot notation and bracket notation. Developers often ask themselves which approach they should use, or whether there is any difference at all. In most cases both solutions work fine, and they end up using one of them without knowing why and without paying attention to the differences. Let’s make an overview.

Dot notation:

  • Property identifiers can only be alphanumeric, plus the two special characters “_” and “$”
  • Property identifiers can’t start with a number, and you can’t use variables as identifiers

Bracket notation:

  • Property identifiers must be String or a variable which references to a String
  • Property identifier can contain any character in their name
var obj = {
      '$test': 'DolarValue',
      '%test': 'PercentageValue'
      …
}

console.log(obj["$test"])                      // 'DolarValue'
console.log(obj.$test)                          // 'DolarValue'

Both of these will give the same result because both notations support `$` in the property name, but what happens if we use another special character like `%`?

console.log(obj["%test"])    // 'PercentageValue'
console.log(obj.%test)       // It will throw an error:
                             // Uncaught SyntaxError: Unexpected token '%'

In the next blog, I will cover loops optimization using the object properties and testing code speed.

The practical guide: Refactor android application to follow the MVP design pattern

Reading Time: 8 minutes

At the beginning of my career, I was very lucky to start a few new projects at my company. And every story was short and the same: This is going to be the best project ever -> Oops, it got a little messy -> Ooops, everything is a mess, there is no going back. So, I realized that it isn’t enough that I can do anything the PO wants, I also have to find the right way of doing it.

I was thinking: OK, I will learn unit testing, my code will be tested, and everything will be fine. So, I got into testing and started to read everything about it, especially unit testing. I started to understand it a little bit, so I decided to write a few unit tests in my existing (messy) projects. I created test classes for some fragment, started to type some test methods, thinking I would easily figure out what to test, and when I had to type the name of the test method my mind froze.

I had no idea what to test. I started to divide the code into methods, but it wasn’t helping at all. After a ton of research and frustration, I realized that I can’t test anything until I learn some design patterns and/or architectures.

Design patterns and architectures

In theory, there is a small difference between design patterns and architectures. Basically, they are both terms that define the organization of code, but the difference is the layer: architecture is a more general term than design pattern. There are many design patterns and architectures, and you can’t say that one is the best. Today we will look at the MVP design pattern, and we will try to refactor really bad code into better code. Then, hopefully, we will try to refactor the better code into an even better one, with a clean-like architecture.

Application example

I created an application that fetches some programming quotes, saves them into a room database and displays them into a RecyclerView. If the network calls fail, we display the quotes from the database. It is a common use case, that complicates our lives a little bit. You can check out the no_pattern branch, and run the app.

public class MainActivity extends AppCompatActivity {

    private QuoteDao quoteDao;
    private RecyclerView rvQuotes;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        quoteDao = QuoteDatabase.getQuoteDatabase(this).quoteDao();

        rvQuotes = findViewById(R.id.rvQuotes);
        rvQuotes.setLayoutManager(new LinearLayoutManager(this));

        QuotesApi quotesApi = RetrofitClient.getRetrofit().create(QuotesApi.class);
        Call<List<Quote>> quotesCall = quotesApi.getQuotes();
        quotesCall.enqueue(new Callback<List<Quote>>() {
            @Override
            public void onResponse(Call<List<Quote>> call, Response<List<Quote>> response) {
                if (response.isSuccessful()) {

                    new Thread(() -> quoteDao.insertQuotes(response.body())).start();

                    QuotesAdapter adapter = new QuotesAdapter(response.body());
                    rvQuotes.setAdapter(adapter);
                }
            }

            @Override
            public void onFailure(Call<List<Quote>> call, Throwable t) {
                DatabaseQuotesAsyncTask asyncTask = new DatabaseQuotesAsyncTask(quoteDao, rvQuotes);
                asyncTask.execute();
            }
        });
    }

    static class DatabaseQuotesAsyncTask extends AsyncTask<Void, Void, List<Quote>> {

        private QuoteDao quoteDao;
        private WeakReference<RecyclerView> rvQuotes;

        public DatabaseQuotesAsyncTask(QuoteDao quoteDao, RecyclerView rvQuotes) {
            this.quoteDao = quoteDao;
            this.rvQuotes = new WeakReference<>(rvQuotes);
        }

        @Override
        protected List<Quote> doInBackground(Void... voids) {
            return quoteDao.getAllQuotes();
        }

        @Override
        protected void onPostExecute(List<Quote> quotes) {
            QuotesAdapter adapter = new QuotesAdapter(quotes);
            rvQuotes.get().setAdapter(adapter);
        }
    }
}

This code is terrible. I mean, it is doing its job, but it is not testable, it is not scalable, it is not maintainable. We can’t test this code. MainActivity is 82 lines of code, just for displaying a list of quotes. Imagine if we add more calls here, more features in this screen (and usually there are more features), this code will easily become a mess.

How to fix this? We will start with a design pattern. We will refactor this code to follow the MVP pattern. Now, what is the MVP design pattern and how to implement it? MVP pattern is one of the most common design patterns. It is very close to MVC and MVVM. All of these design patterns (and others) share the idea that we should define and divide the responsibility of the classes. All of the mentioned design patterns above have 3 types of classes:

  • Model – data layer, used for managing business model classes
  • View – UI layer, used for displaying the data
  • Controller/Presenter/ViewModel – logic layer, intercepts the actions from the UI layer, updates the data and tells the UI layer to update itself

As you can see, Model and View classes are the same between all three patterns (they may have different responsibilities), the difference is the remaining class.

Let’s talk about MVP strictly, and try to convert our app to follow the MVP pattern. What classes belong to which type?

  • Model – Here belongs just the Quote class, so it stays the same
  • View – Here belong Activities and Fragments. In our case MainActivity
  • Presenter – We should create presenter for every Activity/Fragment

So, in the data layer (Model) we have just our Quote class and it stays the same. The View and the Presenter are left. Let’s create interfaces for the Views and Presenters. That way we will create the communication between them.

public interface BaseView<P> {
}

public interface BasePresenter<V> {
    void bindView(V view);
    void dropView();
}

Every Activity/Fragment should implement an interface extending BaseView, and every presenter should implement an interface extending BasePresenter. In MVP, the communication between the View and the Presenter goes both ways (View <-> Presenter). This means that our Activity/Fragment should have an instance of the presenter and the presenter should have an instance of the view. Also, the Presenter should not contain any Android components (an easy check: you shouldn’t have any android.* imports in the presenter). So, let’s create the View and the Presenter for MainActivity. I will organize the packages by feature because it works better when we follow some design patterns.

Now, we have MainActivity that implements MainView, and MainPresenterImpl that implements MainPresenter. We said that MainActivity should have an instance of MainPresenter and MainPresenterImpl should have an instance of MainView.

public interface MainPresenter extends BasePresenter<MainView> {
}

public class MainPresenterImpl implements MainPresenter {
    private MainView view;

    @Override
    public void bindView(MainView view) {
        this.view = view;
    }

    @Override
    public void dropView() {
        this.view = null;
    }
}

public interface MainView extends BaseView<MainPresenter> {
}

public class MainActivity extends AppCompatActivity implements MainView {
    //...
    private MainPresenter presenter;
   // ...
}

You will notice the methods bindView() and dropView() in the presenter. That is because the view is responsible for its lifecycle, and it should inform the presenter about its presence/absence. In the lifecycle methods of the Activity/Fragment, we should call these methods in one of these three pairs: bindView() in onCreate and dropView() in onDestroy, bindView() in onStart and dropView() in onStop, or bindView() in onResume and dropView() in onPause. Use one of the pairs, but do not mix them. For example: don’t call bindView() in onCreate and dropView() in onPause. Let’s call these methods in the MainActivity:

public class MainActivity extends AppCompatActivity implements MainView {
    //...
    private MainPresenter presenter;
    
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        presenter = new MainPresenterImpl();
        presenter.bindView(this);

        //....
    }

    @Override
    protected void onDestroy() {
        presenter.dropView();
        super.onDestroy();
    }
}

Next, we should define methods in MainView and MainPresenter. In MainPresenter we want to get the quotes (it doesn’t matter to the view whether the presenter gets them from the network or from the database), so we’ll create a method getAllQuotes(), and in MainView we want to display the quotes, so we’ll create a method displayQuotes(List<Quote> quotes). After adding these methods to the interfaces, we will get compiler errors in MainActivity and MainPresenterImpl, so we need to implement them. In MainActivity we’ll just create a new adapter with the quotes and pass the adapter to the RecyclerView. In the Presenter, it gets a little trickier: we move the network and database code from MainActivity into MainPresenterImpl, and wherever we used to create an adapter and set it on the RecyclerView, we instead call the displayQuotes() method of the MainView.

public class MainActivity extends AppCompatActivity implements MainView {
    //...
    @Override
    public void displayQuotes(List<Quote> quotes) {
        QuotesAdapter adapter = new QuotesAdapter(quotes);
        rvQuotes.setAdapter(adapter);
    }
}

// in presenter when we get the quotes
if (view != null) {
    view.displayQuotes(quotes);
}

Moreover, because QuoteDatabase requires a Context, and we can’t have a Context in the Presenter, instead of creating the QuoteDao inside the Presenter, we create it in MainActivity and pass it into the Presenter via the constructor (see the sketch below). Finally, in the onCreate() method of the Activity, we call presenter.getAllQuotes(). You can check out the mvp_basic branch.
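
For illustration, here is a minimal sketch of what MainPresenterImpl could look like after this step (the threading is deliberately simplified and error handling is omitted; the names follow the classes used earlier in the article):

public class MainPresenterImpl implements MainPresenter {

    private MainView view;
    private final QuoteDao quoteDao; // injected from MainActivity via the constructor

    public MainPresenterImpl(QuoteDao quoteDao) {
        this.quoteDao = quoteDao;
    }

    @Override
    public void bindView(MainView view) {
        this.view = view;
    }

    @Override
    public void dropView() {
        this.view = null;
    }

    @Override
    public void getAllQuotes() {
        QuotesApi quotesApi = RetrofitClient.getRetrofit().create(QuotesApi.class);
        quotesApi.getQuotes().enqueue(new Callback<List<Quote>>() {
            @Override
            public void onResponse(Call<List<Quote>> call, Response<List<Quote>> response) {
                if (response.isSuccessful()) {
                    // persist the fresh quotes and let the view display them
                    new Thread(() -> quoteDao.insertQuotes(response.body())).start();
                    if (view != null) {
                        view.displayQuotes(response.body());
                    }
                }
            }

            @Override
            public void onFailure(Call<List<Quote>> call, Throwable t) {
                // network failed: fall back to the database
                new Thread(() -> {
                    List<Quote> quotes = quoteDao.getAllQuotes();
                    // in a real app, switch back to the main thread before touching the view
                    if (view != null) {
                        view.displayQuotes(quotes);
                    }
                }).start();
            }
        });
    }
}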

What have we done now? We refactored MainActivity to follow the MVP design pattern. Now, MainActivity’s only responsibility is to display the quotes. It doesn’t need to be unit tested, because it doesn’t contain any logic. We moved the logic into the Presenter. Unfortunately, the Presenter is also hard to test right now. We will try to make it better in the next article.

Testing asynchronous code in a concise and easy to read manner

Reading Time: 7 minutes

We live in a fast-paced world where agile is the standard project delivery strategy, or at least the direction people tend to follow. If you have been part of an agile software delivery practice, then somewhere in your coding career you have met some form of tests, whether unit, integration (system) or some form of E2E test.

You might be familiar with the testing pyramid and with the benefits and scopes of the different types of tests presented in the pyramid.

Let’s take a quick look at the pyramid:

Unit

As shown in the image above, the tests that we write are grouped into the layers from which the pyramid is built. The foundation layer is the biggest one, and its size shows the quantity, meaning we need more of these tests in our application. They are called Unit Tests because of the scope they are testing: a small unit, e.g. an if clause.

Integration/System

The tests belonging to the middle layer are called Integration tests and their purpose is to test the integration between two or more elements inside an application; in quantitative terms, we need fewer tests of this type than Unit tests.

UI/E2E

The last layer is the smallest one, meaning that the quantity of those tests should be the smallest. These types of tests are also called UI or E2E tests. Here a test has the biggest scope, meaning that it checks more interconnected parts of your application, i.e. a whole register scenario from the UI perspective.

As we go from the bottom to the top, the cost of maintenance increases and the speed of execution decreases. Confidence is also a crucial part: if a test higher in the pyramid passes, we are more confident that our application, or at least some part of it, works.

Our focus is on the middle layer, where the so-called Integration tests lie. As we mentioned above, those are the tests that check the interconnection between one or more modules inside an application, e.g. tests which check that a user can be registered by pinging an endpoint. The scope of such a test is to prepare data, send a request to the corresponding endpoint and also check whether the user has been successfully created in the underlying datastore. It tests the integration between the controller and repository layer, hence the name “integration test”.
In my opinion, tests are a must-have for every application.

Therefore we are writing integration tests for asynchronous code.

With multi-threaded data processing systems and increased popularity of reactive programming in Java, we are puzzled with writing proficient tests for asynchronous code.
Writing high-value tests is hard, but writing high-value tests for asynchronous code is harder.

Problem

Let’s take a look at this example, where we have a small system that exposes several endpoints for updating a person. We have created several tests, each updating a person with a different name. When a test runs, it tries to update a person by sending a request via an endpoint. The system receives the request and returns an OK status. In the meantime, it spawns a different thread for the actual person update. On the side of the tests, we don’t know how long the update is going to take, so the naive approach is to wait for a specific time, after which we verify whether the actual update has happened.

We have several tests, each pinging a different endpoint. The endpoints differ in the wait time needed to process each request:
updatePersonWith1SecondDelay
updatePersonWith2SecondDelay
updatePersonWith3SecondDelay
updatePersonWithDelayFrom1To5Seconds

In order for our tests to pass, I used the naive approach by adding a function waitForCompletion(), which is nothing other than a sleep of the test thread (Thread.sleep() in Java).
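
A possible implementation of this naive helper (assumed here, since it is not shown in the original code) is simply:

// naive helper: block the test thread for a fixed amount of time
// (the value was varied between 1, 3 and 5 seconds in the runs below)
private void waitForCompletion() throws InterruptedException {
    Thread.sleep(5000);
}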

Example

The first execution of tests with a timeout of 1 second. The total execution is 4 seconds but not all tests have passed.

The second execution of tests with a timeout of 3 seconds. The total execution is 12 seconds but not all tests have passed.

Third execution of tests with a timeout of 5 seconds. The total execution is 20 seconds where all tests have passed.

But in order for all the tests to pass, we would need a maximum sleep of 5 seconds, executed after each test. This way we guarantee that every test will pass. However, we add an unnecessary wait of 4 seconds for the first test, and respectively add wait time for the other tests. This results in increased execution time, and the optimum wait time is not guaranteed.

Solution

As stated in the official documentation, Awaitility is a small Java library for synchronizing asynchronous operations. It helps express expectations in a concise and easy to read manner, which makes it a smart option for checking the outcome of async operations.
It’s fairly easy to incorporate this library into your codebase.

You just need to add the library into pom.xml:

		<dependency>
			<groupId>org.awaitility</groupId>
			<artifactId>awaitility</artifactId>
			<version>3.0.0</version>
			<scope>test</scope>
		</dependency>

And add the import in your test:
import static org.awaitility.Awaitility.await;

Let’s take a look at an example before using this library:

@Test
    public void testDelay1Second() throws Exception {
        Person person = new Person();
        person.setName("Yara");
        person.setAddress("New York");
        person.setAge("23");
        personRepository.save(person);

        ObjectMapper mapper = new ObjectMapper();

        person.setName("Daenerys");

        this.mockMvc.perform(put("/api/endpoint1/" + person.getId())
                .contentType(APPLICATION_JSON)
                .content(mapper.writeValueAsBytes(person)))
                .andExpect(status().isOk())
                .andExpect(content().string(containsString("Request received")));

        waitForCompletion();
        assertThat(personRepository.findById(person.getId()).get().getName())
                .isEqualTo("Daenerys");
    }

An example with Awaitility:

@Test
    public void testDelay1Second() throws Exception {
        Person person = new Person();
        person.setName("Yara");
        person.setAddress("New York");
        person.setAge("23");
        personRepository.save(person);

        ObjectMapper mapper = new ObjectMapper();

        person.setName("Daenerys");

        this.mockMvc.perform(put("/api/endpoint1/" + person.getId())
                .contentType(APPLICATION_JSON)
                .content(mapper.writeValueAsBytes(person)))
                .andExpect(status().isOk())
                .andExpect(content().string(containsString("Request received")));

        await().atMost(Duration.FIVE_SECONDS).untilAsserted(() -> assertThat(personRepository.findById(person.getId()).get().getName())
                .isEqualTo("Daenerys"));
    }

Example of the executed test suite with the library:

As we can see, the execution time is greatly reduced: from 20 seconds for all tests to pass, to just under 10 seconds.
As you can spot, the function waitForCompletion() is removed and a new wait from the library is introduced: await().atMost(Duration.FIVE_SECONDS).untilAsserted()

You can also configure the library using static methods from the Awaitility class:
Awaitility.setDefaultPollInterval(10, TimeUnit.MILLISECONDS);
Awaitility.setDefaultPollDelay(Duration.ZERO);
Awaitility.setDefaultTimeout(Duration.ONE_MINUTE);

Conclusion

In this article, we have taken a look at how to improve tests when dealing with asynchronous code using an interesting library. I hope this post helps benefit you and adds to your knowledge. You can find a working example with all of the tests with and without the Awaitility library on this repository.
Also, you can find more about the library here.

Reactive Spring with WebFlux and SQL Databases

Reading Time: 6 minutes

Since Spring Boot 2, Spring WebFlux was introduced so we can create reactive web applications. This was great and it was working fine with NoSQL databases, but when it came to relational databases this was an issue. JDBC database operations are blocking by nature and this will stop you from creating a totally non-blocking application. But in order to have an asynchronous and non-blocking application, we need to cover every layer of the application. The hero that solved this is R2DBC – Reactive Relational Database Connectivity, which gives us the possibility to make non-blocking calls to relational databases.

The combination of WebFlux and R2DBC is enough to cover every layer in our application that we are going to build. As a relational database, we are going to use H2. So on to the coding!

Go to the spring initializr page from where we are going to build our application and select the following configuration:

  • Group: com.north47 (or your package name)
  • Artifact: spring-r2dbc
  • Dependencies: Spring Reactive Web, Spring Data R2DBC [Experimental], H2 Database, Lombok

(You won’t be able to see Lombok on this picture, but there it is! If for some reason Lombok is causing you issues, you might need to install a plugin. To do this in IntelliJ go to File -> Settings -> Plugins, search for Lombok, install it and restart your IDE. If you can’t manage to do it, just go the old way: remove the annotations @Data, @AllArgsConstructor, @NoArgsConstructor in the Book.java class and create your own setters, getters and constructors.)

Now click on Generate, unzip the application and open it via your IDE.

Let’s first create a SQL script that will create our table. Go to src -> main -> resources, right-click on it and select New -> File. Name the file schema.sql and enter the following code:

CREATE TABLE BOOK (
ID INTEGER IDENTITY PRIMARY KEY ,
NAME VARCHAR(255) NOT NULL,
AUTHOR VARCHAR (255) NOT NULL
);

This will create a table with name ‘Book’ and the following columns: ID, NAME and AUTHOR.

We will create an additional script that will insert some data into our database. Repeat the previous procedure, this time naming the file data.sql, and add the following code:

INSERT INTO BOOK (ID,NAME,AUTHOR) VALUES (1,'Angels and Demons','Dan Brown');
INSERT INTO BOOK (ID,NAME, AUTHOR) VALUES (2,'The Matarese Circle', 'Robert Ludlum');
INSERT INTO BOOK (ID,NAME,AUTHOR) VALUES (3,'Name of the Rose', 'Umberto Eco');

This will put some data into our database.

In resources, delete the application.properties file and create a new application.yml file instead, where we are going to add the following:

logging:
  level:
    org.springframework.data.r2dbc: DEBUG
spring:
  r2dbc:
    url: r2dbc:h2:mem:///test?options=DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    name: sa
    password:


Now that we have defined the R2DBC URL and enabled the DEBUG logging level for R2DBC, let’s create our Java classes.

Create a new package domain under the ‘com.north47.springr2dbc’ and create a new class Book. This will be our database model:

package com.north47.springr2dbc.domain;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.springframework.data.annotation.Id;
import org.springframework.data.relational.core.mapping.Column;
import org.springframework.data.relational.core.mapping.Table;

@Table("book")
@Data
@AllArgsConstructor
@NoArgsConstructor
public class Book {

    @Id
    private Long id;

    @Column(value = "name")
    private String name;

    @Column(value = "author")
    private String author;

}

Now, to create our repository, first create a new package named ‘repository’ under ‘com.north47.springr2dbc’. In there create an interface named BookRepository. This interface will extend R2dbcRepository:

package com.north47.springr2dbc.repository;

import com.north47.springr2dbc.domain.Book;
import org.springframework.data.r2dbc.repository.R2dbcRepository;

public interface BookRepository extends R2dbcRepository<Book, Long> {
}

As you may notice, we are not extending the JpaRepository as usual. The R2dbcRepository will provide us with methods that work with reactive types like Flux, Mono etc. A small sketch of how these types can be composed is shown below.
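
As a small illustration (the BookService class below is hypothetical and not part of the generated project), the reactive types returned by the repository can be composed with Reactor operators without ever blocking:

package com.north47.springr2dbc.service;

import com.north47.springr2dbc.domain.Book;
import com.north47.springr2dbc.repository.BookRepository;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;

@Service
public class BookService {

    private final BookRepository bookRepository;

    public BookService(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    // returns only the book names as they are emitted from the database
    public Flux<String> findAllBookNames() {
        return bookRepository.findAll()
                .map(Book::getName);
    }
}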

After this, we will create endpoints from where we can access the previously inserted data or create new, modify it or delete it.

Create a new package ‘resource’ under the ‘com.north47.springr2dbc’ package and in there we will create our BookResource:

package com.north47.springr2dbc.resource;

import com.north47.springr2dbc.domain.Book;
import com.north47.springr2dbc.repository.BookRepository;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
@RequestMapping(value = "/books")
public class BookResource {

    private final BookRepository bookRepository;

    public BookResource(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Book> getAllBooks() {
        return bookRepository.findAll();
    }

    @GetMapping(value = "/{id}")
    public Mono<Book> findById(@PathVariable Long id) {
        return bookRepository.findById(id);
    }

    @PostMapping(consumes = MediaType.APPLICATION_JSON_VALUE)
    public Mono<Book> save(@RequestBody Book book) {
        return bookRepository.save(book);
    }

    @DeleteMapping(value = "/{id}")
    public Mono<Void> delete(@PathVariable Long id) {
        return bookRepository.deleteById(id);
    }
}

And there we have endpoints from where we can access our data and modify it.

On to the postman so we can test our application, but of course, first, let’s start it. When you run the application you can see in the console that your server is started:

Netty started on port(s): 8080

Also, since we enabled the DEBUG log level, you should be able to see all the SQL queries that are executed from the scripts we wrote previously.

In Postman set a GET method and the URL: localhost:8080/books. In the Headers add the key ‘Content-Type’ with the value ‘application/json’.

Press that send button and there it is you will get the data:

data:{"id":1,"name":"Angels and Demons","author":"Dan Brown"}

data:{"id":2,"name":"The Matarese Circle","author":"Robert Ludlum"}

data:{"id":3,"name":"Name of the Rose","author":"Umberto Eco"}

You can test also the other endpoints, for example, getting a book by id just by changing the URL to localhost:8080/books/1. The result will be:

{
    "id": 1,
    "name": "Angels and Demons",
    "author": "Dan Brown"
}

Now you can test the other endpoints: create a new Book by sending a POST request to localhost:8080/books (an example body is shown below) or delete a book by sending a DELETE request to localhost:8080/books/{id}.
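
For example, the JSON body of such a POST request could look like this (the values are just an illustration; the id is generated by the database):

{
    "name": "The Da Vinci Code",
    "author": "Dan Brown"
}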

Here you can find the whole code:

Spring-R2DBC

Hope you enjoyed it!

Dependency injection and how I use it in Vaccination iOS app

Reading Time: 5 minutes

In programming, dependency injection is a technique where one object supplies the dependencies of another object. The concept is that instead of the client object deciding what kind of service it will use, another object tells the client what service it has to use.

We can see dependency injection as a software pattern. The foundation of this pattern is passing the service or object to the client, instead of allowing the client to find or build the service on its own. What is the advantage of using this pattern? The main pros of this pattern are the readability and the reusability of the code.

Dependency injection – Injector example

Dependency injection is one form of the broader technique of inversion of control. The client delegates the responsibility of providing dependencies to external code (the injector) (Figure 1). The client is not allowed to call the injector code; it is the injecting code that constructs the services and calls the client to inject them. This means the client code does not need to know about the injecting code, how to construct the services or even which actual services it is using; the client only needs to know about the intrinsic interfaces of the services because these define how the client may use the services. This separates the responsibilities of use and construction.

Types of DI

There are 3 types of dependency injection:

  1. Constructor injection: the dependencies are provided through a class constructor
  2. Setter injection: the client exposes a setter method that the injector uses to inject the dependency
  3. Interface injection: the dependency provides an injector method that will inject the dependency into any client passed to it. Clients must implement an interface that exposes a setter method that accepts the dependency

Vaccination app example

In the iOS world, constructor injection is known as initializer-based injection. This concept is realized by injecting the dependency object (or service) during the initialization of the client class, and this dependency stays consistent/unchangeable during the life cycle of the client object.

In the previous few months, I’ve worked on the vaccination iOS application for N47 and I’ve decided to use the popular MVVM pattern inside. In the core of this pattern is the dependency injection. The components of the pattern are Model, View, and ViewModel, and each component is responsible for a different thing in the app. The point is to make the code more modular and easy to test.

The ViewModel (VM) component is a structure that contains only the data needed by the View component. The View component presents the data injected by the ViewModel. The ViewModel, on the other side, is created by injecting a dependency from the Model component. The main advantage of MVVM is that we create views that have only one goal – presenting data. The view itself is not aware of other tasks like fetching, persisting, etc.

We can see initializer-based injection in action with a real example used in the Vaccination Demo App of N47. Let’s first see what the details ViewModel looks like:

struct VaccineDetailsViewModel {
    let title: String
    let description: String
    let date: String?
}

The vaccine details view only needs title, description, and date for the vaccine. It doesn’t need more information. On the other hand, the vaccine model can contain more details about the vaccine, but this information is useless for the View. Inside the view controller (View component) we define view model property and set it via controller initializer. We can see this in the code snippet below:

 var vacineViewModel: VaccineDetailsViewModel?

class func createController(viewModel: VaccineDetailsViewModel?) -> VaccinesDetailViewController {
     
        let controller = VaccinesDetailViewController(nibName: "VaccinesDetailViewController", bundle: nil)
        controller.vacineViewModel = viewModel

        return controller
    }

This type of injection is preferable because it keeps us safe from creating incomplete objects, and with that, we avoid coding mistakes.
So when I want to create a controller that will present the details for the vaccines and the scheduled vaccines, I’m using injection via the initializer in this way:

let details = VaccinesDetailViewController.createController(viewModel: vaccination.createModel())

Other DI types in action…

In the Vaccination App, I’m also using dependency injection via a setter when creating the UITableView cells.

var vaccinationData: Vaccination? = nil {
        didSet {
            guard let vaccineId = vaccinationData?.vaccineId else { return }
            guard let vaccine = VaccineManager.sharedInstance.getVaccineById(vaccineId: vaccineId) else { return }
            let language = ModuleSharedPreferences.shared.language.rawValue
            let translation = vaccine.translations[language]
            
            vaccineTitleLabel.text = translation?.name
            vaccineApplyDateLabel.text = vaccinationData?.date
        }
    }

The code snippet above shows the vaccination data object that should be set with setter if we want the cell to be populated with data. Here is the code that will do the magic:

        let cell = tableView.dequeueReusableCell(withIdentifier: VaccinesTableViewCell.cellIdentifier, for: indexPath) as! VaccinesTableViewCell
        
        let vaccination = vaccinationList[indexPath.row]
        
        cell.vaccinationData = vaccination

Conclusion

Dependency injection is a powerful technique. Our code becomes more readable, reusable and easy to test. We were able to see this technique in action in a real project, used within the popular MVVM design pattern. Using this technique we make sure that our components/services are complete and fully created before we start to use them.

Difference between Normal and Arrow Functions

Reading Time: 3 minutes

Arrow functions have been around since ECMAScript 2015. They are very powerful and simple. Many ES5-based projects adopted this feature to refactor their code. However, arrow functions aren’t the same as the normal functions you’ve come to know. In this blog, I will explain the differences between normal and arrow functions.

This Keyword

A normal function’s this binding is determined by who calls the function:

var a = 10;
let obj = { 
  hello,
  a: 20
};

function hello() {
  console.log(this.a);
}

hello(); // outputs 10
obj.hello(); //outputs 20

The method hello gives a different result depending on how it’s called. This is because a normal function’s this is bound to the object that calls the function.

In contrast to a normal function, an arrow function’s this is always bound to the this of the outer scope in which the function is defined:

var a = 10;
let hello = () => { 
  console.log(this.a);
};

let obj = {
  hello,
  a: 20
}

hello(); // outputs 10
obj.hello(); //outputs 10

In this example, hello is an arrow function. It’s a property of obj. Even though obj calls hello, it still prints 10 because an arrow function’s this always refers to the outer environment’s this. The global this is window, so it points to window.a.

Arguments

A normal function has a special property called arguments:

function hello () {
  console.log(arguments.length)
};

hello(1, 2, 3); // outputs 3

hello(10); // outputs 1

An arrow function doesn’t have an arguments property:

let hello = () => {
  console.log(arguments.length)
};

hello(1, 2, 3); // throws ReferenceError: arguments is not defined

Binds

Function.prototype.bind is a method you can use to change the this of a function:

var car = 'Volvo'

function whatCar() {
  console.log(this.car)
};

whatCar(); // outputs 'Volvo'
whatCar.bind({car: 'Nissan'})() // outputs 'Nissan'

whatCar prints the value depending on the assigned this.

But an arrow function doesn’t work with Function.prototype.bind because it doesn’t have a local this binding. It just looks at the outer environment’s this:

var car = 'Volvo'

let whatCar = () => {
  console.log(this.car)
};

whatCar(); // outputs 'Volvo'
whatCar.bind({car: 'Nissan'})() // outputs 'Volvo'

Constructor

A normal function is both constructible and callable, while an arrow function is only callable and not constructible: arrow functions can never be invoked with the new keyword:

function hello () {};
let hi = () => {};
new hello(); // works
new hi(); // throws TypeError: hi is not a constructor

Arrow functions are handy and stylish, but there are differences between arrow functions and normal functions. An arrow function won’t always be the best option to use. The best way to choose which kind of function to use depends on each situation.

ReactiveX in Android with an example – RxJava

Reading Time: 5 minutes

What is Reactive Programming?

Reactive programming is programming with asynchronous data streams. It enables creating streams of anything – events, failures, variables, messages, etc. By using reactive programming in your application, you can create streams and then perform actions on the data emitted by those streams.

Observer Pattern

The observer pattern is a software design pattern which defines a one-to-many relationship between objects. It means if the value/state of the observed object is changed/modified, the other objects which are observing are getting notified and updated.

ReactiveX

ReactiveX is a polyglot implementation of reactive programming which extends the observer pattern and provides a set of data manipulation operators and threading abilities.

RxJava

RxJava is the JVM implementation of ReactiveX. Its main building blocks are the following (a short standalone sketch follows the list):

  • Observable – is a stream which emits the data
  • Observer – receives the emitted data from the observable
    • onSubscribe() – called when subscription is made
    • onNext() – called each time observable emits
    • onError() – called when an error occurs
    • onComplete() – called when the observable completes the emission of all items
  • Subscription – when the observer subscribes to observable to receive the emitted data. An observable can be subscribed by many observers
  • Scheduler – defines the thread where the observable emits and the observer receives it (for instance: background, UI thread)
    • subscribeOn(Schedulers.io())
    • observeOn(AndroidSchedulers.mainThread())
  • Operators – enable manipulation of the streamed data before the observer receives it
    • map()
    • flatMap()
    • concatMap() etc.
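
Before the real example, here is a minimal standalone sketch (not part of the app below) that ties these building blocks together; it assumes the Log/TAG helpers that are also used later in the article:

import io.reactivex.Observable;
import io.reactivex.Observer;
import io.reactivex.android.schedulers.AndroidSchedulers;
import io.reactivex.disposables.Disposable;
import io.reactivex.schedulers.Schedulers;

// emit a few values on a background thread, transform them and receive them on the UI thread
Observable.just("apple", "banana", "cherry")
        .subscribeOn(Schedulers.io())
        .map(String::toUpperCase)
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(new Observer<String>() {
            @Override
            public void onSubscribe(Disposable d) {
                // keep the disposable if you need to cancel the stream later
            }

            @Override
            public void onNext(String value) {
                Log.d(TAG, value); // APPLE, BANANA, CHERRY
            }

            @Override
            public void onError(Throwable e) {
                Log.e(TAG, e.getMessage());
            }

            @Override
            public void onComplete() {
                Log.d(TAG, "completed!");
            }
        });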

Example usage on Android

Tools, libraries and services used in the example (an indicative Gradle setup is sketched right after this list):

  • Libraries:
    • ButterKnife – simplifying binding for android views
    • RxJava, RxAndroid – for reactive libraries
    • Retrofit2 – for network calls
  • Fake online rest API:
  • Java object generator from JSON file
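
For reference, this is a hedged sketch of the Gradle dependencies such a setup typically needs (artifact versions are indicative only; the RxJava2 call adapter is what allows Retrofit to return Observable types):

implementation 'io.reactivex.rxjava2:rxjava:2.2.19'
implementation 'io.reactivex.rxjava2:rxandroid:2.1.1'
implementation 'com.squareup.retrofit2:retrofit:2.6.0'
implementation 'com.squareup.retrofit2:converter-gson:2.6.0'
implementation 'com.squareup.retrofit2:adapter-rxjava2:2.6.0'
implementation 'com.jakewharton:butterknife:10.2.1'
annotationProcessor 'com.jakewharton:butterknife-compiler:10.2.1'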

What we want to achieve is to fetch the users from the fake REST API, show them in a RecyclerView, and load each user’s todo list to show the number of todos in the same RecyclerView, without blocking the UI.

Here we define our endpoints. Retrofit2 supports return type of RxJava Observable for network calls.

    @GET("/users")
    Observable<List<User>> getUsers();

    @GET("/users/{id}/todos")
    Observable<List<Todo>> getTodosByUserID(@Path("id") int id);

    @GET("/todos")
    Observable<List<Todo>> getTodos();

Let’s fetch users:

  • .getUsers() – returns an Observable of a list of users
  • .subscribeOn(Schedulers.io()) – makes getUsers() perform on a background thread
  • .observeOn(AndroidSchedulers.mainThread()) – we switch to the UI thread
  • flatMap – we set the data on the RecyclerView and return an Observable of users, which will be needed for fetching the todo lists
    private Observable<User> getUsersObservable() {
        return ServicesProvider.getDummyApi()
                .getUsers()
                .subscribeOn(Schedulers.io())
                .observeOn(AndroidSchedulers.mainThread())
                .flatMap((Function<List<User>, ObservableSource<User>>) users -> {
                    adapterRV.setData(users);
                    return Observable.fromIterable(users);
                });
    }

Now, fetch todo list of users using the 2nd endpoint.

Since we are not going to make another call inside the mapping, we don’t need to return an Observable from it. So, here we use map() instead of flatMap() and we return the User type.

    private Observable<User> getTodoListByUserId(User user) {
        return ServicesProvider.getDummyApi()
                .getTodosByUserID(user.getId())
                .subscribeOn(Schedulers.io())
                .map(todoList -> {
                    sleep();
                    user.setTodoList(todoList);
                    return user;
                });
    }

Now, fetch todo list of users using the 3rd endpoint.

The difference from the 2nd endpoint is that this one returns a list of todos for all users. Here we can see the usage of the filter() operator.

    private Observable<User> getAllTodo(User user) {
        return ServicesProvider.getDummyApi()
                .getTodos()
                .subscribeOn(Schedulers.io())
                .flatMapIterable((Function<List<Todo>, Iterable<Todo>>) todoList -> todoList)
                .filter(todo -> todo.getUserId().equals(user.getId()) && todo.getCompleted())
                .toList().toObservable()
                .map(todoList -> {
                    sleep();
                    user.setTodoList(todoList);
                    return user;
                });
    }
  • .flatMapIterable() – is used to convert Observable<List<T>> to Observable<T>, which is needed to filter each item in the list
  • .filter() – we filter the todos to get each user’s completed todo list
  • .toList().toObservable() – for converting back to Observable<List<T>>
  • .map() – we set the filtered list on the user object, which will be used in the next code snippet

Now, the last step, we call the methods:

        getUsersObservable()
                .subscribeOn(Schedulers.io())
                .concatMap((Function<User, ObservableSource<User>>) this::getTodoListByUserId) // this operator could also be flatMap()
                .observeOn(AndroidSchedulers.mainThread())
                .subscribe(new Observer<User>() {
                    @Override
                    public void onSubscribe(Disposable d) {
                        disposables.add(d);
                    }

                    @Override
                    public void onNext(User user) {
                        adapterRV.updateData(user);
                    }

                    @Override
                    public void onError(Throwable e) {
                        Log.e(TAG, e.getMessage());
                    }

                    @Override
                    public void onComplete() {
                        Log.d(TAG, "completed!");
                    }
                });
  • subscribeOn() – makes the work above it run on a background thread
  • concatMap() – here we call one of our methods, getTodoListByUserId() or getAllTodo()
  • .observeOn(), .subscribe() – every time a user’s todo list is fetched from the API on the background thread, it emits the data and triggers onNext(), so we update the RecyclerView on the UI thread
The two screenshots compare the results: on the left, getTodoListByUserId() with flatMap(); on the right, getAllTodo() (with the filter usage) combined with concatMap().

The difference between flatMap and concatMap is that the former emits the results in an arbitrary order, while the latter preserves the order.
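
A small standalone sketch (not taken from the app above) makes this visible: with a per-item delay, concatMap keeps the original order, while flatMap emits whatever finishes first.

import io.reactivex.Observable;
import java.util.concurrent.TimeUnit;

// concatMap subscribes to the inner Observables one after another, so the order 1, 2, 3 is preserved
Observable.just(1, 2, 3)
        .concatMap(i -> Observable.just(i).delay(100, TimeUnit.MILLISECONDS))
        .blockingSubscribe(i -> System.out.println("concatMap: " + i)); // 1, 2, 3

// flatMap subscribes to all inner Observables at once, so results arrive as they become ready
Observable.just(1, 2, 3)
        .flatMap(i -> Observable.just(i).delay((4 - i) * 100, TimeUnit.MILLISECONDS))
        .blockingSubscribe(i -> System.out.println("flatMap: " + i));   // most likely 3, 2, 1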

Disposable

When an observer subscribes to an observable, a disposable object is provided in the onSubscribe() method, so it can later be used to terminate the background process and avoid it calling back into a dead activity.

private CompositeDisposable disposables = new CompositeDisposable();

// in the Observer, collect the disposable that is provided on subscription
@Override
public void onSubscribe(Disposable d) {
    disposables.add(d);
}

// in the Activity, dispose of all subscriptions when it is destroyed
@Override
protected void onDestroy() {
    super.onDestroy();
    disposables.dispose();
}

Summary

In this post, I tried to give brief information about reactive programming, observer pattern, ReactiveX library and a simple example on android.

Why should you use RxJava in your projects?

  • less boilerplate code
  • easy thread management
  • thread-safety
  • easy error handling

Gitlab Repository

Example sourcecode: https://gitlab.com/47northlabs/public/android-rxjava


Experiences of FrontendConnect 2019 conference Warsaw, Poland

Reading Time: 4 minutes

INTRODUCTION

Everybody has an open lifetime book full of blank pages, waiting to be filled. We write the story as we go, so back in November 2019, I have started the chapter ‘Frontend conferences’ by attending the FrontendConnect2019 in Warsaw, Poland, thanks to my company N47.

My motivation for choosing this conference was the fact that I would gain new knowledge and exchange practical ways of using frontend frameworks. Besides that, given the fact that there were great speakers from the IT world, I had no doubt choosing this tech event. The duration of the event was three days: one workshop day and two conference days.

WHICH WORKSHOP DID I ATTEND?

As I was already experienced with Vue.js, I wanted to upgrade my knowledge with Nuxt, as the workshop description promised: “It may take it to the next level, thanks to its convention over configuration approach.” I got a certificate of attendance and completion of “My first Nuxt.js application” by the Vue.js Core Team member Darek ‘Gusto’ Wędrychowski. Coding under the eye of ‘Gusto’ and having a wonderful panoramic view of Warsaw on the horizon was definitely a day well spent.

WHICH PRESENTATIONS DID I ATTEND?

A rich agenda of scheduled talks and thoughts about which ones to choose were going through my mind. I attended the ones that caught my eye and were mostly within my interests.

At the beginning of each day, a highly valued speaker opened the day with their talk. On the first day, I got to meet and listen to the much-appreciated Douglas Crockford and his JSON Saga.

The second day, there was Minko Gechev, a Google engineer working on the Angular framework with the talk ‘The Future of Front-End Frameworks’.

Some of the other talks I attended were about state management in a world of hooks, optimizing modern JavaScript applications and loading them instantly, as well as Angular and Vue.js 3.0 topics.

WHAT CAUGHT MY MIND?

Two of my favourite talks were ‘The JSON Saga’ – Douglas Crockford and ‘Vue 3.0 for Library Authors’ – Damian Dulisz.

The JSON Saga

Douglas retold the story of how he discovered JSON (JavaScript Object Notation). He explained that he did not invent it, but found it in the early 2000s, named it and described its usefulness. JSON is a format for storing data and establishing communication between servers. He explained how some companies complained and did not want to accept JSON because they were used to XML and could not consider anything else at that moment. Some people rejected it simply because it was not a standard. So, what he did next was buy JSON.org, a website which over a few years spread among users. After a while, JSON got support in all languages. He announced that there will be no more changes to JSON, because for him there is no feature more important than the stability of JSON.

Vue 3.0 for Library Authors

More details about this topic and the Vue 3.0 alpha version will be covered in my next blog post.

THE CULTURE AND ENVIRONMENT IN THE CONFERENCE

Frontend Connect took place in the theatre of the Palace of Culture and Science in Warsaw, Poland, where history and the modern world meet. It is one of the symbolic icons of Warsaw and the place of the city’s rebirth. There were people from all over the world, and the atmosphere was really friendly. Everybody discussed the topics and shared how they work.

CONCLUSION

Visiting conferences is a really good way to meet new friendly people that you have a lot in common with, as well as having an opportunity to reach out to the speaker if you enjoyed the talk, and discuss what you found interesting. We should always strive for more experiences like this and face new challenges within modern technologies. With that being said, we need to nurture our idea to reach our full potential, in order to make a bigger impact in the IT world.

A simple way of using Micrometer, Prometheus and Grafana (Spring Boot 2)

Reading Time: 7 minutes

When we run any Java application, we are running a JVM, and that JVM uses resources like memory, processor, etc. The same happens when we run a Spring application: it runs and uses our hardware resources. Monitoring and measuring these parameters is crucial when we are in production or when we want to test the performance of our application. With Spring, it is easy: we just include Spring Boot Actuator and it gives us access to almost all the measurements we need, like:

"jvm.memory.max",
"jvm.threads.states",
"jvm.gc.memory.promoted",
"jvm.memory.used",
"jvm.gc.max.data.size",
"jvm.gc.pause",
"jvm.memory.committed",
"system.cpu.count",
"logback.events",
…

To set up Spring Boot Actuator, add the following dependency to the project:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

and on the following endpoint:

<host/context-path>/actuator

we will have basic links to additional features of the application and monitoring:

{
    "_links": {
        "self": {
            "href": "http://localhost:8080/actuator",
            "templated": false
        },
        "health": {
            "href": "http://localhost:8080/actuator/health",
            "templated": false
        },
        "health-path": {
            "href": "http://localhost:8080/actuator/health/{*path}",
            "templated": true
        },
        "info": {
            "href": "http://localhost:8080/actuator/info",
            "templated": false
        }
    }
}

If this basic information is not enough, we can expose more endpoints by adding the following property to the application configuration file:

management.endpoints.web.exposure.include=*

By following any of these links, we can access the details. For our purposes it will be http://localhost:8080/actuator/metrics, from which we are going to access the metrics of our application.

Now we have almost everything we need to monitor how our application performs: requests, JVM memory, cache, threads, etc.

Micrometer

However, if we have more logic in our code and need more precise metrics for it, we will need another way to get them. Spring Boot 2 Actuator enriches all these already existing metrics through the Micrometer data provider.

Micrometer is a dimensional-first metrics collection facade whose aim is to allow you to time, count, and gauge your code with a vendor-neutral API.

Moreover, Micrometer is a vendor-neutral data provider and exposes application metrics to external monitoring systems like Prometheus, AWS CloudWatch, etc.

Micrometer provides a set of Meter primitives, including Timer, Counter, Gauge, DistributionSummary, LongTaskTimer, FunctionCounter, FunctionTimer and TimeGauge. We should be aware that each meter type produces a different number of time-series metrics: a gauge has a single metric, while a timer has both a count of timed events and the total time of all timed events.
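
As a quick illustration of the simplest primitive, a counter can be registered and incremented like this (a minimal sketch, not part of the original example; the metric name dummy.counter and the component class are illustrative, and the MeterRegistry is the one auto-configured by Spring Boot):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class DummyCounterExample {

    private final Counter dummyCounter;

    public DummyCounterExample(MeterRegistry registry) {
        // registered once; every increment updates the same time series
        this.dummyCounter = Counter.builder("dummy.counter")
                .description("counts how many times the dummy logic was executed")
                .register(registry);
    }

    public void executeDummyLogic() {
        dummyCounter.increment();
    }
}

With the Prometheus registry on the classpath, this counter shows up as dummy_counter_total on the Prometheus endpoint.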

If we write something like this in our code:

List<Integer> gaugeList = registry.gauge("dummy.gauge.list", Collections.emptyList(), someList, List::size);
List<Integer> gaugeCollectionsSizeList = registry.gaugeCollectionSize("dummy.size.list", Tags.empty(), someList);
Map<Integer, Integer> gaugeMapSize = registry.gaugeMapSize("dummy.gauge.map", Tags.empty(), someMap);

registry.timer("dummy.timer", Tags.empty()).record(() -> {
    slowDummyMethod();
});

We will have three time series for the timer (dummy_timer_seconds_count, dummy_timer_seconds_max, dummy_timer_seconds_sum) and the gauges dummy_gauge_list, dummy_size_list and dummy_gauge_map.

All this data can be consumed by many monitoring systems like Netflix Atlas, CloudWatch, Datadog, Ganglia, etc. In our case, we will use Prometheus.

Prometheus

Including Prometheus in our project is as easy as adding a Maven dependency:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

This will create a new endpoint in the actuator, <host/context-path>/actuator/prometheus (for example http://localhost:8080/actuator/prometheus). If we access this URL, we will get the metrics from Micrometer.

To see this data in a graphical UI, we have to start a Prometheus server. We can do that directly by downloading the Prometheus server and running it:

https://prometheus.io/download/

The configuration is in the prometheus.yml file.

Basic parameters that we should set up here are:

global:
  scrape_interval:     10s # Scrape interval to every 10 seconds. Default value is every 1 minute.

and

scrape_configs:
  - job_name: 'spring_micrometer'

    metrics_path: '/micromexample/actuator/prometheus' # Path to the prometheus end point in our application. “micromexample” is the context and “actuator/prometheus” is default path for prometheus in our application
    static_configs:
    - targets: ['localhost:8080'] # host where our application is deployed

Another way to get a Prometheus server is to run a Docker image that contains it. We can do that with the following command:

docker run -d -p 9090:9090 -v <yours-prometheus-config-file.yml>:/etc/prometheus/prometheus.yml prom/prometheus

“9090” – the port where our Prometheus will listen, this value is the default port

<yours-prometheus-config-file.yml> – our configuration file for Prometheus

“prom/prometheus” – docker image with Prometheus

After we run the Spring Boot application with Prometheus included and start the Prometheus server, we should be able to see the metrics in a basic view from Prometheus at:

http://localhost:9090/graph

This is what we should get from our service:

For this graph, we wrote the following code (just to have something that confirms everything works):

registry.timer("dummy.timer", Tags.empty()).record(() -> {
    slowDummyMethod();
});

Grafana

If we want a rich graphical UI where it is easy to browse through the metrics data, edit dashboards and integrate with cloud monitoring, it is a good idea to use Grafana.

Setting up Grafana is similar to Prometheus, we will need a Grafana server.

Again, we can download and install it locally, so that we have it as a service in our OS:

https://grafana.com/get

Or run docker image with Grafana in it:

docker run -d -p 3000:3000 grafana/grafana

“3000” – port for grafana

“grafana/grafana” – docker image with grafana

The default user and password are admin/admin. On the first login, you will be asked to set a new password.

After we log in, we should add a data source from which Grafana will read the metrics. Go to the left menu: Configuration -> Data Sources, choose the “Data Sources” tab and click “Add data source”.

Since we decided to go with Prometheus, we select the Prometheus source. On the new page (Configuration), because we did not set up any authentication in Prometheus and everything is left at its defaults, we only need to set the HTTP -> URL field. In our case it is “http://localhost:9090”. If everything is OK, clicking “Save and test” should show a green bar confirming that Grafana is connected to Prometheus and can access the metrics.

Let’s see our first metrics from the timer that we added in our application. For this one we will create our own new dashboard:

Choose “Add Query” and in the new window add the following key in “Metrics”: “dummy_timer_seconds_count”. This will add one metric to our graph.

In the same graph, we can add the second one from the timer “dummy_timer_seconds_max”. With this, we will have both metrics in the same graph.

There are other parameters that you can set, but for basic setup default values are fine.

With this, we have set up everything we need for monitoring our application. Next is to add more graphs for metrics that we want to monitor.

Securing your microservices with OAuth 2.0. Building Authorization and Resource server

Reading Time: 8 minutes

We live in a world of microservices. They give us an easy opportunity to scale our application. But as we scale it, it becomes more and more vulnerable. We need to think of a way to protect our services and to keep the wrong people from accessing protected resources. One way to do that is by enabling user authorization and authentication. With authorization and authentication, we need a way to manage credentials, check the access of the requester and make sure people are doing what they are supposed to.

When we speak about Spring (Cloud) Security, we are talking about service authorization powered by OAuth 2.0. This is how it works:

 

The actors in this OAuth 2.0 scenario that we are going to discuss are:

  • Resource Owner – Entity that grants access to a resource, usually you!
  • Resource Server – Server hosting the protected resource
  • Client – App making protected resource requests on behalf of a resource owner
  • Authorization server – server issuing access tokens to clients

The client asks the resource owner to authorize it. When the resource owner provides an authorization grant, the client sends a request to the authorization server. The authorization server replies by issuing an access token to the client. Now that the client has an access token, it puts it in the header and asks the resource server for the protected resource. Finally, the client gets the protected data.

Now that it is clear how the general OAuth 2.0 flow works, let’s get our hands dirty and start writing our resource and authorization server!

Building OAuth2.0 Authorization server

Let’s start by creating our authorization server using the Spring Initializr. Create a project with the following configuration:

  • Project: Maven Project
  • Artefact: auth-server
  • Dependencies: Spring Web, Cloud Security, Cloud OAuth2

Download the project, copy it into your workspace and open it via your IDE. Go to your main class and add the @EnableAuthorizationServer annotation.

@SpringBootApplication
@EnableAuthorizationServer
public class AuthServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(AuthServerApplication.class, args);
    }

}

Go to the application.properties file and make the following modification:

  • Change the server port to 8083
  • Set the context path to be “/api/auth”
  • Set the client id to “north47”
  • Set the client secret to “north47secret”
  • Enable all authorized grant types
  • Set the client scope to read and write
server.port=8083

server.servlet.context-path=/api/auth

security.oauth2.client.client-id=north47
security.oauth2.client.client-secret=north47secret
security.oauth2.client.authorized-grant-types=authorization_code,password,refresh_token,client_credentials
security.oauth2.client.scope=read,write

The client id is a public identifier for applications. The way we used it here is not good practice for a production environment; it is usually a 32-character hex string so it is not easily guessable.

Let’s add some users to our application. We are going to use in-memory users, and we will achieve that by creating a new class, ServiceConfig. Create a package called “config” with the following path: com.north47.authserver.config and in there create the above-mentioned class:

@Configuration
public class ServiceConfig extends GlobalAuthenticationConfigurerAdapter {

    @Override
    public void init(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
                .withUser("filip")
                .password(passwordEncoder().encode("1234"))
                .roles("ADMIN");
    }

    @Bean
    public BCryptPasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}

With this, we define one user with username ‘filip’ and password ‘1234’ with the role ADMIN. We also define the BCryptPasswordEncoder bean so we can encode the password.

In order to authenticate the users that arrive from another service, we are going to add another class called UserResource into a newly created package resource (com.north47.authserver.resource):

@RestController
public class UserResource {

    @RequestMapping("/user")
    public Principal user(Principal user) {
        return user;
    }
}

When users from other services send a token for validation, the user will also be validated through this method.

And that’s it! Now we have our authorization server! The authorization server provides some default endpoints, which we are going to see when testing the resource server.

Building Resource Server

Now let’s build our resource server where we are going to keep our secure data. We will do that with the help of the Spring Initializr. Create a project with the following configuration:

  • Project: Maven Project
  • Artefact: resource-server
  • Dependencies: Spring Web, Cloud Security, Cloud OAuth2

Download the project and copy it into your workspace. First, we are going to create our entity called Train. Create a new package called domain in com.north47.resourceserver and create the class there.

public class Train {

    private int trainId;
    private boolean express;
    private int numOfSeats;

    public Train(int trainId, boolean express, int numOfSeats) {
        this.trainId = trainId;
        this.express = express;
        this.numOfSeats = numOfSeats;
    }

    public int getTrainId() {
        return trainId;
    }

    public void setTrainId(int trainId) {
        this.trainId = trainId;
    }

    public boolean isExpress() {
        return express;
    }

    public void setExpress(boolean express) {
        this.express = express;
    }

    public int getNumOfSeats() {
        return numOfSeats;
    }

    public void setNumOfSeats(int numOfSeats) {
        this.numOfSeats = numOfSeats;
    }

}

Let’s create a resource that exposes an endpoint from which we can get the protected data. Create a new package called resource and in it create a class TrainResource. It will have only one method, exposing the endpoint behind which we can get the protected data.

@RestController
@RequestMapping("/train")
public class TrainResource {


    @GetMapping
    public List<Train> getTrainData() {

        return Arrays.asList(new Train(1, true, 100),
                new Train(2, false, 80),
                new Train(3, true, 90));
    }
}

Let’s start the application and send a GET request to http://localhost:8082/api/services/train. You will be asked to enter a username and password. The username is user and the password can be seen in the console where the application was started. Entering these credentials will return the protected data.

Let’s change the application now to be a resource server by going to the main class ResourceServerApplication and adding the annotation @EnableResourceServer.

@SpringBootApplication
@EnableResourceServer
public class ResourceServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ResourceServerApplication.class, args);
    }

}

Go to the application properties file and do the following changes:

server.port=8082
server.servlet.context-path=/api/services
security.oauth2.resource.user-info-uri=http://localhost:8083/api/auth/user 

What we have done here is:

  • Changed our server port to 8082
  • Set context path: /api/services
  • Set the user-info URI where the user will be validated when they try to pass a token

Now if you try to get the protected data by sending a GET request to http://localhost:8082/api/services/train, the server will return a message that you are unauthorized and that full authentication is required. That means that without a token you won’t be able to access the resource.

So we need a fresh token in order to get the data. We will ask the authorization server to give us a token for the user that we previously created. Our client in this scenario will be Postman. The authorization server we created exposes some endpoints out of the box. To ask it for a fresh token, send a POST request to the following URL: localhost:8083/api/auth/oauth/token.

As said previously, Postman in this scenario is the client accessing the resource, so it needs to send the client credentials to the authorization server: the client id and the client secret. Go to the Authorization tab and add the client id (north47) as the username and the client secret (north47secret) as the password. The picture below shows how to set up the request:

What is left is to provide the username and password of the user. Open the Body tab, select x-www-form-urlencoded and add the following values:

  • key: ‘grant_type’, value: ‘password’
  • key: ‘client_id’, value: ‘north47’
  • key: ‘username’, value: ‘filip’
  • key: ‘password’, value: ‘1234’

Press send and you will get a response with the access_token:

{
    "access_token": "ae27c519-b3da-4da8-bacd-2ffc98450b18",
    "token_type": "bearer",
    "refresh_token": "d97c9d2d-31e7-456d-baa2-c2526fc71a5a",
    "expires_in": 43199,
    "scope": "read write"
}
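
If you prefer requesting the token from code instead of Postman, here is a minimal sketch with RestTemplate (not part of the original project; it sends the same client credentials and form parameters described above):

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.RestTemplate;

RestTemplate restTemplate = new RestTemplate();

// client credentials go into the Basic auth header
HttpHeaders tokenHeaders = new HttpHeaders();
tokenHeaders.setBasicAuth("north47", "north47secret");
tokenHeaders.setContentType(MediaType.APPLICATION_FORM_URLENCODED);

// grant type and user credentials go into the form body
MultiValueMap<String, String> form = new LinkedMultiValueMap<>();
form.add("grant_type", "password");
form.add("client_id", "north47");
form.add("username", "filip");
form.add("password", "1234");

ResponseEntity<String> tokenResponse = restTemplate.postForEntity(
        "http://localhost:8083/api/auth/oauth/token",
        new HttpEntity<>(form, tokenHeaders),
        String.class);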

Now that we have the access token, we can call our protected resource by inserting the token into the header of the request. Open Postman again and send a GET request to localhost:8082/api/services/train. Open the Headers tab; this is where we insert the access token. For the key add “Authorization” and for the value add “Bearer ae27c519-b3da-4da8-bacd-2ffc98450b18”.
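
Continuing the sketch above, the same call can be made from code by putting the token into the Authorization header (the token value is the one returned by the token endpoint; HttpMethod comes from org.springframework.http):

// call the protected resource with the access token obtained above
HttpHeaders resourceHeaders = new HttpHeaders();
resourceHeaders.set(HttpHeaders.AUTHORIZATION, "Bearer ae27c519-b3da-4da8-bacd-2ffc98450b18");

ResponseEntity<String> trains = restTemplate.exchange(
        "http://localhost:8082/api/services/train",
        HttpMethod.GET,
        new HttpEntity<>(resourceHeaders),
        String.class);

System.out.println(trains.getBody());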

 

And there it is! You have authorized yourself, got a new token and used it to access the protected data.

You can find the projects in our repository:

And that’s it! Hope you enjoyed it!

Project Lombok explained

Reading Time: 4 minutes

In this article, I want to present a very powerful tool called Project Lombok. It acts as an annotation processor that allows us to modify the classes at compile time. Project Lombok enables us to reduce the amount of boilerplate code that needs to be written. The main idea is to give the users an option to put annotation processors into the build classpath.

Add Project Lombok to the project

  • using gradle
 compileOnly "org.projectlombok:lombok:1.16.16"
  • using maven
<dependencies>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.16.16</version>
        <scope>provided</scope>
    </dependency>
</dependencies>

Project Lombok

Project Lombok provides the following annotations:

  • @Getter and @Setter: create getters and setters for your fields
  • @EqualsAndHashCode: implements equals() and hashCode()
  • @ToString: implements toString()
  • @Data: uses the four previous features
  • @Cleanup: automatically closes your stream or resource (see the sketch after this list)
  • @Synchronized: synchronize on objects
  • @SneakyThrows: throws checked exceptions without declaring them
    and many more. Check the full list of available annotations: https://projectlombok.org/features/all
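
As a quick illustration of one of these, here is a minimal sketch of @Cleanup (the file name is just an example): Lombok wraps the rest of the method in a try/finally block and calls close() on the annotated resource.

import lombok.Cleanup;

import java.io.FileInputStream;
import java.io.IOException;

public class CleanupExample {

    public static void main(String[] args) throws IOException {
        // in.close() is called automatically in a generated finally block
        @Cleanup FileInputStream in = new FileInputStream("example.txt");
        byte[] buffer = new byte[1024];
        while (in.read(buffer) != -1) {
            // process the buffer
        }
    }
}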

Common object methods

In this example, we have a class that represents a User and holds five attributes, for which we want an additional constructor for all attributes, a toString representation, getters and setters, and equals() and hashCode() overridden in terms of the email attribute:

public class User {

    private String email;
    private String firstName;
    private String lastName;
    private String password;
    private int age;

    // empty constructor
    // constructor for all attributes
    // getters and setters
    // toString
    // equals() and hashCode()
}

With some help from Lombok, the class now looks like this:

import lombok.AllArgsConstructor;
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;
import lombok.ToString;

@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
@ToString
@EqualsAndHashCode(of = {"email"})
public class User {

    private String email;
    private String firstName;
    private String lastName;
    private String password;
    private int age;
}

As you can see, the annotations are replacing the boilerplate code that needs to be written for all the fields, constructor, toString, etc. The annotations do the following:

  • using @Getter and @Setter Lombok is instructed to generate getters and setters for all attributes
  • using @NoArgsConstructor and @AllArgsConstructor, Lombok creates the default empty constructor and an additional one for all the attributes
  • using @ToString generates toString() method
  • using @EqualsAndHashCode we get the pair of equals() and hashCode() methods defined for the email field (Note that more than one field can be specified here)

Customize Lombok Annotations

We can customize the existing example with the following:

  • in case we want to restrict the visibility of the default constructor, we can use AccessLevel.PACKAGE
  • in case we want to be sure that fields won’t get null values assigned to them, we can use the @NonNull annotation
  • in case we want to exclude some property from the generated toString() code, we can use the exclude argument of the @ToString annotation
  • we can change the access level of the setters from public to protected with AccessLevel.PROTECTED on the @Setter annotation
  • in case we want to do some kind of checks when a field gets modified, we can implement the setter method ourselves; Lombok will not generate it because it already exists

Now the example looks like the following:

import lombok.AccessLevel;
import lombok.AllArgsConstructor;
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.NonNull;
import lombok.Setter;
import lombok.ToString;

@Getter
@Setter(AccessLevel.PROTECTED)
@NoArgsConstructor(access = AccessLevel.PACKAGE)
@AllArgsConstructor
@ToString(exclude = {"age"})
@EqualsAndHashCode(of = {"email"})
public class User {

    private @NonNull String email;
    private @NonNull String firstName;
    private @NonNull String lastName;
    private @NonNull String password;
    private int age;

    protected void setEmail(String email) {
        // Check for null and valid email code
        this.email = email;
    }
}

Builder Annotation

Lombok offers another powerful annotation called @Builder. The @Builder annotation can be placed on a class, on a constructor or on a method.

In our example, once @Builder is applied to a suitable class or constructor, a User can be created using the following:

User user = User
        .builder()
            .email("dimitar.gavrilov@north-47.com")
            .password("secret".getBytes(StandardCharsets.UTF_8))
            .firstName("Dimitar")
            .registrationTs(Instant.now())
        .build();

Delegation

Looking at our example the code can be further improved. If we want to follow the rule of composition over inheritance, we can create a new class called ContactInformation. The object can be modelled via an interface:

public interface HasContactInformation {
    String getEmail();
    String getFirstName();
    String getLastName();
}

The class can be defined as the following:

@Data
public class ContactInformation implements HasContactInformation {

    private String email;
    private String firstName;
    private String lastName;
}

In the end, our User example will look like the following:

import lombok.AccessLevel;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.NonNull;
import lombok.Setter;
import lombok.ToString;
import lombok.experimental.Delegate;

@Getter
@Setter(AccessLevel.PROTECTED)
@NoArgsConstructor(access = AccessLevel.PACKAGE)
@AllArgsConstructor
@ToString(exclude = {"password"})
@EqualsAndHashCode(of = {"contactInformation"})
public class User implements HasContactInformation {

    @Getter(AccessLevel.NONE)
    @Delegate(types = {HasContactInformation.class})
    private final ContactInformation contactInformation = new ContactInformation();

    private @NonNull byte[] password;

    private @NonNull Instant registrationTs;

    private boolean payingCustomer = false;
}

Conclusion

This article covered some basic features, and there is a lot more that can be investigated. I hope you have found the motivation to give Lombok a chance in your project if you find it applicable.

3 shell scripts to help you manage AEM Instances

Reading Time: 5 minutes

The Problem

Every AEM developer’s career starts by double-clicking the AEM jar file. And that’s where you can make your first mistake. AEM might shout at you: “Hey NOOB, you need a license!”. You forgot to copy the license file next to the jar… *sigh*. But don’t worry, this happens to everybody.

After copying the license file, AEM will start and you will get a nice little GUI.

Congratulations! You are ready to go! You know where AEM is running (localhost:4502) and you even have a little stop button in the bottom left corner. But this is probably the only time you will see this GUI. All the cool kids start AEM with the start script in crx-quickstart/bin/. But how do you know if AEM is running? How can you find out which port is used, and how do you stop the instance?

This blog post provides three little shell scripts to address these problems and help you manage your AEM instances without the need for a GUI.

#1 Get information about running AEM instances

To figure out which AEM instances are currently running, you might smash something like ps aux | grep cq into your terminal window.

This method has several disadvantages: the command is hard to remember and the output is very hard to read.

Here’s a little challenge for you: Can you figure out the port, debug port and runmodes in the gif above before it loops? Give it a try.

And anyway, what does this command even mean?

Bonus Tip – Great page to explain shell commands (bookmark it!): https://explainshell.com/explain?cmd=ps+aux+%7C+grep+cq

Back to the topic. Our first script (“aeminfo”) provides a simple overview of all running AEM instances. It turns the complicated output into a readable form:

Nice! All you have to do is add this script to your .bash_profile. The script does not use any external dependencies and should run on most Unix/Linux-based systems (tested on OSX and CentOS).

# Displays all running AEM instances
function aeminfo(){

    if [ "$(ps aux | grep [c]q | grep crx)" ]
        then

            count=0;
            echo ""
            ps auxeww | grep [c]q | grep sling | while read -r line ; do

                ((count++));
                params=($line);

                username=(${params[0]});
                pid=(${params[1]});
                port="not found";
                debugPort="not found";
                xmx="not found";
                runmodes="not found";
                path="not found";

                portRegex="-p ([0-9]+)";
                debugPortRegex="address=([0-9]+)";
                xmxRegex="-Xmx([^[:space:]]+)";
                runmodesRegex="-Dsling.run.modes=([^[:space:]]+)";
                pathRegex="PWD=([^[:space:]]+)";

                [[ $line =~ $portRegex ]] && port="${BASH_REMATCH[1]}";
                [[ $line =~ $runmodesRegex ]] && runmodes="${BASH_REMATCH[1]}";
                [[ $line =~ $debugPortRegex ]] && debugPort="${BASH_REMATCH[1]}";
                [[ $line =~ $pathRegex ]] && path="${BASH_REMATCH[1]}";
                [[ $line =~ $xmxRegex ]] && xmx="${BASH_REMATCH[1]}";

                echo "----------------------";
                echo "AEM Instance" $count;
                echo "----------------------";
                echo "username:  "$username;
                echo "pid:       "$pid;
                echo "port:      "$port;
                echo "debugPort: "$debugPort;
                echo "memory:    "$xmx;
                echo "runmodes:  "$runmodes;
                echo "path:      "$path;
                echo "----------------------";
                echo "";

            done

        else
            echo "";
            echo "No running AEM instances found";
            echo "";
        fi
}

#2 Death to all AEM instances!

Our next script (“killaem”) is pretty small. It just shuts down a running AEM instance:

function killaem() {
    kill $(ps aux | grep '[c]rx-quickstart' | awk '{print $2}')
}

But experienced AEM developers might know that asking AEM politely to stop is sometimes not enough.

Especially when it’s Friday evening and you’re looking forward to a cold beer in your local pub. For this emergency situation you can use a slight variation of the script, “KILLAEM” (notice the capitalisation!). This will kill your instance definitely, no matter what. The consequence might be that the instance ends up in the same condition as you are after too many beers in said pub: broken and unable to work. So be careful with this command (and with drinking, for that matter)!

function KILLAEM() {
    kill -9 $(ps aux | grep '[c]rx-quickstart' | awk '{print $2}')
}

#3 Improved debugging

So you destroyed your instance and it won’t start properly? What to do next? Have a look at your log files!

If you use the regular “tail -f error.log” command, you will quickly notice that AEM logs a lot of stuff. How could you ever find that one useful log entry in the haystack of unnecessary INFO logs? This “aemlog” script might help you! It displays different log levels in different colours:

function aemlog() {
    tail -fn 128 "$@" | awk '
    /SEVERE/ {print "\033[35m" $0 "\033[39m"}
    /ERROR/ {print "\033[31m" $0 "\033[39m"}
    /WARN/ {print "\033[33m" $0 "\033[39m"}
    /DEBUG/ {print "\033[30m" $0 "\033[39m"}
    !/SEVERE|ERROR|WARN|DEBUG/ {print $0 }
';}

You can even observe multiple log files at once. Just pass them as parameters to the command: aemlog error.log access.log audit.log

Conclusion

The three little scripts might not do very much, but they are easy to set up, don’t need any third-party software and provide some useful functionality.

Just copy all shell scripts from this page into your ~/.bash_profile and you are ready to go!

If you have any suggestions for improvements feel free to leave a comment!

Why I loved Flutter despite being a fan of Native App Development

Reading Time: 5 minutes

In this post, I am going to write about Flutter based on my opinions. Please don’t hesitate to comment if you want to discuss, support or contradict any of the mentioned points 🙂

What is Flutter

“Flutter is Google’s portable UI toolkit for crafting beautiful, natively compiled applications for mobile, web, and desktop from a single codebase. Flutter works with existing code, is used by developers and organizations around the world, and is free and open source.” (from official Flutter website)

From a mobile development perspective, Flutter is a cross-platform mobile application development framework, developed by Google, that supports building apps for iOS and Android from a single codebase with native-like performance.

Cross-platform frameworks in general

I was always against developing apps with cross-platform tools, even though they are good options in terms of required resources: they need less development time, a smaller budget, less effort to maintain and support, etc. But for me, these factors were somehow not that important. The things I didn’t like, and which made me avoid learning and developing cross-platform, were:

  • Low performance, poor UI – to be honest, I hadn’t even played around with any cross-platform development. My impression that cross-platform apps have low performance and poor UI comes from using such apps: most of the time when I start using an app and notice bad performance and UI, I unfortunately end up learning that it was developed with a cross-platform tool.
  • It requires learning a new language – since I’m an Android developer, I would rather learn Swift for native iOS development, so I would be able to develop for both platforms.

Dart language

Dart is a client-optimized programming language for fast apps on multiple platforms. It is developed by Google and is used to build mobile, desktop, backend and web applications. Dart is an object-oriented, class-based, garbage-collected language with a C-style syntax that can optionally transcompile into JavaScript. (Wikipedia)

from dart.dev

Dart code can be compiled into machine code. For mobile application development, Flutter uses this kind of compilation, namely ahead-of-time (AOT) compilation. This is what distinguishes Flutter from other cross-platform app development frameworks and lets an app developed in Flutter achieve native-like performance.

Things I liked about Flutter development

  • Faster code writing
  • Rich in UI components – lots of flutter widgets to build eye-catching UI
  • Native-like application performance
  • Instant hot reload – see the changes within seconds
  • No need for extra code to support older Android versions

Limitation

  • Fewer third-party libraries are available

Conclusion

The reason I wrote this blog post is that Flutter caught my attention and I decided to give it a try. So I cloned the main page of a simple news application from the Play Store. I started to love Flutter because, during development, I easily found all the widgets I needed to make the clone look as similar as possible to the original app. In the end, the result impressed me so much that I was convinced to learn more about Flutter development.

I would also recommend you give it a try, especially if you need an MVP for your project. You will benefit from the advantages of cross-platform development, such as faster development, less time to test and less effort for bugfixes and maintenance, as well as from the advantages of Flutter, such as native-like performance and great UI.

There is one more thing I want to mention: apps for Google’s Fuchsia OS are written in Flutter. I am not going to write about Fuchsia OS here, but briefly, it is assumed that Fuchsia OS could be Google’s next mobile OS or maybe a replacement for Android (depending on the outcome of the dispute between Google and Oracle). It is hard to say how probable that is. Anyway, the reason I mentioned this is to show you that, assuming it happens, it will be really valuable if you already know and have experience with Flutter. 🙂

Helpful resources which I think are worth checking:

CQRS and Event Sourcing with Axon Framework

Reading Time: 5 minutes

What is CQRS?

CQRS stands for Command Query Responsibility Segregation. It is a pattern where allowed actions in the application are divided into two groups: Commands and Queries. A command changes the state of an object but does not return any data, while a query returns data and does not change any state. This design style comes from the need for different strategies when scaling the reading part and the writing part of our application. By dividing methods into these two categories, you can use a different model to update information than the model you use to read information. By separate models we most commonly mean different object models, probably running in different logical processes, perhaps on separate hardware. A web example would see a user looking at a web page that’s rendered using the query model. If they initiate a change that change is routed to the separate command model for processing, the resulting change is communicated to the query model to render the updated state.

Event Sourcing

Event Sourcing is a specialized pattern for data storage. Instead of storing the current state for an entity, every change of state is stored as a separate event that makes sense to a business user. The current state is calculated by applying all events that changed the state of an entity. In terms of CQRS, the events stored are the results of executing a command against an aggregate on the right side. The event store also transmits events that it saves. The read side can process these events and builds the targeted data sets it needs for queries.

AXON Framework

Axon is “CQRS Framework” for Java. It is an Open-source framework that provides the building blocks that CQRS requires and helps to create scalable and extensible applications while maintaining application consistency in distributed systems. It provides basic building blocks for writing aggregates, commands, queries, events, sagas, command handlers, event handlers, query handlers, repositories, communication buses and so on. Axon Framework comes with a Spring Boot Starter, making using it as easy as adding a dependency. Axon will automatically configure itself based on best practices and other dependencies you have set. Providing explicit configuration is a matter of adding a bean of the component that needs to be configured differently. Furthermore, Axon Framework integrates with Spring Cloud Discovery, Spring Messaging and Spring Cloud AMQP.

AXON components

  • Command – a combination of expressed intent (which describes what you want to be done) as well as the information required to undertake action based on that intent. The Command Model is used to process the incoming command, to validate it and define the outcome. Commands are typically represented by simple and straightforward objects that contain all data necessary for a command handler to execute it
  • Command Bus – receives commands and routes them to the Command Handlers
  • Command Handler – the component responsible for handling commands of a certain type and taking action based on the information contained inside it. Each command handler responds to a specific type of command and executes logic based on the contents of the command
  • Event bus – dispatches events to all interested event listeners. This can either be done synchronously or asynchronously. Asynchronous event dispatching allows the command execution to return and hand over control to the user, while the events are being dispatched and processed in the background. Not having to wait for event processing to complete makes an application more responsive. Synchronous event processing, on the other hand, is simpler and is a sensible default. By default, synchronous processing will process event listeners in the same transaction that also processed the command
  • Event Handler – the components that act on incoming events. They typically execute logic based on decisions that have been made by the command model. Usually, this involves updating view models or forwarding updates to other components, such as 3rd party integrations
  • Query Handler – the components that act on incoming query messages. They typically read data from the view models created by the event listeners
  • Query Bus receives queries and routes them to the Query Handlers. A query handler is registered at the query bus with both the type of query it handles as well as the type of response it provides. Both the query and the result type are typically simple, read-only DTO objects
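
To make these building blocks more concrete, here is a minimal sketch of an event-sourced aggregate with a command handler and an event sourcing handler (the Account* class names are illustrative and not from the article; CreateAccountCommand and AccountCreatedEvent are assumed to be simple POJOs carrying an accountId, each living in its own file):

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.spring.stereotype.Aggregate;

import static org.axonframework.modelling.command.AggregateLifecycle.apply;

@Aggregate
public class AccountAggregate {

    @AggregateIdentifier
    private String accountId;

    protected AccountAggregate() {
        // required by Axon to rebuild the aggregate from its events
    }

    @CommandHandler
    public AccountAggregate(CreateAccountCommand command) {
        // validate the command, then publish an event instead of changing state directly
        apply(new AccountCreatedEvent(command.getAccountId()));
    }

    @EventSourcingHandler
    public void on(AccountCreatedEvent event) {
        // the current state is rebuilt by replaying events like this one
        this.accountId = event.getAccountId();
    }
}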

Will each application benefit from Axon?

Unfortunately not. Simple CRUD (Create, Read, Update, Delete) applications which are not expected to scale will probably not benefit from CQRS or Axon. Fortunately, there is a wide variety of applications that do benefit from Axon.
Axon platform is being used by a wide range of companies in highly demanding sectors such as healthcare, banking, insurance, logistics and public sector. Axon Framework is free and the source code is available in a Git repository: https://github.com/AxonFramework.

JMeter

Reading Time: 4 minutes

Today, we are gonna take a look at JMeter. You can embed it in your application as a library and build an external integration-testing solution. You don’t have to use it for load testing; it could simply send one request, check the return status code, check the return value and move on. There is an argument that JMeter may be overkill for that, but it provides an easy way to verify the response, allows you to set everything up using the JMeter desktop app, and then lets you move on to testing latency under load.

First, we need to create a test file that will later be put into our Spring Boot application. The steps for creating the .jmx file are as follows:

1 – Open the JMeter window by clicking on /home/apache-jmeter-5.1.1/bin/jmeter.bat. The next step you want to do with every JMeter Test Plan is to add a thread group element. Set the “Loop Count” parameter equal to 1, as shown below:

2 – The next step is to create a While Controller. The purpose of the While Controller is to repeat a given set of actions until the condition is broken. While is a basic programming concept for running actions when the number of iterations is unknown or varying.

3 – Create an HTTP request as shown in the figure below:

4 – Afterwards, we are gonna create a CSV Data Set Config. This step refers to the CSV file from which the partner users will be read and substituted into the HTTP request.

5 – After running our test, we want to see the results, e.g. which calls have been made and which ones have failed. This is done through Listeners, a recording mechanism that shows results, including logging and debugging.

The View Results Tree is the most common Listener.

Right-click – Add->Listener->View Results Tree

6 – At the end, it should be something like the figure below:

Now click ‘Save’. Your test will be saved as a .jmx file. To run the test, click the green arrow at the top. After the test completes, you can view the results in the Listener, as in the figure below. In this example, you can see the tests were successful because they’re green. On the right, you can see more detailed results, like load time, connect time, errors, the request data, the response data, etc. You can also save the results if you want to.

JMeter can also be used for Maven testing through a plugin, and it works quite nicely with variables, prerequisites, etc. Integrating performance testing into your projects has many benefits:

  • It provides a constant check of the performances of the application
  • Secures continuous delivery in production
  • Allows early detection of performance problems or performance regressions
  • Automating the process means less manual work, allowing your team to focus on more valuable tasks like performance analysis and optimisation

First of all, you need to add the plugin to your project. Go to the Maven project directory (jmeter-testproject in this case) and edit the pom.xml file. Here you must add the plugin. You can find the basic configuration here. You just need to copy the configuration text and paste it into your pom.xml file.

Finally, you have a plugin section in your pom.xml file that looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<plugin>
   <groupId>com.lazerycode.jmeter</groupId>
   <artifactId>jmeter-maven-plugin</artifactId>
   <version>2.9.0</version>
   <executions>
      <execution>
         <id>jmeter-tests</id>
         <phase>test</phase>
         <goals>
            <goal>jmeter</goal>
         </goals>
      </execution>
      <execution>
         <id>jmeter-check-results</id>
         <phase>test</phase>
         <goals>
            <goal>results</goal>
         </goals>
      </execution>
   </executions>
   <configuration>
      <testFilesDirectory>src/test/resources/jmeter</testFilesDirectory>
      <ignoreResultFailures>false</ignoreResultFailures>
   </configuration>
</plugin>

In the code above, we bind two goals to the Maven test phase:

  • jmeter: this goal runs the load test and generates the HTML report
  • results: this goal verifies the error rate and fails the build if any requests failed (since ignoreResultFailures is set to false)

Running mvn clean test will execute both goals.

New to programming? 5 things you should pay more attention to

Reading Time: 5 minutes

You decided to start learning programming. You have started to learn programming concepts, you have decided which language you want to learn, and everything looks great.

Except it isn’t.

It’s frustrating; it’s boring; it’s painful. I am not here to make your life easy, but I hope that I will make it a little easier. Here are the 5 things that I believe will help you to become a better programmer.

Find the right source to learn from

I had a professor who said:

“It’s better to spend more time researching where to learn from, than actually learning from one source.”

And this is gold.

Let’s say that you have found a great book or a great video course that everyone is loving. You think that you will love it too, that you will understand every word you read or hear, and that after you finish it, you will become a master of the things you learned (at least, that is what I thought).

And maybe you will, but probably you won’t. Some (maybe even most) of the things you won’t understand, and that’s natural. You will try to read or watch it again and again, but it’s not getting any clearer.

My advice is: try to find a great book or course and start learning from it, but use it more as a reference than as a learning source.

I am not suggesting that you only go through the content. Try to understand the concept, but also research it (on Google). Look for more resources, more explanations, more examples. When you understand the concept, save the source that helped you the most (bookmark the page), and search for examples that you can solve.

This way, it is easier to learn, because you are combining the explanations of different sources and sticking with the simplest explanation that works for you. Also, research is more interesting than reading or listening to the same thing all over again.

Understand the base (minimum) necessary logic rather than implementation

This is important for a few reasons:

  • First, if you understand the logic, it will be easier to learn the implementation
  • Second, the implementation may change, but the base necessary logic won’t

At the very beginning, it will be difficult to differentiate between logic and implementation, and maybe you should try to learn and remember everything, but later try to understand and study just the minimum necessary things.

I still google some basic things. But because I know what I have to do, I know exactly what to search for (only the implementation/syntax).

With this approach, you will spend your time wisely, and you will be able to learn more important things.

Learning your first programming language is very hard, but that’s because you have to learn programming concepts (the logic). After you learn that, you can learn any language (the implementation) you want in a matter of weeks.

Code, code, code…

Learning programming is like learning how to drive, except it’s safer (at least, physically). You can read, you can learn, but when you sit down and start to drive, you’ll realize that you haven’t learned anything.

That’s why you should focus on coding. When you study something, try to learn the minimum needed so you can start to code, and then code as you learn. There is a great answer on Quora that mentions 3 rules you should follow when coding.

  • Write at least one line of code per day
  • First, write code, then refactor
  • No distractions when coding

Here, you can check the answer, which gives the reasoning behind these rules. Maybe you can forget the second rule, but the other two are very important.

Attitude

I had to mention attitude. It is a hard path, especially at the beginning, so the right attitude is required: hard work, believing in yourself and learning to say YES to everything. More precisely, you say NO only when you are 100% sure that it isn’t possible to do. In any other case, you say YES, and you investigate, you try different approaches, you ask for help if necessary, you do everything you can. A time will come when you will need to learn to say NO, but first, you have to learn to say YES.

Rest

Of course, don’t forget to rest. You have to rest from the hard work you have done. Most of the stupid things I have done were when I was too tired. When you are tired, you don’t think rationally; you just want to finish your task, no matter what. That’s when the biggest mistakes come. You won’t learn anything, you won’t do anything well, you are just wasting your time and nerves.

Spring Cloud Function meets AWS Lambda

Reading Time: 5 minutes

Why spring cloud functions? 🤔

  • Serverless architecture
  • Ignore transport details and infrastructure, and focus on business logic
  • Keep using Spring Boot features
  • Run same code as REST API, a stream processor, or a task

AWS Lambda is one of the most popular serverless solutions. In this blog, we will create a simple spring function and deploy it as an AWS Lambda function.

First, we will create a Spring function

Let’s create a new project from https://start.spring.io/

In this example, we will create a simple function that receives a name and returns the sum of the ASCII values of its characters. For that purpose, we will create two DTO classes:

  • InputDTO
public class InputDTO {

    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
  • OutputDTO
public class OutputDTO {
    private int sum;

    public OutputDTO(int sum) {
        this.sum = sum;
    }

    public int getSum() {
        return sum;
    }

    public void setSum(int sum) {
        this.sum = sum;
    }
}

Let’s update our pom file. We will add all the dependencies we need for this demo. This is how the pom file should look:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.7.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.north</groupId>
    <artifactId>north-demo-spring-aws</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>north-demo-spring-aws</name>
    <description>Demo project for Spring Boot</description>

    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-function-adapter-aws</artifactId>
            <version>${spring-cloud-function.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-function-web</artifactId>
            <version>${spring-cloud-function.version}</version>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-events</artifactId>
            <version>${aws-lambda-events.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-core</artifactId>
            <version>${aws-lambda-java-core.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-deploy-plugin</artifactId>
                <configuration>
                    <skip>true</skip>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <dependencies>
                    <dependency>
                        <groupId>org.springframework.boot.experimental</groupId>
                        <artifactId>spring-boot-thin-layout</artifactId>
                        <version>1.0.10.RELEASE</version>
                    </dependency>
                </dependencies>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <configuration>
                    <createDependencyReducedPom>false</createDependencyReducedPom>
                    <shadedArtifactAttached>true</shadedArtifactAttached>
                    <shadedClassifierName>aws</shadedClassifierName>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <spring-cloud-function.version>2.1.1.RELEASE</spring-cloud-function.version>
        <aws-lambda-events.version>2.2.6</aws-lambda-events.version>
        <aws-lambda-java-core.version>1.2.0</aws-lambda-java-core.version>
    </properties>

</project>

Now let’s write the function. We implement the Function interface and override its “apply” method. All the business logic we need goes into that method.

public class UseCaseHandler implements Function<InputDTO, OutputDTO> {

    @Override
    public OutputDTO apply(InputDTO inputDTO) {
        int sum = 0;
        for (int i = 0; i < inputDTO.getName().length(); i++) {
            sum += ((int) inputDTO.getName().charAt(i));
        }
        return new OutputDTO(sum);
    }
}

The UseCaseHandler class is created in com.north.northdemospringaws.function. Because of that, the application.yml file needs an update:

spring:
  cloud:
    function:
      scan:
        packages: com.north.northdemospringaws.function
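As a side note, an alternative to package scanning (not used in this example) is to register the function as a Spring bean; Spring Cloud Function then picks it up from the application context and the scan configuration is not needed. A minimal sketch (the config package and class name here are made up for the example):

package com.north.northdemospringaws.config;

import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.north.northdemospringaws.dto.InputDTO;
import com.north.northdemospringaws.dto.OutputDTO;
import com.north.northdemospringaws.function.UseCaseHandler;

@Configuration
public class FunctionConfiguration {

    // exposing the handler as a bean named "useCaseHandler"
    @Bean
    public Function<InputDTO, OutputDTO> useCaseHandler() {
        return new UseCaseHandler();
    }
}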

Now we will test our function. I will try it with my name Antonie Zafirov.
First, let’s create a simple unit test to check if the function works correctly.

@RunWith(MockitoJUnitRunner.class)
public class UseCaseHandlerTest {

    @InjectMocks
    private UseCaseHandler useCaseHandler;

    @Test
    public void testUseCaseHandler() {
        InputDTO inputDTO = new InputDTO();
        inputDTO.setName("Antonie Zafirov");
        OutputDTO outputDTO = useCaseHandler.apply(inputDTO);
        assertEquals(1487, outputDTO.getSum());
    }
}

Because spring-cloud-starter-function-web is on the classpath, we can also check with Postman whether the function behaves like a RESTful API.
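For example, assuming the default mapping of spring-cloud-starter-function-web, where a scanned function is exposed under its bean name, the request could look like this:

POST http://localhost:8080/useCaseHandler
Content-Type: application/json

{
  "name": "Antonie Zafirov"
}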

And it works!

Next step is exposing our function and uploading as a Lambda function.

The magic is done by extending SpringBootRequestHandler from the AWS adapter. This class acts as the entry point of the Lambda function and also defines its input and output types.

package com.north.northdemospringaws.function;

import com.north.northdemospringaws.dto.InputDTO;
import com.north.northdemospringaws.dto.OutputDTO;
import org.springframework.cloud.function.adapter.aws.SpringBootRequestHandler;

public class DemoLambdaFunctionHandler extends SpringBootRequestHandler<InputDTO, OutputDTO> {

}

You need an AWS account for this step, so if you do not have one, create it. After that, go to the AWS console and select Lambda from the services list.

From the submenu select Functions and click on Create function.
Add a name for the function and select the Java 8 runtime.
In my case, the function name is “demo”.

Build the jar from the application with a simple Maven command: mvn clean package.

Upload the shaded aws jar, in my case north-demo-spring-aws-0.0.1-SNAPSHOT-aws.jar.

In the handler field, we should enter the fully qualified name of the DemoLambdaFunctionHandler. In this example, that is “com.north.northdemospringaws.function.DemoLambdaFunctionHandler”.

We create an environment variable FUNCTION_NAME whose value is the name of our function, starting with a lowercase letter: useCaseHandler. Save everything and we are done!!! The last step is to test it.
Create a test event with the name testEvent and the following value:

{
  "name": "Antonie Zafirov"
}

Choose testEvent as the event and execute the function by clicking the Test button. The result is:
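Based on the unit test above, the expected output for this input is the character sum 1487, so the returned payload should look roughly like:

{
  "sum": 1487
}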

And we are done, and it works!!!

Download the source code

The project is freely available in our GitLab repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.

https://gitlab.com/47northlabs/public/spring-functions-aws

WeAreDevelopers 2019 Berlin – The Good and the Bad parts

Reading Time: 7 minutes

You can find my expectations here. And here are my actual impressions:

Arrival

The conference took place on the 6th and 7th of June at the CityCube in Berlin. The CityCube is an exhibition site which is located roughly 10km from Berlin centre. My hotel was in Berlin-Charlottenburg, which is a nice area located just 2 stations from the exhibition site.

I arrived late on the 5th of June. Having not really planned my attendance, I wanted to check the usual suspects (e.g. the web page) for more information. But wait, there is an app for that, right? And there was. After downloading the app from the Appstore, I quickly set up an account and was ready to take a look. Overall the app was solid (with some bugs or undesired features; more on that later). I found the activity stream pretty useful. The activity stream is like a chatroom where attendees can exchange thoughts. After reading a little bit, I was able to gather most of the important info and got a little bit hyped by the people in the room. There was also an agenda section in the app which I could browse either by date and time or by speakers. Each talk or workshop had a brief description. Each talk could also be added to a favourites section, which was nice. In that way, I gathered my favourites rather quickly and, after going through them again by time and eliminating the less interesting ones, I was ready for the first day.

Pro-Tip 1: A calendar or timeline section in the app where one could view one’s favourites would be nice. In that way, intersecting talks could be easily spotted.

Day 1

I arrived a little bit early at the venue because in the chatroom there were also some concerns about long queues. The weather was really nice, so I was not really worried about waiting a little bit. But the fear of waiting for hours was not justified: there was enough staff to take care of the attendees. I waited maybe 10 minutes. So all good.

Once I arrived at the main stage I also realized that there was more than enough space to accommodate everyone. With my schedule it went from here like this (Some of the talks I attended):

Welcome by WeAreDevelopers

The usual greetings and organizational stuff. It didn’t take too long, so it was okayish. But already here I realized that the sound was really bad. Sitting somewhat in the middle, you would get an echo. And that was without any other talks being held in parallel.

Pro-Tip 2: Please test the venue’s sound properties beforehand and adjust accordingly.

Where Machine Intelligence Ends and Human Creativity Begins – Garry Kasparov

As a starter, this talk was really good. Garry Kasparov is really a charismatic person. The main claim of Garry Kasparov is that eventually a huge amount of (non-creative) work will be replaced by AI. And there is nothing that we can do about it. His conclusion was that it doesn’t have to be a bad thing. We will have more time to do more important stuff.

Business vs Agile – Crimes against development teams continuously committed by management – Gerta Sheganaku

This one I chose based on the title. I was hoping for an entertaining session, actually against management (sorry, I’m a dev). The thing I got from the talk is that agile works best with top devs in an organization: for this group, productivity and “happiness” increase. As the skill set of the devs involved diminishes, the gains of agile decrease and can even become counterproductive (I do not know how the skill set of the devs or the productivity output was measured here; it’s a company offering, so they probably have some empirical data on this). The fun story was that one company consulted by them fired almost all of its staff, only to rehire again for agile.

Lunch

Lunch was an epic disaster. Honestly, I thought lunch was included in the ticket price. It was not. There were around 5 food trucks with different types of food. So this was ok. The problem was that the capacity was way too low. You had to wait for about 1 hour in full shining sun to get your food. After waiting for about 15 minutes, I decided to go somewhere else to get some food. Doing so, I discovered that the part of the CityCube the conference was held in was about 15 minutes away from the nearest restaurants. Ok, that’s another minus. (PS There was a massive rant about this in the app. Some invited companies even jumped in to deliver some food and free water. Shame, shame, shame…)

Pro-Tip 3: You know how many people will attend. Throughput of the trucks should also be known. Calculate with the worst case.

The Quake Postmortem: The End of the Original Id – John Romero

Yes, this John Romero. The talk was about people, growth, success and the price paid. Romero pictured a small company which was overwhelmed by its “astonishing” success. In the beginning there is passion, but with success comes the appetite for larger games and one has to scale. In the end, it’s not about fun anymore. Delivering is what counts. This reflects on the team.

Flutter – Google’s latest innovation for mobile, web, and desktop apps

It was nice. But we have SwiftUI now. Thank you.

Pro-Tip 4: When designing the app, please think about people on a mobile data plan. Scale down pictures posted in the chat (I had like 400MB data usage by the end of the day). 

Day 2

Once in my hotel room, I quickly set up my schedule for the last day (here is some of it):

Thoughts on the Future of Programmable Money – Andreas M. Antonopoulos

This talk was epic. Andreas drew a coherent picture of what is wrong with the mindset of closed ecosystems. Starting with castle walls and getting to modern-day firewalls was a nice analogy to draw. Eventually we “outgrew” castle walls/castles. Will we be able to break out of closed systems? Andreas has no doubt about it.

25 Years of PHP – Rasmus Lerdorf

I’m not a PHP dev, but I wanted to hear the inventor of PHP talk about this un-opinionated language. Mr Lerdorf is a nice, down-to-earth guy. He explained very well that PHP was developed out of necessity: the desire to have a “simple layer” over CGI/C to write programs faster and in a more readable way.

On the 2nd day, the lunch situation didn’t really change. There was water and free beer, though.

Conclusion

I’m pretty undecided if I would visit this conference again. The speakers were really great but the organisation was seriously lacking.

Unit testing with Mockito

Reading Time: 4 minutes

A unit is the smallest testable part of an application. Mockito is a well-known mocking framework that allows us to create and configure mock objects. With Mockito we can mock both the interfaces and the classes used by the class under test. Mockito also helps us reduce the amount of boilerplate code through its annotations.

Adding Mockito to the project

  • using gradle
testCompile "org.mockito:mockito-core:2.7.7"
  • using maven
<dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-core</artifactId>
      <version>2.7.7</version>
      <scope>test</scope>
</dependency>

Mockito annotations

  • @Mock – used for mock creation.
  • @Spy – creates a spy object.
  • @InjectMocks – instantiates the tested object and injects all the annotated field dependencies into it
  • @Captor – used to capture argument values for further assertions

Mockito example @Mock

Let’s say we have the following classes and we want to write a test for the CalculationService:

public class CalculationService {

   private AddService addService;
  
   public int calculate(int x, int y) {
       return addService.add(x, y);
   }
}

public class AddService {

   public int add(int x, int y) {
       return x+y;
   }
}

The usage of the @Mock and @InjectMocks annotations is shown in the following sample code:

@InjectMocks
private CalculationService calculationService;

@Mock
private AddService addService;

@Before
public void setUp() {
   // initializes objects annotated with @Mock, @Spy, @Captor, or @InjectMocks
   MockitoAnnotations.initMocks(this);
}

@Test
public void testCalculationService() {
    // mock the result from method add in addService
    doReturn(20).when(addService).add(10, 10);

    // verify that the calculate method from calculationService will return the same value
    assertEquals(20, calculationService.calculate(10, 10));
}
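If we also want to assert that the mocked collaborator was actually invoked, we could add a verification step. A small sketch, reusing the mocks from above:

@Test
public void testCalculateDelegatesToAddService() {
    // stub the collaborator as before
    doReturn(20).when(addService).add(10, 10);

    calculationService.calculate(10, 10);

    // verify that calculate delegated to addService exactly once with these arguments
    verify(addService, times(1)).add(10, 10);
}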

@Spy

A Mockito spy is used to spy on a real object. The main difference between a spy and a mock is that with a spy the real methods are called unless explicitly stubbed, so the tested instance behaves like a normal instance. The following example illustrates it:

@Test
public void testSpyInstance() {
    List<String> spyList = spy(new ArrayList<>());
    spyList.add("firstElement");
    spyList.add("secondElement");
    verify(spyList).add("firstElement");
    verify(spyList).add("secondElement");

    assertEquals(2, spyList.size());
}

Note that the real add method is called, so the size of the spy list is 2.

@Captor

The Mockito framework gives us plenty of useful annotations. One of the most recent that I’ve had a chance to use is @Captor. An ArgumentCaptor is used to capture an argument passed to a method, which is handy when that method is void or returns a different type of object, so the argument cannot be inspected directly.
Let’s say we have the following method:

public class AnyClass {
    public void doSearch(SearchData searchData) {
        CustomData data = new CustomData("custom data");
        searchData.doSomething(data);
    }
}

We want to capture the data argument so we can verify its content. To do that, we can use ArgumentCaptor from Mockito:

// Create a mock of the SearchData
SearchData data = mock(SearchData.class);

// Run the doSearch method with the mock
new AnyClass().doSearch(data);

// Capture the argument of the doSomething function
ArgumentCaptor<CustomData> captor = ArgumentCaptor.forClass(CustomData.class);
verify(data, times(1)).doSomething(captor.capture());

// Assert the argument
CustomData actualData = captor.getValue();
assertEquals("custom data", actualData.customData);

New features in Mockito 2.x

Since its inception, Mockito could not mock final classes and methods. One of the major features of the 2.x versions is support for stubbing final methods and mocking final classes. This feature has to be explicitly activated by creating a file named org.mockito.plugins.MockMaker in the directory src/test/resources/mockito-extensions/ containing a single line:
mock-maker-inline

public final class MyFinalClass {

    public String hello() {
        return "my final class says hello";
    }
}

public class MyCallingClass {

    final MyFinalClass myFinalClass;

    public MyCallingClass(MyFinalClass myFinalClass) {
        this.myFinalClass = myFinalClass;
    }

    public String executeFinal() {
        return myFinalClass.hello();
    }
}

public class MyCallingClassTest {

    @Test
    public void testFinalClass() {
        // mocking the final class works only with the inline mock maker enabled
        MyFinalClass myFinalClass = mock(MyFinalClass.class);
        MyCallingClass myCallingClass = new MyCallingClass(myFinalClass);

        when(myFinalClass.hello()).thenReturn("testString");

        assertEquals("testString", myCallingClass.executeFinal());
    }
}

Given the example above, without the org.mockito.plugins.MockMaker file and its content, the test fails with an error complaining that the final class cannot be mocked.

When the file is in the resources and the content is valid, we are all good.

The plan for the future is to have a programmatic way of using this feature.
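As a side note, later Mockito 2.x versions also ship a pre-packaged mockito-inline artifact that enables this mock maker through a plain test dependency, so the plugin file does not have to be created manually (double-check the exact version against your Mockito setup):

<dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-inline</artifactId>
      <version>2.23.4</version>
      <scope>test</scope>
</dependency>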

Conclusion

In this article, I gave a brief overview of some of the features of the Mockito test framework. Like any other tool, it must be used properly to be useful. Now go and bring your unit tests to the next level.

Testing Spring Boot application with examples

Reading Time: 7 minutes

Why bother writing tests is already a well-discussed topic in software engineering. I won’t go into much detail on this topic, but I will mention some of the main benefits.

In my opinion, testing your software is the only way to achieve confidence that your code will work in the production environment. Another huge benefit is that it allows you to refactor your code without fear of breaking existing features.

Risk of bugs vs the number of tests

In the Java world, one of the most popular frameworks is Spring Boot, and part of the popularity and success of Spring Boot is exactly the topic of this blog – testing. Spring Boot and the Spring framework offer out-of-the-box support for testing, and new features are being added constantly. When the Spring framework appeared on the Java scene in 2005, one of the reasons for its success was exactly this: ease of writing and maintaining tests, as opposed to JavaEE, where writing integration tests required additional libraries like Arquillian.

In the following, I will go over different types of tests in Spring Boot, when to use them and give a short example.

Testing pyramid

We can roughly group all automated tests into 3 groups:

  • Unit tests
  • Service (integration) tests
  • UI (end to end) tests

As we go from the bottom of the pyramid to the top, tests become slower to execute: unit tests run in a few milliseconds, service tests in hundreds of milliseconds, and UI tests in seconds. If we look at scope, unit tests, as the name suggests, test small units of code; service tests cover a whole service or a slice of it that involves multiple units; and UI tests have the largest scope, exercising multiple services together. In the following sections, I will go over some examples of how we can unit test and service test a Spring Boot application. UI testing can be achieved using external tools like Selenium and Protractor, but they are not related to Spring Boot.

Unit testing

In my opinion, unit tests make the most sense when you have some kind of validators, algorithms or other code that has lots of different inputs and outputs, where executing integration tests would take too much time. Let’s see how we can test a validator with Spring Boot.

Validator class for emails

public class Validators {

    private static final String EMAIL_REGEX = "(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|\"(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21\\x23-\\x5b\\x5d-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])*\")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21-\\x5a\\x53-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])+)\\])";

    public static boolean isEmailValid(String email) {
        return email.matches(EMAIL_REGEX);
    }
}

Unit tests for email validator with Spring Boot

@RunWith(MockitoJUnitRunner.class)
public class ValidatorsTest {
    @Test
    public void testEmailValidator() {
        assertThat(isEmailValid("valid@north-47.com")).isTrue();
        assertThat(isEmailValid("invalidnorth-47.com")).isFalse();
        assertThat(isEmailValid("invalid@47")).isFalse();
    }
}

MockitoJUnitRunner enables Mockito in our tests and detects the @Mock annotations. In this case, we are testing the email validator as a separate unit from the rest of the application. MockitoJUnitRunner is not specific to Spring Boot, so this way of writing unit tests can be used with other frameworks as well.

Integration testing of the whole application

If we have to choose only one type of test in Spring Boot, then using an integration test that exercises the whole application makes the most sense. We will not be able to cover all the scenarios, but we will significantly reduce the risk. In order to do integration testing, we need to start the application context. In Spring Boot 2, this is achieved with the following annotations: @RunWith(SpringRunner.class) and @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT). This will start the application on a random port, and we can inject beans into our tests and make REST calls against the application endpoints.

Below is example code for testing the book endpoints. For making REST API calls we are using Spring’s TestRestTemplate, which is more suitable for integration tests than RestTemplate.
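The tests assume a Book JPA entity, a BookRepository and a BookController that exposes POST /books and GET /books/{id}. Roughly, these classes could look like the sketch below (field names and endpoint details are inferred from the tests, not copied from the actual repository; in a real project each class would of course live in its own file):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.server.ResponseStatusException;

@Entity
public class Book {

    @Id
    @GeneratedValue
    private Long id;
    private String author;
    private String name;
    private int pages;

    // a no-args constructor, an all-args constructor, getters and setters are assumed
}

public interface BookRepository extends JpaRepository<Book, Long> {
}

@RestController
@RequestMapping("/books")
public class BookController {

    private final BookRepository repository;

    public BookController(BookRepository repository) {
        this.repository = repository;
    }

    @PostMapping
    @ResponseStatus(HttpStatus.CREATED)
    public Book create(@RequestBody Book book) {
        return repository.save(book);
    }

    @GetMapping("/{id}")
    public Book get(@PathVariable Long id) {
        return repository.findById(id)
                .orElseThrow(() -> new ResponseStatusException(HttpStatus.NOT_FOUND));
    }
}

With classes like these in place, the integration test for the whole application looks like this: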

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class SpringBootTestingApplicationTests {

    @Autowired
    private TestRestTemplate restTemplate;

    @Autowired
    private BookRepository bookRepository;

    private Book defaultBook;

    @Before
    public void setup() {
        defaultBook = new Book(null, "Asimov", "Foundation", 350);
    }

    @Test
    public void testShouldReturnCreatedWhenValidBook() {
        ResponseEntity<Book> bookResponseEntity = this.restTemplate.postForEntity("/books", defaultBook, Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.CREATED);
        assertThat(bookResponseEntity.getBody().getId()).isNotNull();
        assertThat(bookRepository.findById(bookResponseEntity.getBody().getId())).isPresent();
    }

    @Test
    public void testShouldFindBooksWhenExists() throws Exception {
        Book savedBook = bookRepository.save(defaultBook);

        ResponseEntity<Book> bookResponseEntity = this.restTemplate.getForEntity("/books/" + savedBook.getId(), Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(bookResponseEntity.getBody().getId()).isEqualTo(savedBook.getId());
    }

    @Test
    public void testShouldReturn404WhenBookMissing() throws Exception {
        Long nonExistingId = 999L;
        ResponseEntity<Book> bookResponseEntity = this.restTemplate.getForEntity("/books/" + nonExistingId, Book.class);

        assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.NOT_FOUND);
    }
}

Integration testing of web layer (controllers)

Spring Boot offers the ability to test layers in isolation, starting only the beans that are required for the test. From Spring Boot v1.4 onwards there is a very convenient annotation, @WebMvcTest, that starts only the components needed for a typical web layer test (controllers, Jackson converters and similar) without starting the full application context, avoiding the startup of components that are unnecessary for this test, like the database layer. When we use this annotation, we make the REST calls with the MockMvc class.

Following is an example of testing the same endpoints as in the example above, but in this case we are only testing whether the web layer works as expected, and we are mocking the database layer using the @MockBean annotation, which is also available from Spring Boot v1.4. With these annotations, only BookController is loaded into the application context, and the database layer is mocked.

@RunWith(SpringRunner.class)
@WebMvcTest(BookController.class)
public class BookControllerTest {
    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private BookRepository repository;

    @Autowired
    private ObjectMapper objectMapper;

    private static final Book DEFAULT_BOOK = new Book(null, "Asimov", "Foundation", 350);

    @Test
    public void testShouldReturnCreatedWhenValidBook() throws Exception {
        when(repository.save(Mockito.any())).thenReturn(DEFAULT_BOOK);

        this.mockMvc.perform(post("/books")
                .content(objectMapper.writeValueAsString(DEFAULT_BOOK))
                .contentType(MediaType.APPLICATION_JSON)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isCreated())
                .andExpect(MockMvcResultMatchers.jsonPath("$.name").value(DEFAULT_BOOK.getName()));
    }

    @Test
    public void testShouldFindBooksWhenExists() throws Exception {
        Long id = 1L;
        when(repository.findById(id)).thenReturn(Optional.of(DEFAULT_BOOK));

        this.mockMvc.perform(get("/books/" + id)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isOk())
                .andExpect(MockMvcResultMatchers.content().string(Matchers.is(objectMapper.writeValueAsString(DEFAULT_BOOK))));
    }

    @Test
    public void testShouldReturn404WhenBookMissing() throws Exception {
        Long id = 1L;
        when(repository.findById(id)).thenReturn(Optional.empty());

        this.mockMvc.perform(get("/books/" + id)
                .accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isNotFound());
    }
}

Integration testing of database layer (repositories)

Similarly to the way we tested the web layer, we can test the database layer in isolation, without starting the web layer. This kind of testing in Spring Boot is achieved using the @DataJpaTest annotation. This annotation applies only the auto-configuration related to the JPA layer and by default uses an in-memory database, because it is the fastest to start up and does just fine for most integration tests. We also get access to TestEntityManager, which is an EntityManager with supporting features for JPA integration tests.

Following is an example of testing the database layer of the above application. With these tests we are only checking whether the database layer works as expected: we are not making any REST calls, and we verify the results from BookRepository using the provided TestEntityManager.

@RunWith(SpringRunner.class)
@DataJpaTest
public class BookRepositoryTest {
    @Autowired
    private TestEntityManager entityManager;

    @Autowired
    private BookRepository repository;

    private Book defaultBook;

    @Before
    public void setup() {
        defaultBook = new Book(null, "Asimov", "Foundation", 350);
    }

    @Test
    public void testShouldPersistBooks() {
        Book savedBook = repository.save(defaultBook);

        assertThat(savedBook.getId()).isNotNull();
        assertThat(entityManager.find(Book.class, savedBook.getId())).isNotNull();
    }

    @Test
    public void testShouldFindByIdWhenBookExists() {
        Book savedBook = entityManager.persistAndFlush(defaultBook);

        assertThat(repository.findById(savedBook.getId())).isEqualTo(Optional.of(savedBook));
    }

    @Test
    public void testFindByIdShouldReturnEmptyWhenBookNotFound() {
        long nonExistingID = 47L;
        
        assertThat(repository.findById(nonExistingID)).isEqualTo(Optional.empty());
    }
}

Conclusion

You can find a working example with all of these tests on the following repo: https://gitlab.com/47northlabs/public/spring-boot-testing.

In the following overview, I’m showing the execution times (including application startup) of the different types of tests that I’ve used as examples. We can clearly see that unit tests, as mentioned in the beginning, are the fastest ones and that splitting integration tests into layered tests leads to faster execution times.

  • Unit test: 80 ms
  • Integration test: 620 ms
  • Web layer test: 190 ms
  • Database layer test: 220 ms

Editable Templates in AEM 6.5

Reading Time: 8 minutes

Editable templates were introduced in AEM 6.2 and have been constantly improving with each new version. They allow authors to create and edit templates. Template authors can create and configure templates without the help of the development team. To be able to create and edit templates, the authors must be members of the template-authors group.

Here are some of the benefits of using editable templates:

  • editable templates provide the flexibility to define content policies that persist the design properties. There is no need for design mode, which requires extra permissions for authors to set design properties, along with replication of the design page after any design change
  • they maintain a dynamic connection between pages and the template, which gives template authors the power to change the template structure along with locked content, and those changes are reflected across all pages based on the editable template
  • no extra training is required for authors to create a new page based on an editable template; they create a new page in the same way as with static templates
  • they can be created and edited by your authors
  • after a new page is created, a dynamic connection is maintained between the page and the template. This means that changes to the template structure are reflected on any pages created with that template (changes to the initial content will not be reflected)
  • they use content policies (edited by the template author) to persist the design properties (they do not use Design mode within the page editor)
  • they are stored under /conf

Here are the tasks that the template author can do with the help of the AEM’s template editor:

  • create a new template or copy an existing template
  • manage the life cycle of the template
  • add components to the template and position them on a responsive grid
  • pre-configure the components using policies
  • define which components can be edited on pages created with the template

Create editable templates from the configuration browser

Go to Tools -> General -> Configuration Browser

It will create the basic hierarchy of templates in the /conf directory.

There are three parts of template editor:

  • templates: all dynamic (editable) templates created by authors
  • policies: there are two types of policies
    template level policy: used for adding policies to the template
    component level policy: used for adding policies to the component level
  • template-types: base templates for the creation of new templates in runtime

There are three parts of a template:

  • initial: the initial content of the page created – based on the template. This content could be edited or removed by the author
  • policies: here a particular template is linked to a policy by using cq:policy property
  • structure:
    – the structure allows you to define the structure of the template
    – the components defined in the template level can’t be removed from the resulting page
    – if you want template authors to be able to add and remove components, add a parsys to the template
    – components can be locked and unlocked to allow you to define initial content

How to create base template-type

To start working on editable templates, we need to create a page component. Navigate to /apps/47northlabs-sample-project/components/page and click on create component.

The next step is to create a template type, which helps the authors create their editable templates. (Please note that the configuration is available on GitLab.)

How can authors create editable templates

Next step would be to create the editable template. That can be done from Tools -> General -> Templates -> 47northlabs-sample-project -> Choose empty template.

Add template title, description and open the template. There should be a responsive grid available.

Configure policies

Defining policies allows template authors to configure which components regular authors can place on pages created from the template. Since there is no policy defined on the template yet, no component is assigned to it. Clicking on the first (Policy) icon takes the author to a new screen where they can define which components can be placed on this template.

Create a new policy.

Once done, the allowed components will be available to drag & drop onto the page.

Enable template

Once template authors are done creating a template, they must enable it to make it available in the Sites section, where regular authors can select it to create pages.

Create editable templates from code

Another possibility is to create a sample project based on the Adobe AEM project archetype using the following command:

mvn archetype:generate -DarchetypeGroupId=com.adobe.granite.archetypes -DarchetypeArtifactId=aem-project-archetype -DarchetypeVersion=19

The generated sample project already contains a content-page editable template by default. We are going to create three additional editable templates:

  • home-page
  • landing-page
  • language-page

We are going to add some components as defaults in the templates, and a few pages in the ui.content project. The general idea is to have some test content and to play around with some corner cases. Let’s start.

The demo content is like on the image below.

We have four editable templates available. With property cq:allowedTemplates we control the available templates that can be used to create child pages under a given page. For example, for the content-page we want to make available only the landing-page, so the initial content will look like:

<jcr:content
    ...
    cq:allowedTemplates="[conf/aem-editable-templates-demo/settings/wcm/templates/landing-page]">
    ...
 </jcr:content>

Similarly, the initial content of the landing-page will have a similar configuration:

<jcr:content
    ...
    cq:allowedTemplates="[conf/aem-editable-templates-demo/settings/wcm/templates/content-page]">
    ...
</jcr:content>

What happens if we want to add a fancy new template which should be available only for particular page types? Two solutions come to my mind:

  • write a groovy script that will update existing cq:allowedTemplates property of all the pages created for a given template with the new template
  • update the structure of the given template, so all the existing pages created with that template will be updated

With the second approach, for example, if we want to add that fancy page template to the content page template, we have to add the following configuration to the structural element of the content-page template:

<jcr:content
    ...
    cq:allowedTemplates="[conf/aem-editable-templates-demo/settings/wcm/templates/fancy-page]">
    ...
</jcr:content>

Given this, it is important to point out the differences between initial content and structural elements of the template definition.

Initial Content

  • is merged with the structure during page creation
  • changes to the initial content will not be reflected in all pages created with the given template
  • the jcr:content node is copied to any new page
  • the root node holds a list of components that defines what will be available in the resulting page

Structure

  • is merged with the initial content during page creation
  • changes to the structure will be reflected in all pages created with the given template
  • the cq:responsive node is responsible for the responsive layout
  • the root node defines the available components in the resulting page

Conclusion

With an editable template, you give template authors the flexibility to create and modify templates as they want. It acts as a central place to manage everything about the template (structure, initial content, layout) and the components used in the template. If you are migrating to editable templates, make sure you assess the requirements not only for the template but also for the components.

The code from this blog post is available on GitLab.

The way to the professional VueJS-Project ( Part 1 )

Reading Time: 4 minutes

Okay, it’s usually easy to start a VueJS project. There are many tutorials or Vue-Cli templates and with the Vue-Cli 3.x, it’s super easy to create your own. Here are some links:

BUT what if the requirements increase and become more demanding? Or if component/functional testing and typification are required? Or “newer” technologies such as GraphQL, serverless, state machines/diagrams and module dependency management come into play?

How to start?

We’ll start with the easiest way and use the VueCli API.

# via console
vue create professional-world
# or via the vue cli gui
vue ui

We will be prompted to pick a preset. First, select “Manually select features”.

After we select these features, we will choose the following setup.

Some points about our setup:

  • class-style
    I chose this mode to show you the TypeScript decorators for class-style Vue components. But of course, you can take the normal style; it is not that much new stuff for the first time.
  • history mode
    We do not choose this router mode, because it keeps the environment setup simpler. If you don’t like this – feel free to change it. Read more about it here.
  • css pre-processor
    You can choose whatever you want. I prefer Stylus because it needs less code. 😏
  • linter
    It makes sense to activate TSLint here and also the auto-fix on commit. But we will add husky instead of a plain git hook.
  • cypress vs nightwatch
    We choose Cypress because it has some nice additional testing features, e.g. debuggability, automatic waiting, network traffic control, spies, stubs, clocks, and screenshot and video testing. But we pay for it with limited browser compatibility at the moment – later we will close this gap with regression tests.
  • config placing
    I prefer dedicated config files. They are easier to change, and the package.json stays more readable as you add more dependencies.

Now we will add some more dependencies before we can start:

yarn add -D husky vue-cli-plugin-pug eslint-plugin-pug jest-image-snapshot
  • husky
    It makes git hooks easy
  • pug
    It’s a robust, elegant, feature-rich template engine for Node.js
  • jest-image-snapshot
    It’s a jest matcher for image comparisons. Most commonly used for visual regression testing.

Last configurations

Husky needs the following .huskyrc.js file (if you want, you can delete the git-hook entries in the package.json 😎):

module.exports = {
  "hooks": {
    "pre-commit": "lint-staged"
  }
}

For Pug, we add a Vue CLI plugin and also an ESLint plugin. The linter itself needs the following entry in tslint.json:

"plugins": [ "pug" ],

Start coding

After these configurations, we can start coding ☺️ We start first by refactoring the example files from the Vue-CLI template to Pug syntax. You can use a converter for this, e.g. html-to-pug.com.

Extra tip

Create a new file named .editorconfig and add the following content. It helps you keep the coding style consistent, so you do not need to worry about formatting.

root = true

[*]
charset = utf-8
indent_style = space
indent_size = 2
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

After this, your project should be at the following state:
https://gitlab.com/47northlabs/public/a-professional-vue-world/tree/part-1

Following parts

  • Coding with typescript, stylus and pug ( Part 2 )
  • First steps with unit component, functional and e2e tests ( Part 3 )
  • Vue and VueX meets state machines ( Part 4 )
  • Apollo/GraphQL with serverless services ( Part 5 )
  • Module dependency management in VUE ( Part 6 )

My opinion on talks from JPoint Moscow 2019

Reading Time: 4 minutes

If you have read my previous parts, this is the last one in which I will give my highlights on the talks that I have visited.

First stop was the opening talk from Anton Keks on the topic The world needs full-stack craftsmen. An interesting presentation about current problems in software development, like splitting development roles, and what the real result of that is. Another topic was the agile methodology and whether it really helps development teams build a better product. Also, some words about startup companies and their usual problems. In general, an excellent presentation.

Simon Ritter had, in my opinion, the best talks at JPoint. On the first day he spoke on the topic JDK 12: Pitfalls for the unwary. In this session, he covered the impact of migrating applications from previous versions of Java to the latest one, from aspects like Java language syntax, class libraries and JVM options. Another interesting part was how to choose which versions of Java to use in production. A well-balanced presentation with real problems and solutions.

Next stop Kohsuke Kawaguchi, creator of Jenkins, with the topic Pushing a big project forward: the Jenkins story. It was like a story from a management perspective, about new projects that are coming up and what the demands of the business are. To be honest, it was a little bit boring for me, because I was expecting superpowers coming to Jenkins, but he changed the topic to this management story.

Sebastian Daschner from IBM, his topic was Bulletproof Java Enterprise applications. This session covered which non-functional requirements we need to be aware of to build stable and resilient applications. Interesting examples of different resiliency approaches, such as circuit breakers, bulkheads, or backpressure, in action. In the end, adding telemetry to our application and enhancing our microservice with monitoring, tracing, or logging in a minimalistic way.

Again Simon Ritter, this time with the topic Local variable type inference. His talk was about using var and letting the compiler infer the type of the variable. There were a lot of examples of when it makes sense to use it, but also when you should not. In my opinion, a very useful presentation.

Rafael Winterhalter talked about Java agents, more specifically the Byte Buddy library, and how to write Java agents with no knowledge of Java bytecode. He also showed how Java classes can be used as templates for implementing highly performant code changes, which avoids solutions like AspectJ or Javassist while still performing better than agents implemented with low-level libraries.

To summarize, the conference was excellent, any Java developer would be happy to be here, so put JPoint on your roadmap for sure. Stay tuned for my next conference, thanks for reading, THE END 🙂

Impressions from UIKonf Berlin 2019

Reading Time: 5 minutes

With little doubts in the beginning, big uncertainty and the questions in my head “Is this the right conference?”, “Should I have chosen another conference?”… But it happened, and at the end of the day I’m satisfied with my choice. Some of my colleagues were surprised by my choice (Berlin), but yes, I can definitely say “It was not a mistake”.

So how was it?

Day 1: Social events

I’ve chosen to be with the walking tour group. There were possibilities to be in different groups, like the bicycle group or the boat trip group, but this was my choice. We were split into two groups of 12 people, each with a local tour guide. We visited different spots, like the Memorial to the Murdered Jews of Europe, the popular Checkpoint Charlie, the Berlin French Cathedral, the Brandenburg Gate, etc.

The day finished in a big restaurant where we received our conference badges and promo materials. It was a very relaxed atmosphere and a new chance to meet and introduce yourself to other developers. Some of the participants were professionally oriented and immediately started to talk about iOS topics. Some were on their way to the bar, ordering German beers and the popular “wursts”. There was also a small group that played some kind of table tennis game. I attended all of these social activities.

Day 2: Opening day of the UIKonf 2019

After the opening words and the short introduction, the conference officially started. The first day consisted of 9 presentations. The strongest impression of the day was the presentation by Ellie Shin about the Mock Generator for Swift and how they solved the problem of mocking at Uber. They optimized mock generation to take around 10 seconds instead of the more than 1 hour it needed before.

One of the best presentations of the day was from the lovely Julietta Yaunches. She talked about consistency principles in programming, how to keep the coding style consistent and not make big changes every day, how to decide when to introduce something new in your code, etc.

There were good presentations from Kristina Fox about internationalization of iOS applications, and the opening presentation from Kaya Tomas about accessibility and inclusion in apps, a topic I wasn’t aware of before this conference.

I also have to mention the presentation of Glenna Buford about how to organize the network stack of your iOS application.

The day finished with a social event named Ambassador’s dinner. The local participants of the conference had the task to show the other participants (foreigners, including me) the typical restaurants and bars in Berlin. I was in a group that visited the Hofbrau Munich Restaurant in Berlin. We had some good discussions with colleagues from all over the world. Throughout the evening we enjoyed some tasty German beer and pork.

Day 3: 2nd day of UIKonf 2019

The second day had 9 new presentations and 9 new speakers.

The best of the day in my opinion was the presentation of Kate Castellano. She talked about applications with backend driven UI.

Among the better presentations was the one from Neha Kulkarni about Advanced Colors in Swift.

I will also mention Erica Sadun. She talked about Swift Strings. In her presentation she showed best practices about using strings in Swift, interpolation of strings, etc.

There was also the option to visit 2 workshops on the MyTaxi boat stage near the conference hall. I visited this stage; the first session was about the management process, recruiting and organizing the teams at MyTaxi. The 2nd presentation was about tips and tricks they use for testing their apps.

The day finished with a big party in a l