Microservice architecture: Using Java thread locals and Tomcat/Spring capabilities for automated information propagation

Reading Time: 13 minutes

Inter-microservice communication has always brought questions and challenges to software architects. For example, when it comes to propagating certain information (via HTTP headers, for instance) through a whole chain of calls within the scope of one transaction, we want this to happen outside of the microservices’ business logic. We do not want to deal with these headers in the presentation or service layers of the application, especially if they are not important to the microservice for completing its business logic tasks. I would like to show you how you can automate this process by using Java thread locals and Tomcat/Spring capabilities, using a simple microservice architecture as an example.

Architecture overview

This is the sample architecture we will be looking at. We have a Zuul Proxy Server that will act as a gateway towards our two microservices: the Licensing Microservice and the Organization Microservice. Those three will be the main focus of this article. Let’s say that a single License belongs to a single Organization and a single Organization can have multiple Licenses. Additionally, our services are registered with a Eureka Service Discovery and they pull their application config from a separate Configuration Server.

Simple enough, right? Now, what is the goal we want to achieve?

Let’s say that we have some HTTP headers related to authentication or to tracking the chain of calls the application goes through. These headers arrive at the proxy along with each request from the client side, and they need to be propagated towards each microservice participating in completing the user’s request. For simplicity’s sake, let’s introduce two made-up HTTP headers that we need to send: correlation-id and authentication-token. You may say: “Well, the Zuul proxy gateway will automatically propagate those to the corresponding microservices, if not stated otherwise in its config”. And you are correct, because this gateway has an out-of-the-box feature for achieving exactly that. But what happens when we have inter-microservice communication, for example between the Licensing Microservice and the Organization Microservice? The Licensing Microservice needs to make a call to the Organization Microservice in order to complete some task, and the Organization Microservice needs to receive those headers somehow. The “go-to, technical debt” solution would be to read these headers in the controller/presentation layer of our application, then pass them down to the business logic in the service layer, which in turn is going to pass them to our configured HTTP client, which in the end is going to send them to the Organization Microservice. Ugly, right? What if we have dozens of microservices and need to do this in each and every single one of them? Luckily, there is a much prettier solution that uses a neat Java feature: ThreadLocal.

Java thread locals

The Java ThreadLocal class provides us with thread-local variables. What does this mean? Simply put, it enables setting a context (tying all kinds of objects) to a certain thread, which can later be accessed no matter where we are in the application, as long as we access it from the thread that set it up initially. Let’s look at an example:

public class Main {

    public static final ThreadLocal<String> threadLocalContext = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        threadLocalContext.set("Hello from parent thread!");
        Thread childThread = new Thread(() -> {
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints null
        });
        childThread.start();
        childThread.join(); // Waiting for the child thread to finish
        System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
    }
}

We have a single static final ThreadLocal<String> reference that we use for setting some information to the thread (in this case, the string “Hello from parent thread!” to the main thread). Accessing this variable via threadLocalContext.get() (no matter in which class we are, as long as we are on the same main thread) produces the expected string we have set previously. Accessing it in a child thread produces a null result. What if we set some context to the child thread as well:

threadLocalContext.set("Hello from parent thread!");
Thread childThread = new Thread(() -> {
    threadLocalContext.set("Hello from child thread!");
    System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from child thread!"
});
childThread.start();
childThread.join(); // Waiting for the child thread to finish
System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"

We can notice that the two threads have completely separate contexts. Even though they access the same threadLocalContext reference, in the background the context is kept per calling thread. What if we wanted the child thread to inherit its parent’s context:

public class Main {

    private static final ThreadLocal<String> threadLocalContext = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        threadLocalContext.set("Hello from parent thread!");
        Thread childThread = new Thread(() -> {
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
            threadLocalContext.set("Hello from child thread!");
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from child thread!"
        });
        childThread.start();
        childThread.join(); // Waiting for the child thread to finish
        System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
    }
}

We only changed the ThreadLocal to an InheritableThreadLocal in order to achieve that. We can notice that the first print inside the child thread no longer renders null. The moment we set another context on the child thread, the two contexts become disconnected and the parent keeps its old one. Note that by using the InheritableThreadLocal, the reference to the parent context gets copied to the child. This is not a deep copy, but two references pointing to the same object (in this case, the string “Hello from parent thread!”). If, for example, we used InheritableThreadLocal<SomeCustomObject> and modified some property of the object directly from inside the child thread (threadLocalContext.get().setSomeProperty("some value")), then this would also be reflected in the parent thread, and vice versa. If we want to disconnect the two contexts completely, we just call .set(new SomeCustomObject()) on one of the threads, which makes that thread’s local reference point to the newly created object.
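To make the shared-reference behaviour concrete, here is a minimal sketch. SomeCustomObject is a hypothetical class with a single mutable property, introduced purely for illustration:

public class SharedReferenceDemo {

    // Hypothetical mutable holder, used only to illustrate the shared reference
    static class SomeCustomObject {
        private String someProperty;

        public String getSomeProperty() { return someProperty; }
        public void setSomeProperty(String someProperty) { this.someProperty = someProperty; }
    }

    private static final ThreadLocal<SomeCustomObject> threadLocalContext = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        SomeCustomObject parentObject = new SomeCustomObject();
        parentObject.setSomeProperty("set by parent");
        threadLocalContext.set(parentObject);

        Thread childThread = new Thread(() -> {
            // The child thread sees the same object instance, not a deep copy
            threadLocalContext.get().setSomeProperty("changed by child");
        });
        childThread.start();
        childThread.join();

        System.out.println(threadLocalContext.get().getSomeProperty()); // Prints "changed by child"
    }
}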

Now, you may be wondering: “What does this have to do with automatically propagating headers to a microservice?”. Well, Servlet containers such as Tomcat (which Spring Boot embeds by default) handle each new HTTP request (whether we like/know it or not :-)) in a separate thread. The servlet container picks an idle thread from its dedicated thread pool each time a new call is made. That thread is then used by Spring Boot throughout the processing of the request and the return of the response. Now, it is only a matter of setting up Spring filters and HTTP client interceptors that will set and read the thread-local context containing the HTTP headers.

Solution

First off, let’s create a simple POJO class (using Lombok annotations) that is going to contain both of the headers that need propagating:

@Getter
@Setter
@ToString
@AllArgsConstructor
@NoArgsConstructor
public class RequestHeadersContext {

    public static final String CORRELATION_ID = "correlation-id";
    public static final String AUTHENTICATION_TOKEN = "authentication-token";

    private String correlationId;
    private String authenticationToken;
}

Next, we will create a utility class for setting and retrieving the thread-local context:

public final class RequestHeadersContextHolder {

    private static final ThreadLocal<RequestHeadersContext> requestHeaderContext = new ThreadLocal<>();

    public static void clearContext() {
        requestHeaderContext.remove();
    }

    public static RequestHeadersContext getContext() {
        RequestHeadersContext context = requestHeaderContext.get();
        if (context == null) {
            context = createEmptyContext();
            requestHeaderContext.set(context);
        }
        return context;
    }

    public static void setContext(RequestHeadersContext context) {
        Assert.notNull(context, "Only not-null RequestHeadersContext instances are permitted");
        requestHeaderContext.set(context);
    }

    public static RequestHeadersContext createEmptyContext() {
        return new RequestHeadersContext();
    }
}

The idea is to have a Spring filter, that is going to read the HTTP headers from the incoming request and place them in the RequestHeadersContextHolder:

@Configuration
public class RequestHeadersServiceConfiguration {

    @Bean
    public Filter getFilter() {
        return new RequestHeadersContextFilter();
    }

    private static class RequestHeadersContextFilter implements Filter {

        @Override
        public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
            HttpServletRequest httpServletRequest = (HttpServletRequest) servletRequest;
            RequestHeadersContext context = new RequestHeadersContext(
                    httpServletRequest.getHeader(RequestHeadersContext.CORRELATION_ID),
                    httpServletRequest.getHeader(RequestHeadersContext.AUTHENTICATION_TOKEN)
            );
            RequestHeadersContextHolder.setContext(context);
            filterChain.doFilter(servletRequest, servletResponse);
        }
    }
}

We created a RequestHeadersServiceConfiguration class which, at the moment, has a single Spring filter bean defined. This filter is going to read the needed headers from the incoming request and set them in the RequestHeadersContextHolder (we will need to propagate those later when we make an outgoing request to another microservice). Afterwards, it will resume the processing of the request and will give control to the other filters that might be present in the filter chain. Keep in mind that, all the while, this code executes within the boundaries of the dedicated Tomcat thread, which the container had assigned to us.

Next, we need to define an HTTP client interceptor and link it to a RestTemplate client, which in turn is going to execute the interceptor’s code each time before it makes a request to another microservice. We can add this new RestTemplate bean inside the same configuration file:

@Configuration
public class RequestHeadersServiceConfiguration {
    
    // .....

    @LoadBalanced
    @Bean
    public RestTemplate getRestTemplate() {
        RestTemplate restTemplate = new RestTemplate();
        List<ClientHttpRequestInterceptor> interceptors = restTemplate.getInterceptors();
        interceptors.add(new RequestHeadersContextInterceptor());
        return restTemplate;
    }

    private static class RequestHeadersContextInterceptor implements ClientHttpRequestInterceptor {

        @Override
        @NonNull
        public ClientHttpResponse intercept(@NonNull HttpRequest httpRequest,
                                            @NonNull byte[] body,
                                            @NonNull ClientHttpRequestExecution clientHttpRequestExecution) throws IOException {
            RequestHeadersContext context = RequestHeadersContextHolder.getContext();
            HttpHeaders headers = httpRequest.getHeaders();
            headers.add(RequestHeadersContext.CORRELATION_ID, context.getCorrelationId());
            headers.add(RequestHeadersContext.AUTHENTICATION_TOKEN, context.getAuthenticationToken());
            return clientHttpRequestExecution.execute(httpRequest, body);
        }
    }
}

As you might have guessed, the interceptor reads the header values from the thread-local context and sets them up for the outgoing request. The RestTemplate just adds this interceptor to the list of its already existing ones.

A good practice is to eventually clear/remove the thread-local variables from the thread. With an embedded Tomcat container, skipping this step will not pose a problem, since the Tomcat container dies along with the Spring application. This means that all of the threads will be destroyed altogether and the thread-local memory released. However, if we happen to have a separate servlet container and we deploy our app as a .war instead of a standalone .jar, not clearing the context might introduce memory leaks. Imagine having multiple applications on our standalone servlet container, each of them messing around with thread locals. The container shares its threads with all of the applications. When one of the applications is shut down, the container continues to run, and the threads it lent to that application do not cease to exist. Hence, the thread-local variables will not be garbage collected, since there are still references to them. That is why we are going to define and add an interceptor to the Spring interceptor registry, which will clear the context after a request finishes and the thread can be assigned to other tasks:

@Configuration
public class WebMvcInterceptorsConfiguration implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new RequestHeadersContextClearInterceptor()).addPathPatterns("/**");
    }

    private static class RequestHeadersContextClearInterceptor implements HandlerInterceptor {

        @Override
        public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception exception) {
            RequestHeadersContextHolder.clearContext();
        }
    }
}

All we need to do now is wire these configurations into our microservices. We can create a separate library extracting the config (and maybe upload it to an online repository, such as Maven Central, or our own Nexus) so that we do not need to copy-paste all of the code into each of our microservices. Whatever the case, it is good to make this library easy to use. That is why we are going to create a custom annotation for enabling it:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Import({RequestHeadersServiceConfiguration.class, WebMvcInterceptorsConfiguration.class})
public @interface EnableRequestHeadersService {
}

Usage

Let’s see how we can leverage and use this library from inside a microservice. Only a couple of things are needed.

First, we need to annotate our application with the @EnableRequestHeadersService:

@SpringBootApplication
@EnableRequestHeadersService
public class LicensingServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(LicensingServiceApplication.class, args);
    }
}

Second, we need to inject the already defined RestTemplate in our microservice and use it as given:

@Component
public class OrganizationRestTemplateClient {

    private final RestTemplate restTemplate;

    public OrganizationRestTemplateClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public Organization getOrganization(String organizationId) {
        ResponseEntity<Organization> restExchange = restTemplate.exchange(
                "http://organizationservice/v1/organizations/{organizationId}",
                HttpMethod.GET,
                null,
                Organization.class,
                organizationId
        );
        return restExchange.getBody();
    }
}

We can notice that the getOrganization(String organizationId) method does not handle any HTTP headers whatsoever. It just passes the URL and the HTTP method and lets the imported configuration do its magic. As simple as that! We can now call the getOrganization method wherever we like, without having any knowledge about the headers that are being sent in the background. If we need to read them somewhere in our code, or even change them, we can use the static RequestHeadersContextHolder.getContext()/setContext() methods anywhere in our microservice, without having to parse them from the request object, as sketched below.
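For instance, a minimal sketch of reading and tweaking the context from an arbitrary service class could look like this (the LicenseService class, its method and the token value are made up for illustration):

@Service
public class LicenseService {

    public void performBusinessTask() {
        // Read the headers that the filter captured for the current request thread
        RequestHeadersContext context = RequestHeadersContextHolder.getContext();
        System.out.println("Handling call with correlation id: " + context.getCorrelationId());

        // Adjust the context; any outgoing call made later on this thread will pick it up
        context.setAuthenticationToken("some-refreshed-token");
    }
}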

Feign HTTP Client

If we want to leverage a more declarative style of coding, we can always use the Feign HTTP client. There are ways to configure interceptors here as well, so using the RestTemplate is not strictly required. We can add the new interceptor configuration to the already existing RequestHeadersServiceConfiguration class:

@Configuration
public class RequestHeadersServiceConfiguration {

    // .....

    @Bean
    public RequestInterceptor getFeignRequestInterceptor() {
        return new RequestHeadersContextFeignInterceptor();
    }

    private static class RequestHeadersContextFeignInterceptor implements RequestInterceptor {

        @Override
        public void apply(RequestTemplate requestTemplate) {
            RequestHeadersContext context = RequestHeadersContextHolder.getContext();
            requestTemplate.header(RequestHeadersContext.CORRELATION_ID, context.getCorrelationId());
            requestTemplate.header(RequestHeadersContext.AUTHENTICATION_TOKEN, context.getAuthenticationToken());
        }
    }
}

The new bean we created is going to automatically be wired as a new Feign interceptor for our client.

Next, in our microservice, we can annotate our application class with @EnableFeignClients and then create our Feign client:

@FeignClient("organizationservice")
public interface OrganizationFeignClient {

    @GetMapping(value = "/v1/organizations/{organizationId}")
    Organization getOrganization(@PathVariable("organizationId") String organizationId);
}

All we need to do now is inject our new client anywhere in our services and use it from there, as sketched below. In comparison to the RestTemplate, this is a more concise way of making HTTP calls.
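A minimal usage sketch, assuming a hypothetical OrganizationService class that needs organization data:

@Service
public class OrganizationService {

    private final OrganizationFeignClient organizationFeignClient;

    public OrganizationService(OrganizationFeignClient organizationFeignClient) {
        this.organizationFeignClient = organizationFeignClient;
    }

    public Organization findOrganization(String organizationId) {
        // The Feign interceptor adds the propagated headers behind the scenes
        return organizationFeignClient.getOrganization(organizationId);
    }
}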

Asynchronous HTTP requests

What if we do not want to wait for the request to the Organization Microservice to finish, and want to execute it asynchronously and concurrently (using the @EnableAsync and @Async annotations from Spring, for example)? How are we going to access the headers that need to be propagated in this case? You might have guessed it: by using InheritableThreadLocal instead of ThreadLocal. As mentioned earlier, the child threads we create ourselves (aside from the Tomcat ones, which act as the parents) can inherit their parent’s context. That way we can send header-populated requests in an asynchronous manner. There is no need to clear the context for the child threads (side note: clearing it will not affect the parent thread, and clearing the parent thread’s context will not affect the child thread; it only sets the current thread’s local context reference to null), since these threads are created from a separate thread pool that has nothing to do with the container’s one. The child threads’ memory will be cleared after execution or after the Spring application exits, because eventually they die off.
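As a rough sketch of what this could look like, assuming RequestHeadersContextHolder is backed by an InheritableThreadLocal as described, and assuming the async executor spawns its worker threads from the request thread (inheritance happens at thread creation, so a pre-started, reused pool would not pick up the current request’s context):

@Service
public class AsyncOrganizationService {

    private final OrganizationFeignClient organizationFeignClient;

    public AsyncOrganizationService(OrganizationFeignClient organizationFeignClient) {
        this.organizationFeignClient = organizationFeignClient;
    }

    @Async
    public CompletableFuture<Organization> getOrganizationAsync(String organizationId) {
        // Runs on a child thread that inherited the request headers context,
        // so the interceptor can still populate the outgoing headers
        Organization organization = organizationFeignClient.getOrganization(organizationId);
        return CompletableFuture.completedFuture(organization);
    }
}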

Summary

I hope you will find this neat little trick useful while refactoring your microservices. A lot of Spring’s functionality is actually based on thread locals. Looking into its source code, you will find a lot of similar/same concepts as the ones mentioned above. Spring Security is one example, Zuul Proxy is another.

The full code for this article can be found here.

References

Spring Microservices In Action by John Carnell

The practical guide – Part 4: Dependency injection with Hilt

Reading Time: 9 minutes

The practical guide – Part 1: Refactor android application to follow the MVP design pattern
The practical guide – Part 2: MVP -> MVVM
The practical guide – Part 3: Clean Architecture

Developing an application with clean architecture and design patterns is a path to success. But if we don’t know how to handle the dependencies that we created, this path is not going to be straightforward. It is important to understand what dependency injection is and how to handle dependencies properly, so we don’t end up with a mess. In the previous article, we created a DependencyProvider object, where we provided the dependencies. Now, we will use the Hilt framework, which will provide the dependencies for us. But let’s start with the basics:

Dependency Injection

Before going into dependency injection, let’s define what a dependency is in the first place. When class A uses some methods of class B, we say that A depends on B. So we have the dependency A -> B. Now, imagine that A creates an instance of B: whenever we create A, we don’t need to supply B, because A automatically creates B for itself.

This is not good. Why? Mostly, because we cannot “inject” or provide another implementation of B. Why do we need other implementations of B? Well, the most common use case is testing. If we want to test A, it is very helpful to be able to provide a mock or test implementation of B instead of the real one. So, what is the fix? The fix is very obvious: instead of letting A create B, we pass B in through a constructor/method/parameter of A. That means that whenever we want to create an instance of A, we must provide an instance of B first. So, we will have something like the sketch below.
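A minimal sketch of that idea in plain Java (the class names are generic placeholders):

class B {
    void doSomething() { /* ... */ }
}

class A {

    private final B b;

    // B is supplied from the outside instead of being created inside A,
    // so a test can pass a mock or fake implementation of B
    A(B b) {
        this.b = b;
    }

    void doWork() {
        b.doSomething();
    }
}

// Whoever creates A must provide B first:
// A a = new A(new B());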

This little tweak that we just made has a fancy name: Inversion of Control, which is a more general term for the other fancy term, Dependency Injection. For the difference between these two terms, you can check this SO answer.

Too many dependencies

In the example above, A has only one dependency, and B doesn’t have any. But what if B has dependencies of its own, and those dependencies have their own dependencies, and so on? Then, whenever we want to create an instance of A, we have to create all of those dependencies. And what if we use A in many places? You see where I am going, right?

How do we fix this? One option is to create a class where you handle all the dependencies in one place (like our DependencyProvider class). That is OK, and you can do it yourself. But you can also use a framework that helps you with it.

Hilt

As you may suspect, Hilt is a library that helps us with handling the dependencies. It is a Google library, made specifically for Android. It is the most popular dependency injection library for Android development and is much easier to use than the more general Dagger 2 library.

Before going to the code, we have to check the architecture of the Hilt implementation, and the concept it uses. The three main concepts are:

  • Module – a class in which we provide the dependencies. Here we create methods that return the actual implementation of the dependency.
  • Component – an interface or an abstract class that connects dependencies from the Module and the class where we use those dependencies (In Dagger 2, we had to create these components, but Hilt creates most of them for us).
  • Scope – annotation which connects the lifecycle of the objects that we provide in the module, and the component’s lifecycle. (Hilt also has the most common scopes created for us)

There are a lot of other things that we have to learn, but we will do it with the implementation.

Implementing Hilt

First, we have to add the library to our project. In the project’s root build.gradle file, we have to add hilt-android-gradle-plugin:

buildscript {
    // ...
    dependencies {
        // ...
        classpath 'com.google.dagger:hilt-android-gradle-plugin:2.38.1'
    }
}

Then, in the app module’s build.gradle file, we have to apply the Gradle plugin and add the dependencies:

plugins {
  id 'kotlin-kapt'
  id 'dagger.hilt.android.plugin'
}

dependencies {
    implementation "com.google.dagger:hilt-android:2.38.1"
    kapt "com.google.dagger:hilt-compiler:2.38.1"
}

Next, we’ll annotate our application class with @HiltAndroidApp. If you don’t have an application class, create one (Don’t forget to add the class in the AndroidManifest file).

@HiltAndroidApp
class QuotesApp: Application() {
}

Once we annotate our application class, we can inject dependencies into other Android classes by annotating them with @AndroidEntryPoint. Hilt supports these Android classes: Application (by using @HiltAndroidApp), ViewModel (by using @HiltViewModel), Activity, Fragment, View, Service and BroadcastReceiver. So, in our case, we will annotate our ViewModel with @HiltViewModel and move the getQuotesUseCase property into the constructor:

@HiltViewModel
class MainViewModel @Inject constructor(private val getQuotesUseCase: GetQuotesUseCase) : ViewModel() {
…
}

Next, we will annotate our MainActivity with @AndroidEntryPoint and we will remove every usage of DependencyProvider.

@AndroidEntryPoint
public class MainActivity extends AppCompatActivity {
    ...
}

Now we have told Hilt that it should provide us an instance of GetQuotesUseCase, but it doesn’t know how to do that yet. Because GetQuotesUseCase is our own class, we can use constructor injection to tell Hilt how to create GetQuotesUseCase instances.

class GetQuotesUseCase @Inject constructor(private val quotesRepository: QuotesRepository) {
    ...
}

We can do this for every class that we want to inject: QuotesRepositoryImplementation, LocalDataSourceImplementation and RemoteDataSourceImplementation. This is cool, but we still haven’t told Hilt how to provide the interfaces (QuotesRepository, LocalDataSource, …).

Hilt modules

For interfaces, or classes that we cannot constructor-inject (classes from an outside library), we have to create a Hilt module, where we tell Hilt how to provide instances of those types. In our case, we will create a few modules: RepositoryModule, DataSourceModule, NetworkModule and DatabaseModule, where we will provide all of the dependencies that cannot be constructor-injected.

@Module
@InstallIn(SingletonComponent::class)
abstract class RepositoryModule {
}

We have to annotate the class with @Module and we have to tell in which component this module will be installed. Hilt has already created some components that we can use. Every component defines the scope of the dependencies provided by the module. We installed RepositoryModule in SingletonComponent, which means that the dependencies in this module will be singletons (they will be created only once per application). Here are the components that Hilt provides and the Android classes they inject into:

Hilt component               Injector for
SingletonComponent           Application
ActivityRetainedComponent    N/A
ViewModelComponent           ViewModel
ActivityComponent            Activity
FragmentComponent            Fragment
ViewComponent                View
ViewWithFragmentComponent    View annotated with @WithFragmentBindings
ServiceComponent             Service

There are two ways to provide dependencies in a module: with @Binds and with @Provides. When using @Binds, the return type of the function tells Hilt which interface we are providing, and the function parameter tells it which implementation of that interface to use.

@Module
@InstallIn(SingletonComponent::class)
abstract class RepositoryModule {
    @Binds
    abstract fun bindQuotesRepository(impl: QuotesRepositoryImplementation): QuotesRepository
}

@Module
@InstallIn(SingletonComponent::class)
abstract class DataSourceModule {
    @Binds
    abstract fun bindLocalDataSource(impl: LocalDataSourceImplementation): LocalDataSource

    @Binds
    abstract fun bindRemoteDataSource(impl: RemoteDataSourceImplementation): RemoteDataSource
}

The second way is with @Provides. A function annotated with @Provides supplies the following information for Hilt:

  • The function return type tells Hilt what type the function provides instances of.
  • The function parameters tell Hilt the dependencies of the corresponding type.
  • The function body tells Hilt how to provide an instance of the corresponding type. Hilt executes the function body every time it needs to provide an instance of that type.

@Module
@InstallIn(SingletonComponent::class)
class DatabaseModule {
    @Provides
    @Singleton
    fun provideQuotesDao(quoteDatabase: QuoteDatabase): QuoteDao {
        return quoteDatabase.quoteDao()
    }

    @Provides
    @Singleton
    fun provideQuoteDatabase(@ApplicationContext context: Context): QuoteDatabase {
        return Room.databaseBuilder(
            context,
            QuoteDatabase::class.java,
            "quotes_db"
        ).build()
    }
}

@Module
@InstallIn(SingletonComponent::class)
class NetworkModule {
    @Provides
    fun provideQuotesApi(): QuotesApi {
        return RetrofitClient.getRetrofit().create(QuotesApi::class.java)
    }
}

In provideQuotesDao() we are asking for a QuoteDatabase, and with it we can create our DAO instance. Because we cannot constructor-inject QuoteDatabase, we have to provide it too. For that, we need the application context, and we can ask for it as a parameter annotated with the qualifier @ApplicationContext. Let’s see what qualifiers are:

Qualifiers

In some cases, we need multiple bindings for the same type. For instance, we might need two bindings for the Retrofit client: one with authentication and another without. In order to implement this, we need to create two qualifiers:

@Qualifier
@Retention(AnnotationRetention.BINARY)
annotation class AuthenticatedRetrofitClient

@Qualifier
@Retention(AnnotationRetention.BINARY)
annotation class NotAuthenticatedRetrofitClient

And then, we will have to annotate the binding where we bind/provide it, and where we inject (use) it. We’ll have to change our NetworkModule:

@Module
@InstallIn(SingletonComponent::class)
class NetworkModule {
    @Provides
    @Singleton
    fun provideQuotesApi(@NotAuthenticatedRetrofitClient retrofit: Retrofit): QuotesApi {
        return retrofit.create(QuotesApi::class.java)
    }

    @Provides
    @Singleton
    @NotAuthenticatedRetrofitClient
    fun provideNotAuthenticatedRetrofitClient(): Retrofit {
        return Retrofit.Builder()
            .baseUrl("https://programming-quotes-api.herokuapp.com/")
            .addConverterFactory(GsonConverterFactory.create())
            .build()
    }
}

We don’t use an authenticated Retrofit client in our case, but it is a nice example. There are also some predefined qualifiers provided by Hilt; one example is the @ApplicationContext that we used, and there is also @ActivityContext. You can find more here.

One last thing that I want to mention is component scopes. By default, all bindings in Hilt are unscoped. This means that each time your app requests the binding, Hilt creates a new instance of the needed type. Hilt allows us to scope a binding to a particular component, which means that the same instance of the binding will be used during the lifetime of that component. For every component that Hilt has predefined, it also has a corresponding scope. For instance, SingletonComponent has the @Singleton scope, ActivityComponent has @ActivityScoped, etc. A small sketch follows below.
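Just to illustrate, here is a minimal sketch of a scoped binding, written in Java (the rest of this guide uses Kotlin; ScreenTracker is a hypothetical plain class used only for the example):

@Module
@InstallIn(ActivityComponent.class)
public class TrackingModule {

    @Provides
    @ActivityScoped
    public ScreenTracker provideScreenTracker() {
        // One instance per Activity lifetime; without @ActivityScoped,
        // a new ScreenTracker would be created for every injection request
        return new ScreenTracker();
    }
}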

For our application, that’s it. We can get rid of the DependencyProvider file and we have successfully implemented Dagger Hilt. You can check out the code here.

The practical guide – Part 3: Clean Architecture

Reading Time: 7 minutes

The practical guide – Part 1: Refactor android application to follow the MVP design pattern
The practical guide – Part 2: MVP -> MVVM

We know what design patterns are and how to implement them. But if we want to have a more robust, scalable, maintainable and testable application, we have to do more. Today we will learn how to implement the Clean Architecture proposed by Robert C. Martin (Uncle Bob).

Why is Architecture important?

Architecture is important for managing the complexity of the project. So, for a small project we may not need one, but for big ones it is a lifesaver. It makes the project easier to maintain, scale and test. It also makes the project more organized, so everyone can understand what the project is about just by looking at the classes.

Clean Architecture Introduction

Design patterns told us how to separate the presentation from the data manipulation. Clean architecture (like any other architecture) goes a little deeper and shows us how we should separate the data manipulation itself. Clean architecture has only a few layers, and each layer has its own responsibilities.

You have probably seen the well-known Clean Architecture diagram with its concentric circles (all credit for the image goes to Uncle Bob). It tells us how the layers are organized. As we can see, there are outer layers and inner layers. That is important because there are a few rules that we have to follow:

  • Every layer can communicate only with the inner layers. So, the layers don’t know anything about the outer layers. The dependencies are provided by outer layers with Dependency Injection (hopefully I will make a post about this).
  • The most inner layer is the most abstract, and the most outer layer is the most concrete. This means that inner layers should contain business logic and outer layers should contain the implementation.

You may have noticed that I mentioned a few layers and not an exact number. That is because the clean architecture doesn’t define an exact number of layers. You can adapt the number of layers to your needs.

The flow

  • View – The responsibility of the view stays the same as specified in the design patterns. Its only responsibilities are to display the data and react to user actions.
  • View Model/Presenter – Also as specified in the design patterns, their responsibility is to pass the data between the view and the model. But instead of making the network/database calls, they use the Use Cases for that. So, these classes shouldn’t know where the data comes from, or where it goes. They just pass the data between the Use Cases and the Views.
  • Use Case (or Interactor) – These are the actions that the users can trigger. A use case contains the data that the action needs in order to be completed, and calls the repository to perform the action. It can decide on which thread the action should be done, and on which thread the result should be posted.
  • Repository – The responsibility of the repository is to decide which data source should handle the data. For every Use Case, there should be a method in the repository.
  • Data Source – There may be a few data sources per application, like Network, Database, Cache etc. Each one contains the actual implementation of the data access.

Between the layers we can have Mapper classes, since the data can differ between layers. For example, we might want to store data in the database in a different shape from the one we get from the network.

The implementation

Let’s start with the data sources. We will create two data source interfaces: RemoteDataSource and LocalDataSource.

interface RemoteDataSource {
  fun getQuotes(): Call<List<Quote>>
}

class RemoteDataSourceImplementation(private val api: QuotesApi) : RemoteDataSource {
  override fun getQuotes(): Call<List<Quote>> = api.quotes
}

interface LocalDataSource {
  fun insertQuotes(quotes: List<Quote>)
  fun getQuotes(): List<Quote>
}

class LocalDataSourceImplementation(private val quoteDao: QuoteDao) : LocalDataSource {
  override fun insertQuotes(quotes: List<Quote>) {
    quoteDao.insertQuotes(quotes)
  }

  override fun getQuotes(): List<Quote> = quoteDao.allQuotes
}

As you can see, we added only one method to RemoteDataSource, just for getting the quotes, but there are two methods in LocalDataSource, since we have to store the quotes in the database after getting them from the remote source. The very important thing to notice here is that we are not creating the DAO and API objects; we are asking for them to be provided through the constructor (Dependency Injection). This will enable us to easily switch to different network or database libraries without changing anything here.

Let’s continue with the repository. We said that its responsibility is to decide where the data will come from. In our case, we want to return the quotes from the network, but if something fails we want to display the quotes from the database.

interface QuotesRepository {
  fun getQuotes(): List<Quote>
}

class QuotesRepositoryImplementation(
  private val localDataSource: LocalDataSource,
  private val remoteDataSource: RemoteDataSource
) : QuotesRepository {
  override fun getQuotes(): List<Quote> {
    return try {
      val response = remoteDataSource.getQuotes().execute()
      if (response.isSuccessful) {
        val quotes = response.body() ?: return localDataSource.getQuotes()
        localDataSource.insertQuotes(quotes)
        return quotes
      }
      localDataSource.getQuotes()
    } catch (e: Exception) {
      localDataSource.getQuotes()
    }
  }
}

The logic is dumb, but I made it like that just for simplicity. Also very important: we are using the interfaces, not the actual implementations.

Let’s move to the use case. Here we will use the repository and we will switch between threads. We will use Kotlin coroutines, but you can use anything. If you are working with RxJava, here you will specify the Schedulers.

class GetQuotesUseCase(private val quotesRepository: QuotesRepository) {
  fun getQuotes(onResult: (List<Quote>) -> Unit = {}) {
    GlobalScope.launch(Dispatchers.IO) {
      val response = quotesRepository.getQuotes()
      GlobalScope.launch(Dispatchers.Main.immediate) {
        onResult(response)
      }
    }
  }
}

Usually, there is a base UseCase class, where you abstract the logic for threading. For simplicity, I skipped the error handling.

Last, we can clean up the ViewModel. I also converted it to Kotlin, and now it looks like this:

class MainViewModel : ViewModel() {
  lateinit var getQuotesUseCase: GetQuotesUseCase

  var quotesLiveData = MutableLiveData<List<Quote>>()

  fun getAllQuotes() {
    getQuotesUseCase.getQuotes { quotes: List<Quote> ->
      quotesLiveData.postValue(quotes)
    }
  }
}

I won’t explain anything here, I will just let you admire. Just kidding! You must be asking how we create getQuotesUseCase. But that is a story for another time. For now, I will create a class DependencyProvider, and I will provide everything there. You don’t have to worry about this for now.

And that’s it. Now we have an application that follows Clean Architecture guidelines. Here is the link for the project.

Final Notes

Now that our project follows Clean Architecture guidelines, we can do many things. We can easily change frameworks and libraries with as few changes as possible (just changes in the DependencyProvider and everything will work). We can organize the application better with many repositories and many data sources. The project will be easy to understand, just by looking at the use cases. Testing of the application will be very easy (hopefully I will make a post about that, too).

I hope that this post will help you understand how Clean Architecture works in practice. If you have any questions or need any help don’t hesitate to ask. Happy Coding!

CloudFormation: Passing values and parameters to nested stacks

Reading Time: 7 minutes

Why CloudFormation?

CloudFormation allows provisioning and managing AWS resources with simple configuration files, which let us spend less time managing those resources and have more time to focus on our applications that run on AWS instead.

We can simply write a configuration template (a YAML/JSON file) that describes the resources we need in our application (like EC2 instances, DynamoDB tables, or automated application monitoring in CloudWatch). We do not need to manually create and configure individual AWS resources and figure out what is dependent on what. More importantly, it is scalable, so we can re-use the same template, with a bunch of parameters, and have the entire infrastructure replicated in different stages/environments.

Another important aspect of CloudFormation is that we have our infrastructure as code, which can be version controlled, reviewed and easily maintained.

Nested stacks

CloudFormation nested stacks diagram

As our infrastructure grows, common patterns can emerge, which can be separated into dedicated templates and re-used later in other templates. Good examples are load balancers and VPC networks. There is another reason that may look unimportant: CloudFormation stacks have a limit of 200 resources per stack, which can easily be reached as our application grows. That is why nested stacks can be really useful.

A nested stack is a simple stack resource of type AWS::CloudFormation::Stack. Nested stacks can themselves contain other nested stacks, resulting in a hierarchy of stacks, as shown in the diagram above. There must be exactly one root stack, which is called the parent.

Passing parameters to the nested stacks

One of the biggest challenges when having nested stacks is the parameter exchange between stacks. Without parameters, it would be impossible to have robust and dynamic stacks that are scalable and flexible.

The simplest example would be deploying the same CloudFormation stack to multiple stages, like beta, gamma and prod (dev, pre-prod, prod, or any other naming convention you prefer).

Depending on which stage you deploy your application, you may want to set different properties to certain resources. For example, in the development stage, you will not have the same traffic as prod, therefore you can fine-grain the resources for your needs, and prevent spending extra money for unused resources.

Another example is when an application is deployed to various regions that have different traffic consumption and time spikes. For instance, an application may have 1 million users in Europe, but only 100,000 in Asia. Using stack parameters allows you to reduce the resources you use in the latter region, which can significantly impact your finances.

Below is a code snippet showing a simple use case where a DynamoDB table is created in a nested stack that receives the Stage parameter from the parent stack. Depending on the stage, at deploy time we set different read and write capacities on our table resource.

Root stack

In the parent stack, we define the Stage parameter under the Parameters section. We later pass the parameters to the nested stack, which is created from a template child_stack.yml stored in an S3 bucket.

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Root stack
Parameters:
  Stage:
    Type: String
    Default: beta
    AllowedValues:
        - beta
        - gamma
        - prod
  TestRegion:
    Type: String
Resources:
    DynamoDBTablesStack:
      Type: AWS::CloudFormation::Stack
      Properties:
        TemplateURL: https://n47-cloudformation.s3.eu-central-1.amazonaws.com/child_stack.yml
        Parameters:
            Stage:
                Ref: Stage

Child stack

In the nested stack, we define the Stage parameter, just like we did in the parent. If we do not define it here as well, the creation will fail because the passed parameter (from the parent) is not recognized. Whatever parameters we pass to the nested stack have to be defined in its template’s Parameters section.

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Nested stack
Parameters:
  Stage:
    Type: String
    Default: beta
    AllowedValues:
        - beta
        - gamma
        - prod
Mappings:
    UsersDDBReadWriteCapacityPerStage:
        beta:
            ReadCapacityUnits: 10
            WriteCapacityUnits: 10
        gamma:
            ReadCapacityUnits: 50
            WriteCapacityUnits: 50
        prod:
            ReadCapacityUnits: 500
            WriteCapacityUnits: 1000
Resources:
    UserTable:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: user_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: user_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: !FindInMap [UsersDDBReadWriteCapacityPerStage, !Ref Stage, ReadCapacityUnits]
                WriteCapacityUnits: !FindInMap [UsersDDBReadWriteCapacityPerStage, !Ref Stage, WriteCapacityUnits]
            TableName: Users

The Mappings section in the child template is used for fetching the corresponding read/write capacity values at deploy time, when the actual value of the Stage parameter is available. More about Mappings can be found in the official documentation.

Output resources from nested stacks

Having many nested stacks usually implies cross-stack communication. This encourages more template code reuse.

We will do a simple illustration by extracting the name of the DynamoDB table we created in the nested stack earlier and passing it as a parameter to a second nested stack, and also by exporting its value.

In order to expose resources from a stack, we need to define them in the Outputs section of the template. We start by adding an output resource, in the child stack, with logical id UsersDDBTableName, and an export named UsersDDBTableExport.

Outputs:
    UsersDDBTableName:
        # extract the table name from the arn
        Value: !Select [1, !Split ['/', !GetAtt UserTable.Arn]] 
        Export:
            Name: UsersDDBTableExport

Note: For each AWS account, Export names must be unique within a region.

Then we create a second nested stack, which will contain two DynamoDB tables, one named UsersWithParameter and the second one UsersWithImportValue. The former is created by passing the table name from the first child stack as a parameter, and the latter by importing the value that has been exported as UsersDDBTableExport.

(Note that this is just an example to showcase the two options for accessing resources between stacks, not a real-world scenario.)

For that, we added this stack definition in the root’s stack resources:

SecondChild:
    Type: AWS::CloudFormation::Stack
    Properties:
        TemplateURL: https://n47-cloudformation.s3.eu-central-1.amazonaws.com/child_stack_2.yml
        Parameters:
            TableName:
                Fn::GetAtt:
                  - DynamoDBTablesStack
                  - Outputs.UsersDDBTableName

Below is the entire content of the second child stack:

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Nested stack
Parameters:
    TableName:
        Type: String
        
Resources:
    UserTableWithParameter:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: customer_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: customer_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: 1
                WriteCapacityUnits: 1
            TableName: !Join ['', [!Ref TableName, 'WithParameter'] ]
    UserTableWithImportValue:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                - AttributeName: customer_id
                  AttributeType: 'S'
            KeySchema:
                - AttributeName: customer_id
                  KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: 1
                WriteCapacityUnits: 1
            TableName: !Join ['', [!ImportValue UsersDDBTableExport, 'WithImportValue'] ]

Even though we achieved the same thing with nested stack outputs and with exported values, there is a difference between them. When you export a value, it is accessible to any external stack within the same region. Nested stack outputs, on the other hand, can only be passed as parameters to other nested stacks within the same parent.

Notes:

  • Cross-stack references across regions cannot be created. You can use the intrinsic function Fn::ImportValue to import only values that have been exported within the same region
  • You cannot delete a stack if another stack references one of its outputs
  • You cannot modify or remove an output value that is referenced by another stack

Below are some screenshots from the AWS console, illustrating the created stacks, from the code snippets shared above:

Figure 1: root stack containing two nested stacks

Figure 2: first nested stack containing Users DynamoDB table

Figure 3: second nested stack containing UsersWithImportValue and UsersWithParameter DynamoDB tables

You can download the source templates here.


If you have any questions or feedback, feel free to comment here.

The practical guide – Part 2: MVP -> MVVM

Reading Time: 5 minutes

The practical guide – Part 1: Refactor android application to follow the MVP design pattern

This is the second part of the practical guide. In the first part, we talked about refactoring an Android application to MVP. Now, instead of refactoring the base code to MVVM, I would like to refactor the MVP application to MVVM. That way we will learn the differences between MVP and MVVM.

Why MVVM?

First, I should tell you that Google accepted MVVM as the preferred design pattern for building Android applications, so they have built tools that help us follow this pattern. This is a great reason to learn and use it, but why would Google choose MVVM over MVP (or other design patterns)? Well, they know best, but my opinion is that they chose MVVM because it has one less dependency, due to the difference in communication between the ViewModel and the View. I will explain this in the next section.

Difference between MVP and MVVM

As we know, MVP stands for Model-View-Presenter, while MVVM stands for Model-View-ViewModel. The Model and the View in MVVM are the same as in the MVP pattern. The only difference is between the Presenter and the ViewModel. More precisely, the difference is in the communication between the View and the ViewModel/Presenter.

If you compare the MVP and MVVM diagrams, the only difference is the arrow from the Presenter to the View. What does it mean? It means that in MVP you have an instance of the Presenter in the View, and you have an instance of the View in the Presenter, hence the double-headed arrow in the MVP diagram. In MVVM you only have an instance of the ViewModel in the View. But how do you communicate with the View? How can the View know when the ViewModel has made changes to the Model? For that, we need the Observer pattern. We have observables (subjects) in the ViewModel, and the View subscribes to these observables. So, when an observable is changed, the View is automatically informed about that change and can update its views.

For the practical implementation of this Observer pattern, we either get help from external libraries like RxJava, or we use the Android Architecture Components. We will use the latter in this example.

Refactoring

Project structure: MVP classes vs. MVVM classes

First, we can get rid of the MainPresenter and MainView interfaces. We will have only one new class, MainViewModel, that replaces the Presenter. We then extend MainViewModel from androidx.lifecycle.ViewModel. This is the first class that we will use from the Android lifecycle components; it helps us deal with the lifecycle of the view model. It survives configuration changes, so it is a nice place for storing UI-related data. Next, we will add the quoteDao and quotesApi fields. We will initialize them with setters instead of the constructor, because the creation of the ViewModel is a little bit different. We no longer have the MainView, and we also don’t need the bindView() and dropView() methods.

Next, we will create the observable objects. These are the objects that we want to display in the MainActivity, wrapped with androidx.lifecycle.LiveData or some other implementation of LiveData. This class helps us with the implementation of the observer pattern. We will create the objects in the MainViewModel, and we will observe them in the MainActivity. We want to display a list of Quote objects, so we will create a MutableLiveData<List<Quote>> object. We use MutableLiveData because we want to change the value of the object manually.

getAllQuotes() will be very similar to the one in the Presenter, minus the interaction with the MainView. So, instead of:

if (view != null) {
  view.displayQuotes(response.body());
}

we will have:

quotesLiveData.setValue(response.body());

We will also change the implementation of the DatabaseQuotesAsyncTask: instead of passing the MainView, we will create a new interface that delivers the quotes from the async task, and we will pass an implementation of that interface instead. In that implementation, we will update quotesLiveData, the same as above. A rough sketch of this follows below.
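This is roughly what that callback could look like (the interface name and the exact wiring are my own illustration, not necessarily the exact code from the repository):

// Hypothetical callback used by the async task to deliver the quotes
interface QuotesListener {
    void onQuotesLoaded(List<Quote> quotes);
}

static class DatabaseQuotesAsyncTask extends AsyncTask<Void, Void, List<Quote>> {

    private final QuoteDao quoteDao;
    private final QuotesListener listener;

    DatabaseQuotesAsyncTask(QuoteDao quoteDao, QuotesListener listener) {
        this.quoteDao = quoteDao;
        this.listener = listener;
    }

    @Override
    protected List<Quote> doInBackground(Void... voids) {
        return quoteDao.getAllQuotes();
    }

    @Override
    protected void onPostExecute(List<Quote> quotes) {
        // The ViewModel passes a listener that simply does quotesLiveData.setValue(quotes)
        listener.onQuotesLoaded(quotes);
    }
}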

In the MainActivity, we remove the implementation of the MainView. We won’t need to override the onDestroy() method. We can replace the MainPresenter with the MainViewModel. We will create the ViewModel as follows:

viewModel = new ViewModelProvider(this, new ViewModelProvider.NewInstanceFactory()).get(MainViewModel.class);
viewModel.setQuoteDao(quoteDao);
viewModel.setQuotesApi(quotesApi);

Then we will observe the quotesLiveData observable and display the list.

viewModel.quotesLiveData.observe(this, quotes -> {
    QuotesAdapter adapter = new QuotesAdapter(quotes);
    rvQuotes.setAdapter(adapter);
});

And in the end, we call viewModel.getAllQuotes() to fetch the quotes.

And that’s it! Now we have an application that follows the MVVM design pattern. You can check the full code here.

The practical guide: Refactor android application to follow the MVP design pattern

Reading Time: 8 minutes

At the beginning of my career, I was very lucky to start a few new projects at my company. And every story was short and the same: This is going to be the best project ever -> Oops, it got a little messy -> Ooops, everything is a mess, there is no going back. So, I realized that it isn’t enough to be able to build anything the PO wants; I have to find the right way of doing it.

I was thinking: OK, I will learn unit testing, my code will be tested, and everything will be fine. So, I got into testing and started to read everything about it, especially unit testing. I started to understand it a little bit, so I decided to write a few unit tests in my existing (messy) projects. I created test classes for some fragment and started to type some test methods, thinking I would easily figure out what to test, and when I had to type the name of the test method, my mind froze.

I had no idea what to test. I started to divide the code into methods, but it wasn’t helping at all. After a ton of research and frustration, I realized that I couldn’t test anything until I learned some design patterns and/or architectures.

Design patterns and architectures

In theory, there is a small difference between design patterns and architectures. Basically, they are both terms that define the organization of code, but the difference is the layer they operate on. Architecture is a more general term than design pattern. There are many design patterns and architectures, and you can’t say that one is the best. Today we will look at the MVP design pattern, and we will try to refactor really bad code into better code. Hopefully, after that, we will try to refactor the better code into an even better one, with a clean-architecture-like structure.

Application example

I created an application that fetches some programming quotes, saves them into a Room database and displays them in a RecyclerView. If the network call fails, we display the quotes from the database. It is a common use case that complicates our lives a little bit. You can check out the no_pattern branch and run the app.

public class MainActivity extends AppCompatActivity {

    private QuoteDao quoteDao;
    private RecyclerView rvQuotes;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        quoteDao = QuoteDatabase.getQuoteDatabase(this).quoteDao();

        rvQuotes = findViewById(R.id.rvQuotes);
        rvQuotes.setLayoutManager(new LinearLayoutManager(this));

        QuotesApi quotesApi = RetrofitClient.getRetrofit().create(QuotesApi.class);
        Call<List<Quote>> quotesCall = quotesApi.getQuotes();
        quotesCall.enqueue(new Callback<List<Quote>>() {
            @Override
            public void onResponse(Call<List<Quote>> call, Response<List<Quote>> response) {
                if (response.isSuccessful()) {

                    new Thread(() -> quoteDao.insertQuotes(response.body())).start();

                    QuotesAdapter adapter = new QuotesAdapter(response.body());
                    rvQuotes.setAdapter(adapter);
                }
            }

            @Override
            public void onFailure(Call<List<Quote>> call, Throwable t) {
                DatabaseQuotesAsyncTask asyncTask = new DatabaseQuotesAsyncTask(quoteDao, rvQuotes);
                asyncTask.execute();
            }
        });
    }

    static class DatabaseQuotesAsyncTask extends AsyncTask<Void, Void, List<Quote>> {

        private QuoteDao quoteDao;
        private WeakReference<RecyclerView> rvQuotes;

        public DatabaseQuotesAsyncTask(QuoteDao quoteDao, RecyclerView rvQuotes) {
            this.quoteDao = quoteDao;
            this.rvQuotes = new WeakReference<>(rvQuotes);
        }

        @Override
        protected List<Quote> doInBackground(Void... voids) {
            return quoteDao.getAllQuotes();
        }

        @Override
        protected void onPostExecute(List<Quote> quotes) {
            QuotesAdapter adapter = new QuotesAdapter(quotes);
            rvQuotes.get().setAdapter(adapter);
        }
    }
}

This code is terrible. I mean, it is doing its job, but it is not testable, not scalable and not maintainable. MainActivity is 82 lines of code, just for displaying a list of quotes. Imagine if we add more calls here, more features to this screen (and usually there are more features); this code would easily become a mess.

How do we fix this? We will start with a design pattern: we will refactor this code to follow the MVP pattern. Now, what is the MVP design pattern and how do we implement it? MVP is one of the most common design patterns. It is very close to MVC and MVVM. All of these design patterns (and others) share the idea that we should define and divide the responsibilities of the classes. All of the design patterns mentioned above have 3 types of classes:

  • Model – data layer, used for managing business model classes
  • View – UI layer, used for displaying the data
  • Controller/Presenter/ViewModel – logic layer, intercepts the actions from the UI layer, updates the data and tells the UI layer to update itself

As you can see, the Model and View classes are the same across all three patterns (though they may have different responsibilities); the difference is in the remaining class.

Let’s talk strictly about MVP and try to convert our app to follow the MVP pattern. Which classes belong to which type?

  • Model – Here belongs just the Quote class, so it stays the same
  • View – Here belong Activities and Fragments. In our case MainActivity
  • Presenter – We should create a presenter for every Activity/Fragment

So, in the data layer (Model) we have just our Quote class and it stays the same. The View and the Presenter are left. Let’s create interfaces for the Views and Presenters. That way we will create the communication between them.

public interface BaseView<P> {
}

public interface BasePresenter<V> {
    void bindView(V view);
    void dropView();
}

Every Activity/Fragment should implement an interface extending BaseView, and every presenter should implement an interface extending BasePresenter. In MVP, the communication between the View and the Presenter goes both ways (View <—> Presenter). This means that our Activity/Fragment should have an instance of the presenter and the presenter should have an instance of the view. Also, the Presenter should not contain any Android components (an easy check: you shouldn’t have any android.* imports in the presenter). So, let’s create the View and the Presenter for MainActivity. I will organize the packages by feature because that works better when we follow a design pattern.

Now, we have MainActivity that implements MainView, and MainPresenterImpl that implements MainPresenter. We said that MainActivity should have an instance of MainPresenter and MainPresenterImpl should have an instance of MainView.

public interface MainPresenter extends BasePresenter<MainView> {
}

public class MainPresenterImpl implements MainPresenter {
    private MainView view;

    @Override
    public void bindView(MainView view) {
        this.view = view;
    }

    @Override
    public void dropView() {
        this.view = null;
    }
}

public interface MainView extends BaseView<MainPresenter> {
}

public class MainActivity extends AppCompatActivity implements MainView {
    //...
    private MainPresenter presenter;
   // ...
}

You will notice the methods bindView() and dropView() in the presenter. That is because the view is responsible for its lifecycle, and it should inform the presenter about its presence/absence. We should call these methods in the lifecycle methods of the Activity/Fragment, using one of these three matching pairs: bindView() in onCreate/onStart/onResume and dropView() in the corresponding onDestroy/onStop/onPause. Use one of the pairs, but do not mix them. For example: don’t call bindView() in onCreate and dropView() in onPause. Let’s call these methods in the MainActivity:

public class MainActivity extends AppCompatActivity implements MainView {
    //...
    private MainPresenter presenter;
    
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        presenter = new MainPresenterImpl();
        presenter.bindView(this);

        //....
    }

    @Override
    protected void onDestroy() {
        presenter.dropView();
        super.onDestroy();
    }
}

Next, we should define the methods in MainView and MainPresenter. In MainPresenter we want to get the quotes (it doesn’t matter to the view whether the presenter gets them from the network or from the database), so we’ll create a method getAllQuotes(); in MainView we want to display the quotes, so we’ll create a method displayQuotes(List<Quote> quotes). After adding these methods to the interfaces, we will get compiler errors in MainActivity and MainPresenterImpl, because we need to implement the new methods. In MainActivity we’ll just create a new adapter with the quotes and pass the adapter to the RecyclerView. In the Presenter, it gets a little trickier. We will move the network and database code from MainActivity to MainPresenterImpl, and wherever we used to create an adapter and set it on the RecyclerView, we instead call the displayQuotes() method of MainView.

public class MainActivity extends AppCompatActivity implements MainView {
    //...
    @Override
    public void displayQuotes(List<Quote> quotes) {
        QuotesAdapter adapter = new QuotesAdapter(quotes);
        rvQuotes.setAdapter(adapter);
    }
}

// in presenter when we get the quotes
if (view != null) {
    view.displayQuotes(quotes);
}

Moreover, because QuoteDatabase requires a Context, and we can’t have a Context in the Presenter, instead of creating the QuoteDao inside the Presenter, we create it in MainActivity and pass it into the Presenter via the constructor. Finally, in the onCreate() method of the Activity, we call presenter.getAllQuotes(). You can check out the mvp_basic branch. A rough sketch of the resulting presenter is shown below.
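
This is only a sketch of how the refactored presenter could end up looking, assuming the constructor parameters described above; the exact code lives in the mvp_basic branch, and the background handling of the database fallback is omitted for brevity.

import java.util.List;

import retrofit2.Call;
import retrofit2.Callback;
import retrofit2.Response;

public class MainPresenterImpl implements MainPresenter {

    private MainView view;
    private final QuoteDao quoteDao;
    private final QuotesApi quotesApi;

    // The Activity creates the DAO and the API client (it has the Context) and passes them in
    public MainPresenterImpl(QuoteDao quoteDao, QuotesApi quotesApi) {
        this.quoteDao = quoteDao;
        this.quotesApi = quotesApi;
    }

    @Override
    public void bindView(MainView view) {
        this.view = view;
    }

    @Override
    public void dropView() {
        this.view = null;
    }

    @Override
    public void getAllQuotes() {
        quotesApi.getQuotes().enqueue(new Callback<List<Quote>>() {
            @Override
            public void onResponse(Call<List<Quote>> call, Response<List<Quote>> response) {
                if (response.isSuccessful()) {
                    new Thread(() -> quoteDao.insertQuotes(response.body())).start();
                    if (view != null) {
                        view.displayQuotes(response.body());
                    }
                }
            }

            @Override
            public void onFailure(Call<List<Quote>> call, Throwable t) {
                // Fall back to the database; the background query and the switch back to
                // the main thread (the original AsyncTask) are omitted here for brevity
            }
        });
    }
}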

What have we done now? We refactored MainActivity to follow the MVP design pattern. Now, MainActivity has only one responsibility: displaying the quotes. It doesn’t need to be unit tested, because it doesn’t contain any logic. We moved the logic into the Presenter. Unfortunately, the Presenter is now hard to test too. We will try to make it better in the next article.

ReactiveX in Android with an example – RxJava

Reading Time: 5 minutes

What is Reactive Programming?

Reactive programming is programming with asynchronous data streams. It enables creating streams of anything – events, failures, variables, messages and so on. By using reactive programming in your application, you can create streams and then perform actions on the data emitted by those streams.

Observer Pattern

The observer pattern is a software design pattern which defines a one-to-many relationship between objects. If the value/state of the observed object changes, the objects observing it get notified and updated.

ReactiveX

ReactiveX is a polyglot implementation of reactive programming which extends the observer pattern and provides a bunch of data-manipulation operators and threading abilities.

RxJava

RxJava is the JVM implementation of ReactiveX.

  • Observable – is a stream which emits the data
  • Observer – receives the emitted data from the observable
    • onSubscribe() – called when subscription is made
    • onNext() – called each time observable emits
    • onError() – called when an error occurs
    • onComplete() – called when the observable completes the emission of all items
  • Subscription – made when the observer subscribes to an observable to receive the emitted data. An observable can be subscribed to by many observers
  • Scheduler – defines the thread where the observable emits and the observer receives it (for instance: background, UI thread)
    • subscribeOn(Schedulers.io())
    • observeOn(AndroidSchedulers.mainThread())
  • Operators – enable manipulation of the streamed data before the observer receives it
    • map()
    • flatMap()
    • concatMap() etc.

Example usage on Android

Tools, libraries, services used in the example:

  • Libraries:
    • ButterKnife – simplifying binding for android views
    • RxJava, RxAndroid – for reactive libraries
    • Retrofit2 – for network calls
  • Fake online REST API:
  • Java object generator from JSON file

What we want to achieve is to fetch the users from the fake REST API, show them in a RecyclerView, and load each user’s todo list to show the number of todos in the same RecyclerView, without blocking the UI.

Here we define our endpoints. Retrofit2 supports RxJava Observable as a return type for network calls.

    @GET("/users")
    Observable<List<User>> getUsers();

    @GET("/users/{id}/todos")
    Observable<List<Todo>> getTodosByUserID(@Path("id") int id);

    @GET("/todos")
    Observable<List<Todo>> getTodos();

Let’s fetch users:

  • .getUsers() – returns an observable of a list of users
  • .subscribeOn(Schedulers.io()) – makes getUsers() perform on a background thread
  • .observeOn(AndroidSchedulers.mainThread()) – we switch to the UI thread
  • .flatMap() – we set the data on the RecyclerView and return an observable of users, which will be needed for fetching the todo lists
    private Observable<User> getUsersObservable() {
        return ServicesProvider.getDummyApi()
                .getUsers()
                .subscribeOn(Schedulers.io())
                .observeOn(AndroidSchedulers.mainThread())
                .flatMap((Function<List<User>, ObservableSource<User>>) users -> {
                    adapterRV.setData(users);
                    return Observable.fromIterable(users);
                });
    }

Now, let’s fetch the todo list of each user using the 2nd endpoint.

Since we are not going to make another call from inside this mapping, the mapping function does not need to return an Observable. So, here we use map() instead of flatMap() and simply return the User.

    private Observable<User> getTodoListByUserId(User user) {
        return ServicesProvider.getDummyApi()
                .getTodosByUserID(user.getId())
                .subscribeOn(Schedulers.io())
                .map(todoList -> {
                    sleep();
                    user.setTodoList(todoList);
                    return user;
                });
    }

Now, let’s fetch the todo lists using the 3rd endpoint.

The difference to the 2nd endpoint is that this one returns the todos of all users. Here we can see the usage of the filter() operator.

    private Observable<User> getAllTodo(User user) {
        return ServicesProvider.getDummyApi()
                .getTodos()
                .subscribeOn(Schedulers.io())
                .flatMapIterable((Function<List<Todo>, Iterable<Todo>>) todoList -> todoList)
                .filter(todo -> todo.getUserId().equals(user.getId()) && todo.getCompleted())
                .toList().toObservable()
                .map(todoList -> {
                    sleep();
                    user.setTodoList(todoList);
                    return user;
                });
    }
  • .flatMapIterable() – is used to convert Observable<List<T>> to Observable<T>, which is needed to filter each item in the list
  • .filter() – we filter the todos to get each user’s completed todo list
  • .toList().toObservable() – for converting back to Observable<List<T>>
  • .map() – we set the filtered list on the user object, which will be used in the next code snippet

Now, the last step, we call the methods:

        getUsersObservable()
                .subscribeOn(Schedulers.io())
                .concatMap((Function<User, ObservableSource<User>>) this::getTodoListByUserId) // could also be flatMap()
                .observeOn(AndroidSchedulers.mainThread())
                .subscribe(new Observer<User>() {
                    @Override
                    public void onSubscribe(Disposable d) {
                        disposables.add(d);
                    }

                    @Override
                    public void onNext(User user) {
                        adapterRV.updateData(user);
                    }

                    @Override
                    public void onError(Throwable e) {
                        Log.e(TAG, e.getMessage());
                    }

                    @Override
                    public void onComplete() {
                        Log.d(TAG, "completed!");
                    }
                });
  • subscribeOn() – makes the next operator perform on a background thread
  • concatMap() – here we call one of our methods, getTodoListByUserId() or getAllTodo()
  • .observeOn(), .subscribe() – every time a user’s todo list is fetched from the API on the background thread, the observable emits the data and triggers onNext(), so we update the RecyclerView on the UI thread
Left screenshot: getTodoListByUserId() with flatMap(). Right screenshot: getAllTodo() (filter usage) with concatMap().

The difference between flatMap and concatMap is that the former emits items in an arbitrary order, while the latter preserves the order of the source.

Disposable

When an observer subscribes to an observable, a Disposable object is provided in the onSubscribe() method. It can later be used to terminate the background work, so that the callback does not return to a dead activity.

private final CompositeDisposable disposables = new CompositeDisposable();

observableObject.subscribe(new Observer<User>() {
    @Override
    public void onSubscribe(Disposable d) {
        // keep the disposable so we can cancel the work later
        disposables.add(d);
    }

    // onNext(), onError() and onComplete() omitted for brevity
});

@Override
protected void onDestroy() {
    super.onDestroy();
    // cancel all running subscriptions when the activity is destroyed
    disposables.dispose();
}

Summary

In this post, I tried to give brief information about reactive programming, observer pattern, ReactiveX library and a simple example on android.

Why should you use RxJava in your projects?

  • less boilerplate code
  • easy thread management
  • thread-safety
  • easy error handling

Gitlab Repository

Example sourcecode: https://gitlab.com/47northlabs/public/android-rxjava


CQRS and Event Sourcing with Axon Framework

Reading Time: 5 minutes

What is CQRS?

CQRS stands for Command Query Responsibility Segregation. It is a pattern where the allowed actions in the application are divided into two groups: Commands and Queries. A command changes the state of an object but does not return any data, while a query returns data and does not change any state. This design style comes from the need for different strategies when scaling the reading part and the writing part of our application. By dividing methods into these two categories, you can use a different model to update information than the model you use to read information. By separate models we most commonly mean different object models, probably running in different logical processes, perhaps on separate hardware. In a web example, a user looks at a page rendered using the query model; if they initiate a change, that change is routed to the separate command model for processing, and the resulting change is communicated to the query model to render the updated state. A minimal sketch of the split in code follows below.
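
To make the split concrete, here is a purely illustrative sketch (all names are made up, and this is not Axon-specific code): the command interface only mutates state, the query interface only reads it.

// Commands: change state, return nothing
public interface OrderCommandService {
    void placeOrder(PlaceOrderCommand command);
    void cancelOrder(String orderId);
}

// Queries: return data, change nothing
public interface OrderQueryService {
    OrderSummary findById(String orderId);
    List<OrderSummary> findByCustomer(String customerId);
}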

Event Sourcing

Event Sourcing is a specialized pattern for data storage. Instead of storing the current state of an entity, every change of state is stored as a separate event that makes sense to a business user. The current state is calculated by applying all events that changed the state of the entity. In terms of CQRS, the stored events are the results of executing a command against an aggregate on the write side. The event store also transmits the events it saves, so the read side can process them and build the data sets it needs for queries. A naive illustration follows below.
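
As a naive, hand-rolled illustration of “current state = all past events applied in order” (the class names are made up and this is not the Axon API):

public class Account {

    private long balance;

    // Rebuild the current state by replaying the full event history
    public static Account fromEvents(List<AccountEvent> history) {
        Account account = new Account();
        for (AccountEvent event : history) {
            account.apply(event);
        }
        return account;
    }

    private void apply(AccountEvent event) {
        if (event instanceof MoneyDepositedEvent) {
            balance += ((MoneyDepositedEvent) event).getAmount();
        } else if (event instanceof MoneyWithdrawnEvent) {
            balance -= ((MoneyWithdrawnEvent) event).getAmount();
        }
    }
}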

AXON Framework

Axon is a CQRS framework for Java. It is an open-source framework that provides the building blocks that CQRS requires and helps to create scalable and extensible applications while maintaining consistency in distributed systems. It provides basic building blocks for writing aggregates, commands, queries, events, sagas, command handlers, event handlers, query handlers, repositories, communication buses and so on. Axon Framework comes with a Spring Boot Starter, which makes using it as easy as adding a dependency. Axon will automatically configure itself based on best practices and the other dependencies you have set; providing explicit configuration is a matter of adding a bean for the component that needs to be configured differently. Furthermore, Axon Framework integrates with Spring Cloud Discovery, Spring Messaging and Spring Cloud AMQP.

AXON components

  • Command – a combination of expressed intent (which describes what you want to be done) and the information required to undertake action based on that intent. The Command Model is used to process the incoming command, to validate it and to define the outcome. Commands are typically represented by simple and straightforward objects that contain all data necessary for a command handler to execute them
  • Command Bus – receives commands and routes them to the Command Handlers
  • Command Handler – the component responsible for handling commands of a certain type and taking action based on the information contained inside them. Each command handler responds to a specific type of command and executes logic based on the contents of the command
  • Event Bus – dispatches events to all interested event listeners. This can be done either synchronously or asynchronously. Asynchronous event dispatching allows the command execution to return and hand control back to the user while the events are being dispatched and processed in the background; not having to wait for event processing to complete makes an application more responsive. Synchronous event processing, on the other hand, is simpler and is a sensible default. By default, synchronous processing handles event listeners in the same transaction that also processed the command
  • Event Handlers – the components that act on incoming events. They typically execute logic based on decisions that have been made by the command model. Usually, this involves updating view models or forwarding updates to other components, such as 3rd party integrations
  • Query Handlers – the components that act on incoming query messages. They typically read data from the view models created by the event handlers
  • Query Bus – receives queries and routes them to the Query Handlers. A query handler is registered at the query bus with both the type of query it handles and the type of response it provides. Both the query and the result type are typically simple, read-only DTO objects
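
To give a feeling for how some of these components look in code, here is a minimal, hypothetical aggregate sketch; the package names assume Axon 4.x, and the command and event classes (IssueCardCommand, CardIssuedEvent) are made up for illustration.

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;

public class GiftCardAggregate {

    @AggregateIdentifier
    private String cardId;

    // Required by Axon to rebuild the aggregate from its events
    protected GiftCardAggregate() {
    }

    // Command handler: validates the intent and decides the outcome
    @CommandHandler
    public GiftCardAggregate(IssueCardCommand command) {
        // business validation would go here
        AggregateLifecycle.apply(new CardIssuedEvent(command.getCardId(), command.getAmount()));
    }

    // Event sourcing handler: applies the state change, also used when replaying events
    @EventSourcingHandler
    public void on(CardIssuedEvent event) {
        this.cardId = event.getCardId();
    }
}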

Will each application benefit from Axon?

Unfortunately not. Simple CRUD (Create, Read, Update, Delete) applications which are not expected to scale will probably not benefit from CQRS or Axon. Fortunately, there is a wide variety of applications that do benefit from Axon.
The Axon platform is being used by a wide range of companies in highly demanding sectors such as healthcare, banking, insurance, logistics and the public sector. Axon Framework is free and the source code is available in a Git repository: https://github.com/AxonFramework.

My opinion on talks from JPoint Moscow 2019

Reading Time: 4 minutes

If you have read my previous parts, this is the last one in which I will give my highlights on the talks that I have visited.

The first stop was the opening talk by Anton Keks on the topic The world needs full-stack craftsmen. It was an interesting presentation about current problems in software development, like splitting development roles and what the real result of that is. Another topic was agile methodology and whether it really helps development teams build a better product, plus some words about startup companies and their usual problems. In general, an excellent presentation.

Simon Ritter, in my opinion, had the best talks at JPoint. On the first day his topic was JDK 12: Pitfalls for the unwary. In this session, he covered the impact of migrating applications from previous versions of Java to the latest one, from aspects like Java language syntax, class libraries and JVM options. Another interesting part was how to choose which versions of Java to use in production. A well-balanced presentation with real problems and solutions.

The next stop was Kohsuke Kawaguchi, creator of Jenkins, with the topic Pushing a big project forward: the Jenkins story. It was a story told from a management perspective, about new projects that are coming up and what the demands of the business are. To be honest, it was a little bit boring for me, because I was expecting superpowers coming to Jenkins, but he kept to this management story.

Sebastian Daschner from IBM presented Bulletproof Java Enterprise applications. This session covered which non-functional requirements we need to be aware of to build stable and resilient applications, with interesting examples of different resiliency approaches, such as circuit breakers, bulkheads or backpressure, in action. In the end, he showed how to add telemetry to an application and enhance a microservice with monitoring, tracing or logging in a minimalistic way.

Simon Ritter again, this time with the topic Local variable type inference. His talk was about using var and letting the compiler infer the type of the variable. There were a lot of examples of when it makes sense to use it, but also of when you should not. In my opinion, a very useful presentation.

Rafael Winterhalter talked about Java agents; to be more specific, he covered the Byte Buddy library and how to program Java agents with no knowledge of Java bytecode. He also showed how Java classes can be used as templates for implementing highly performant code changes, avoiding solutions like AspectJ or Javassist while still performing better than agents implemented with low-level libraries.

To summarize, the conference was excellent; any Java developer would be happy to be there, so put JPoint on your roadmap for sure. Stay tuned for my next conference. Thanks for reading, THE END 🙂

Spring Cloud Stream (event-driven microservice) with Apache Kafka… in 15 Minutes!

Reading Time: 5 minutes

Introduction

In March 2019 Shady and I visited Voxxed Days Romania in Bucharest. If you haven’t seen our post about that, check it out now! There were some really cool talks and so I decided to pick one and write about it.

At my previous employer, we switched from a monolithic service to a microservice architecture. After implementing about 20 different microservices in 2 years, the communication between them got more complex. In addition to that, all microservices were communicating synchronously! Did we build another monolith? I recently read a blog post about exactly that: https://thenewstack.io/synchronous-rest-turns-microservices-back-monoliths/

I tried to recreate the complexity of synchronous communication in microservices in this picture 😅

So back to the topic… This is why I have always been interested in asynchronous communication (streams, message bus, pub/sub, whatever). I heard a lot about Uber using Google Cloud’s Pub/Sub, how it’s highly scalable, asynchronous and, most importantly, just cool to use! I was inspired by Mark Heckler’s talk “Drinking from the Stream: How to Use Messaging Platforms for Scalability & Performance” and tried it out myself. Of course, I’m sharing my experiences and example with you…

Technologies

Spring Cloud Stream

Spring Cloud Stream is a framework for building highly scalable event-driven microservices connected with shared messaging systems.

https://spring.io/projects/spring-cloud-stream#overview

Spring Cloud Stream supports a variety of binder implementations:

We will use Spring Cloud Stream to create 3 different projects (microservices), with the Apache Kafka Binder using the Spring Initializr.

Documentation

https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/2.1.2.RELEASE/single/spring-cloud-stream.html

Kafka

Apache Kafka is a distributed streaming platform. Communication between endpoints is driven by messaging middleware such as Apache Kafka or RabbitMQ.

Documentation

https://kafka.apache.org/documentation/

Let’s get started!

Prerequisites

So this is all you need to get yourself started:

  • Maven 3.2+
  • Java 7+ (Java 8 highly recommended!)
  • Docker

The idea: Money money money 💰

Let’s build a money-printing machine 🤑! So the idea is…

  • Producer
    • Prints money (coins and notes) in different currencies, values and qualities.
  • Processor
    • Fetch money and polish coins/notes to “perfect” quality. This is quality assurance 😉.
  • Consumer
    • Fetch (spend) money and show type, currency, value and quality.
Three microservices communicating through Kafka

Bootstrap your application with Spring Initializr

Create a new project just with a few clicks 🖱

  • Project: Maven Project
  • Language: Java
  • Spring Boot: 2.1.4
  • Project Metadata
    • Group: com.47northlabs
    • Artefact: moneyprinter-producer
  • Dependencies
    • Web
    • Cloud Stream
    • Kafka
    • Lombok
Screenshot from my setup in the Spring Initializr

Implementation of the producer

Create or edit /src/main/resources/application.properties

server.port=0

spring.cloud.stream.bindings.output.destination=processor
spring.cloud.stream.bindings.output.group=processor

spring.cloud.stream.kafka.binder.auto-add-partitions=true
spring.cloud.stream.kafka.binder.min-partition-count=4

The destination defines the pipeline (or topic) to which the message is published.

Create or edit /src/main/java/com/northlabs/lab/moneyprinterproducer/MoneyprinterProducerApplication.java

package com.northlabs.lab.moneyprinterproducer;

import lombok.AllArgsConstructor;
import lombok.Data;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.messaging.support.MessageBuilder;

import java.util.Random;
import java.util.UUID;

@SpringBootApplication
public class MoneyprinterProducerApplication {

	public static void main(String[] args) {
		SpringApplication.run(MoneyprinterProducerApplication.class, args);
	}

}

@EnableBinding(Source.class)
@EnableScheduling
@AllArgsConstructor
class Spammer {
	private final Source source;
	private final SubscriberGenerator generator;

	@Scheduled(fixedRate = 1000)
	private void spam() {
		Money money = generator.printMoney();
		System.out.println(money);
		source.output().send(MessageBuilder.withPayload(money).build());
	}
}

@Component
class SubscriberGenerator {
	private final String[] type = "Coin, Note".split(", ");
	private final String[] currency = "CHF, EUR, USD, JPY, GBP".split(", ");
	private final String[] value = "1, 2, 5, 10, 20, 50, 100, 200, 500, 1000".split(", ");
	private final String[] quality = "poor, fair, good, premium, flawless, perfect".split(", ");
	private final Random rnd = new Random();
	private int i = 0, j = 0, k=0, l=0;

	Money printMoney() {
		i = rnd.nextInt(2);
		j = rnd.nextInt(5);
		k = rnd.nextInt(10);
		l = rnd.nextInt(6);

		return new Money(UUID.randomUUID().toString(), type[i], currency[j], value[k], quality[l]);
	}
}

@Data
@AllArgsConstructor
class Money {
	private final String id, type, currency, value, quality;
}

Here we simply create the whole microservice in one class. The most important part is the Spammer class, which builds a new Money message every second and sends it to the output channel. SUPER SIMPLE! Now you already have a microservice which prints money and publishes it to the destination topic/pipeline “processor” 👏.

Implementation of the processor

https://gitlab.com/47northlabs/public/spring-cloud-stream-money/blob/master/moneyprinter-processor/src/main/resources/application.properties

https://gitlab.com/47northlabs/public/spring-cloud-stream-money/blob/master/moneyprinter-processor/src/main/java/com/northlabs/lab/moneyprinterprocessor/MoneyprinterProcessorApplication.java
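
The linked files contain the full processor. As a rough idea, a processor in the same annotation-based style as the producer could look like the sketch below; the pass-through body and the binding setup are assumptions, not the actual repository code. The input binding would point at the “processor” topic and the output binding at the topic the consumer reads from.

package com.northlabs.lab.moneyprinterprocessor;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.handler.annotation.SendTo;

@SpringBootApplication
@EnableBinding(Processor.class)
public class MoneyprinterProcessorApplication {

	public static void main(String[] args) {
		SpringApplication.run(MoneyprinterProcessorApplication.class, args);
	}

	// Reads money from the input topic, "polishes" it and forwards it to the output topic
	@StreamListener(Processor.INPUT)
	@SendTo(Processor.OUTPUT)
	public String polish(String money) {
		// The real polishing (setting the quality to "perfect") lives in the repository;
		// here the payload is simply passed through
		return money;
	}
}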

Implementation of the consumer

https://gitlab.com/47northlabs/public/spring-cloud-stream-money/blob/master/moneyprinter-consumer/src/main/resources/application.properties

https://gitlab.com/47northlabs/public/spring-cloud-stream-money/blob/master/moneyprinter-consumer/src/main/java/com/northlabs/lab/moneyprinterconsumer/MoneyprinterConsumerApplication.java
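
Again, the full consumer is in the linked files; a minimal consumer in this style might look like the sketch below (the payload handling is simplified and the names are assumptions).

package com.northlabs.lab.moneyprinterconsumer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

@SpringBootApplication
@EnableBinding(Sink.class)
public class MoneyprinterConsumerApplication {

	public static void main(String[] args) {
		SpringApplication.run(MoneyprinterConsumerApplication.class, args);
	}

	// Receives (spends) the polished money and prints it to the log
	@StreamListener(Sink.INPUT)
	public void spend(String money) {
		System.out.println("Spending: " + money);
	}
}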

Docker for Kafka and Zookeeper

Run these commands to create a network and run Kafka and Zookeeper in docker containers.

docker network create kafka

docker run -d --net=kafka --name=zookeeper -e ZOOKEEPER_CLIENT_PORT=2181 confluentinc/cp-zookeeper:5.0.0
docker run -d --net=kafka --name=kafka -p 9092:9092 -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka:5.0.0

If you can’t connect, add this line to /etc/hosts to ensure proper routing to container network “kafka”:

127.0.0.1 kafka

Start messaging platforms with the docker start command:

docker start zookeeper
docker start kafka

It’s a wrap!

Congratulations! You made it. Now just run your producer, processor and consumer and it should look something like this:

My example

Getting started

  1. Run docker/runKafka.sh
  2. Run docker/startMessagingPlatforms.sh
  3. Start producer, processor and consumer microservice (e.g. inside IntelliJ)
  4. Enjoy the log output 👨‍💻📋

Download the source code

The whole project is freely available on our Gitlab repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.

https://gitlab.com/47northlabs/public/spring-cloud-stream-money

Live from JPoint, Moscow 2019

Reading Time: 3 minutes

The conference took place at the World Trade Center in Moscow and started at 9 am. It looked like it would be huge right from the beginning: well organized, with big conference halls. The first step was attendee registration.

After completing the registration and picking up some welcome packages, we had a starting coffee break and drinks. We also visited most of the big companies’ representative stands set up in front of the conference halls. You can find interesting free materials there, like stickers, manuals and packages from the company you are visiting.

The next step was the conference itself. There were four conference halls, each one with different speakers. The opening talk was given by Anton Keks from Codeborne on the topic The world needs full-stack craftsmen.

After the opening talk, the conference continued with different speakers on every track. Some of them were Russian speakers, so we focused on the English ones. Every talk was one to one and a half hours long, followed by a coffee break in the lounge room. There were also two lunch breaks included and, in the end, the party at 20:00. You can check the full schedule here.

Day two had exactly the same setup, with some different speakers or the same ones with different topics. In general, the whole organization of the conference was amazing, as it should be for a world-class event. I highly recommend visiting if you have the chance.

Stay tuned for my next part where I will describe my opinion of the talks that I have visited…

DEVOXX UKRAINE, Here I come

Reading Time: 2 minutes

As a developer, when you need to extend your programming knowledge, theoretically, practically, or both, you need to go to a conference. Conferences are also a good chance to meet peers in your field. Unfortunately, most software engineering conferences focus on introducing new technologies more than on defining how a software engineer becomes an architect. That makes developer conferences a place to broaden your technical horizon, but not the vertical one. Exactly this makes DEVOXX so special. I have already had the pleasure to visit a DEVOXX conference in Europe as well as other conferences. Check out the article about that here!

What we expect from this conference 👤💬?

Normally, I focus on the new technical topics, like what is new in Java and what the new versions of Java offer. However, this time I would like to focus on both the technical topics and software architecture, as it is a massive and fast-moving discipline. I expect some training and insights that help me stay current with the latest trends in technologies, frameworks and techniques, and build the skills needed to advance my career.

Source: https://earlycoders.com/so-you-want-to-learn-to-code-are-you-a-newbie-programmer-developer-or-a-software-engineer/

Organization to visit Devoxx Ukraine conference

The conference will be held in Kiev, so my colleague Jeremy and I will be travelling from Zurich airport to Kiev. According to some articles, Kiev is considered one of the cheapest cities in Europe. We will try to explore the nightlife of Kiev. To be honest, I didn’t expect the conference ticket to be so cheap; it costs just 150 USD.

My private trips:

I will write another blog post to explain what my colleague Jeremy and I did in Kiev. I can say one thing for now: “Stay tuned”!

FrontCon 2019 in Riga, Latvia

Reading Time: 3 minutes

Only a few weeks are left until I go to my first tech conference this year. For me, travelling means learning something new, and I like learning. Especially the immersion in a foreign culture and the contact with people from other countries makes me happy.

It’s always time to grow beyond yourself. 🤓

BUT why visit Riga just for a conference? Riga is a beautiful city on the Baltic Sea and the capital of Latvia. Latvia is a small country with Russia, Lithuania, Estonia and the sea as its neighbours. AND it has been a childhood dream of mine to get to know this city. 😍

The dream was created by an old computer game named “The Patrician”. It’s a historical trading simulation and my brothers and I loved it. We lost a lot of hours playing it instead of finding a way to hack it. 😅
For this dream, I will take some extra private days to visit Riga and the country as well. 😇

Preparation

The most important preparations, such as flight, hotel, workshop and conference, are completed.

Furthermore, I plan to visit some of the famous Latvian palaces and the Medieval Castle of Riga. I also need some tips from you for the evenings: restaurants and sights. Feel free to share them in the comments. 😊

Some facts about the conference

There are four workshops available on the first day:

  • Serverless apps development for frontend developers
  • Vue state management with vuex
  • From Zero to App: A React Workshop
  • Advanced React Workshop

I chose the workshop with VueJS of course 😏 and I’m really happy to see that I can visit most of the talks in the following days. There are some interesting talks like “Building resilient frontend architecture” and “AAA 3D graphics”, as well as talks about security and serverless frontend development. Click here for the full list of tracks.

My expectations

Above all, I’m open to events where I can learn new things, therefore I have no great expectations in advance. So I’m looking forward to the

  • VueJS & React parts
  • Visiting the speakers from Wix, N26 and SumUp

I’m particularly curious about the open spaces between the speeches. I will be glad to have some great talks with the guys. 🤩

For my private trips:

That’s all for now

to be continued…

Hackdayz #18: SMS Forwarding Android App

Reading Time: 5 minutes

Team members

Youssef Idelhoussain, Senior Front-end Engineer
Shehab Eltobgy, Test Manager

Abstract

This is the real deal.. Prepare your battery, connect to a good network to get thousands of SMS. Whoever you are, maybe you come from a faraway land, maybe you don’t understand my language, maybe you are from a country that I never heard the name of…
One thing is for sure, you will get the SMS. So, whoever you are, wherever you are, our app has special skills which will make this world easier for you, starting with getting an SMS 📲🤩

Having such nice days during our Hackdayz did not prevent us from thinking about adding more practical benefit to our company by improving the current app. After we had our lunch, we had the power to start working, even though my vegetarian lunch did not taste good at all.
Our aim was that by the end of Hackdayz the app would be released on the Play Store and the code made open-source for further improvements!

Side Notes

The app should have:

  • Rules: number and where it should be posted
  • Environment: Slack, Email, and others…
  • Some new settings: such as the ability for the user to set a password… (we were so optimistic)

Agenda

  • What is the problem you want to solve?
  • Who experiences that problem?
  • How do you want to solve that problem?
  • Why is this a better solution?

Having such a funny team combination, a front-end developer and a test manager trying to develop an Android app, made these days so much fun, as we were literally underdogs. But, just to get our spirits up, we went to the gym, then to the sauna where I could not even see my hands, and finally to the swimming pool.

Working on the project 👨‍💻 at Hackdayz18 in St. Gallen

We were so ambitious that we planned to create an app with an infinite number of environments and very flexible rules (such amateur dreams). After some time, as Thomas A. Edison stated, “I have not failed. I’ve just found 10,000 ways that won’t work.”, we realized that we were not going to create the app as it was actually planned 🤯. Nevertheless, the days were cool enough to make us laugh while we were failing several times.

Youssef was so ambitious that he told me: “I will never go to bed mad. I’m gonna stay up and fight!”. After 10 minutes, each of us went to his room to sleep 😴. Due to the effort I had spent during 3 hours in the gym, sauna and swimming pool, I wanted to sleep; looking at my hand, I couldn’t recognize how many fingers I had.

The next day, we started working again. Now I could start with my real work: my first task was to find a beautiful design. I decided to choose a simple design due to the time pressure. Besides, improvisation is too good to leave to chance.

GitLab Repository

Using the GitLab repository mentioned below, we were able to create the app.

https://gitlab.com/47northlabs/public/sms-to-slack

Results

We were somehow not so much satisfied with the results, actually shocked 😱😱😱!!!

  • The app has been developed with the ability to add up to 5 environments, unlike the infinite number of environments we had expected to reach.. such youth dreams 😅
  • The app could not use email as one of the environments, due to the inability to find a library via which the app can send the message to an email address while running in the background…. experience is simply the name we give our mistakes 😄

Conclusion and implication

The app has been created successfully and applied to one of our android devices using +41 76 75x xxxx.

Screenshot of our Slack and Slackbot channel

Future features and challenges would be…

1. Adding email as a new environment. Let us see how this is going to be manageable 🤔.
2. Adding a password for the app. We are still so optimistic 😁.
3. Adding the ability to add more (unlimited) environments, with a recycle bin 🧹 to remove them when needed.

By the end of the day, I was just totally shocked by the difference between what had been planned and what was actually achieved. But it was a funny and exhausting experience.

Voxxed Days Bucharest & Devoxx Ukraine – HERE WE COME!

Reading Time: 4 minutes

Last year conferences

Already in 2018, we had the pleasure to visit 2 conferences in Europe:

We had a great time visiting these two cities 🙌 and we can’t wait to do that again this year 😎.

What do we expect from the two conferences in 2019 👤💬?

Like last year, we are interested in several different topics. I am looking forward to the Methodology & Culture slots, while Shady is most interested in the Java stuff. All in all, we hope that there are several interesting speeches about:

  • Architecture & Security
  • Cloud, Containers & Infrastructure
  • Java
  • Big Data & Machine Learning
  • Methodology & Culture
  • Other programming languages
  • Web & UX
  • Mobile & IoT

We ❤️ food!

The title speaks for itself. We just love food 🍴! Travelling ✈️ gives a good opportunity to see and taste something new 👅. All over the world, every culture has a unique and special cuisine, each very different because of the different methods of cooking. We try to taste (almost) everything when we arrive in new countries and cities.

We are really looking forward to seeing what Bucharest’s and Kiev’s specialities are 🍽 and to trying them all! Here are some snapshots from our trips to the conferences in Amsterdam (Netherlands) and Krakow (Poland) in 2018…

What about the costs 💸?

One great thing at N47 is that your personal development 🧠 is important to the company. Besides hosted internal events and workshops, you can also visit international conferences 🛫 and everything is paid 💰. Every employee can choose the conferences/workshops he desires, gather the information about the costs and request his visit. One step of the approval process is writing 📝 about his expectations in a blog post. This is exactly what you are reading 🤓📖 at the moment.

Costs breakdown (per person)

Flights: 170 USD
Hotel: 110 USD (3 days, 2 nights)
Conference: 270 USD
Food and public transportation: 150 USD
Knowledge gains: priceless
Explore new country and food: priceless
Spend time with your buddy: priceless
—–
Total: 700 USD
—–

Any recommendations for Bucharest or Kiev?

We never visited the two cities 🏙, so if you have any tips or recommendations, please let us know in the comments 💬!

Microservices vs. Monoliths

Reading Time: 3 minutes

What are microservices and what are monoliths?

The difference between monolith and microservice architecture

The task that microservices perform is quite simple: the mapping of software into modules. Now, one could argue that classes, packages etc. also fulfill the same task. That’s right, but the main difference lies in deployment. It is possible to deploy a microservice without “touching” the other microservices.

Classic monoliths, on the other hand, force deployment of the entire “project”.

Advantages of microservices and disadvantages of monoliths

1. Imagine that you are working on a project that contains thousands or even tens of thousands of lines of code. With each new function, the lines of code grow. Every developer loses the overview here, some a little earlier, others a little later; ultimately, it is impossible to keep track.
In addition, with each new feature, strange things break elsewhere. This makes it very difficult to locate bugs and gets on every developer’s nerves.
Unlike monoliths, microservices are defined as small modules. Each microservice serves a specific task, so manageability becomes a lot easier.

2. The data of a monolith is located in one pool, which each submodule accesses via an interface. If you make a change to the data structure, you have to adapt each submodule, otherwise you have to expect errors.
Microservices are responsible for their own data, and the structure of other services is irrelevant to them. Each service can define its own structure, and changes to that structure have no impact on other services, which saves a lot of time and, above all, prevents errors.

3. A microservice depends only on the microservices it communicates with, so in the event of a bug, the entire system does not fail. In the monolithic approach, however, a bug in one module means the failure of the entire system.

4. Another disadvantage arises with updates. The whole monolith has to be redeployed, which costs an enormous amount of time.
For microservices, only the services where changes have been made are deployed. This saves time and nerves.

5. Detecting errors in the monolithic approach can take a long time for large projects.
Microservices, on the other hand, are “small” and greatly simplify troubleshooting.

6. The team behind a monolithic architecture works as a whole, which makes technical coordination difficult.
The teams of a microservice architecture, however, are divided into small groups, so technical coordination is simplified.

Conclusion

The microservice approach divides a big task into small subtasks. This greatly simplifies the work for developers because, on the one hand, the overview is easy to keep, in contrast to the monolithic approach, and on the other hand, the microservices are independent of each other.