The practical guide – Part 2: MVP -> MVVM

The practical guide – Part 1: Refactor an Android application to follow the MVP design pattern

This is the second part of the practical guide. In the first part, we talked about refactoring an Android application to MVP. Now, instead of refactoring the base code to MVVM, I would like to refactor the MVP application to MVVM. That way, we will learn the differences between MVP and MVVM.

Why MVVM?
First, I should tell you that Google has adopted MVVM as the preferred design pattern for building Android applications, and they have built tools that help us follow it. This is a great reason to learn and use this pattern, but why would Google choose MVVM over MVP (or other design patterns)? Well, they know best, but my opinion is that they chose MVVM because it has one less dependency, due to the difference in communication between the ViewModel and the View. I will explain this in the next section.

Difference between MVP and MVVM
As we know, MVP stands for Model-View-Presenter, while MVVM stands for Model-View-ViewModel. So, the Model and the View in MVVM are the same as in the MVP pattern; the only difference lies between the Presenter and the ViewModel. More precisely, the difference is in the communication between the View and the ViewModel/Presenter.

As you can see in the diagrams, the only difference is the arrow from the Presenter to the View. What does it mean? It means that in MVP you have an instance of the Presenter in the View and an instance of the View in the Presenter, hence the double arrow in the diagram. In MVVM, you only have an instance of the ViewModel in the View. But how does the ViewModel communicate with the View? How can the View know when the ViewModel has made changes to the Model? For that, we need the Observer pattern. We have observables (subjects) in the ViewModel, and the View subscribes to these observables. When an observable changes, the View is automatically informed about the change and can update its views.

For a practical implementation of this Observer pattern, we can either get help from external libraries like RxJava, or we can use the new Architecture Components from Android. We will use the latter in this example.

Refactoring

MVP classes
MVVM classes

First, we can get rid of the MainPresenter and MainView interfaces and have only one new class, MainViewModel, that replaces the Presenter. We make MainViewModel extend androidx.lifecycle.ViewModel. This is the first class we will use from the Android lifecycle components; it helps us deal with the lifecycle of the view model. It survives configuration changes, so it is a nice place for storing UI-related data. Next, we will add the quoteDao and quotesApi fields. We will initialize them with setters instead of the constructor, because the creation of the ViewModel is a little bit different. We no longer have the MainView, and we don't need the bindView() and dropView() methods either.

Next, we will create the observable objects. These are the objects that we want to display in the MainActivity, wrapped in androidx.lifecycle.LiveData or one of its subclasses. This class helps us with the implementation of the Observer pattern: we create the objects in the MainViewModel, and we observe them in the MainActivity. We want to display a list of Quote objects, so we will create a MutableLiveData<List<Quote>> object. We use MutableLiveData because we want to change the value of the object manually.
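
Putting these two paragraphs together, a minimal sketch of the MainViewModel could look like this (the field and type names follow the ones used in this guide; the exact code in the repository may differ):

import androidx.lifecycle.MutableLiveData;
import androidx.lifecycle.ViewModel;
import java.util.List;

public class MainViewModel extends ViewModel {

    // Observable that the MainActivity will subscribe to.
    final MutableLiveData<List<Quote>> quotesLiveData = new MutableLiveData<>();

    private QuoteDao quoteDao;
    private QuotesApi quotesApi;

    // Setters instead of constructor injection, because the ViewModel
    // is instantiated for us by the ViewModelProvider.
    public void setQuoteDao(QuoteDao quoteDao) {
        this.quoteDao = quoteDao;
    }

    public void setQuotesApi(QuotesApi quotesApi) {
        this.quotesApi = quotesApi;
    }
}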

getAllQuotes() will be very similar to the one in the Presenter, just without the interaction with the MainView. So, instead of:

if (view != null) {
  view.displayQuotes(response.body());
}

we will have:

quotesLiveData.setValue(response.body());

We will also change the implementation of the DatabaseQuotesAsyncTask: instead of passing it the MainView, we will create a new interface that delivers the quotes from the async task, and we will pass an implementation of this interface to the task. In that implementation, we will update quotesLiveData, same as above.
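
To illustrate, here is a sketch of what that callback interface and the refactored getAllQuotes() might look like inside MainViewModel. The interface name OnQuotesFetchedListener, the getQuotes() endpoint method, the DatabaseQuotesAsyncTask constructor arguments, and falling back to the database on failure are my assumptions, not taken from the source; only the LiveData updates are. The Retrofit types retrofit2.Call, Callback, and Response are used as in Part 1:

// Hypothetical callback passed to DatabaseQuotesAsyncTask instead of the MainView.
public interface OnQuotesFetchedListener {
    void onQuotesFetched(List<Quote> quotes);
}

public void getAllQuotes() {
    quotesApi.getQuotes().enqueue(new Callback<List<Quote>>() {
        @Override
        public void onResponse(Call<List<Quote>> call, Response<List<Quote>> response) {
            // Instead of view.displayQuotes(...), update the observable.
            quotesLiveData.setValue(response.body());
        }

        @Override
        public void onFailure(Call<List<Quote>> call, Throwable t) {
            // Fall back to the local database; postValue() is safe to call
            // even if the task delivers the result on a background thread.
            new DatabaseQuotesAsyncTask(quoteDao, quotes -> quotesLiveData.postValue(quotes))
                    .execute();
        }
    });
}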

In the MainActivity, we remove the implementation of the MainView, and we no longer need to override the onDestroy() method. We replace the MainPresenter with the MainViewModel, which we create as follows:

viewModel = new ViewModelProvider(this, new ViewModelProvider.NewInstanceFactory()).get(MainViewModel.class);
viewModel.setQuoteDao(quoteDao);
viewModel.setQuotesApi(quotesApi);
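
As a side note: if you would rather keep constructor injection, a custom ViewModelProvider.Factory can pass the dependencies at creation time. Here is a minimal sketch, assuming a hypothetical MainViewModel constructor that takes both dependencies (the example in this guide uses setters instead):

viewModel = new ViewModelProvider(this, new ViewModelProvider.Factory() {
    @NonNull
    @Override
    @SuppressWarnings("unchecked")
    public <T extends ViewModel> T create(@NonNull Class<T> modelClass) {
        // Hypothetical constructor; the MainViewModel above uses setters.
        return (T) new MainViewModel(quoteDao, quotesApi);
    }
}).get(MainViewModel.class);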

Then we will observe the quotesLiveData observable and display the list:

viewModel.quotesLiveData.observe(this, quotes -> {
    QuotesAdapter adapter = new QuotesAdapter(quotes);
    rvQuotes.setAdapter(adapter);
});

And in the end, we call viewModel.getAllQuotes() to fetch the quotes.

And that’s it! Now we have an application that follows the MVVM design pattern. You can check the full code here.

RECOMMENDATION SYSTEMS AND THE COLLABORATIVE FILTERING ALGORITHM

WHY DO WE NEED RECOMMENDATION SYSTEMS?

Keeping pace with today's rapidly growing technology is a huge challenge for humanity. Software systems are creating a dynamic world that undoubtedly makes human life easier and improves it, up to the point of an almost fully digital existence.

Many mobile and web systems offer easy usage and search through the internet. They are a necessary segment of education, health, employment, trade and, of course, fun. In such a fast and dynamic life, we need more and more systems that help us quickly find relevant information, all in order to save us time. Recommendation systems are usually built on collaborative filtering algorithms or content-based methods.

RECOMMENDATION SYSTEMS IN REAL LIFE

In real life, people are overwhelmed with decisions, regardless of their importance, minor or major. Understanding human choices is a field studied by cognitive psychology.

One of the most important factors influencing the decisions an individual makes is past experience, i.e. the decisions a person made in the past affect those they will make in the future.

Human actions also depend on the experiences gained in interactions with other people. The opinion of others affects our lives without us being aware of it. Relationships with friends affect which neighbourhood we will live in, which places we will visit during our vacation, in which bar we will have a drink, etc.

A real-life recommendation system

If one person has positive experiences with another, the latter gains trust and authority over that individual, who is then more likely to follow their advice, as well as make the decisions that person made when they were in a similar situation.

RECOMMENDATION SYSTEMS IN THE DIGITAL WORLD

All large companies and complex systems use collaborative filtering; an example of this is the social network Facebook, with the phrase "People you may know". Facebook is a hugely complex system with a massive database, which is why it needs to optimize the user data set so that it can provide precise recommendations. It also has collaborative systems for the news feed, as well as for the games, pages, groups, and events sections.

Another well-known technology and media service provider that uses such collaborative systems is Netflix, with its "Because you watched" phrase. Netflix uses algorithms and machine learning, probably based on genres, watch history, ratings, and all the ratings of users who have a content taste similar to ours.

Amazon, the multinational technology company, also uses these algorithms to recommend products to its clients, taking an item-to-item approach to recommendation.

Last but not least is the most successful business social network, LinkedIn, which uses phrases such as "People in the Information Technology & Services industry you may know", "People you may know from Faculty XXX", "Trending pages in your network", "Online events for you", and a number of others.

I did research on the collaborative filtering algorithm, so I will explain in depth how this algorithm works; please read the analysis in the sections below.

RECOMMENDATION SYSTEM AND COLLABORATIVE FILTERING

Based on the selected data processing algorithm, systems use different recommendation techniques:

  • Content-based systems – "people who liked this also like that"
  • Collaborative filtering – analyzing a huge amount of information
  • Hybrid recommendation systems – a combination of both approaches

COLLABORATIVE FILTERING – DETAILED ANALYSIS

On a coordinate system, we can show the popularity of products, as well as the number of orders.

The X-axis presents the product curve, which shows the popularity of a variety of products. The most popular products are on the left, at the head of the curve, and the less popular ones are on the right, in the long tail. By popularity, I mean how many times the product has been ordered and viewed by others.

The Y-axis represents the number of orders and product views over a certain time interval.

By analyzing the curve, it is noticeable that the frequently ordered products are usually considered the most popular, while those that have not been ordered recently are omitted. That is what the collaborative filtering algorithm offers.

A measure of similarity tells how similar two data objects are to each other. The measure of similarity in a dataset is usually described as a distance with dimensions, which represent the characteristics of the objects being compared. If the distance is small, the degree of similarity is large, and vice versa. Similarities are very subjective and highly dependent on the domain of the system.

The similarities are in the range of 0 to 1, i.e. [0, 1].

Two main similarities:

  • Similarity = 1 if X = Y
  • Similarity = 0 if X != Y

Collaborative filtering processes the similarity of the data we have with the help of several similarity measures, such as cosine similarity, Euclidean distance, Manhattan distance, etc.
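
To make these measures concrete, here is a small sketch in plain Java of cosine similarity and Euclidean distance for two rating vectors of equal length (the class and method names are mine, just for illustration):

public final class SimilarityMeasures {

    // Cosine similarity: dot(a, b) / (|a| * |b|).
    // For non-negative ratings the result falls in the range [0, 1].
    public static double cosineSimilarity(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Euclidean distance: sqrt(sum((a_i - b_i)^2)); smaller means more similar.
    public static double euclideanDistance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += (a[i] - b[i]) * (a[i] - b[i]);
        }
        return Math.sqrt(sum);
    }
}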

COLLABORATIVE FILTERING – COSINE SIMILARITY

In the beginning, we need to have a database with the items and their characteristics.

For a cosine similarity implementation, we need a matrix built from the user database. In this matrix, the rows (vector A) are the users and the columns (vector B) are the products; the matrix is in the format A×B. The field (i, j) of the matrix holds the grade/rating that user Ai has given to product Bj.

Therefore, we can imagine that we have users from 1 to n {1, …, n} and product grades/ratings in the range {1, …, 10}. Every row represents a different user, and every column represents one product. Every field of the matrix holds the rating that the user has entered for that product. Now, with this generated matrix, we can use the formula for finding the similarity between the users:

STEP 1:

$$\mathrm{Similarity}(\mathrm{User}_N, \mathrm{User}_1) = \cos(\theta) = \frac{\mathrm{User}_N \cdot \mathrm{User}_1}{\lVert \mathrm{User}_N \rVert \, \lVert \mathrm{User}_1 \rVert} = \frac{\sum_i r_{N,i} \, r_{1,i}}{\sqrt{\sum_i r_{N,i}^2} \, \sqrt{\sum_i r_{1,i}^2}}$$

where r_{N,i} is the rating that User N has given to product i.

STEP 2:

In step 1, we can see that User N is most similar to User 2. But the data is missing some of the product ratings, so we should compute a priority for the products that User N has not rated. For that, we need the values from the users most similar to User N, and those are User 2 and User 4. The following formula should be used:

Priority(product) = valueUser2 * similarityUser2 + valueUser4 * similarityUser4

Example:

Priority(product3) = 8 * 0.66 = 5.28

Priority(product4) = 8 * 0.71 = 5.68

Priority(product5) = 7 * 0.71 + 8 * 0.66 = 10.25

STEP 3:

If we want to recommend two products to User N, these will be product5 and product4.
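
As an illustration, steps 2 and 3 can be sketched in a few lines of Java. The similarity values (0.66 and 0.71) and the ratings are the hypothetical numbers from the worked example above, not a real dataset:

import java.util.LinkedHashMap;
import java.util.Map;

public class PriorityExample {

    public static void main(String[] args) {
        double simUser2 = 0.66; // Similarity(UserN, User2) from step 1
        double simUser4 = 0.71; // Similarity(UserN, User4) from step 1

        // Ratings by User2 and User4 for the products UserN has not rated;
        // 0 means that user has not rated the product either.
        Map<String, double[]> ratings = new LinkedHashMap<>();
        ratings.put("product3", new double[]{8, 0});
        ratings.put("product4", new double[]{0, 8});
        ratings.put("product5", new double[]{8, 7});

        // Priority(product) = valueUser2 * simUser2 + valueUser4 * simUser4
        for (Map.Entry<String, double[]> e : ratings.entrySet()) {
            double priority = e.getValue()[0] * simUser2 + e.getValue()[1] * simUser4;
            System.out.printf("Priority(%s) = %.2f%n", e.getKey(), priority);
        }
        // Prints 5.28, 5.68 and 10.25, so product5 and product4 are recommended.
    }
}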

CONCLUSION:

Similarity measures have their advantages and disadvantages, depending on the data set they are applied to. From the above analysis, we can conclude that if the data contains zero values and is sparsely distributed, we use cosine similarity, since it handles the non-zero values. Otherwise, if the data is densely distributed, mostly non-zero, and we care about diversity as well as similarity of users/products, then we use Euclidean distance. Such systems are under constant pressure from the large volumes of data in their databases and will face even greater challenges due to the daily growth of that data. Therefore, there is a growing need for new technologies that will dramatically improve the scalability of recommendation systems.

QUESTION: WHAT WILL HAPPEN IN THE FUTURE?
ANSWER: ONLY TIME WILL TELL.

Was there something good in waterfall that is lacking in agile – DDD

There is just one part, one step/phase of both methodologies, agile and waterfall, in which the old-fashioned way has some advantage over the new, agile one. This does not mean that agile (Scrum) is bad and that projects developed with this methodology are doomed to fail, or that all projects managed with waterfall will have great success. The idea of this blog is to motivate managers and others to think a little bit more outside the box, instead of blindly implementing and using everything new from the books. Before we start to talk about whether something is missing in agile or not, let's first introduce the basics of both methodologies.

Agile way

These days, every software developer knows the basics of agile, which are given in the Agile Manifesto:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:


Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan


That is, while there is value in the items on the right, we value the items on the left more.

https://agilemanifesto.org/

These thoughts may be more abstract, but looking at it from real-life experience, we can define twelve principles:

  1. Customer satisfaction by early and continuous delivery of valuable software
  2. Welcome changing requirements, even late in development
  3. Deliver working software frequently (weeks rather than months)
  4. Close, daily cooperation between business people and developers
  5. Projects are built around motivated individuals, who should be trusted
  6. Face-to-face conversation is the best form of communication (co-location)
  7. Working software is the primary measure of progress
  8. Sustainable development, able to maintain a constant pace
  9. Continuous attention to technical excellence and good design
  10. Simplicity—the art of maximizing the amount of work not done—is essential
  11. Best architectures, requirements, and designs emerge from self-organizing teams
  12. Regularly, the team reflects on how to become more effective and adjusts accordingly

This is the theory of what an agile methodology should offer. Note that there are no strict phases that a project has to go through during its lifecycle.

The most used methodology based on agile is Scrum, so that is the one we are going to talk about. It implements the principles of agile, and because it is a real, practical process for developing software, it has strictly defined processes for how the software should be developed.

The project is done in a series of small iterative steps called sprints. One sprint should last at most one month; most commonly, it is two weeks. The following picture shows what Scrum phases and sprints look like:

What can we take away from all of this? The main goal in Scrum and agile is to develop incrementally: always split all requirements into smaller ones and work on a few of them every sprint. The idea is to make the requirements small enough that several can fit into, and be done within, one sprint. After every sprint there is a release, which is presented to the Product Owner for review (features, bugs, etc.).

Waterfall way

Many of the older generations may have worked on projects organized in this way, and they know the main differences compared with agile. For everyone else, we will try to explain it in a few words.

The basic waterfall model flow has the following stages:

  1. Software Requirements
  2. Analysis
  3. System Design
  4. Coding and Implementation
  5. Testing and Verification
  6. Operation and Maintenance

And we can see a few diagrams and models of what waterfall looks like:

Here we can see that nothing is repeatable and everything is strictly defined. When the project starts, it has to go through every stage, and every stage has to be confirmed as correctly done. But what does "not repeatable" mean? It means that each of these phases has to cover all the scenarios that have to be implemented and are planned for the project.

To explain in more detail: when the analysis of the requirements starts, it covers all the requirements that should be in the project. It can take a month or more, and it cannot be done with just a few meetings between the clients and the business analysts, because in this phase all requirements should be defined in detail.

After both sides agree that the requirements/business scenarios defined in this stage are good and are everything that will be done, the next stage, the design, starts. In some ways, this is the stage where the real implementation starts. Architects and designers analyse all the requirements written in the first stage, and with that information and knowledge they start to design the model and set up the application (the technologies and frameworks that will be used).

The next step is implementing all the requirements: the process of developing and then testing every requirement once it is done. In the same phase, bugs are fixed along with everything else.

After all requirements are done and tested, the project goes into the final phase, ending with the deadline when the project has to go live in production.

Right way of implementing Agile

Comparing these two ways of managing a project, we can agree that agile is much better and more reliable than waterfall. We don't have to wait a long time to be sure that we are doing the right thing and to find out whether it will be accepted by the clients; acceptance of the project happens with every iteration.

Why is this increasingly adopted by companies that develop all kinds of applications, small and big? Because they can be sure that everything they are going to do this month will be accepted; if it is not, they lose only that month's effort, not a year's worth of effort and work like in waterfall.

Starting from this point, companies push to have something working from the start. They don't like to spend time analyzing and defining the whole data domain, gathering all the requirements in advance, and setting up the whole architecture according to the real needs of the project. That would cost them time and money.

Everything starts great. All stories and epics are defined after the project starts and split into small requirements that can be done within one sprint; they are refined during the sprints, or a few sprints before they have to be done. There is no useless work and effort.

The data model is defined and extended with every new requirement, every new feature, or maybe some bug fix. At the beginning, that data model serves us very well. Requirements can be done fast, because there is no issue with creating a new entity and referencing it from some existing one. DDD is fulfilled as we solve every ticket: first define the data, then implement the code.

But this is done only for small parts of the data model and of the project. There is no defining of a big data domain that will be useful for the features that come next, in a week, a month, or a year.

After some time, once some data model is set up, every new ticket that requires model changes brings more and more difficulty in adapting the existing data model to the new requirements. At some point, we will have to make bigger changes in the model. The main reason for this is that when we implemented the requirements, we cared only about the needs we had in the sprint when we worked on the previous ticket, story, etc., and we were not aware of the needs the model would have for future requirements.

Then we have, let's say, two solutions:

  • To continue trying to work with the model we have built so far, which makes implementing new changes harder and, with that, more time-consuming.
  • Or, at some point, to stop working on new features and spend time on a big refactor of the data model, which takes a long time during which nothing productive happens on the project in terms of new features, just refactoring.

In both scenarios, we will have less productivity than we had before and less than we expected.

If we do a retrospective of what we have done, we will see that we saved time at the beginning by not analyzing and defining the whole data model, but after some time, a year or more, we will spend the same amount of time or maybe more dealing with the bad data model or with a big data-model refactoring.

These possible future issues of forcing fully agile from the very start can easily be avoided if we add one more phase before implementing the requirements: just spend time analyzing, defining, and modelling the main, crucial epics and stories that our project has to cover. That is more like DDD in the waterfall methodology.