Spring I/O is back as an in-person event in the beautiful city of Barcelona.
Spring I/O is the leading European conference focused on the Spring Framework ecosystem, and this year marks the 9th in-person edition. The conference was held on 26-27 May. There is no better place for Spring-minded people to meet, discuss and share ideas than Spring I/O. The venue chosen for this year was the Palau de Congressos de Barcelona (Fira de Barcelona), and it is marvellous.
Preparation
The initial preparation consisted of the following:
Ticket booking, The Conference ✔️
Travel booking, Skopje – Macedonia to Barcelona – Spain ✔️
Hotel booking ✔️
Due to the COVID-19 pandemic, the direct flight from Skopje to Barcelona was discontinued, so we had to find another travel route. The route chosen was Skopje – Macedonia ( 🚗 ) -> Sofia – Bulgaria ( ✈️ ) -> Barcelona – Spain and back.
The conference took place at the Palau de Congressos de Barcelona (Fira de Barcelona).
Topics
Here are some of the topics presented (you can find the full list here):
Modular Applications with Spring
Reactive Microservices with Spring Boot and JHipster
Declarative Clients in Spring
To Java 18 and Beyond!
Making sense of Configuration and Service Discovery on Kubernetes
Getting modules right with Domain-driven Design
Ahead Of Time and Native in Spring Boot 3.0
How fixing a broken window cut down our build time by 50%
Testcontainers and Spring Boot from integration tests to local development!
All of the presentations were fun and engaging. There were four different rooms (tracks) running at the same time, and there were also three workshops on each day of the conference. It was a real challenge to choose which one to attend.
Conference recap
After the opening ceremony, the keynote speech by Juergen Hoeller (Spring Framework project lead and co-founder) discussed the current state of the Spring Framework and what the future holds. He talked about the upcoming Spring Framework 6 and Spring Boot 3.
The new generation is very forward-looking and meant for 2023 and beyond
The next generation of Spring comes with a JDK 17+ baseline, support for AOT (ahead-of-time) compilation, and Virtual Threads (Project Loom).
AOT: a form of static compilation that translates the program into machine code before it is executed. This compilation offers small packaging, fast start-up times and a low memory footprint.
Virtual Threads: Java traditionally uses OS kernel threads to implement concurrency, and there are several problems with this approach. A thread for every transaction, user or session is not feasible, as the number of threads the kernel can handle is limited, and context switches are expensive when synchronization between threads is needed. Project Loom is an attempt by the OpenJDK community to introduce a lightweight concurrency model in Java using Virtual Threads. Virtual Threads are not tied to OS threads but are managed by the JVM, which maps many Virtual Threads onto one or more OS threads.
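As a quick illustration (this example is mine, not from the keynote), here is a minimal sketch of what Virtual Threads look like in code, using the API previewed in JDK 19 and finalized in Java 21; the thread name and tasks are made up:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {

    public static void main(String[] args) throws InterruptedException {
        // Start a single virtual thread directly
        Thread vt = Thread.ofVirtual().name("virtual-worker").start(
                () -> System.out.println("Hello from " + Thread.currentThread()));
        vt.join();

        // Or let an executor create one virtual thread per submitted task
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int taskId = i;
                executor.submit(() -> System.out.println("Running task " + taskId));
            }
        } // close() waits for the submitted tasks to finish
    }
}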
To find out more about AOT and Project Loom follow the links.
Another interesting presentation was To Java 18 and Beyond!, which introduced the new features of Java 18:
UTF-8 by Default – UTF-8 is now the default charset, instead of the charset determined by the host.
Simple Web Server – a minimalistic server that serves static files only.
Vector API – an API for performing computations on arrays of numbers as single vector operations.
Code Snippets in Java API Documentation.
Various smaller improvements, deprecations, removals, etc.
The presentation can be found here.
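To give the Simple Web Server a concrete shape (this example is mine, not from the talk), the same thing can be started either with the new jwebserver command-line tool or programmatically; the directory path below is made up:
import com.sun.net.httpserver.HttpServer;
import com.sun.net.httpserver.SimpleFileServer;

import java.net.InetSocketAddress;
import java.nio.file.Path;

public class StaticFileServerDemo {

    public static void main(String[] args) {
        // Roughly what running the "jwebserver" tool does: serve static files from one directory
        HttpServer server = SimpleFileServer.createFileServer(
                new InetSocketAddress(8000),
                Path.of("/tmp/static-content"),      // hypothetical directory with static files
                SimpleFileServer.OutputLevel.INFO);
        server.start();                              // now browse http://localhost:8000
    }
}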
Another presentation I suggest watching is How fixing a broken window cut down our build time by 50%, in which speaker Philip Riecks explained how his team improved their build time by 50% by fixing a "broken window". I am always fascinated by the broken windows theory, which says that any sign of something broken, messy or not cleaned up encourages more of the same behaviour. We should encourage refactoring and continuous improvement, which ultimately makes our code better and our daily jobs easier, and might cut build times just as in the speaker's case.
We should always follow the boy scouts’ rule
Leave the campground cleaner than you found it.
At the Redis conference stand, our colleague won a prize that granted us a private session with the Redis team, where we discussed what makes Redis so fast. We also talked about Redis Enterprise, which is essentially open-source Redis on steroids: the most powerful fully-managed Redis.
Conclusion
Overall, it was an excellent experience and I am so grateful to have been part of the Spring I/O conference. It was awesome to meet many new people from the Spring community and to experience the beautiful city of Barcelona.
[Post: Spring I/O - Barcelona 2022 | Vladimir Dimikj | 13 December 2022]
I have been asking developers lately whether they know why we need to use volatile in Java. Most of the answers I received sounded like a definition read from a book, and hardly anybody had dug a little deeper to figure out what is going on in the background. In this article, I am going to talk about why to use it and what happens to the cache memory when we use volatile.
Let's say we are writing an application that turns the electricity on and, of course, turns it off again, and in it we have one thread for each action. One of the threads is responsible for turning the electricity on and the second thread turns the electricity off. We will also have a circuit breaker which, when set to true, turns the electricity on, and when set to false, turns it off. Let's look at how that looks in Java:
public class ElectricityDemo {

    public static boolean circuitBreaker = true;

    public static void main(String[] args) {
        Thread electricityOn = new Thread(() -> {
            while (circuitBreaker) {
                System.out.println("POWER ON");
            }
        });

        Thread electricityOff = new Thread(() -> circuitBreaker = false);

        electricityOn.start();
        electricityOff.start();
    }
}
So, as we can see in the code, while the variable circuitBreaker is set to true we keep printing POWER ON, and as soon as this variable is set to false by the electricityOff thread, we expect electricityOn to stop running.
However, this might not be the case, so let's see why. In the picture below we have a CPU with two cores and the value that circuitBreaker is set to:
As we can see in the picture, the thread electricityOn is running on core 1 and electricityOff on core 2. Each core has its own local cache, and there is also a shared cache. Since both threads use the variable circuitBreaker, they load it into their local caches. From the code we can see that local cache 1 will have the value set to true and local cache 2 will have the new value false. Unfortunately, the update circuitBreaker = false in local cache 2 will not be visible to local cache 1, so when the electricityOff thread sets circuitBreaker to false, this information never reaches local cache 1, and this is what we call a visibility problem. In order to solve this issue we want to make the new value of the variable visible to local cache 1, and we can do that by making circuitBreaker volatile.
So, our code will look like this:
public class ElectricityDemo {

    public static volatile boolean circuitBreaker = true;

    public static void main(String[] args) {
        Thread electricityOn = new Thread(() -> {
            while (circuitBreaker) {
                System.out.println("POWER ON");
            }
        });

        Thread electricityOff = new Thread(() -> circuitBreaker = false);

        electricityOn.start();
        electricityOff.start();
    }
}
Let’s look at the picture below to see what is happening now:
When the variable is declared as volatile, the write is pushed to the shared cache, so the value there becomes false. Additionally, local cache 1 is refreshed and gets the actual value of the variable. Finally, electricityOn sees the correct value of circuitBreaker and stops the electricity.
Every time we have two or more threads accessing a variable and executing actions based on its value, it is a smart decision to declare that variable as volatile.
So we solved our issue with the electricity staying on the whole time. I hope you enjoyed it and saw the actual benefit of adding volatile to your variables.
[Post: Why to use "Volatile" in Java | Filip Trajkovski | 7 October 2022]
What are event listeners in AEM? Are they the same as event handlers? What is the difference between these two concepts? We will try to answer all of these questions in this article.
With a little searching on the web, you will probably find something like this:
The most important part here is the first row: JCR level vs Sling level. What does this mean in practice? We will answer this question through an experiment.
The task: Please provide UUID for every new page created.
It seems that we can use either an EventHandler or an EventListener and achieve the same result: a UUID for every page that is created. So why does AEM provide two different mechanisms for the same thing? There must be some difference between them!? Yes, indeed, there is. The tricky part here is where the page is created from. If we create it from AEM, as in the image below,
creation of page from AEM
then both methods are good enough. But suppose someone with admin privileges, who can access CRX, copy-pastes a page just as an ordinary node.
copy-paste page from CRX
In this case EventListener will fire, but EventHandler will not. That is the key difference between them.
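To make the distinction concrete, here is a minimal, hypothetical sketch of a JCR-level listener registered against the repository's ObservationManager; the content path and the way the Session is obtained are assumptions, and error handling is reduced to a minimum:
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.observation.Event;
import javax.jcr.observation.EventIterator;
import javax.jcr.observation.EventListener;

public class PageCreateJcrListener implements EventListener {

    // Registration would typically happen once, e.g. in an OSGi @Activate method,
    // with a Session obtained from a service ResourceResolver.
    public void register(Session session) throws RepositoryException {
        session.getWorkspace().getObservationManager().addEventListener(
                this,                   // this listener
                Event.NODE_ADDED,       // the JCR event types we care about
                "/content/my-site",     // hypothetical content path
                true,                   // include the whole subtree
                null, null,             // no UUID / node-type filtering
                false);                 // also receive events triggered by our own session
    }

    @Override
    public void onEvent(EventIterator events) {
        while (events.hasNext()) {
            Event event = events.nextEvent();
            try {
                // A page copy-pasted in CRX still produces NODE_ADDED events here,
                // which is why this listener fires even when no PageEvent is sent.
                System.out.println("Node added: " + event.getPath());
            } catch (RepositoryException e) {
                // log and continue
            }
        }
    }
}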
The Experiment:
Implementation of EventHandler
@Component(service = EventHandler.class, immediate = true,
        property = {EventConstants.EVENT_TOPIC + "=" + PageEvent.EVENT_TOPIC})
public class PageEventHandler implements EventHandler {
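    // The original post truncates the snippet at this point. A hypothetical sketch of how the
    // handler body might continue, assuming the com.day.cq.wcm.api.PageEvent/PageModification API:
    @Override
    public void handleEvent(Event event) {
        PageEvent pageEvent = PageEvent.fromEvent(event);
        if (pageEvent == null) {
            return;
        }
        pageEvent.getModifications().forEachRemaining(modification -> {
            if (modification.getType() == PageModification.ModificationType.CREATED) {
                String createdPagePath = modification.getPath();
                // Here we would write the UUID property to the page at createdPagePath,
                // e.g. via a service ResourceResolver (omitted for brevity).
            }
        });
    }
}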
Inter-microservice communication has always brought questions and challenges to software architects. For example, when it comes to propagating certain information (via HTTP headers for instance) through a whole chain of calls in the scope of one transaction, we want this to happen outside of the microservices’ business logic. We do not want to tackle and work with these headers in the presentation or service layers of the application, especially if they are not important to the microservice for completing some business logic task. I would like to show you how you can automate this process by using Java thread locals and Tomcat/Spring capabilities, showing a simple microservice architecture.
Architecture overview
This is the sample architecture we will be looking at. We have a Zuul Proxy Server that will act as a gateway towards our two microservices: the Licensing Microservice and the Organization Service. Those three will be the main focus of this article. Let’s say that a single License belongs to a single Organization and a single Organization can deal with multiple Licenses. Additionally, our services are registered to a Eureka Service Discovery and they pull their application config from a separate Configuration Server.
Simple enough, right? Now, what is the goal we want to achieve?
Let’s say that we have some sort of HTTP headers related to authentication or tracking of the chain of calls the application goes through. These headers arrive at the proxy along with each request from the client-side and they need to be propagated towards each microservice participating in the action of completing the user’s request. For simplicity’s sake, let’s introduce two made-up HTTP headers that we need to send: correlation-id and authentication-token. You may say: “Well, the Zuul proxy gateway will automatically propagate those to the corresponding microservices, if not stated otherwise in its config”. And you are correct because this is a gateway that has an out-of-the-box feature for achieving that. But, what happens when we have inter-microservice communication, for example, between the Licensing Microservice and the Organization Microservice? The Licensing Microservice needs to make a call to the Organization Microservice in order to complete some task, and the Organization Microservice needs to have the headers sent to it somehow. The “go-to, technical debt” solution would be to read these headers in the controller/presentation layer of our application, then pass them down to the business logic in the service layer, which in turn is going to pass them to our configured HTTP client, which in the end is going to send them to the Organization Microservice. Ugly, right? What if we have dozens of microservices and need to do this in each and every single one of them? Luckily, there is a much prettier solution that includes using a neat Java feature: ThreadLocal.
Java thread locals
The Java ThreadLocal class provides us with thread-local variables. What does this mean? Simply put, it enables setting a context (tying all kinds of objects) to a certain thread, that can later be accessed no matter where we are in the application, as long as we access them within the thread that set them up initially. Let’s look at an example:
public class Main {

    public static final ThreadLocal<String> threadLocalContext = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        threadLocalContext.set("Hello from parent thread!");

        Thread childThread = new Thread(() -> {
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints null
        });
        childThread.start();
        childThread.join(); // Waiting for the child thread to finish

        System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
    }
}
We have a single static final ThreadLocal<String> reference that we use for setting some information to the thread (in this case, the string “Hello from parent thread!” to the main thread). Accessing this variable via threadLocalContext.get() (no matter in which class we are, as long as we are on the same main thread) produces the expected string we have set previously. Accessing it in a child thread produces a null result. What if we set some context to the child thread as well:
threadLocalContext.set("Hello from parent thread!");

Thread childThread = new Thread(() -> {
    threadLocalContext.set("Hello from child thread!");
    System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from child thread!"
});
childThread.start();
childThread.join(); // Waiting for the child thread to finish

System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
We can notice that the two threads have completely separate contexts. Even though they access the same threadLocalContext reference, in the background, the context is relative to the calling thread. What if we wanted the child thread to inherit its parent context:
public class Main {

    private static final ThreadLocal<String> threadLocalContext = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        threadLocalContext.set("Hello from parent thread!");

        Thread childThread = new Thread(() -> {
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
            threadLocalContext.set("Hello from child thread!");
            System.out.println("Child thread: " + threadLocalContext.get()); // Prints "Hello from child thread!"
        });
        childThread.start();
        childThread.join(); // Waiting for the child thread to finish

        System.out.println("Parent thread: " + threadLocalContext.get()); // Prints "Hello from parent thread!"
    }
}
We only changed the ThreadLocal to an InheritableThreadLocal in order to achieve that. We can notice that the first print inside the child thread does not render null anymore. The moment we set another context to the child thread, the two contexts become disconnected and the parent keeps its old one. Note that by using the InheritableThreadLocal, the reference to the parent context gets copied to the child, meaning: this is not a deep copy, but two references pointing to the same object (in this case, the string “Hello from parent thread!”). If, for example, we used InheritableThreadLocal<SomeCustomObject> and tackled directly some of the properties of the object inside the child thread (threadLocalContext.get().setSomeProperty("some value")), then this would also be reflected in the parent thread and vice versa. If we want to disconnect the two contexts completely, we just call .set(new SomeCustomObject()) on one of the threads, which will turn its local reference to point to the newly created object.
Now, you may be wondering: “What does this have to do with automatically propagating headers to a microservice?”. Well, by using Servlet containers such as Tomcat (which Spring Boot embeds by default), we handle each new HTTP request (whether we like/know it or not :-)) in a separate thread. The servlet container picks an idle thread from its dedicated thread pool each time a new call is made. This thread is then used by Spring Boot throughout the processing of the request and the return of the response. Now, it is only a matter of setting up Spring filters and HTTP client interceptors that will set and get the thread-local context containing the HTTP headers.
Solution
First off, let’s create a simple POJO class that is going to contain both of the headers that need propagating:
@Getter
@Setter
@ToString
@AllArgsConstructor
@NoArgsConstructor
public class RequestHeadersContext {

    public static final String CORRELATION_ID = "correlation-id";
    public static final String AUTHENTICATION_TOKEN = "authentication-token";

    private String correlationId;
    private String authenticationToken;
}
Next, we will create a utility class for setting and retrieving the thread-local context:
public final class RequestHeadersContextHolder {

    private static final ThreadLocal<RequestHeadersContext> requestHeaderContext = new ThreadLocal<>();

    public static void clearContext() {
        requestHeaderContext.remove();
    }

    public static RequestHeadersContext getContext() {
        RequestHeadersContext context = requestHeaderContext.get();
        if (context == null) {
            context = createEmptyContext();
            requestHeaderContext.set(context);
        }
        return context;
    }

    public static void setContext(RequestHeadersContext context) {
        Assert.notNull(context, "Only not-null RequestHeadersContext instances are permitted");
        requestHeaderContext.set(context);
    }

    public static RequestHeadersContext createEmptyContext() {
        return new RequestHeadersContext();
    }
}
The idea is to have a Spring filter, that is going to read the HTTP headers from the incoming request and place them in the RequestHeadersContextHolder:
@Configuration
public class RequestHeadersServiceConfiguration {

    @Bean
    public Filter getFilter() {
        return new RequestHeadersContextFilter();
    }

    private static class RequestHeadersContextFilter implements Filter {

        @Override
        public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
            HttpServletRequest httpServletRequest = (HttpServletRequest) servletRequest;
            RequestHeadersContext context = new RequestHeadersContext(
                    httpServletRequest.getHeader(RequestHeadersContext.CORRELATION_ID),
                    httpServletRequest.getHeader(RequestHeadersContext.AUTHENTICATION_TOKEN)
            );
            RequestHeadersContextHolder.setContext(context);
            filterChain.doFilter(servletRequest, servletResponse);
        }
    }
}
We created a RequestHeadersServiceConfiguration class which, at the moment, has a single Spring filter bean defined. This filter is going to read the needed headers from the incoming request and set them in the RequestHeadersContextHolder (we will need to propagate those later when we make an outgoing request to another microservice). Afterwards, it will resume the processing of the request and will give control to the other filters that might be present in the filter chain. Keep in mind that, all the while, this code executes within the boundaries of the dedicated Tomcat thread, which the container had assigned to us.
Next, we need to define an HTTP client interceptor which we are going to link to a RestTemplate client, which in turn is going to execute the interceptor’s code each time before it makes a request to an outer microservice. We can add this new RestTemplate bean inside the same configuration file:
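The original article shows this part as an image. A sketch of what the interceptor and the RestTemplate bean might look like is given below; the @LoadBalanced annotation is my assumption, since the client further down calls the Organization Microservice by its Eureka service name:
@Configuration
public class RequestHeadersServiceConfiguration {

    // ..... the filter bean from above .....

    @Bean
    @LoadBalanced // assumption: lets the RestTemplate resolve names like "organizationservice" via Eureka
    public RestTemplate getRestTemplate() {
        RestTemplate restTemplate = new RestTemplate();
        restTemplate.getInterceptors().add(new RequestHeadersContextInterceptor());
        return restTemplate;
    }

    private static class RequestHeadersContextInterceptor implements ClientHttpRequestInterceptor {

        @Override
        public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                                            ClientHttpRequestExecution execution) throws IOException {
            // Read the headers from the thread-local context and attach them to the outgoing request
            RequestHeadersContext context = RequestHeadersContextHolder.getContext();
            request.getHeaders().set(RequestHeadersContext.CORRELATION_ID, context.getCorrelationId());
            request.getHeaders().set(RequestHeadersContext.AUTHENTICATION_TOKEN, context.getAuthenticationToken());
            return execution.execute(request, body);
        }
    }
}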
As you might have guessed, the interceptor reads the header values from the thread-local context and sets them up for the outgoing request. The RestTemplate just adds this interceptor to the list of its already existing ones.
A good thing to have is to eventually clear/remove the thread-local variables from the thread. When we have an embedded Tomcat container, missing out on this point will not impose a problem, since along with the Spring application, the Tomcat container dies as well. This means that all of the threads will altogether be destroyed and the thread-local memory released. However, if we happen to have a separate servlet container and we deploy our app as a .war instead of a standalone .jar, not clearing the context might introduce memory leaks. Imagine having multiple applications on our standalone servlet container and each of them messing around with thread locals. The container shares its threads with all of the applications. When one of the applications is shut down, the container is going to continue to run and the threads it lent to the application will not cease to exist. Hence, the thread-local variables will not be garbage collected, since there are still references to them. That is why we are going to define and add an interceptor to the Spring interceptor registry, which will clear the context after a request finishes and the thread can be assigned to other tasks:
@Configuration
public class WebMvcInterceptorsConfiguration implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new RequestHeadersContextClearInterceptor()).addPathPatterns("/**");
    }

    private static class RequestHeadersContextClearInterceptor implements HandlerInterceptor {

        @Override
        public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception exception) {
            RequestHeadersContextHolder.clearContext();
        }
    }
}
All we need to do now is wire these configurations into our microservices. We can create a separate library extracting the config (and maybe upload it to an online repository, such as Maven Central, or our own Nexus) so that we do not need to copy-paste all of the code into each of our microservices. Whatever the case, it is good to make this library easy to use. That is why we are going to create a custom annotation for enabling it:
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Import({RequestHeadersServiceConfiguration.class, WebMvcInterceptorsConfiguration.class})
public @interface EnableRequestHeadersService {
}
Usage
Let’s see how we can leverage and use this library from inside a microservice. Only a couple of things are needed.
First, we need to annotate our application with the @EnableRequestHeadersService:
@SpringBootApplication
@EnableRequestHeadersService
public class LicensingServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(LicensingServiceApplication.class, args);
    }
}
Second, we need to inject the already defined RestTemplate in our microservice and use it as given:
@Component
public class OrganizationRestTemplateClient {

    private final RestTemplate restTemplate;

    public OrganizationRestTemplateClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public Organization getOrganization(String organizationId) {
        ResponseEntity<Organization> restExchange = restTemplate.exchange(
                "http://organizationservice/v1/organizations/{organizationId}",
                HttpMethod.GET,
                null,
                Organization.class,
                organizationId
        );
        return restExchange.getBody();
    }
}
We can notice that the getOrganization(String organizationId) method does not handle any HTTP headers whatsoever. It just passes the URL and the HTTP method and lets the imported configuration do its magic. As simple as that! We can now call the getOrganization method wherever we like, without having any sort of knowledge about the headers that are being sent in the background. If we have the need to read them somewhere in our code, or even change them, then we can use the RequestHeadersContextHolder.getContext()/setContext() static methods wherever we like in our microservice, without the need to parse them from the request object.
Feign HTTP Client
If we want to leverage a more declarative type of coding we can always use the Feign HTTP Client. There are ways to configure interceptors here as well, so, using the RestTemplate is not strictly required. We can add the new interceptor configuration to the already existing RequestHeadersServiceConfiguration class:
@Configuration
public class RequestHeadersServiceConfiguration {

    // .....

    @Bean
    public RequestInterceptor getFeignRequestInterceptor() {
        return new RequestHeadersContextFeignInterceptor();
    }

    private static class RequestHeadersContextFeignInterceptor implements RequestInterceptor {

        @Override
        public void apply(RequestTemplate requestTemplate) {
            RequestHeadersContext context = RequestHeadersContextHolder.getContext();
            requestTemplate.header(RequestHeadersContext.CORRELATION_ID, context.getCorrelationId());
            requestTemplate.header(RequestHeadersContext.AUTHENTICATION_TOKEN, context.getAuthenticationToken());
        }
    }
}
The new bean we created is going to automatically be wired as a new Feign interceptor for our client.
Next, in our microservice, we can annotate our application class with @EnableFeignClients and then create our Feign client:
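The Feign client itself is also shown as an image in the original post; a sketch of what it might look like (the interface name is made up, and the path mirrors the RestTemplate example above) is:
@FeignClient(name = "organizationservice")
public interface OrganizationFeignClient {

    // The headers are added by the Feign interceptor defined above, so nothing header-related appears here
    @GetMapping("/v1/organizations/{organizationId}")
    Organization getOrganization(@PathVariable("organizationId") String organizationId);
}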
All we need to do now is inject our new client anywhere in our services and use it from there. In comparison to the RestTemplate, this is a more concise way of making HTTP calls.
Asynchronous HTTP requests
What if we do not want to wait for the request to the Organization Microservice to finish, and want to execute it asynchronously and concurrently (using the @EnableAsync and @Async annotations from Spring, for example)? How are we going to access the headers that need to be propagated in this case? You might have guessed it: by using InheritableThreadLocal instead of ThreadLocal. As mentioned above, the child threads we create separately (aside from the Tomcat ones, which will be the parents) can inherit their parent’s context. That way we can send header-populated requests in an asynchronous manner. There is no need to clear the context for the child threads (side note: clearing it will not affect the parent thread, and clearing the parent thread will not affect the child thread; it will only set the current thread’s local context reference to null), since these will be created from a separate thread pool that has nothing to do with the container’s one. The child threads’ memory will be cleared after execution or after the Spring application exits because eventually, they die off.
Summary
I hope you will find this neat little trick useful while refactoring your microservices. A lot of Spring’s functionality is actually based on thread locals. Looking into its source code, you will find a lot of similar/same concepts as the ones mentioned above. Spring Security is one example, Zuul Proxy is another.
[Post: Microservice architecture: Using Java thread locals and Tomcat/Spring capabilities for automated information propagation | Kiril Pepovski | 15 April 2022]
As a developer, there is a 90% chance that the client or the business will come to you one day and say: “Look, we are doing this the old way, but now we want software for it”. And there is a big chance that they will have some procedure with a lot of if-else scenarios. There might also be a part of the procedure that is repeated for every customer they have, which means you will have to execute the same logic multiple times. If that happens, don’t be scared: Camunda is here to help, and I hope that this blog will help you too. We are going to dive into Camunda processes and how we can handle a part of them being repeated multiple times. In order to follow along, some basic knowledge of Camunda, Java and Spring Boot can be of help.
Let’s look at the following scenario: We have a bank and in it, there is one guy responsible for the whole process of giving loans to people. He is sitting there and a guy John comes and he wants to get a loan. So the bank guy first gives an application to John. Once John is done with the application and returns it to the bank guy, the bank guy requests some documents from his place of work so they can be sure that he is employed full-time. Once that is done, the bank guy needs to send this to corporate so that they can make the final decision.
If we make a diagram of it, it will look something like this:
This is great and everything if you have one customer, but if you have around 100 customers per day, the bank guy will have no clue which guy gave him the work information or filled in the application.
So let’s make software for this guy and make his life easier, let’s open our Intellij and write some code!
I have already created the project (you can download it from here) and will go through some of the more important things.
I have added the Camunda dependencies, because we are going to create Camunda processes, and an H2 database as well. Even though we are not going to write code that accesses the database or creates tables, Camunda needs a database to create its own tables.
Above we are setting some database properties and some Camunda properties. Mainly, we are enabling the admin user for Camunda so that we can open the cockpit with the user admin, and we are saying where the processes will be located and what kind of database Camunda will use.
Now, having said all that, let’s create the whole scenario as a process in Camunda. First thing you will need is a Camunda modeller which you can download from here. Once you have the modeller up and running go to File -> New File -> BPMN Diagram. What we need to build (or you can just open the existing bpmn diagram from the application) is something that will look like this:
As a first step, we will generate some credit loaners (we are doing this just to have some data to play with) and then set this collection of credit loaners as a value for a variable called creditLoanerList. This will be achieved by using a service task. In this scenario, if you click on the Generate Credit Loaners you will see that we are providing as implementation: Java class and as a java class we are pointing to: north.com.multiprocessinstance.camunda.task.GenerateCreditLoanersTask.java:
package north.com.multiprocessinstance.camunda.task;

import north.com.multiprocessinstance.entity.CreditLoaner;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

import java.util.Arrays;
import java.util.List;

public class GenerateCreditLoanersTask implements JavaDelegate {

    @Override
    public void execute(DelegateExecution delegateExecution) throws Exception {
        List<CreditLoaner> creditLoaners = Arrays.asList(new CreditLoaner("Sarah", "Cox"),
                new CreditLoaner("John", "Doe"),
                new CreditLoaner("Filip", "Trajkovski"));
        delegateExecution.setVariable("creditLoanerList", creditLoaners);
    }
}
If you click on the subprocess (it is the largest rectangle with the three dashes) you will see the reason why we set the list of credit loaners as a variable value to creditLoanerList:
In order to pass all the credit loaners and for each of them a subprocess to be started we need to provide a collection of credit loaners. That is why we are setting the Collection field to: creditLoanerList and in order for every credit loaner to be accessed in the appropriate subprocess we are setting an Element Variable that we will call: creditLoaner. What will happen is that we will have this process once for Filip, once for John and once for Sarah.
Let’s look at the tasks in our subprocess. The first one Fulfill Application is a user task and in it, we want the credit loaner to fill out an application. For the next step Get work recommendation to be completed, the credit loaner needs to provide some documents that he is employed and has a regular salary (this is not completely implemented, but we are describing a possible scenario and currently only form fields of type string are defined). In the third step, a service task will be executed and in it, the NotifyCorporate.java class will be executed:
package north.com.multiprocessinstance.camunda.task;

import lombok.extern.slf4j.Slf4j;
import north.com.multiprocessinstance.entity.CreditLoaner;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

@Slf4j
public class NotifyCorporate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution delegateExecution) throws Exception {
        final CreditLoaner creditLoaner = (CreditLoaner) delegateExecution.getVariable("creditLoaner");
        log.info("Notify corporate that person {} wants to get credit", creditLoaner.getFirstName());
        delegateExecution.setVariable("creditLoanerFirstName", creditLoaner.getFirstName());
        log.info("This is the user: {}", creditLoaner);
    }
}
Here we notify corporate about the credit loaner, and to get the information about the credit loaner we use the variable that was previously set as an Element Variable: creditLoaner (look at the image of the subprocess details). One additional thing we do here is set one more variable, called creditLoanerFirstName, and the reason for that will be revealed when we take a look at the next task: Wait for feedback from corporate.
In this Wait for feedback from corporate step, one option is that the bank guy gets a mail from corporate saying that the credit loaner is OK and can get his loan, so the bank guy completes this step and continues with the next one. For the other option, we provide a boundary event where corporate sends a STOP message and the request for that credit loaner is rejected. But in our scenario we have three subprocesses, one for each credit loaner, so when sending this STOP message we need to somehow specify which credit loaner it is sent for. That’s why in the previous step we set the creditLoanerFirstName variable (in real cases don’t use the first name but something unique, like an email address), and one last thing we need to do is set an input parameter in the Wait for feedback from corporate task.
In the Camunda modeller if you click on the task with the name: Wait for feedback from corporate and go to the tab Input/Output you should be able to see the input parameter which will be of type text and the value will be the variable that we set previously, called creditLoanerFirstName.
This input parameter provides us with an option to specify for which credit loaner we want to STOP the procedure and decline his loan request.
But let’s start our application and then start our bank process. Once you have started the application you can open the Camunda cockpit on the following URL: http://localhost:8080/camunda/app/tasklist/default/. The user name and password are admin and admin. Once you have entered the credentials you should be able to start a process from the top right by clicking on Start Process and the BankProcess will be provided (previously we needed to be sure that the BPMN file is saved under resources/processes in your application, since we define in our application.yaml that this will be the place where we will keep the processes).
Let’s take a look at the Camunda cockpit:
Camunda cockpit
Once you have started the process you should be able to see the task list (if for some reason the tasks are not visible, just click on Add simple filter and they should be displayed). If you click on one of the tasks, the task details will be displayed, and from there we are able to complete it or to open the process details:
Here we have a visual presentation of the step the process is currently at. Let’s complete all three Fulfill application tasks by claiming and completing them, and let’s do the same for Get work recommendation. Now we are at the step Wait for feedback from corporate and we can test the boundary event STOP that we added.
There is already an exposed post endpoint in the ProcessController for which we need to provide the process instance id and a MessageEventRequest. In this MessageEventRequest we need to provide the message name, which in our scenario is STOP and a list of correlation keys. The correlation key is where we will specify the credit loaner whose loan request we want to decline. The key will be set to creditLoanerFirstName and let’s decline Sarah’s request (before executing it, make sure that there is a task: Wait for feedback from corporate for Sarah) so we will set the value to Sarah. What we need to do is execute a post request towards: localhost:8080/{processInstanceId}/messageEvent with the following request body:
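The request body itself is shown as an image in the original post; it is a JSON carrying the message name STOP and the correlation key creditLoanerFirstName with the value Sarah. For context, here is a sketch of how such an endpoint could correlate the message with Camunda's RuntimeService; the actual ProcessController lives in the linked repository, so the class and request types used below are illustrative:
@RestController
@RequiredArgsConstructor
public class ProcessControllerSketch {

    private final RuntimeService runtimeService;

    @PostMapping("/{processInstanceId}/messageEvent")
    public void sendMessageEvent(@PathVariable String processInstanceId,
                                 @RequestBody MessageEventRequest request) {
        // e.g. messageName = "STOP"
        MessageCorrelationBuilder correlation = runtimeService
                .createMessageCorrelation(request.getMessageName())
                .processInstanceId(processInstanceId);

        // Narrow the correlation down to the right subprocess,
        // e.g. creditLoanerFirstName = "Sarah"
        request.getCorrelationKeys().forEach(key ->
                correlation.processInstanceVariableEquals(key.getKey(), key.getValue()));

        correlation.correlate();
    }
}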
As I mentioned previously you can figure out the process instance id and monitor the whole flow of the task using the camunda cockpit that will be available on the following URL: http://localhost:8080/camunda/app/tasklist/default/.
So we started a whole process, started three subprocesses, finished a service task and a user task. I hope that you enjoyed it and also you can find the code on the following link.
[Post: Multi Instance Process in Camunda | Filip Trajkovski | 31 March 2022]
As everybody knows by now, a microservice architecture represents a collection of multiple services. Each service contains its own business logic, in contrast to a monolithic architecture, which contains everything in one place. This means we have to maintain multiple services, usually at the same time.
The microservices communicate with each other in order to fulfil their needs. As usual, when you need it the most, an instance of one of the microservices can go down or respond with a delay, which we usually call an unreachable service. The chance of failure needs to be taken into consideration and handled in an appropriate way.
Why is taking care of latency important in a microservice architecture?
Increased latency can occur when one of the microservices is:
Reading/writing to database
Synchronously calling another service
Hitting the timeout of asynchronous communication
If we consider the following scenario: We have 5 microservices that communicate with each other. If microservice 5 goes down, all the other services that depend on it can be affected.
In this type of scenario, the solution is a fault-tolerance strategy.
Circuit breaker
A circuit breaker is a pattern that can help in achieving fault tolerance. The circuit breaker detects when an external service fails, and in that case the circuit breaker is opened. All incoming requests to the unhealthy service are then rejected and errors are returned, instead of trying to reach out to the unhealthy service over and over again. For this, we can use Hystrix.
What is Hystrix?
Hystrix is a library which implements the fault-tolerance strategy and is used for isolating access to remote services and increasing resilience, in order to prevent cascading failures and offer the ability to recover quickly from disaster.
So, how does Hystrix actually work?
Let’s take again the architecture from above. Suppose that there are multiple user requests from microservice One that require a piece of information from microservice Five. In this situation, the possibility of microservice One being blocked is very obvious, since it might wait for responses from microservice Five. Microservice Five can also be overloaded with requests, and the outcome would be blocking the whole service. This is when Hystrix kicks in and helps avoid the problem.
The external requests to the service of microservice Five are wrapped in a HystrixCommand, which defines the behaviour of the requests. The behaviour is defined as the number of available threads that can handle the requests. In our example, the service in microservice Five can be defined as having ten available threads for handling external requests. By wrapping the service in a HystrixCommand, we are limiting the number of requests the service is supposed to get.
By default, Hystrix uses ten threads. If there are more concurrent requests than the default value, the rest of the requests are rejected – redirected to the fallback method.
Add the @EnableHystrix annotation to the main class:
@SpringBootApplication
@EnableHystrix
public class HystrixApplication {

    public static void main(String[] args) {
        SpringApplication.run(HystrixApplication.class, args);
    }
}
The next step is to define the fallback method for HystrixCommand:
@Service
@Slf4j
public class HystrixService {

    @HystrixCommand(fallbackMethod = "fallbackHystrix",
            commandProperties = {@HystrixProperty(name =
                    "execution.isolation.thread.timeoutInMilliseconds", value = "2000")})
    public String testHystrix(String message) throws InterruptedException {
        Thread.sleep(4000);
        return message != null ? message : "Message is null";
    }

    public String fallbackHystrix(String message) {
        log.error("Request took too long. Timeout limit: 2000ms.");
        return "Request took too long. Timeout limit: 2000ms. Message: " + message;
    }
}
This is just a simple example of how the library can be used.
In order to simulate a timeout, Thread.sleep(4000) is added, while the response timeout is set to 2000 milliseconds as a HystrixProperty in the @HystrixCommand annotation, after which the call ends up in the fallback method.
Now we can test the implementation by executing the following request:
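The request itself is shown as an image in the original post. A hypothetical controller that exposes the service (so you could hit something like GET http://localhost:8080/hystrix-test?message=hello) might look like this:
@RestController
@RequiredArgsConstructor
public class HystrixController {

    private final HystrixService hystrixService;

    @GetMapping("/hystrix-test")
    public String testHystrix(@RequestParam String message) throws InterruptedException {
        // testHystrix() sleeps for 4000ms while the Hystrix timeout is 2000ms,
        // so this call ends up in the fallback method and returns its message instead
        return hystrixService.testHystrix(message);
    }
}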
If we want to change the default thread pool size of HystrixCommand, we can add the following thread pool properties:
@Service
@Slf4j
public class HystrixService {

    @HystrixCommand(fallbackMethod = "fallbackHystrix",
            commandProperties = {@HystrixProperty(name =
                    "execution.isolation.thread.timeoutInMilliseconds", value = "2000")},
            threadPoolProperties = {@HystrixProperty(name = "coreSize", value = "3")})
    public String testHystrix(String message) throws InterruptedException {
        return message != null ? message : "Message is null";
    }

    public String fallbackHystrix(String message) {
        log.error("Request took too long. Timeout limit: 2000ms.");
        return "Request took too long. Timeout limit: 2000ms. Message: " + message;
    }
}
The fallback method is called when some fault occurs. An important thing to notice here is that the signature of the fallback method should be the same as that of the method on which the @HystrixCommand annotation is defined.
A working example of the above exercise can be found here: hystrix-example
Summary
Moving away from a monolithic architecture to microservices usually comes with quite a few challenges. In this blog post, we took a look at one of them, but we just scratched the surface. As more challenges are coming down the pipeline, stay tuned, and I hope to see you in one of the next posts.
[Post: Using Hystrix as a fault-tolerant strategy | Gjurgjica Minova | 22 October 2021]
When working with microservices architecture, one of the most important aspects is inter-service communication. Usually, each microservice stores data in its own database, and if we follow the MVC design pattern, we probably have model classes that map the relational database to object models, and components that contain methods for performing CRUD operations. These components are exposed by controller endpoints.
For one microservice to call another, the caller needs to know the exact request and response model classes. This article will show a simple example of how to generate such models with SpringDoc OpenAPI.
I will create two services that will provide basic CRUD operations. For demonstrating purposes I chose to store data about vehicles:
vehicle-manager – the microservice that provides vehicles’ data to the client
vehicle-manager-client – the client microservice that requests vehicles’ data
For the purpose of this tutorial, I created empty Spring Boot projects via SpringInitializr.
In order to use the OpenAPI in our Spring Boot project, we need to add the following Maven dependency in our pom file:
The important OpenAPI annotations here are @Schema and @Tag. The former is used to define the actual class that needs to be included in the API documentation. The latter is used for grouping operations, such as all methods under one controller.
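The annotated model and controller are shown as images in the original article; a trimmed-down sketch of how the two annotations are typically applied could look like this (the field names and descriptions are assumptions):
@Schema(description = "Vehicle data exposed by the vehicle-manager service")
public class Vehicle {

    @Schema(description = "Registration plate used to look the vehicle up")
    private String registrationPlate;

    @Schema(description = "Brand of the vehicle")
    private String brand;

    // getters and setters omitted
}

// ... and, in a separate file, the controller:
@Tag(name = "Vehicles", description = "CRUD operations for vehicles")
@RestController
@RequestMapping("/vehicles")
public class VehicleController {

    @GetMapping("/{registrationPlate}")
    public Vehicle getVehicle(@PathVariable String registrationPlate) {
        // lookup logic omitted
        return new Vehicle();
    }
}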
Figure 1: Vehiclemanager Swagger API documentation interface
The swagger documentation interface for Vehiclemanager microservice is shown on Figure 1, and can be accessed on the following links:
If we open http://localhost:8080/api-docs in our browser (or any other port we set our Spring boot app to run on), we can get the entire documentation for the Vehiclemanager microservice. The important part for the model generation is right under components/schemas, while the controller endpoints are under paths.
I am going to create a Vehiclemanager-client service, running on port 8082, that will get vehicle information for a given registration plate, by calling the Vehiclemanager microservice. In order to do so, we need to generate the Vehicle model class defined in the original Vehicle microservice. We can generate it by adding the swagger codegen plugin in the pom’s plugins section, in the new demo service, like this:
After running the corresponding maven profile with:
> mvn clean compile -P generateModels
the models defined in <modelsToGenerate> tag will be created under the specified package in <modelPackage> tag.
Figure 2: OpenAPI generated model class
Codegen generates for us the entire model class with all classes that are defined inside it.
It is important to note that we can have models generated from different services. In each execution (see line 30 from the XML snippet) we can define the corresponding API documentation link in the <inputSpec> tag (line 37).
To demo data transfer from Vehiclemanager to Vehiclemanager-client microservice, we can send a simple request via Postman. The request I am going to use will be a GET request, that accepts a parameter registrationPlate which is used to query the vehicles stored in the Vehiclemanager microservice. The response is shown in Figure 3, which is a JSON containing the vehicle’s data that I hardcoded in the Vehiclemanager microservice.
Figure 3: Postman request to Vehiclemanager-client
Using OpenAPI helps us get rid of copy-paste and boilerplate code, and more importantly, we have an automated mechanism that generates the latest models from other microservices on each Maven clean compile.
You can find the full code example microservices in the links below:
Most, if not all, of today’s applications expose some API for interaction, either for customers or for other applications. An application programming interface, or API, is a software intermediary that allows two applications to talk to each other. Every time we use Facebook, YouTube or some other app, we are essentially using an API. An API is a set of HTTP endpoints used to send and retrieve data in some form, JSON or XML. Making sure those HTTP endpoints send and retrieve correct data, and thus work according to the specification, is a vital requirement. Testing APIs belongs to the last (E2E) layer of the testing pyramid, about which you can find more information in my previous blog.
Introduction to Rest Assured
Rest Assured is an open-source Java library that is used for testing RESTful web services. It allows us to write tests using the BDD pattern. Rest Assured is a headless client for accessing REST web services. The library is highly customizable, allowing us to create a wide variety of request combinations to test different combinations of the application’s core business logic.
High customizability also comes in handy when we want to verify the responses from the server, where we can verify the Status code, Status message, Body, Headers, etc. This makes Rest-Assured a versatile library and is often used for API testing.
The syntax of Rest Assured is the most interesting part: it uses the BDD given/when/then style and is very easy to understand.
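The example tests are shown as screenshots in the original post. A sketch of the kind of test described further below, written against the public JSONPlaceholder /users endpoint and assuming JUnit 5, could look roughly like this (the concrete expected values are illustrative):
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.is;

public class UsersApiTest {

    @Test
    public void usersEndpointReturnsExpectedData() {
        given()
        .when()
            .get("https://jsonplaceholder.typicode.com/users")
        .then()
            .statusCode(200)                                      // the status code is OK
            .body("size()", is(10))                               // the response has 10 items
            .body("name[0]", equalTo("Leanne Graham"))            // data of the first item
            .body("address[0].city", equalTo("Gwenborough"))      // nested data: address[0].city
            .body("company[0].name", equalTo("Romaguera-Crona")); // nested data: company[0].name
    }
}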
Explanation:
Given() – sets up the pre-conditions/context; here you pass the request headers, query and path parameters, body and cookies. This part is optional if these items are not needed in the request.
When() – marks the premise of the test.
Method() – specifies the HTTP method to execute (POST, GET, PUT, PATCH, DELETE).
Then() – specifies the result/outcome and is used for assertions.
As we can see from the example above, the tests are self-contained, in the sense that a single call is performed to the server and only a single response is evaluated. The test navigates to the Users API of the application and then verifies the response from the server. The verification first checks that the status code is OK, then that the response has 10 items, and then that the first item has the corresponding data. We are also able to assert nested data of the user object, in address[0].city and company[0].name. The assertions we use come from org.hamcrest, which is incorporated into Rest Assured.
Conclusion
Even though we have only scratched the surface here, I hope that you now have a better understanding of Rest Assured. You can find a working example with the tests in this repository. Also, you can find more about Rest Assured usage here.
[Post: Rest assure your API | Vladimir Dimikj | 8 July 2021]
In this tutorial, I will try to explain step by step how you can set up Kubernetes, deploy your microservice on it, and check the result via the Kubernetes dashboard. Everything else will be kept “as simple as possible”. Google Cloud (gcloud) will be used as the cloud platform. We will cover the following aspects of the problem:
Create microservice to be deployed
Place the application in a Docker container
What is Kubernetes and how to install it?
Create a new Kubernetes project
Create new Cluster
Allow access from your local machine
Create service account
Activate service account
Connect to cluster
Gcloud initialization
Generate access token
Deploy and start Kubernetes dashboard
Deploy microservice
Step 1: Create microservice to be deployed
Traditionally, in the programming world, everything starts with “Hello World”. So, as mentioned previously, to keep things simple, create a microservice that returns just “Hello World”. You can use https://start.spring.io/ for this goal. Create a HelloController like this:
package com.example.demojooq.controllers;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/v1")
public class HelloController {

    @GetMapping("/say-hello")
    public String sayHello() {
        return "Hello world";
    }
}
Step 2: Place the application in a Docker container
We have a microservice; now we need to put it in a Docker container and upload it to Kubernetes. From that point, Kubernetes will orchestrate the container according to your settings. Let’s create the first image from the microservice. The build file is, as you might guess, called Dockerfile (without any extension), and its content is:
Dockerfile
FROM adoptopenjdk/openjdk11:jre-11.0.8_10-debianslim
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","app.jar"]
The next step is to create the docker-compose file. In it, a call to the Dockerfile is made to build the image. You can do this manually, but the best way is from the docker-compose file, as you then have a permanent record of the setup. This is a .yaml file (picture below).
After starting Docker, go to the folder where the docker-compose file is located and execute the command “docker-compose up”. The expectation is to reach this microservice on port 8099. If everything is OK, you will see something like this in Docker:
Step 3: What is Kubernetes and how to install it?
Kubernetes is an open-source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications. What happens when you use Docker and your container fails? Probably the first thing to do is to restart your container. You do that manually (if you don’t have Kubernetes). This is where Kubernetes comes in: it observes that the container is down and starts a new container automatically. This is just a basic use case; please read more on the internet, there is plenty of information about this.
How to install Kubernetes?
OK, by now you are sure that Kubernetes is needed, but where do you find it, what are the costs, and how do you install it? First of all, try “download Kubernetes” on Google… Pick the site https://kubernetes.io/docs/tasks/tools/… Options for Windows, Mac and Linux appear… Different installations like kind, minikube, kubeadm… So, is it worth spending so much time setting up Kubernetes properly? You don’t have to ask me, I agree with you, it is too much time. Fortunately, we can take a shortcut and skip all that: go to Gcloud, where Kubernetes is offered as a service, and just use it. Somebody else takes care of the setup, and we can focus just on the business logic in our microservice and use the out-of-the-box Kubernetes installation from Gcloud. Sounds good, doesn’t it? The last and most important question: money. Is it free? No, it is not. You have to pay for the Gcloud services and here is the price list: https://cloud.google.com/kubernetes-engine/pricing. But for ordinary people like you and me, Gcloud offers a free account for 3 months with up to $300 of credit, which seems fair. It is enough time to learn about deploying microservices on Kubernetes; for any professional use in the future, the company should stand behind this. Here is the link where you can create your free cloud account: https://cloud.google.com/. One more thing: during the creation of a free account, Google will ask for your bank details, to charge you automatically. But do not worry, you are safe for the first three months and below $300, and you will be asked for permission before any charging… So far my personal experience is positive, as Google is keeping its promise made when you create the account. But the final decision is up to you.
Step 4: Create new Kubernetes project
Open up your Google account, sign in and go to the console.
Create a new project from the main dashboard; the name of the new project is “hello-world”. From now on, this is your active project.
Step 5: Create new cluster
Create a new cluster (name it cluster2). Accept the default values for the other fields.
Step 6: Allow access from your local machine
Now, we must allow access from our local machine to Kubernetes, via kubectl. For that purpose, we need to follow these steps:
Click on cluster2
Find your local IP address and add it here according to the CIDR standard in the Edit control plane authorized networks
Step 7: Create service account
Give the new account the role “Owner”. Accept the default values for the other fields. After the service account is created, you should have something like this:
Generate keys for this service account with key type JSON. When the key is downloaded, it has some random name like hello-world-315318-ab0c74d58a70.json. Keep this file in a safe place, we will need it later.
Now, install Google Cloud SDK Shell on your machine according to your OS. Let’s do the configuration so kubectl can reach cluster2. Copy the file hello-world-315318-ab0c74d58a70.json and put it in the CLOUD SDK folder. For the Windows environment, it looks like this:
Step 8: Activate service account
The first thing to do is to activate the service account with the command: gcloud auth activate-service-account hello-world-service-account@hello-world-315318.iam.gserviceaccount.com --key-file=hello-world-315318-ab0c74d58a70.json
Step 9: Connect to cluster
Now go to cluster2 again and find the connection string to connect to the new cluster
Execute this connection string in Google Cloud Shell: gcloud container clusters get-credentials cluster2 --zone us-central1-c --project hello-world-315318
Step 10: Gcloud initialization
The next command to execute is gcloud init, to initialize connection with the new project. Here is the complete code on how to do that from the Gcloud Shell:
C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>gcloud init
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [dev] are:
accessibility:
screen_reader: 'False'
compute:
region: europe-west3
zone: europe-west3-a
core:
account: hello-world-service-account@hello-world-315318.iam.gserviceaccount.com
disable_usage_reporting: 'True'
project: dops-containers
Pick configuration to use:
[1] Re-initialize this configuration [dev] with new settings
[2] Create a new configuration
[3] Switch to and re-initialize existing configuration: [database-connection]
[4] Switch to and re-initialize existing configuration: [default]
Please enter your numeric choice: 2
Enter configuration name. Names start with a lower case letter and
contain only lower case letters a-z, digits 0-9, and hyphens '-': hello-world
Your current configuration has been set to: [hello-world]
You can skip diagnostics next time by using the following flag:
gcloud init --skip-diagnostics
Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).
Choose the account you would like to use to perform operations for
this configuration:
[1] cicd-worker@devops-platform-n47.iam.gserviceaccount.com
[2] d.trifunov74@gmail.com
[3] dimche.trifunov@north-47.com
[4] dtrifunov@lunar-sled-314616.iam.gserviceaccount.com
[5] hello-world-service-account@hello-world-315318.iam.gserviceaccount.com
[6] service-account-demo-dime@blissful-epoch-305214.iam.gserviceaccount.com
[7] Log in with a new account
Please enter your numeric choice: 5
You are logged in as: [hello-world-service-account@hello-world-315318.iam.gserviceaccount.com].
API [cloudresourcemanager.googleapis.com] not enabled on project
[580325979968]. Would you like to enable and retry (this will take a
few minutes)? (y/N)? y
Enabling service [cloudresourcemanager.googleapis.com] on project [580325979968]...
Operation "operations/acf.p2-580325979968-f1bf2515-deea-49d5-ae35-a0adfef9973e" finished successfully.
Pick cloud project to use:
[1] hello-world-315318
[2] Create a new project
Please enter numeric choice or text value (must exactly match list
item): 1
Your current project has been set to: [hello-world-315318].
Do you want to configure a default Compute Region and Zone? (Y/n)? n
Error creating a default .boto configuration file. Please run [gsutil config -n] if you would like to create this file.
Your Google Cloud SDK is configured and ready to use!
* Commands that require authentication will use hello-world-service-account@hello-world-315318.iam.gserviceaccount.com by default
* Commands will reference project `hello-world-315318` by default
Run `gcloud help config` to learn how to change individual settings
This gcloud configuration is called [hello-world]. You can create additional configurations if you work with multiple accounts and/or projects.
Run `gcloud topic configurations` to learn more.
Some things to try next:
* Run `gcloud --help` to see the Cloud Platform services you can interact with. And run `gcloud help COMMAND` to get help on any gcloud command.
* Run `gcloud topic --help` to learn about advanced features of the SDK like arg files and output formatting
Step 11: Generate access token
Type kubectl get namespace; an access token is generated in the .kube folder (in your home folder), in the config file:
If you open this config file, you will find your access token. You will need this later.
Step 12: Deploy and start Kubernetes dashboard
Now, deploy Kubernetes dashboard with the next command: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
C:\Users\Dimche Trifunov\AppData\Local\Google\Cloud SDK>kubectl proxy
Starting to serve on 127.0.0.1:8001
Now, you need the token from the config file that we spoke about a moment ago. Open the config file with Notepad (on Windows), find your access token, copy it from there and paste it into the Enter token* field. Be careful when copying the token from the config file as there might be several tokens; you must choose yours (image below).
Finally, the stage is prepared to deploy the microservice.
Step 13: Deploy microservice
Build the Docker image from the Dockerfile with the command: docker build -t docker2222/dimac:latest . (docker2222/dimac is my public Docker repository). Push the image to Docker Hub with the command: docker image push docker2222/dimac:latest. Execute kubectl apply -f k8s.yaml, where k8s.yaml is the file below:
Implementing Spring Boot internationalization can be easily achieved using Resource Bundles. I will show you a code example of how you can implement it in your projects.
Let’s create a simple Spring Boot application from start.spring.io.
The first step is to create a resource bundle (a set of properties files with the same base name and language suffix) in the resources package.
I will create properties files with the base name texts and only one key, greetings:
texts_en.properties
texts_de.properties
texts_it.properties
texts_fr.properties
In all of those files I will add the value “Hello World !!!” and its translations. I was using Google Translate, so please do not judge me if something is wrong :).
After that, I will add some simple YML configuration in application.yml file which I will use later.
server:
port: 7000
application:
translation:
properties:
baseName: texts
defaultLocale: de
Now, let’s create the configuration. I will create two beans: LocaleResolver and ResourceBundleMessageSource. Let’s explain both of them.
With the LocaleResolver interface, we define which implementation we are going to use. For this example, I chose the AcceptHeaderLocaleResolver implementation. It means that the language value must be provided via the Accept-Language header.
@Bean
public LocaleResolver localeResolver() {
AcceptHeaderLocaleResolver acceptHeaderLocaleResolver = new AcceptHeaderLocaleResolver();
acceptHeaderLocaleResolver.setDefaultLocale(new Locale(defaultLocale));
return acceptHeaderLocaleResolver;
}
With ResourceBundleMessageSource we are defining which bundle we are going to use in the Translator component (I will create it later 🙂 ).
@Bean(name = "textsResourceBundleMessageSource")
public ResourceBundleMessageSource messageSource() {
ResourceBundleMessageSource rs = new ResourceBundleMessageSource();
rs.setBasename(propertiesBasename);
rs.setDefaultEncoding("UTF-8");
rs.setUseCodeAsDefaultMessage(true);
return rs;
}
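For reference, both beans typically live in one configuration class that binds the two properties from application.yml. Here is a minimal sketch of what that class could look like; the class name is an assumption, and the @Value placeholders follow the YML keys shown earlier:
// A sketch of the configuration class hosting the beans above (assumed name).
@Configuration
public class TranslationConfiguration {

    // bound from application.translation.properties.* in application.yml
    @Value("${application.translation.properties.baseName}")
    private String propertiesBasename;

    @Value("${application.translation.properties.defaultLocale}")
    private String defaultLocale;

    // the localeResolver() and messageSource() @Bean methods shown above
    // belong here and read these two fields
}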
Now, let’s create the Translator component. In this component, I will create only one method, toLocale. In that method, I will fetch the Locale from the LocaleContextHolder and take the translation from the resource bundle.
@Component
public class Translator {
private static ResourceBundleMessageSource messageSource;
public Translator(@Qualifier("textsResourceBundleMessageSource") ResourceBundleMessageSource messageSource) {
this.messageSource = messageSource;
}
public static String toLocale(String code) {
Locale locale = LocaleContextHolder.getLocale();
return messageSource.getMessage(code, null, locale);
}
}
That’s all the configuration we need for this feature. Now, let’s create the Controller, Service and TranslatorCode util classes so we can test the APIs.
@RestController
@RequestMapping("/index")
public class IndexController {
private final TranslationService translationService;
public IndexController(TranslationService translationService) {
this.translationService = translationService;
}
@GetMapping("/translate")
public ResponseEntity<String> getTranslation() {
String translation = translationService.translate();
return ResponseEntity.ok(translation);
}
}
@Service
public class TranslationService {

    // toLocale(..) and GREETINGS are statically imported from the Translator
    // component above and the TranslatorCode class below
    public String translate() {
        return toLocale(GREETINGS);
    }
}
public class TranslatorCode {
public static final String GREETINGS = "greetings";
}
Now, you can start the application. After the application is started successfully, you can start making API calls.
Here is an example of an API call that you can use as a cURL command.
curl --location --request GET "localhost:7000/index/translate" --header "Accept-Language: en"
These are some of the responses from the calls I made:
You can change the default behaviour, add some protection, or add multiple resource bundles; you are not limited when using this feature.
Download the source code
This project is available on our BitBucket repository. Feel free to fix any mistakes and to leave a comment here if you have any questions or feedback.
Quite often, there is a need to automate a specific process. In this case, a client had a manual process in place where people printed specific types of documents at certain periods. There was some room for human error: people forgetting to print something, not being able to print everything on time, printing the same documents twice, etc. This manual printing task can be automated at the application level by creating a scheduled task that prints documents on a network printer. In order to achieve that, we came up with this…
The solution is a containerized CUPS server with the appropriate drivers and printer configuration. We had to create a new Docker image with CUPS, which serves as the CUPS server, then get the correct drivers for the printer (since we are going to use a network printer) and make the appropriate printer configuration. Let’s get to know more about CUPS before we go into the actual implementation.
What is CUPS?
CUPS is a modular printing system for Unix-like computer operating systems which allows a computer to act as a print server. A computer running CUPS is a host which can accept print jobs from client computers, process them, and send them to the appropriate printer. CUPS uses the Internet Printing Protocol (IPP) as the basis for managing print jobs and queues. CUPS is free software, provided under the Apache License.
How does it work?
The initial step requires a queue that keeps track of the printer’s status. When you print to a printer, CUPS creates a queue for tracking the printer status and any pages you have printed. A queue can point to a local printer connected via a USB port, but it can also be a network printer or maybe even many printers on the internet. Where the printer resides doesn’t matter; the queue is independent of this fact and looks the same in any given printer environment.
Every time you print something, CUPS creates a print job which consists of the destination queue where the documents are sent, the names of those documents, and their page descriptions. Jobs are numbered queue-1, queue-2, and so on, so you can track a job as it is printed or cancel it. CUPS is deterministic: when it gets a job for printing, it determines the best filter programs, printer drivers, port monitors and backends to convert the pages into a printable format, and then runs them to actually print the job. After the print job is completed, the job is removed from the queue and CUPS moves on to the next one. Notifications are also available when the job is finished or when errors occur during printing; there are multiple ways to get a notification on the outcome.
Ready, steady, Docker run
Let’s containerize first. The initial step is to set the base Docker image. For this Dockerfile, we decided to go with the CentOS Linux distribution, by RHEL, since it provides the cups packages from the regular repository. Other distributions might require premium repositories in order for the cups packages to be available. The entry instruction, which specifies the base image:
FROM centos:8
The next and more important step is installing the packages cups and cups-filters. The first one, cups, provides the actual printing system backend, filters and other software, whereas cups-filters is a required package for using printer drivers. With the dandified yum (dnf) we update and install the necessary dependencies:
With that, the JDK is also available in the image, which we can confirm by running java --version.
Next follows the configuration of the CUPS server. This is done in a file named cupsd.conf, which resides in the /etc/cups directory of the image. A good practice here would be to create a copy of the original file. In the cupsd.conf file, each line can be a configuration directive, a blank line, or a comment. Directive names and values are case-insensitive; comments start with a # character.
The patching we did: the top-level directive DefaultEncryption is set to IfRequested, to only enable encryption if it is requested, and for the other directive, Listen, we add the value 0.0.0.0:631 in order to allow all incoming connections.
RUN sed -e '0,/^</s//DefaultEncryption IfRequested\n&/' -i /etc/cups/cupsd.conf
RUN sed -i 's/Listen.*/Listen 0.0.0.0:631/g' /etc/cups/cupsd.conf
Allow the cups service to be reachable:
RUN /usr/sbin/cupsd \
&& while [ ! -f /var/run/cups/cupsd.pid ]; do sleep 1; done \
&& cupsctl --remote-admin --remote-any --share-printers \
&& kill $(cat /var/run/cups/cupsd.pid)
After the service setup is done, the configuration of the network printer and its drivers follows. In our scenario, we used a Ricoh C5500 printer. A good resource for finding the appropriate driver files for printers is: https://www.openprinting.org/
A bit more general info on printer drivers: a PostScript printer driver consists of a PostScript Printer Description (PPD) file that describes the features and capabilities of the device, filter programs that prepare print data for the device, and support files for colour management, links to online help, etc. These PPD files include references to all of the filters and support files used by the driver, meaning there are details on all features that the driver provides. Every time a user prints something, the scheduler program (the cupsd service) first determines the format of the print job and the programs required to convert that job into something the printer can understand and perform. CUPS also includes filter programs for many common formats, for example to convert PDF files into device-dependent/independent PostScript. All printer-specific configuration, such as the IP address of the printer, should be done in the printers.conf file.
Last but not least, we need to start the CUPS service:
CMD ["/usr/sbin/cupsd", "-f"]
Now everything is in place on the Docker side. But the print job still needs to be triggered somehow. That brings us to the final step: creating a client in the application mid-layer which sets off a print job, while the CUPS server takes care of the rest.
CUPS4J
For our solution, we used cups4j, a Java library which is available in the Maven central repository. Basic usage of cups4j requires:
Setting up a CupsClient
Fetching an actual file
Creating a print job for that file
Printing (triggers the print job)
We also implemented a scheduler which triggers this job weekly, meaning all documents are run through the print queue once a week; a sketch of such a scheduler follows the snippet below. If we want to specify a custom host, we need to provide the IP address of that host and the appropriate port number.
CupsClient cupsClient = new CupsClient("127.0.0.1", 631);
CupsPrinter cupsPrinter = cupsClient.getDefaultPrinter();
InputStream inputStream = new FileInputStream("test-file.pdf");
PrintJob printJob = new PrintJob.Builder(inputStream).build();
PrintRequestResult printRequestResult = cupsPrinter.print(printJob);
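To give an idea of how the weekly trigger mentioned above could look, here is a minimal sketch using Spring’s @Scheduled; the cron expression, the file name and the printer host are assumptions, and scheduling must be enabled with @EnableScheduling in the application:
import java.io.FileInputStream;
import java.io.InputStream;

import org.cups4j.CupsClient;
import org.cups4j.CupsPrinter;
import org.cups4j.PrintJob;
import org.cups4j.PrintRequestResult;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class WeeklyPrintTask {

    // every Monday at 08:00; adjust the cron expression to your needs
    @Scheduled(cron = "0 0 8 * * MON")
    public void printWeeklyDocuments() throws Exception {
        CupsClient cupsClient = new CupsClient("127.0.0.1", 631);
        CupsPrinter cupsPrinter = cupsClient.getDefaultPrinter();
        try (InputStream inputStream = new FileInputStream("weekly-report.pdf")) {
            PrintJob printJob = new PrintJob.Builder(inputStream).build();
            PrintRequestResult result = cupsPrinter.print(printJob);
            // the result can be inspected and logged to notify someone about failures
        }
    }
}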
Summary
We managed to create a dockerized solution step by step. First, we created an image that runs a CUPS server, which we configured for a specific network printer. The printer then waits for a print job to be triggered by the client. As a client, we created a simple cups4j client which raises the print job. This means all CUPS-related configuration is done in Docker, and the client only triggers the print job.
Network printing with CUPS from Docker, by Jovan Ivanovski (2020-12-31)
If you are a web developer, you probably have developed some endpoint which has a slow response time. The issue for that might be that you are calling some 3rd party API, you have file processing or it might be how your entities are retrieved from the database.
In this article, we are going to take a look at how the Entity Graph might help us to improve our query performance when using JPA and Spring Boot.
Let’s discuss the following scenario:
We want to build an application where we can keep track of buildings, how many apartments every building has and how many tenants every apartment has. I have already created a simple application that can be downloaded from here.
In order to achieve the previously mentioned scenario, we will need to have the following entities:
package com.north47.entitygraphdemo.repository.model;
import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;
import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;
@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Building {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private Long id;
private String buildingName;
@OneToMany(mappedBy = "building", cascade = CascadeType.ALL)
private List<Apartment> apartments;
public void addApartment(Apartment apartment) {
if (apartments == null) {
apartments = new ArrayList<>();
}
apartments.add(apartment);
apartment.setBuilding(this);
}
}
package com.north47.entitygraphdemo.repository.model;
import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;
import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;
@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Apartment {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private Long id;
private String type;
@JoinColumn(name = "building_id")
@ManyToOne
private Building building;
@OneToMany(mappedBy = "apartment", cascade = CascadeType.ALL)
private List<Tenant> tenants;
public void addTenant(Tenant tenant) {
if (tenants == null) {
tenants = new ArrayList<>();
}
tenants.add(tenant);
tenant.setApartment(this);
}
}
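The Tenant entity is part of the linked demo project; judging by the columns in the queries shown later (name, last_name, apartment_id), a sketch of it could look roughly like this:
package com.north47.entitygraphdemo.repository.model;

import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

import javax.persistence.*;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Tenant {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;

    private String lastName;

    // back-reference to the apartment the tenant lives in
    @JoinColumn(name = "apartment_id")
    @ManyToOne
    private Apartment apartment;
}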
We want to observe what happens when we retrieve all the entities. For that purpose, a service method called iterate is created in BuildingService that gets all the buildings and loops through all remaining entities. For this method to be visible to the outside world, a BuildingController is created that exposes a GET endpoint from which we can access the iterate method in BuildingService. In order to have some data in our database, there is an SQL script data.sql that inserts some data and is executed on startup. I would strongly suggest starting your application in debug mode and stepping through every part of the iterate method.
If you have already started your application, open the following URL: http://localhost:8080/building/iterate in your browser or some API tool (Postman, for example) and execute this GET request. This will execute the iterate method that was created previously.
Let’s see the content of the iterate service method we are calling with this endpoint and observe the console while executing it:
package com.north47.entitygraphdemo.service;
import com.north47.entitygraphdemo.repository.BuildingRepository;
import com.north47.entitygraphdemo.repository.model.Apartment;
import com.north47.entitygraphdemo.repository.model.Building;
import com.north47.entitygraphdemo.repository.model.Tenant;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import java.util.List;
@Slf4j
@Service
@RequiredArgsConstructor
public class BuildingService {
private final BuildingRepository buildingRepository;
public void iterate() {
log.debug("Iteration started");
log.debug("Get all buildings");
final List<Building> buildings = buildingRepository.findAll();
buildings.forEach(building -> {
log.debug("Get all apartments for building with id: {}", building.getId());
final List<Apartment> apartments = building.getApartments();
apartments.forEach(apartment -> {
log.debug("Get all tenants for apartment with id: {}", apartment.getId());
final List<Tenant> tenants = apartment.getTenants();
log.debug("Apartment with id : {} has {} tenants", apartment.getId(), tenants.size());
});
});
}
}
If you are in debug mode you may notice that after buildingRepository.findAll() is executed we can see the following log in the console:
Hibernate: select building0_.id as id1_1_, building0_.building_name as building2_1_ from building building0_
Let’s continue with executing the rest of the code. What will appear in the console is the following:
Hibernate: select apartments0_.building_id as building3_0_0_, apartments0_.id as id1_0_0_, apartments0_.id as id1_0_1_, apartments0_.building_id as building3_0_1_, apartments0_.type as type2_0_1_ from apartment apartments0_ where apartments0_.building_id=?
Hibernate: select tenants0_.apartment_id as apartmen4_2_0_, tenants0_.id as id1_2_0_, tenants0_.id as id1_2_1_, tenants0_.apartment_id as apartmen4_2_1_, tenants0_.last_name as last_nam2_2_1_, tenants0_.name as name3_2_1_ from tenant tenants0_ where tenants0_.apartment_id=?
Hibernate: select tenants0_.apartment_id as apartmen4_2_0_, tenants0_.id as id1_2_0_, tenants0_.id as id1_2_1_, tenants0_.apartment_id as apartmen4_2_1_, tenants0_.last_name as last_nam2_2_1_, tenants0_.name as name3_2_1_ from tenant tenants0_ where tenants0_.apartment_id=?
Even though we are not calling any repository methods, SQL queries are executed. This happens because the fetch type is not specified in the entities, and the default one for @OneToMany relationships is LAZY. This means that when we try to access the collections annotated with @OneToMany (in our case by calling getApartments in Building and getTenants in Apartment), an additional query is executed. Imagine having a lot of data and executing similar logic: this would trigger many additional queries and cause huge latency. One solution is to switch the fetch type to EAGER, but that means these collections will always be loaded and we won’t be able to change that at runtime.
One of the solutions can be the JPA Entity Graph. Let’s see how it can make our life easier. We will do the following changes in our domain class Building:
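A sketch of what that change could look like, based on the description that follows (the exact definition ships with the demo project):
@NamedEntityGraph(
        name = "Building.List",
        attributeNodes = @NamedAttributeNode(value = "apartments", subgraph = "Building.Apartment"),
        subgraphs = @NamedSubgraph(
                name = "Building.Apartment",
                attributeNodes = @NamedAttributeNode("tenants")))
@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Setter
public class Building {
    // id, buildingName, apartments and addApartment(..) stay as shown earlier
}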
So what happened here? We have defined an entity graph named Building.List. With the attribute nodes we specify which collections should be loaded. Since we also want to get the tenants, we have defined a subgraph called Building.Apartment, and in the subgraph we say to load all the tenants for every apartment. In order for this entity graph to be used, we need to create a method in our BuildingRepository for which we specify this entity graph:
package com.north47.entitygraphdemo.repository;
import com.north47.entitygraphdemo.repository.model.Building;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import java.util.List;
public interface BuildingRepository extends JpaRepository<Building, Long> {
@Override
List<Building> findAll();
@EntityGraph(value = "Building.List")
@Query("select b from Building as b")
List<Building> findAllWithEntityGraph();
}
And of course, we will provide a service method that has the same logic but findAllWithEntityGraph() will be called:
public void iterateWithEntityGraph() {
log.debug("Iteration with entity started");
log.debug("Get all buildings");
final List<Building> buildings = buildingRepository.findAllWithEntityGraph();
buildings.forEach(building -> {
log.debug("Get all apartments for building with id: {}", building.getId());
final Set<Apartment> apartments = building.getApartments();
apartments.forEach(apartment -> {
log.debug("Get all tenants for apartment with id: {}", apartment.getId());
final Set<Tenant> tenants = apartment.getTenants();
log.debug("Apartment with id : {} has {} tenants", apartment.getId(), tenants.size());
});
});
}
And what is remaining is to expose this method using the BuildingController so we can test our new functionality:
@GetMapping(value = "/iterate/entityGraph")
public ResponseEntity<Void> iterateWithEntityGraph() {
buildingService.iterateWithEntityGraph();
return new ResponseEntity<>(HttpStatus.OK);
}
Now if we put the following URL http://localhost:8080/building/iterate/entityGraph in our browser and observe our console we can see that only one query is executed:
Hibernate: select building0_.id as id1_1_0_, apartments1_.id as id1_0_1_, tenants2_.id as id1_2_2_, building0_.building_name as building2_1_0_, apartments1_.building_id as building3_0_1_, apartments1_.type as type2_0_1_, apartments1_.building_id as building3_0_0__, apartments1_.id as id1_0_0__, tenants2_.apartment_id as apartmen4_2_2_, tenants2_.last_name as last_nam2_2_2_, tenants2_.name as name3_2_2_, tenants2_.apartment_id as apartmen4_2_1__, tenants2_.id as id1_2_1__ from building building0_ left outer join apartment apartments1_ on building0_.id=apartments1_.building_id left outer join tenant tenants2_ on apartments1_.id=tenants2_.apartment_id
So we managed to reduce the number of queries from 4 to 1, and we still have the possibility to call the findAll() method in the BuildingRepository, where we won’t load all the apartments or tenants. In a real-world scenario, you can define as many entity graphs as you want and specify which collections should be loaded.
Hope you had fun, you can find the code in our repository.
Improve your performance using JPA Entity Graph, by Filip Trajkovski (2020-12-08)
Nowadays, every serious company has options on its website to present its products to potential customers. When we talk about big companies, with a lot of products and huge traffic, AEM is one of the best solutions. But how are the products imported into AEM, and where are they placed in AEM? You can learn that here. We will cover the following aspects of the problem:
Fetch the products from the server
Convert server response (JSON String) into Java object
Where to place products in AEM repository
Product node structure
Save products in CRX
How to start product importer and follow the process
How to fetch the products from the server
Typically, large companies keep their products on a separate, dedicated server. With the API that they provide to you, a connection to the server is established and you can fetch the products. In most cases, an OSGi service is created that keeps the configuration data for connecting to the remote server. Typically we get the response as a JSON String. Below is just one idea of how to get a response. The URL parameter and the access token are provided by the client, and usually we keep them in the OSGi service configuration.
private String getResponseBodyAsString(HttpResponse response) {
try (Scanner sc = new Scanner(response.getEntity().getContent())) {
StringBuilder sb = new StringBuilder();
while (sc.hasNext()) {
sb.append(sc.nextLine());
}
return sb.toString();
} catch (IOException e) {
LOGGER.error("Failed to retreive response body", e);
}
return null;
}
How to convert server response into Java object(APIModel)
Once we get a response as a JSON String, the biggest challenge is converting the response from the server (String apiResponse) into a Java class (APIModel). For that purpose we use the com.google.gson.Gson class. Sometimes it is unpredictable how Gson will deserialize apiResponse into Java objects. As a piece of advice: if something goes wrong in the mapping, just put Object as the type of the value, and later, when debugging, you can check how Gson actually maps that value.
public APIModel convertIntoAPIModel(String apiResponse) {
try {
Gson gson = new Gson();
return gson.fromJson(apiResponse, APIModel.class);
} catch (RuntimeException e) {
LOGGER.error("Error while converting into APIModel",e);
throw e;
}
}
@Model(adaptables = Resource.class)
public class APIModel {
private List<ProductImportedModel> results;
public List<ProductImportedModel> getResults() {
return results;
}
}
public class ProductImportedModel {
private String nameOfProduct;
private Date lastModifiedAt;
private Date createdAt;
........
........
}
Where to place products in AEM repository
First, let’s look at the very base of this commerce part. We will look at the repository level in CRX to see the location and structure of the products in AEM. For that purpose, the most appropriate example is the out-of-the-box we-retail solution, which is part of the AEM installation. Products are stored in /var/commerce/products/your-company-name.
Product node structure
Let’s check the structure of one product in we-retail (in the image above, “eqbisucos”). The product consists of one “master” product node which contains the general properties of the product. These properties can be anything, including price, rating, origin… and the most important properties are these two, which mark it as a product:
Under this master node, there are sub-nodes such as “image” and the variants of the product. Regarding variants, it is important to mention that the difference from the product is that the property commerceType has the value ‘variant’.
In the image above, we can see different variants such as size-s, size-m, size-l.
Now, when we know the structure of the out-of-the-box commerce product, let’s see how we can use our APIModel and transform it into a node structure under /var/commerce.
The product node does not have a strictly defined structure. It depends on the concrete situation and the data for the product that we need to store. However, there are some rules to take into consideration:
define the master product node
create variants as sub-nodes of the master node. It can happen that both (master and variant) have very similar properties with only small differences, but that is acceptable. At least one property must be different
the product should have an “image” sub-node with an image. This is good practice but not mandatory. It could be just one “image” node for the master or, furthermore, every variant can have its own “image” node
it is possible to have other sub-nodes with different information for the product or its variants. The number of “other nodes” is not limited; they can keep any information
every product can have a different node structure for its master or variants. Some sub-nodes could be missing for some masters or variants
Persist products
Once we have determined the node structure of the product, it is time to create that node structure and store the values as node properties. First, for that purpose, we need a service user with write permissions on the /var/commerce part. The best approach is to use the Sling API, with all its methods for creating resources. Here is one example of creating a product node with properties.
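A rough sketch of how such a product node could be created with the Sling API follows; the sub-service name, the target path and the property names here are illustrative assumptions, not the client-specific values:
// Classes come from org.apache.sling.api.resource; the sub-service name is assumed.
private void createProductNode(ResourceResolverFactory resolverFactory) throws LoginException, PersistenceException {
    Map<String, Object> authInfo = Collections.singletonMap(
            ResourceResolverFactory.SUBSERVICE, "product-importer");
    try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(authInfo)) {
        Resource productsRoot = resolver.getResource("/var/commerce/products/your-company-name");

        Map<String, Object> properties = new HashMap<>();
        properties.put("jcr:primaryType", "nt:unstructured");
        properties.put("cq:commerceType", "product");
        properties.put("price", "9.99");

        // creates the master product node and persists it
        resolver.create(productsRoot, "name-of-product", properties);
        resolver.commit();
    }
}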
The AEM JMX console is the place from which we trigger the Product Importer. Possible options are to trigger it manually or periodically as a cron job. It is a separate thread that does not return a result, so it is very hard to follow the process without the server logs. But that is possible only for a developer. So, any solution?
The most suitable option is to send a mail to the responsible person with the time and place where the error happened.
Implementation of Product Importer in AEM, by Dimche Trifunov (2020-11-24)
A CI/CD pipeline helps in automating your software delivery process. What the pipeline does is building code, running tests, and deploying a newer version of the application.
Not long ago, GitHub announced GitHub Actions, meaning that they have built in support for CI/CD. This means that developers can use GitHub Actions to create a CI/CD pipeline.
With Actions, GitHub now allows developers not just to host the code on the platform, but also to run it.
Let’s create a CI/CD pipeline using GitHub Actions; the pipeline will deploy a Spring Boot app to AWS Elastic Beanstalk.
First of all, let’s find a project
For this purpose, I will be using this project, which I have forked. Once forked, we open the project and, upon opening, we will see the section for GitHub Actions.
GitHub Actions Tool
Add predefined Java with Maven Workflow
Get started with GitHub Actions
By clicking on Actions, we are provided with a set of predefined workflows. Since our project is Maven based, we will be using the Java with Maven workflow.
By clicking “Start commit”, GitHub will add a commit with the workflow; the commit can be found here.
Let’s take a look at the predefined workflow:
name: Java CI with Maven
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up JDK 1.8
uses: actions/setup-java@v1
with:
java-version: 1.8
- name: Build with Maven
run: mvn -B package --file pom.xml
name: Java CI with Maven This is just specifying the name for the workflow
on: push, pull_request The on key specifies the events that will trigger the workflow; in this case, those events are push and pull_request on the master branch
job: A job is a set of steps that execute on the same runner
runs-on: ubuntu-latest The runs-on key specifies the underlying OS we want our workflow to run on, for which we are using the latest version of Ubuntu
steps: A step is an individual task that can run commands (known as actions). Each step in a job executes on the same runner, allowing the actions in that job to share data with each other
actions: Actions are the smallest portable building block of a workflow which are combined into steps to create a job. We can create our own actions, or use actions created by the GitHub community
Our steps actually set up Java and execute the Maven commands needed to build the project.
Since we added the workflow by creating the commit from the GUI, the pipeline started automatically and verified the commit, which we can see in the following image:
Pipeline report
Create an application in AWS Elastic Beanstalk
The next thing that we need to do is to create an app on Elastic Beanstalk where our application is going to be deployed. For that purpose, an AWS account is needed.
AWS Elastic Beanstalk service
Upon opening the Elastic Beanstalk service we need to choose the application name:
Application name
For the platform choose Java8.
Choose platform
For the application code, choose Sample application and click Create application. Elastic Beanstalk will create and initialize an environment with a sample application.
Let’s continue working on our pipeline
We are going to use an action created by the GitHub community for deploying an application to Elastic Beanstalk. The action is einaregilsson/beanstalk-deploy. This action requires additional configuration, which is added using the with keyword:
- name: Deploy to EB
uses: einaregilsson/beanstalk-deploy@v13
with:
aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
application_name: {change this with aws application name}
environment_name: {change this with aws environment name}
version_label: ${{github.SHA}}
region: {change this with aws region}
deployment_package: target/spring-petclinic-rest-2.2.5.jar
Add variables
We need to add values for the properties application_name and environment_name (from AWS Elastic Beanstalk), the AWS region and the AWS API keys.
Go to AWS Elastic Beanstalk and copy the previously created Environment name and Application name. Then go to AWS IAM and, under your user in the security credentials section, either create a new AWS access key or use an existing one. The AWS Access Key and AWS Secret Access Key should be added to the GitHub project settings under the Secrets tab, which looks like this:
In order to be considered a healthy instance by Elastic Beanstalk, the deployed application has to return an OK response when accessed by the load balancer standing in front of Elastic Beanstalk. The load balancer accesses the application on the root path. The forked application, when accessed on the root path, forwards the request to swagger-ui.html. For that purpose, we need to remove the forwarding.
Change RootController.class:
@RequestMapping(value = "/", method = RequestMethod.GET, produces = "application/json")
public ResponseEntity<Void> getRoot() {
return new ResponseEntity<>(HttpStatus.OK);
}
Change application.properties server port to 5000 since, by default, Spring Boot applications will listen on port 8080. Elastic Beanstalk assumes that the application will listen on port 5000.
server.port=5000
And remove the server.servlet.context-path=/petclinic/.
The successful commit which deployed our app on AWS Elastic Beanstalk can be seen here:
Pipeline build
And the Elastic Beanstalk with a green environment:
Elastic Beanstalk green environment
Voila, there we have it: a CI/CD pipeline with GitHub Actions and deployment on AWS Elastic Beanstalk. You can find the forked project here.
Pet Clinic Swagger UI
Create a CI/CD pipeline with GitHub Actions, by Vladimir Dimikj (2020-10-29)
GraphQL is a query language for your APIs and a runtime for fulfilling those queries with existing data. GraphQL provides a complete and understandable description of the data in your API and gives clients the power to ask for exactly what they need and nothing more. GraphQL is designed to make APIs fast, flexible, and developer-friendly.
GraphQL SPQR
GraphQL SPQR (GraphQL Schema Publisher & Query Resolver, pronounced like speaker) is a simple-to-use library for rapid development of GraphQL APIs in Java. It works by dynamically generating a GraphQL schema from Java code.
In this tutorial, we are going to explain the simple steps for integrating GraphQL into your microservice.
@Configuration
public class GraphQLConfiguration {
@Bean
public GraphQLSchema schema(GraphQLRootQuery graphQLRootQuery,
GraphQLRootMutation graphQLRootMutation,
GraphQLRootSubscription graphQLRootSubscription,
GraphQLResolvers graphQLResolvers) {
GraphQLSchema schema = new GraphQLSchemaGenerator()
.withBasePackages("com.myproject.microservices")
.withOperationsFromSingletons(graphQLRootQuery, graphQLRootMutation, graphQLRootSubscription, graphQLResolvers)
.generate();
return schema;
}
@Bean
public GraphQLResolvers resolvers(MyOtherMicroserviceClient myOtherMicroserviceClient) {
return new GraphQLResolvers(myOtherMicroserviceClient);
}
@Bean
public GraphQLRootQuery query(MyOtherMicroserviceClient myOtherMicroserviceClient) {
return new GraphQLRootQuery(myOtherMicroserviceClient);
}
@Bean
public GraphQLRootMutation mutation(MyOtherMicroserviceClient myOtherMicroserviceClient) {
return new GraphQLRootMutation(myOtherMicroserviceClient);
}
// define your own scalar types (custom data type) if you need to.
@Bean
public GraphQLEnumProperty graphQLEnumProperty() {
return new GraphQLEnumProperty();
}
@Bean
public JsonScalar jsonScalar() {
return new JsonScalar();
}
/* Add your own custom error handler if you need to.
This is needed if you want any custom error information/messages to be propagated to the client. */
@Bean
public GraphQLErrorHandler errorHandler() {
....
}
}
GraphQL class for query operations:
public class GraphQLRootQuery {
@GraphQLQuery(description = "Retrieve list of your attributes by search criteria")
public List<AttributeDTO> getMyAttributes(@GraphQLId @GraphQLArgument(name = "id", description = "Id of your attribute") String id,
@GraphQLArgument(name = "myQueryParam", description = "…") String myQueryParam) {
return …;
}
}
GraphQL class for mutation operations:
public class GraphQLRootMutation {
@GraphQLMutation(description = "Update attribute")
public AttributeDTO updateAttribute(@GraphQLId @GraphQLNonNull @GraphQLArgument(name = "id", description = "Id of your attribute") String id,
@GraphQLArgument(name = "payload", description = "Payload for update") UpdateRequest payload) {
return …
}
}
GraphQL resolvers:
public class GraphQLResolvers {
@GraphQLQuery(description = "Retrieve additional information")
public List<AdditionalInfoDTO> getAdditionalInfo(@GraphQLContext AttributesDTO attributesDTO) {
return …
}
}
Note: All the Java classes (AdditionalInfoDTO, AttributesDTO, UpdateRequest) are just examples of data transfer objects and requests that need to be replaced with your custom classes in order for the code to compile and be executable.
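One more piece is needed to actually serve the generated schema over HTTP. A common, minimal approach (an assumption here, not something prescribed by GraphQL SPQR) is a small controller that executes incoming queries with the underlying graphql-java engine, exposed on the /graphql path mentioned below:
import java.util.Map;

import graphql.ExecutionResult;
import graphql.GraphQL;
import graphql.schema.GraphQLSchema;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GraphQLController {

    private final GraphQL graphQL;

    public GraphQLController(GraphQLSchema schema) {
        // GraphQL comes from graphql-java, which GraphQL SPQR builds upon
        this.graphQL = GraphQL.newGraphQL(schema).build();
    }

    @PostMapping("/graphql")
    public Map<String, Object> execute(@RequestBody Map<String, String> request) {
        // clients send at least {"query": "..."} in the POST body; variables are ignored in this sketch
        ExecutionResult result = graphQL.execute(request.get("query"));
        return result.toSpecification();
    }
}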
How to use GraphQL from the client side?
Finally, let’s have a look at how GraphQL looks from the front-end side. We are using a tool called GraphiQL (https://www.electronjs.org/apps/graphiql) to test it.
GraphQL Endpoint: URL of your service, defaults to /graphql
Method: it is always POST
HTTP Header: You can include authorization tokens with the request
Left pane: the GraphQL query (the HTTP request itself is sent to the server as JSON)
Right pane: response from the server, always JSON
Note: You get what you request; only the attributes that you request are returned.
Simple examples for query and mutation:
In this tutorial, you learned how to create a GraphQL API in Java with Spring Boot. But you are not limited to Spring Boot for that; you can use GraphQL SPQR in pretty much any Java environment.
How to integrate GraphQL in the Microservice, by Elena Stojanovska (2020-09-29)
Why should you consider implementing multitenancy in your project?
Cost: Multi-tenant architecture allows the sharing of resources, databases, and the application itself, thus the cost to run the system is fixed.
Maintenance: Users do not have to pay a considerable amount of fees to keep the software up to date. This reduces the overall cost of maintenance for each tenant.
Performance: Easier to assess and optimize speed, utilization, response time across the entire system, and even update the technology stack when needed.
In this blog we will implement multitenancy in our Spring Boot project.
Let’s create a simple Spring Boot project from start.spring.io, with only basic dependencies (Spring Web, Spring Data JPA, Spring Configuration Processor, MySQL Driver).
The good thing about implementing multitenancy is that we do not need additional dependencies. We will split this example into two parts. In the first one, we will explain the idea/logic behind it, split the approach into 7 configuration steps and explain every step. In the second part, we will see how it is implemented in real life and test the solution.
1. Let’s start with creating Tenant Storage. We will use it for keeping the tenant value while the request is executing.
public class TenantStorage {
private static ThreadLocal<String> currentTenant = new ThreadLocal<>();
public static void setCurrentTenant(String tenantId) {
currentTenant.set(tenantId);
}
public static String getCurrentTenant() {
return currentTenant.get();
}
public static void clear() {
currentTenant.remove();
}
}
2. Next, we will create the Tenant Interceptor. For every request, we will set the value at the beginning and clear it at the end. As you can see in the Tenant Interceptor, I decided for this demo to fetch the value of the tenant from the request header (X-Tenant); this is up to you. Just keep an eye on data security when using this in production. Maybe you want to fetch it from a cookie or some other header name.
@Component
public class TenantInterceptor implements WebRequestInterceptor {
private static final String TENANT_HEADER = "X-Tenant";
@Override
public void preHandle(WebRequest request) {
TenantStorage.setCurrentTenant(request.getHeader(TENANT_HEADER));
}
@Override
public void postHandle(WebRequest webRequest, ModelMap modelMap) {
TenantStorage.clear();
}
@Override
public void afterCompletion(WebRequest webRequest, Exception e) {
}
}
3. The next thing is to add the Tenant Interceptor to the interceptor registry. For that purpose, I will create a WebConfiguration class that implements WebMvcConfigurer.
@Configuration
public class WebConfiguration implements WebMvcConfigurer {
private final TenantInterceptor tenantInterceptor;
public WebConfiguration(TenantInterceptor tenantInterceptor) {
this.tenantInterceptor = tenantInterceptor;
}
@Override
public void addInterceptors(InterceptorRegistry registry) {
registry.addWebRequestInterceptor(tenantInterceptor);
}
}
4. Now, let’s update the application.yml file with the properties for the database connections: under a tenants prefix, each tenant name maps to its own jdbcUrl, driverClassName, username and password.
5. Next, we will wrap the tenants’ values into a map with key = tenant name and value = data source, in DataSourceProperties.
@ConfigurationProperties(prefix = "tenants")
@Component
public class DataSourceProperties {
private Map<Object, Object> dataSources = new LinkedHashMap<>();
public Map<Object, Object> getDataSources() {
return dataSources;
}
public void setDataSources(Map<String, Map<String, String>> datasources) {
datasources.forEach((key, value) -> this.dataSources.put(key, convert(value)));
}
public DataSource convert(Map<String, String> source) {
return DataSourceBuilder.create()
.url(source.get("jdbcUrl"))
.driverClassName(source.get("driverClassName"))
.username(source.get("username"))
.password(source.get("password"))
.build();
}
}
6. Afterwards, we should create DataSource Bean, and for that purpose, I will create DataSourceConfig.
@Configuration
public class DataSourceConfig {
private final DataSourceProperties dataSourceProperties;
public DataSourceConfig(DataSourceProperties dataSourceProperties) {
this.dataSourceProperties = dataSourceProperties;
}
@Bean
public DataSource dataSource() {
TenantRoutingDataSource customDataSource = new TenantRoutingDataSource();
customDataSource.setTargetDataSources(dataSourceProperties.getDataSources());
return customDataSource;
}
}
7. At last, we will extend the AbstractRoutingDataSource and implement our lookup key.
public class TenantRoutingDataSource extends AbstractRoutingDataSource {
@Override
protected Object determineCurrentLookupKey() {
return TenantStorage.getCurrentTenant();
}
}
And we are done with the first part.
Let’s see how it looks in the real world:
For this example, we will use two schemas from the same database instance, we will create a user and get all users. Also, I will show you how you can implement Flyway and test the solution.
First, let’s configure our databases. In my local instance of MySQL server, we will create two schemas: n47schema1 and n47schema2.
The next step is to execute this CREATE statement for the users table on both schemas:
Then, we will create two APIs: one for creating a user and one for fetching all users.
@RestController
@RequestMapping("/users")
public class UserController {
private final UserRepository userRepository;
public UserController(UserRepository userRepository) {
this.userRepository = userRepository;
}
@PostMapping
public UserDomain addUser(@RequestBody UserRequestBody userRequestBody) {
UserDomain userDomain = new UserDomain(userRequestBody.getName());
return userRepository.save(userDomain);
}
@GetMapping
public List<UserDomain> getAll() {
return userRepository.findAll();
}
}
Also we need to create UserDomain, UserRepository and UserRequestBody.
@Entity
@Table(name = "users")
public class UserDomain {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
private String name;
public UserDomain() {
}
public UserDomain(String name) {
this.name = name;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
public interface UserRepository extends JpaRepository<UserDomain, Long> {
}
public class UserRequestBody {
private String name;
public UserRequestBody() {
}
public UserRequestBody(String name) {
this.name = name;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
And we are done with coding.
We can run our application and start making a request.
First, let’s create some users with a POST request to http://localhost:8080/users. The most important thing not to forget is that we need to provide header X-Tenant with the value n47schema1 or n47schema2.
We will create two users for tenant n47schema1: Antonie and John. Example:
After that, we will change the X-Tenant header value to n47schema2 and create two users: William and Joseph.
You will notice that the ids returned in the response are the same as those for the first tenant. Now let’s fetch the users via the API.
When you make a GET request to http://localhost:8080/users with header X-Tenant having value n47schema1 you will fetch the users from the n47schema1 schema, and when you make a request with a header value n47schema2 you will fetch from the n47schema2 schema.
You can also check the data in the database to be sure that it is stored correctly.
You can always set a fallback if the X-Tenant header is not provided or has a wrong value.
As the last thing, I will show you how you can implement Flyway with multitenancy. First, you need to add Flyway as a dependency and disable it in the application.yml:
spring:
flyway:
enabled: false
Add a @PostConstruct method in the DataSourceConfig configuration:
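A minimal sketch of such a @PostConstruct method, assuming we simply iterate over the configured tenant data sources and run the Flyway migrations against each one (javax.annotation.PostConstruct, javax.sql.DataSource and org.flywaydb.core.Flyway are the needed imports):
@PostConstruct
public void migrate() {
    for (Object value : dataSourceProperties.getDataSources().values()) {
        DataSource dataSource = (DataSource) value;
        // runs the scripts from the default db/migration location on every tenant schema
        Flyway flyway = Flyway.configure().dataSource(dataSource).load();
        flyway.migrate();
    }
}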
Multitenancy with Spring Boot, by Antonie Zafirov (2020-07-14)
Well, I did. In everyday work we can hear discussions about microservices, containers, beans, entities, etc., but it is very hard and rare to hear any talk about immutable or mutable classes. Why is that?
Let’s first refresh our memories of what an Immutable class is.
An immutable class means that once an object is initialized, we cannot change its content.
To be more clear, let’s see how we can write Immutable classes in Java.
The basic rules for writing immutable classes are:
Don’t provide “setter” methods — methods that modify fields or objects referred to by fields.
Make all fields final and private.
Don’t allow subclasses to override methods. The simplest way to do this is to declare the class as final. A more sophisticated approach is to make the constructor private and construct instances in factory methods.
If the instance fields include references to mutable objects, don’t allow those objects to be changed:
Don’t provide methods that modify the mutable objects.
Don’t share references to the mutable objects. Never store references to external, mutable objects passed to the constructor; if necessary, create copies, and store references to the copies. Similarly, create copies of your internal mutable objects when necessary to avoid returning the originals in your methods.
How to make Immutable classes
After defining these basic rules there are several different solutions to write Immutable classes.
Basic one without using external libraries:
final public class ImmutableBasicExample {
final private Long accountNumber;
final private String accountCurrency;
private void check(String accountCurrency, Long accountNumber) {
// Some constructor rules
// throw new IllegalArgumentException()
}
public ImmutableBasicExample(
String accountCurrency, Long accountNumber) {
check(accountCurrency, accountNumber);
this.accountCurrency = accountCurrency;
this.accountNumber = accountNumber;
}
public String getAccountCurrency() {
return accountCurrency;
}
public Long getAccountNumber() {
return accountNumber;
}
}
Use a final class, make the fields final and set them in the constructor. Don’t write setters for the fields, just getters.
We can use Lombok:
import lombok.Value;
@Value
public class LombokImmutable {
Long accountNumber;
String accountCurrency;
}
Much shorter: with just one annotation we have done our job. @Value is the immutable variant of Lombok’s @Data: all fields are made private and final, the class is made final, a getter is generated for each field, plus some basic methods like toString(), equals() and hashCode().
Or the newest way, using a record (a Java 14 preview feature):
public record RecordRequestBody(
Long accountNumber,
String accountCurrency) {
}
By far the most sophisticated way to make an immutable class: small, readable and useful code, without using external libraries. The compiler auto-generates the following: a private final field and a public read accessor for each component, a public constructor whose signature matches the state description, and implementations of toString(), hashCode() and equals().
I’m sure that there are other ways to write Immutable classes, but these are enough to understand how much code and effort we need to write one.
Use of Immutable classes
We already use immutable classes every day in our work. All primitive wrapper classes are immutable. Here is one everyday practical example:
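A small example consistent with the description that follows (variable and method names are illustrative):
public class ImmutableWrapperExample {

    public static void main(String[] args) {
        Integer integerExample = 1;
        changeValue(integerExample);
        // still prints 1: the call could not modify the Integer we hold
        System.out.println(integerExample);
    }

    private static void changeValue(Integer value) {
        // boxing 2 creates a brand new Integer object; only the local
        // reference inside this method points to it
        value = 2;
    }
}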
The assignment inside the method creates a new reference to a new object, while integerExample in the main code still references the old Integer object, which is not changed.
This also applies to all the other primitive wrappers, which are immutable: Byte, Short, Integer, Long, Float, Double, Character, Boolean. Additionally, BigDecimal, BigInteger and LocalDate are immutable.
Another interesting immutable class in Java is String. If we write the following code:
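For example (an illustrative snippet):
String first = "north";
String second = "47";
// neither operand is modified; a third String object is created
String combined = first + second;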
Concatenation of two Strings with “+” will produce a new String. That is fine if we do this with two or a few Strings, but if we build one String with many concatenations, we will initialize many intermediate objects. (That is why it is better to use StringBuilder here.)
We can create and use immutable classes for the request bodies (DTOs) in our REST controllers. It is a good idea because the state will be validated once, when the request is created, and will stay valid after that. We have no need to change the state of the request; if we are changing the request, then we are doing something wrong.
Another scenario where we can use them is when we need business classes (where we process some data) whose state should remain unchanged. We can find a few examples of this, using them for Currency, Account information, etc.
How often should we use them?
Well we see that there are some benefits using them:
They are thread-safe
Safer because their state cannot be changed
They are simple to construct, test and use
It is good to understand and use them if you work with functional and concurrent programming
But there are disadvantages too:
First, you can’t change the fields in them. To do that, you have to create a copy with the changed values. This means you will have more objects initialized in the VM, and you have to write some extra code to copy the objects. To be sure that a class can be made immutable, you have to be sure that the state of its objects never needs to change, and these days developers don’t spend much time analyzing whether a class will or will not be changed.
There is one general concept from Effective Java, which describes the use of immutability:
Classes should be immutable unless there’s a very good reason to make them mutable… If a class cannot be made immutable, limit its mutability as much as possible.
Did we forget the Immutable classes in Java? – by Miodrag Cvetkovic, 2020-04-30
As we all know, Hibernate is an Object Relational Mapping (ORM) framework for the Java programming language. This blog post will teach you how to use advanced hibernate techniques for mapping sets, lists and enums in simple and easy steps.
Mapping sets
Set is a collection of objects in which duplicate values are not allowed and the order of the objects is not important. Hibernate uses the following annotations for mapping sets:
@ElementCollection – Declares an element collection mapping. The data for the collection is stored in a separate table.
@CollectionTable – Specifies the name of a table that will hold the collection. Also provides the join column to refer to the primary table.
@Column – The name of the column to map in the collection table.
@ElementCollection is used to define the following relationships: a one-to-many relationship to an @Embeddable object and a one-to-many relationship to a basic type, such as Java primitives and wrappers: int, Integer, Double, Date, String, etc.
Now you’re probably asking yourself: Hmmm… How does this compare to @OneToMany?
@ElementCollection is similar to @OneToMany except that the target object is not an @Entity. These annotations give you an easy way to define a collection of simple/basic objects. But you can’t query, persist or merge target objects independently of their parent object. @ElementCollection does not support a cascade option, so target objects are ALWAYS persisted, merged and removed together with their parent object.
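A minimal sketch of how these annotations fit together for a set of basic values (the entity, table and column names are just examples):
import javax.persistence.*;
import java.util.HashSet;
import java.util.Set;

@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    // the nicknames live in a separate table, joined back to the Person row
    @ElementCollection
    @CollectionTable(name = "PERSON_NICKNAME", joinColumns = @JoinColumn(name = "PERSON_ID"))
    @Column(name = "NICKNAME")
    private Set<String> nicknames = new HashSet<>();
}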
Mapping lists
Lists are used when we need to keep track of the order position, and duplicate elements are allowed. An additional annotation that we are going to use here is @OrderColumn, which specifies the name of the column used to track the element order/position (the name defaults to <property>_ORDER):
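Continuing the same illustrative Person entity, a list mapping could look roughly like this:
// the collection table stores each phone number together with its position in the list
@ElementCollection
@CollectionTable(name = "PERSON_PHONE", joinColumns = @JoinColumn(name = "PERSON_ID"))
@OrderColumn(name = "PHONE_ORDER")
@Column(name = "PHONE_NUMBER")
private List<String> phoneNumbers = new ArrayList<>();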
Mapping maps
When you want to access data via a key rather than an integer index, you should probably decide to use maps. An additional annotation used for maps is @MapKeyColumn, which helps us define the name of the key column for a map. The name defaults to <property>_KEY:
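Again on the illustrative Person entity, a map keyed by an address type might look like this:
// the key column (e.g. "home", "work") and the value column live in a separate collection table
@ElementCollection
@CollectionTable(name = "PERSON_ADDRESS", joinColumns = @JoinColumn(name = "PERSON_ID"))
@MapKeyColumn(name = "ADDRESS_TYPE")
@Column(name = "ADDRESS")
private Map<String, String> addresses = new HashMap<>();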
Mapping sorted sets
As we mentioned before, the set is an unsorted collection with no duplicates. But what if we don’t need duplicates and the order of retrieval is also important? In that case, we can use @OrderBy and specify the ordering of the elements when a collection is retrieved.
Syntax: @OrderBy(“[field name or property name] [ASC |DESC]”)
Mapping sorted maps
@OrderBy can be also used in maps. In that case, the default value is a key column, ascending.
Mapping Enums
By default, Hibernate maps an enum to a number. This mapping is very efficient, but there is a high risk that adding or removing a value from your enum will change the ordinals of the remaining values. Because of that, you should map the enum value to a String with the @Enumerated annotation. This annotation is used to reference an Enum type and save the field in the database as a String.
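A small sketch of the String-based enum mapping (the enum and entity here are made up for illustration):
public enum AccountType { BASIC, PREMIUM }

@Entity
public class Account {

    @Id
    @GeneratedValue
    private Long id;

    // persisted as the text "BASIC" or "PREMIUM" instead of a fragile ordinal number
    @Enumerated(EnumType.STRING)
    private AccountType accountType;
}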
Conclusion
In this article, we have taken a look at simple techniques for mapping sets, lists and enumerations when using Hibernate. I hope you enjoyed reading it and have found it helpful.
Hibernate techniques for mapping sets, lists and enumerations – by Elena Stojanovska, 2020-04-28
Every year, N47 as a tech family celebrates a tech festival as Hackdays at the end of the year. In December 2019 we were in Budapest, Hungary for Hackdays. There were five different teams and each team had created some cool projects in a short time. I was also part of a team and we implemented a simple Trading Bot for Crypto. In this blog post, I want to share my experiences.
Trading Platform
To create a Trading Bot, you first need to find the right trading platform. We selected Binance DEX, which offers good volume for selected trading pairs, a testnet for test purposes, and is a Decentralized EXchange (DEX). Thanks to the DEX, we can connect the wallet directly and trade with the funds directly from it.
Binance Chain is a new blockchain and peer-to-peer system developed by Binance and the community. Binance DEX is a secure, native marketplace that is based on the Binance Chain and enables the exchange of digital assets that are issued and listed in the DEX. Reconciliation takes place within the blockchain nodes, and all transactions are recorded in the chain, creating a complete, verifiable activity book. BNB is the native token in the Binance Chain, so users are charged the BNB for sending transactions.
Trading fees are subject to a complex logic, which can lead to individual transactions not being charged exactly at the rates mentioned here, but somewhere in between. This is due to the block-based matching engine used in the DEX. The difference between Binance Chain and Ethereum is that there is no concept of gas. As a result, the fees for the remaining transactions are fixed. There are no fees for placing a new order.
The testnet is a test environment for Binance Chain network, run by the Binance Chain development community, which is open to developers. The validators on the testnet are from the development team. There is also a web wallet that can directly interact with the DEX testnet. It also provides 200 testnet BNB so that you can interact with the Binance DEX testnet.
For developers, Binance DEX has also provided the REST API for testnet and main net. It also provides different Binance Chain SDKs for different languages like GoLang, Javascript, Java etc. We used Java SDK for the Trading Bot with Spring Boot.
Trading Strategy
To implement a Trading Bot, you need to know which pair to trade and when to buy and sell Crypto for that pair. We selected a very simple trading strategy for our project. First, we selected the NEXO/BNB trading pair on Binance because this pair has the highest trading volume. Perhaps you can choose a different trading pair based on your analysis.
For the purchase and sale decisions, we looked at candlestick counts, using 15-minute candlesticks. If the last three are all red (price dropping), buy Nexo; if the last three are all green (price rising), sell Nexo. Once you’ve bought or sold, you wait for the next three consecutive red or green candlesticks. The purchase and sale volume is always 20 Nexo. You can also tune this based on your own analysis.
Let’s Code IT
We have implemented the frontend (Vue.Js) and the backend (Spring Boot) for the Trading Bot, but here I will only go into the backend application as it contains the main logic. As already mentioned, the backend application was created with Spring Boot and Binance Chain Java SDK.
We used a ThreadPoolTaskScheduler in the application. This scheduler runs every 2 seconds and checks the candlesticks. It has to be activated once via the frontend app and is then triggered automatically every 2 seconds.
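A minimal sketch of how such a scheduler could be set up (the exact wiring in the real project may differ):
// ThreadPoolTaskScheduler comes from org.springframework.scheduling.concurrent
ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
scheduler.setPoolSize(1);
scheduler.initialize();
// trigger the trading check every 2 seconds (2000 ms)
scheduler.scheduleAtFixedRate(this::execute, 2000L);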
Based on the scheduler, the execute() method is triggered every two seconds. This method first collects all previous Candlestick for 15 minutes and calculates the green and red Candlestick. Based on this, it will buy or sell.
private double quantity = 20.0;
private String symbol = "NEXO-A84_BNB";
public void execute() {
List<Candlestick> candleSticks = binanceDexApiRestClient.getCandleStickBars(this.symbol, CandlestickInterval.FIFTEEN_MINUTES);
List<Candlestick> lastThreeElements = candleSticks.subList(candleSticks.size() - 4, candleSticks.size() - 1);
// check if last three candlesticks are all red (close - open is negative)
boolean allRed = lastThreeElements.stream()
.filter(cs -> Double.parseDouble(cs.getClose()) - Double.parseDouble(cs.getOpen()) < 0.0d).count() == 3;
// check if last three candlesticks are all green (close - open is positive)
boolean allGreen = lastThreeElements.stream()
.filter(cs -> Double.parseDouble(cs.getOpen()) - Double.parseDouble(cs.getClose()) < 0.0d).count() == 3;
Wallet wallet = new Wallet(privateKey, binanceDexEnvironment);
// open and closed orders required to check last order creation time
OrderList closedOrders = binanceDexApiRestClient.getClosedOrders(wallet.getAddress());
OrderList openOrders = binanceDexApiRestClient.getOpenOrders(wallet.getAddress());
// order book required for buying and selling price
OrderBook orderBook = binanceDexApiRestClient.getOrderBook(symbol, 5);
Account account = binanceDexApiRestClient.getAccount(wallet.getAddress());
if ((openOrders.getOrder().isEmpty() || openOrders.getOrder().get(0).getOrderCreateTime().plusMinutes(45).isBeforeNow()) && (closedOrders.getOrder().isEmpty() || closedOrders.getOrder().get(0).getOrderCreateTime().plusMinutes(45).isBeforeNow())) {
if (allRed) {
if (Double.parseDouble(account.getBalances().stream().filter(b -> b.getSymbol().equals(symbol.split("_")[1])).findFirst().get().getFree()) >= (quantity * Double.parseDouble(orderBook.getBids().get(0).getPrice()))) {
order(wallet, symbol, OrderSide.BUY, orderBook.getBids().get(0).getPrice());
System.out.println("Buy Order Placed Quantity:" + quantity + " Symbol:" + symbol + " Price:" + orderBook.getAsks().get(0).getPrice());
} else {
System.out.println("do not have enough Token: " + symbol + " in wallet for buy");
}
} else if (allGreen) {
if (Double.parseDouble(account.getBalances().stream().filter(b -> b.getSymbol().equals(symbol.split("_")[0])).findFirst().get().getFree()) >= quantity) {
order(wallet, symbol, OrderSide.SELL, orderBook.getAsks().get(0).getPrice());
System.out.println("Sell Order Placed Quantity:" + quantity + " Symbol:" + symbol + " Price:" + orderBook.getAsks().get(0).getPrice());
} else {
System.out.println("do not have enough Token:" + symbol + " in wallet for sell");
}
} else System.out.println("do nothing");
} else System.out.println("do nothing");
}
private void order(Wallet wallet, String symbol, OrderSide orderSide, String price) {
NewOrder no = new NewOrder();
no.setTimeInForce(TimeInForce.GTE);
no.setOrderType(OrderType.LIMIT);
no.setSide(orderSide);
no.setPrice(price);
no.setQuantity(String.valueOf(quantity));
no.setSymbol(symbol);
TransactionOption options = TransactionOption.DEFAULT_INSTANCE;
try {
List<TransactionMetadata> resp = binanceDexApiRestClient.newOrder(no, wallet, options, true);
log.info("TransactionMetadata: {}", resp);
} catch (Exception e) {
log.error("Error occurred while order", e);
}
}
At first glance, the strategy looks really simple, I agree. After this initial setup, however, it’s easy to add more complex logic with some AI.
Result
This bot has been running on Google Cloud since 12th December 2019 and did 1130 transactions (buy/sell) until 14th April 2020. Initially, I started the bot with 2.6 BNB. On 7th February 2020, the balance in the wallet was 2.1 BNB, but while writing this blog on 14th April 2020, it looks like the bot has recovered the loss and the balance is 2.59 BNB. Hopefully, in the future it will make some profit💰🙂.
Let me know your suggestions about this bot in a comment, and I would be happy to answer any questions you have on this topic. Thanks for your time.
How does it sound to set up a complete Spring application, with front-end and database? With all the models, repositories and controllers? Even with unit and integration tests, with mocked data? All within a few hours? Your solution is JHipster!
JHipster
JHipster or “Java Hipster” is a handy application generator, a development platform, to develop and deploy web applications. JHipster has become popular in a short time, and it has been featured in many conferences all around the globe – Montreal, Omaha, Taipei, Richmond, Frankfurt, Paris, London. It supports:
Spring Boot (Back-end)
Angular/React/Vue (Front-end)
Spring microservices
JHipster is used for generating complete applications, it will create for you a Spring Boot and Angular/React/Vue application, high-quality application with most of the things pre-configured, using Java as back-end technology and an extensive set of Spring technologies: Spring Security, Spring Boot, Spring MVC (providing a framework for web-sockets, REST and MVC), Spring Data, etc. and Angular/React/Vue front-end and a suite of pre-configured development tools like Yeoman, Maven, Gradle, Grunt, Gulp.js and Bower.
JHipster gives you a head start in creating Spring Boot application with a set of pre-defined screens for user management, monitoring, and logging. The generated Spring Boot application is specifically tailored to make working with Angular/React/Vue a smoother experience. At the top of all that, JHipster also gives you the tools to update, manage and package the resulting application.
By now you may think it sounds too good to be true… But that is not everything JHipster offers. If you are a web developer, you probably have a lot of questions by now. 🙂 One important question we will answer in this blog post: is it supported by today’s cloud solutions, is it compatible with all of them? The answer is yes, it is compatible with the popular cloud solutions from Google, Amazon, Microsoft and Heroku. Let’s see what it takes to make a complete integration with Google’s cloud platform, the App Engine.
Google App Engine
Google App Engine is a cloud solution provided by Google, a platform for developing and hosting web applications in data centres managed by Google; Platform as a Service (PaaS). Applications are sandboxed and run across multiple servers. The App Engine supports Java or Python, uses the Google query language and stores data in Google BigTable.
It is free to use up to a certain amount of resources. After exceeding the limits for storage, CPU, requests, number of API calls or concurrent requests, the user can pay for more of these resources.
It is fully compatible with JHipster-generated projects. Hosting your application is just a matter of following the official how-to guide from the Google App Engine documentation, as for a normal Spring Boot application. To make things easier, Google offers a database that works closely with the Google App Engine: Cloud SQL.
Cloud SQL
Cloud SQL is a fully-managed database service offered by Google for their cloud solutions that makes it easy to configure, manage, maintain and operate relational databases on Google Cloud Platform.
It offers three database options to integrate with:
MySQL
PostgreSQL
SQL Server
Let’s get into details of integrating with Cloud SQL for MySQL:
The first step is to create a Cloud SQL instance on the Google Cloud Platform, which requires a few things like instance ID, password, etc. to be set, and gives you the option to choose the MySQL database version.
The following step is to create the database in the newly created instance. It is possible to have more databases in one instance.
Now, for our application to be able to communicate with Cloud SQL without any permission blockers, we need to register the application in Cloud SQL and manually configure the service account roles.
The final step is connecting your application to the created Cloud SQL instance. It is done through JDBC. All the required properties can be found in the overview of the Cloud SQL instance: the instance connection name, credentials, etc.
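As an illustration, a JDBC DataSource for Cloud SQL over the MySQL socket factory could be built roughly like this – the instance connection name, database name and credentials below are placeholders, and the Cloud SQL MySQL socket factory dependency is assumed to be on the classpath:
// DataSourceBuilder comes from org.springframework.boot.jdbc
DataSource dataSource = DataSourceBuilder.create()
        .driverClassName("com.mysql.cj.jdbc.Driver")
        .url("jdbc:mysql:///my_database"
                + "?cloudSqlInstance=my-project:europe-west1:my-instance"
                + "&socketFactory=com.google.cloud.sql.mysql.SocketFactory")
        .username("my_user")
        .password("my_password")
        .build();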
So the conclusion: don’t be afraid to invest some time in new technologies, be curious, you never know where they may lead you. Thank you for reading. 🙂
JHipster with Google App Engine and Cloud MySQL – by Kristijan Petrovski, 2020-04-14
JHipster is an open-source platform to generate, develop and deploy Spring Boot + Angular / React / Vue web applications. And with over 15 000 stars on Github, it is the most popular code generation framework for Spring Boot. But is it worth the hype or is the generated code too difficult to maintain and not production-ready?
How does it work?
The first thing to note is that JHipster is not a separate framework by itself. It uses yeoman and .jdl files in order to generate code in Spring Boot for backend and Angular or React or Vue for frontend. And after the initial generation of the project, you have the option to use the generated code without ever running JHipster commands again or to use JHipster in order to incrementally grow the projects and develop new features.
What exactly is JDL?
JDL is a JHipster-specific domain language where you can describe all your applications, deployments, entities and their relationships in a single file (or more than one) with a user-friendly syntax.
You can use the online JDL-Studio or one of the JHipster IDE plugins/extensions, which support working with JDL files.
Example of simple JDL file for Blog application:
entity Blog {
name String required minlength(3)
handle String required minlength(2)
}
entity Post {
title String required
content TextBlob required
date Instant required
}
entity Tag {
name String required minlength(2)
}
relationship ManyToOne {
Blog{user(login)} to User
Post{blog(name)} to Blog
}
relationship ManyToMany {
Post{tag(name)} to Tag{entry}
}
paginate Post, Tag with infinite-scroll
Which technologies are used?
On the backend we have the following technologies:
Spring Boot as the primary backend framework
Maven or Gradle for configuration
Spring Security as a Security framework
Spring MVC REST + Jackson for REST communication
Spring Data JPA + Bean Validation for Object Relational Mapping
Liquibase for Database updates
MySQL, PostgreSQL, Oracle, MsSQL or MariaDB as SQL databases
MongoDB, Couchbase or Cassandra as NoSQL databases
Thymeleaf as a templating engine
Optional Elasticsearch support if you want to have search capabilities on top of your database
Optional Spring WebSockets for Web Socket communication
Optional Kafka support as a publish-subscribe messaging system
On the frontend side these technologies are used:
Angular or React or Vue as a primary frontend framework
To create a new application, the steps are roughly the following:
Create a new directory and go into it: mkdir myApp && cd myApp
Run JHipster and follow instructions on the screen jhipster
Model your entities with JDL Studio and download the resulting jhipster-jdl.jh file
Generate your entities with jhipster import-jdl jhipster-jdl.jh
Run ./mvnw to start generated backend
Run npm start to start generated frontend with live reload support
What does the generated code and application look like?
In case you only want to see a sample generated application without starting the whole framework you can check this official Github repo for the latest up-to-date sample code: https://github.com/jhipster/jhipster-sample-app.
Following are some screens from my up and running JHipster application:
Welcome screen – the initial screen when you open your JHipster app
Create a user screen – with this form you can create a new user in the app
View all users screen – in this screen you have the option to manage all your existing users
Monitoring screen – monitoring of JVM metrics, as well as HTTP request statistics
What are the pros and cons
The important thing to remember is that JHipster is not a “magic bullet” that will solve all your problems and is not an optimal solution for all the new projects. As a good software engineer, you will have to weigh in the pros and cons of this platform and decide when it makes sense to use and when it’s better to go with a different approach. Having used JHipster for production projects these are some of the pros and cons that I’ve experienced:
Pros
Easy bootstrap of a new project with a lot of technologies preconfigured
JHipster almost always follows best practices and latest trends in backend and frontend development
Login, register, management of users and monitoring comes out-of-the-box
Wizard for generating your project, only the technologies that you select are included in the project
After defining your own JDL file, all of the model, repository, service and controller classes for your entities are generated, together with integration tests. This saves a lot of time at the beginning of the project when you want to get to feature development as soon as possible
Cons
If you are not familiar with the technologies used in the generated project, it can be overwhelming and it’s easy to get lost in this mix of lots of different technologies
Using JHipster after the initial project is not a smooth experience. Classes and Liquibase scripts are being overwritten and you have to be very careful with changing the initial JDL model. Or you can decide to continue without using JHipster after the initial generation of projects
REST responses that are returned from endpoints will not always correspond to business requirements, very often you will have to manually modify your initial JHipster REST responses
Not all of the options that are available are at the same level, some technologies that JHipster is using and configuring are more polished than the others. Especially true if you decide to use community modules
What kind of projects are a good fit?
Having said all of this, it’s important to understand that there are projects which can benefit a lot from JHipster and projects that are better without using this platform.
In my experience, a good candidate is a greenfield project where it’s expected to deliver a lot of features fast. JHipster will help a lot to be productive from day one and to cut the boilerplate code that you need to write, so you will be able to begin with feature development really fast. This works well for new projects with tight deadlines, proofs of concept, internal projects, hackathons and startups.
On the other hand, a not so ideal situation is if you have an already started and up and running project; there is not much JHipster can do in this case. Another case would be if the application has a lot of specific business logic and is not a simple CRUD application – for example, an AI project, a chatbot or a legacy ecosystem where these new technologies are not suitable or supported.
JHipster, is it worth it?
There is only one sure way to decide if JHipster is worth it for your next project or not and that is to try it out yourself and play around with the different features and configuration that JHipster offers.
At best, you will find a new framework for your next project and save a lot of effort next time you have to start a project. At worst, you will get to know the latest trends in both backend and frontend and learn some of the best practices from a very large community.
JHipster, is it worth it? – by Bojan Trajkovski, 2020-04-02
We live in a fast-paced world where a standard project delivery strategy is agile or it is a direction which people tend to follow. If you have been part of an agile software delivery practice then somewhere in your coding career you have met with some form of tests. Whether they might be unit or integration ( system ) or some form of E2E test.
You might be familiar with the testing pyramid and with the benefits and scopes of the different types of tests presented in the pyramid.
Let’s take a quick look at the pyramid:
Unit
The tests that we write are grouped into the layers from which the pyramid is built. The foundation layer is the biggest one, and its size shows us the quantity: we need more of these tests in our application. They are called Unit Tests because of the scope they are testing – a small unit, e.g. an if clause.
Integration/System
The tests belonging to the middle layer are called Integration tests, and their purpose is to test the integration between one or more elements inside an application. Quantitatively, we need fewer tests of this type than Unit tests.
UI/E2E
The last layer is the smallest one, meaning that the quantity of those tests should be the smallest. These are also called UI or E2E tests. Here a test has the biggest scope, checking more interconnected parts of your application, e.g. a whole registration scenario from the UI perspective.
As we go from the bottom to the top costs for maintenance are increasing, respectively their speed is decreasing. Confidence is also a crucial part. If a test higher in the pyramid passes we are more confident that our application works or some part of it at least.
Our focus is on the middle layer, where the so-called Integration tests live. As mentioned above, those are the tests that check the interconnection between one or more modules inside an application, e.g. a test which checks that a user can be registered by pinging an endpoint. The scope of such a test is to prepare data, send a request to the corresponding endpoint and check whether the user has been successfully created in the underlying datastore – testing the integration between the controller and repository layer, hence the name “integration test”. In my opinion, tests are a must-have for every application.
In this post, therefore, we are writing integration tests for asynchronous code. With multi-threaded data processing systems and the increased popularity of reactive programming in Java, writing proficient tests for asynchronous code has become a real puzzle. Writing high-value tests is hard, but writing high-value tests for asynchronous code is harder.
Problem
Let’s take a look at this example where we have a small system that exposes several endpoints for updating a person. We have created several tests, each updating a person with a different name. When a test is running, it tries to update a person by sending a request via an endpoint. The system receives the request and returns an OK status. In the meantime, it spawns a different thread for the actual person update. On the side of the tests, we don’t know how much time the update is going to take, so the naive approach is to wait for a specific time after which we verify whether the actual update has happened.
We have several tests, each pinging a different endpoint. The endpoints differ in the wait time needed to process each request: updatePersonWith1SecondDelay, updatePersonWith2SecondDelay, updatePersonWith3SecondDelay and updatePersonWithDelayFrom1To5Seconds.
In order for our tests to pass, I used the naive approach of adding a function waitForCompletion(), which is nothing else than a sleep of the test thread – Thread.sleep() in Java.
Example
The first execution of tests with a timeout of 1 second. The total execution is 4 seconds but not all tests have passed.
The second execution of tests with a timeout of 3 seconds. The total execution is 12 seconds but not all tests have passed.
Third execution of tests with a timeout of 5 seconds. The total execution is 20 seconds where all tests have passed.
But in order for all the tests to pass, we would need a maximum sleep of 5 seconds executed after each test. This way we guarantee that every test will pass. However, we add an unnecessary wait of 4 seconds for the first test and, respectively, extra wait time for the other tests. This results in increased execution time, and an optimal wait time is not guaranteed.
Solution
As stated in the official documentation, Awaitility is a small Java library for synchronizing asynchronous operations. It helps express expectations in a concise and easy to read manner, which makes it a smart option for checking the outcome of an async operation. It’s fairly easy to incorporate this library into your codebase.
First add the Awaitility dependency (org.awaitility:awaitility, test scope) to your build, and then add the import in your test: import static org.awaitility.Awaitility.await;
Let’s take a look at an example before using this library:
@Test
public void testDelay1Second() throws Exception {
Person person = new Person();
person.setName("Yara");
person.setAddress("New York");
person.setAge("23");
personRepository.save(person);
ObjectMapper mapper = new ObjectMapper();
person.setName("Daenerys");
this.mockMvc.perform(put("/api/endpoint1/" + person.getId())
.contentType(APPLICATION_JSON)
.content(mapper.writeValueAsBytes(person)))
.andExpect(status().isOk())
.andExpect(content().string(containsString("Request received")));
waitForCompletion();
assertThat(personRepository.findById(person.getId()).get().getName())
.isEqualTo("Daenerys");
}
An example with Awaitility:
@Test
public void testDelay1Second() throws Exception {
Person person = new Person();
person.setName("Yara");
person.setAddress("New York");
person.setAge("23");
personRepository.save(person);
ObjectMapper mapper = new ObjectMapper();
person.setName("Daenerys");
this.mockMvc.perform(put("/api/endpoint1/" + person.getId())
.contentType(APPLICATION_JSON)
.content(mapper.writeValueAsBytes(person)))
.andExpect(status().isOk())
.andExpect(content().string(containsString("Request received")));
await().atMost(Duration.FIVE_SECONDS).untilAsserted(() -> assertThat(personRepository.findById(person.getId()).get().getName())
.isEqualTo("Daenerys"));
}
Example of the executed test suite with the library:
As we can see, the execution time is greatly reduced: from 20 seconds for all tests to pass to just under 10 seconds. As you can spot, the function waitForCompletion() is removed and a new wait is introduced from the library as await().atMost(Duration.FIVE_SECONDS).untilAsserted()
You can also configure the library using static methods from the Awaitility class: Awaitility.setDefaultPollInterval(10, TimeUnit.MILLISECONDS); Awaitility.setDefaultPollDelay(Duration.ZERO); Awaitility.setDefaultTimeout(Duration.ONE_MINUTE);
Conclusion
In this article, we have taken a look at how to improve tests when dealing with asynchronous code using an interesting library. I hope this post helps benefit you and adds to your knowledge. You can find a working example with all of the tests with and without the Awaitility library on this repository. Also, you can find more about the library here.
Testing asynchronous code in a concise and easy to read manner – by Vladimir Dimikj, 2020-03-24
Reactive programming is programming with asynchronous data streams. It enables creating streams of anything – events, failures, variables, messages, etc. By using reactive programming in your application, you are able to create streams on which you can then perform actions as the data is emitted by them.
Observer Pattern
The observer pattern is a software design pattern which defines a one-to-many relationship between objects. It means if the value/state of the observed object is changed/modified, the other objects which are observing are getting notified and updated.
ReactiveX
ReactiveX is a polyglot implementation of reactive programming which extends the observer pattern and provides a bunch of data manipulation operators and threading abilities.
RxJava
RxJava is the JVM implementation of ReactiveX.
Observable – is a stream which emits the data
Observer – receives the emitted data from the observable
onSubscribe() – called when subscription is made
onNext() – called each time observable emits
onError() – called when an error occurs
onComplete() – called when the observable completes the emission of all items
Subscription – when the observer subscribes to observable to receive the emitted data. An observable can be subscribed by many observers
Scheduler – defines the thread where the observable emits and the observer receives it (for instance: background, UI thread)
subscribeOn(Schedulers.io())
observeOn(AndroidSchedulers.mainThread())
Operators – enable manipulation of the streamed data before the observer receives it
map()
flatMap()
concatMap() etc.
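Putting these building blocks together, here is a minimal, non-Android RxJava sketch (the values and operators are just for illustration):
Observable.just("apple", "banana", "cherry")      // an Observable that emits three items
        .subscribeOn(Schedulers.io())             // do the work on a background thread
        .map(String::toUpperCase)                 // transform each emitted item
        .observeOn(Schedulers.single())           // receive the results on another thread
        .subscribe(
                item -> System.out.println("onNext: " + item),
                error -> System.err.println("onError: " + error),
                () -> System.out.println("onComplete"));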
Example usage on Android
Tools, libraries, services used in the example:
Libraries:
ButterKnife – simplifying binding for Android views
Retrofit2 – type-safe HTTP client used for the API calls
RxJava/RxAndroid – the reactive extensions used throughout the example
What we want to achieve is to fetch users from an API, show them in a RecyclerView and load each user’s todo list to show the number of todos in the same RecyclerView, without blocking the UI.
Here we define our endpoints. Retrofit2 supports return type of RxJava Observable for network calls.
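A sketch of how such a Retrofit2 interface could look – the endpoint paths and the User/Todo model classes are illustrative assumptions, and an RxJava2 call adapter is assumed to be registered on the Retrofit instance:
public interface UserApi {

    // 1st endpoint: fetch all users
    @GET("users")
    Observable<List<User>> getUsers();

    // 2nd endpoint: fetch the todo list of a single user
    @GET("todos")
    Observable<List<Todo>> getTodosByUserId(@Query("userId") int userId);
}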
Now, fetch todo list of users using the 2nd endpoint.
Since we are not going to make another call, we don’t need Observable type in return of this method. So, here we use map() instead of flatMap() and we return User type.
.flatMapIterable() – is used to convert Observable<List<T>> to Observable<T> which is needed for filter each item in list
.filter() – we filter todos to get each user’s completed todo list
.toList().toObservable() – for converting back to Observable<List<T>>
.map() – we set filtered list to user object which will be used in next code snippet
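Put together, a sketch of such a method could look like this (userApi, Todo.isCompleted() and User.setTodos() are assumed names):
private Observable<User> getTodoListByUserId(User user) {
    return userApi.getTodosByUserId(user.getId())
            .flatMapIterable(todos -> todos)       // Observable<List<Todo>> -> Observable<Todo>
            .filter(Todo::isCompleted)             // keep only the completed todos
            .toList()                              // collect back into a List<Todo>
            .toObservable()                        // convert the Single back to an Observable
            .map(completedTodos -> {
                user.setTodos(completedTodos);     // attach the filtered list to the user
                return user;                       // no further network call, so map() is enough
            });
}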
Now, the last step, we call the methods:
getUsersObservable()
.subscribeOn(Schedulers.io())
.concatMap((Function<User, ObservableSource<User>>) this::getTodoListByUserId) // operator can be concatMap()
.observeOn(AndroidSchedulers.mainThread())
.subscribe(new Observer<User>() {
@Override
public void onSubscribe(Disposable d) {
disposables.add(d);
}
@Override
public void onNext(User user) {
adapterRV.updateData(user);
}
@Override
public void onError(Throwable e) {
Log.e(TAG, e.getMessage());
}
@Override
public void onComplete() {
Log.d(TAG, "completed!");
}
});
subscribeOn() – makes the work of the chain (the network calls) run on a background thread
concatMap() – here we call one of our methods getTodoListByUserId() or getAllTodo()
.observeOn(), .subscribe() – every time the user’s todo list is fetched from api in background thread, it emits the data and triggers onNext() so we update RecyclerView in UI thread
Screenshots: left – getTodoListByUserId() with flatMap(); right – concatMap() with getAllTodo() (filter usage)
The difference between flatMap and concatMap is that the former may emit in an arbitrary order, while the latter preserves the order
Disposable
When an observer subscribes to an observable, a Disposable object is provided in the onSubscribe() method, so it can later be used to terminate the background process and avoid a callback returning to a dead activity.
ReactiveX in Android with an example - RxJava – by Ziyaddin Ovchiyev, 2020-03-03
In this article, I want to present a very powerful tool called Project Lombok. It acts as an annotation processor that allows us to modify the classes at compile time. Project Lombok enables us to reduce the amount of boilerplate code that needs to be written. The main idea is to give the users an option to put annotation processors into the build classpath.
Project Lombok provides the following annotations:
@Getter and @Setter: create getters and setters for your fields
@EqualsAndHashCode: implements equals() and hashCode()
@ToString: implements toString()
@Data: uses the four previous features
@Cleanup: closes your stream
@Synchronized: synchronize on objects
@SneakyThrows: sneakily throws checked exceptions without declaring them. There are many more – check the full list of available annotations: https://projectlombok.org/features/all
Common object methods
In this example, we have a class that represents User and holds five attributes, for which we want to have an additional constructor for all attributes, toString representation, getters, and setters and overridden equals and hashCode in terms of the email attribute:
public class User {
private String email;
private String firstName;
private String lastName;
private String password;
private int age;
// empty constructor
// constructor for all attributes
// getters and setters
// toString
// equals() and hashCode()
}
With some help from Lombok, the class now looks like this:
import lombok.AllArgsConstructor;
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;
import lombok.ToString;
@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
@ToString
@EqualsAndHashCode(of = {"email"})
public class User {
private String email;
private String firstName;
private String lastName;
private String password;
private int age;
}
As you can see, the annotations are replacing the boilerplate code that needs to be written for all the fields, constructor, toString, etc. The annotations do the following:
using @Getter and @Setter Lombok is instructed to generate getters and setters for all attributes
using @NoArgsConstructor and @AllArgsConstructor Lombok created the default empty constructor and an additional one for all the attributes
using @ToString generates toString() method
using @EqualsAndHashCode we get the pair of equals() and hashCode() methods defined for the email field (Note that more than one field can be specified here)
Customize Lombok Annotations
We can customize the existing example with the following:
in case we want to restrict the visibility of the default constructor we can use AccessLevel.PACKAGE
in case we want to be sure that the method fields won’t get null values assigned to them, we can use @NonNull annotation
in case we want to exclude some property from toString generated code, we can use excludes argument in @ToString annotation
we can change the access level of the setters from public to protected with AccessLevel.PROTECTED for @Setter annotation
in case we want to do some kind of checks when a field gets modified, we can implement the setter method ourselves. Lombok will not generate the method because it already exists
Lombok offers another powerful annotation called @Builder. Builder annotation can be placed on a class, or on a constructor, or on a method.
In our example, the User can be created using the following:
User user = User
.builder()
.email("dimitar.gavrilov@north-47.com")
.password("secret".getBytes(StandardCharsets.UTF_8))
.firstName("Dimitar")
.registrationTs(Instant.now())
.build();
Delegation
Looking at our example, the code can be further improved. If we want to follow the rule of composition over inheritance, we can create a new class called ContactInformation. The contact-related data can be modelled via an interface:
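A sketch of how this could look with Lombok’s @Delegate (lombok.experimental.Delegate) – the interface and class names are illustrative:
public interface ContactInformation {
    String getEmail();
    String getPhone();
}

// plain implementation holding the actual data
@Data
class ContactInformationSupport implements ContactInformation {
    private String email;
    private String phone;
}

public class User {
    // Lombok generates delegating getEmail() and getPhone() methods directly on User
    @Delegate(types = ContactInformation.class)
    private final ContactInformation contactInformation = new ContactInformationSupport();
    // ... remaining User fields
}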
This article covers some basic features and there is a lot more that can be investigated. I hope you have found a motivation to give Lombok a chance in your project if you find it applicable.
CQRS stands for Command Query Responsibility Segregation. It is a pattern where allowed actions in the application are divided into two groups: Commands and Queries. A command changes the state of an object but does not return any data, while a query returns data and does not change any state. This design style comes from the need for different strategies when scaling the reading part and the writing part of our application. By dividing methods into these two categories, you can use a different model to update information than the model you use to read information. By separate models we most commonly mean different object models, probably running in different logical processes, perhaps on separate hardware. A web example would see a user looking at a web page that’s rendered using the query model. If they initiate a change that change is routed to the separate command model for processing, the resulting change is communicated to the query model to render the updated state.
Event Sourcing
Event Sourcing is a specialized pattern for data storage. Instead of storing the current state for an entity, every change of state is stored as a separate event that makes sense to a business user. The current state is calculated by applying all events that changed the state of an entity. In terms of CQRS, the events stored are the results of executing a command against an aggregate on the right side. The event store also transmits events that it saves. The read side can process these events and builds the targeted data sets it needs for queries.
AXON Framework
Axon is “CQRS Framework” for Java. It is an Open-source framework that provides the building blocks that CQRS requires and helps to create scalable and extensible applications while maintaining application consistency in distributed systems. It provides basic building blocks for writing aggregates, commands, queries, events, sagas, command handlers, event handlers, query handlers, repositories, communication buses and so on. Axon Framework comes with a Spring Boot Starter, making using it as easy as adding a dependency. Axon will automatically configure itself based on best practices and other dependencies you have set. Providing explicit configuration is a matter of adding a bean of the component that needs to be configured differently. Furthermore, Axon Framework integrates with Spring Cloud Discovery, Spring Messaging and Spring Cloud AMQP.
AXON components
Command – a combination of expressed intent (which describes what you want to be done) as well as the information required to undertake action based on that intent. The Command Model is used to process the incoming command, to validate it and define the outcome. Commands are typically represented by simple and straightforward objects that contain all data necessary for a command handler to execute it
Command Bus – receives commands and routes them to the Command Handlers
Command Handler – the component responsible for handling commands of a certain type and taking action based on the information contained inside it. Each command handler responds to a specific type of command and executes logic based on the contents of the command
Event bus – dispatches events to all interested event listeners. This can either be done synchronously or asynchronously. Asynchronous event dispatching allows the command execution to return and hand over control to the user, while the events are being dispatched and processed in the background. Not having to wait for event processing to complete makes an application more responsive. Synchronous event processing, on the other hand, is simpler and is a sensible default. By default, synchronous processing will process event listeners in the same transaction that also processed the command
Event Handler – are the components that act on incoming events. They typically execute logic based on decisions that have been made by the command model. Usually, this involves updating view models or forwarding updates to other components, such as 3rd party integrations
Query Handler – components act on incoming query messages. They typically read data from the view models created by the Event listeners
Query Bus receives queries and routes them to the Query Handlers. A query handler is registered at the query bus with both the type of query it handles as well as the type of response it provides. Both the query and the result type are typically simple, read-only DTO objects
Will each application benefit from Axon?
Unfortunately not. Simple CRUD (Create, Read, Update, Delete) applications which are not expected to scale will probably not benefit from CQRS or Axon. Fortunately, there is a wide variety of applications that do benefit from Axon. Axon platform is being used by a wide range of companies in highly demanding sectors such as healthcare, banking, insurance, logistics and public sector. Axon Framework is free and the source code is available in a Git repository: https://github.com/AxonFramework.
CQRS and Event Sourcing with Axon Framework – by Elena Stojanovska, 2019-10-01
Today, we are gonna take a look at JMeter. You can embed it in your application as a library and make an external integration testing solution. You don’t have to use it for load testing; it could simply send one request, check the return status code, check the return value and move on. There is an argument that JMeter may be overkill for that, but it provides an easy way to verify the return, allows you to set it up using the JMeter desktop app, and then you can move into testing latency under load.
First, we need to create a test file that will be put later in our spring boot application. The steps for creating the .jmx file are as follows:
1 – Open the JMeter window by clicking on /home/apache-jmeter-5.1.1/bin/jmeter.bat. The next step you want to do with every JMeter Test Plan is to add a thread group element. Set the “Loop Count” parameter equal to 1, as shown below:
2 – The next step is to create a while controller. The purpose of the while controller is to repeat a given set of actions until the condition is broken. While is a basic programming concept to run actions when the number of iterations to run is unknown or varying.
3 – Create an HTTP request as shown in the figure below:
4 – Afterwards, we are gonna create a CSV Data Set Config. This step refers to the CSV file from which the users will be read and substituted into the HTTP request.
5- After running our test, we want to see the results e.g. which calls have been done, and which ones have failed. This is done through Listeners, a recording mechanism that shows results, including logging and debugging.
The View Results Tree is the most common Listener.
Right-click – Add->Listener->View Results Tree
6 – At the end, it should be something like the figure below:
Now click ‘Save’. Your test will be saved as a .jmx file. To run the test, click the green arrow on top. After the test completes running, you can view the results on the Listener as in the figure below. In this example, you can see the tests were successful because they’re green. On the right, you can see more detailed results, like load time, connect time, errors, the request data, the response data, etc. You can also save the results if you want to.
JMeter can also be used for Maven testing through a plugin and works quite nicely with variables, prerequisites, etc. Integrating performance testing in your projects has many benefits:
It provides a constant check of the performances of the application
Secures continuous delivery in production
Allows early detection of performance problems or performance regressions
Automates the process means less manual work, allowing your team to focus on more valuable tasks like performance analysis and optimisation
First of all, you need to add the plugin to the project. Go to the Maven project directory (jmeter-testproject in this case) and edit the pom.xml file. Here you must add the plugin. You can find the basic configuration here. You just need to copy the configuration text and paste it into your pom.xml file.
Finally, you have a pom.xml file that contains the jmeter-maven-plugin configuration.
A unit is the smallest testable part of an application. Mockito is a well-known mocking framework that allows us to create and configure mock objects. With Mockito we can mock both interfaces and classes in the class under test. Mockito also helps us reduce the amount of boilerplate code by using Mockito annotations.
@Mock – creates a mock instance of a class or an interface
@Spy – wraps a real object so its actual methods are called, while still allowing verification and stubbing
@InjectMocks – instantiates the tested object and injects all the annotated field dependencies into it
@Captor – used to capture argument values for further assertions
Mockito example
@Mock
Let’s say we have the following classes and we want to write a test for the CalculationService:
public class CalculationService {
private AddService addService;
public int calculate(int x, int y) {
return addService.add(x, y);
}
}
public class AddService {
public int add(int x, int y) {
return x+y;
}
}
The usage of the @Mock and @InjectMocks annotations is shown in the following sample code:
@InjectMocks
private CalculationService calculationService;
@Mock
private AddService addService;
@Before
public void setUp() {
// initializes objects annotated with @Mock, @Spy, @Captor, or @InjectMocks
MockitoAnnotations.initMocks(this);
}
@Test
public void testCalculationService() {
// mock the result from method add in addService
doReturn(20).when(addService).add(10, 10);
// verify that the calculate method from calculationService will return the same value
assertEquals(20, calculationService.calculate(10, 10));
}
@Spy
Mockito spy is used to spy on a real object. The main difference between a spy and a mock is that with a spy the tested instance will behave as a normal instance. The following example will explain it:
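A small sketch of such a spy test, using a plain ArrayList:
@Test
public void testSpyOnList() {
    List<String> spyList = Mockito.spy(new ArrayList<>());

    spyList.add("one");
    spyList.add("two");

    // the real add() method was invoked, so the spy behaves like a normal list
    Mockito.verify(spyList).add("one");
    Mockito.verify(spyList).add("two");
    assertEquals(2, spyList.size());
}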
Note that method add is called and the size of the spy list is 2.
@Captor
Mockito framework gives us plenty of useful annotations. One of the most recent that I’ve had a chance to use is @Captor. ArgumentCaptor is used to capture the inner data in a method that is either void or returns a different type of object. Let’s say we have the following method snippet:
public class AnyClass {
public void doSearch(SearchData searchData) {
CustomData data = new CustomData("custom data");
searchData.doSomething(data);
}
}
We want to capture the argument data so we can verify its inner data. So, to check that, we can use ArgumentCaptor from Mockito:
// Create a mock of the SearchData
SearchData data = mock(SearchData.class);
// Run the doSearch method with the mock
new AnyClass().doSearch(data);
// Capture the argument of the doSomething function
ArgumentCaptor<CustomData> captor = ArgumentCaptor.forClass(CustomData.class);
verify(data, times(1)).doSomething(captor.capture());
// Assert the argument
CustomData actualData = captor.getValue();
assertEquals("custom data", actualData.customData);
New features in Mockito 2.x
Since its inception, Mockito lacked mocking of finals. One of the major features in the 2.x version is support for stubbing final methods and final classes. This feature has to be explicitly activated by creating the file src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker containing a single line: mock-maker-inline
public final class MyFinalClass {
public String hello() {
return "my final class says hello";
}
}
public class MyCallingClass {
final MyFinalClass myFinalClass;
public MyCallingClass(MyFinalClass myFinalClass) {
this.myFinalClass = myFinalClass;
}
public String executeFinal() {
return myFinalClass.hello();
}
}
public class MyCallingClassTest {
@Test
public void testFinalClass() {
MyFinalClass myFinalClass = mock(MyFinalClass.class);
MyCallingClass myCallingClass = new MyCallingClass(myFinalClass);
when(myFinalClass.hello()).thenReturn("testString");
assertEquals("testString", myCallingClass.executeFinal());
}
}
Given the example above, without the file org.mockito.plugins.MockMaker and its content, we get the following error:
When the file is in the resources and the content is valid, we are all good.
The plan for the future is to have a programmatic way of using this feature.
Conclusion
In this article, I gave a brief overview of some of the features in Mockito test framework. Like any other tool, it must be used in a proper way to be useful. Now go and bring your unit tests to the next level.
Unit testing with Mockito – by Dimitar Gavrilov, 2019-09-05
Why bother writing tests is already a well-discussed topic in software engineering. I won’t go into much details on this topic, but I will mention some of the main benefits.
In my opinion, testing your software is the only way to achieve confidence that your code will work on the production environment. Another huge benefit is that it allows you to refactor your code without fear that you will break some existing features.
Risk of bugs vs the number of tests
In the Java world, one of the most popular frameworks is Spring Boot, and part of the popularity and success of Spring Boot is exactly the topic of this blog – testing. Spring Boot and the Spring framework offer out-of-the-box support for testing, and new features are being added constantly. When the Spring framework appeared on the Java scene in 2005, one of the reasons for its success was exactly this: the ease of writing and maintaining tests, as opposed to JavaEE where writing integration tests requires additional libraries like Arquillian.
In the following, I will go over different types of tests in Spring Boot, when to use them and give a short example.
Testing pyramid
We can roughly group all automated tests into 3 groups:
Unit tests
Service (integration) tests
UI (end to end) tests
As we go from the bottom of the pyramid to the top, tests become slower to execute: unit tests run in the order of a few milliseconds, service tests in hundreds of milliseconds and UI tests in seconds. If we measure the scope of tests, unit tests, as the name suggests, test small units of code. Service tests cover the whole service or a slice of that service involving multiple units, and UI tests have the largest scope, testing multiple different services. In the following sections, I will go over some examples of how we can unit test and service test a Spring Boot application. UI testing can be achieved using external tools like Selenium and Protractor, but they are not related to Spring Boot.
Unit testing
In my opinion, unit tests make the most sense when you have some kind of validators, algorithms or other code that has lots of different inputs and outputs and executing integration tests would take too much time. Let’s see how we can test validator with Spring Boot.
Validator class for emails
public class Validators {
private static final String EMAIL_REGEX = "(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|\"(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21\\x23-\\x5b\\x5d-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])*\")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21-\\x5a\\x53-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])+)\\])";
public static boolean isEmailValid(String email) {
return email.matches(EMAIL_REGEX);
}
}
Unit tests for email validator with Spring Boot
@RunWith(MockitoJUnitRunner.class)
public class ValidatorsTest {
@Test
public void testEmailValidator() {
assertThat(isEmailValid("valid@north-47.com")).isTrue();
assertThat(isEmailValid("invalidnorth-47.com")).isFalse();
assertThat(isEmailValid("invalid@47")).isFalse();
}
}
MockitoJUnitRunner is used for using Mockito in tests and detection of @Mock annotations. In this case, we are testing email validator as a separate unit from the rest of the application. MockitoJUnitRunner is not a Spring Boot annotation, so this way of writing unit tests can be done in other frameworks as well.
Integration testing of the whole application
If we have to choose only one type of test in Spring Boot, then using the integration test to test the whole application makes the most sense. We will not be able to cover all the scenarios, but we will significantly reduce the risk. In order to do integration testing, we need to start the application context. In Spring Boot 2, this is achieved with the following annotations: @RunWith(SpringRunner.class) and @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT). This will start the application on a random port, and we can inject beans into our tests and do REST calls on application endpoints.
In the following is an example code for testing book endpoints. For making rest API calls we are using Spring TestRestTemplate which is more suitable for integration tests compared to RestTemplate.
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class SpringBootTestingApplicationTests {
@Autowired
private TestRestTemplate restTemplate;
@Autowired
private BookRepository bookRepository;
private Book defaultBook;
@Before
public void setup() {
defaultBook = new Book(null, "Asimov", "Foundation", 350);
}
@Test
public void testShouldReturnCreatedWhenValidBook() {
ResponseEntity<Book> bookResponseEntity = this.restTemplate.postForEntity("/books", defaultBook, Book.class);
assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.CREATED);
assertThat(bookResponseEntity.getBody().getId()).isNotNull();
assertThat(bookRepository.findById(bookResponseEntity.getBody().getId())).isPresent();
}
@Test
public void testShouldFindBooksWhenExists() throws Exception {
Book savedBook = bookRepository.save(defaultBook);
ResponseEntity<Book> bookResponseEntity = this.restTemplate.getForEntity("/books/" + savedBook.getId(), Book.class);
assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.OK);
assertThat(bookResponseEntity.getBody().getId()).isEqualTo(savedBook.getId());
}
@Test
public void testShouldReturn404WhenBookMissing() throws Exception {
Long nonExistingId = 999L;
ResponseEntity<Book> bookResponseEntity = this.restTemplate.getForEntity("/books/" + nonExistingId, Book.class);
assertThat(bookResponseEntity.getStatusCode()).isEqualTo(HttpStatus.NOT_FOUND);
}
}
Integration testing of web layer (controllers)
Spring Boot offers the ability to test layers in isolation, starting only the beans that are required for the test. Since Spring Boot v1.4 there is a very convenient annotation, @WebMvcTest, which loads only the components needed for a typical web layer test, such as controllers and Jackson converters, without starting the full application context and without unnecessary components like the database layer. When using this annotation, we make the REST calls with the MockMvc class.
Below is an example that tests the same endpoints as above, but this time we are only checking whether the web layer works as expected and we mock the database layer using the @MockBean annotation, which is also available starting from Spring Boot v1.4. With these annotations only BookController is loaded into the application context, while the database layer is mocked.
@RunWith(SpringRunner.class)
@WebMvcTest(BookController.class)
public class BookControllerTest {
@Autowired
private MockMvc mockMvc;
@MockBean
private BookRepository repository;
@Autowired
private ObjectMapper objectMapper;
private static final Book DEFAULT_BOOK = new Book(null, "Asimov", "Foundation", 350);
@Test
public void testShouldReturnCreatedWhenValidBook() throws Exception {
when(repository.save(Mockito.any())).thenReturn(DEFAULT_BOOK);
this.mockMvc.perform(post("/books")
.content(objectMapper.writeValueAsString(DEFAULT_BOOK))
.contentType(MediaType.APPLICATION_JSON)
.accept(MediaType.APPLICATION_JSON))
.andExpect(status().isCreated())
.andExpect(MockMvcResultMatchers.jsonPath("$.name").value(DEFAULT_BOOK.getName()));
}
@Test
public void testShouldFindBooksWhenExists() throws Exception {
Long id = 1L;
when(repository.findById(id)).thenReturn(Optional.of(DEFAULT_BOOK));
this.mockMvc.perform(get("/books/" + id)
.accept(MediaType.APPLICATION_JSON))
.andExpect(status().isOk())
.andExpect(MockMvcResultMatchers.content().string(Matchers.is(objectMapper.writeValueAsString(DEFAULT_BOOK))));
}
@Test
public void testShouldReturn404WhenBookMissing() throws Exception {
Long id = 1L;
when(repository.findById(id)).thenReturn(Optional.empty());
this.mockMvc.perform(get("/books/" + id)
.accept(MediaType.APPLICATION_JSON))
.andExpect(status().isNotFound());
}
}
Integration testing of database layer (repositories)
Similarly to the way we tested the web layer, we can test the database layer in isolation, without starting the web layer. In Spring Boot this is achieved with the @DataJpaTest annotation. It applies only the auto-configuration related to the JPA layer and by default uses an in-memory database, because it is fastest to start up and is good enough for most integration tests. We also get access to TestEntityManager, which is an EntityManager with additional features supporting JPA integration tests.
Below is an example of testing the database layer of the above application. These tests only check whether the database layer works as expected: we are not making any REST calls, and we verify the results from BookRepository using the provided TestEntityManager.
@RunWith(SpringRunner.class)
@DataJpaTest
public class BookRepositoryTest {
@Autowired
private TestEntityManager entityManager;
@Autowired
private BookRepository repository;
private Book defaultBook;
@Before
public void setup() {
defaultBook = new Book(null, "Asimov", "Foundation", 350);
}
@Test
public void testShouldPersistBooks() {
Book savedBook = repository.save(defaultBook);
assertThat(savedBook.getId()).isNotNull();
assertThat(entityManager.find(Book.class, savedBook.getId())).isNotNull();
}
@Test
public void testShouldFindByIdWhenBookExists() {
Book savedBook = entityManager.persistAndFlush(defaultBook);
assertThat(repository.findById(savedBook.getId())).isEqualTo(Optional.of(savedBook));
}
@Test
public void testFindByIdShouldReturnEmptyWhenBookNotFound() {
long nonExistingID = 47L;
assertThat(repository.findById(nonExistingID)).isEqualTo(Optional.empty());
}
}
The following table shows the execution times, including startup, for the different types of tests used as examples. We can clearly see that unit tests, as mentioned at the beginning, are the fastest, and that splitting integration tests into layered tests leads to faster execution times.
Type of test | Execution time with startup
Unit test | 80 ms
Integration test | 620 ms
Web layer test | 190 ms
Database layer test | 220 ms
Testing Spring Boot application with examples (Bojan Trajkovski, 2019-08-20)
Editable templates were introduced in AEM 6.2 and have been constantly improving with each new version since then. They allow authors to create and edit templates: template authors can create and configure templates without the help of the development team. To be able to create and edit templates, the authors must be members of the template-authors group.
Here are some of the benefits of using editable templates:
editable templates provide the flexibility to define content policies that persist design properties. There is no need for design mode, which requires extra permissions for the author to set design properties, along with replication of the design page after any design change
they maintain a dynamic connection between pages and the template, which gives template authors the power to change the template structure and locked content, with the changes reflected across all pages based on the editable template
no extra training is required for authors to create a new page based on an editable template; authors create a new page the same way they do with static templates
can be created and edited by your authors
after the new page is created, a dynamic connection is maintained between the page and the template. This means that changes to the template structure are reflected on any pages created with that template (changes to the initial content will not be reflected)
uses content policies (edited from the template author) to persist the design properties (does not use Design mode within the page editor)
are stored under /conf
Here are the tasks that the template author can do with the help of the AEM’s template editor:
create a new template or copy an existing template
manage the life cycle of the template
add components to the template and position them on a responsive grid
pre-configure the components using policies
define which components can be edited on pages created with the template
Create editable templates from the configuration browser
Go to Tools -> General -> Configuration Browser
It will create the basic hierarchy of templates in /conf directory.
There are three parts of template editor:
templates: all dynamic (editable) templates created by authors
policies: there are two types of policies – template level policy: used for adding policies to the template – component level policy: used for adding policies to the component level
template-types: base templates for the creation of new templates in runtime
There are three parts of a template:
initial: the initial content of the page created – based on the template. This content could be edited or removed by the author
policies: here a particular template is linked to a policy by using cq:policy property
structure: – the structure allows you to define the structure of the template – the components defined in the template level can’t be removed from the resulting page – if you want that template authors can add and remove components, add parsys to the template – components can be locked and unlocked to allow you to define initial content
How to create base template-type
To start working on editable templates, we need to create a page component. Navigate to /apps/47northlabs-sample-project/components/page and click on create component.
Next would be to create a template-type which helps the authors to create its editable templates. (Please note that the configuration is available on GitLab).
How can authors create editable templates
Next step would be to create the editable template. That can be done from Tools -> General -> Templates -> 47northlabs-sample-project -> Choose empty template.
Add template title, description and open the template. There should be a responsive grid available.
Configure policies
By defining policies, template authors control which components regular authors can place on the page. Since there is no policy defined on the template yet, no components are assigned to it. Clicking on the first (Policy) icon takes the author to a new screen where they can define which components can be put on this template.
Create a new policy.
Once done, components will be available on the page to drag & drop on the page.
Enable template
Once template authors are done with creating templates, authors must enable templates to make them available in sites section where regular authors can select templates to create pages.
Create editable templates from code
Another possibility is to create a sample project based on the Adobe Archetype using the Maven archetype command.
The generated sample project already contains a content-page editable template by default. We are going to create three additional editable templates:
home-page
landing-page
language-page
We are going to add some components as default in the templates, and few pages in ui.content project. The general idea is to have some test content and to play around with some corner cases. Let’s start.
The demo content is like on the image below.
We have four editable templates available. With property cq:allowedTemplates we control the available templates that can be used to create child pages under a given page. For example, for the content-page we want to make available only the landing-page, so the initial content will look like:
What happens in case we want to add a fancy new template which should be available only for a particular page types? These two solutions come to my mind:
write a groovy script that will update existing cq:allowedTemplates property of all the pages created for a given template with the new template
update the structure of the given template, so all the existing pages created with that template will be updated
With the second approach, for example, if we want to add that fancy page template to the content page template, we have to add the following configuration to the structural element of the content-page template:
Given this, it is important to point out the differences between initial content and structural elements of the template definition.
Initial Content
is merged with the structure during page creation
changes to the initial content will not be reflected in all pages created with the given template
the jcr:content node is copied to any new page
the root node holds a list of components that defines what will be available in the resulting page
Structure
is merged with the initial content during page creation
changes to the structure will be reflected in all pages created with the given template
the cq:responsive node is responsible for the responsive layout
the root node defines the available components in the resulting page
Conclusion
With an editable template, you give template authors the flexibility to create and modify the template as they want. It acts as a central place to manage everything about the template (structure, initial content, layout) and the components used in the template. If you are migrating to editable templates, make sure you assess the requirements not only for the template but also for the components.
The code from this blog post is available on GitLab.
Editable Templates in AEM 6.5 (Dimitar Gavrilov, 2019-08-13)
If you have read my previous parts, this is the last one in which I will give my highlights on the talks that I have visited.
First stop was the opening talk from Anton Keks on the topic The world needs full-stack craftsmen. It was an interesting presentation about current problems in software development, like splitting development roles and what the real result of that is. Another topic was agile methodology and whether it really helps development teams build a better product. There were also some words about startup companies and their usual problems. In general, an excellent presentation.
Simon Ritter, in my opinion, had the best talks at JPoint. On the first day his topic was JDK 12: Pitfalls for the unwary. In this session, he covered the impact of migrating applications from previous versions of Java to the latest one, from aspects like Java language syntax, class libraries and JVM options. Another interesting part was how to choose which versions of Java to use in production. A well balanced presentation with real problems and solutions.
Next stop Kohsuke Kawaguchi, creator of Jenkins, with the topic Pushing a big project forward: the Jenkins story. It was like a story from a management perspective, about new projects that are coming up and what the demands of the business are. To be honest, it was a little bit boring for me, because I was expecting superpowers coming to Jenkins, but he changed the topic to this management story.
Sebastian Daschner from IBM, his topic was Bulletproof Java Enterprise applications. This session covered which non-functional requirements we need to be aware of to build stable and resilient applications. Interesting examples of different resiliency approaches, such as circuit breakers, bulkheads, or backpressure, in action. In the end, adding telemetry to our application and enhancing our microservice with monitoring, tracing, or logging in a minimalistic way.
Again, Simon Ritter, this time, with the topic Local variable type inference. His talk was about using var and let the compiler define the type of the variable. There were a lot of examples, when it makes sense to use it, but also when you should not. In my opinion, a very useful presentation.
Rafael Winterhalter talked about Java agents, to be more specific he covered the Byte Buddy library, and how to program Java agents with no knowledge of Java bytecode. Another thing was showing how Java classes can be used as templates for implementing highly performant code changes, that avoid solutions like AspectJ or Javassist and still performing better than agents implemented in low-level libraries.
To summarize, the conference was excellent, any Java developer would be happy to be here, so put JPoint on your roadmap for sure. Stay tuned for my next conference, thanks for reading, THE END 🙂
My opinion on talks from JPoint Moscow 2019 (Boris Karov, 2019-07-15)
Java Stream API was added in Java 8 in order to provide a functional approach to process a collection of objects. Do not get confused with I/O Streams, Java 8 Streams are a completely different thing.
Java Stream does not store data and is not a data structure. Also the underlying data source is not modified.
Java Stream uses functional interfaces and supports functional-style operations on streams of elements using lambda expressions.
Stream operations
Stream operations are divided into intermediate and terminal operations.
Intermediate operations are Stream operations that return a new Stream. They are used to produce new stream elements and to pass the stream on to the next operation. These operations are lazy, i.e. they are not executed until the result of the processing is needed.
List of all Stream intermediate operations:
filter()
map()
flatMap()
distinct()
sorted()
peek()
limit()
skip()
Terminal operations are Stream operations that do not return a new Stream. Once a terminal method is called on a stream, it consumes the stream, and after that the stream cannot be used anymore. Terminal operations process all the elements in the stream before they return the result.
List of all Stream terminal operations:
toArray()
collect()
count()
reduce()
forEach()
forEachOrdered()
min()
max()
anyMatch()
allMatch()
noneMatch()
findAny()
findFirst()
Some code examples
forEach
The simplest and most common operation, it loops over the elements of the stream, calling the supplied code on each element.
Map<Integer, List<User>> usersByAge = users.stream().collect(Collectors.groupingBy(User::getAge));
usersByAge.forEach((age, u) -> System.out.format("age %s: %s\n", age, u));
// age 18: [John]
// age 23: [Alex, David]
// age 12: [Peter]
map
Produces a new stream by applying a function to each element of the existing stream. The new stream can be of a different type.
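For example, a short sketch (the list of names here is purely illustrative):
List<String> names = Arrays.asList("Peter", "Dave", "Mike");
// map each String to its length, producing a stream of a different type (Integer)
List<Integer> nameLengths = names.stream().map(String::length).collect(Collectors.toList());
System.out.println(nameLengths);
Output: [5, 4, 4]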
findAny()
Returns an Optional from any entry in the stream. It will most likely return the first entry in the stream in a non-parallel execution, but there is no guarantee for that.
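A small sketch, reusing the same illustrative list of names from the map example above:
Optional<String> anyName = names.stream().filter(name -> name.startsWith("D")).findAny();
anyName.ifPresent(System.out::println);
Output (most likely): Dave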
sorted()
Sorted is an intermediate operation which returns a sorted view of the stream. The elements are sorted in natural order unless you pass a custom Comparator.
List<String> names = Arrays.asList("Peter", "Dave", "Mike", "David", "Pete");
names.stream().sorted().map(String::toUpperCase).forEach(System.out::println);
Output:
DAVE
DAVID
MIKE
PETE
PETER
Keep in mind that sorted does only create a sorted view of the stream without manipulating the ordering of the backed collection. The ordering of names is untouched.
count()
Count is a terminal operation returning the number of elements in the stream as a long.
long total = names.stream().filter((s) -> s.startsWith("D")).count();
System.out.println(total);
Output: 2
reduce()
This terminal operation performs a reduction on the elements of the stream with the given function. The result is an Optional holding the reduced value.
List<Car> cars = Arrays.asList(new Car("Kia", 19500),
new Car("Hyundai", 20500),
new Car("Ford", 25000),
new Car("Dacia", 20000));
Optional<Car> car = cars.stream().reduce((c1, c2)
-> c1.getPrice() > c2.getPrice() ? c1 : c2);
car.ifPresent(System.out::println);
Output: Ford 25000
anyMatch()
The method accepts a Predicate as an argument. It returns true as soon as one element satisfies the predicate; the remaining elements are not processed. A combined example for anyMatch(), allMatch() and noneMatch() follows after the next two methods.
allMatch()
The method accepts a Predicate as an argument. The Predicate is applied to each entry in the stream and, if every element satisfies it, the method returns true, otherwise false.
noneMatch()
The method accepts a Predicate as an argument. The Predicate is applied to each entry in the stream and, if no element satisfies it, the method returns true, otherwise false.
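A combined sketch of all three, again with an illustrative list:
List<String> names = Arrays.asList("Peter", "Dave", "Mike", "David", "Pete");
System.out.println(names.stream().anyMatch(name -> name.startsWith("D")));   // true
System.out.println(names.stream().allMatch(name -> name.startsWith("D")));   // false
System.out.println(names.stream().noneMatch(name -> name.startsWith("X")));  // true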
In this article we showed some of the details of the new Stream API in Java 8. We saw various operations and how lambdas are used to reduce a huge amount of boilerplate code.
In this article, we’ll have a closer look at the JSONassert library. We will explain how this library can be used with some examples. So, let’s get started!
Working with an easy example:
Let’s start our tests with a simple JSON string comparison:
String actual = "{objectId:123, name:\"magy\", lastName:\"henry\"}";
String expected = "{objectId:123, name:\"magy\"}";
JSONAssert.assertEquals(expected, actual, false);
The above example passes with strict=false. However, if strict is set to true, the test will fail, because the actual JSON contains an extra field. Keep in mind that JSONAssert makes a logical comparison of the data, so the ordering of elements does not matter when dealing with JSON objects.
Working with a complex example
Assume that you want a unit test in JUnit that validates our REST interface by matching the actual response against the expected one. Our endpoint delivers a list of objects, and the goal is to verify the properties of each object. Let’s assume the delivered response is a list of a type called Partner, whose implementation looks roughly as follows:
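The original class definition is not reproduced here; the sketch below is a hypothetical reconstruction based on the fields used in the assertions further down:
import java.time.LocalDate;
// Hypothetical reconstruction of the Partner class; fields and enums are inferred from the test code below
public class Partner {
    public enum Gender { MALE, FEMALE }
    public enum MaritalStatus { SINGLE, MARRIED }
    private String id;
    private String firstName;
    private String lastName;
    private LocalDate birthDate;
    private Gender gender;
    private MaritalStatus maritalStatus;
    public String getId() { return id; }
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
    public LocalDate getBirthDate() { return birthDate; }
    public Gender getGender() { return gender; }
    public MaritalStatus getMaritalStatus() { return maritalStatus; }
}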
The following Java example shows how this looks without JSONAssert. Assume we need to write assertions for each family member in the list; the following would have to be done:
ResponseEntity<List<Partner>> response = restTemplate.exchange(
"http://localhost:8080/partners/x/family",
HttpMethod.GET,
null,
new ParameterizedTypeReference<List<Partner>>(){});
List<Partner> partners = response.getBody();
//Now we need to test that each partner in the family has a certain values.
Partner partner1 = partners.stream().filter(partner -> partner.getId().equals("1")).findFirst().get();
assertEquals("Magy", partner1.getFirstName());
assertEquals("Mueller", partner1.getLastName());
assertEquals(of(1980, 7, 12), partner1.getBirthDate());
assertEquals(FEMALE, partner1.getGender());
assertNull(partner1.getMaritalStatus());
// Testing the second person
Partner partner2 = partners.stream().filter(partner -> partner.getId().equals("2")).findFirst().get();
assertEquals("Marc", partner2.getFirstName());
assertEquals("Ullenstein", partner2.getLastName());
assertEquals(of(1988, 7, 13), partner2.getBirthDate());
assertEquals(MALE, partner2.getGender());
assertNull(partner2.getMaritalStatus());
Testing the values for just one partner already takes some effort, and testing multiple partners would take a lot of lines of code. This is where JSONAssert comes in. To make it easier, let’s create a JSON file under test/resources/json containing a list of objects, where each object holds the properties that will be tested. Keep in mind that this library also supports a non-strict mode, which means you don’t have to test against every property.
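The original test snippet is not shown in this excerpt; a minimal sketch of what the JSONAssert-based comparison could look like follows (the file name family.json and the way the file is read are assumptions):
// Hypothetical JSONAssert-based version of the test; file name and file loading are assumptions
String expectedJson = new String(
        Files.readAllBytes(Paths.get("src/test/resources/json/family.json")),
        StandardCharsets.UTF_8);
ResponseEntity<String> response = restTemplate.getForEntity("http://localhost:8080/partners/x/family", String.class);
// third parameter false = non-strict mode: extra properties in the actual JSON are ignored
JSONAssert.assertEquals(expectedJson, response.getBody(), false);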
The assertEquals method takes three parameters, as seen in the example: the first is the expected JSON, the second is the actual result we got as a response from our endpoint, and the third defines whether strict mode should be used. Comparing the code in both cases makes it clear why you would want to use this library in the future: all of the individual assertions in the earlier example can be dropped.
Conclusion
In this article, we looked at multiple scenarios in which JSONAssert can be helpful. We started with a very simple example and moved on to more complex comparisons. And, as always, stay tuned for new interesting articles.
Unit testing using JSONassert library (Shady Eltobgy, 2019-06-03)
Writing API definition is pretty cool stuff. It helps consumers to understand the API and agree on its attributes. In our company for that purpose we are using OpenAPI Specification (formerly Swagger Specification).
But the real deal is generating code and documentation from the specification file. In this blog, I will show you how we are doing that at N47.
We will split this blog into two parts. The first part will be generating code, and the second part will be using the generated code.
Part 1
We are creating an empty maven project named “demo-specification”.
Next thing is creating an API definition file, api.yaml in src/main/resources/ directory. The demo content of this file is:
openapi: "3.0.0"
info:
description: "Codegen for demo service"
version: "0.0.1"
title: "Demo Service Specification"
contact:
email: "antonie.zafirov@north-47.com"
tags:
- name: "user"
description: "User tag for demo purposes"
servers:
- url: http://localhost:8000/
description: "local host"
paths:
/user/{id}:
get:
tags:
- "user"
summary: "Retrieves User by ID"
operationId: "getUserById"
parameters:
- name: "id"
in: "path"
description: "retrieves user by user id"
required: true
schema:
type: "integer"
format: "int64"
responses:
200:
description: "Retrieves family members by person id"
content:
application/json:
schema:
type: "object"
$ref: '#/components/schemas/User'
components:
schemas:
User:
type: "object"
required:
- "id"
- "firstName"
- "lastName"
- "dateOfBirth"
- "gender"
properties:
id:
type: "integer"
format: "int64"
firstName:
type: "string"
example: "John"
lastName:
type: "string"
example: "Smith"
dateOfBirth:
type: "string"
example: "1992-10-05"
gender:
type: "string"
enum:
- "MALE"
- "FEMALE"
- "UNKNOWN"
After that, we execute mvn clean install in the root directory of the project. The result is in target/generated-sources/. The generated API interface com.northlabs.demo.api.UserApi is what we need.
The magic is done by openapi-generator-maven-plugin. There are a lot of different generators that can be used, with a lot of options. Here is the list of them.
In the application.properties file we set server.port to 8000.
server.port=8000
The next step is creating a class UserRestController which implements the previously generated UserApi from demo-specification.
package com.northlabs.demoservice.rest.controller;
import com.northlabs.demo.api.UserApi;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class UserRestController implements UserApi {
}
Now, if we run the application and try to make a GET request to /user/1, the response status will be 501 Not Implemented.
Let’s make some simple implementation of the API:
package com.northlabs.demoservice.rest.controller;
import com.northlabs.demo.api.UserApi;
import com.northlabs.demo.api.model.User;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class UserRestController implements UserApi {
@Override
public ResponseEntity<User> getUserById(@PathVariable("id") Long id) {
User user = new User();
user.setId(id);
user.setFirstName("John");
user.setLastName("Doe");
user.setGender(User.GenderEnum.MALE);
user.setDateOfBirth("01-01-1970");
return ResponseEntity.ok(user);
}
}
Now the response will be:
And we are done!
This is how we are implementing OpenAPI/Swagger in our projects. In the next blog, I will show you how you can provide a Swagger UI, generate Java and JavaScript clients, modify base paths, etc.
Download the source code
Both projects are freely available on our GitLab repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.
Generate Spring Boot REST API using Swagger/OpenAPI (Antonie Zafirov, 2019-05-17)
Spring I/O is the leading conference for the Spring Framework ecosystem in Europe. This year it will be the 8th edition, taking place in Barcelona, Spain on 16-17 May, and I’m going to attend it for the first time. It is also my first conference this year, so I’m very excited 😊 about it.
Location: Palau de Congressos de Catalunya (map and entrance photos)
Topics
The detailed agenda and topics will be available here, but I’m interested in the topics mentioned below:
The State of Java Relational Persistence
Configuration Management with Kubernetes, a Spring Boot use-case
Moving beyond REST: GraphQL and Java & Spring
Spring Framework 5.2: Core Container Revisited
JUnit 5: what’s new and what’s coming
Migrating a modern spring web application to serverless
Relational Persistence with Spring Data JDBC [Workshop]
Clean Architecture with Spring
How to secure your Spring apps with Keycloak
Boot Loot – up your game and Spring like the pros
Spring Boot with Kotlin, Kofu and Coroutines
Multi-Service Reactive Streams Using Spring, Reactor, and RSocket
Zero Downtime Migrations with Spring Boot
Apart from the conference, I am planning to visit Font Màgica de Montjuïc, which is near to the conference venue.
I’m open to further suggestions regarding my visit to Barcelona. What else should I visit? Is there any special food that I should try?
Spring I/O, The Conference in Barcelona - 2019 (Amit Pethani, 2019-05-15)
Adobe Experience Manager (AEM) is one of the leading enterprise content management systems (CMS), formerly known as Day CQ. The initial version was introduced in 2002, at a time when web projects were mostly implemented as static, server-side rendered websites. Content as well as styling information were mixed up within the same HTML document. Nowadays traditional websites are being more and more replaced and complemented by (mobile) applications which come with their own presentation layer. Thus there is a need for a more flexible approach that separates content from styling, and that is able to deliver the data to multiple channels in a universal format. This requirement led to the emergence of so-called headless content management systems, which usually supply the data in a RESTful manner (as JSON) to their consumers. This blog post summarizes the headless CMS extension provided by AEM.
Headless CMS in AEM
The headless CMS extension for AEM was introduced with version 6.3 and has been continuously improved since then, it mainly consists of the following components:
Content Services: Provides the functionality to expose user-defined content through an HTTP API in JSON format. This allows delivering data to 3rd party clients other than AEM.
Content Fragments: Allows the user to insert/edit content as structured data entities. The schema of each content fragment is defined by a corresponding Content Fragment Model.
Note that AEM follows a hybrid approach, e.g. content fragments can either be delivered as JSON through the content services API, or embedded within a traditional HTML page. Visit Adobe’s headless CMS website for further information.
Example Project
There is a tutorial provided by Adobe where the concept of content services is explained in detail. It describes how to model the entries of a FAQ list by using content fragments, and how to expose this data through an API as JSON. The complete article can be found here.
The example is based on the existing We.Retail demo project that comes with the installation file of AEM. In summary, the following steps have to be performed:
First content fragment models should be enabled for the We.Retail project. From the AEM welcome page, go to Tools → Configuration Browser, open the properties of the We.Retail configuration and ensure that the Content Fragment Models property has been selected.
Navigate to Tools → Assets → Content Fragment Models → We.Retail to create or edit content fragment models. Select the FAQ model and click on the edit button to open the Content Fragment Model Editor. Here you can edit the model structure by adding/removing elements using drag and drop.
The model can be used to create new content fragments which contain the actual data. To do this, navigate to Assets → Files → We.Retail → English → FAQs → Company. There is already content available here: each entry of the FAQ list is modelled as a single fragment. The picture below shows the editor view for a FAQ content fragment.
To access the data through content services from outside, the FAQ items have to be embedded within a content page. Content fragments can be added to the page by drag and drop in the same way as any standard content component.
The content of the page can now be delivered as JSON via the “model” selector. The code section below shows an extract of the response of the FAQ page /content/we-retail/language-masters/en/faq.model.json
...
":items": {
"contentfragment": {
"title": "The Company Name",
"description": "",
"model": "we-retail/models/faq",
"elements": {
"question": {
"title": "Question",
"dataType": "string",
"value": "What's with the name \"We.Retail\"?",
":type": "string"
},
"answer": {
"title": "Answer",
"dataType": "string",
"paragraphs": [
""
],
"value": "<p>We're not sure, but it sounds good!<br>\n</p>\n",
":type": "text/html"
}
},
":items": {},
"elementsOrder": [
"question",
"answer"
],
":itemsOrder": [],
":type": "weretail/components/content/contentfragment"
},
"contentfragment_1811741936": {
"title": "History",
...
Finally the data can now be consumed by any 3rd party application. The screenshot below shows an example made with vue.js, where the FAQ list is loaded from AEM content services by XHR request.
Conclusion
AEM content services provide a flexible way to deliver structured content to multiple channels; the data as well as its corresponding schema can be created and modified without any need for a deployment. The main focus of AEM is still on the authoring of server-side rendered websites, but content services may be a good addition for environments where AEM is already in use.
Exploring the headless CMS functionality of AEM 6.5 (Lukas Stuker, 2019-05-07)
In March 2019 Shady and I visited Voxxed Days Romania in Bucharest. If you haven’t seen our post about that, check it out now! There were some really cool talks and so I decided to pick one and write about it.
At my previous employer, we switched from a monolithic service to a microservice architecture. After implementing about 20 different microservices in 2 years, the communication between them got more complex. In addition to that, all microservices were communicating synchronously! Did we build another monolith? I just recently read a blog post about that on another site: https://thenewstack.io/synchronous-rest-turns-microservices-back-monoliths/
I tried to recreate the complexity of synchronous communication in microservices in this picture 😅
Apache Kafka is a distributed streaming platform. Communication between endpoints is driven by messaging middleware such as Apache Kafka or RabbitMQ.
The destination defines which pipeline (or topic) the message is published to.
Create or edit /src/main/java/com/northlabs/lab/moneyprinterproducer/MoneyprinterProducerApplication.java
package com.northlabs.lab.moneyprinterproducer;
import lombok.AllArgsConstructor;
import lombok.Data;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.messaging.support.MessageBuilder;
import java.util.Random;
import java.util.UUID;
@SpringBootApplication
public class MoneyprinterProducerApplication {
public static void main(String[] args) {
SpringApplication.run(MoneyprinterProducerApplication.class, args);
}
}
@EnableBinding(Source.class)
@EnableScheduling
@AllArgsConstructor
class Spammer {
private final Source source;
private final SubscriberGenerator generator;
@Scheduled(fixedRate = 1000)
private void spam() {
Money money = generator.printMoney();
System.out.println(money);
source.output().send(MessageBuilder.withPayload(money).build());
}
}
@Component
class SubscriberGenerator {
private final String[] type = "Coin, Note".split(", ");
private final String[] currency = "CHF, EUR, USD, JPY, GBP".split(", ");
private final String[] value = "1, 2, 5, 10, 20, 50, 100, 200, 500, 1000".split(", ");
private final String[] quality = "poor, fair, good, premium, flawless, perfect".split(", ");
private final Random rnd = new Random();
private int i = 0, j = 0, k=0, l=0;
Money printMoney() {
i = rnd.nextInt(2);
j = rnd.nextInt(5);
k = rnd.nextInt(10);
l = rnd.nextInt(6);
return new Money(UUID.randomUUID().toString(), type[i], currency[j], value[k], quality[l]);
}
}
@Data
@AllArgsConstructor
class Money {
private final String id, type, currency, value, quality;
}
Here we simply create the whole microservice in one class; the most important code is the Spammer component above. SUPER SIMPLE! Now you already have a microservice which prints money and publishes it to the destination topic/pipeline “processor” 👏.
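The processor and consumer are not shown in this excerpt. As a rough idea, a minimal consumer bound to the default Sink input channel could look like the sketch below (package and class names are assumptions):
package com.northlabs.lab.moneyprinterconsumer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
@SpringBootApplication
@EnableBinding(Sink.class)
public class MoneyprinterConsumerApplication {
    public static void main(String[] args) {
        SpringApplication.run(MoneyprinterConsumerApplication.class, args);
    }
    // logs every message received from the bound input topic
    @StreamListener(Sink.INPUT)
    public void consume(String money) {
        System.out.println("Received: " + money);
    }
}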
If you can’t connect, add this line to /etc/hosts to ensure proper routing to container network “kafka”:
127.0.0.1 kafka
Start messaging platforms with the docker start command:
docker start zookeeper
docker start kafka
It’s a wrap!
Congratulations! You made it. Now just run your producer, processor and consumer and it should look something like this:
My example
Getting started
Run docker/runKafka.sh
Run docker/startMessagingPlatforms.sh
Start producer, processor and consumer microservice (e.g. inside IntelliJ)
Enjoy the log output 👨💻📋
Download the source code
The whole project is freely available on our Gitlab repository. Feel free to fix any mistakes and to comment here if you have any questions or feedback.
The conference took place at the World Trade Center in Moscow and started at 9 am. It looked like it would be huge from the beginning: well organized, with big conference halls. The first step was attendee registration.
After completing the registration and picking up some welcome packages, we had some starting coffee break and drinks. Also, we had visited most of the big company representative stands, that were in front of the conference halls. You can find interesting free materials there, like stickers, manuals and packages from the company you are visiting.
After the opening ceremony talk, the conference started with different speakers on every track. Some of them were Russian speakers, so we focused on the English ones. Every talk was one to one and a half hour long and after that was a coffee break in the lounge room. There were also two lunch breaks included. In the end, the party at 20:00. You can check the full schedule here.
Day two was completely the same setup, some different speakers or the same one with a different topic. In general, the whole organization of the conference was amazing, like it should be for a world-class event. I highly recommend visiting if you have a chance.
Stay tuned for my next part where I will describe my opinion of the talks that I have visited…
Live from JPoint, Moscow 2019 (Boris Karov, 2019-04-24)
As a developer, when you need to extend your programming knowledge, theoretically, practically, or both, you should go to a conference. Conferences are also a good chance to meet peers in your field. Unfortunately, most software engineering conferences focus on introducing new technologies more than on how a software engineer becomes an architect. That makes developer conferences a place to broaden the technical horizon, but not the vertical one. Exactly this makes DEVOXX so special. I have already had the pleasure of visiting a DEVOXX conference in Europe, among other conferences. Check out the article about that here!
What we expect from this conference 👤💬?
Normally, I focus on new technical topics like what is new in Java and what the new versions of Java offer. This time, however, I would like to focus on both the technical topics and software architecture, as it is a massive and fast-moving discipline. I expect training and insights that help me stay current with the latest trends in technologies, frameworks and techniques, and build the skills needed to advance my career.
The conference will be held in Kiev, so my colleague Jeremy and I will be travelling from Zurich airport to Kiev. According to some articles, Kiev is considered one of the cheapest cities in Europe, and we will try to explore its nightlife. To be honest, I didn’t expect the conference ticket to be so cheap; it costs just 150 USD.
I will write another blog to explain what I and my colleague Jeremy did in Kiev. I can say one thing at the end: “Stay Tuned”!
DEVOXX UKRAINE, Here I come (Shady Eltobgy, 2019-04-17)
JPoint is one of the three (JPoint, Joker and JBreak) best-known technical Java conferences for experienced Java developers. There will be 40 talks in two days, split across 4 parallel tracks. The conference takes place every year; this is the seventh consecutive year.
Organization to visit JPoint conference
Apart from the connecting flights needed to reach Moscow, nothing else should be a big issue: book the flights and choose a nearby hotel.
There are a few types of tickets, from which I’ll choose the personal ticket; the main reason is the 58% discount.
What is scheduled by now?
Many interesting subjects are going to be covered during two days of presentations:
New projects in Jenkins
Java SE 10 variable types
More of Java collections
Decomposing Java applications
AOT Java compilation
Java vulnerability
Prepare Java Enterprise application for production
Application migration JDK 9, 10, 11 and 12
Jenkins X
The following topics on the conference will be the most interesting ones for me:
Prepare Enterprise application for production (telemetry is crucial).
Is Java so vulnerable? What can we do to reduce security issues?
What is the right way of splitting application to useful components?
It looks like with Jenkins Essentials there is now significantly less overhead for managing it, without any user involvement. Let us see what Jenkins has replaced with a few commands.
Just half of the presentations are scheduled by now. Expect many more to be announced.
Probably every single developer has headaches with null pointers, so what is the silver bullet for this problem?
Java 8 introduced a handy way of dealing with this, called Optional. It is a container type for a value which may be absent. For example, let’s search for an object in the repository:
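The original snippet is not included in this excerpt; a sketch of such a lookup without Optional could look like this (carRepository and findByCarId are taken from the controller example further down, here assumed to return a plain Car or null):
// plain lookup that may return null when no car with the given id exists
Car car = carRepository.findByCarId("1");
System.out.println(car.toString()); // potential NullPointerException here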
We have a potential null pointer exception if the object with id “1” is not found in the database. The same example using Optional looks like this:
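Again a sketch rather than the original snippet, this time with the repository returning an Optional, consistent with the orElseThrow example below:
Optional<Car> car = carRepository.findByCarId("1");
// the absent case is now explicit and has to be handled by the caller
car.ifPresent(System.out::println);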
By returning an Optional object from the repository, we are forcing the developer to handle this situation. Once you have an Optional, you can use various methods that come with it, to handle different situations.
For example, isPresent() will return true if we have a non-null value in the Optional object.
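A small example, with the same repository as above:
Optional<Car> car = carRepository.findByCarId("1");
if (car.isPresent()) {
    System.out.println(car.get());
}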
Throw an exception when a value is not present
One possible solution for handling the case when the object is not present is throwing a custom exception for the specific object. All of these custom exceptions should be summarized and handled on a higher level; in the end, they can be shown to the end user.
@GetMapping("/cars/{carId}")
public Car getCar(@PathVariable("carId") String carId) {
return carRepository.findByCarId(carId).orElseThrow(
() -> new ResourceNotFoundException("Car not found with carId " + carId)
);
}
For that purpose, we can use the orElseThrow() method to throw an exception when the Optional is empty.
Thanks for reading, I hope it helps and don’t forget always to keep it simple, think simple 🙂
Null pointer exceptions in Java 8 (Boris Karov, 2019-03-18)
Spring Boot is the framework most used by Java developers for creating microservices. The first version, Spring Boot 1.0, was released in January 2014. Many releases followed, but Spring Boot 2.0 is the first major release since the launch. Spring Boot 2.0 was released in March 2018, and at the time of writing, the most recent version is 2.1.3, released on 15th February 2019.
There are many changes which will break your existing application if you upgrade from Spring Boot 1.x to 2.x; a migration guide is described here.
We are using Spring Boot 2.0 too 💻!
Currently, here at N47, we are implementing different services and also an in-house developed product(s). We decided to use Spring Boot 2.0 and we already have a blog post about Deploy Spring Boot Application on Google Cloud with GitLab. Check it out and if you have any questions, feel free to use the commenting functionality 💬.
Java
Spring Boot 2.0 requires Java 8 as the minimum version and also supports Java 9. If you are using Java 7 or earlier and want to use Spring Boot 2.0, that is not possible; you have to upgrade to Java 8 or 9. Also note that Spring Boot 1.5 does not support Java 9 or newer versions of Java.
Spring Boot 2.1 also supports Java 11; it has continuous integration configured to build and test Spring Boot against the latest Java 11 release.
Gradle Plugin
Spring Boot’s Gradle plugin 🔌 has been mostly rewritten to enable a number of significant improvements. Spring Boot 2.0 now requires Gradle 4.x.
Third-party Library Upgrades
Spring Boot builds on Spring Framework. Spring Boot 2.0 requires Spring Framework 5, while Spring Boot 2.1 requires Spring Framework 5.1.
Spring Boot has upgraded to the latest stable releases of other third-party jars wherever possible. Some notable dependency upgrades in the 2.0 release include:
Tomcat 8.5
Flyway 5
Hibernate 5.2
Thymeleaf 3
Some notable dependency upgrades in 2.1 release include:
Tomcat 9
Undertow 2
Hibernate 5.3
JUnit 5.2
Micrometre 1.1
Reactive Spring
Many projects in the Spring portfolio are now providing first-class support for developing reactive applications. Reactive applications are fully asynchronous and non-blocking. They’re intended for use in an event-loop execution model (instead of the more traditional one thread-per-request execution model).
Spring Boot 2.0 fully supports reactive applications via auto-configuration and starter-POMs. The internals of Spring Boot itself has also been updated where necessary to offer reactive alternatives.
Spring WebFlux & WebFlux.fn
Spring WebFlux is a fully non-blocking reactive alternative to Spring MVC. Spring Boot provides auto-configuration for both annotation-based Spring WebFlux applications, as well as WebFlux.fn which offers a more functional style API. To get started, use the starter spring-boot-starter-webflux POM which will provide Spring WebFlux backed by an embedded Netty server.
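To illustrate the functional style, a minimal router sketch might look like this (the /hello path and the handler are purely illustrative, not part of the original post):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.BodyInserters;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.RouterFunctions;
import org.springframework.web.reactive.function.server.ServerResponse;
import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
@Configuration
public class GreetingRouter {
    // routes GET /hello to a non-blocking handler returning a plain text greeting
    @Bean
    public RouterFunction<ServerResponse> helloRoute() {
        return RouterFunctions.route(GET("/hello"),
                request -> ServerResponse.ok().body(BodyInserters.fromObject("Hello, WebFlux!")));
    }
}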
Reactive Spring Data
Where the underlying technology enables it, Spring Data also provides support for reactive applications. Currently, Cassandra, MongoDB, Couchbase and Redis all have reactive API support.
Spring Boot includes special starter-POMs for these technologies that provide everything you need to get started. For example, spring-boot-starter-data-mongodb-reactive includes dependencies to the reactive mongo driver and project reactor.
Reactive Spring Security
Spring Boot 2.0 can make use of Spring Security 5.0 to secure your reactive applications. Auto-configuration is provided for WebFlux applications whenever Spring Security is on the classpath. Access rules for Spring Security with WebFlux can be configured via a SecurityWebFilterChain. If you’ve used Spring Security with Spring MVC before, this should feel quite familiar.
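As a small sketch of what such a SecurityWebFilterChain configuration could look like (the /public/** path is illustrative):
import org.springframework.context.annotation.Bean;
import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity;
import org.springframework.security.config.web.server.ServerHttpSecurity;
import org.springframework.security.web.server.SecurityWebFilterChain;
@EnableWebFluxSecurity
public class SecurityConfig {
    // permits everything under /public/**, requires authentication elsewhere, using HTTP Basic
    @Bean
    public SecurityWebFilterChain securityWebFilterChain(ServerHttpSecurity http) {
        return http
                .authorizeExchange()
                    .pathMatchers("/public/**").permitAll()
                    .anyExchange().authenticated()
                .and()
                .httpBasic()
                .and()
                .build();
    }
}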
Embedded Netty Server
Since WebFlux does not rely on Servlet APIs, Spring Boot is now able to support Netty as an embedded server for the first time. The starter spring-boot-starter-webflux POM will pull-in Netty 4.1 and Reactor Netty.
HTTP/2 Support
HTTP/2 support is provided for Tomcat, Undertow and Jetty. Support depends on the chosen web server and the application environment.
Kotlin
Spring Boot 2.0 now includes support for Kotlin 1.2.x and offers a runApplication function which provides a way to run a Spring Boot application using Kotlin.
Actuator Improvements
There have been many improvements and refinements to the actuator endpoints with Spring Boot 2.0. All HTTP actuator endpoints are now exposed under the /actuator path, and the resulting JSON payloads have been improved.
Data Support
In addition to the “Reactive Spring Data” support mentioned above, several other updates and improvements have been made in the area of data.
HikariCP
Initialization
JOOQ
JdbcTemplate
Spring Data Web Configuration
Influx DB
Flyway/Liquibase Flexible Configuration
Hibernate
MongoDB Client Customization
Redis
Here I have only listed the changes in data support; a detailed description of each topic is available here.
Animated ASCII Art
Finally, Spring Boot 2.0 also provides support for animated GIF banners.
For a complete overview of changes in configuration go here, and the release notes for 2.1 are available here.
Spring Boot 2.0 new Features (Amit Pethani, 2019-03-13)
Tickets for individuals: 280€ until 1st March. No possibility to change the participant.
Personal tickets may not be acquired by companies in any way. The companies may not fully or partially reimburse these tickets’ costs to their employees.
Standard tickets: 465€ until 1st March. A possibility to change the participant is given.
Tickets for companies and individuals, no limits. Includes a set of closing documents and amendments to the contract.
So what are my plans and expectations for the first day of JPoint. I will start with Rafael Winterhalter who is a Java Champion and will talk about Java agents. It will be interesting to see how Java classes can be used as templates for implementing highly performant code changes.
Next stop will be the creator of Jenkins: Kohsuke Kawaguchi. He has a great headline, Superpowers coming to your Jenkins, and I am excited to see where Jenkins is going next.
Last stop for day one, Simon Ritter from Azul Systems, with focus on local variable type inference. As with many features, there are some unexpected nuances as well as both good and bad use cases that will be covered.
There will be many more for day one but I will focus on these three for now. Also at the end, party at 20:00.
DAY 2
I will start the second day with Simon Ritter again, this time with a focus on JDK 12: Pitfalls for the unwary. It will be interesting to see all the areas of JDK 9, 10, 11 and 12 that may impact application migration. Another topic will be how the new JDK release cadence will impact Java support and the choice of which Java versions to use in production.
Other headliner talks for the second day are still under consideration, so I’m expecting something interesting from Pivotal and JetBrains.
Feel free to share some Moscow hints or interesting talks that I’m missing.
My expectations on JPoint Moscow 2019 (Boris Karov, 2019-03-04)
A lot of developers experience a painful process when their code is deployed to an environment. We, as a company, suffered from the same thing, so we wanted to create something to make our lives easier.
After internal discussions, we decided to build a fully automated CI/CD process. We investigated and decided to implement GitLab CI/CD with Google Cloud deployment for that purpose.
Further in this blog, you can see how we achieved that and how you can achieve the same.
Now, after we have our application in GitLab repository, we can go to setup Google Cloud. But, before you start, be sure that you have a G-Suite account with enabled billing.
The first step is to create a new project: in my case it is northlabs-gitlab-demo.
Create project: northlabs-gitlab-demo
Now, let’s create our Kubernetes Cluster.
It will take some time for the Kubernetes cluster to initialize before GitLab is able to create the cluster.
We are done with Google Cloud, so it’s time to set up Kubernetes in our GitLab repository.
First, we add a Kubernetes cluster.
Add Kubernetes ClusterSign in with Google
Next, we give a name to the cluster and select a project from our Google Cloud account: in my case it’s gitlab-demo.
The base domain name should be set up.
Installing Helm Tiller is required, and installing other applications is optional (I installed Ingress, Cert-Manager, Prometheus, and GitLab Runner).
Install Helm Tiller Installed Ingress, Cert-Manager, Prometheus, and GitLab Runner
After installing the applications it’s IMPORTANT to update your DNS settings. Ingress IP address should be copied and added to your DNS configuration. In my case, it looks like this:
Configure DNS
We are almost done. 🙂
The last thing that should be done is to enable Auto DevOps.
And to set up Auto DevOps.
Now take your coffee and watch your pipelines. 🙂 After a couple of minutes your pipeline will finish and will look like this:
Now open the production pipeline and in the log, under notes section, check the URL of the application. In my case that is:
Application should be accessible at: http://47northlabs-47northlabs-product-playground-gitlab-demo.gitlab-demo.north-47.com
This is just a basic implementation of the GitLab Auto DevOps. In some of the next blogs, we will show how to customize your pipeline, and how to add, remove or edit jobs.
Deploy Spring Boot Application on Google Cloud with GitLab (Antonie Zafirov, 2019-03-01)
We had a great time visiting these two cities 🙌 and we can’t wait to do that again this year 😎.
APPDEVCON Amsterdam
DEVOXX Antwerp
What do we expect from the two conferences in 2019 👤💬?
Like last year, we are interested in several different topics. I am looking forward to the Methodology & Culture slots, while Shady is most interested in the Java topics. All in all, we hope there are several interesting talks about:
Architecture & Security
Cloud, Containers & Infrastructure
Java
Big Data & Machine Learning
Methodology & Culture
Other programming languages
Web & UX
Mobile & IoT
We ❤️ food!
The title is speaking for itself. We just love food 🍴! Travelling ✈️ gives a good opportunity to see and taste something new 👅. All over the world, every culture has unique and special cuisine. Each cuisine is very different because of the different methods of cooking food. We try to taste (almost) everything when we arrive in new countries and cities.
We are really looking forward to seeing what Bucharest’s and Kiev’s specialities are 🍽 and to trying them all! Here are some snapshots from our trips to the conferences in Amsterdam, Netherlands and Krakow, Poland in 2018…
Dutch raw herring in Amsterdam
Indian dishes in Amsterdam
Pierogies in Krakow
Best burger in Krakow
Mixed Israeli food in Krakow
What about the costs 💸?
One great thing at N47 is that your personal development 🧠 is important to the company. Besides hosted internal events and workshops, you can also visit international conferences 🛫 and everything is paid 💰. Every employee can choose their desired conferences/workshops, gather the information about the costs and request the visit. One step of the approval process is writing 📝 about the expectations in a blog post. This is exactly what you are reading 🤓📖 at the moment.
Costs breakdown (per person)
Flights: 170 USD
Hotel: 110 USD (3 days, 2 nights)
Conference: 270 USD
Food and public transportation: 150 USD
Knowledge gains: priceless
Exploring a new country and its food: priceless
Spending time with your buddy: priceless
Total: 700 USD
Any recommendations for Bucharest or Kiev?
We have never visited these two cities 🏙, so if you have any tips or recommendations, please let us know in the comments 💬!
Jeremy Haas, 2019-02-13 - Voxxed Days Bucharest & Devoxx Ukraine - HERE WE COME!
In Adobe Experience Manager (AEM) projects, developers work a lot with services, filters, servlets and handlers. All of these classes are OSGi components and services that use the Felix SCR annotations or the newer OSGi DS annotations. But sometimes you also need an OSGi service or the BundleContext in a class that is not controlled by OSGi / DS.
Solution
You can use the OSGi FrameworkUtil [1] to get a reference to the bundle context from any object. The code below shows how to get a reference to the BundleContext and the service.
Most of the time, you have the SlingHttpServletRequest ready to pass:
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.scripting.SlingBindings;
import org.apache.sling.api.scripting.SlingScriptHelper;
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;
import org.osgi.framework.ServiceReference;
import java.util.Objects;
public class ServiceUtils {

    /**
     * Gets the service from the current bundle context.
     * Returns null if something goes wrong.
     *
     * @param <T> the service type
     * @param request the current Sling request
     * @param type the service class to look up
     * @return the service instance, or null if it cannot be resolved
     */
    @SuppressWarnings({"unchecked", "rawtypes"})
    public static <T> T getService(SlingHttpServletRequest request, Class<T> type) {
        SlingBindings bindings = (SlingBindings) request.getAttribute(SlingBindings.class.getName());
        if (bindings != null) {
            // Inside a Sling script context the SlingScriptHelper can resolve services directly.
            SlingScriptHelper sling = bindings.getSling();
            return Objects.isNull(sling) ? null : sling.getService(type);
        } else {
            // Fall back to the OSGi framework: find the bundle that loaded the service type.
            Bundle bundle = FrameworkUtil.getBundle(type);
            if (bundle == null || bundle.getBundleContext() == null) {
                return null;
            }
            BundleContext bundleContext = bundle.getBundleContext();
            ServiceReference serviceRef = bundleContext.getServiceReference(type.getName());
            return serviceRef == null ? null : (T) bundleContext.getService(serviceRef);
        }
    }
}
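To make the helper more concrete, here is a minimal usage sketch (my own example, not part of the original post): a servlet that resolves Sling's SlingSettingsService through the current request. The servlet class name and the chosen service are only illustrative, and the OSGi servlet registration annotations are omitted for brevity; any service interface available in the container would work the same way.

import java.io.IOException;
import javax.servlet.ServletException;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.servlets.SlingSafeMethodsServlet;
import org.apache.sling.settings.SlingSettingsService;

public class SlingIdServlet extends SlingSafeMethodsServlet {

    @Override
    protected void doGet(SlingHttpServletRequest request, SlingHttpServletResponse response)
            throws ServletException, IOException {
        // Resolve the service through the helper instead of @Reference injection.
        SlingSettingsService settings = ServiceUtils.getService(request, SlingSettingsService.class);
        if (settings == null) {
            response.sendError(SlingHttpServletResponse.SC_SERVICE_UNAVAILABLE);
            return;
        }
        response.getWriter().write("Sling instance id: " + settings.getSlingId());
    }
}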
Or you can use any class that was loaded by a bundle classloader:
package com.example.aem.core.utils;
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;
import org.osgi.framework.ServiceReference;
public class ServiceUtils {

    /**
     * Gets the service from the current bundle context.
     * Returns null if something goes wrong.
     *
     * @param <T> the service type
     * @param clazz a class that was loaded by a bundle classloader
     * @param type the service class to look up
     * @return the service instance, or null if it cannot be resolved
     */
    public static <T> T getService(Class<?> clazz, Class<T> type) {
        // Resolve the bundle that loaded the given class.
        Bundle currentBundle = FrameworkUtil.getBundle(clazz);
        if (currentBundle == null) {
            return null;
        }
        BundleContext bundleContext = currentBundle.getBundleContext();
        if (bundleContext == null) {
            return null;
        }
        ServiceReference<T> serviceReference = bundleContext.getServiceReference(type);
        if (serviceReference == null) {
            return null;
        }
        return bundleContext.getService(serviceReference);
    }
}
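And a similar sketch for the second variant (again my own example, with an illustrative class name): a plain helper class that is not managed by OSGi / DS resolves a service by passing its own class, which was loaded by the bundle's classloader.

package com.example.aem.core.utils;

import org.apache.sling.settings.SlingSettingsService;

public class StartupInfo {

    public static String slingId() {
        // StartupInfo.class was loaded by this bundle's classloader, so
        // FrameworkUtil can resolve the owning bundle from it.
        SlingSettingsService settings = ServiceUtils.getService(StartupInfo.class, SlingSettingsService.class);
        return settings == null ? "unknown" : settings.getSlingId();
    }
}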
Jeremy Haas, 2019-01-03 - How to get a service reference or BundleContext with no OSGi context