Using Vue3’s Composition API in Vue2

Reading Time: 4 minutes

Vue3 has been officially out since September 18th, and along with it comes the Composition API, which is positioned as an alternative to the old Options API. The new Composition API is defined as a set of additive, function-based APIs that allow the flexible composition of component logic.

The Composition API was introduced because, with the old Options API, the readability of a component suffered as it grew and the code started to get messy, the code reuse patterns had some drawbacks, the support for TypeScript was limited, etc.
The visual comparison of both APIs looks something like this:

First, we must install the @vue/composition-api plugin and register it via Vue.use() before using other APIs.
Import the VueCompositionApi:

import Vue from 'vue'
import VueCompositionApi from '@vue/composition-api'

Register the plugin:

Vue.use(VueCompositionApi)

And then we are ready to start.

So, the new API must contain the setup() function which serves as the entry point for using the composition API inside the components. setup() is called before the beforeCreate hook. The component would look like this:

<template>
    <div>
        <div> {{ notes }} </div>
        <div> {{ count }} </div>
    </div>
</template>

<script>
import { ref, computed, reactive, toRefs, onMounted } from '@vue/composition-api'

export default {
  setup() {
    const notes = ref([])
    async function getData() {
      notes.value = await DataService.getNotesData()
    }

    const data = reactive({
      count: 0,
      actions: ['firstAction', 'secondAction', 'thirdAction'],
      object: {
        foo: 'bar'
      }
    })
    const computedData = computed(() => notes.value)
    
    onMounted(() => {
      getData()
    })

    return {
      ...toRefs(data),
      notes,
      computedData
    }
  }
}
</script>

In the code above we can see the structure of the new API. We have the setup() function, which is exported in the script tag.
Inside the setup function we can see several familiar properties.

const notes = ref([])

This initializes a property inside the setup() function scope. We must wrap it in ref if we want to make it reactive, which means that if we don’t use ref and we change that variable, the change won’t be reflected in the DOM. This is the same as initializing a variable in data() in Vue2:

data() {
  return {
    notes: []
  }
}

As we can see, we do not have the methods section for creating functions; instead, we define the functions inside setup(). After that, the function is used in the onMounted hook, which is a bit different from the mounted hook in the Options API.

Some of the hooks were removed, but almost all of them are available in a similar form.

  • beforeCreate -> use setup()
  • created -> use setup()
  • beforeMount -> onBeforeMount
  • mounted -> onMounted
  • beforeUpdate -> onBeforeUpdate
  • updated -> onUpdated
  • beforeDestroy -> onBeforeUnmount
  • destroyed -> onUnmounted
  • activated -> onActivated
  • deactivated -> onDeactivated
  • errorCaptured -> onErrorCaptured

In addition, two new debug hooks were added to the composition API:

  • onRenderTracked
  • onRenderTriggered

Computed and watch are still available. In the code above we can see how computed is used to return the notes values.
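As a minimal sketch of watch (the searchTerm ref and the logging are made-up examples, not part of the component above), it might look like this:

import { ref, watch } from '@vue/composition-api'

const searchTerm = ref('')

// runs the callback every time searchTerm.value changes
watch(searchTerm, (newValue, oldValue) => {
  console.log(`search term changed from "${oldValue}" to "${newValue}"`)
})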

reactive() is similar to ref(). If we want to create a reactive object we can still use ref(), but underneath the hood it’s just calling reactive(). On the other hand, reactive() will not work with primitive values: it takes an object and returns a reactive proxy of the original. The difference shows up when accessing data defined using reactive(): if we return the reactive object itself from setup(), we must access its properties through the object in the template, for example for the count:

<div> {{ data.count }} </div>

One very important thing is to convert the reactive object into a plain object of refs using toRefs when returning it, like in the component above; that way the template can use count directly and reactivity is kept:

return {
  ...toRefs(data)
}

This is just a small piece of the cake, something to begin with when using the Composition API in a Vue2 application. For more, you can visit the official @vue/composition-api documentation.

JavaScript Best Practices for Readable and Maintainable Code

Reading Time: 4 minutes

Let’s have a look at some coding standards that can help with:

  • Keep the code consistent
  • Easier to read and understand
  • Easier to maintain
  • Easier to refactor

These coding standards are my own personal opinion on what can help with the above points, based on what I have learned and experienced while developing and reviewing other developers’ code.

Variables

Always use ‘const’ & ‘let’ over ‘var’

Using const helps readability, as developers know the variable can’t be reassigned.
var and let are both used for variable declaration in JavaScript, but the difference between them is that var is function-scoped and let is block-scoped. It is too much to get into detail here; maybe I will write another post about that.
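A quick illustration of the difference:

if (true) {
  var functionScoped = 'visible outside the block';
  let blockScoped = 'only visible inside the block';
}

console.log(functionScoped); // 'visible outside the block'
console.log(blockScoped);    // ReferenceError: blockScoped is not defined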

Avoid using global variables

Minimize the use of global variables. Global variables are a terribly bad idea. The problem with global variables and functions is that they can be overwritten by other scripts.

Naming variables

Always try to come up with names that make sense and are not too long. Naming variables may be the hardest thing in coding.
let variables should be camelCase. For const, if it is at the top of the file it should be SNAKE_CASE (all caps); if it is not at the top, it should be camelCase.
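For example (the names are made up for illustration):

// top-level constant
const MAX_RETRY_COUNT = 3;

function loadUser() {
  // camelCase for everything inside a scope
  const userId = 42;
  let retryCount = 0;
}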

API Calls

Pick a method and stick with it

By method I mean one of the below:

  • XMLHttpRequest
  • fetch
  • Axios
  • jQuery

So far Axios and fetch are the most preferred way to go with. The benefit of using Axios is that it has wide browser support. Even IE11 can run Axios.

Make the calls reusable

Instead of scattering the calls everywhere in the code, it is good to have modules for your API calls. This way it becomes easier to refactor: if something changes in the API, you will have to change it only in one place.
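A minimal sketch of such a module (the base URL and endpoints are made up):

// api/notes.js
const BASE_URL = 'https://example.com/api';

export function getNotes() {
  return fetch(`${BASE_URL}/notes`).then(response => response.json());
}

export function createNote(note) {
  return fetch(`${BASE_URL}/notes`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(note)
  }).then(response => response.json());
}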

Dom Manipulation

Use CSS classes over adding styles

For example we have some basic forms here:

<form>
  <input type='text' required>
  <button type='submit'>
    Submit
  </button>
</form>

Along with the following JavaScript code:

const input = document.querySelector('input');
const form = document.querySelector('form');
form.onsubmit = (e) => {
  e.preventDefault();
}
input.oninvalid = () => {
  input.style.borderColor = 'red';
  input.style.borderStyle = 'solid';
  input.style.borderWidth = '1px';
}

Instead of adding inline style like the above example, it is much cleaner to add CSS class to the input field like in the example below:

const input = document.querySelector('input');
const form = document.querySelector('form');
form.onsubmit = (e) => {
  e.preventDefault();
}
input.oninvalid = () => {
  input.className = 'invalid';
}
// CSS
.invalid {
  border: 1px solid red;
}

Accessing the DOM tree

Accessing the DOM tree is quite an expensive operation; it is the bottleneck of JavaScript in terms of performance. Therefore, we must strive to minimize the number of operations that access the DOM tree.

Example:

// BAD
let padding = document.getElementById("content").style.padding
let margin = document.getElementById("content").style.margin;

//GOOD
let style = document.getElementById("content").style
let padding = style.padding
let margin = style.margin

Functions

Use ES6 arrow functions where possible

Arrow functions are a more concise syntax for writing function expressions. They are anonymous and change the way this binds in functions.

//BAD
var multiply = function(a, b) {
  return a * b;
}

//GOOD
const multiply = (a, b) => a * b

Naming functions

Functions should be camelCase and should have descriptive names:

//BAD
const sMail = () => {
  //...
};
const sendmail = () => {
  //...
};

//GOOD
const sendMail = () => {
  //...
};

Functions should only do one thing

This is one of the most important rules in programming. When your function does more than one thing, it is harder to test and read. When you isolate a function to just one action, it can be refactored easily and your code will read much cleaner.

//BAD
const notifyListeners = listeners => {
  listeners.forEach(listener => {
    const listenerRecord = database.lookup(listener);
    if (listenerRecord.isActive()) {
      notify(listener);
    }
  });
}
//GOOD
const notifyActiveListeners = listeners => {
  listeners.filter(isListenerActive).forEach(notify);
}

function isListenerActive(listener) {
  const listenerRecord = database.lookup(listener);
  return listenerRecord.isActive();
}

Conclusion

Coding standards in any language can really help with the readability and the maintainability of an application. The main point is to be consistent as they really help scale up an application in a clean way.

If you take this advice, it will bring the readability and maintainability of your code to the next level. The next time you need to address an issue or implement a feature request, your journey will be fast and seamless.

Server-side rendering with Nuxt.js

Reading Time: 5 minutes

What is server-side rendering

By default, modern JS frameworks produce and manipulate DOM in the browser as an output. But, it is possible to render the same codebase into HTML strings on the server and send them to the browser and finally compile the static markup into a fully working application on the client-side. A server-rendered application can also be considered isomorphic or universal, in the sense that the majority of your code runs on both the server and the client.

Trade-offs when using SSR

  • Development constraints
  • Build setup, deployment requirements
  • Server-side load

Advantages when using SSR

  • Better SEO
  • Faster time to content

Nuxt.js

Nuxt is a framework based on Vue.js, Node.js, Webpack and Babel. It is free and open-source and we can use it to create various applications from static landing pages to complex enterprise-ready web solutions.

Supports 3 modes of working:

  1. Server-Rendered (Universal SSR)
  2. Single Page Applications (SPA)
  3. Static-Generated (Pre Rendering)

Some of the features Nuxt provides:

  • Write Vue Files (*.vue)
  • Automatic Code Splitting
  • Server-Side Rendering
  • Powerful Routing System with Asynchronous Data
  • Static File Serving
  • ES2015+ Transpilation
  • Bundling and minifying of your JS & CSS
  • Managing the <head> element
  • Hot module replacement in Development
  • Pre-processor: Sass, Less, Stylus, etc.
  • HTTP/2 push headers ready
  • Extending with Modular architecture

Project structure

Here we have the project structure that Nuxt provides out of the box. The pages, middleware, plugins and layouts directories are framework-specific and we’ll briefly explain their purpose.

The Nuxt community has added great README.md files into each directory with links to the documentation.

The layouts directory defines all of the layouts that our application can use. This is a great place to add shared global components that are used across the application like the header and footer for example. By default, the template that is used for .vue files in the pages directory is default.vue. It is needed to inject all of the page’s components, text, assets, and data.

Pages is the only required directory. All Vue components in this directory are automatically added to the vue-router based on their filenames and the directory structure. We can have dynamic routes by adding an underscore (_) to a directory or a .vue file.

/pages
---/categories
------/_category_id
------/products
---------/_product_id

// The structure above generates a router config that will provide 
// a route for /categories/1/products/3 for example

Middleware is a function that can be executed before rendering a page or layout. There is a variety of reasons we may want to do so. Route guarding is a popular use where we could check the Vuex store for a valid login or validate some params (instead of using the validate method on the component itself). Another use for middleware can be to generate dynamic breadcrumbs based on the route and params. These functions can be asynchronous, meaning nothing will be shown to the user until the middleware is resolved.
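A minimal route-guard sketch (the store shape and route names are assumptions for illustration):

// middleware/auth.js
export default function ({ store, redirect }) {
  // if the user is not logged in, send them to the login page
  if (!store.state.auth.loggedIn) {
    return redirect('/login')
  }
}

// the middleware is then referenced by name, e.g. in a page component:
// export default { middleware: 'auth' }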

The plugins directory allows us to register Vue plugins before the application is created. This allows the plugin to be shared throughout our app on the Vue instance and be accessible in any component. Most major plugins have a Nuxt version that can be easily registered to the Vue instance by following their docs. However, there will be circumstances when we will have to develop a plugin or need to adapt to an existing plugin.
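A rough sketch of registering a plugin (v-tooltip is just an example library, the file name is made up):

// plugins/vue-tooltip.js
import Vue from 'vue'
import VTooltip from 'v-tooltip'

Vue.use(VTooltip)

// nuxt.config.js (excerpt) – tell Nuxt to load the plugin before creating the app
// export default {
//   plugins: ['~/plugins/vue-tooltip.js']
// }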

Nuxt’s supercharged components

Nuxt’s page components have extra methods attached to them that we can use to provide additional functionality. The main ones we would use in a project will be the asyncData and fetch methods. Both are very similar in concept, they run asynchronously before the component is generated, and they can be used to populate the data of a component and the store. They also enable the page to be fully rendered on the server before sending it to the client even when we have to wait for some database or API call.
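A hedged sketch of asyncData in a page component (the endpoint and data shape are made up):

// pages/posts/_id.vue (script part)
export default {
  // runs on the server on first load and on the client during navigation,
  // before the component instance is created
  async asyncData({ params }) {
    const post = await fetch(`https://example.com/api/posts/${params.id}`)
      .then(response => response.json())
    // the returned object is merged into the component's data
    return { post }
  }
}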

Summary

There’s a lot to cover with Nuxt and server-side rendering, this post aims to provide a general overview of the framework, why server-side rendering is important and how we can scaffold our next web application using Nuxt.

Before using SSR though, we should ask whether we actually need it. Generally, it depends on how important time to content is for the application. If we are building an internal dashboard where an extra few hundred milliseconds on initial load isn’t an issue, SSR wouldn’t be needed. However, in applications where time to content is critical, SSR can help us achieve the best possible performance.

This is a starter Nuxt application which showcases examples of the features mentioned in the post if you have any comments or feedback let us know.
nuxt-ssr-example

Thanks for reading!

Lifecycle hooks in Vue.js

Reading Time: 3 minutes

In this post we’ll be doing an overview of lifecycle hooks in Vue.js.

Each Vue application first creates a Vue instance, with the Vue function:

var vm = new Vue({
  router,
  render: function (createElement) {
    return createElement(App)
  }
}).$mount('#app')

This Vue instance during its initialization goes through several phases and it exposes some properties and methods in each phase. This is an example of Vue using the template method behavioural design pattern.

The methods which run by default in this process of creating and updating the DOM are called lifecycle hooks and using them properly allows easy access to a behind the scenes overview of how the library works.

Below we can see a simple diagram showcasing all methods in one instance.

We can see that we have 8 methods that we can split into 4 phases in an application’s lifecycle.

  • Creation or initialization hooks (beforeCreate, created)
  • Mounting or DOM insertion hooks (beforeMount, mounted)
  • Updating hooks (beforeUpdate, updated)
  • Destroying hooks (beforeDestroy, destroyed)

Creation hooks

beforeCreate is the first hook that gets called in a Vue component; it has no access to the component’s reactive data and events as they haven’t been initialized yet. It’s good to use this hook for non-reactive data.
The created hook has access to the component’s events, reactive data and state. The DOM and $el properties are still not available. This hook is usually used to perform API calls and store the response.
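A small sketch of such a call (the endpoint and the notes data property are made up):

created() {
  fetch('https://example.com/api/notes')
    .then(response => response.json())
    .then(notes => {
      // reactive data is already available here, the DOM is not
      this.notes = notes
    })
}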

Mounting hooks

The next lifecycle hook to be called is beforeMount and it happens right before the component is mounted on the DOM. Here is our last opportunity to perform operations before the DOM gets rendered.
Mounted is called right after and now the DOM is available. This is a good place to add third-party integrations like charts, Google Maps etc, which interact directly with the DOM.

mounted() {
  console.log('This is the DOM instance', this.$el)
  this.$nextTick(function() {
    console.log('Child components have now been loaded')
  })
}

Updating hooks

beforeUpdate and updated are the two hooks that are called each time a reactive property is changed. The data in beforeUpdate holds the previous values of the property, while after updated runs, the instance has finished re-rendering.

beforeUpdate () {
  console.log('before update called')
},
updated () {
  console.log('update finished')
}

Destroying hooks

When beforeDestroy is called, we can still mutate the state and the instance is still functional. Here we can do some last-moment data mutation before the instance is destroyed.
When destroyed is called, the Vue instance does not exist anymore. All directives and event listeners have been removed and child components have been destroyed.

beforeDestroy () {
  console.log('before destroy called')
},
destroyed () {
  console.log('destroyed')
}

I hope this provides a good overview of the lifecycle hooks in a Vue application. For more information, refer to the official docs here.

JavaScript loop and object iteration (optimization)

Reading Time: 3 minutes

Loops have become part of our daily life as developers and we use them at least once a day. Because of that, one day I decided to investigate and dig deeper into JavaScript loops, where I found some very interesting things, and if I don’t share them with you, I am going to feel guilty.

Before you continue reading, I would strongly recommend you to read my previous blog which I believe you will find very useful to create a full picture of the loops. So, go on and read it.

Object properties iteration

Let’s first analyze object iteration and suppose that we have an object, something like:

var obj = {
    property1: 1,
    property2: 2,
    …
}

First, what comes to mind is to iterate over them with the standard for...in iteration:

for (var prop in obj) {
    console.log(prop);
}

In this case, we are going to iterate through the object properties, but is it the correct way? The answer is yes and no, it depends on your needs. Another way to iterate through is to exclude all inherited properties, which in some cases we do not need. We can exclude them by using the JavaScript method hasOwnProperty(). You can find an explanation about the in operator and hasOwnProperty() in my previous blog post.

Since we learned some object optimization/improvements/usage, now the question is, can we really do an optimization?
The answer is yes. Before I am going to show you how we can do that, let’s spend some time on the loops.

Loop iteration

In order to continue the previous example, I will continue explaining the loops with object iteration (of course you can test it with a list of integers like the speed test examples or anything you want).

To accomplish that, we will need the JavaScript method Object.keys().

  • The Object.keys() method returns an array of the given object’s own enumerable property names

Let’s write the standard for loop:

var keys = Object.keys(obj)
for (var i = 0, len = keys.length; i < len; i++) {
    console.log(obj[keys[i]]);
}

Now we have a solution where we decrease the iteration time by caching keys.length in a variable, so the length evaluation happens once (O(1)) instead of on every iteration (O(N) evaluations in total), which is a big time saving if we iterate over big arrays.
So, during development, if you are not limited (like by applying some best practices, …), you can add another optimization by using a while loop.

var i = keys.length;
while (i--) {
    console.log(obj[keys[i]]);
}

In this case, we do not declare a new variable, we don’t execute new operations and the while loop will automatically stop when it reaches -1.

Speed testing:

Since the new browsers like Chrome are very fast and optimized, in order to see the best speed differences, I would suggest executing the loops on IE where you will be able to see a real speed difference between the loops.

var arr = new Array(10000);

Example speed test 1

console.time();
for (var i = 0; i < arr.length; i++) {
    // operations...
    var sum = i * i;
}
console.timeEnd();

Execution 1: 4.4ms
Execution 2: 5.5ms
Execution 3: 5ms
Execution 4: 4.6ms
Execution 5: 5ms

Example speed test 2

var i = arr.length;
console.time();
while (i--) {
    // operations...
    var sum = i * i;
}
console.timeEnd();

Execution 1: 3.7ms
Execution 2: 4.8ms
Execution 3: 3.9ms
Execution 4: 3.8ms
Execution 5: 4.2ms

Thank you for reading and I would appreciate any feedback.

Tool Showcase: Node-RED

Reading Time: 5 minutes

Node-RED is a flow-oriented tool to wire together hardware devices, APIs and online services. It is mainly targeting the IoT market but can be used for a lot of other things as well. Because of its easy-to-use browser-based UI and drag-and-drop programming system, it’s really beginner-friendly and has a gentle learning curve.

Even though it was developed by IBM back in 2013, it’s not really known to most of the IT community. At least none of my colleagues knew it. That’s reason enough for me to write this tool showcase. Enjoy reading!

Getting started

Instead of just reading along, I encourage everybody to just start Node-RED and try it yourself. If docker is installed, this is just a matter of seconds. Use the following command to start a Node-RED instance locally:

docker run -it -p 1880:1880 --name mynodered nodered/node-red

That’s it. You are ready to go! Open your browser and go to localhost:1880 to access the Node-RED UI.

One of the simplest flows is the following one:

Drag an “http in”, “template” and “http out” node into the flow and connect them. After clicking the deploy button you can access localhost:1880/<whateverPathYouConfiguredInYourHttpInNode> to see whatever you’ve configured in your template node. Congratulations, you have just created your first Node-RED flow!

Of course, rendering static content on an endpoint is not the most exciting thing to do, but between the HTTP in and out nodes, you’re free to do whatever you want. Nodes to make HTTP calls to different URLs, reading and writing files and much more are provided by Node-RED by default.

The power of the community

Node-RED uses Node.js for its nodes (yes, the terminology “node” is overused in the Node-RED context 🙂 ). This has a big advantage, that new nodes can be added easily from the node package manager (npm). You can find these nodes by searching for “node-red-contrib” in the npm repository. An even simpler option is to install these nodes using the “Manage Palette” option in the UI. There you can install new nodes with a single click.

There are nodes for almost everything. Need support for slack? Yep, there’s a node for that. Tired of pressing light switches in your house to turn off and on your Philips Hue lights? Yep, there’s a node for that as well. Whatever you can imagine, there’s a node for it.

A slightly more advanced flow

To test some Node-RED features, I tried to come up with a slightly more complicated example. I own some Philips Hue lamps and a LaMetric Time. After searching some nodes for these devices, I was really surprised that somebody already built some for these two devices (I was especially surprised about the support for the not so well-known LaMetric Time).

My use case was pretty straightforward. Turn on the lights when it gets dark and display a message on my LaMetric near my TV. At midnight, turn off the lights and display a different message. Additionally, I wanted some web endpoints that I could call to trigger both actions manually.

After only a few minutes, I had the following flow:

And it works! I found a node that sends an event as soon as the sun goes down at my particular location. Very cool. All the other nodes (integration for Philips Hue and LaMetric) can also easily be added with the “Manage Palette” option in the GUI. All in all, the implementation of my example use-case was pretty straightforward and required no programming know-how.

Expandability

Even though there are almost 3000 community-contributed nodes available to use, you might have some hardware or API that does not (yet) have some pre-made nodes. In that case, you can implement your own nodes pretty easily. The only thing required is a text editor and some node.js know-how.

The Node-RED documentation provides a good guide on how to create custom nodes: https://nodered.org/docs/creating-nodes/first-node

It is highly recommended to push your custom nodes to the npm repository to be used by the community.

Additional Resources

There are a whole lot more features that are not described in this blogpost.

  • Flows are just .json files and can easily be imported or exported or added to a git repository
  • Flows can be converted to subflows and used like nodes in other flows
  • Multiple flows can run in parallel and trigger each other
  • There are special nodes for error handling or low-level TCP communication
  • There are keyboard shortcuts for everything
  • … and much more!

Feel free to have a look yourself:

Thanks for reading!

GraphQL! 😍😍

Reading Time: 3 minutes

As excited as I am to talk about GraphQL, I don’t have many words to say.

Last year, we had a Front end conference in Konstanz, which was so amazing. Thanks to GraphQL Day Bodensee.

The conference promised to focus on adopting GraphQL and to get the most out of it in production. To learn from a lineup of thought leaders and connect with other forward-thinking local developers and technical leaders.

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

https://graphql.org/

Just reading this paragraph made me more interested than ever.

How was the conference?

The conference delivered everything it promised: short talks, but very informative and helpful. People came from almost every country in Europe and America, and they were super friendly. So now let’s dive in, so that you all have an idea about this awesome query language.

Just to be clear, GraphQL can be used with most of the frameworks out there.

Steps to use GraphQL?

1. Create a GraphQL service.

First, we define types and fields on those types

type Query {
  me: User
}

type User {
  id: ID
  name: String
}

Then, we create functions for each field on each type

function Query_me(request) {
  return request.auth.user;
}

function User_name(user) {
  return user.getName();
}

When the GraphQL service is running e.g. on a web service, we can send GraphQL queries to validate and execute. First, the received query is checked to ensure it only refers to the types and fields defined, then runs the provided functions to produce a result.

2. Send the Query

{
  me {
    name
  }
}

3. Get the JSON results

{
  "me": {
    "name": "Luke Skywalker"
  }
}

Pros +++

  • Good fit for complex systems and microservices
  • Fetching data with a single API call
  • No over- and under-fetching problems
  • Validation and type checking out-of-the-box

Cons – – –

  • Performance issues with complex queries
  • Overkill for small applications
  • Web caching complexity
  • Takes a while to understand

GraphQL vs Rest

Putting it together

GraphQL is an awesome query language, but as you can see from the pros & cons, it doesn’t make sense to use it in every situation. For small projects it’s overkill, but in bigger ones it can make the communication between backend and frontend much, much easier.

JHipster with Google App Engine and Cloud MySQL

Reading Time: 5 minutes

How does it sound to set up a complete spring application, with front-end and database? With all the models, repositories and controllers? Even with Unit and Integration tests, with mocked data? All within a few hours? Your solution is JHipster!

JHipster

JHipster or “Java Hipster” is a handy application generator, a development platform, to develop and deploy web applications. JHipster has become popular in a short time, and it has been featured in many conferences all around the globe – Montreal, Omaha, Taipei, Richmond, Frankfurt, Paris, London. It supports:

  • Spring Boot (Back-end)
  • Angular/React/Vue (Front-end)
  • Spring microservices

JHipster is used for generating complete applications: it will create a high-quality Spring Boot and Angular/React/Vue application for you, with most things pre-configured, using Java as the back-end technology with an extensive set of Spring technologies: Spring Security, Spring Boot, Spring MVC (providing a framework for web-sockets, REST and MVC), Spring Data, etc. It also generates an Angular/React/Vue front-end and a suite of pre-configured development tools like Yeoman, Maven, Gradle, Grunt, Gulp.js and Bower.

JHipster gives you a head start in creating Spring Boot application with a set of pre-defined screens for user management, monitoring, and logging. The generated Spring Boot application is specifically tailored to make working with Angular/React/Vue a smoother experience. At the top of all that, JHipster also gives you the tools to update, manage and package the resulting application.

By now you may think it sounds too good to be true… but that is not everything JHipster offers. If you are a web developer, you probably have a lot of questions by now. 🙂
One important question we will answer in this blog post: is it supported by today’s cloud solutions, is it compatible with all of them? The answer is yes, it is compatible with the popular cloud solutions from Google, Amazon, Microsoft and Heroku. Let’s see what it takes to make a complete integration with Google’s cloud platform, the App Engine.


Google App Engine

Google App Engine is a cloud solution provided by Google, a platform for developing and hosting web applications in data centres managed by Google; Platform as a Service (PaaS). Applications are sandboxed and run across multiple servers. The App Engine supports Java or Python, uses the Google query language and stores data in Google BigTable.

It is free to use up to a certain amount of resource usage. After exceeding the limits for storage, CPU resources, requests, number of API calls or concurrent requests, the user can pay for more of these resources.

It is fully compatible with the JHipster generated projects. What it takes to host your application is just to follow the official how-to guide from Google App Engine documentation, as normal Spring Boot Application. To make things easier, Google offers a database which works closely with the Google App Engine, the Cloud SQL.

Cloud SQL

Cloud SQL is a database service offered by Google for their cloud solutions, fully-managed that makes it easy to configure, manage, maintain, and operate your relational databases on Google Cloud Platform.

It offers three database options to integrate with:

  • MySQL
  • PostgreSQL
  • SQL Server

Let’s get into details of integrating with Cloud SQL for MySQL:

  1. The first step is to create a Cloud SQL instance on the Google Cloud Platform, which requires a few things like instance ID, password etc. to be set, and it gives you the option to choose the MySQL database version.
  2. The following step is to create the database in the newly created instance. It is possible to have more databases in one instance.
  3. Now, in order for our application to be able to communicate with the Cloud SQL instance without any permission blockers, we need to register the application in Cloud SQL and manually configure the service account roles.
  4. The final step is connecting your application to the created Cloud SQL instance. It is done through JDBC. All the required properties can be found in the overview of the Cloud SQL instance: connection name, credentials, etc.

So the conclusion: don’t be afraid to invest some time in new technologies, be curious, you never know where they may lead you. Thank you for reading. 🙂

Starting with unit testing using Vue.js 2 and Jest

Reading Time: 4 minutes

As a FrontEnd developer, you may know a lot of FrontEnd technologies and frameworks, but over time you need to upgrade yourself as a developer. A good skill to strengthen your knowledge with is unit testing.

Since I have been working with Vue.js for several years, we are going to see some of the basics of testing Vue components using the Jest JavaScript testing framework.

To start, first, we need a Vue.js 2 project created via the Vue CLI. After that we need to add the Jest framework to the project:

# jest unit testing
vue add @vue/unit-jest

I’ll make a simple component that will increase a number on click of a button:

// testComponent.js
export default {
  template: `
    <div>
      <span class="count">{{ count }}</span>
      <button @click="increase">Increase</button>
    </div>
  `,

  data() {
    return {
      count: 0
    }
  },

  methods: {
    increase () {
      this.count++
    }
  }
}

The way of testing is by mounting the components in isolation; after that comes mocking the needed inputs like injections, props and user events. In the end comes the confirmation of the outputs: the rendered result, emitted events etc.

After that, the component is returned inside a wrapper. A wrapper is an object that contains a mounted component or a VNode, and methods to test them.

Let’s create a wrapper using the mount method:

// jestTest.js

// first we import the mount method
import { mount } from '@vue/test-utils'
import TestComponent from './testComponent'

// then we mount (wrap) the component
const wrapper = mount(TestComponent)

// this way you can access the Vue instance
const vm = wrapper.vm

// you can inspect the wrapper by logging it into the console
console.log(wrapper)

The next step after wrapping is to verify whether the rendered HTML output of the component matches the expectations.

import { mount } from '@vue/test-utils'
import TestComponent from './testComponent'

describe('TestComponent', () => {
  // Now mount the component and you have the wrapper
  const wrapper = mount(TestComponent)

  it('renders the correct markup', () => {
    expect(wrapper.html()).toContain('<span class="count">0</span>')
  })

  // it's also easy to check for the existence of elements
  it('has a button', () => {
    expect(wrapper.contains('button')).toBe(true)
  })
})

Then run the tests with npm test and see them pass.

The code in testComponent.js should increment the number on button click, so the next step is to simulate the user interaction. For this, we need the wrapper’s method wrapper.find() to get the wrapper for the button and then simulate the click event by calling the method trigger().

it('simulation of button click should increment the count by 2', () => {
  expect(wrapper.vm.count).toBe(0)
  const button = wrapper.find('button')
  button.trigger('click')
  button.trigger('click')
  expect(wrapper.vm.count).toBe(2)
})

For asynchronous updates, we use the Vue.nextTick() method (it can receive a function as a parameter), which comes from Vue. With this method, we wait for the DOM update, and after that we execute the code (the code in the function parameter).

// this will not be caught

it('time out', done => {
  Vue.nextTick(() => {
    expect(true).toBe(false)
    done()
  })
})


// the three following tests will work as expected 
// (1)

it('catch the error using done method', done => {
  Vue.config.errorHandler = done
  Vue.nextTick(() => {
    expect(true).toBe(false)
    done()
  })
})

// (2)
it('catch the error using a promise', () => {
  return Vue.nextTick()
    .then(function() {
      expect(true).toBe(false)
    })
})

// (3)
it('catch the error using async/await', async () => {
  await Vue.nextTick()
  expect(true).toBe(false)
})

Using nextTick can be tricky for errors, because errors thrown inside it might not be caught by the test runner. That happens as a consequence of using promises internally. To fix this, we can set the done callback as Vue’s global error handler (1), or we can call nextTick without a parameter and return it as a promise (2) or await it (3).

This article is a guide on how to set up the environment and start writing unit tests using Jest. For more information about testing with Vue and using Jest, you can visit the official site for Vue test utils.

JavaScript objects: Why? How? Compared with switch-case and if?

Reading Time: 4 minutes

During development, we use objects all the time and we are often not sure which approach is the best. That was the main reason why I decided to write a blog post about it, so I really hope that in the near future most of your obstacles will be resolved and you will be more confident in choosing the best approach.

In this blog, I will explain some of the object features and I will make some comparisons, so let’s start!

Switch vs object literals

We all know what a switch-case statement is, and we have all used one at least once in our life, no matter which programming language. But since we are talking about JavaScript, have you ever asked yourself if it is clever to use it?

Well, of course, the answer is NO. Now you are asking yourself, then it must be if-else statements, but still, the answer is NO. The best approach is to use objects.

Let’s see why…

Problems:

  • switch-case
    • Hard to maintain which leads you to difficult debugging and testing
    • You are manually forced to use break within each case
    • Nested errors
    • Restrictions, like not allowing to use the same constant in two different cases…
    • In JavaScript, everything is based on curly braces, but not switch
    • Evaluates every case until it finds the right one
  • If-else statements
    • Hard to maintain which leads you to difficult debugging and testing
    • Hard to understand when there is complex logic
    • Hard to test
    • Evaluates every statement until it finds the right one if you don’t end it explicitly

According to these problems, the best solution is objects. The reason for that is the advantages that objects are offering us, like:

  • You are not forced to do anything
  • You can use functions inside the objects which means you are much more flexible
  • You can use closure benefits
  • You are using the standard JavaScript objects, which makes the code friendlier
  • Gives you better readability and maintainability
  • Since the objects approach works like a hash table, the lookup performance is better than the average cost of a switch-case
  • All these advantages lead us to the conclusion that objects are more natural and are part of many design patterns in JavaScript where switch-case is an old way of coding

Let’s see an example with objects how they look like:

<!DOCTYPE html>
<html> 
<body>
<script>
      function getById (id) {
            var ids = {
                  'id1': function () {
                        return 'Id 1';
                  },
                  'id2': function () {
                        return 'Id 2';
                  },
                  'default': function () {
                        return 'Default';
                  }
            }
            return (ids[id] || ids['default'])();
      };

      var ref = getById('id1')
      console.log(ref)          // Id 1

      var ref1 = getById()
      console.log(ref1)         // Default

      var ref2 = getById('noExistingId')
      console.log(ref2)         // Default
</script>
</body>
</html>

hasOwnProperty(key) vs in

hasOwnProperty() – a method which returns true only if the object contains that property as its own property.

in – an operator which returns true if the object contains that property as its own property or somewhere in its prototype chain.

function TestObj(){
      this.name = 'TestName';
}
TestObj.prototype.gender = 'male';

var obj = new TestObj();

console.log(obj.hasOwnProperty('name'));       // true
console.log('name' in obj);                    // true
console.log(obj.hasOwnProperty('gender'));     // false 
console.log('gender' in obj);                  // true

Object properties

There are two ways: dot notation and bracket notation. Developers often ask themselves which approach they should use, or whether there is any difference at all. In the end, they use one of them, and in most cases both solutions work fine, without knowing why and without paying attention to the differences. Let’s make an overview.

Dot notation:

  • Property identifiers can only be alphanumeric, which means they can only include the two special characters “_” and “$”
  • Property identifiers can’t start with a number and can’t be variables

Bracket notation:

  • Property identifiers must be a String or a variable which references a String
  • Property identifiers can contain any character in their name

var obj = {
      '$test': 'DolarValue',
      '%test': 'PercentageValue'
      …
}

console.log(obj["$test"])                      // 'DolarValue'
console.log(obj.$test)                          // 'DolarValue'

These both will give the same result because both support `$` in their name, but what happens if we use another special character like `%`?

console.log(obj["%test"])    // 'PercentageValue'
console.log(obj.%test)       // It will throw an error:              
                            Uncaught SyntaxError: Unexpected token ‘%’

In the next blog, I will cover loops optimization using the object properties and testing code speed.

Difference between Normal and Arrow Functions

Reading Time: 3 minutes

Arrow functions have been adopted since ECMAScript 2015. They are very powerful and simple. Many ES5-based projects adopted this feature to refactor the code. However, the arrow functions aren’t the same as the normal functions you’ve got to know. In this blog, I will explain what is the difference between normal and arrow functions.

This Keyword

A normal function’s this binding is determined by who calls the function:

var a = 10; // var, so a becomes a property of the global object (window)
let obj = { 
  hello,
  a: 20
};

function hello() {
  console.log(this.a);
}

hello(); // outputs 10
obj.hello(); //outputs 20

The method hello gives a different result depending on how it is called. This is because a normal function’s this is bound to the object that calls the function.

In contrast to a normal function, an arrow function’s this is always bound to the this of the outer scope that surrounds the function:

var a = 10;
let hello = () => { 
  console.log(this.a);
};

let obj = {
  hello,
  a: 20
}

hello(); // outputs 10
obj.hello(); //outputs 10

In this example, hello is an arrow function. It’s a property of obj. Even though obj calls hello, it still prints 10 because an arrow function’s this always refers to the outer environment’s this. The global this is the window, so it points to window.a.

Arguments

A normal function has a special property called arguments:

function hello () {
  console.log(arguments.length)
};

hello(1, 2, 3); // outputs 3

hello(10); // outputs 1

An arrow function doesn’t have its own arguments property:

let hello = () => {
  console.log(arguments.length)
};

hello(1, 2, 3); // throws ReferenceError: arguments is not defined

Binds

Function.prototype.bind is a method you can use to change the this of a function:

var car = 'Volvo'

function whatCar() {
  console.log(this.car)
};

whatCar(); // outputs 'Volvo'
whatCar.bind({car: 'Nissan'})() // outputs 'Nissan'

whatCar prints the value depending on the assigned this.

But an arrow function doesn’t work with Function.prototype.bind because it doesn’t have a local this binding. It just looks at the outer environment’s this:

var car = 'Volvo'

let whatCar = () => {
  console.log(this.car)
};

whatCar(); // outputs 'Volvo'
whatCar.bind({car: 'Nissan'})() // outputs 'Volvo'

Constructor

A normal function is both constructible and callable, while an arrow function is only callable and not constructible; arrow functions can never be invoked with the new keyword:

function hello () {};
let hi = () => {};
new hello(); // works
new hi(); // throws TypeError: hi is not a constructor

Arrow functions are handy and stylish, but there are differences between arrow functions and normal functions. An arrow function won’t always be the best option to use; which function to choose depends on each situation.

Experiences of FrontendConnect 2019 conference Warsaw, Poland

Reading Time: 4 minutes

INTRODUCTION

Everybody has an open lifetime book full of blank pages, waiting to be filled. We write the story as we go, so back in November 2019, I have started the chapter ‘Frontend conferences’ by attending the FrontendConnect2019 in Warsaw, Poland, thanks to my company N47.

My motivation for choosing this conference was the fact that I would gain new knowledge and exchange practical ways of using frontend frameworks. Besides that, given the fact that there were great speakers from the IT world, I had no doubts choosing this tech event. The duration of the event was three days: one workshop day and two speaking conference days.

WHICH WORKSHOP DID I ATTEND?

As I was experienced with Vue.js, I wanted to upgrade the knowledge with Nuxt as their workshop description was “It may take it to the next level, thanks to its convention over configuration approach.” I got a certificate of attendance and completion of “My first Nuxt.js application” by the Vue.js Core Team member Darek ‘Gusto’ Wędrychowski. Coding under the eye of ‘Gusto’ and having a wonderful panorama view of Warsaw in my horizon, was definitely a day well spent.

WHICH PRESENTATIONS DID I ATTEND?

A rich agenda with scheduled talks, thoughts about which ones to choose and other similar questions were going through my mind. I attended the ones that caught my eye and were mostly within my interests.

At the beginning of each day, there was a highly valued speaker opening the day with their talk. On the first day I got to meet and listen to the very appreciated Douglas Crockford with his JSON Saga.

The second day, there was Minko Gechev, a Google engineer working on the Angular framework with the talk ‘The Future of Front-End Frameworks’.

Some other topics that I attended to were about the state management in a world of hooks, some optimizations of the modern JavaScript applications and loading them instantly, as well as Angular and Vue.js 3.0 topics.

WHAT CAUGHT MY MIND?

Two of my favourite talks were ‘The JSON Saga’ – Douglas Crockford and ‘Vue 3.0 for Library Authors’ – Damian Dulisz.

The JSON Saga

Douglas was retelling the story about how he discovered JSON (JavaScript Object Notation). He explained how he did not invent, but found it in the early 2000s, named it and described its usefulness. JSON is a format for storing data and establishing communication between the servers. He explained how some companies complained and did not want to accept JSON because they were used to XML, and could not consider anything else, at that moment. He mentioned that some of the people denied its usage because of it not being a standard. So, what he did next was buying JSON.org, a website which after a few years spread among the users. After a while, JSON got the support of all languages. He announced that there will be no more changes to JSON because for him there is no feature more important than the stability of JSON.

Vue 3.0 for Library Authors

Getting more in details about this topic and Vue 3.0-alpha version will be covered in my next blog.

THE CULTURE AND ENVIRONMENT IN THE CONFERENCE

Frontend Connect was happening in the theatre of the Palace of Culture and Science in Warsaw, Poland, where history and the modern world meet at the same time. It is one of the symbolic icons of Warsaw and the place of the city’s rebirth. There were people from all over the world, and the atmosphere was really friendly. Everybody was discussing the topics and sharing their work ethics.

CONCLUSION

Visiting conferences is a really good way to meet new friendly people that you have a lot in common with, as well as having an opportunity to reach out to the speaker if you enjoyed the talk, and discuss what you found interesting. We should always strive for more experiences like this and face new challenges within modern technologies. With that being said, we need to nurture our idea to reach our full potential, in order to make a bigger impact in the IT world.

The way to the professional VueJS-Project ( Part 1 )

Reading Time: 4 minutes

Okay, it’s usually easy to start a VueJS project. There are many tutorials or Vue-Cli templates and with the Vue-Cli 3.x, it’s super easy to create your own. Here are some links:

BUT what if the requirements increase and become more demanding? Or if component/functional testing and typification are required? Or “newer” technologies such as GraphQL, serverless, state machines/diagrams and module dependency management come into play?

How to start?

We’ll start with the easiest way and use the VueCli API.

# via console
vue create professional-world
# or via the vue cli gui
vue ui

We will be prompted to pick a preset. First, select manually features.

After we select these features, we will choose the following setup.

Some points about our setup:

  • class-style
    I chose this mode to show you the TypeScript decorator for class-style Vue components. But of course, you can take the normal style; it is not so much new stuff for the first time.
  • history mode
    We do not choose this router mode, to keep the environment setup simpler. If you don’t like this – feel free to change it. Read more about it here.
  • css pre-processor
    You can choose whatever you want. I prefer stylus regarding the less code. 😏
  • lintner
    It makes sense to activate TSLint here and also the auto-fix on commit. But we will add husky instead of a git-hook.
  • cypress vs nightwatch
    We choose cypress because it has some nice other testing features e.g. debuggability, automatic waiting, network traffic control, spies, stubs, clocks and screenshots and videos testing. But we will pay for it with the limited browser compatibility at the moment – later we will close this gap with regression tests.
  • config placing
    I prefer to use dedicated config files. They are easier to change and the package.json also stays more readable if you add more dependencies.

Now we will add some more dependencies before we can start:

yarn add -D husky vue-cli-plugin-pug eslint-plugin-pug jest-image-snapshot
  • husky
    It makes git hooks easy
  • pug
    It’s a robust, elegant, feature-rich template engine for Node.js
  • jest-image-snapshot
    It’s a jest matcher for image comparisons. Most commonly used for visual regression testing.

Last configurations

Husky needs the following file, .huskyrc.js, to be added (if you want, you can delete the git-hook links in the package.json 😎):

module.exports = {
  "hooks": {
    "pre-commit": "lint-staged"
  }
}

For pug, we add a Vue plugin and also an ESLint plugin. ESLint itself needs the following configuration in tslint.json:

"plugins": [ "pug" ],

Start coding

After these configurations we can start coding ☺️ We start by refactoring the example files from the Vue-Cli template to pug syntax. You can use a formatter for this, e.g. html-to-pug.com.
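As an illustration, the template of the generated App.vue might roughly become something like this (a sketch, not the complete file):

<!-- App.vue – template refactored to pug -->
<template lang="pug">
  #app
    img(alt="Vue logo" src="./assets/logo.png")
    HelloWorld(msg="Welcome to Your Vue.js + TypeScript App")
</template>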

Extra tip

Create a new file named .editorconfig and add the following content. It helps you keep the coding style consistent – you do not need to worry about the format.

root = true

[*]
charset = utf-8
indent_style = space
indent_size = 2
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

After this you should have this status from your project:
https://gitlab.com/47northlabs/public/a-professional-vue-world/tree/part-1

Following parts

  • Coding with typescript, stylus and pug ( Part 2 )
  • First steps with unit component, functional and e2e tests ( Part 3 )
  • Vue and VueX meets state machines ( Part 4 )
  • Apollo/GraphQL with serverless services ( Part 5 )
  • Module dependency management in VUE ( Part 6 )

What to expect from ECMAScript in 2019

Reading Time: 5 minutes

JavaScript as a language has been maturing at a steady pace in the past decade with the evolution of ECMAScript. The current version was released in June 2018, ES2018 and the upcoming version brings several new additions to the language which we’ll cover in this post.

Before that, it can be useful to know how new features are being added to the language through the additions to the ECMAScript standard.

ECMAScript proposals

The Technical Committee 39, TC39, is responsible for reviewing and adopting changes to the existing version of the ECMAScript standard. The process takes the form of 5 stages of maturity after which a new feature is included in the standard.

  • Stage 0: Strawman
  • Stage 1: Proposal
  • Stage 2: Draft
  • Stage 3: Candidate
  • Stage 4: Finished

There is no guarantee that a feature in Stage 3 will be included in the standard, but generally, JavaScript engine implementations like V8 (Chrome, Node.js, Opera), SpiderMonkey (Firefox) and Chakra (Microsoft Edge) tend to add some features with experimental support so that developers can test and provide feedback.

ES2019 finished features

At the time of writing this post, there have been several finished features, you can check the full list in the TC39 proposal repository.

  • Array.prototype.{flat, flatMap}
  • Object.fromEntries()
  • String.prototype.{trimStart, trimEnd}
  • Symbol.prototype.description
  • Subsume JSON
  • Optional catch binding
  • Well-formed JSON.stringify()
  • Function.prototype.toString()

Array.prototype.{flat, flatMap}

The new array instance method flat() flattens an array up to a given depth. It takes an optional depth argument, which can be set to Infinity to run recursively until the array is one-dimensional.

const nestedArray = ["a", ["b", "c"], ["d", ["e", ["f"]]]];

// using flat() on a multidimensional array
nestedArray.flat();
["a", "b", "c", "d", ["e", ["f"]]]

// using flat() with an argument
nestedArray.flat(2);
["a", "b", "c", "d", "e", ["f"]]

// using flat() with Infinity as argument
nestedArray.flat(Infinity);
["a", "b", "c", "d", "e", "f"]

flatMap() is similar to the existing map() method, but the callback can return an array and the end result will be a single dimensional array instead of nesting arrays.

const scatteredArray = ["a b c", "d", "e f", "g"];

// map() results in nested arrays
scatteredArray.map( chunk => chunk.split(" "));
[["a", "b", "c" ], [ "d" ], [ "e", "f" ], [ "g" ]]

// flatMap() concatenates the returned arrays together
scatteredArray.flatMap( chunk => chunk.split(" "));
["a", "b", "c", "d", "e", "f", "g"]

Object.fromEntries()

Objects have had an entries() method since ES2017; it returns an array of key-value pairs of the object’s properties. The fromEntries() method does the reverse and returns an object built from an array of key-value pairs.

If we use both methods on an object, we get a shallow copy: the new object is a different reference, but nested values are still shared by reference.

const person = { name: "John", age: 30, country: "UK" }

// using Object.entries and assigning it to a new object
const entries = Object.entries(person)
[["name", "John" ], [ "age", 30 ], [ "country, "UK" ]]

// using the new Object.fromEntries
const newPerson = Object.fromEntries(entries)
{ name: "John", age: 30, country: "UK" }

// the new object is a shallow copy
person !== newPerson
true

String.prototype.{trimStart, trimEnd}

trimStart() returns a string with removed whitespaces from the start of the original string, while trimEnd() removes whitespaces from the end.

const str = "   blank spaces   ";

// trimStart() usage
str.trimStart();
"blank spaces   "

// trimEnd() usage
str.trimEnd();
"   blank spaces"

Symbol.prototype.description

When creating a symbol, you can now retrieve its description property instead of using toString() on the object.

const symbol = Symbol("This is the symbol description")

// retrieving the description property
symbol.description;
"This is the symbol description"

Subsume JSON

This proposal solves an inconsistency between JSON and ECMAScript, which by specification should be a superset of JSON.

JSON strings currently accept unescaped U+2028 and U+2029 characters, while ECMAScript strings don’t. With this extension of the ECMAScript JSON, this gap will be patched.

Optional catch binding

In previous versions of ECMAScript, we had to implement a catch binding even if we never had to use it.

try {
  // ...
} catch (error) {
  // handle error
}

With this addition, we can now simply omit it.

try {
  // ...
} catch {
  // handle error
}

Well-formed JSON.stringify()

This fixes the JSON.stringify() output when it processes surrogate UTF-8 code points (U+D800 to U+DFFF). Before this change calling JSON.stringify() would return a malformed Unicode character (a “�”).

Now those surrogate code points can be safely represented as strings using JSON.stringify() and transformed back into their original representation using JSON.parse().
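
A minimal illustration with a lone surrogate:

const lone = "\uD800";

// previous behavior: the unpaired surrogate leaked into the output as-is
// proposal behavior: the code point is escaped instead
JSON.stringify(lone);
'"\\ud800"'

// and it round-trips back to the original value
JSON.parse(JSON.stringify(lone)) === lone
true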

Function.prototype.toString()

ES2019 introduces a change to the toString instance method of functions. Now the returned value matches the original source exactly, without stripping whitespace and comments.

function /* this is foo */ foo () {}

// previous behavior of toString()
foo.toString()
"function foo () {}"

// proposal behavior
foo.toString()
"function /* this is foo */ foo () {}"

Upcoming candidate features

The following list contains all current candidate features which have reached Stage 3 and will most likely be included in the specification in the future.

FrontCon 2019 in Riga, Latvia

Reading Time: 3 minutes

Only a few weeks left until I go to my first tech conference this year. For me, travelling means learning something new. And I like learning. Especially the immersion in a foreign culture and the contact with people from other countries makes me happy.

It’s always time to grow beyond yourself. 🤓

BUT why visit Riga just for a conference? Riga is a beautiful city on the Baltic Sea and the capital of Latvia. Latvia is a small country whose neighbours are Russia, Lithuania, Estonia and the sea. AND it’s a childhood dream of mine to get to know this city. 😍

The dream was sparked by an old computer game named “The Patrician”. It’s a historical trading simulation, and my brothers and I loved it. We lost a lot of hours playing it instead of finding a way to hack it. 😅
For this dream, I will take some extra private days to visit Riga and the country as well. 😇

Preparation

The most important preparations, such as flight, hotel, workshop and conference tickets, are completed.

Furthermore, I also plan to visit some of the famous Latvian palaces and the Medieval Castle of Riga. I also need some tips from you for the evenings: restaurants and sightseeing. Feel free to share them in the comments. 😊

Some facts about the conference

There are four workshops available on the first day:

  • Serverless apps development for frontend developers
  • Vue state management with vuex
  • From Zero to App: A React Workshop
  • Advanced React Workshop

I chose the workshop with VueJS of course 😏 and I’m really happy to see that I can visit most of the talks on the following days. There are some interesting talks like “Building resilient frontend architecture” and “AAA 3D graphics”, as well as talks on security and serverless frontend development. Click here for the full list of tracks.

My expectations

Above all, I’m open to events to learn new things. Therefore, I have no great expectations in advance. So I’m looking forward to the

  • VueJS & Reacts parts
  • Visit the speakers from Wix, N26 and SumUp

I’m particularly curious about the open spaces between the speeches. I will be glad to have some great talks with the guys. 🤩


That’s all for now

to be continued…

Webpack: The Good, The Bad and The Ugly

Reading Time: 5 minutes

Introduction

Webpack is a static module bundler, a complex yet simple tool that allows you to spend anywhere between 10 minutes and 10 hours making a simple web application bundlable.

The Good

As a static module bundler, Webpack delivers a bundle of code that’s easily parseable for your browser or Node environment. It lets users apply the UMD/AMD style of bundling not just to JavaScript but also to images, HTML, CSS and much more. This allows the developer to split a web app into one or more modules and serve it all, or serve only parts of it.
One of the major selling points of Webpack is the ability to modify it to your liking. There is a rich ecosystem of plugins that enhance the already plentiful options of the bundler itself. This allows front-end libraries and frameworks, the likes of Angular, ReactJS and VueJS, to bundle their code using custom solutions and plugins.
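
As a rough sketch of what that looks like in practice, the configuration below wires up CSS and image handling plus an HTML plugin. It assumes css-loader, style-loader, file-loader and html-webpack-plugin are installed; the entry and template paths are illustrative.

// webpack.config.js -- a minimal, illustrative configuration
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  },
  module: {
    rules: [
      // let Webpack resolve CSS imports and inject them into the page
      { test: /\.css$/, use: ['style-loader', 'css-loader'] },
      // copy imported images into the output directory
      { test: /\.(png|jpe?g|gif|svg)$/, use: ['file-loader'] }
    ]
  },
  plugins: [
    // generate an index.html that references the emitted bundle
    new HtmlWebpackPlugin({ template: './src/index.html' })
  ]
};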

This is made easy by the years of development through which we have reached Webpack 4, which allows for separate configuration files for development and production environments.

The example in the image (a similar split is sketched below) shows the differences that allow for a good development overview and testing, yet at the same time make it possible to use a single script to produce a production-ready build. All of this together makes Webpack a viable bundler for most JS projects, especially when it’s the base of ecosystems like Angular or React.
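
A minimal sketch of such a split, assuming the webpack-merge package (v4-style export) and the conventional file names; the individual values are placeholders rather than recommendations:

// webpack.common.js -- settings shared by every build
module.exports = {
  entry: './src/index.js'
};

// webpack.dev.js -- development build: dev server and fast rebuilds
const merge = require('webpack-merge');
const common = require('./webpack.common.js');

module.exports = merge(common, {
  mode: 'development',
  devtool: 'cheap-module-eval-source-map',
  devServer: { contentBase: './dist' }
});

// webpack.prod.js -- production build: minified output, full source maps
// (needs the same merge/common requires as the dev config)
module.exports = merge(common, {
  mode: 'production',
  devtool: 'source-map'
});

Each npm script then just points at the right file, for example webpack --config webpack.prod.js for the production build and webpack-dev-server --config webpack.dev.js during development.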

The Bad

Whilst Webpack is a good bundler for JS, it’s not the only one available. Issues with Webpack come from the fact that most of the libraries you use need to have been developed with Webpack bundling in mind. Its native module support is limited, requiring you to specify those resources and how you want them represented in the final build. And much of the time, a random version update of a module can break the entire project, even when it’s only a tertiary dependency.

The learning curve of Webpack keeps getting steeper; how steep depends on how complex a project you are working on and whether you are using a preconfigured project or building a configuration on your own.

Just for this article I have gone over not only the Webpack docs but around 20 articles and a ton of GitHub issues, on top of around a year and a half of personal experience bundling projects with Webpack using Three.js, A-Frame, React, Angular and a myriad of other niche applications. And in the end it still feels like I’ve barely scratched the surface.

The entire debugging process is ugly: it depends upon source maps, which vary from library to library. You can use the built-in Webpack option or a plugin for your specific tech; either way it will never be fun to load up a 160k-line code bundle that blocks your PC, even with source maps.
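
One thing that helps a little is telling Webpack to pick up the source maps that third-party libraries already ship with. The sketch below uses the standard source-map-loader for that (it has to be installed separately), combined with a full devtool setting for your own code:

module.exports = {
  module: {
    rules: [
      // run before other loaders and extract existing source maps from dependencies
      { test: /\.js$/, use: ['source-map-loader'], enforce: 'pre' }
    ]
  },
  // emit separate .map files for your own code as well
  devtool: 'source-map'
};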

The Ugly

All in all, when you give Webpack a chance, your encounter will rarely be a pleasant one. There is no standardized way of using and implementing the core, and the plugins don’t help. Each time you find something that works, something new will magically break. It’s like trying to fix a sinking ship.

This image represents my average day using Webpack, if my project were the dog and Webpack the fire starter. I’m currently using it in combination with VueJS, and it’s the same story: either use the Vue CLI and a preloaded config, or regret not doing that later when you need to optimize your specific code integration that has to run as a bundled part of a larger application.

The worst part of all of this is probably the widespread usage of black-box software like Webpack, which in theory is open source but in practice is a bundle of libraries and custom code that takes as much time as a doctoral thesis to study properly. And for all of this, it is still one of the better options out there.

Conclusion

Webpack as a bundler is excellent for use in a multitude of applications, especially if someone else handles the config for you (the Angular, React and Vue CLIs). It will hopefully become better, but like anything else in JS, its roots and backwards compatibility will always hold it back. Be ready for a steep learning curve and a lot of frustration. If you like to explore new things, reimplement existing solutions or optimize your workflow, give it a try.