How to Improve the Loading of a JavaScript Application

While building apps, developers may stumble upon content-loading issues at some point. They may also inherit a codebase that was never profiled for performance.

Fortunately, there are many ways to make loading faster, and they are not necessarily time-consuming.

We won’t cover obvious tricks here, such as removing content that is downloaded but never used.

How it all began

Our team at Brainhub was tasked with something that seemed like a trivial errand – we were to create a few moderately large React.js components to be inserted into a working site built with a traditional PHP + jQuery approach.

When we were about to finish our first major iteration, we and our client noticed that the site took at least 12 s to become interactive. React.js is very fast, so that shouldn’t have been the problem, but we checked it anyway. The whole JS bundle downloaded and parsed swiftly, so, as we suspected, the problem must have lain elsewhere.

As we started getting deep into the existing code, we discovered many quirks, hacks, and optimizations along the way.

In the end, our app took less than 3 s from the moment the document loaded. This article sums up what we learned.

Modern build tools such as Webpack


Webpack provides many built-in optimizations, such as dead code elimination (tree shaking): only the code you actually import ends up in the bundle, and the rest is dropped. The smaller the code, the better.
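Tree shaking relies on the static structure of ES module imports, so the bundler can tell at build time which exports are used. A minimal sketch (file names are illustrative; the elimination happens in production mode):

```javascript
// math.js
export function add(a, b) { return a + b; }
export function multiply(a, b) { return a * b; } // never imported → dropped from the bundle

// index.js
import { add } from './math.js';
console.log(add(2, 3));
```

Note that CommonJS `require()` calls cannot be analyzed statically the same way, which is one reason to ship ES modules to your bundler.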

Server-side rendering and hydration


React.js, Angular, Vue, etc. are client-side frameworks, which means the content is generated in the browser instead of arriving as ready-made HTML from the server. This isn’t a bad thing, as modern browser engines are very performant, but it creates problems in certain areas:

  • Locked-down machines, such as some government computers, which may have JavaScript disabled.
  • Users who rely on screen readers, who are much better off with static content.
  • Search engines: although we know that Google parses JavaScript, we have practically no idea how other search engines handle it.
  • Lightweight devices with short battery lives.
  • People who want to parse your HTML content: with static HTML it takes a few lines of Python, but when JavaScript is essential to rendering it requires a multi-megabyte JavaScript engine just to produce the content to be parsed.
  • People whose Internet connection may get slow at some point (a train entering a tunnel, for example).

This is where server-side rendering comes into action. It’s a beautiful compromise between JavaScript-rendered sites and static web pages. SSR lets you pre-render your JavaScript content on the server and send it to users so they see it immediately; once the JavaScript loads, it reuses the already generated markup instead of rendering everything again (a process called “hydration”). As a result, your users see the content faster.

SSR sometimes produces unexpected results if your frontend code relies on DOM manipulation, so be prepared to mock some browser APIs on the server.

If your backend uses Node.js, you’re all set. If it doesn’t, it is still possible, although it may sometimes be tricky.

Lazy loading images


If the page loads a lot of images, the download of your script may be delayed. Consider lazy loading images so that your script can start downloading sooner. Alternatively, you may first insert low-quality versions of the same images, in a similar way to Medium.
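A common way to implement this is the IntersectionObserver API: keep the real URL in a `data-src` attribute and swap it in only when the image approaches the viewport. A sketch (markup convention and margin value are assumptions):

```javascript
// Sketch: swap the real URL into `src` once the image nears the viewport.
// Assumes markup like <img data-src="photo.jpg" src="tiny-placeholder.gif">.
function hydrateImage(img) {
  if (img.dataset && img.dataset.src) {
    img.src = img.dataset.src;
    delete img.dataset.src;
  }
  return img;
}

// Browser-only wiring; guarded so the helper above stays usable elsewhere.
if (typeof IntersectionObserver !== 'undefined') {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        hydrateImage(entry.target);
        obs.unobserve(entry.target); // each image only needs loading once
      }
    }
  }, { rootMargin: '200px' }); // start a bit before the image scrolls into view
  document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
}
```

Modern browsers also support the native `loading="lazy"` attribute on images, which covers the simple cases with no JavaScript at all.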

Move your “script” tag higher in the document


Maybe the “script” tag with your bundle sits too low in the document, so the browser discovers it late? Moving it higher (ideally into the “head” with the “defer” attribute, so the download starts early without blocking HTML parsing) may sometimes be crucial.
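A minimal sketch of the early-discovery pattern (the bundle path is illustrative):

```html
<head>
  <!-- Discovered immediately, downloaded in parallel, executed after parsing. -->
  <script defer src="/bundle.js"></script>
</head>
```

With `defer`, scripts also keep their relative order, unlike `async`, which executes whenever the download finishes.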

Shipping pure ES6+ code to production


Modern browsers support virtually the entire ES6 specification (with modules being easy to cover anyway). We may create two build configurations, one producing an ES5 bundle and one producing an ES6 bundle. The ES6 bundle can be shipped to browsers that support it and the ES5 bundle to all the rest; your backend can pick one based on the user agent.

Our ES6+ code is typically run through Babel so that older browsers can still understand it, but this does not come without a cost: transpiling to ES5 adds helpers and polyfills, so a simple “async await” function can take four times as much code after transpilation.

Yet oftentimes node_modules make up the vast majority of your bundle (and you should, and will probably be forced to, exclude node_modules from transpilation anyway), so in the end it may not be worth the effort; it is definitely something to investigate in your case, though.

Code splitting


Code splitting is one of Webpack’s most powerful features. It allows you to define more than one entry point for your application, and there are many cases where that is useful. For example, a few pages available only to logged-out users may need just a small portion of your JavaScript; there is no need for those visitors to download code that would never run.
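Besides multiple entry points, the dynamic `import()` syntax lets you split at the function level: Webpack turns each `import()` target into a separate chunk fetched on demand. A minimal sketch, where `node:path` stands in for a heavy feature module you only need occasionally:

```javascript
// Minimal sketch of on-demand loading with dynamic import().
// In the browser, webpack emits the import() target as a separate chunk.
async function basenameOnDemand(file) {
  const path = await import('node:path'); // fetched (and cached) on first use only
  return path.basename(file);
}
```

The module is only requested the first time the function runs; subsequent calls hit the module cache.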

Intelligent code chunks generated by Webpack


If you rely on code splitting (and often you should) and generate chunks, you may sometimes end up with many tiny chunks of just a few kB each, where the per-request overhead outweighs the benefit. You should specify a minimum chunk size in your Webpack configuration.

If you have many chunks, you have probably ended up with modules repeated across them. To disable that behavior, add the following option to your Webpack configuration:
```javascript
optimization: {
  splitChunks: {
    cacheGroups: {
      vendors: false,
      default: false,
    },
  },
},
```
Furthermore, you may decide what should land in each chunk. All you have to do is add
`/* webpackChunkName: "chunkName" */` inside your dynamic import.
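For instance, a named chunk in a dynamic import might look like this (the module path is hypothetical):

```javascript
// Webpack emits this import() target as a chunk named "settings".
const loadSettings = () => import(/* webpackChunkName: "settings" */ './Settings');
```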

Compress your .js files if you haven’t already


“Gzip” has been supported by all browsers for a long time, and it’s a very easy way to reduce the size of your JS files; saving as much bundle size as possible is a huge win. The whole mechanism boils down to the following conversation:

Browser: Hey, can I GET index.js? I’ll take a compressed version if you’ve got it.

Server: Let me find the file… yep, it’s here. And you’ll take a compressed version? Awesome.

Server: Ok, I’ve found index.js (200 OK), am zipping it and sending it over.

Browser: Great! It’s only 10kB. I’ll unzip it and show the user.

To make sure your server sends gzipped files, check for the “Content-Encoding: gzip” header on the responses your browser downloads.

Optimize your bundles


We all love JavaScript. However, byte for byte, it is the most expensive resource on the web. Oftentimes we rely on third-party libraries for faster development and to avoid reinventing the wheel, yet sometimes we go overboard with their number.

Install Webpack Bundle Analyzer and track down the heaviest parts of your app. To give you an example, Moment.js weighs 65 kB minified and gzipped, and 75% of that weight comes from the huge number of locales it ships. Consider removing some (or most) locales from your build, or use a lightweight alternative such as “date-fns” or “dayjs”.
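If you want to keep Moment.js but drop its locale files, Webpack’s IgnorePlugin can exclude them from the bundle; a config sketch (assumes webpack 5’s plugin options):

```javascript
// webpack.config.js — sketch: keep Moment.js but drop all of its locale files.
const webpack = require('webpack');

module.exports = {
  plugins: [
    new webpack.IgnorePlugin({ resourceRegExp: /^\.\/locale$/, contextRegExp: /moment$/ }),
  ],
};
```

Any locale you still need can then be imported explicitly in your own code.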

Conclusion

Remember: if users see your content fast, they will likely stay on your site. At the end of the day, it is always good to run your app through a tool such as Lighthouse to find other problems it may suffer from.