It's fast enough, let's ship it. As a developer with a fast desktop or laptop and the latest iPhone, performance might look like a non-issue. But is it a non-issue for your users?
Now try launching your web app (or site) in Chrome's mobile Lighthouse mode. Go on, we'll wait. Yes, it is a simulation of a mid-tier Android device on a mid-tier network. The reality is often worse... still, the numbers we saw for our app woke us up.
Over 7 seconds to see data on the page (Largest Contentful Paint) is clearly unacceptable.
What can we do?
Ironically, the question should be: what can we not do?
Programming for the web can be mostly divided into two categories: sites and apps.
You can have your cake and eat it too: use React for development, but run nothing on the user's machine! Tools like Astro and GatsbyJS should be your go-to tools for CMS-driven and basic websites. Sites should not rely on JS; in fact, they should work just fine without it. Static generation means that your (React or other framework) web build will be compiled to static HTML and CSS, which can be hosted on a CDN and cached for maximum performance.
If your site is slow, give static generation tools a try.
Applications offer rich, interactive and user-specific functionality, for example personalized sessions where content needs to be protected with authentication. In this blog we will focus on this type of app.
Yes, there are also hybrids, like a website with a shopping-cart app or chat. Often this can be handled by a website with a dynamic "island" where the dynamic web app lives. In the past you could also use a separate domain or page for the dynamic content.
In the web-app category, only very few apps are truly “lived in”, where the user starts the app and then spends hours working with it. This is the domain of content-creation apps like Word, Excel, Figma, Photoshop, etc.
The rest, some 90% of apps, are used in a task-oriented way. The user wants to accomplish a specific task and then do something else. This interaction (session) will take seconds or minutes and rarely needs more than two screens (pages).
If the app only supports one task scenario, it is straightforward to optimize. The problem is that most apps provide many types of tasks the user can work on. The question is thus: can we send only the part of the app the user needs down the wire?
Historically there are two major ways to deploy dynamic web apps.
If we care about maximum performance, i.e. making the app usable as soon as possible, it can seem obvious that we need server rendering. But before we bring out the refactoring big guns, it is best to measure. As the saying goes: measure twice, cut once.
Lighthouse will give us some good overall numbers. However, they don't tell the whole story, and you will need server-side (and database-side) numbers too. Not only do we need timings (how long things take), but also memory utilization.
To state the obvious, with server-side rendering all work is done on the server. Therefore your server needs the right CPU and memory to handle the required number of user sessions at the same time. Here is where you would look into your cloud burst options. In simpler cloud deployments or on-premise deployments, you must choose a server powerful enough to handle the maximum number of sessions you can reasonably expect. The difference between the average and maximum number of simultaneous sessions can easily be a factor of 2 or more. Yes, these are real dollars wasted. Or slow and dropped sessions if you get the server size wrong.
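To make the sizing concrete, here is a back-of-envelope sketch. All the numbers are made-up assumptions for illustration, not measurements:

```typescript
// With SSR you provision for the peak, not the average (illustrative numbers only).
const avgSessions = 500;      // assumed average simultaneous sessions
const peakFactor = 2;         // the peak-to-average ratio mentioned above
const memPerSessionMB = 50;   // assumed server memory per session

const peakSessions = avgSessions * peakFactor;
const requiredMemGB = (peakSessions * memPerSessionMB) / 1024;
// requiredMemGB is the capacity you pay for, even when half of it sits idle
// most of the time.
```

Plug in your own measured numbers; the point is that the peak, not the average, drives the bill.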
The alternative is of course to reduce the burden on the server and shift some of the work to the client (browser).
Some is the key word here. The problem with pure client-side rendering in single-page apps (SPAs) is that all the work is shifted to the client browser, and all at once. The result: a gigantic JavaScript file has to be downloaded from the server, then parsed, then executed, with another network fetch (or several) for the actual data to be displayed. Only then can the actual HTML be generated and painted for the user.
Let's remind ourselves why we are doing this. Because once the app (the gigantic JS file) has been downloaded, very little data is required while working with the app. In an SPA, you would fetch a few database records to show a list, or even just a single record when displaying a form. In most cases this is compressed JSON sent down the wire, a few hundred bytes most of the time.
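To illustrate how small such a payload is (the record below is made up; real sizes vary):

```typescript
// A single "meeting" record, as an SPA would fetch it from an API.
const record = { id: 42, title: "Team sync", start: "2024-05-01T10:00" };
const payload = JSON.stringify(record);
// payload.length is on the order of tens of bytes before compression,
// versus hundreds of KB (or MB) of application JS.
```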
In other words, let's say it takes 10 seconds to download the app, but afterwards working with the app is super fast. Was it worth it? Well yes, but only if you keep working with the app. Imagine you only want to check the time of your meeting in the app; this would take you 2 seconds. Now the initial 10-second delay is much harder to justify, and it doesn't really matter how fast the app is after the initial load.
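The trade-off in the paragraph above can be written down directly. The per-task timings are assumptions for illustration:

```typescript
const initialLoadS = 10;  // one-time SPA download, from the text
const spaTaskS = 0.5;     // assumed time per task once the SPA is loaded
const ssrTaskS = 2;       // the quick "check a meeting" time from the text

// The SPA pays off once: initialLoadS + n * spaTaskS < n * ssrTaskS
const breakEvenTasks = Math.ceil(initialLoadS / (ssrTaskS - spaTaskS));
// breakEvenTasks is how many tasks the user must perform in the SPA
// before the up-front download cost is amortized.
```

With these (assumed) numbers the user has to come back for several tasks before the SPA wins; a one-off visitor only ever sees the 10-second bill.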
So should we abandon SPA?
No. And no, the problem is not with the technology. The problem is that we need to focus on the user. Fast is relative. There is no such thing as a fast app; it is always the user's experience of a particular workflow that either feels fast or not. (OK, if all or most workflows feel fast, we can declare the app fast :) Obviously good UX can help with the fast feeling a little. And it also goes the other way around: if you make users click around a lot to get things done, or even worse, make them think :) it doesn't matter how fast each page loads.
Long story short, step one is to understand which tasks the users care most about. This means talking to users and measuring sessions. For example, for most of our clients, checking the calendar has to be very fast, while interacting with charts is not performance-critical. Again, your clients' needs might be very different: talk and measure.
Our “methodology” is quite simple, hence the quotes.
Design, Cleanup, Reduce and Divide.
Make sure you have mapped the user's journey(s) through your app. Without going into too much detail, let's just go over the basics.
Dead code and features nobody uses accumulate over the lifetime of an app. Look at the third-party packages and remove what is no longer used. Again, you might need to start with measuring to be able to tell what's used. You might still be surprised by how many things just hang on, for example the old fonts or logos nobody removed after a redesign.
Inspect the fonts and images you are using. With images you should use next-generation image formats, but even if you stick to JPEGs and PNGs, there are capable tools that will optimize the images without any quality loss; look at this Reddit thread. The go-to command-line image processing tool is ImageMagick, and there are many sites that offer to compress your images, like compressor.io.
With fonts, you can and should subset them. Google Fonts has many tricks to shrink fonts.
For third-party JavaScript packages: check whether you still need each one, whether a lighter alternative exists, and whether the package can be loaded on demand.
The grand finale of this blog is the rather obvious idea that the last resort for making SPA performance better is simply to divide the functionality. We need to break up the app's code into chunks and load functionality on demand.
So which features go into which chunk? I am sure you can guess what's coming... The only way to do this in a way that benefits the app's users is to observe and measure what users are doing (or trying and failing to do!).
Then divide their tasks into these groups:
1. Tasks most users perform often (the critical path)
2. Tasks performed only occasionally
3. Tasks only a few users ever perform
Obviously groups 2 and 3 should not be part of the initial download. But what if group 1 is still large? The answer for both is divide and conquer.
In case you are wondering (some tools, like Gatsby, tell you), the rule of thumb for 2024 is that "large" means more than 500 KB of uncompressed JS.
These days JS modules are widely used. Instead of putting JS files into the HTML head element, we declare our imports and let the bundler (Vite, webpack, etc.) build the app:
```typescript
import { useState } from 'react';
```
Once the app is built (npm run build) we get a single gigantic JS file.
To get a good idea of what goes into the file and how much your JS code or dependencies contribute to the size of the final package, you can use visualizers such as vite-bundle-visualizer or vite-bundle-analyzer:

```shell
npm run build
npx vite-bundle-visualizer
```
An interesting and actually very well-supported feature is importing on demand within your code, instead of at the top of your JS/TS file. The result of the import operation is the same: you gain access to a package's (or file's) functionality. However, there is a big difference: the on-demand import is asynchronous. You call import as a function and get a Promise back.
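A minimal sketch of the difference. Here Node's built-in "node:path" module stands in for a real dependency; in a browser app the argument would typically be a relative path like "./charts":

```typescript
// Static form (resolved at bundle time, always in the chunk):
//   import { join } from "node:path";

// Dynamic form: import(...) is called like a function and returns a Promise
// of the module's namespace object, so the module is loaded only on demand.
async function joinOnDemand(a: string, b: string): Promise<string> {
  const path = await import("node:path"); // fetched the first time this runs
  return path.join(a, b);
}
```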
Async import is how you actually implement code splitting, that is, turning a single gigantic JS file into multiple files (chunks). How does it work? It is quite straightforward: you start with your app entry point (App.tsx, App.js, etc.) and follow the (static) import chain. All the files you import, and all the files they import, and so forth, become the “main” chunk. Each time you do a dynamic import, a new chunk is started: the imported file is the “root”, and again all its statically imported files are added to that chunk.
How does code splitting help? We put group 1 into the main chunk (or break it down further) and groups 2 and 3 into dynamically imported chunks.
One common problem with this is that we might need to import React components, or whole libraries of React components.
For a single large component, you can use React's lazy feature (React.lazy together with Suspense). If you need to import multiple components or other code, check out the code below.
```typescript
import { useEffect, useState } from "react";

interface ILazyLib {
  result: any;
  promise: Promise<any>;
  error: any;
}

// Module-level cache: each library is loaded at most once per page.
const lazyLib: { [name: string]: ILazyLib } = {};

export function useLazyLib<T>(name: string, loader: () => Promise<T>): T | undefined {
  let z = lazyLib[name];
  if (!z) {
    // First caller starts the load. The promise always resolves;
    // a failure is stored in z.error and rethrown below.
    z = lazyLib[name] = {} as any;
    z.promise = new Promise<void>((res) => {
      loader()
        .then(value => { z.result = value; res(); })
        .catch(e => { z.error = e; res(); });
    });
  }

  // Dummy state, used only to trigger a re-render once the library arrives.
  const [, render] = useState(0);
  const needRender = !z.result && !z.error;

  useEffect(() => {
    let cancel = false;
    const exe = async () => {
      await lazyLib[name].promise;
      if (!cancel)
        render(x => x + 1);
    };
    if (needRender)
      exe();
    return () => { cancel = true; };
  }, [needRender]);

  if (z.error)
    throw z.error;
  return z.result; // undefined until the library has loaded
}
```
Another interesting feature your router library probably supports is async routes. With React Router you can add a lazy async function to your route and then dynamically import whatever component you need. With TanStack Router there is the createLazyRoute method.
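A shape-only sketch of the idea, using plain objects rather than a real router. The lazy property mirrors React Router's route API, and Node's "node:path" stands in for a real route module:

```typescript
// A route whose code lives in its own chunk, fetched on first navigation.
type LazyRoute = { path: string; lazy: () => Promise<unknown> };

const routes: LazyRoute[] = [
  // The chunk behind "lazy" is downloaded only when the route is matched;
  // "node:path" is a stand-in for something like import("./routes/charts").
  { path: "/charts", lazy: () => import("node:path") },
];
```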
But what if you have some JavaScript code that relies on <script> tags? Recall that JS loaded with a plain <script> tag causes the browser to stop parsing the page and download and execute the script instead.
To change the behaviour, there are several options: the async attribute (download in parallel with parsing, execute as soon as the script is ready), the defer attribute (download in parallel, execute only after parsing finishes), or injecting the script tag from code, which behaves like async by default.
For more in-depth information on how the flags work, look at this article.
Here is some code we use for dynamically loading JS scripts.
```typescript
let pdfKitPromise: Promise<void> | undefined = undefined;

const importPdfKit = async () => {
  const doImport = async () => {
    const attachScript = (url: string, onload: any, location: Element) => {
      const scriptTag = document.createElement('script');
      scriptTag.src = url;
      scriptTag.onload = onload;
      // Legacy fallback for old browsers.
      (scriptTag as any).onreadystatechange = onload;
      location.appendChild(scriptTag);
    };
    // Attach the scripts one after the other, waiting for each to load.
    await new Promise((res) => {
      attachScript('/pdfkit/pdfkit.standalone.js', res, document.head);
    });
    await new Promise((res) => {
      attachScript('/pdfkit/blob-stream.js', res, document.head);
    });
  };
  // Cache the promise so the scripts are attached only once,
  // no matter how many callers await importPdfKit().
  if (!pdfKitPromise) {
    pdfKitPromise = doImport();
  }
  await pdfKitPromise;
};
```
Building apps on a powerful PC or Mac with gigabit internet is a joy, but we have to remember this is not how our users will experience our apps. There is also no magic bullet, no technology or framework that is universally fast. Fast is in the eye of the beholder, so we need to measure and talk to our users to understand what it is they perceive as fast or slow.
As we hopefully demonstrated above, there are many ways to optimize performance. While we take tree shaking and JS minification for granted these days, larger SPAs will quickly turn unwieldy (even unusable) without developer action.
Before we go on a rampage of squeezing the last drop of performance from our app, we should also remember the wisdom of Donald Knuth: “Premature Optimization Is the Root of All Evil”.
Happy measuring, talking and hacking!