Quick Summary: Angular is one of the top frontend frameworks. It is robust, scalable, responsive, and highly performant. Yet developers encounter a myriad of performance problems during development. This article provides quick fixes for common Angular bottlenecks and details tips and tricks to improve your Angular app's performance.
A large developer community actively develops and improves the framework, along with a myriad of interactive plugins and extensions that help developers code faster. Friendly features make development with Angular a breeze. However, everyone runs into some problem or bottleneck while developing extremely complex custom Angular applications.
Disregarding app performance and UI sluggishness while continuing to ship new features is common practice in the industry. While the product may function as intended, its performance may not be up to high standards. This results in a poor user experience, which hinders wide adoption of the product.
This article assists you in pinpointing common mistakes made by developers and helps you avoid those pitfalls. Read till the end to understand how to implement Angular Best Practices and create great user experiences that drive your product. Let’s get started.
As more and more features are added, performance commonly begins to take a hit. Although Angular is a highly performant framework, without prior knowledge of optimization and best practices the web application will slow down.
Slow apps can directly affect user experience in many ways. Therefore, it is of utmost priority to be aware of practices that slow down the performance and avoid them.
Before delving deep into specific problems, let's look at some questions that frequently puzzle Angular developers while building custom software.
If your Angular app crashes out of the blue, it can drive down the value and rating of your application. Common reasons for crashes are unhandled exceptions, an inability to handle many simultaneous requests, or simply poor programming.
To resolve app crashes, you could put a rate limiter on the server, scale your cloud infrastructure, or make use of technologies such as GraphQL.
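The server-side rate limiter mentioned above can be as simple as a token bucket per client. Below is a minimal, framework-agnostic sketch; the class name and numbers are illustrative rather than taken from any library:

```typescript
// Minimal token-bucket rate limiter: a client holds up to `capacity` tokens
// that refill at `refillPerSecond`; a request proceeds only if a token is
// available. All names and numbers here are illustrative.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSecond: number,
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request may proceed, false if it should be rejected.
  tryRequest(now: number = Date.now()): boolean {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.refillPerSecond
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

In practice you would keep one bucket per client identifier (IP, API key) and return HTTP 429 when `tryRequest` is false.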
Sometimes even the documentation doesn't have all the answers, since much depends on the project's features; this is where practical knowledge is required. If a user clicks a button and the web app isn't interactive within 3-5 seconds, the user might stop using the service altogether, which can mean a decline in traffic and engagement.
A general piece of advice is to use the OnPush change detection strategy where appropriate. This can lessen load time and optimize your Angular app's performance.
When an app slows down, whether because of unoptimized code, insufficient computing resources, or an improperly configured CDN, the user loses patience and moves on to an alternative application.
To speed up your Angular app, consider using Angular Universal, building a PWA, or caching static content to improve First and Largest Contentful Paint times.
If your Angular app does not work well with migrated technologies, the resulting problems can be tough to eliminate without a complete rewrite, and the user experience will be severely affected. It is therefore of utmost priority to resolve such issues as soon as possible.
The best solution is to get rid of redundant, CPU-intensive recomputation, which frees up resources and provides a significant performance boost to your application.
Resource-consuming, compute-intensive data streams in your Angular app can also slow it down.
Once such a problem is identified, reducing the size of the bootstrap logic can help mitigate the issue and alleviate the performance bottleneck.
Many tasks running in the background can cause sudden usage spikes. Angular's change detection functionality may also be behind the increased usage, and increased usage can slow down the client-side application.
As a remedy, you can detach or limit the change detection that leads to sluggishness in your application and thereby tackle the performance issue.
While Angular itself is powerful and highly performant, building complex apps with many features and intensive computations can quickly bring performance down. Identifying and solving such problems may take weeks, which can significantly impact product growth. The better approach is to avoid coding practices that impose complicated bottlenecks on the Angular app in the first place.
Not every trick mentioned here will be relevant to your project; however, with a deep understanding of the Angular framework, you can write clean code and develop an Angular web app that is highly robust, scalable, and responsive.
Without further ado, let's get started.
The JIT compiler compiles the Angular app at runtime and is shipped as part of the bundle sent to the client. Essentially, the server sends raw templates along with assets and the compiler, and the browser compiles the code before displaying the output.
The AOT compiler compiles the Angular app at build time, so only the compiled templates are generated; the compiler is not bundled with the data the server sends to the client. Because the code is already compiled, the output is rendered immediately.
Comparing AOT with JIT compilation, ahead-of-time compilation can result in around 50% smaller bundles. This amounts to significant bandwidth savings on the user's end. Moreover, the app maintains its snappy, responsive feel even on lower-end devices, since fewer resources are required.
Utilizing AOT compilation saves a significant amount of time because the browser no longer has to compile the application itself. Moreover, 'time to interactive' is significantly reduced, as the received content is pre-compiled and ready to go, just like a simple web page. This also speeds up load time and the rendering process, resulting in an extremely fluid and responsive user interface.
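For reference, recent Angular CLI versions apply AOT (along with minification and tree-shaking) by default in production builds. A typical invocation, shown for illustration:

```shell
# Production build: AOT compilation, minification and tree-shaking
# are enabled by default in recent Angular CLI versions.
ng build --configuration production

# In older CLI versions, AOT could be requested explicitly:
ng build --prod --aot
```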
Static assets such as CSS files can be minified, and images, video, and audio clips can be compressed significantly without a noticeable loss in quality. This reduces the overall amount of data required to run the Angular app, which means shorter download times even for users on slow connections.
Moreover, bundling is a widely used practice for reducing the number of requests a browser must make to render the requested application. A bundler takes a list of entry points and produces one or more bundles with all the required files prepackaged.
The advantage of bundling is that the client receives the whole application and its required assets in just a few HTTP requests, instead of making a separate request for each and every file. This trick can notably improve your application's load time.
Caching is one of the most popular techniques for speeding up an application. The idea is that a recently used resource is likely to be requested again, so it makes sense to save it on the client side. Everything from HTML and CSS files to media assets can be cached using Angular PWA, which uses a service worker behind the scenes, so repeated calls to the server can be avoided.
HTTP calls can also be cached, resulting in a faster user experience without cluttering the application with a ton of caching code. The app can be optimized for freshness, where the network is tried first and the cache serves as a fallback until responses expire, or for performance, where the cache is checked first and an API call is made only when the cached entry is missing or expired.
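The performance-first variant can be sketched framework-free. In a real app this logic would typically live in an Angular HTTP interceptor; the class and names below are purely illustrative:

```typescript
// Cache-first lookup with a time-to-live: return the cached value while it
// is fresh, otherwise fall through to the (simulated) network fetch.
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string, fetchFn: () => T, now: number = Date.now()): T {
    const entry = this.store.get(key);
    if (entry && entry.expiresAt > now) {
      return entry.value; // fresh: served from cache, no network call
    }
    const value = fetchFn(); // stale or missing: hit the network and re-cache
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
    return value;
  }
}
```

The freshness-first variant is the mirror image: try the network first and fall back to the cached entry only when the request fails.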
Using a Content Delivery Network can provide a significant boost. All major cloud providers, such as AWS, Azure, and Google Cloud, support CDNs. The result is consistent, quick performance regardless of user location.
Another widely used technique to optimize performance is lazy loading. By not loading certain routes and assets such as audio, video, and images at initial load, and loading them only when required, lazy loading can save several seconds.
Lazy loading isolates a feature in its own bundle and loads it only when needed. For example, if the entire application requires 1MB of assets but not all of them are needed up front, then by splitting the routes, only the homepage and its required assets, say 300KB, are loaded when the user first visits the site. The initial payload is cut to less than a third of the original size! When the user navigates to other routes, route-specific content is loaded on demand.
This optimization is great as it reduces the ‘Time to interactive’ drastically and helps to craft intuitive user experiences. With Angular it is extremely easy to lazy load resources.
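A lazy-loaded route is declared with a dynamic import in the router configuration. The component and module paths below are hypothetical:

```typescript
import { Routes } from '@angular/router';
import { HomeComponent } from './home/home.component'; // hypothetical path

// Only the homepage bundle is loaded up front; the orders feature is
// fetched the first time the user navigates to /orders.
const routes: Routes = [
  { path: '', component: HomeComponent },
  {
    path: 'orders',
    loadChildren: () =>
      import('./orders/orders.module').then(m => m.OrdersModule),
  },
];
```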
A service worker is essentially a script that the browser runs on a separate, non-blocking thread, in other words in the background. It is primarily used for tasks that don't require user interaction or an open web page.
With service workers, one can build several features that were traditionally limited to native applications, like background synchronization, push notifications, geofencing, and more. Using service workers, developers can create offline functionality, customizable down to the smallest feature, without leaving the browser ecosystem.
Data encryption, image processing, and collision detection in games are the most common applications where web workers, the related browser feature for off-main-thread computation, offload compute-heavy calculations from the main thread.
These calculations run in the background while the UI stays unblocked, which makes the user experience feel snappier and more responsive. Let's look at each case in detail:
Consider Gaussian blur. To blur an image, a large number of calculations must be performed within a few seconds: a Gaussian weight matrix is computed and then applied to every pixel of the RGB image. Performed on the main thread, this operation freezes the application's user interface.
To prevent this, web workers take the calculations off the main thread (and can spread them across several workers), freeing it up. The computation continues in the background while the main user interface remains fully functional and responsive in the foreground.
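The per-pixel math such a worker would execute is ordinary number crunching. As a minimal sketch (the function name and signature are illustrative, not from any image library), here is how the normalized 1D Gaussian kernel used by a blur can be computed:

```typescript
// Build a normalized 1D Gaussian kernel of the given radius. Blurring an
// image applies this kernel to every pixel, first horizontally and then
// vertically: exactly the kind of loop worth moving off the main thread.
function gaussianKernel(radius: number, sigma: number): number[] {
  const kernel: number[] = [];
  let sum = 0;
  for (let x = -radius; x <= radius; x++) {
    const weight = Math.exp(-(x * x) / (2 * sigma * sigma));
    kernel.push(weight);
    sum += weight;
  }
  // Normalize so the weights sum to 1 and overall brightness is preserved.
  return kernel.map(w => w / sum);
}
```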
Many browser-based online games require physics calculations. In real-world physics simulation, collision detection is one of the most used concepts: consider a simulation of many balls bouncing and colliding with each other. Making it look realistic requires a great many calculations.
In a single-threaded scenario, rendering each frame would take so long that the result would look like a series of still images rather than a smooth animation. Threads that can run in the background are therefore essential.
Web workers are well suited to this task. They can distribute the load across several CPU threads and cores as required. Because the heavy work is handed off to background threads, the frontend stays snappy and responsive. Web workers thus help improve app performance.
One of the most prominent examples is password managers, which require a high level of security and encryption. Password managers do not store passwords in plain text; they protect the vault with modern encryption and key-derivation algorithms. Imagine a user with many passwords stored in a vault: each entry must be decrypted, and every key derivation typically requires many iterations.
Web workers are extremely useful in this scenario. They churn through every iteration of the key derivation while the main interface stays unblocked. When passwords are updated, re-hashing them and synchronizing with the server can all happen in the background, even while the user is elsewhere in the application.
As these examples show, web workers are extremely versatile. They allow developers to craft snappy, fluid experiences despite computationally heavy workloads. Offloading heavy computation to background threads provides a significant boost to app performance.
Generally, when a user requests a web page, the server sends bundles, and a blank page is shown while they are being received. With SSR, content is displayed on the screen quickly, though it is not yet interactive.
Another problem with client-side rendered applications is that search engine crawlers often can't read the content because it is generated dynamically, which may result in poor search rankings. SSR solves this too: instead of trying to index dynamic pages rendered on the client, crawlers index the static content provided by the server.
Though modern search engines such as Google can index dynamic pages efficiently, server-side rendering is still a requirement for many other search engines. Apart from SEO, server-side rendering is also used to provide helpful link previews with a short summary in many social media and messaging apps.
Change detection is a process run by Angular that checks for changes in the application state and updates the user interface automatically, often by re-rendering large parts of the page. The mechanism is very helpful because it eliminates a lot of manual UI-update code.
As powerful as it is, change detection may start to hinder performance as the application grows more complex and more functionality is added. It is not uncommon to see the main interface blocked while the application performs change detection.
The solution to this problem is to apply change detection strategies such as OnPush, detaching change detection and running it outside of Angular.
By default, Angular checks the entire component tree, from the root node all the way down to the innermost child. This process, while convenient, leads to performance drops and long load times, which users perceive as sluggishness.
The problem can be solved with the OnPush change detection strategy. OnPush targets specific subtrees for checking without re-checking their parents, and change detection can be skipped entirely for components whose inputs haven't changed, since the same inputs produce the same outputs.
Instead of re-running component logic to recompute outputs, Angular compares the current input data with the previous input data to determine whether a given subtree should be checked at all.
OnPush does the majority of the work of minimizing how often the change detection mechanism is triggered. There are, however, niche scenarios where an additional strategy is needed.
Sometimes change detection needs to be triggered deliberately. If a component uses the OnPush strategy and its input object is mutated in place, nothing will be triggered, because a mutated object keeps the same reference, and the subtree is checked only when the reference changes.
To overcome this, either pass a new object reference manually or simply use immutable updates, which produce a new reference on every change and therefore trigger change detection automatically.
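The reference rule can be demonstrated in plain TypeScript. The interface and helper below are hypothetical stand-ins for a component input and Angular's internal reference check:

```typescript
// OnPush-style dirty check: a component is re-checked only when the
// input reference changes, which in-place mutation does not do.
interface Todo { title: string; done: boolean; }

const inputChanged = (prev: Todo, next: Todo): boolean => prev !== next;

const todo: Todo = { title: 'ship release', done: false };

// Mutation keeps the same reference: OnPush sees "no change".
const mutated = todo;
mutated.done = true;

// Immutable update creates a new reference: OnPush re-checks the view.
const updated: Todo = { ...todo, done: true };
```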
Change detection is a computationally heavy process. While useful in many cases, it can create unnecessary load as it traverses the entire tree looking for changes, and excessive change detection hurts application performance. Even techniques such as OnPush solve only half of the problem: in complex apps with many nested elements, checking and updating every component is expensive.
The key is to use change detection conditionally. You can detach the mechanism when external events or services trigger the process more often than required. That way, checks are carried out only for the components you choose, not for the entire subtree.
The mechanism is triggered by each and every asynchronous event, and frequent unnecessary computations slow down performance even when the code is optimized for inline caching. Performance improves significantly when change detection runs only for the subtrees that are expected to change in response to user actions. Detach change detection when multiple services trigger it unnecessarily and reattach it when required; this is one of the best tricks for improving runtime performance.
Angular’s change detection process takes place after every asynchronous callback function is executed. This is due to Zone.js.
If you don't want a piece of code to run in the Angular zone context, you can use the runOutsideAngular method on the NgZone instance. This allows async callbacks to execute without triggering the change detection mechanism.
Tree-shaking is a practice used to remove unused, dead code. The production version of an application does not need every line written by the developers or shipped in Angular and third-party libraries; the unused portion is essentially dead. Tree-shaking removes that code and thereby reduces the size of the Angular app significantly.
Since Angular version 6, dead services can be removed in addition to dead code: a service is not included in the production build unless it is referenced by other services or components. Best of all, tree-shaking is enabled by default in the Angular CLI, so Angular applications are already optimized out of the box.
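The usual way to opt in is a tree-shakable provider: registering a service with providedIn (available since Angular 6) lets the build drop it when nothing injects it. A sketch, with a hypothetical service name:

```typescript
import { Injectable } from '@angular/core';

// Registered in the root injector, but bundled only if some component or
// service actually injects it; otherwise tree-shaking removes it entirely.
@Injectable({ providedIn: 'root' })
export class ReportingService {
  log(event: string): void {
    console.log(event);
  }
}
```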
Prefetching resources and assets is particularly beneficial for static resources, since they do not change frequently.
With prefetching, instead of waiting on a blank loading screen for the entire site to load, some static content becomes visible right away; users can start interacting because that content was already fetched in the background and cached while they were on another page.
Everything from routes and modules to static content such as images, audio, and video can be preloaded using an Angular PreloadingStrategy.
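Preloading is enabled in the router configuration. With the built-in PreloadAllModules strategy, lazy routes are fetched in the background after the initial load; the route path and module names below are hypothetical:

```typescript
import { NgModule } from '@angular/core';
import { RouterModule, Routes, PreloadAllModules } from '@angular/router';

// Hypothetical lazy route: loaded on demand, but prefetched in the
// background after startup thanks to PreloadAllModules.
const routes: Routes = [
  {
    path: 'reports',
    loadChildren: () =>
      import('./reports/reports.module').then(m => m.ReportsModule),
  },
];

@NgModule({
  imports: [
    RouterModule.forRoot(routes, { preloadingStrategy: PreloadAllModules }),
  ],
  exports: [RouterModule],
})
export class AppRoutingModule {}
```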
Every time a component is re-rendered, the methods bound in its template are called. Even with OnPush change detection, these methods run on every interaction, which takes a toll on performance if they do heavy processing.
To avoid such recomputation, use pure pipes: they are re-evaluated only when the input to the pipe changes. Since the same inputs produce the same outputs, recomputation happens only for new inputs.
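The behavior a pure pipe provides, recomputing only on new input, can be sketched framework-free. A real Angular pure pipe would be a class decorated with @Pipe; the helper below is just an illustration of the caching idea:

```typescript
// Memoize a single-argument transform the way a pure pipe does: the
// function body runs again only when the input value changes.
function memoizeLast<T, R>(transform: (value: T) => R): (value: T) => R {
  let cache: { input: T; result: R } | null = null;
  return (value: T): R => {
    if (cache === null || cache.input !== value) {
      cache = { input: value, result: transform(value) };
    }
    return cache.result;
  };
}
```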
The most expensive phase of an application's lifecycle is rendering content. For example, suppose the admin panel of a chat application's backend shows millions of messages. When a new message is sent, the panel must be updated.
This process is very expensive, because by default Angular replaces the entire list of millions of messages with the updated one. The problem is that most of the data is unchanged and shouldn't be re-rendered; the whole DOM subtree, along with its styles, is recomputed for a couple of minor changes.
The ngFor directive tracks items by strict reference equality. Combined with immutability, object references are broken constantly, causing the DOM elements to be re-rendered on every iteration.
trackBy overcomes this performance roadblock by identifying each element by a unique identifier. Instead of replacing the entire list in the DOM, only the items that have been added, changed, or removed result in DOM manipulation.
Continuing the earlier example: if most of the messages stay the same, it makes sense to update only the part of the UI that changed, and the trackBy option of the ngFor directive achieves exactly that. If a message is sent or deleted, that change is reflected without re-rendering the rest of the messages.
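The trackBy function itself is plain TypeScript that maps each item to a stable identity; in a template it would be referenced as *ngFor="let msg of messages; trackBy: trackByMessageId". The message shape below is hypothetical:

```typescript
// Hypothetical message shape for the chat example.
interface Message { id: number; text: string; }

// Angular calls this for every item; returning the stable id lets ngFor
// reuse existing DOM nodes instead of rebuilding the whole list whenever
// the array reference changes.
function trackByMessageId(index: number, message: Message): number {
  return message.id;
}
```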
Using trackBy is only part of the solution. No matter how optimized the code is, as complexity grows so does the number of DOM elements, which results in sluggish, laggy performance even in modern browsers.
This problem calls for the need for structural re-architecture. In order to decrease the application rendering time one can try
There you have it: the top tips and tricks to optimize the performance of your Angular web application. More important than knowing how to apply these best practices is knowing when to apply them, and how to weigh optimization work against building new features.
A few fully functioning features that improve the experience go further than many half-baked features that degrade the user experience and limit the product's reach. An effective strategy is to build the product first and then identify the performance pitfalls, which can be solved using the best practices above.
Remember, if the optimization doesn’t improve the User Experience, the effort is in vain.