It seems like every time somebody suggests to the JavaScript world that there’s a different (often better) way to do things than the status quo, the knee-jerk response is to say that the old way is faster, and to start pulling out perf scores. That’s all well and good, but by that standard, we should all be writing everything in C. The perf difference between one technique and another in JavaScript is just one of many considerations you should weigh while you’re working. Other major factors that should not be cast aside include developer productivity, code readability, application flexibility, and extensibility. Think about the trade-off before you make it.
Don’t get me wrong, it’s essential that we keep performance in mind while we’re developing web applications, but your app isn’t going to be noticeably impacted by your choice of switch…case vs method lookup, or your choice of inheritance patterns.
Maybe you’ve read Nicholas Zakas’ “High Performance JavaScript (Build Faster Web Application Interfaces)”, or Steve Souders’ “High Performance Web Sites: Essential Knowledge for Front-End Engineers” and “Even Faster Web Sites: Performance Best Practices for Web Developers”. You know all the perf tricks. You have a good understanding of which constructs are faster than others in JavaScript. That’s a really good start. You’re doing better than most.
The bottom line is this:
Know what to optimize. If you work too hard for a 1% gain, you miss 99% of your potential.
Concentrate on the big time wasters, first.
What Slows Down A Typical Modern Web Application?
For typical applications, your bottlenecks will be I/O or render-bound, or sometimes memory (memory was a bigger issue a couple of years ago, but even mobile devices are packing a lot more RAM these days). Here’s my web application performance hit-list, in order of priority:
Too Many Network Requests
Hitting the network comes with a high cost. If you’re having performance problems, look here first, beginning with your initial page load.
Your application start-up sequence is unavoidably going to leave a strong first impression with your users. If you make them wait, they may not stick around long enough to see all the results of all your hard work, and even after they have used your app, created an account, and made a time investment, every time you make them wait, you run the risk of driving them to your competitors.
The more separate requests you have to make, the worse it gets, because each request introduces latency. Lots of tiny requests basically turn into a latency queue. Who cares how you instantiate your objects if your page never gets to finish loading before the user hits the back button or closes your app before it’s done booting? YSlow is a great resource to help you optimize page load times.
Inefficient API endpoints can force a lot of separate requests. For example, I recently created a service that delivers a few different types of loosely related data. Sometimes you want to fetch each type individually; you may need one type but not another, and other times you need several at once. Let consumers request everything they need in a single call, or limit the response by querying only for the specific data they need. Whether you’re working the full stack or working alongside API specialists, make sure that the needs of the API consumers are driving feature development in the API, so that you’re trimming the number and size of requests based on actual API use cases.
API payload size is also a big factor. I’ve seen a lot of APIs that send down a lot of duplicate data, because they in-line objects that are shared between a potentially large number of parent objects. If you run into that situation, consider referencing those duplicated objects by ID instead, and sending down a separate hash that you can look them up in. Consider including both object sets in a single request. It makes the client code a little more complicated, but if your API has an SDK (which I recommend), you can smooth that over, too.
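For example, here’s a minimal sketch of that shape, with made-up field names (authors, posts, authorId): the shared objects are sent once in a hash keyed by ID, and the client (or the SDK) resolves each reference with a cheap lookup instead of receiving the full object repeated on every parent:

```js
// Hypothetical API response: posts reference shared authors by ID,
// and each author is sent only once in a lookup hash keyed by that ID.
var response = {
  authors: {
    a1: { id: 'a1', name: 'Ada' },
    a2: { id: 'a2', name: 'Grace' }
  },
  posts: [
    { id: 'p1', title: 'First post',  authorId: 'a1' },
    { id: 'p2', title: 'Second post', authorId: 'a1' },
    { id: 'p3', title: 'Third post',  authorId: 'a2' }
  ]
};

// Client-side (or SDK) code resolves the reference with a hash lookup.
function getAuthor(post) {
  return response.authors[post.authorId];
}

getAuthor(response.posts[1]).name; // 'Ada'
```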
It’s harder to say what’s faster for page loads – injecting data into the initial HTML page-load, or building a configuration endpoint that you can asynchronously fetch data with. The former can hurt page caching, and the latter will introduce another request and add latency overhead. That’s a question you’ll need to answer based on the needs of your particular application. Most of the time I inject data into the initial HTML page-load, but that doesn’t mean it’s the right choice for everyone. Either way, it should be easy to profile both approaches and go with the one that works best for you.
Solutions
- Compile and compress scripts to deliver a single JavaScript payload. See Browserify for Node-style modules in the browser, or r.js for AMD modules, and UglifyJS for compression. You’ll find Grunt tasks to automate all of that.
- Compile and compress CSS.
- Consider a web font instead of a bunch of small image icons. Bonus – those web fonts can be styled and scaled a lot better than .png files.
- Make sure expires headers and ETags are set correctly on your servers.
- If there are parts of your app that relatively few users take advantage of in a typical session (for instance, administration consoles, settings, or user profile sections), consider splitting the assets for them into separate payloads instead of loading them all at startup (see the sketch after this list).
- Optimize API endpoints to minimize number of requests and payload size.
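As a rough illustration of the split-payload idea from the list above, here’s how a rarely used section might be lazy-loaded with AMD-style require() (RequireJS); the admin/console module name and its render() method are hypothetical:

```js
// Assumes RequireJS is already on the page and 'admin/console' is an AMD
// module containing the admin section's code and templates.
$('#admin-link').on('click', function (event) {
  event.preventDefault();

  // The admin bundle is only fetched the first time the user asks for it;
  // RequireJS caches the module after that.
  require(['admin/console'], function (adminConsole) {
    adminConsole.render();
  });
});
```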
Page Reflows
Page reflows can cause your app to feel unresponsive and flickery. This can be especially problematic on low-powered mobile devices. There are various causes of reflows, including late-loading CSS, changing CSS rules after page render, image tags without specified dimensions, render order in your JavaScript, etc…
Solutions
- Load your CSS in the head (avoid reflows as CSS gets loaded).
- Specify image tag dimensions to avoid reflows as images load in the page.
- Load (most) JavaScript at the bottom (less important with client-side rendering).
- When rendering collections in JavaScript, append the items while the list’s DOM is detached from the document, and attach the list only after all items have been appended. The same goes for collections of sub-views if many sub-views are used to build your complete page.
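As a rough sketch of that last point, assuming a plain array of strings and an existing empty list element with the id "list", you can build the items on a detached DocumentFragment and touch the live DOM only once:

```js
var items = ['one', 'two', 'three'];
var list = document.getElementById('list');
var fragment = document.createDocumentFragment();

items.forEach(function (text) {
  var li = document.createElement('li');
  li.textContent = text;
  fragment.appendChild(li); // no reflow: the fragment isn't in the document yet
});

list.appendChild(fragment); // a single append, a single layout pass
```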
Too Many DOM Selections
The top jQuery newbie sin: failing to cache a selection that’s used repeatedly. Raw-metal people, you’re not off the hook. The same goes for direct consumers of the DOM Selectors API, like Document.querySelector() and Document.querySelectorAll().
```js
$('.foo').doSomething();
$('.foo').doSomethingElse();
$('.foo').whyIsThisSoSlow();
```
Solution
Try to minimize your DOM access. Cache your selection at the beginning, and then use that cache for further access:
```js
var $foo = $('.foo');
$foo.doSomething();
$foo.doSomethingElse();
$foo.muchBetter();
```
You can also reduce your reliance on selections by separating more of your logic from your DOM interactions. Once you move from building basic websites to building full-fledged applications, you should probably be using something like Backbone.js to help you keep concerns decoupled. That way, you won’t have your data collection algorithms dependent on DOM selections, and vice versa, a common issue that slows down a lot of poorly organized applications.
Too Many Event Listeners
You’ve probably heard this a million times already, but infinite scrolling is getting really popular, so you’d better pay attention this time: Stop binding event listeners to every item in a collection. All those listeners consume resources, and are a potential source of memory leaks.
Solution
Delegate events to a parent element. DOM events bubble up the tree, so a single handler on an ancestor element can handle events for all of its descendants.
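A minimal sketch of delegation, assuming a parent element with the id "list" and child elements with the class "item" (both names are made up for illustration):

```js
// With jQuery: one listener on the parent handles clicks for every
// current and future .item child.
$('#list').on('click', '.item', function () {
  console.log('clicked', $(this).text());
});

// Without jQuery: listen on the parent and inspect the event target.
// (This simple check assumes the click lands directly on the .item element.)
document.getElementById('list').addEventListener('click', function (event) {
  if (event.target.classList.contains('item')) {
    console.log('clicked', event.target.textContent);
  }
});
```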
Blocking The Event Loop (For The Less Common CPU-Bound Cases)
Once in a while you’ll need to do some processing, number crunching, or collection management that could take some time. JavaScript executes in a single event loop, so if you just dive into a loop, you can freeze up everything else until that processing is done. It’s particularly distressing to users when the UI stops responding to their clicks.
Node programmers should go to great lengths to avoid blocking I/O or heavy collection processing in code that runs on every connection. It can cause connection attempts to go unanswered.
Solutions
- First, make sure you’re using an efficient algorithm. Nothing speeds an application up like selecting efficient algorithms. If you’re blocking in a way that is impacting the performance of your application, take a close look at the implementation. Maybe it can be improved by selecting a different algorithm or data structure.
- Consider timeslicing. Timeslicing is the process of breaking iterations of an algorithm out of the normal flow control of the application, and deferring subsequent iterations to later loop cycles (called ticks). You can do that by using a function instead of a normal loop, and calling the function recursively using setTimeout() (see the first sketch after this list).
- For really heavy computations, you may need to break the whole job out and let it execute in a completely separate execution context. You can do that by spawning workers. Bonus: multi-core processors will be able to execute that job on a different CPU. Be aware that spawning workers has an associated overhead, so make sure you’re really getting a net win with this solution. For Node servers, it can be a great idea to spawn as many workers as you have CPU cores using the cluster module (see the second sketch after this list).
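Here’s a rough timeslicing sketch; processInChunks, processItem, and the chunk size are made-up names and values, not a library API:

```js
// Process a large array in small chunks, yielding back to the event loop
// between chunks with setTimeout() so the UI (or other connections, in Node)
// stays responsive.
function processInChunks(items, processItem, done, chunkSize) {
  chunkSize = chunkSize || 100;
  var index = 0;

  function nextChunk() {
    var end = Math.min(index + chunkSize, items.length);

    for (; index < end; index++) {
      processItem(items[index]);
    }

    if (index < items.length) {
      setTimeout(nextChunk, 0); // defer the next slice to a later tick
    } else if (done) {
      done();
    }
  }

  nextChunk();
}
```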
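And a minimal sketch of the cluster approach for a Node server, following the common pattern of one worker per CPU core; the port and response body are placeholders:

```js
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU core; each worker gets its own event loop,
  // so a heavy computation in one worker won't block the others.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  http.createServer(function (req, res) {
    res.end('Handled by worker ' + process.pid + '\n');
  }).listen(8000);
}
```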
Conclusion
First, make sure your code works and is readable, flexible, and extensible. Then start optimizing. Do some profiling to figure out what’s slowing you down, and tackle the major choke points first: network and I/O, reflows, DOM selections, and blocking operations.
If you’ve done all of that, and you’re still having problems, do some more extensive profiling and figure out which language features and patterns you could swap out to gain the most from your efforts. Find out what’s actually causing you pain, instead of changing whatever you happen to know (or think) is faster.
Happy profiling!