Not only did I decide to go full static and use Jekyll as my static site generator, but I also put extra effort into making my blog load much faster. I managed to get great results by applying the recommended good practices plus a few other tips and tricks. Here I'll explain what I did to achieve it.
I knew exactly what I wanted. This time I wanted to get rid of WordPress (which I was never a fan of) and aim for a beautiful, simple, SEO-friendly and fast blog. A custom blog over which I could have total control. A blog made from scratch with my blood and sweat. After all, I am supposed to be a web developer, right?
And why did I want to put extra emphasis on performance? Because loading time is now one of the SEO factors with direct implications for search engine rankings and, I don't know about you guys, but writing an article takes me a lot of time and I definitely expect others to be able to reach it!
Fast = Better SEO = Easier to find = More visitors
The power of static site generators
You’ve probably read about it in thousands of other posts so here are my personal reasons for doing so:
- Static content is always faster than dynamic content (even more so when served through a CDN)
- It is far more secure (no database involved, no PHP or WordPress exploits…)
- I don’t need a fancy admin panel
- I love Jekyll's Markdown syntax (the same one used on GitHub)
- I have total control over everything
But… Álvaro, show me some results!
Glad you asked! I have a few to show, but first of all, how did I measure performance?
I made use of a few tools and websites out there such as tools.pingdom.com, tools.keycdn.com, webpagetest.org and gtmetrix.com, but mainly Google PageSpeed Insights and the Audits tab in Google Chrome Dev Tools, which is basically powered by an open source project called Lighthouse.
Warning!! The use of these kinds of tools might cause a temporary loss of common sense in the developer working with them. Obsessive behaviour on the subject can be expected.
After a few weeks dealing with different issues here and there I managed to get a score of 100% in Google Insights (yeah… I completely lost my mind for a few days, but now I'm fine, I pr0Ooomilll111se€€!!?¿¿099 )
Here are the Google Insights results on mobile (first image) and desktop (second image):
I didn’t manage to get the same score with Lighthouse due to some minor issues, but I got what I consider to be a pretty decent one.
Not bad either in Pingdom and gtmetrix.com, where I got an amazing 103ms load time on the 2nd load of the page:
And a very acceptable TTFB (Time To First Byte) from the server, which I got from KeyCDN Tools:
How did I achieve these results? Let’s get a bit more into detail.
Solving problems reported by Google Insights
First of all, I took a look at Google Insights and tried to solve all the problems found on my site.
This task can be a bit (well, let’s be honest, super) annoying and Google doesn’t always provide much information about how to deal with those errors.
On my crusade against those issues I found varvy.com. A great page that not only digs into those problems and explains how to solve them, but also leads by example: it ranks extremely well in Google and applies most of the techniques it recommends. A quick inspection of the page will show you the good practices.
Definitely something to check out!
Less is more.
I kept it simple. I tried to minimize the amount of front-end code.
- No jQuery. (It’s very heavy! 84Kb!)
- No front-end libraries if possible (I only used vanilla-lazyload, 3Kb-5Kb)
- A single Google Font with 2 weights.
- No official sharing buttons and no counters for them. (They are super slow.)
I was even thinking of getting rid of Google Fonts. They gave me a huge headache! But what can I say… I'm weak. I like them! And I also found a good-enough solution for them in the end.
One thing I always had in mind while doing this is that every single line of code should be questioned. Is it really worth it? Can I simplify it? There’s a reason why simple and clean interfaces are usually also the ones which perform the best. Follow the KISS rule (Keep It Simple, Stupid!)
I reduced the number of HTTP requests
Since my resources are small, I tried to make as few HTTP requests as possible. I inlined CSS and JS when necessary and got rid of external resources such as sharing buttons, which usually flood your network activity with plenty of ugly calls and their respective DNS look-ups.
For the page you are reading right now, I used only 6 HTTP requests before page load. Other pages use 5.
Other external resources I use, such as Disqus comments, are only loaded when scrolling and won't affect the page load time, same as Google Analytics, Facebook Pixel or my non-critical CSS styles.
Divide and conquer
These are my CSS files:
- critical.min.css (3.9Kb)
- non-critical.min.css (4.7Kb)
- home.min.css (1.3Kb)
- post.min.css (2.6Kb)
- responsive.min.css (1.7Kb)
- buttons.min.css (389 bytes)
- youtube.min.css (194 bytes)
- blockquote.min.css (333 bytes)
And these are my JavaScript ones:
- critical.min.js (2.8Kb)
- actions.min.js (4.4Kb)
Now, why didn't I just combine all those files? A few reasons:
- Critical files are the ones we want to load as fast as possible. They are essential for the proper rendering of the page and without them the user won't have a good experience. We want to keep them as small as possible and we want to inline them directly in the HTML code. More information here
- Do not load what you won't use. Why would I want to load 4Kb of a post stylesheet when I'm on the blog's home page? Or the syntax highlighting, YouTube and blockquote styles? Let's save some Kb!
There's a lot of debate online about bundling vs splitting files, and it all comes down to your use case and website. In my case I inlined all of the stylesheets because:
- They are in total less than 14Kb
- I get rid of HTTP requests
- The whole HTML file will be cached by the CDN
- They are critical or semi-critical.
- I can save a few KBs by not including styles I won’t use.
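The resulting pattern can be sketched like this. The filenames come from the lists above, but the deferred-load part is one common approach to non-blocking styles, not necessarily my exact mechanism:

```html
<head>
  <!-- Critical styles are inlined at build time, so the first paint
       needs no extra request. -->
  <style>
    /* contents of critical.min.css go here */
  </style>
</head>
<body>
  <!-- ... page content ... -->
  <script>
    // Non-critical styles are attached only after the page has loaded,
    // so they never block rendering.
    window.addEventListener('load', function () {
      var link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = '/css/non-critical.min.css';
      document.head.appendChild(link);
    });
  </script>
</body>
```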
Include things only when necessary
Because I'm optimising for load time and because all my CSS files are inlined for a non-blocking approach, every kilobyte matters!
Related to the previous point, I only added stuff to the page when necessary! For example:
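In a Jekyll layout this kind of conditional include looks roughly like the snippet below. The front-matter flag name is an illustrative assumption; the filename is from the list above:

```html
<!-- Only emit the YouTube embed styles when the post actually needs them.
     "contains_youtube" is an illustrative front-matter flag. -->
{% if page.contains_youtube %}
  <link rel="stylesheet" href="/css/youtube.min.css">
{% endif %}
```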
And yeah… I took it to the very extreme by even saving a few bytes with things like blockquote.min.css, but hey! Why not, if I can! There's not much extra effort involved and I can only see benefits! I'll probably create a post about how I did it with Jekyll and Gulp.
(And yeah, in case you are wondering, it is not easy to inline your own generated, minified CSS/JS files in Jekyll. That's why I made use of the include_absolute plugin for Jekyll and did the minification beforehand with Gulp. And yeah, I'm running Jekyll with Gulp.)
Back to the topic, applying the same logic to the extreme I decided to:
- Only include lazyload script if the post has images (3-5Kb + 1 HTTP request)
- Only include codepen’s script if the post has a codepen snippet (2.6Kb + 1 HTTP request)
How do I know if a post contains images? Good question my friend! I solved it by adding a new variable to the Jekyll’s post Front Matter:
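Something along these lines — the variable name and title are illustrative, not necessarily the exact ones I used:

```yaml
---
layout: post
title: "An example post"
contains_images: true   # tells the layout to include the lazyload script
---
```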
But… Álvaro… that's boring! Having to remember all the possible variable names every time you write a post doesn't sound ideal!
You've noticed too, huh? That's why I created a node module I run with Gulp before running Jekyll :). The module examines my post files looking for certain strings and, when they are found, modifies the Jekyll Front Matter to add the variable for me.
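A sketch of the idea behind such a module — not my actual code. The flag names and detection patterns here are illustrative assumptions:

```javascript
// Scan a post's body for certain strings and inject matching flags
// into its YAML front matter.
const FLAGS = [
  { variable: 'contains_images', pattern: /!\[[^\]]*\]\([^)]*\)/ }, // Markdown image
  { variable: 'contains_codepen', pattern: /codepen\.io/ },         // CodePen embed
];

function addFrontMatterFlags(post) {
  // Split the file into its front matter and body.
  const match = post.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) return post; // no front matter: leave the file untouched
  let [, frontMatter, body] = match;
  for (const { variable, pattern } of FLAGS) {
    // Append the flag only if the body needs it and it isn't set already.
    if (pattern.test(body) && !frontMatter.includes(variable)) {
      frontMatter += `\n${variable}: true`;
    }
  }
  return `---\n${frontMatter}\n---\n${body}`;
}

module.exports = { addFrontMatterFlags };
```

Hooked into a Gulp task, this runs over `_posts/*.md` before Jekyll builds, so the layouts can rely on the flags being present.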
Defer, defer and defer!
The so-called "lazy load" technique is one of these: it loads images only when they are about to enter the viewport.
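With the vanilla-lazyload library mentioned earlier, the pattern looks roughly like this — class names and options are a minimal sketch, so check the library's docs for the exact API of your version:

```html
<!-- The real image URL lives in data-src; the image is only fetched
     when it approaches the viewport. -->
<img class="lazy" data-src="/images/photo.jpg" alt="A photo">

<script src="/js/lazyload.min.js" async></script>
<script>
  // Illustrative initialisation of the library after page load.
  window.addEventListener('load', function () {
    new LazyLoad({ elements_selector: '.lazy' });
  });
</script>
```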
This way I deferred:
- Non critical styles (footer, newsletter below the post, code highlighting)
- Google Analytics snippet.
- Facebook pixel snippet.
- Disqus comments that appear below the post content.
- Codepen include script.
- Google fonts.
The criteria I used to defer them varies from one another:
- Some, like Google Analytics or Disqus comments, are only loaded when scrolling the page (or straight away, when the page doesn't need scrolling).
- And others after page load.
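The scroll-triggered variant can be sketched like this. This is the general idea rather than my exact code, and the Google Analytics URL is just the usual script location:

```html
<script>
  // Load a third-party script the first time the user scrolls,
  // or immediately if the page is too short to scroll at all.
  function loadOnScroll(src) {
    function load() {
      var s = document.createElement('script');
      s.async = true;
      s.src = src;
      document.body.appendChild(s);
      window.removeEventListener('scroll', load); // fire only once
    }
    if (document.body.scrollHeight <= window.innerHeight) {
      load(); // no scrolling possible: load right away
    } else {
      window.addEventListener('scroll', load);
    }
  }
  loadOnScroll('https://www.google-analytics.com/analytics.js');
</script>
```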
I ended up not using that feature and just deferring my styles after page load, but if for any reason you feel like performing an action whenever an element enters the viewport, you now have it easy!
I used asynchronous loading for JavaScript files
Asynchronous loading is very easy to implement. Not fully supported by all browsers, but a good practice anyway.
Basically, with a single attribute called async we can let the browser know that it can request and load a script in a non-blocking way. That is, it won't block the parsing and rendering of the rest of the page.
I also used the async tracking snippet for Google Analytics. You can read more about it in the Google Analytics docs.
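For example — the filename is one of the JS files listed earlier:

```html
<!-- The browser downloads this script in parallel and executes it as soon
     as it arrives, without blocking HTML parsing. -->
<script async src="/js/actions.min.js"></script>
```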
I used dns-prefetch for external resources
Because I make some calls to external resources such as the Codepen script, I used dns-prefetch in my <head>. This way the browser will look up the DNS for that domain and resolve it before even seeing the script call in the body, which results in a faster load of the site.
Anything that calls an external domain should be using dns-prefetch. You can apply it to YouTube and Vimeo videos, Disqus comments, JS libraries, etc. Although it won't make much sense if you load those scripts dynamically too :)
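In practice it's just a couple of link tags in the head — the hostnames here are illustrative examples:

```html
<head>
  <!-- Resolve these domains early, before the browser discovers
       the actual resources that need them. -->
  <link rel="dns-prefetch" href="//static.codepen.io">
  <link rel="dns-prefetch" href="//fonts.googleapis.com">
  <link rel="dns-prefetch" href="//disqus.com">
</head>
```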
I’m using an SSL certificate to run HTTPS
This is more of an SEO/security concern than a performance one. Google announced in 2014 that they wanted "HTTPS everywhere" and even said they would take it into account in their ranking algorithm.
But using HTTPS will not only give us that SEO/security boost, it also allows us to use HTTP/2, which browsers only support over the secure HTTPS protocol.
I’m using HTTP/2 protocol
This is a big deal in terms of speed, especially if a site makes tens of HTTP requests. HTTP/2 supports multiplexing, which means multiple files can be downloaded in parallel over a single connection.
I’m not going to get much into detail here, but you can test it yourself by using httpvshttps.com.
I used a CDN and cached HTML files too
Using a Content Delivery Network is ideal for static sites. They’ll cache the content and deliver it from the closest node to the client, making it faster for people far away from your server and reducing the server response time and the TTFB.
Additionally, some of them provide more options and can even compress content or use HTTP/2, which, as I said, is a win-win.
Personally I use CloudFlare, which is totally free (although they also provide a paid version), but you can choose any other such as CloudFront, KeyCDN, Rackspace, MaxCDN… or even hosts providing their own CDN such as Firebase, GitHub Pages, Netlify, etc.
I mentioned it before, but I found the KeyCDN tools very useful to compare server response times worldwide.
By the way, Cloudflare doesn't cache HTML content by default, only assets, so I made sure it does by creating a page rule for it. This way the whole page is cached and requests from all around the world won't have to hit my server at all while those files are in the cache.
I decided to cache content for 8 days, as can be seen in the response headers in Chrome Dev Tools too:
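8 days is 691,200 seconds, so the relevant response headers look something like this — the exact set of headers depends on your Cloudflare configuration:

```
Cache-Control: public, max-age=691200
CF-Cache-Status: HIT
```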
I fixed all accessibility issues
This has nothing to do with performance, but it's always a nice check to have!
To be honest, I was never a fan of accessibility when working in previous jobs. I was building web apps for specific companies and assumed the employees using them didn't have any significant disabilities. That's not the case for public content on the web. And if you don't want to do it for your users, do it at least for the SEO. I'm sure search engines value these things nowadays to some degree.
Luckily it is quite easy to fix these issues. They are usually reported by some of the tools I mentioned, especially Lighthouse, the audit feature included in Google Chrome Dev Tools.
I'll mention one of my little dirty tricks if you promise not to tell anybody: all input elements should have a label element linked to them. In my form I'm not using visible labels and I didn't want to, so I just added them and made them invisible to the user.
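A sketch of the trick — class name and styles are illustrative. The key point is to hide the label visually without display:none, which would hide it from screen readers too:

```html
<style>
  /* Visually hidden, but still announced by screen readers */
  .visually-hidden {
    position: absolute;
    width: 1px;
    height: 1px;
    overflow: hidden;
    clip: rect(0 0 0 0);
  }
</style>
<label for="email" class="visually-hidden">Your email address</label>
<input type="email" id="email" placeholder="Your email address">
```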
And no, I didn't come up with this idea. I'm glad that other developers use dirty tricks too! I couldn't be bothered to spend time thinking about it and ended up finding this trick here.
Well, that’s all for now! I didn’t get too much into detail with a few things, such as Google Fonts, but I guess I’ll create a specific article for them at some point. So, stay alert!
I've decided to take this fast-load approach to the very extreme and I'll keep researching whenever I can. There's still a lot to learn!
Currently I'm testing server response times using Netlify and MaxCDN as an alternative CDN/server, to see if it's worth moving to them.