(Madness) Best way of serving static files

How much optimization is needed?

All the solutions discussed in this post might not be suitable for your case. Remember – premature optimization is a start-up killer.

What takes time?

  • User with high-speed internet: for a low-traffic app it is mainly TTFB; for a high-traffic website, server capacity matters too.
  • User with low-speed internet: everything matters – from tiny headers to the size of images, CSS, JS, etc.

How files work

Any file opened in Linux is cached in RAM (the page cache) unless you run out of either RAM or the number of files you are allowed to open. Thus you can raise the open-file limit and add RAM to ensure that more files stay cached.

The problem with this approach is that each piece of static content is a file, and these files count toward the overall Linux open-file limits.
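As a rough sketch, you can inspect those limits like so (the values you see will vary; raising them requires root and appropriate hard limits):

```shell
# Per-process open-file limit for the current shell
ulimit -n

# System-wide limit on open file handles
cat /proc/sys/fs/file-max

# To raise them (as root), e.g.:
#   ulimit -n 65535
#   sysctl -w fs.file-max=2097152
```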

Use Faster SSDs among Clouds

If a file is not in RAM, it is read from disk. Not all cloud SSDs are the same; UpCloud advertises one of the fastest SSDs out there, with published benchmarks.

RAM first servers

Since static files are loaded into RAM on frequent access, you might want to consider high-RAM servers instead of a default configuration. If your files are small, a standard one will do.

Suitable CDN

Some CDNs are really cheap to start with, and some offer free space. I don’t endorse any CDN as such, but we have tried CDN77 and faced two maintenance windows so far – two bad days. A CDN ensures that your files are served from the server nearest to the end user, hence very low latency and a low TTFB.

Cache Headers

For static content, omit ETags wherever possible – unless you are keeping the same URLs – since if an ETag is present, the browser still makes a request just to determine whether the cached content has changed.

  • If both an ETag and max-age are present, the browser sends a revalidation request once the cached copy expires, just to confirm it is still current.
  • If only max-age is present, the browser requests new content only on expiration.
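In nginx, for example, a far-future max-age with ETags switched off might look like this (the location and expiry here are assumptions – tune them to your URLs and release cadence):

```nginx
location /static/ {
    etag    off;   # skip revalidation round-trips
    expires 365d;  # sets a far-future Cache-Control max-age; safe only with versioned URLs
    add_header Cache-Control "public";
}
```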

Cookieless Domain

Since static files don’t need interaction, they don’t need cookies. Serve them from a separate, cookieless domain so browsers don’t attach cookie headers to every request.
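A sketch of such a setup in nginx (static.example.com and the root path are placeholders): browsers only attach cookies set for a given domain, so a dedicated static domain that never sets cookies keeps every asset request cookie-free.

```nginx
server {
    listen 80;
    server_name static.example.com;  # hypothetical cookieless domain
    root /var/www/static;            # files only – no app here, so nothing ever sets a cookie

    location / {
        try_files $uri =404;
    }
}
```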

Enable gzip

To enable gzip for static files, simply add this to your nginx.conf (or wherever appropriate):

gzip            on;
gzip_disable    "msie6";
gzip_comp_level 6;  # trade-off between CPU and compression ratio – a sweet spot
gzip_vary       on;
gzip_static     on;  # requires --with-http_gzip_static_module; serves pre-generated .gz files, saving CPU
gzip_proxied    any;

Minify CSS/JS

The benefit of minifying on the user’s end is shrinking because of high-speed internet connections. However, it still helps increase your server capacity by reducing bandwidth usage per second. For instance, if your server is connected at 1 Gbps, you are limited to 1 Gbps; minifying CSS/JS will increase your serving capacity by at least 33%.

Image Variants or even Format

The difference between the image sizes needed for mobile and laptop is not always huge. For instance, a 400×400 image can serve on mobile as well as for a thumbnail. On a laptop, you can load 1600×1600 for the lightbox and 400×400 for the thumbnail.

Madness – you can load a smaller image and show it blurred while the bigger image is loading – but this is UX, not DevOps.


We have used mogrify to do this job.
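For instance, a sketch of generating the 400×400 variants with mogrify (assumes ImageMagick is installed; the thumbs/ output directory is an example):

```shell
# Write 400x400 (bounding-box) copies of every JPEG into thumbs/,
# leaving the originals untouched
mkdir -p thumbs
mogrify -path thumbs -resize 400x400 *.jpg
```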

Serve Compressed Images

Update: Now you can use Dockerized Lossless Compression

It isn’t just about sizes – you can use various commands to optimize different kinds of images.
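For example (assuming jpegoptim and optipng are installed; the quality cap and optimization level are illustrative):

```shell
# Recompress a JPEG, capping quality at 85
jpegoptim --max=85 photo.jpg

# Losslessly recompress a PNG
optipng -o2 image.png
```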


You can also use the convert tool to compress animated GIF images (among other formats):

convert test.gif -fuzz 30% -layers Optimize result.gif

(Madness) Enforce RAM

You can mount your static directory to RAM (tmpfs), which ensures that every file in that directory is served from RAM.

mount -t tmpfs -o size=1024m tmpfs /path/to/static/folder

Ensure that it is remounted on each reboot by adding this line to /etc/fstab:

tmpfs       /path/to/static/folder  tmpfs   nodev,nosuid,noexec,nodiratime,size=1024M   0 0


Caveats:

  • tmpfs may use swap space
  • You are limited to the space assigned

(Madness) Serve Directly from Redis

With OpenResty/Nginx, you can serve files directly from Redis. The advantage of this approach is that you can seamlessly balance load across multiple Redis servers; the disadvantage is the overhead of the code you will need to write to load data into the cache.

This approach requires the redis2-nginx-module and ngx_set_misc modules.

location /get {
    set_unescape_uri $key $arg_key;  # requires ngx_set_misc
    redis2_query get $key;
    redis2_pass foo.com:6379;
}

Or you can use webdis, an HTTP interface to Redis:

location / {
  rewrite ^(.*)$ /GET/$1 break;
  proxy_pass http://127.0.0.1:7379;  # webdis listens on 7379 by default – adjust to your setup
}
