Nginx Reverse Proxy for Scalability

Nginx ships with a wonderful reverse proxy with tons of options. But the usual way of proxying is limited in the sense that it doesn't allow load balancing. For example, consider this one:

Usual Way of Reverse Proxying

    location / {
        try_files $uri @app;
    }
    location @app {
        proxy_pass http://127.0.0.1:8081;
        ...
    }
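The `...` above stands for whatever proxy settings your application needs. A common sketch (these are the standard forwarding headers; adjust to your app):

    location @app {
        proxy_pass http://127.0.0.1:8081;
        # Pass the original host and client address through to the app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }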

All requests matching location / will go to http://127.0.0.1:8081, but once you have outgrown the local server and need additional servers, you have to make a lot of changes. Nginx, however, provides an 'upstream' block, which makes the configuration more manageable and less change-prone as servers are added, as shown below.

Better Reverse Proxy

http {
    upstream app {
        server 127.0.0.1:8081;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
            ...
        }
    }
}

With this approach, you have a proxy running just like before, but if you want to add a server, it is as easy as the following:

http {
    upstream app {
        server 127.0.0.1:8081;
        server 192.168.0.2:8081;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
            ...
        }
    }
}
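With more than one server, nginx distributes the requests round-robin by default. If that doesn't fit your workload, the balancing method can be changed inside the upstream block; for instance, least_conn (a sketch, using the same two servers as above):

    upstream app {
        # Pick the server with the fewest active connections
        # instead of plain round-robin
        least_conn;
        server 127.0.0.1:8081;
        server 192.168.0.2:8081;
    }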

Weighting Servers

The local server – the one at 127.0.0.1:8081 – may have a lot going on: each application has many services, and at least in the beginning they all run on a single server. It is therefore important that this server receives less traffic than the others. To do that, you just need to add a 'weight' parameter. The default weight is 1, so with weight=5 on the second server, roughly five out of every six requests go to 192.168.0.2 and only one stays local.

http {
    upstream app {
        server 127.0.0.1:8081;
        server 192.168.0.2:8081 weight=5;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
            ...
        }
    }
}

Making the Initial Server a "Backup"

As stated above, you probably have a lot going on on the initial server. Hence, it makes sense to add one more server and simply turn the local server into a backup – perhaps along with another backup server. For example, look at the following block:

http {
    upstream app {
        server 192.168.0.5:8081 weight=2;
        server 192.168.0.4:8081;

        server 127.0.0.1:8081 backup;
        server 192.168.0.2:8081 weight=5 backup;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
            ...
        }
    }
}
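A backup server only receives traffic once all the primary servers are unavailable. When a primary counts as unavailable is controlled by the max_fails and fail_timeout parameters (they default to 1 failure and 10 seconds); a sketch with both made explicit:

    upstream app {
        # Mark a primary as down after 3 failures within 30s,
        # and retry it after 30s
        server 192.168.0.5:8081 weight=2 max_fails=3 fail_timeout=30s;
        server 192.168.0.4:8081 max_fails=3 fail_timeout=30s;

        # Used only while every primary is down
        server 127.0.0.1:8081 backup;
    }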

Setting Up "resolve" for Movable Servers

So far we have dealt with IP addresses, which makes for a rigid setup. As you scale, you tend to move your servers around, and it is impossible to keep the same IP address in every location – only a few providers allow it (honestly, UpCloud is the only one I know of that does). On any other cloud, you have to switch to hostnames with a block like the following. Even that isn't enough on its own: wait at least 48 hours for DNS to propagate before you burn down the old server, or use a local DNS server that updates the domain quickly. (Note: the 'resolve' parameter of the server directive has historically been an NGINX Plus feature.)

http {
    # Google's public DNS, but you can use a local DNS server for quicker updates
    resolver 8.8.8.8;
    upstream app {
        server us1.webapplicationconsultant.com:8081 weight=2 resolve;
        server us2.webapplicationconsultant.com:8081 resolve;

        server 127.0.0.1:8081 backup;
        server 192.168.0.2:8081 weight=5 backup;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
            ...
        }
    }
}
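By default, nginx caches resolved addresses for as long as the DNS record's TTL. When servers move often, you can override that with the resolver's valid parameter (30s here is an arbitrary choice):

    # Re-resolve the upstream hostnames every 30 seconds
    # instead of honouring the record's TTL
    resolver 8.8.8.8 valid=30s;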

Session Affinity

The first rule of scalability is to have a shared session store – you can do that with Redis, say in a master-master configuration. If for some reason you are not doing that, it becomes very important to have session affinity, so that every request from a client lands on the same backend. (Note: the 'sticky' directive used below is an NGINX Plus feature.)

http {
    # Google's public DNS, but you can use a local DNS server for quicker updates
    resolver 8.8.8.8;
    upstream app {
        server us1.webapplicationconsultant.com:8081 weight=2 resolve route=us1;
        server us2.webapplicationconsultant.com:8081 resolve route=us2;
        # srv_id will be us1 or us2
        sticky cookie srv_id expires=1h domain=.webapplicationconsultant.com path=/;

        server 127.0.0.1:8081 backup;
        server 192.168.0.2:8081 weight=5 backup;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
            ...
        }
    }
}

The other sticky methods are "learn" and "route", which will be discussed in a dedicated post about session affinity.
