Enable CORS with Credentials in Nginx

You must have noticed that when CORS is enabled with “*”, it doesn’t allow credentials to pass. The solution is pretty simple: you just need to list all of your domains in the configuration. My approach is to have a separate file for each domain.

Directory Structure:

./conf/site-enabled/<site-name>
./conf/cors/<site-name>

Configuration

Assuming that the site name is webapplicationconsultant.com and I want to enable credentials for varunbatra.com along with the site itself – this is how it goes:

#./conf/cors/webapplicationconsultant.com


set $cors '';
if ($http_origin ~ '^https?://(varunbatra\.com|webapplicationconsultant\.com)$') {
        set $cors 'true';
}

# Pre-flight requests from an allowed origin get their own flag, so they can be
# answered below with the full set of CORS headers (add_header directives inside
# one "if" block are not inherited by a sibling "if" block).
if ($request_method = 'OPTIONS') {
        set $cors "${cors}options";
}

if ($cors = 'true') {
        add_header 'Access-Control-Allow-Origin' "$http_origin" always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
        # required to be able to read the Authorization header in the frontend
        #add_header 'Access-Control-Expose-Headers' 'Authorization' always;
}

if ($cors = 'trueoptions') {
        add_header 'Access-Control-Allow-Origin' "$http_origin" always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
        # Tell the client that this pre-flight info is valid for 20 days
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Type' 'text/plain; charset=UTF-8';
        add_header 'Content-Length' 0;
        return 204;
}
#./conf/site-enabled/webapplicationconsultant.com

server {
        listen 443 ssl;
        server_name webapplicationconsultant.com;
        ....
        location / {
                include "./../cors/webapplicationconsultant.com";
                try_files $uri @app;
        }
        location @app {
        ...
        }
}
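
Once both files are in place and Nginx has been reloaded, you can check the behaviour by simulating a browser pre-flight request with curl. The path /api/example below is only a placeholder; use any URL your site actually serves.

# Simulate a pre-flight request coming from the allowed origin varunbatra.com
curl -i -X OPTIONS \
  -H 'Origin: https://varunbatra.com' \
  -H 'Access-Control-Request-Method: PUT' \
  -H 'Access-Control-Request-Headers: Authorization' \
  https://webapplicationconsultant.com/api/example
# Expect "204 No Content" along with
# Access-Control-Allow-Origin: https://varunbatra.com and
# Access-Control-Allow-Credentials: true in the response headers.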


Passing Real IP in WordPress behind Proxy or in Docker

If you have followed the tutorial on How to run WordPress Blog behind Nginx Secure (https) Proxy, you might be in a situation where WordPress shows every visitor's IP as the proxy IP. In the case of Docker it will look like 172.X.X.X; otherwise, it is the IP of your server.

Is this a problem?

You might be wondering whether this is worth solving. Well, yes! In my case, most of the real comments were being categorized as spam.

Adding Real-IP to WordPress

Step 1 – Editing WordPress config

In the wp-config.php file, add the following lines just above /* That’s all, stop editing! Happy blogging. */:

// Use X-Forwarded-For HTTP Header to Get Visitor's Real IP Address
if ( isset( $_SERVER['HTTP_X_FORWARDED_FOR'] ) ) {
  $http_x_headers = explode( ',', $_SERVER['HTTP_X_FORWARDED_FOR'] );
  $_SERVER['REMOTE_ADDR'] = $http_x_headers[0];
}
/* That's all, stop editing! Happy blogging. */
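
The logic can be sanity-checked without touching the blog at all. Below is a quick, purely illustrative test from the command line; it assumes the php CLI is installed, and 203.0.113.7 and 10.0.0.2 are made-up documentation addresses.

# Feed the snippet a fake X-Forwarded-For value and print what it keeps.
php -r '$_SERVER["HTTP_X_FORWARDED_FOR"] = "203.0.113.7,10.0.0.2";
  $http_x_headers = explode(",", $_SERVER["HTTP_X_FORWARDED_FOR"]);
  $_SERVER["REMOTE_ADDR"] = $http_x_headers[0];
  echo $_SERVER["REMOTE_ADDR"], PHP_EOL;'
# Expected output: 203.0.113.7 (the left-most entry, i.e. the original client)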

Step 2 – Editing Nginx

Inside your proxy settings in nginx, simply add this:

# The header must be named X-Forwarded-For; PHP exposes it as HTTP_X_FORWARDED_FOR
proxy_set_header        X-Forwarded-For       $remote_addr;

In case of WordPress Behind Docker

If you are using Docker, you will need to copy wp-config.php out of the container, edit it, and then copy it back into the container. This can be done as follows:

#Copy from docker container
docker cp project_wordpress_1:/var/www/html/wp-config.php .

#Copy to docker container
docker cp wp-config.php project_wordpress_1:/var/www/html/wp-config.php
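
Once the file is back in place, a quick check confirms the change actually landed inside the container (project_wordpress_1 is the container name used above; adjust it to yours):

# The grep should print the line added in Step 1
docker exec project_wordpress_1 grep -n 'HTTP_X_FORWARDED_FOR' /var/www/html/wp-config.php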

Easy-peasy, right?


Nginx Proxy Caching for Scalability

Since our servers are spread across multiple locations, we had a lot of issues with speed. If a request is served by a server in a different location, outside the local network, there is a latency of about 500 ms to 750 ms. That is a lot, and it is unavoidable if you are running maintenance on the local servers and have configured load balancing using Nginx.

By default caching is off, so every request for a resource goes to the proxied server, which causes a lot of latency. The Nginx cache is so advanced that you can tweak it for almost every use case.

Generic configuration of any proxy cache

Storage, validity, invalidation, and conditions are the basic requirements of any proxy cache.

Imagine the following configuration:

http {
    proxy_cache_path  /data/nginx/cache  levels=1:2    keys_zone=SCALE:10m inactive=1h  max_size=1g manager_files=20 manager_sleep=1000ms;
    server {
        location / {
            proxy_cache            SCALE;
            proxy_pass             http://1.2.3.4;
            proxy_set_header       Host $host;
            proxy_cache_min_uses   10;
            proxy_cache_valid      200  20m;
            proxy_cache_valid      401  1m;
            proxy_cache_revalidate on;
            proxy_cache_use_stale  error timeout invalid_header updating
                                   http_500 http_502 http_503 http_504;
        }
    }
}

Configuration of proxy_cache_path for scalability

The cache directory is defined as a ‘zone’ with proxy_cache_path. A response is first written to a temporary file and then renamed into the cache, which avoids serving ‘partial’ responses. A special cache manager process deletes cached files that have not been accessed for one hour, as specified by inactive=1h. To make this less CPU-intensive, manager_files is set to 20 so that only 20 files are deleted per iteration instead of the default 100, and manager_sleep is increased to 1000ms so that there is a one-second pause before the next cycle that handles inactive files. Tweaking loader_files, loader_threshold and loader_sleep is generally not necessary; the defaults are good enough.
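
As a rough illustration of what ends up on disk (paths follow the configuration above; the hash shown is just an example), the levels=1:2 layout can be inspected directly:

# Cache entries are named after the MD5 of the cache key; with levels=1:2
# the last characters of that hash become the two directory levels.
find /data/nginx/cache -type f | head -n 3
# e.g. /data/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c

# Total size stays below max_size=1g; inactive entries are removed first.
du -sh /data/nginx/cache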

Please note that the approach of using proxy_pass with a bare IP as above isn't recommended; for more detail, please visit the guide on using an Nginx Reverse Proxy for Scalability.

Configuring proxy_cache_min_uses for scalability

proxy_cache_min_uses sets the minimum number of times a resource must be requested before it is cached. Obviously, you don't want rarely requested resources to take up cache space, so it has been increased to 10 in our case. The right value can be different for you; you may want it lower or higher.
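
A minimal way to observe this from a client is sketched below; it assumes the example configuration above, and http://your-site.example/style.css is just a placeholder for a URL served through this proxy. The first requests pay the full upstream latency; once the counter passes min_uses, the response time drops because the file is served from the cache.

# Request the same resource repeatedly and watch the total time per request.
for i in $(seq 1 12); do
  curl -s -o /dev/null -w "request $i: %{http_code} in %{time_total}s\n" \
    http://your-site.example/style.css
done
# With proxy_cache_min_uses 10, roughly the first ten requests go upstream;
# from about the eleventh onward the asset is answered from the local cache.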

Configuring proxy_cache_revalidate for scalability

By default proxy_cache_revalidate is off. Turning it on makes Nginx revalidate expired cache entries with conditional requests to the upstream (using the If-Modified-Since and If-None-Match headers), much like a browser does, so an unchanged resource is refreshed with a small 304 response instead of being downloaded again.
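
For intuition, this is roughly the exchange Nginx performs against the upstream when revalidating; the sketch below only imitates it by hand with curl (the upstream address comes from the example configuration above, and the ETag value is made up):

# Ask the upstream whether the copy identified by this (made-up) ETag is still valid.
curl -i -H 'If-None-Match: "5e8f1c2-1a2b"' http://1.2.3.4/style.css
# A "304 Not Modified" reply lets Nginx keep serving the cached body
# and simply extend its validity, without transferring the file again.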

Conclusion

Nginx is extremely powerful, but in order to use it as a caching reverse proxy, not only must a cache zone be configured, some of the default values must be tweaked as well.
