Containerize your first Go App using Docker

Does size matter?

It does – in production. Most orchestration workflows involve rolling from one Docker image to another. If an update only has to download ~10MB, it obviously saves a few seconds – but beyond that, what about the limited disk space on our production servers? Large images lead to regular cleanups, which are usually manual and tedious tasks. So, from a scalability point of view, a multi-stage build is preferable to the usual approach.

The Multi-Stage Build Approach

The usual approach is quite poor, as it ends up with a large Docker image. The new approach splits the build into two steps, as follows:

The Build:

You start from a Go base image and use it to compile your application. On its own, this results in an insanely large Docker image – roughly 200MB to 300MB.

Copy Build

You copy the compiled binary from the first step into a fresh, minimal base image, which keeps the final image small. I bet you will have no more than a 10MB image.


# The Build
FROM golang:alpine AS the-build
WORKDIR /src
COPY . .
RUN go build -o app

# Copy Build
FROM alpine
COPY --from=the-build /src/app /app/
CMD ["/app/app"]

Here you are naming the first step of the build ‘the-build’. Docker supports building only part of a multi-stage Dockerfile, so if you would like to build up to ‘the-build’ stage, you can do it with:

docker build --target the-build -t github/go-app:latest .

If you omit --target, you build the whole thing in a single command, as follows:

docker build -t github/go-app:latest .
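To check the size claim for yourself, you can inspect the final image after building (tag name taken from the commands above):

```shell
# list the final image and its size
docker images github/go-app:latest
# per-layer breakdown – only the copied binary should remain in the final stage
docker history github/go-app:latest
```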

Lossless Image Compression using Docker

I tried a couple of existing Docker containers to compress images, without success. Both suffered from severe security issues because of being ‘very old’. The only complete tool I found was zevilz/zImageOptimizer, and even that didn’t have a Docker image (I have sent a pull request), meaning you had to install every dependency yourself.

I turned it into a Docker image, published as varunbatrait/zimageoptimizer.

My primary requirement was to shrink images every week or fortnight – on a few blogs, and photos shot by my camera. This Docker image is ideal for that.

It supports cron, and compressing the images you serve to web users saves bandwidth – which also contributes to scalability.

Supported Formats

  1. JPEG
  2. PNG
  3. GIF

How to use?

There are two ways to do it:

Maintaining the marker

The marker is just a file carrying the timestamp of the last run. If new images are added, zImageOptimizer will only consider images newer than the marker.
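Conceptually, the marker works like `find -newer`: only files modified after the marker’s timestamp are candidates. A rough sketch of the idea (not zImageOptimizer’s actual code, using a hypothetical demo directory):

```shell
# fresh demo directory (hypothetical path)
rm -rf /tmp/marker-demo && mkdir -p /tmp/marker-demo && cd /tmp/marker-demo
touch old.jpg           # already optimized in a previous run
touch marker            # the marker records the last-run timestamp
sleep 1
touch new.jpg           # added after the last run
# only files modified after the marker are picked up
find . -type f -name '*.jpg' -newer marker   # → ./new.jpg
```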

docker run -it -d -u "$UID:$GID" \
  --volume /mnt/ImagesHundred/marker:/work/marker \
  --volume /mnt/ImagesHundred/images/:/work/images/ \
  -v /etc/passwd:/etc/passwd:ro \
  -v /etc/group:/etc/group:ro \
  varunbatrait/zimageoptimizer ./ -p /work/images/ -q -n -m /work/marker/marker

Not Maintaining the marker

docker run -u "$UID:$GID" \
  --volume /path/to/images:/work/images \
  -v /etc/passwd:/etc/passwd:ro \
  -v /etc/group:/etc/group:ro \
  varunbatrait/zimageoptimizer


Either way, the benefits are:

  1. Images are losslessly compressed – no quality loss.
  2. You don’t have to install dependencies on every server – everything is in the Docker image.
  3. You can use it with cron.
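For scheduled runs, a crontab entry on the host can invoke the container – a sketch assuming the marker-based command and volume paths from the examples above (note that cron does not set $UID/$GID, hence `id`):

```shell
# m h dom mon dow  command – run every Sunday at 03:00
0 3 * * 0 docker run --rm -u "$(id -u):$(id -g)" -v /mnt/ImagesHundred/marker:/work/marker -v /mnt/ImagesHundred/images/:/work/images/ -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro varunbatrait/zimageoptimizer ./ -p /work/images/ -q -n -m /work/marker/marker
```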

Pain with PNGs

Please note that PNG images can take significant time (15–25 seconds per image) and CPU (almost 100%). Just stay calm! 🙂
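If that CPU spike is a concern on a shared host, Docker’s standard --cpus flag can cap the container – values here are illustrative:

```shell
# limit the optimizer to one CPU core so other services stay responsive
docker run --cpus=1 -u "$(id -u):$(id -g)" \
  --volume /path/to/images:/work/images \
  -v /etc/passwd:/etc/passwd:ro \
  -v /etc/group:/etc/group:ro \
  varunbatrait/zimageoptimizer
```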

Nginx Proxy Caching for Scalability

Since our servers are spread across multiple locations, we had a lot of speed issues. If a resource is served from a server in a different location, outside the local network, there is a latency of about 500ms to 750ms. That is a lot, and it is unavoidable if you are running maintenance on the local servers and have configured load balancing using Nginx.

By default, caching is off, so every request for a resource goes to the proxied server, causing a lot of latency. Nginx caching is advanced enough that you can tweak it for almost every use case.

Generic configuration of any proxy cache

Storage, validity, invalidation, and conditions are the basic requirements of any proxy cache.

Consider the following configuration:

http {
    proxy_cache_path  /data/nginx/cache  levels=1:2  keys_zone=SCALE:10m  inactive=1h
                      max_size=1g  manager_files=20  manager_sleep=1000;
    server {
        location / {
            proxy_cache            SCALE;
            proxy_pass             http://192.0.2.10;   # placeholder upstream
            proxy_set_header       Host $host;
            proxy_cache_min_uses   10;
            proxy_cache_valid      200  20m;
            proxy_cache_valid      401  1m;
            proxy_cache_revalidate on;
            proxy_cache_use_stale  error timeout invalid_header updating
                                   http_500 http_502 http_503 http_504;
        }
    }
}
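Before applying a configuration like this, create the cache directory and validate the syntax – standard nginx CLI steps, assuming a typical install:

```shell
sudo mkdir -p /data/nginx/cache   # must exist and be writable by the nginx worker user
sudo nginx -t                     # validate the configuration syntax
sudo nginx -s reload              # apply without downtime
```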

Configuring proxy_cache_path for scalability

The cache directory is defined as a ‘zone’ with proxy_cache_path. Cached responses are written to temp files before being renamed into place, which avoids serving a recurring ‘partial’ response. A special cache-manager process deletes cached files that have not been accessed for one hour, as specified by inactive=1h. To make it less CPU intensive, manager_files is set to 20, so each iteration deletes at most 20 inactive files instead of the default 100. Similarly, manager_sleep is raised to 1000 milliseconds from the default 50ms, so the manager sleeps for a full second before the next cycle of handling inactive files. Tweaking loader_files, loader_threshold, and loader_sleep is generally not necessary – the defaults are good enough.

Please note that hard-coding an IP in proxy_pass as above isn’t recommended; for more detail, please visit the guide on using an Nginx Reverse Proxy for Scalability.

Configuring proxy_cache_min_uses for scalability

proxy_cache_min_uses sets the minimum number of times a resource must be requested before it is cached. Obviously, you don’t want rarely requested resources filling up the cache, so it has been raised to 10 in our case. The right value will differ for you – you might want it lower or higher.

Configuring proxy_cache_revalidate for scalability

By default, proxy_cache_revalidate is off. Turning it on makes Nginx refresh expired cache entries with conditional requests using the If-Modified-Since and If-None-Match (ETag) headers, much like a browser does.
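To see whether a given response was revalidated, served from cache, or fetched fresh, you can expose Nginx’s built-in $upstream_cache_status variable as a response header – a common debugging addition, not part of the configuration above:

```nginx
location / {
    # emits HIT, MISS, EXPIRED, REVALIDATED, STALE, etc.
    add_header X-Cache-Status $upstream_cache_status;
    # ... existing proxy_cache directives ...
}
```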


Nginx is extremely powerful, but to use it as a caching reverse proxy, you must not only configure a cache zone but also tweak some of the default values.