Launch your own VPN using Docker in under a minute

You might think that having your own VPN is very hard to configure; we thought that too. You will be surprised how easy it is, as long as your kernel has the af_key module.

 

af_key Module

You can check for it by issuing the following command:

sudo modprobe af_key

If you see an error like the one below, the module isn’t present and you have to change your kernel configuration:

modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.17.8-x86_64-linode110/modules.dep.bin'
modprobe: FATAL: Module af_key not found in directory /lib/modules/4.17.8-x86_64-linode110

You can try adding the following line to /etc/modules and rebooting your server, but it will not work if your kernel doesn’t support the module.

af_key
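If you are not sure whether the kernel itself was built with AF_KEY support, here is a quick check, assuming your distribution ships the kernel config under /boot:

# look for CONFIG_AF_KEY=y (built in) or CONFIG_AF_KEY=m (module)
grep CONFIG_AF_KEY /boot/config-$(uname -r)

# if it is built as a module, confirm it is loaded
lsmod | grep af_key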

Run a $5/month Linode for your own VPN

If you are on Linode, you can select the GRUB 2 kernel in your instance’s boot settings so that af_key becomes available.

docker-compose.yml for VPN

The following is the content of your docker-compose.yml:


version: '3.2'
services:
  vpn:
    image: hwdsl2/ipsec-vpn-server
    restart: always
    hostname: localvpn
    privileged: true
    volumes:
        - "/etc/passwd:/etc/passwd:ro"
        - "/etc/group:/etc/group:ro"
        - "/lib/modules:/lib/modules:ro"
    ports:
        - "500:500/udp"
        - "4500:4500/udp"
    environment:
        - VPN_IPSEC_PSK=secret_code
        - VPN_USER=login_with_this_user
        - VPN_PASSWORD=login_with_this_password

Run VPN

docker-compose up -d

Now use the above credentials to connect to your VPN, and it should work without any issues.
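If the connection fails, the container’s logs are the quickest place to look; the service name vpn below comes from the compose file above:

docker-compose logs -f vpn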

 


How to use docker to generate wildcard SSL certificates for your website?

Google Chrome has started showing a warning for non-HTTPS websites, so it has become more important than ever to generate an SSL certificate for your website today!

When it comes to Docker, the idea is simple: you mount a volume to share certificates with other containers. There are many Docker images that ship with a built-in SSL generator. However, if you want the setup to scale, that is a pretty bad way to do it. You would have to keep track of all subdomains and their certificates, along with where they were generated, and your load balancer may not be pointing to the “right” container during validation. So the problems are many.

Docker image

I am using adferrand/letsencrypt-dns for this, and it can automatically restart a Docker container when a matching certificate has been renewed. It supports 50+ DNS providers, so I am sure yours is covered 😉. I am a fan of Linode; if you are serious about your business growth, give them a shot.
Docker Compose content:

cat docker-compose.yml

version: '3.2'
services:
  letsencrypt-dns:
    image: adferrand/letsencrypt-dns
    restart: always
    volumes:
        - "/etc/passwd:/etc/passwd:ro"
        - "/etc/group:/etc/group:ro"
        - "/var/run/docker.sock:/var/run/docker.sock"
        - "./letsencrypt:/etc/letsencrypt"
    environment:
        - CERTS_USER_OWNER=
        - CERTS_GROUP_OWNER=
        - CERTS_DIRS_MODE=0755
        - CERTS_FILES_MODE=0644
        - LETSENCRYPT_USER_MAIL=@.com
        - LEXICON_SLEEP_TIME=1500
        - LEXICON_PROVIDER=linode
        - LEXICON_LINODE_TOKEN=

Explaining the docker-compose.yml

We mount /etc/passwd and /etc/group as read-only so that the container can resolve the host’s users and groups for certificate ownership.

Mounting docker.sock lets the container restart related Docker containers or execute a command inside a targeted container. If you don’t mount it, your containers will keep serving old certificates even after renewal, so it is very important that you mount it.

Since our DNS is hosted in Linode’s DNS Manager, we set the Lexicon provider to linode along with its token. The sleep time is 1500 seconds, i.e. 25 minutes, so each domain is validated 25 minutes after the verification record is added to DNS. If you are in the USA, a much lower value such as 500 seconds will probably work.

Example content of domains.conf


cat letsencrypt/domains.conf

webapplicationconsultant.com *.webapplicationconsultant.com autorestart-containers=nginx_nginx_1,nginx_nginx_2
varunbatra.com *.varunbatra.com autocmd-containers=varunbatra_static_1:service nginx reload
  1. webapplicationconsultant.com *.webapplicationconsultant.com autorestart-containers=nginx_nginx_1,nginx_nginx_2 will restart the containers named nginx_nginx_1 and nginx_nginx_2 once the certificates for webapplicationconsultant.com have been renewed.
  2. varunbatra.com *.varunbatra.com autocmd-containers=varunbatra_static_1:service nginx reload will execute the command service nginx reload inside varunbatra_static_1 once the certificates for varunbatra.com have been renewed.

Generated SSL locations

  1. ./letsencrypt/live/varunbatra.com/fullchain.pem
  2. ./letsencrypt/live/webapplicationconsultant.com/fullchain.pem

Now you can use these certificates in Nginx, Apache, or whatever you like 🙂. Just make sure that, whatever you do, you don’t forget to add the proper autorestart-containers and autocmd-containers entries for the respective containers.
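For example, a minimal Nginx server block wired to the generated certificates could look like the sketch below. The privkey.pem path follows Let’s Encrypt’s standard layout next to fullchain.pem, and the root directory is only a placeholder; adjust the paths to wherever ./letsencrypt is mounted for your web server container.

server {
    listen 443 ssl;
    server_name varunbatra.com;

    # certificates generated by the letsencrypt-dns container
    ssl_certificate     /etc/letsencrypt/live/varunbatra.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/varunbatra.com/privkey.pem;

    location / {
        root  /var/www/varunbatra.com;
        index index.html;
    }
}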


How to enable OCSP Stapling in Nginx?

What is OCSP stapling?

OCSP stapling is a safe and quick way of determining whether or not an SSL certificate is valid. Instead of the browser requesting revocation information from the certificate’s issuer, the web server itself provides (staples) the information on the validity of its own certificate.

Three Steps to enable OCSP Stapling in Nginx:

1. To enable OCSP stapling, just add these three lines under your “server” block:

ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /path/to/full/chain/pem;

2. Test Nginx configuration

sudo nginx -t

3. Reload Nginx Configuration

If everything is fine, just reload the configuration.

sudo service nginx reload

Test if OCSP Stapling is enabled

You can use GlobalSign’s SSL Labs checker to see if it is working fine.
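You can also check from the command line with OpenSSL; this is a rough sketch, with example.com standing in for your own domain. A stapled response shows an “OCSP Response Status: successful” line:

echo QUIT | openssl s_client -connect example.com:443 -servername example.com -status 2>/dev/null | grep -A 3 'OCSP'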


Why isn’t my struct getting any data on unmarshal in Go?

One of the first problems a Go programmer will probably struggle with is a blank struct after json.Unmarshal. Structs end up with blank (zero) values when one of the following things is wrong:

1. Invalid Naming Conventions:

The first letter of every struct field must be capitalized, i.e. exported, otherwise encoding/json cannot set it. For example:


//Invalid name 	
type Person struct {
    name string
    age  int
}

//Valid Name
type Person struct {
    Name string
    Age  int
}

2. Invalid Matches:

Consider this JSON, which we can’t change:


{
  "first_name": "John Doe",
  "age": 30
}

 


//Results in a blank FirstName
type Person struct {
    FirstName string //encoding/json looks for "FirstName" (case-insensitively), so "first_name" will not match
    Age       int    //"age" matches Age because the match is case-insensitive
}

//The fix: map the JSON keys explicitly with struct tags
type Person struct {
    FirstName string `json:"first_name"`
    Age       int    `json:"age"`
}
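Here is a minimal runnable check of the tagged version; the sample data is only illustrative:

package main

import (
    "encoding/json"
    "fmt"
)

// Person maps the JSON keys we cannot change onto exported fields.
type Person struct {
    FirstName string `json:"first_name"`
    Age       int    `json:"age"`
}

func main() {
    data := []byte(`{"first_name": "John Doe", "age": 30}`)

    var p Person
    if err := json.Unmarshal(data, &p); err != nil {
        fmt.Println("unmarshal error:", err)
        return
    }
    fmt.Printf("%+v\n", p) // {FirstName:John Doe Age:30}
}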


The first three utility packages for a Go developer

Go, like every programming language, has some usual problems that are solved by some awesome packages. The common ones are version management, dependency management, and debugging.

1. GVM

GVM (Go Version Manager) helps in switching between different Go versions. Depending on the code base, you can easily switch from one Go version to another, as sketched below.
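A rough sketch of typical gvm usage; the version number is only an example:

gvm listall                # list the Go versions gvm can install
gvm install go1.11         # install a specific version
gvm use go1.11 --default   # switch to it and make it the default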

2. DEP

Dep is a dependency manager for Go, and it is production ready. All of your code’s dependencies are maintained in the vendor folder. You start with the dep init command, which creates the configuration for you, as shown below.
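A rough sketch of the usual dep workflow; the project path and the added package are only examples:

cd $GOPATH/src/github.com/you/yourproject   # dep expects the project inside GOPATH
dep init                                    # creates Gopkg.toml, Gopkg.lock and vendor/
dep ensure                                  # syncs vendor/ with the manifest
dep ensure -add github.com/pkg/errors       # add a new dependency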

3. PRETTY

The pretty package helps you debug variables by printing them in a readable, pretty way, as in the example below.
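A small example, assuming the pretty package in question is the widely used github.com/kr/pretty:

package main

import (
    "github.com/kr/pretty"
)

type Config struct {
    Host  string
    Ports []int
}

func main() {
    cfg := Config{Host: "localhost", Ports: []int{8080, 8443}}
    // prints the struct over multiple lines with field names, unlike plain fmt.Println
    pretty.Println(cfg)
}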

So before your project goes out of control, it is better to learn and use these three packages right from the start.


Ease your Node Web development with these 10 npm packages.

When we code, we often don’t rely on a single bulky framework, since it would tie you up and some of its core functionality may not be good for your app’s performance or maintainability.

For instance, you may prefer to bring your own ORM, one that you already use across small and big products. A standalone ORM is more likely to support more database drivers than a full-fledged framework.

Top 10 must have npm packages

  1. Express – The Express framework is extremely lightweight and many developers have extended it; you can find many Express extensions on the npmjs site (see the sketch after this list).
  2. Moment – Moment is the go-to package for time-based calculations and for formatting time.
  3. Sequelize or Mongoose – Sequelize is an Active-Record-style ORM that supports multiple relational databases and comes with all the important functionality, while Mongoose is for MongoDB.
  4. Gulp – Depending on your preference you may want Grunt or Gulp; my personal choice is Gulp, as it gives more control.
  5. Bluebird – The more advanced promise features are available in this package. My personal favourite is Promise.race.
  6. Lodash – Lodash comes with a lot of small functions which you can use to modify and manipulate data.
  7. Chalk – Chalk is a complete solution for styling terminal output.
  8. Bunyan – Bunyan lets you produce logs in a more readable and structured format.
  9. Got – Got is a powerful package for sending out HTTP requests.
  10. Webpack – Webpack bundles your JavaScript for the browser.
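Here is the minimal sketch mentioned above, combining Express, Moment, and Chalk; the port is arbitrary, and the packages are assumed to be installed with npm:

const express = require('express');
const moment = require('moment');
const chalk = require('chalk');

const app = express();

// respond with the current date, formatted by Moment
app.get('/', (req, res) => {
  res.send(`Today is ${moment().format('dddd, MMMM Do YYYY')}`);
});

app.listen(3000, () => {
  // Chalk colours the startup message in the terminal
  console.log(chalk.green('Server listening on http://localhost:3000'));
});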






How to write good “If” statements?

Writing good code is an art, and it shows even in trivial things that most developers ignore. Amateurs are usually conditioned to write an ‘if’ statement that is always accompanied by an ‘else’. However, it is important to realize that code quality can be improved significantly if the ‘else’ is omitted. For instance, consider this code.


function getUser($id){
  if($id < 1){
    $user = [];
  }else{
    //some big code that builds $user
  }
  return $user;
}

Here the “big code” can be so large that the return statement gets lost in it. Better would be:


function getUser($id){
  if($id < 1){
    return [];
  }
  //some big code that builds $user
  return $user;
}






How to write a high-performance application in PHP?

We coded the Product Kits app and it worked pretty well, peaking at 5,000 hits per second. Read the story of Product Kits going from 100 to 100,000 hits per minute. We had a lot of issues, but scalability was not one of PHP’s problems. We were able to handle everything; the real problem came when we needed to know what the other workers were doing.

The first question that might come to your mind is whether we chose the right tech stack, since we all know that PHP doesn’t support multi-threading. However, there is a trade-off between development speed and performance. PHP is not very fast; in fact, it is slow, but it is fast enough. As long as you don’t need individual PHP scripts to know each other’s state, you are in pretty good shape most of the time.

Trade-Off – Scalability vs Speed

While using PHP, our major concern was RAM; it was much easier to hit high RAM usage than high CPU usage. We had to deal with a lot of data, and that data either stayed in RAM or cost us extra hits to an external store if we wanted to keep it outside. If your PHP code uses a lot of RAM, you will have to solve a scalability problem; if it doesn’t, it is better to optimize it for speed.

Writing the Right Code:

  1. Rely on always-running PHP code – If a worker is written in PHP, tie it into an infinite loop that waits for an event (a queue entry, a MySQL row) instead of invoking PHP every second or so; see the sketch after this list.
  2. Cache sooner – There are a couple of cache options in PHP, such as OPcache and Memcached. However, Redis is our favourite, and it can further help you scale through multi-master or other topologies. A combination of OPcache and Redis works best.
  3. Load fewer classes – Ensure that you are not loading a lot of classes; rely on dynamic (auto)loading. This increases speed and reduces memory usage.
  4. Keep overwriting variables – This is a pretty bad practice, but it keeps your memory usage bounded.
  5. Make smaller blocks – Heavy code or multiple function calls inside a loop are your sworn enemy. It is better to write multiple loops with smaller blocks than one large block.
  6. Use JSON instead of XML – JSON is the newer standard and takes less memory.
  7. Use classes – Obvious, but keeping functions inside classes makes them less of a memory hog, as long as you load the classes only when needed.
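Here is a rough sketch of such an always-running worker. It assumes the phpredis extension and a Redis list named jobs; processJob is a hypothetical handler:

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

while (true) {
    // BLPOP blocks until a job arrives or the 30-second timeout expires,
    // so the script sits idle instead of being re-invoked by cron every second
    $job = $redis->blPop(['jobs'], 30);
    if (empty($job)) {
        continue; // timeout, keep waiting
    }

    $payload = $job[1];                      // blPop returns [key, value]
    processJob(json_decode($payload, true)); // hypothetical handler
}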

Micro-optimization of your code:

These micro-optimizations are not something you should retrofit after development, since at that point they have very little effect. However, it is good practice to follow them right from the beginning.

  1. Promote ‘static’ – This alone can increase the execution speed by up to 3X.
  2. Use single quotes (‘) as long as there is no variable inside the string.
  3. Use str_replace instead of preg_replace when no regular expression is needed.
  4. Use ‘===’ instead of ‘==’.
  5. Use ‘isset’ instead of count/sizeof when you only need to check that something exists. A brief illustration follows this list.
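A brief illustration of a few of these points; the values are arbitrary:

<?php
// single quotes when there is no variable to interpolate
$greeting = 'Hello, world';

// strict comparison avoids PHP's type juggling ('0' == false is true, '0' === false is not)
if ($greeting === 'Hello, world') {
    echo 'matched', PHP_EOL;
}

// str_replace is cheaper than preg_replace when no regular expression is needed
$slug = str_replace(' ', '-', $greeting);

// isset is faster than count() when you only care whether a key exists
$users = ['alice' => 1];
if (isset($users['alice'])) {
    echo 'alice exists', PHP_EOL;
}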






How to prepare Docker for production?

In order to use Docker in production, you have to ensure that you can deploy easily, maintain multiple similar services, and be “reboot proof”.

1. Ditch Docker – Pick Compose

Docker Compose is a great tool for keeping your Docker setup as configuration. You can define multiple, interlinked containers in a docker-compose.yml file and run them together.

2. Reboot proof Containers

There will be unavoidable circumstances when your host provider reboots your system for scheduled or emergency maintenance. To ensure that containers/services come back online, use the ‘restart’ option. Compose file version 2 and above support it.

version: '3.2'
services:
  web:
    image: nginx
    restart: always

3. Avoid network_mode: host

Network mode (network_mode) host ignores port bindings: whatever port the container exposes is exposed directly from the host machine. With multiple containers on a single machine, it is very common to have a few services of the same type, for example several Nginx instances serving static files. In that case only the first service will be able to start; the rest will fail with a “port already in use” error. Thus network_mode: host should be avoided unless it is absolutely required; a sketch with explicit port bindings follows.
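A minimal sketch of the alternative, binding each service to its own host port explicitly; the service names and host ports are only examples:

version: '3.2'
services:
  static_one:
    image: nginx
    ports:
      - "8080:80"   # host port 8080 -> container port 80
  static_two:
    image: nginx
    ports:
      - "8081:80"   # a second nginx coexists on a different host port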

4. Log Management

Logs can become overwhelmingly problematic if not managed. Decide on the log format and location, and check that a logrotate service is in place. An unchecked log file will choke your server; it can even disable SSH logins, which is devastating. With the json-file driver you can cap log size per service by adding the block below under each service in docker-compose.yml.

logging:
  options:
    max-size: '12m'
    max-file: '5'
  driver: json-file

5. Not everything deserves a Container

It is tempting to make everything a stateless or portable container. However, some services need a lot of read/write I/O, and such services are usually databases or rely on one. There can be a huge performance difference when these databases are not run in containers.

 






How to make your app ready for scalability?

Most apps are built to fail, meaning they are developed half-heartedly and not architected well enough to scale. Ask yourself: did you build your app to fail? The problem that comes with success is scalability, and if you can’t scale, you are bound to fail.

1. Divide Everything

Here is a list of things you can divide; if you have more, please put them in the comments:

  1. Multiple users can have separate databases assigned.
  2. Credentials are authenticated by a separate service.
  3. Outsource the background jobs to a different server.
  4. Have multiple queues.
  5. Have at least two masters.

2. Isolate and back up every service.

Putting small services on their own small servers can help you avoid dying from a single hardware failure. Consider an email-sending service: you can easily have two or three providers and, if one is down, immediately switch to the next. Similarly, if your backups rely on one slave, make them work with another one if it goes down.

3. Don’t just switch, resurrect the services.

We had three geolocation services on three separate servers. One day, none of them was working. In the logs, we found that two had crashed weeks earlier because of RAM usage, after one of our developers ran a nasty bulk check on 20M records. All they needed was a service start command. So ensuring that a service is brought back up automatically can actually solve this kind of careless-development problem.

4. Proxy is the new God

There are proxy solutions for every kind of service; place them in front of everything you are running. These proxies serve two purposes:

  1. Switching services over when one dies.
  2. Limiting the number of connections.

Proxies have their own pool of connections, so even though your app may be opening 200 connections to the database, going through a proxy can bring that down to as low as 20. Some of the proxy solutions we have used:

5. Monitoring Services

We are big fans of Prometheus and Grafana. While Prometheus exporters expose metrics from all kinds of services, Grafana can be used to visualize them beautifully and to send alerts. A minimal scrape configuration is sketched below.
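A minimal sketch of a Prometheus scrape configuration, assuming a node_exporter running on its default port 9100; the target address is only an example:

# prometheus.yml
scrape_configs:
  - job_name: 'node'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:9100']   # node_exporter exposing host metrics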

6. (Bonus) – Attitude

Every app should be developed with a TDD/BDD approach, and attempts should be made to tune everything you have. It is far better to run an optimized query than to throw hardware at a database. The attitude of your development team matters most. So the first step towards scalability is, in fact, making sure you hire the good and fire the bad.
