Meet SVIM – Dockerized VIM

Retired

SVIM is retired in favor of SpaceBox.

The Frustration

It is very annoying to find that you have to install VIM on every system. VIM is very popular, but keeping the same setup across systems gets really messy: you keep rebuilding plugins and setting and resetting shortcuts.

The SVIM – Pronounced “swim”

SVIM is designed to be portable and is based on amix/vimrc, which is already a standard for more than 80% of VIM enthusiasts. SVIM understands Git as well as grep (via FlyGrep).

Shortcuts:

All the shortcuts are derived from the amix/vimrc extended version, apart from the few mentioned below.

To use Git

The project's base directory must be the mount point; this happens by default if you run SVIM from that directory.

Git Shortcuts

nmap ]h <Plug>GitGutterNextHunk
nmap [h <Plug>GitGutterPrevHunk
nmap ]s <Plug>GitGutterStageHunk
nmap ]u <Plug>GitGutterUndoHunk

FlyGrep Shortcuts

nnoremap <Space>s/ :FlyGrep<cr>

How to use SVIM?

alias svim='docker run -ti -e TERM=xterm -e GIT_USERNAME="You True" -e GIT_EMAIL="you@getyourdatasold"  --rm -v $(pwd):/home/developer/workspace varunbatrait/svim'
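
With the alias in place, run svim from your project's base directory: $(pwd) is mounted as /home/developer/workspace, so Git and FlyGrep see the whole project. A minimal session might look like this (the path is illustrative, and it assumes the image's entrypoint launches VIM):

cd ~/projects/my-app    # project root becomes /home/developer/workspace
svim                    # VIM starts inside the container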

Takeaways:

  1. Portable
  2. Git enabled
  3. Hidden characters are made visible

Lossless Image Compression using Docker

I tried a couple of existing Docker containers to compress images, without success. Both suffered from a severe security threat because they were ‘very old’. The only complete tool was zevilz/zImageOptimizer, and it didn’t have a Docker image (I have sent a pull request), meaning you had to install everything for compression yourself.

I turned it into a Docker image: varunbatrait/zimageoptimizer.

My primary requirement was to use this image to shrink images every week or fortnight on a few blogs, plus photos shot with my camera. This Docker image is ideal for that.

It supports cron, and compressing images saves bandwidth for your web users, which in turn contributes to scalability.

Supported Formats

  1. JPEG
  2. PNG
  3. GIF

How to use?

There are two ways to do it:

Maintain the marker

The marker is just a file with the timestamp of the last run. If new images have been added since then, zImageOptimizer will consider only the new images.

docker run -it -u "$UID:$GID" -d --volume /mnt/ImagesHundred/marker:/work/marker --volume /mnt/ImagesHundred/images/:/work/images/ -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro varunbatrait/zimageoptimizer ./zImageOptimizer.sh -p /work/images/ -q -n -m /work/marker/marker
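
Since the marker remembers the last run, this command pairs naturally with cron. A sample crontab entry for a weekly run on Sundays at 03:00 might look like the line below; the schedule and paths are illustrative, -it and -d are dropped because cron has no TTY, and $UID:$GID is written out literally since cron does not set those variables:

0 3 * * 0 docker run --rm -u "1000:1000" --volume /mnt/ImagesHundred/marker:/work/marker --volume /mnt/ImagesHundred/images/:/work/images/ -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro varunbatrait/zimageoptimizer ./zImageOptimizer.sh -p /work/images/ -q -n -m /work/marker/marker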

Not maintaining the marker

docker run -u "$UID:$GID" --volume /path/to/images:/work/images -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro  varunbatrait/zimageoptimizer

Takeaways:

  1. Images are losslessly compressed – no quality loss.
  2. You don’t have to install dependencies on every server; everything is in Docker.
  3. You can use it with cron.

Pain with PNGs

Please note that PNG images can take significant time (15-25 seconds per image) and CPU (almost 100%). Just stay calm! 🙂

3 Coding Mistakes That Lead to Unscalability

Every project starts small; ours did too. Before we knew it, it became a hit. I am talking about our Product Kits app for Shopify. You can read more about how we scaled from 100 to 100,000 hits in our Shopify app. It was a great learning experience: we realized how trivial things led to a big mess, but luckily everything was caught almost on the first incident because we had proper logging.

Unfamiliarity with Race Conditions

If you have a lot of traffic, the state of your system is unknown to two simultaneous operations unless you explicitly code for it. This can cause duplicate data (and the fallout of duplicate data), or operations being silently ignored. For example, Laravel’s Eloquent has firstOrCreate: it checks whether a record matching a condition exists and creates one if not. Meanwhile, Shopify was sending the same webhook multiple times. Imagine the agony we faced when we ended up with duplicate data. If a query has ‘group by’ and you then sort, ASC picks the first record of a duplicate and DESC picks the last, so an operation can differ depending on which duplicate it sees, leading to a mess. This happened because between one webhook’s SELECT and INSERT, another webhook’s SELECT ran and found no result. To avoid it, we used a locking SELECT FOR UPDATE.
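
To make the fix concrete, here is a minimal sketch of the locking pattern in plain MySQL. The kits table and its indexed shopify_id column are hypothetical stand-ins for our real schema:

-- Run both statements inside one transaction so the lock is held across them.
START TRANSACTION;

-- With InnoDB, this locks the matching row (or the index gap where it would
-- be), so a second webhook handler blocks here instead of also seeing
-- "no result" and inserting a duplicate.
SELECT id FROM kits WHERE shopify_id = 12345 FOR UPDATE;

-- Application code runs the INSERT only if the SELECT returned nothing.
INSERT INTO kits (shopify_id, title) VALUES (12345, 'Example Kit');

COMMIT;

Once the first transaction commits, the duplicate webhook’s SELECT finds the row and skips the INSERT.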

Unoptimized Migrations

If you are upgrading your app, you probably need migrations. Sometimes these migrations require you to operate on data already present in your database, or on values derived from other data in the table. For example, we were changing 1:N relations to N:M relations, so a new table had to be created to hold the relations, and so on. In a local environment everything runs in less than a second: you don’t have 200GB of data locally (usually), and staging doesn’t keep the exact same data either. Now imagine what truly unoptimized code can do to 200GB of data. For starters, it will choke the RAM, or simply return an error, if you load all the data in one go. If you iterate instead, it can take hours. We wrote a procedure to do the work inside MySQL, without needing to pull the data into PHP and write it back.
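
As an illustration of the set-based approach (our real procedure was more involved, and the table names here are hypothetical), this is how a 1:N link column can be turned into an N:M pivot table entirely inside MySQL:

-- Old 1:N design: each product row pointed at a single kit via kit_id.
-- New N:M design: a pivot table holds the (kit, product) pairs.
CREATE TABLE kit_product (
  kit_id     BIGINT UNSIGNED NOT NULL,
  product_id BIGINT UNSIGNED NOT NULL,
  PRIMARY KEY (kit_id, product_id)
);

-- One set-based statement populates the pivot entirely inside MySQL;
-- nothing is pulled into PHP and written back row by row.
INSERT INTO kit_product (kit_id, product_id)
SELECT kit_id, id FROM products WHERE kit_id IS NOT NULL;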

Choosing the Hammer

Just as not all languages work the same way, not all frameworks are the same. Yii comes with a fantastic built-in cache with invalidation rules, but Laravel, the most-used framework, is missing it. PHP doesn’t have threads or a native bigint; Ruby and Golang do. So don’t just pick a hammer, have a toolbox instead. Use Docker to combine all of your tools and scale.

Conclusion

Unfortunately, we take a lot of things for granted when we write the first line of code. The worst thing that can happen to you is becoming successful and not being able to handle it. I hope these mistakes will not be made in your next project. I wish you good luck!