How to make MySQL inside Docker Production Ready?

It is easy to simply spin up a new MySQL container and assume that it is ready for production. Nothing could be further from the truth; an unprepared instance is a recipe for disaster.

A few days back I wrote about having an infrastructure docker-compose file for development, and I got a comment asking whether it was ready for production. Today that ‘No’ is about to change to ‘Yes’.

When an application grows, its tables grow, and we need more memory, more InnoDB buffer pool instances and more threads. However, the default MySQL configuration has barely changed in the last 8-10 years. Machines have become faster, but the MySQL defaults have not caught up. It is therefore important to override the configuration.
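To see how far behind the defaults are, check what your server is actually running with. A quick sketch (on a stock MySQL 5.7 install the InnoDB buffer pool still defaults to 128M):

SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
-- 134217728 bytes (128M) on an untouched installation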

This is one example:

docker-compose.yml

version: '3.2'
networks:
  dual-localhost:
    driver: bridge
services:
  mysql:
    image: mysql:5.7
    restart: always
    volumes:
        - type: bind
          source: ./mysql
          target: /var/lib/mysql
        - type: bind
          source: /var/log/mysql/
          target: /var/log/mysql/
        - type: bind
          source: ./mysql.cnf
          target: /etc/mysql/mysql.conf.d/mysql.cnf
    environment:
        - MYSQL_ROOT_PASSWORD=some_weird_password
    networks:
      - dual-localhost
mysql.cnf

[mysqld_safe]
socket		= /var/run/mysqld/mysqld.sock
nice		= 0

[mysqld]
#
# * Basic Settings
#
user		= mysql
pid-file	= /var/run/mysqld/mysqld.pid
socket		= /var/run/mysqld/mysqld.sock
port		= 3306
basedir		= /usr
datadir		= /var/lib/mysql
tmpdir		= /tmp
lc-messages-dir	= /usr/share/mysql
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address		= 0.0.0.0
#skip-networking #insecure
#bind-address = 127.0.0.1
#
# * Fine Tuning
#
key_buffer_size		= 16M
max_allowed_packet	= 512M
thread_stack		= 192K
thread_cache_size       = 64
innodb_read_io_threads = 2
innodb_write_io_threads = 2
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover-options  = BACKUP
#max_connections        = 100
#table_cache            = 64
#thread_concurrency     = 10
#
# * Query Cache Configuration
#
query_cache_limit	= 8M
tmp_table_size      = 32M
query_cache_size        = 32M
#
# * Logging and Replication
#
# Both locations get rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
general_log_file        = /var/log/mysql/mysql.log
#general_log             = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Here you can see queries with especially long duration
# slow-query-log=1
# slow-query-log-file=/var/log/mysql/mysql-slow.log

#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#       other settings you may need to change.
server-id		= 10
log_bin			= /var/log/mysql/mysql-bin.log
expire_logs_days	= 10
binlog_row_image=minimal
max_binlog_size   = 256M
binlog_cache_size = 2M
binlog_rows_query_log_events = on

relay-log               = /var/log/mysql/mysql-relay-bin.log

innodb_log_file_size = 512M
#binlog_do_db		= include_database_name
#binlog_ignore_db	= include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem
collation_server=utf8mb4_unicode_ci
character_set_server=utf8mb4

wait_timeout = 500
interactive_timeout = 28800

open_files_limit = 1024000
skip-name-resolve

join_buffer_size = 512K

innodb_buffer_pool_size = 3174M
innodb_buffer_pool_instances = 2
innodb_stats_persistent_sample_pages = 100
innodb_stats_transient_sample_pages = 24
innodb_rollback_on_timeout = on
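
Once the container is up, it is worth confirming that MySQL actually picked up the mounted file instead of silently falling back to its defaults. A minimal check from any MySQL client (variable names as used in the config above):

SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';      -- expect 3328180224 (3174M)
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_instances'; -- expect 2
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';           -- expect 536870912 (512M)

The slow query log is left commented out above; since it is controlled by dynamic variables, you can also switch it on at runtime without a restart:

SET GLOBAL slow_query_log = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
SET GLOBAL long_query_time = 1;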

Docker Services For Development Infrastructure

Most web developers need at least three services running on their machine, namely Redis, MongoDB and MySQL.

I have composed a simple docker-compose.yml file to run MySQL, MongoDB and Redis with Docker, on their standard ports and with authentication enabled.

version: '3.2'
services:
    redis:
        image: redis:5.0.3
        restart: always
        ports:
            - 6379:6379
        command: redis-server --requirepass a_password
    mysql:
        image: mysql:5.7
        restart: always
        ports:
            - 3306:3306
        volumes:
            - type: bind
              source: ./mysql
              target: /var/lib/mysql
        environment:
            - MYSQL_ROOT_PASSWORD=a_password
        command: mysqld --sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    mongodb:
        image: mongo:4.1.7-xenial
        restart: always
        environment:
          - MONGO_INITDB_ROOT_USERNAME=root
          - MONGO_INITDB_ROOT_PASSWORD=a_password
          - MONGO_INITDB_DATABASE=some_db
        volumes:
          - ./data/db:/data/db
        ports:
            - 27017:27017

Run

To run it, use the following command:

docker-compose up -d
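
Once the stack is up, a quick sanity check is to confirm that the MySQL container honoured the --sql_mode override; log in with the root password from the compose file and run:

SELECT @@GLOBAL.sql_mode;
-- STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION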

Choosing MySQL Memory Engine for Session/Caching

The MySQL MEMORY engine is one of the least popular yet most effective options for a performance-first application. Most Node.js application developers spin up Redis just to store sessions, while the same can be achieved with MySQL's MEMORY engine, without the overhead of running a separate technology (Redis in this case).

More overhead means more ways to break.

Using Redis for sessions/caching is often overkill when you already have MySQL in place.

Configuration:

The max_heap_table_size system variable sets the maximum size to which MEMORY tables are allowed to grow. Since it is a dynamic variable, you can set it at runtime as follows.

SET max_heap_table_size = 1024*1024;
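
Note that a bare SET changes the value only for the current session. A minimal sketch for applying it globally as well (the 256M figure is purely illustrative):

SET GLOBAL  max_heap_table_size = 256*1024*1024;  -- new connections
SET SESSION max_heap_table_size = 256*1024*1024;  -- the current connection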

Use cases:

  1. Non-Critical Read-Only and Read-Mostly Data
  2. Caching
  3. Session

One example of non-critical read-mostly data, beyond sessions and caching, where we used the MySQL MEMORY storage engine was computation: storing intermediate results. To be specific, we used it for pattern recognition on stock pricing.
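
For the session use case, here is a minimal sketch of such a table (the table and column names are hypothetical; note that MEMORY tables cannot hold BLOB/TEXT columns, hence VARBINARY for the payload):

CREATE TABLE user_sessions (
    session_id CHAR(64)     NOT NULL PRIMARY KEY,
    user_id    INT UNSIGNED NOT NULL,
    payload    VARBINARY(4096),
    last_seen  TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    KEY idx_last_seen (last_seen)
) ENGINE=MEMORY;

-- lookups stay entirely in memory, e.g.:
SELECT payload FROM user_sessions WHERE session_id = 'abc123';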

Limitations:

The MEMORY engine has many limitations, but most of them are acceptable for the use cases listed above.

  1. No row-level locking (only table-level locks)
  2. No foreign keys
  3. No transactions
  4. No clustering (limited scalability)
  5. No geospatial data types or geospatial indexes