Overview
In this post, I will talk about the MySQL InnoDB memcached plugin.
Although there are already many posts on this subject, I will focus on applying it to an actual service, along with my own philosophy of handling the flow of data well.
So, let's get started!
Cache layer?
As a service's traffic grows to monstrous levels, it becomes difficult to serve it with the database alone.
Of course, most small and medium-sized workloads can be served with a single database, but at giga-scale traffic there is a performance limit to fetching static data, which never changes, from the database every single time.

More than anything, a database has limited resources, so monster-level traffic simply cannot be handled by the DB alone.
So, back in the days when disk I/O was poor, I remember serving frequently queried data from an in-memory database such as ALTIBASE for exactly this reason.
A cache layer serves many purposes, but I would like to highlight the two cases below:
- Queuing or preprocessing data in the cache and writing it out asynchronously.
- Serving data that is queried very frequently but rarely changes, after loading it into memory.
The latter is what I will be talking about here: data that is read very frequently but changes very little.
I will add one more requirement, which makes the story really interesting: only data whose processing has been completed should be visible. (like READ-COMMITTED!!)
MySQL Memcached plugin
There are many different cache stores, but the reason I was interested in the MySQL memcached plugin is simple.
It can access InnoDB data directly, skipping the query parsing and optimizer steps. In the past, I was interested in HandlerSocket on MariaDB, which has a similar concept.

Ref> https://dev.mysql.com/doc/refman/8.0/en/innodb-memcached-intro.html
Oh, of course, it is possible to allocate separate cache memory and control the memcached caching policy inside MySQL, but in an environment that already has devices with better random I/O than expected, such as SSDs, plus an existing cache memory area in the form of the InnoDB buffer pool, that option did not really appeal to me. (Of course, since InnoDB is a B-Tree, its raw lookup performance cannot be compared with a hash structure that normally looks up data in O(1).)

Direct access to InnoDB data? This can also be read differently: as a non-volatile caching layer. That is, a caching layer with an infinite TTL and no separate expire time!! I could see new possibilities here.
Configuration
You can refer to the Oracle documentation (innodb-memcached-setup), but here I will configure the memcached plugin purely for the purpose of reading data through the memcache protocol.
First, load the memcached-related schema as shown below to set up the InnoDB plugin. 🙂 Here, $MYSQL_HOME
means the home directory where MySQL is installed, and it will vary depending on your system configuration.
$ mysql -uroot < $MYSQL_HOME/share/innodb_memcached_config.sql
I will set up a separate token table as shown below.
## Create table
mysql> CREATE DATABASE `memcache_test`;
mysql> CREATE TABLE `memcache_test`.`token` (
-> `id` varchar(32) NOT NULL,
-> `token` varchar(128) NOT NULL,
-> PRIMARY KEY (`id`)
->);
Since data changes will be performed through SQL, configure the cache policies as follows so that only the GET operation is allowed.
## Policy to allow only GET operation
mysql> update cache_policies set
-> get_policy = 'innodb_only',
-> set_policy = 'disabled',
-> delete_policy='disabled',
-> flush_policy = 'disabled';
## Delete the existing container definitions and add the new token container
mysql> delete from containers;
## columns: (name, db_schema, db_table, key_columns, value_columns, flags, cas_column, expire_time_column, unique_idx_name_on_key)
mysql> insert into containers values ('token', 'memcache_test', 'token', 'id', 'token', 0,0,0, 'PRIMARY');
Now that the configuration tables point the memcached plugin at the table created above, finally load the plugin.
## Start InnoDB memcached plugin
mysql> INSTALL PLUGIN daemon_memcached soname "libmemcached.so";
After going through this process, you have an environment that can access DB data directly using the memcached protocol. Port 11211 is opened as the default port, and you can read data straight out of the InnoDB data area by issuing a get over the memcache protocol on this port.
In other words, data changed through SQL can be read back directly through memcache. (You can also build the memcache VALUE by mapping multiple table columns with a delimiter, but I will skip that here.)
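From an application, such a lookup might look roughly like the sketch below. This is a minimal example, assuming the bradfitz/gomemcache client and a made-up id value; any memcache client that speaks the text protocol will do.

// Example: read a token row through the InnoDB memcached plugin ======
package main

import (
	"fmt"
	"log"

	"github.com/bradfitz/gomemcache/memcache"
)

func main() {
	// Connect to the InnoDB memcached plugin (default port 11211).
	mc := memcache.New("127.0.0.1:11211")

	// "@@token" selects the container registered in innodb_memcache.containers,
	// and the part after the dot is the PRIMARY KEY value of the token table.
	item, err := mc.Get("@@token.0123456789abcdef0123456789abcdef")
	if err != nil {
		log.Fatalf("get failed: %v", err)
	}
	fmt.Printf("token = %s\n", item.Value)
}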
Performance
A non-volatile caching layer? Of course, performance is a concern. No matter how promising it looks, it is not a good solution if response time or stability is poor for a caching layer that serves a huge volume of reads. So, I compared InnoDB lookup speed between a simple PK-based single-row query and the memcached protocol.
Let's set up the environment for a simple performance test. Using the token table created above, generate about one million rows of test data as shown below.
## Generate test data
mysql> insert ignore into token
-> select md5(rand()), concat(uuid(), md5(rand())) from dual;
mysql> insert ignore into token
-> select md5(rand()), concat(uuid(), md5(rand())) from token;
... repeat ...
mysql> select count(*) from token;
+----------+
| count(*) |
+----------+
|   950435 |
+----------+
I did a simple test. SQL vs memcached!!
## SQL ##
SELECT * FROM token WHERE id = ${id}
## Memcache ##
get @@token.${id}
Drawing IDs at random from the full set of generated key values, I measured system resources while issuing approximately 70,000 lookups per second for each of the two access methods.
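The load generator itself is not included in this post; the sketch below only illustrates the memcache side of such a test (random PK lookups at a fixed target rate). The id list, worker count, and rate are placeholder assumptions, not the actual tool I used.

// Example: rough GET load generator against the memcached plugin ======
package main

import (
	"log"
	"math/rand"
	"time"

	"github.com/bradfitz/gomemcache/memcache"
)

func main() {
	mc := memcache.New("127.0.0.1:11211")

	// ids would be preloaded with the ~1M primary key values generated above;
	// here it is just a placeholder.
	ids := []string{"0123456789abcdef0123456789abcdef"}

	// Roughly 70,000 GETs per second, spread over 70 goroutines.
	const workers = 70
	const perWorkerQPS = 1000

	for w := 0; w < workers; w++ {
		go func() {
			ticker := time.NewTicker(time.Second / perWorkerQPS)
			defer ticker.Stop()
			for range ticker.C {
				id := ids[rand.Intn(len(ids))]
				start := time.Now()
				if _, err := mc.Get("@@token." + id); err != nil {
					log.Printf("get %s failed: %v", id, err)
					continue
				}
				_ = time.Since(start) // feed this into a latency histogram
			}
		}()
	}
	select {} // run until interrupted
}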

In order to evaluate stability as well, I ran a stress test with continuous traffic of 70,000 requests per second for about 10 days. As a result, not a single error occurred, and operations showed a response time of about 0.3 ms to 0.5 ms. The response time reached 1.5 ms for only about 1 second over the entire run. (For reference, the left column is the per-second average response time in ms, and the right column is the number of seconds it was observed.)
At this level, it can certainly be used as a cache in production. 🙂
+-----+---------------+
| ms | interval avg | <= seconds
+-----+---------------+
| 0.2 | 99 |
| 0.3 | 661736 |
| 0.4 | 162582 |
| 0.5 | 5686 |
| 0.6 | 1769 |
| 0.7 | 1004 |
| 0.8 | 576 |
| 0.9 | 310 |
| 1.0 | 159 |
| 1.1 | 77 |
| 1.2 | 29 |
| 1.3 | 12 |
| 1.4 | 4 |
| 1.5 | 1 |
+-----+---------------+
I then pushed the stress up to 140,000 GET operations per second. There was no problem with the service at all, and average CPU utilization stayed at about 15%; it remained very stable. Amazing!
Monitoring
No matter how great a service is, if there is no tool to check its health in real time, it is useless, right? So, either way, I needed to find the most effective monitoring method. Personally, I prefer to collect metrics with Prometheus.

The reason I prefer Prometheus is simple.
There are already so many exporters, and if I want something more, I can easily add functions to suit my own taste! On top of that, as a time-series data store, Prometheus lets me collect monitoring metrics really efficiently.
It is no exaggeration to say that Prometheus is optimized for metric collection.
Of course, the memcached exporter officially released by the Prometheus project has a problem collecting InnoDB memcached information: stats lines that contain four fields instead of the expected three are treated as an abnormal pattern, and the exporter reports memcached as not running normally. If something doesn't work, I just fix it. That is the fun of open source.
// AS-IS ======================
stats := map[string]string{}
for err == nil && !bytes.Equal(line, resultEnd) {
	s := bytes.Split(line, []byte(" "))
	if len(s) != 3 || !bytes.HasPrefix(s[0], resultStatPrefix) {
		return fmt.Errorf("memcache: unexpected stats line format %q", line)
	}
	stats[string(s[1])] = string(bytes.TrimSpace(s[2]))
	line, err = rw.ReadSlice('\n')
	if err != nil {
		return err
	}
}
// TO-BE ======================
stats := map[string]string{}
for err == nil && !bytes.Equal(line, resultEnd) {
	s := bytes.Split(line, []byte(" "))
	if len(s) == 3 {
		stats[string(s[1])] = string(bytes.TrimSpace(s[2]))
	} else if len(s) == 4 {
		// InnoDB memcached emits stats lines with an extra field; join the two value fields.
		stats[string(s[1])] = string(bytes.TrimSpace(s[2])) + "-" + string(bytes.TrimSpace(s[3]))
	} else {
		return fmt.Errorf("memcache: unexpected stats line format %q", line)
	}
	line, err = rw.ReadSlice('\n')
	if err != nil {
		return err
	}
}
By modifying just a few lines, this unusual case was easily resolved.
From now on, I can collect metrics and send alerts through Prometheus, and check real-time metrics through Grafana! I have a powerful monitoring tool.
Beyond physical memory
With just the setup described above, this is already perfectly serviceable as a cache layer in production.
However, I imagined possibilities beyond this.
As you all know, one of MySQL's powerful features is data replication. If you reliably replicate the master data and make good use of the replicas, you can naturally distribute READ traffic as shown below. 🙂

Since all nodes hold the same data, the same result is returned no matter which node serves the read. In other words, as shown in the figure above, cache data is normally served within each group, but if a specific node fails, there is no service impact at all even if the data is pulled from another nearby MySQL node.
Moreover, since each application only reads the data it actually needs, there is no reason for every node to keep all of the replicated data in its InnoDB buffer pool. In other words, non-volatile cache data that never disappears is naturally distributed across the nodes that serve it.
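On the application side, the fallback could be as simple as the sketch below: try the local group's endpoint first, then the other replicas. The endpoint list and error handling are assumptions for illustration, not part of the plugin itself.

// Example: group-local read with fallback to other replicas ======
package main

import (
	"errors"
	"log"

	"github.com/bradfitz/gomemcache/memcache"
)

// groupGet tries the local endpoint first and, if that node is unreachable,
// falls back to the other replicas one by one. Every replica holds the same
// data, so any of them can answer.
func groupGet(endpoints []string, key string) (*memcache.Item, error) {
	var lastErr error
	for _, ep := range endpoints {
		item, err := memcache.New(ep).Get(key)
		if err == nil || errors.Is(err, memcache.ErrCacheMiss) {
			return item, err // a real miss is a valid answer, not a node failure
		}
		lastErr = err // connection problem: try the next replica
	}
	return nil, lastErr
}

func main() {
	// Hypothetical replica endpoints within one service group.
	endpoints := []string{"10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"}
	item, err := groupGet(endpoints, "@@token.0123456789abcdef0123456789abcdef")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("value: %s", item.Value)
}

In practice you would reuse the client objects and add proper health checks, but the idea is the same: any replica in the group can serve the same key.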
One more thing! If you want to take full advantage of MySQL replication, you cannot overlook the binary log. The binary log is a file that stores the history of data changes in MySQL, and replication works by replaying that history on the slave exactly as it happened on the master.
In ROW format, the changed data itself is recorded, and with a FULL row image both the before and after images of each row are included, so you can imagine an interesting CDC structure for caching, as shown below.

When data changes occur in cache-related tables (such as member information) on the master MySQL DB where writes happen, a replicator can detect those changes, convert the data into JSON, and write it into the MySQL-backed cache.
A related project is at the URL below. However, it is still an early-stage project, so it needs further work.
>> https://github.com/gywndi/uldra-binlog-json-transfer
With this in place, data from the service (red) DB on the left flows naturally, without any manual intervention, gets converted into JSON, and is stored in the non-volatile caching layer. The application can then serve cache reads asynchronously based on the JSON-converted data.
Of course, I have described this as if it were easy, but it will require a lot of defensive logic to deal with side effects such as replication delay.
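The actual replicator is the uldra-binlog-json-transfer project linked above. As a rough illustration of just the "convert the after-image to JSON and write it into the cache table" step, a sketch might look like the following; the rowChange struct, table mapping, and connection string are hypothetical placeholders, and a real implementation would feed it from a binlog client.

// Example: apply a decoded ROW event to the cache table as JSON ======
package main

import (
	"database/sql"
	"encoding/json"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

// rowChange is a simplified stand-in for a decoded ROW-format binlog event.
type rowChange struct {
	PK    string                 // primary key of the changed row
	After map[string]interface{} // full after-image of the changed row
}

// applyToCache converts the after-image to JSON and upserts it into the
// token table, which the memcached plugin then serves directly.
func applyToCache(db *sql.DB, ev rowChange) error {
	payload, err := json.Marshal(ev.After)
	if err != nil {
		return err
	}
	_, err = db.Exec(
		`INSERT INTO memcache_test.token (id, token) VALUES (?, ?)
		 ON DUPLICATE KEY UPDATE token = VALUES(token)`,
		ev.PK, payload)
	return err
}

func main() {
	// Hypothetical DSN for the cache-side MySQL.
	db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/memcache_test")
	if err != nil {
		log.Fatal(err)
	}
	ev := rowChange{PK: "u001", After: map[string]interface{}{"id": "u001", "name": "alice"}}
	if err := applyToCache(db, ev); err != nil {
		log.Fatal(err)
	}
}

Note that the token column in the example table is only varchar(128); for real JSON payloads you would size the value column accordingly.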
Conclusion
I do not claim that what I have described here is the best solution for every service.
However, an alternative such as the InnoDB memcached plugin offered a solution that fit my problem situation quite well, and I wanted to share the process of exploring new possibilities by combining it with MySQL's own features.
The goal is not a system that never fails, but a robust system with a low probability of failure, one that can be restored quickly even when it does fail!!
To that end, I wanted to take this opportunity to lay out the solution I imagined by combining the strengths of each system. 🙂
Thanks for reading this long post. ^^