Do you know Redis's buffers?
Hello, I'm Qi Xi.

Everyone is familiar with Redis, of course, but it's easy to overlook what is invisible at the usage level. What I want to share with you today is what the various buffers in Redis do, what happens when they overflow, and how to optimize for that.

Before getting into the main text, a few extra words. Whether it's Redis or other middleware, many of the underlying principles are similar, and the design ideas are universal.

When you learn any new framework or component in the future, try to relate it to the knowledge you already have. It will be much easier to understand than rote memorization.

For example, what is a buffer for?

Nothing but performance.

Either it caches data to improve response speed; MySQL's change buffer is one example.

Or we worry that consumers can't keep up with producers and data will be lost, so the produced data has to be staged somewhere temporarily. That is the role of Redis's buffers.

Besides, if consumers can't keep up and processing is synchronous, the producers are slowed down too. So what we are really protecting here is the producers' speed.

Some readers may object: nonsense, if consumers can't keep up, what good does a fast producer do?

Well, isn't it possible that the consumer doesn't care exactly when it gets the data? The producer prepares what the consumer asked for and hands it over. The producer is busy and has plenty of other data to process; it can't afford to wait for each consumer to finish before doing anything else.

That opening ran a bit long, I admit. I'll go through everything in detail below; if you have questions, hop on board. The main text starts now.

First of all, what buffers does Redis have?

There are four in total: the input buffer, the output buffer, the replication buffer, and the replication backlog buffer.

The server sets up an input buffer for each connected client, to temporarily store request data.

The input buffer temporarily stores the commands sent by the client; the Redis main thread reads commands from it and processes them.

This avoids a mismatch between the client's sending speed and the server's processing speed, which is the same idea as the output buffer discussed later.

First, the input buffer is a fixed-size memory area. If it fills up, Redis simply closes the client connection.
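As a toy illustration of that behavior (not Redis's actual C implementation, and with a tiny limit so the overflow path is easy to trigger), a fixed-size input buffer that drops the connection on overflow might look like this:

```python
class InputBufferOverflow(Exception):
    """Raised when a client sends more than the buffer can hold."""

class ClientInputBuffer:
    """Toy model of a per-client input buffer with a hard size limit.

    Redis's real limit is 1GB per client; 64 bytes is used here purely
    for demonstration.
    """
    def __init__(self, max_bytes=64):
        self.max_bytes = max_bytes
        self.data = bytearray()
        self.closed = False

    def feed(self, chunk: bytes):
        # Mirror Redis's behavior: exceeding the limit closes the connection.
        if len(self.data) + len(chunk) > self.max_bytes:
            self.closed = True
            raise InputBufferOverflow("closing client connection")
        self.data.extend(chunk)

    def consume(self, n: int) -> bytes:
        # The main thread reads commands out of the buffer, freeing space.
        out = bytes(self.data[:n])
        del self.data[:n]
        return out

buf = ClientInputBuffer(max_bytes=64)
buf.feed(b"SET k v\r\n" * 5)   # 45 bytes, fits
cmd = buf.consume(9)           # main thread drains one command
try:
    buf.feed(b"X" * 40)        # 36 + 40 > 64 bytes -> overflow
except InputBufferOverflow:
    pass                       # connection is now marked closed
```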

This is self-protection: better that your client dies than that my server does. Once the server goes down, no client works anyway.

So there are two ways the buffer can fill up. Mapping the principle above onto concrete Redis scenarios:

The first, filling up all at once, can happen when a huge amount of data is written to Redis in one go, say on the order of millions of keys.

The other is that the Redis server is blocked by a time-consuming operation and cannot consume the data sitting in the input buffer.

Each of these overflow scenarios naturally suggests an optimization direction.

For the fill-up-at-once case: can you avoid writing so much data in one go, and split it into smaller batches? (Writing a huge batch at once is rarely reasonable in the first place.)

Alternatively, could we just enlarge the buffer?

Actually no, because there is nothing to configure: the server allocates each client's input buffer with a fixed limit of 1GB, and that is not adjustable.

Then there is the second overflow scenario: the two sides process at different speeds.

Under normal circumstances the server should not be blocked for long, so the fix is to find what is causing the blocking and resolve it.

Like the input buffer, the server also sets an output buffer for each connected client.

As above, it temporarily stores data, in this case the responses waiting to be returned to clients.

This is exactly what I described at the start of the article: the producer doesn't care when the consumer gets around to the data; it just prepares what the consumer asked for and moves on.

A server usually serves many clients, and Redis executes commands on a single main thread (even though newer versions can use extra I/O threads for networking).

What happens if there is no output buffer?

Suppose the server finishes handling a batch of requests from client A and must return the results through a slow network operation. While that send is in progress, client B's request sits unprocessed and unanswered, so throughput stalls.

With the buffer, the server is at least freed up to handle client B's request.
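The decoupling can be sketched in a few lines (a minimal model, not Redis's event loop; `Client`, `handle`, and the reply format are all made up for illustration). The server appends each reply to the client's output buffer instead of blocking on the network send, so a slow client A never delays client B:

```python
from collections import deque

class Client:
    """Toy client with a per-connection output buffer (a deque of replies)."""
    def __init__(self, name):
        self.name = name
        self.out = deque()

def handle(server_log, client, command):
    # The single-threaded server computes the reply and buffers it;
    # it does NOT wait for the bytes to reach the client's socket.
    reply = f"reply-to:{command}"
    client.out.append(reply)
    server_log.append((client.name, command))

a, b = Client("A"), Client("B")
log = []
handle(log, a, "GET x")   # A's reply is buffered, not sent synchronously
handle(log, b, "GET y")   # ...so B is served immediately afterwards
# A separate writable-socket event would later flush each client's `out`.
```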

Overflow handling is the same as for the input buffer, so I won't go into detail: if the output buffer overflows, the server also closes the client connection.

The optimizations are similar too: don't read huge amounts of data in one go, and never leave the MONITOR command running continuously in production (its output keeps accumulating in the output buffer).

The size of the output buffer can be set through client-output-buffer-limit.

Generally speaking, though, you don't need to change it; the defaults are enough. It's just good to know the knob exists.
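For reference, the stock redis.conf defaults look like this, one line per client class, each with a hard limit, a soft limit, and the number of seconds the soft limit may be exceeded (newer Redis versions spell the second class `replica` instead of `slave`):

```
# class        hard-limit  soft-limit  soft-seconds
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
```

A hard limit of 0, as in the `normal` class, means no limit is enforced.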

A friendly reminder: if you are not familiar with Redis synchronization/replication, such as full versus incremental replication, I suggest reading my earlier article on Redis master-slave synchronization first.

Let's get back to the point.

Where there is replication there are master and slave, and data replication between them comes in two forms: full replication and incremental replication.

Full replication synchronizes all data; incremental replication only synchronizes the commands the master received while the master-slave network connection was broken.

The master node maintains a replication buffer for each slave node. Its job, again, is to temporarily store data.

During full replication, while the master is transmitting the RDB file to the slave, it keeps receiving write commands from clients. It saves them in the replication buffer and sends them to the slave for execution once the RDB transfer completes.

If the slave is slow to receive and load the RDB while the master is taking in a heavy stream of write commands, those commands pile up in the replication buffer and eventually overflow it.

Once it overflows, the master directly closes the replication connection to the slave, and the full replication fails.

One mitigation is to keep the master's data volume around 2~4GB (for reference only), so that full synchronization finishes faster and fewer commands pile up in the replication buffer.

You can also adjust the buffer size, using the same client-output-buffer-limit parameter as before (a replica is just another client to the master, so its output buffer is governed by that setting's slave class).

Example: config set client-output-buffer-limit "slave 512mb 128mb 60", i.e. a 512MB hard limit, plus a 128MB soft limit that may not be exceeded for more than 60 seconds.

This is the buffer used in incremental replication: the replication backlog buffer.

Its job is also to temporarily store data. When a slave node disconnects unexpectedly and then reconnects, the data it missed in the meantime can be synchronized from this buffer.

And this one won't overflow. (Surprised?)

The buffer is essentially a fixed-length FIFO queue, 1MB by default.

So when the queue is full, there is no error, and unlike the buffers above, the connection is not closed. Instead, the data that entered the queue first gets overwritten.

As a result, if the slave has not yet synchronized those overwritten old commands, it has to fall back to full replication with the master instead of incremental replication.
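The overwrite-then-full-resync behavior can be modeled as a small ring buffer (a toy sketch; the class name, sizes, and `read_since` helper are all invented for illustration, and real Redis tracks offsets per replica via PSYNC):

```python
class ReplBacklog:
    """Toy replication backlog: a fixed-size ring that overwrites old bytes.

    The master tracks a global replication offset; a reconnecting replica
    can sync incrementally only if its offset is still inside the ring.
    """
    def __init__(self, size=16):
        self.size = size
        self.buf = bytearray(size)
        self.master_offset = 0      # total bytes ever written

    def write(self, data: bytes):
        for byte in data:
            self.buf[self.master_offset % self.size] = byte
            self.master_offset += 1  # old bytes are silently overwritten

    def read_since(self, replica_offset: int):
        """Return the missing bytes, or None if a full resync is required."""
        missing = self.master_offset - replica_offset
        if missing > self.size:      # oldest needed byte was overwritten
            return None
        return bytes(self.buf[i % self.size]
                     for i in range(replica_offset, self.master_offset))

backlog = ReplBacklog(size=16)
backlog.write(b"SET a 1;")               # 8 bytes
replica_offset = backlog.master_offset   # replica caught up, then drops off
backlog.write(b"SET b 2;")               # 8 more bytes while it is away
partial = backlog.read_since(replica_offset)  # still inside the ring
backlog.write(b"SET c 3;SET d 4;")       # 16 more: old data is overwritten
stale = backlog.read_since(replica_offset)    # too far behind -> None
```

`partial` comes back as the missed commands (incremental sync works), while `stale` is None, the case where the master must fall back to full replication.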

The fix is to enlarge the replication backlog buffer via the repl-backlog-size parameter.
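In redis.conf that looks like the fragment below. The default is 1mb; 64mb is only an illustrative value here, and a reasonable size is roughly your write traffic in bytes per second multiplied by how long a disconnection is expected to last:

```
# default is 1mb; size it to cover (write bytes/sec x expected disconnect seconds)
repl-backlog-size 64mb
```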