Sometimes when we resurrect a cached master after a successful partial
resynchronization attempt, there is pending data in the output buffers
of the client structure representing the master (likely REPLCONF ACK
commands).
If we don't reinstall the write handler, it will never be installed
again by the addReply*() family of functions, as they assume that if
there is already data pending, the write handler is already installed.
This bug caused some slaves, after a successful partial sync, to never
send REPLCONF ACK and to be continuously detected as timing out by the
master, resulting in a disconnection / reconnection loop.
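A minimal sketch of the shape of the fix, with simplified types and
illustrative names (not the exact server code): the write handler must
be re-armed explicitly when output is already queued, since addReply*()
only installs it on the empty-to-non-empty transition.

    #include <stddef.h>

    /* Simplified stand-in for the real client structure. */
    typedef struct client {
        size_t bufpos;    /* bytes pending in the fixed output buffer */
        size_t replylen;  /* nodes pending in the reply list */
    } client;

    static void installWriteHandler(client *c) { (void)c; /* arm the event */ }

    static void resurrectCachedMaster(client *master) {
        /* addReply*() assumes "pending data => handler installed",
         * so reinstall it by hand if anything is already buffered. */
        if (master->bufpos > 0 || master->replylen > 0)
            installWriteHandler(master);
    }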
Since we started sending REPLCONF ACK from slaves to masters, the
lastinteraction field of the client structure is refreshed whenever
there is room in the socket output buffer, so a timed-out master is
detected with too much delay (the socket buffer takes a long time to
fill with small REPLCONF ACK <number> entries).
This commit counts only received data as an interaction with a master,
solving the issue.
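A sketch of the idea, assuming simplified types and names
(CLIENT_MASTER and the two handlers below are illustrative): reads
from a master always refresh lastinteraction, writes never do.

    #include <time.h>

    #define CLIENT_MASTER (1<<0)

    typedef struct client {
        int flags;
        time_t lastinteraction;
    } client;

    static void onDataRead(client *c, time_t now) {
        c->lastinteraction = now;  /* received data always counts */
    }

    static void onDataWritten(client *c, time_t now) {
        /* A writable socket says nothing about the master's liveness,
         * so writes refresh the field only for normal clients. */
        if (!(c->flags & CLIENT_MASTER))
            c->lastinteraction = now;
    }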
There was a bug that overestimated the amount of backlog available;
however, it could only trigger when a slave asked for an offset in the
"future" compared to the master replication backlog.
This case is now handled correctly and logged as an incident in the
master log file.
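A sketch of the range check, under the assumption of a backlog
described by a start offset and a stored length (the field names
mirror the real ones, but treat this as illustrative):

    #include <stdio.h>

    static long long repl_backlog_off;     /* first offset still stored */
    static long long repl_backlog_histlen; /* bytes actually stored */

    /* Return 1 if a partial resync can serve psync_offset, 0 otherwise. */
    static int offsetInBacklog(long long psync_offset) {
        if (psync_offset < repl_backlog_off ||
            psync_offset > repl_backlog_off + repl_backlog_histlen) {
            fprintf(stderr, "Warning: slave asked for offset %lld, "
                "outside the backlog range\n", psync_offset);
            return 0;  /* fall back to a full resynchronization */
        }
        return 1;
    }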
First change: a node no longer needs to be a master in order to detect a
failure; however, a majority of masters signaling PFAIL or FAIL is still
required.
This change is important because it allows slaves rejoining the cluster
after a partition to sense the FAIL condition so that eventually all the
nodes agree on failures.
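A sketch of the counting rule, with illustrative names: the decision
requires a majority of masters, but any node may take it.

    /* Return 1 if the node should be flagged FAIL. The local vote is
     * counted only when this node is itself a master. */
    static int failureMajorityReached(int masters_reporting,
                                      int total_masters,
                                      int i_am_master) {
        int needed = total_masters / 2 + 1;
        int count = masters_reporting + (i_am_master ? 1 : 0);
        return count >= needed;
    }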
The time is sent in requests, and copied back in reply packets.
This way the receiver can compare the time field in a reply with its
local clock and check the age of the request associated with this reply.
This is an easy way to discard delayed replies. Note that only one
clock is used here: that of the node sending the packet. The receiver
simply copies the field back into the reply, so no synchronization is
needed between the clocks of different hosts.
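A sketch of the echoed-clock trick with a simplified packet layout
(illustrative, not the real message header):

    #include <stdint.h>

    typedef struct {
        uint64_t mstime;  /* sender's clock; echoed back verbatim */
    } packet;

    static void buildRequest(packet *req, uint64_t now_ms) {
        req->mstime = now_ms;            /* stamp our own clock */
    }

    static void buildReply(packet *reply, const packet *req) {
        reply->mstime = req->mstime;     /* copy back, never interpret */
    }

    static uint64_t replyAge(const packet *reply, uint64_t now_ms) {
        return now_ms - reply->mstime;   /* only our clock is involved */
    }

A reply older than some threshold can simply be discarded, with no
clock synchronization between hosts.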
Handshake nodes should turn into normal nodes or be freed in a
reasonable amount of time, otherwise they'll keep accumulating if the
address they are associated with is not reachable for some reason.
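A sketch of the check, assuming a node structure carrying its creation
time (names illustrative):

    #include <time.h>

    typedef struct clusterNode {
        int in_handshake;
        time_t ctime;  /* when the node structure was created */
    } clusterNode;

    /* Called from the periodic cron: nodes for which this returns 1
     * are freed, so unreachable addresses don't accumulate forever. */
    static int handshakeTimedOut(const clusterNode *n, time_t now,
                                 time_t node_timeout) {
        return n->in_handshake && (now - n->ctime) > node_timeout;
    }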
The code freed a reply object that was never created, resulting in a
segfault every time RANDOMKEY returned a key that was deleted before we
queried it for size.
Multiple missing calls to lua_pop prevented the error handler function
pushed on the stack for lua_pcall() from being popped before returning,
causing a memory leak in almost all the code paths of EVAL (both
successful calls and calls returning errors).
This caused two issues: Lua leaked memory (clearly visible in the INFO
memory output, where the 'used_memory_lua' field reported an
ever-increasing amount of memory used), and as a result GC cycles became
slower and slower, eventually consuming all the CPU.
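A sketch of the stack discipline the fix restores, using the standard
Lua C API (the error handler global name below is an assumption): the
handler pushed for lua_pcall() must be popped on every return path.

    #include <lua.h>
    #include <lauxlib.h>

    static int runScript(lua_State *lua, const char *body) {
        lua_getglobal(lua, "__redis__err__handler"); /* assumed helper */
        int errh = lua_gettop(lua);

        if (luaL_loadstring(lua, body) != 0) {
            lua_pop(lua, 2);  /* error message + error handler */
            return -1;
        }
        if (lua_pcall(lua, 0, 1, errh) != 0) {
            lua_pop(lua, 2);  /* error object + error handler */
            return -1;
        }
        /* ...convert the script result to a client reply here... */
        lua_pop(lua, 2);      /* script result + error handler */
        return 0;
    }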
Thanks to Tanguy Le Barzic for noticing something was wrong with his 2.8
slave, and for creating a testing EC2 environment where I was able to
investigate the issue.
We are sure the string is large, since the sds optimization branch is
only entered when it was not possible to encode the string as EMBSTR
because of its size.
When no encoding is possible and the string is large enough, at least
try to reallocate the sds string into one that does not waste memory
with free space at the end of the buffer.
We are sure that a string longer than 21 chars cannot be represented by
a 64 bit signed integer: the widest such value, -(2^63), takes only 20
chars, and even the out-of-range -(2^64) takes just 21:
strlen("-18446744073709551616") => 21
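For illustration, the bound as a check (the constants are facts about
int64; the function name is made up):

    #include <stddef.h>

    /* The widest 64 bit signed decimal, -(2^63) = -9223372036854775808,
     * is 20 chars, so 21 is a safe cutoff: even the out-of-range
     * -(2^64) needs exactly 21. */
    static int mayBeInt64(size_t len) {
        return len <= 21;  /* only then attempt the string -> ll parse */
    }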
This feature was implemented in the early days of the Redis Cluster
implementation, but it is not a good idea at all.
1) It depends on clocks being synchronized, which is already very bad.
2) Moreover it introduces a bug: since the pong time is updated via
gossip, no new PING is ever sent by the current node, with the effect
that no PONG is received, tables are never updated, and the PFAIL flag
is never cleared.
In general, trusting other nodes about the reachability of third nodes
is a broken distributed programming model.
The previous hashing used the trivial algorithm of xoring the integers
together. This is not optimal, as it is very likely that different hash
table setups will hash the same: for instance, a hash table at the
start of the rehashing process and the same table at the end will have
the same fingerprint.
Now we hash the N integers in a smarter way, by summing each integer
into the previous hash and hashing the accumulated value again (see the
code for further details). This way a collision is far less likely.
Moreover, this way of hashing explicitly protects against the same set
of integers in a different order hashing to the same number.
This commit is related to issue #1240.
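A sketch of the scheme; the mixing step shown is Thomas Wang's 64 bit
integer hash, one suitable choice, and the names are illustrative:

    #include <stdint.h>

    static uint64_t mix64(uint64_t h) {
        h = (~h) + (h << 21);
        h = h ^ (h >> 24);
        h = (h + (h << 3)) + (h << 8);
        h = h ^ (h >> 14);
        h = (h + (h << 2)) + (h << 4);
        h = h ^ (h >> 28);
        h = h + (h << 31);
        return h;
    }

    static uint64_t fingerprint(const uint64_t *v, int n) {
        uint64_t hash = 0;
        for (int j = 0; j < n; j++) {
            hash += v[j];       /* fold in the next integer... */
            hash = mix64(hash); /* ...and rehash: order now matters */
        }
        return hash;
    }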
This commit does mainly two things:
1) It fixes zunionInterGenericCommand() by removing mass-initialization
of all the iterators used, so that we don't violate the unsafe iterator
API of dictionaries. This fixes issue #1240.
2) Since the zui* APIs required the iterator to be initialized in the
zsetopsrc structure even in order to use non-iterator related APIs, this
commit removes this strict requirement by accessing objects directly via
the op->subject->ptr pointer we have to the object.
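A sketch of the iterator discipline from point 1, with simplified types
and stubbed helpers: each input's iterator is live only during its own
scan, never all at once.

    typedef struct zsetopsrc { int it_live; /* iterator state elided */ } zsetopsrc;

    static void zuiInitIterator(zsetopsrc *op)  { op->it_live = 1; }
    static void zuiClearIterator(zsetopsrc *op) { op->it_live = 0; }
    static int  zuiNext(zsetopsrc *op)          { (void)op; return 0; }

    static void scanAllSources(zsetopsrc *src, int setnum) {
        for (int i = 0; i < setnum; i++) {
            zuiInitIterator(&src[i]);   /* live only for this scan */
            while (zuiNext(&src[i])) { /* ... accumulate ... */ }
            zuiClearIterator(&src[i]);  /* released before the next one */
        }
    }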