Fix configEpoch assignment when a cluster slot gets "closed".

This code still needs to be reworked in order to use agreement to obtain
a new configEpoch when a slot is migrated; however this commit handles
the special case that happens when the nodes have just started and
everybody has a configEpoch of 0. In this special condition, having the
maximum configEpoch is not enough, since the special epoch 0 is not
unique (all the other epochs are).
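
To make the failure mode concrete, here is a minimal standalone sketch
(hypothetical arrays and variable names, not the actual Redis data
structures) of why comparing against the maximum epoch alone never fires
when every node is at epoch 0:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Three nodes that just started: everybody is at configEpoch 0. */
        uint64_t epochs[3] = {0, 0, 0};
        uint64_t myEpoch = epochs[0];

        /* Maximum configEpoch across the cluster, analogous to what
         * clusterGetMaxEpoch() computes. */
        uint64_t maxEpoch = 0;
        for (int i = 0; i < 3; i++)
            if (epochs[i] > maxEpoch) maxEpoch = epochs[i];

        /* Old check: 0 != 0 is false, so no node ever claims a unique
         * epoch and the non-unique epoch 0 is kept by everybody. */
        if (myEpoch != maxEpoch)
            printf("old check: bump\n");
        else
            printf("old check: no bump, epoch 0 stays shared\n");

        /* New check: epoch 0 is treated as never unique, so the node
         * bumps to a fresh epoch even when it equals the maximum. */
        if (myEpoch == 0 || myEpoch != maxEpoch)
            printf("new check: bump to a unique epoch\n");
        return 0;
    }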

This does not fix the intrinsic race condition of a failover happening
while we are resharding; that will be addressed later.
antirez 2014-03-03 11:12:11 +01:00
parent a89c8bb87c
commit 8dea2029a4

@@ -3180,7 +3180,9 @@ void clusterCommand(redisClient *c) {
                  * the master is failed over by a slave. */
                 uint64_t maxEpoch = clusterGetMaxEpoch();
 
-                if (myself->configEpoch != maxEpoch) {
+                if (myself->configEpoch == 0 ||
+                    myself->configEpoch != maxEpoch)
+                {
                     server.cluster->currentEpoch++;
                     myself->configEpoch = server.cluster->currentEpoch;
                     clusterDoBeforeSleep(CLUSTER_TODO_FSYNC_CONFIG);
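
For context on the last line of the hunk, a rough sketch of the
deferred-work pattern behind clusterDoBeforeSleep() (simplified and
partly hypothetical: the real code keeps the flags in
server.cluster->todo_before_sleep, and the flag value and save call
shown here are illustrative placeholders):

    /* Illustrative flag value; the real definition lives in cluster.h. */
    #define CLUSTER_TODO_FSYNC_CONFIG (1<<0)

    static int todo_before_sleep = 0;

    /* Record pending work as flags instead of acting immediately, so a
     * burst of changes in one event-loop iteration is persisted once. */
    void clusterDoBeforeSleep(int flags) {
        todo_before_sleep |= flags;
    }

    /* Called by the event loop before sleeping: the new configEpoch is
     * written to the cluster config file and fsynced, so the epoch the
     * node just claimed survives a crash. */
    void clusterBeforeSleep(void) {
        if (todo_before_sleep & CLUSTER_TODO_FSYNC_CONFIG) {
            /* clusterSaveConfig(1);  <- illustrative placeholder */
        }
        todo_before_sleep = 0;
    }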