If the key already exists and is a string, this command appends the provided value at the end of the string. If the key does not exist it is created and set as an empty string, so APPEND will be very similar to SET in this special case.
redis> exists mykey
(integer) 0
redis> append mykey "Hello "
(integer) 6
redis> append mykey "World"
(integer) 11
redis> get mykey
"Hello World"
Request authentication in a password protected Redis server. A Redis server can be instructed to require a password before allowing clients to issue commands. This is done using the requirepass directive in the Redis configuration file.

If the password given by the client is correct, the server replies with an OK status code reply and starts accepting commands from the client. Otherwise an error is returned and the client needs to try a new password. Note that because of the high performance nature of Redis it is possible to try a lot of passwords in parallel in a very short time, so make sure to generate a strong and very long password so that this attack is infeasible.
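Given the above, requirepass values should be long and random. A minimal sketch of generating one in Python (the 64 character length is an arbitrary choice, not something Redis requires):

```python
import secrets
import string

def generate_password(length=64):
    """Generate a long random password suitable for the requirepass directive."""
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Any method producing a long high-entropy string works equally well; the point is only that the password must resist guessing at Redis command speeds.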
Redis includes the redis-benchmark utility, which simulates SETs/GETs done by N clients at the same time sending M total queries (it is similar to Apache's ab utility). Below you'll find the full output of a benchmark executed against a Linux box.

./redis-benchmark -n 100000

====== SET ======
  100007 requests completed in 0.88 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

58.50% <= 0 milliseconds
99.17% <= 1 milliseconds
99.58% <= 2 milliseconds
99.85% <= 3 milliseconds
99.90% <= 6 milliseconds
100.00% <= 9 milliseconds
114293.71 requests per second

====== GET ======
  100000 requests completed in 1.23 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

43.12% <= 0 milliseconds
96.82% <= 1 milliseconds
98.62% <= 2 milliseconds
100.00% <= 3 milliseconds
81234.77 requests per second

====== INCR ======
  100018 requests completed in 1.46 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

32.32% <= 0 milliseconds
96.67% <= 1 milliseconds
99.14% <= 2 milliseconds
99.83% <= 3 milliseconds
99.88% <= 4 milliseconds
99.89% <= 5 milliseconds
99.96% <= 9 milliseconds
100.00% <= 18 milliseconds
68458.59 requests per second

====== LPUSH ======
  100004 requests completed in 1.14 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

62.27% <= 0 milliseconds
99.74% <= 1 milliseconds
99.85% <= 2 milliseconds
99.86% <= 3 milliseconds
99.89% <= 5 milliseconds
99.93% <= 7 milliseconds
99.96% <= 9 milliseconds
100.00% <= 22 milliseconds
100.00% <= 208 milliseconds
88109.25 requests per second

====== LPOP ======
  100001 requests completed in 1.39 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

54.83% <= 0 milliseconds
97.34% <= 1 milliseconds
99.95% <= 2 milliseconds
99.96% <= 3 milliseconds
99.96% <= 4 milliseconds
100.00% <= 9 milliseconds
100.00% <= 208 milliseconds
71994.96 requests per second

Notes: changing the payload from 256 to 1024 or 4096 bytes does not change the numbers significantly (but reply packets are glued together up to 1024 bytes, so GETs may be slower with big payloads).
The same is true for the number of clients: from 50 to 256 clients I got the same numbers. With only 10 clients it starts to get a bit slower.
./redis-benchmark -q -n 100000
SET: 53684.38 requests per second
GET: 45497.73 requests per second
INCR: 39370.47 requests per second
LPUSH: 34803.41 requests per second
LPOP: 37367.20 requests per second

Another example, using a 64 bit box with a Xeon L5420 clocked at 2.5 GHz:
./redis-benchmark -q -n 100000
PING: 111731.84 requests per second
SET: 108114.59 requests per second
GET: 98717.67 requests per second
INCR: 95241.91 requests per second
LPUSH: 104712.05 requests per second
LPOP: 93722.59 requests per second
For detailed information about the Redis Append Only File check the Append Only File Howto.

BGREWRITEAOF rewrites the Append Only File in the background when it gets too big. The Redis Append Only File is a journal, so every operation modifying the dataset is logged in the Append Only File (and replayed at startup). This means that the Append Only File always grows. In order to rebuild its content BGREWRITEAOF creates a new version of the append only file starting directly from the dataset in memory, in order to guarantee the generation of the minimal number of commands needed to rebuild the database.

The Append Only File Howto contains further details.
Save the DB in background. The OK code is immediately returned. Redis forks, the parent continues to serve the clients, the child saves the DB on disk then exits. A client may check if the operation succeeded using the LASTSAVE command.
BLPOP (and BRPOP) is a blocking list pop primitive. You can see these commands as blocking versions of LPOP and RPOP, able to block if the specified keys don't exist or contain empty lists.

The following is a description of the exact semantics. We describe BLPOP, but the two commands are identical, the only difference being that BLPOP pops the element from the left (head) of the list, while BRPOP pops from the right (tail).

When BLPOP is called, if at least one of the specified keys contains a non empty list, an element is popped from the head of the list and returned to the caller together with the name of the key (BLPOP returns a two element array: the first element is the key, the second the popped value).

Keys are scanned from left to right, so for instance if you issue BLPOP list1 list2 list3 0 against a dataset where list1 does not exist but list2 and list3 contain non empty lists, BLPOP guarantees to return an element from the list stored at list2 (since it is the first non empty list starting from the left).
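The scan order described above can be sketched in a few lines of Python (a behavioral model of the non-blocking case, not client code; `dataset` is a plain dict of lists standing in for the keyspace):

```python
def blpop_nonblocking(dataset, keys):
    """Return (key, value) from the first non-empty list, scanning keys left to right.

    Returns None when every key is missing or empty -- the case where the
    real BLPOP would block instead."""
    for key in keys:
        if dataset.get(key):                     # key exists and list is non-empty
            return key, dataset[key].pop(0)      # pop from the head (left)
    return None

dataset = {"list2": ["a", "b"], "list3": ["c"]}
print(blpop_nonblocking(dataset, ["list1", "list2", "list3"]))  # ('list2', 'a')
```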
If none of the specified keys exist or contain non empty lists, BLPOP blocks until some other client performs an LPUSH or an RPUSH operation against one of the lists.

Once new data is present on one of the lists, the client finally returns with the name of the key unblocking it and the popped value.

When blocking, if a non-zero timeout is specified, the client will unblock returning a nil special value if the specified number of seconds passed without a push operation against at least one of the specified keys.

The timeout argument is interpreted as an integer value. A timeout of zero means to block forever.

Multiple clients can block on the same key. They are put into a queue, so the first to be served will be the one that started waiting earlier, in a first-blpopping first-served fashion.

BLPOP and BRPOP can be used with pipelining (sending multiple commands and reading the replies in batch), but it does not make sense to use BLPOP or BRPOP inside a MULTI/EXEC block (a Redis transaction).

The behavior of BLPOP inside MULTI/EXEC when the list is empty is to return a multi-bulk nil reply, exactly what happens when the timeout is reached. If you like science fiction, think of it as if inside MULTI/EXEC time flows at infinite speed :)

BLPOP returns a two-element array via a multi bulk reply in order to return both the unblocking key and the popped value.

When a non-zero timeout is specified and the BLPOP operation timed out, the return value is a nil multi bulk reply. Most client libraries will return false or nil according to the programming language used.

Multi bulk reply
Blocking version of the RPOPLPUSH command. Atomically removes and returns the last element (tail) of the source list at srckey, and as a side effect pushes the returned element to the head of the list at dstkey. If the source list is empty, the client blocks until another client pushes against the source list. In that case the push operation against the destination list is performed after the command unblocks upon detecting a push against the source list.
Command | Parameters | Description |
QUIT | - | close the connection |
AUTH | password | simple password authentication if enabled |
Command | Parameters | Description |
EXISTS | key | test if a key exists |
DEL | key | delete a key |
TYPE | key | return the type of the value stored at key |
KEYS | pattern | return all the keys matching a given pattern |
RANDOMKEY | - | return a random key from the key space |
RENAME | oldname newname | rename the oldname key to newname, destroying the newname key if it already exists |
RENAMENX | oldname newname | rename the oldname key to newname, if the newname key does not already exist |
DBSIZE | - | return the number of keys in the current db |
EXPIRE | key seconds | set a time to live in seconds on a key |
PERSIST | key | remove the expire from a key |
TTL | key | get the time to live in seconds of a key |
SELECT | index | Select the DB with the specified index |
MOVE | key dbindex | Move the key from the currently selected DB to the dbindex DB |
FLUSHDB | - | Remove all the keys from the currently selected DB |
FLUSHALL | - | Remove all the keys from all the databases |
Command | Parameters | Description |
SET | key value | Set a key to a string value |
GET | key | Return the string value of the key |
GETSET | key value | Set a key to a string returning the old value of the key |
SETNX | key value | Set a key to a string value if the key does not exist |
SETEX | key time value | Set+Expire combo command |
SETBIT | key offset value | Set bit at offset to value |
GETBIT | key offset | Return bit value at offset |
MSET | key1 value1 key2 value2 ... keyN valueN | Set multiple keys to multiple values in a single atomic operation |
MSETNX | key1 value1 key2 value2 ... keyN valueN | Set multiple keys to multiple values in a single atomic operation if none of the keys already exist |
MGET | key1 key2 ... keyN | Multi-get, return the strings values of the keys |
INCR | key | Increment the integer value of key |
INCRBY | key integer | Increment the integer value of key by integer |
DECR | key | Decrement the integer value of key |
DECRBY | key integer | Decrement the integer value of key by integer |
APPEND | key value | Append the specified string to the string stored at key |
SUBSTR | key start end | Return a substring of a larger string |
Command | Parameters | Description |
RPUSH | key value | Append an element to the tail of the List value at key |
LPUSH | key value | Append an element to the head of the List value at key |
LLEN | key | Return the length of the List value at key |
LRANGE | key start end | Return a range of elements from the List at key |
LTRIM | key start end | Trim the list at key to the specified range of elements |
LINDEX | key index | Return the element at index position from the List at key |
LSET | key index value | Set a new value as the element at index position of the List at key |
LREM | key count value | Remove the first-N, last-N, or all the elements matching value from the List at key |
LPOP | key | Return and remove (atomically) the first element of the List at key |
RPOP | key | Return and remove (atomically) the last element of the List at key |
BLPOP | key1 key2 ... keyN timeout | Blocking LPOP |
BRPOP | key1 key2 ... keyN timeout | Blocking RPOP |
RPOPLPUSH | srckey dstkey | Return and remove (atomically) the last element of the source List stored at srckey and push the same element to the destination List stored at dstkey |
BRPOPLPUSH | srckey dstkey timeout | Like RPOPLPUSH but blocking if the source key is empty |
Command | Parameters | Description |
SADD | key member | Add the specified member to the Set value at key |
SREM | key member | Remove the specified member from the Set value at key |
SPOP | key | Remove and return (pop) a random element from the Set value at key |
SMOVE | srckey dstkey member | Move the specified member from one Set to another atomically |
SCARD | key | Return the number of elements (the cardinality) of the Set at key |
SISMEMBER | key member | Test if the specified value is a member of the Set at key |
SINTER | key1 key2 ... keyN | Return the intersection between the Sets stored at key1, key2, ..., keyN |
SINTERSTORE | dstkey key1 key2 ... keyN | Compute the intersection between the Sets stored at key1, key2, ..., keyN, and store the resulting Set at dstkey |
SUNION | key1 key2 ... keyN | Return the union between the Sets stored at key1, key2, ..., keyN |
SUNIONSTORE | dstkey key1 key2 ... keyN | Compute the union between the Sets stored at key1, key2, ..., keyN, and store the resulting Set at dstkey |
SDIFF | key1 key2 ... keyN | Return the difference between the Set stored at key1 and all the Sets key2, ..., keyN |
SDIFFSTORE | dstkey key1 key2 ... keyN | Compute the difference between the Set key1 and all the Sets key2, ..., keyN, and store the resulting Set at dstkey |
SMEMBERS | key | Return all the members of the Set value at key |
SRANDMEMBER | key | Return a random member of the Set value at key |
Command | Parameters | Description |
ZADD | key score member | Add the specified member to the Sorted Set value at key or update the score if it already exists |
ZREM | key member | Remove the specified member from the Sorted Set value at key |
ZINCRBY | key increment member | If the member already exists increment its score by increment, otherwise add the member setting increment as score |
ZRANK | key member | Return the rank (or index) of member in the sorted set at key, with scores being ordered from low to high |
ZREVRANK | key member | Return the rank (or index) of member in the sorted set at key, with scores being ordered from high to low |
ZRANGE | key start end | Return a range of elements from the sorted set at key |
ZREVRANGE | key start end | Return a range of elements from the sorted set at key, exactly like ZRANGE, but the sorted set is traversed in reverse order, from the greatest to the smallest score |
ZRANGEBYSCORE | key min max | Return all the elements with score >= min and score <= max (a range query) from the sorted set |
ZCOUNT | key min max | Return the number of elements with score >= min and score <= max in the sorted set |
ZCARD | key | Return the cardinality (number of elements) of the sorted set at key |
ZSCORE | key element | Return the score associated with the specified element of the sorted set at key |
ZREMRANGEBYRANK | key min max | Remove all the elements with rank >= min and rank <= max from the sorted set |
ZREMRANGEBYSCORE | key min max | Remove all the elements with score >= min and score <= max from the sorted set |
ZUNIONSTORE / ZINTERSTORE | dstkey N key1 ... keyN WEIGHTS w1 ... wN AGGREGATE SUM|MIN|MAX | Perform a union or intersection over a number of sorted sets with optional weight and aggregate |
Command | Parameters | Description |
HSET | key field value | Set the hash field to the specified value. Creates the hash if needed. |
HGET | key field | Retrieve the value of the specified hash field. |
HMGET | key field1 ... fieldN | Get the hash values associated to the specified fields. |
HMSET | key field1 value1 ... fieldN valueN | Set the hash fields to their respective values. |
HINCRBY | key field integer | Increment the integer value of the hash at key on field with integer. |
HEXISTS | key field | Test for existence of a specified field in a hash |
HDEL | key field | Remove the specified field from a hash |
HLEN | key | Return the number of items in a hash. |
HKEYS | key | Return all the fields in a hash. |
HVALS | key | Return all the values in a hash. |
HGETALL | key | Return all the fields and associated values in a hash. |
Command | Parameters | Description |
SORT | key BY pattern LIMIT start end GET pattern ASC|DESC ALPHA | Sort a Set or a List according to the specified parameters |
Command | Parameters | Description |
MULTI/EXEC/DISCARD/WATCH/UNWATCH | - | Redis atomic transactions |
Command | Parameters | Description |
SUBSCRIBE/UNSUBSCRIBE/PUBLISH | - | Redis Publish/Subscribe messaging paradigm implementation |
Command | Parameters | Description |
SAVE | - | Synchronously save the DB on disk |
BGSAVE | - | Asynchronously save the DB on disk |
LASTSAVE | - | Return the UNIX time stamp of the last successful saving of the dataset on disk |
SHUTDOWN | - | Synchronously save the DB on disk, then shutdown the server |
BGREWRITEAOF | - | Rewrite the append only file in background when it gets too big |
Command | Parameters | Description |
INFO | - | Provide information and statistics about the server |
MONITOR | - | Dump all the received requests in real time |
SLAVEOF | - | Change the replication settings |
CONFIG | - | Configure a Redis server at runtime |
The CONFIG command is able to retrieve or alter the configuration of a running Redis server. Not all the configuration parameters are supported.

CONFIG has two sub commands, GET and SET. The GET sub command is used to read the configuration, while the SET sub command is used to alter it.

CONFIG GET returns the current configuration parameters. This sub command only accepts a single argument, a glob style pattern. All the configuration parameters matching this pattern are reported as a list of key-value pairs. Example:
$ redis-cli config get '*'
1. "dbfilename"
2. "dump.rdb"
3. "requirepass"
4. (nil)
5. "masterauth"
6. (nil)
7. "maxmemory"
8. "0\n"
9. "appendfsync"
10. "everysec"
11. "save"
12. "3600 1 300 100 60 10000"

$ redis-cli config get 'm*'
1. "masterauth"
2. (nil)
3. "maxmemory"
4. "0\n"

The return type of the command is a bulk reply.
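The glob style pattern behaves like a standard shell pattern; a sketch of the matching logic using Python's fnmatch (the params dict here is just an example, not the real parameter list):

```python
from fnmatch import fnmatchcase

# Hypothetical snapshot of a few configuration parameters.
params = {"dbfilename": "dump.rdb", "masterauth": None,
          "maxmemory": "0", "appendfsync": "everysec"}

def config_get(pattern):
    """Return the key-value pairs whose parameter name matches the glob pattern."""
    return {k: v for k, v in sorted(params.items()) if fnmatchcase(k, pattern)}

print(config_get("m*"))  # {'masterauth': None, 'maxmemory': '0'}
```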
CONFIG SET is used in order to reconfigure the server, setting a specific configuration parameter to a new value.

The list of configuration parameters supported by CONFIG SET can be obtained issuing a CONFIG GET * command.

The configuration set using CONFIG SET is immediately loaded by the Redis server, which will start acting as specified starting from the next command.
Example:

$ ./redis-cli
redis> set x 10
OK
redis> config set maxmemory 200
OK
redis> set y 20
(error) ERR command not allowed when used memory > 'maxmemory'
redis> config set maxmemory 0
OK
redis> set y 20
OK
The value of a configuration parameter is the same as the one of the same parameter in the Redis configuration file, with the following exceptions:

The save parameter is a list of space-separated integers. Every pair of integers specifies the time and the number-of-changes limit that trigger a save. For instance the command CONFIG SET save "3600 10 60 10000" will configure the server to issue a background saving of the RDB file every 3600 seconds if there are at least 10 changes in the dataset, and every 60 seconds if there are at least 10000 changes. To completely disable automatic snapshots just set the parameter to an empty string.

The redis.conf file included in the source code distribution is a starting point; you should be able to adapt it to your needs without trouble by reading the comments inside the file.

$ ./redis-server redis.conf
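The space-separated save value described above can be parsed into (seconds, changes) pairs with a few lines of Python (a sketch of the format, not Redis code):

```python
def parse_save(value):
    """Parse a 'save' value into (seconds, changes) pairs; '' disables snapshots."""
    fields = value.split()
    if len(fields) % 2 != 0:
        raise ValueError("save expects an even number of integers")
    nums = [int(f) for f in fields]
    return list(zip(nums[::2], nums[1::2]))

print(parse_save("3600 10 60 10000"))  # [(3600, 10), (60, 10000)]
print(parse_save(""))                  # [] -> automatic snapshots disabled
```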
Return the number of keys in the currently selected database.
Remove the specified keys. If a given key does not exist no operation is performed for this key. The command returns the number of keys removed.

an integer greater than 0 if one or more keys were removed
0 if none of the specified keys existed
Test if the specified key exists. The command returns "1" if the key exists, otherwise "0" is returned. Note that even keys set with an empty string as value will return "1".

1 if the key exists.
0 if the key does not exist.
Set a timeout on the specified key. After the timeout the key will be automatically deleted by the server. A key with an associated timeout is said to be volatile in Redis terminology.

Volatile keys are stored on disk like the other keys, and the timeout is persistent too, like all the other aspects of the dataset. Saving a dataset containing expires and stopping the server does not stop the flow of time, as Redis stores on disk the time when the key will no longer be available as Unix time, and not the remaining seconds.

EXPIREAT works exactly like EXPIRE, but instead of getting the number of seconds representing the Time To Live of the key as a second argument (that is, a relative way of specifying the TTL), it takes an absolute one in the form of a UNIX timestamp (number of seconds elapsed since 1 January 1970).
EXPIREAT was introduced in order to implement the Append Only File persistence mode, so that EXPIRE commands are automatically translated into EXPIREAT commands for the append only file. Of course EXPIREAT can also be used by programmers that need a way to simply specify that a given key should expire at a given time in the future.

Since Redis 2.1.3 you can update the value of the timeout of a key that already has an expire set. It is also possible to undo the expire entirely, turning the key into a normal key, using the PERSIST command.

When the key is set to a new value using the SET command, or when a key is destroyed via DEL, the timeout is removed from the key.

IMPORTANT: Since Redis 2.1.3 there are no longer restrictions on the operations you can perform against volatile keys; however older versions of Redis, including the current stable version 2.0.0, have the following limitations:
Write operations like LPUSH, LSET and every other command that has the effect of modifying the value stored at a volatile key have a special semantic: basically a volatile key is destroyed when it is the target of a write operation. See for example the following usage pattern:

% ./redis-cli lpush mylist foobar
OK
% ./redis-cli lpush mylist hello
OK
% ./redis-cli expire mylist 10000
1
% ./redis-cli lpush mylist newelement
OK
% ./redis-cli lrange mylist 0 -1
1. newelement
What happened here is that LPUSH against the key with a timeout set deleted the key before performing the operation. There is thus a simple rule: write operations against volatile keys will destroy the key before performing the operation. Why does Redis use this behavior? In order to retain an important property: a server that receives a given number of commands in the same sequence will end up with the same dataset in memory. Without the delete-on-write semantic, the state of the server would depend on the time at which the commands were issued. This is not a desirable property in a distributed database that supports replication.
Trying to call EXPIRE against a key that already has an associated timeout will not change the timeout of the key, but will just return 0. If instead the key does not have a timeout associated, the timeout will be set and EXPIRE will return 1.
Redis does not constantly monitor keys that are going to expire. Keys are expired simply when some client tries to access a key, and the key is found to be timed out.

Of course this is not enough, as there are expired keys that will never be accessed again. These keys should be expired anyway, so once every second Redis tests a few keys at random among the keys with an expire set. All the keys that are already expired are deleted from the keyspace.

Formerly, a fixed number of keys was tested each time (100 by default). So if you had clients setting keys with a very short expire faster than 100 per second, memory continued to grow. When you stopped inserting new keys, memory started to be freed, 100 keys every second in the best conditions. Under a peak Redis would continue to use more and more RAM even if most keys were expired in each sweep.
Now instead, each time Redis:

1. Tests 100 random keys among the keys with an expire set.
2. Deletes all the keys found to be expired.
3. If more than 25 keys were expired, it starts again from step 1.

This is a trivial probabilistic algorithm: basically the assumption is that our sample is representative of the whole key space, and we continue to expire until the percentage of keys that are likely to be expired is under 25%.

This means that at any given moment the maximum amount of already expired keys that are using memory is at most equal to the maximum number of write operations per second divided by 4.
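The sampling loop above can be sketched in Python (a simplified model, not the actual implementation; `expires` maps keys to their absolute expire times):

```python
import random

def expire_cycle(expires, now, sample_size=100):
    """One pass of the probabilistic expiry algorithm: sample keys with an
    expire set, delete the expired ones, and repeat while more than 25% of
    the sample was expired."""
    while expires:
        sample = random.sample(list(expires), min(sample_size, len(expires)))
        expired = [k for k in sample if expires[k] <= now]
        for k in expired:
            del expires[k]
        if len(expired) <= len(sample) // 4:   # stop once <= 25% of the sample expired
            break

expires = {f"key{i}": (0 if i % 2 else 10) for i in range(200)}  # half expired at now=1
expire_cycle(expires, now=1)
print(len(expires))  # the 100 keys with expire time 10 always survive
```

The loop is guaranteed to terminate: every iteration that does not break deletes more than a quarter of its sample, and the keyspace is finite.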
1: the timeout was set.
0: the timeout was not set since the key already has an associated timeout (this may happen only in Redis versions < 2.1.3; Redis >= 2.1.3 will happily update the timeout), or the key does not exist.
redis> set a 100
OK
redis> expire a 360
(integer) 1
redis> incr a
(integer) 1

I set a key to the value of 100, then set an expire of 360 seconds, and then incremented the key (before the 360 second timeout expired, of course). The obvious result would be 101; instead the key is set to the value of 1. Why? There is a very important reason involving the Append Only File and replication. Let's rework our example a bit, adding the notion of time to the mix:

SET a 100
EXPIRE a 5
... wait 10 seconds ...
INCR a

Imagine a Redis version that does not implement the "delete keys with an expire set on write operation" semantic. Running the above example with the 10 second pause will lead to 'a' being set to the value of 1, as it no longer exists when INCR is called 10 seconds later.
A common pattern is to RPUSH data to the computer_ID key. Don't want to save more than 1000 log lines per computer? Just issue a LTRIM computer_ID 0 999 command to trim the list after every push.

If background saving fails with a fork() error under Linux, try echo 1 > /proc/sys/vm/overcommit_memory :)

When the overcommit_memory setting is set to zero, fork will fail unless there is as much free RAM as required to really duplicate all the parent memory pages, with the result that if you have a Redis dataset of 3 GB and just 2 GB of free memory it will fail. Setting overcommit_memory to 1 tells Linux to relax and perform the fork in a more optimistic allocation fashion, and this is indeed what you want for Redis. A good source to better understand overcommit_memory and overcommit_ratio is this classic from Red Hat Magazine, "Understanding Virtual Memory": http://www.redhat.com/magazine/001nov04/features/vm/

Delete all the keys of all the existing databases, not just the currently selected one. This command never fails.
Delete all the keys of the currently selected DB. This command never fails.
Get the value of the specified key. If the key does not exist the special value 'nil' is returned. If the value stored at key is not a string an error is returned, because GET can only handle string values.
Returns the bit value at offset in the string value stored at key.

When offset is beyond the string length, the string is assumed to be a contiguous space with 0 bits. When key does not exist it is assumed to be an empty string, so offset is always out of range and the value is also assumed to be a contiguous space with 0 bits.
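The out-of-range rule can be modeled in a few lines of Python (a sketch of the semantics, not client code; like Redis, bit 0 is addressed as the most significant bit of the first byte):

```python
def getbit(value: bytes, offset: int) -> int:
    """Return the bit at offset in value; out-of-range offsets read as 0."""
    byte_index, bit_index = divmod(offset, 8)
    if byte_index >= len(value):
        return 0                               # beyond the string: assumed 0 bits
    return (value[byte_index] >> (7 - bit_index)) & 1

# "a" is 0b01100001
print(getbit(b"a", 0))    # 0
print(getbit(b"a", 1))    # 1
print(getbit(b"a", 100))  # 0 (beyond the string length)
```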
GETSET is an atomic "set this value and return the old value" command. Set key to the string value and return the old value stored at key. The string can't be longer than 1073741824 bytes (1 GB).

GETSET can be used together with INCR for counting with atomic reset when a given condition arises. For example a process may call INCR against the key mycounter every time some event occurs, but from time to time we need to get the value of the counter and reset it to zero atomically, using GETSET mycounter 0.
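The pattern can be modeled in Python, with a lock standing in for Redis's serialized command execution (a sketch of the semantics, not client code):

```python
import threading

class Counter:
    """Sketch of the INCR + GETSET pattern: the lock plays the role of Redis
    executing one command at a time, making get-and-reset atomic."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def incr(self):
        with self._lock:
            self._value += 1
            return self._value

    def getset(self, new_value):
        with self._lock:
            old, self._value = self._value, new_value
            return old

c = Counter()
for _ in range(5):
    c.incr()                  # one INCR per event
print(c.getset(0))            # 5 -> events counted so far, counter atomically reset
print(c.incr())               # 1
```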
Internally a Redis string is stored using the sdshdr structure:

struct sdshdr {
    long len;
    long free;
    char buf[];
};

The buf character array stores the actual string.

sds is defined in sds.h to be a synonym for a character pointer:

typedef char *sds;
The sdsnewlen function defined in sds.c creates a new Redis string:

sds sdsnewlen(const void *init, size_t initlen) {
    struct sdshdr *sh;

    sh = zmalloc(sizeof(struct sdshdr)+initlen+1);
#ifdef SDS_ABORT_ON_OOM
    if (sh == NULL) sdsOomAbort();
#else
    if (sh == NULL) return NULL;
#endif
    sh->len = initlen;
    sh->free = 0;
    if (initlen) {
        if (init) memcpy(sh->buf, init, initlen);
        else memset(sh->buf,0,initlen);
    }
    sh->buf[initlen] = '\0';
    return (char*)sh->buf;
}

Remember, a Redis string is a variable of type struct sdshdr. But sdsnewlen returns a character pointer!! That's a trick that needs some explanation. Suppose I create a Redis string using sdsnewlen like below:

sdsnewlen("redis", 5);

This creates a new variable of type struct sdshdr, allocating memory for the len and free fields as well as for the buf character array.

sh = zmalloc(sizeof(struct sdshdr)+initlen+1); // initlen is length of init argument.

After sdsnewlen successfully creates a Redis string the result is something like:

-----------
|5|0|redis|
-----------
^   ^
sh  sh->buf

sdsnewlen returns sh->buf to the caller. What if you need to free the Redis string pointed by sh? You want the pointer sh, but you only have the pointer sh->buf. Can you get the pointer sh from sh->buf? Yes, with pointer arithmetic: subtracting the size of struct sdshdr from sh->buf gives you back the pointer sh. Look at the sdslen function to see this trick at work:

size_t sdslen(const sds s) {
    struct sdshdr *sh = (void*) (s-(sizeof(struct sdshdr)));
    return sh->len;
}

Knowing this trick you can easily go through the rest of the functions in sds.c.
Remove the specified field from a hash stored at key.

If the field was present in the hash it is deleted and 1 is returned, otherwise 0 is returned and no operation is performed.

Return 1 if the hash stored at key contains the specified field.

Return 0 if the key is not found or the field is not present.

If key holds a hash, retrieve the value associated with the specified field.

If the field is not found or the key does not exist, a special 'nil' value is returned.

HKEYS returns all the field names contained in a hash, HVALS all the associated values, while HGETALL returns both the fields and values in the form field1, value1, field2, value2, ..., fieldN, valueN.
Increment the number stored at field in the hash at key by value. If key does not exist, a new key holding a hash is created. If field does not exist, the value is set to 0 before applying the operation.

The range of values supported by HINCRBY is limited to 64 bit signed integers. For example:

HINCRBY key field 1 (increment by one)
HINCRBY key field -1 (decrement by one, just like the DECR command)
HINCRBY key field -10 (decrement by 10)
Return the number of entries (fields) contained in the hash stored at key. If the specified key does not exist, 0 is returned, assuming an empty hash.

Retrieve the values associated with the specified fields.

If some of the specified fields do not exist, nil values are returned. Non existing keys are considered like empty hashes.

Set the respective fields to the respective values. HMSET replaces old values with new values.

If key does not exist, a new key holding a hash is created.

Set the specified hash field to the specified value.

If key does not exist, a new key holding a hash is created.

If the field already exists, and HSET just produced an update of the value, 0 is returned, otherwise if a new field is created 1 is returned.

Set the specified hash field to the specified value, if the field does not exist yet.

If key does not exist, a new key holding a hash is created.

If the field already exists, this operation has no effect and returns 0. Otherwise, the field is set to value and the operation returns 1.
Increment or decrement the number stored at key by one. If the key does not exist or contains a value of a wrong type, set the key to the value of "0" before performing the increment or decrement operation.

INCRBY and DECRBY work just like INCR and DECR, but instead of incrementing/decrementing by 1, they increment/decrement by the given integer.

INCR commands are limited to 64 bit signed integers.

Note: this is actually a string operation; that is, in Redis there are no "integer" types. The string stored at the key is simply parsed as a base 10 64 bit signed integer, incremented, and then converted back to a string.
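That string-based behavior can be sketched in Python (a model of the semantics, not client code; `db` is a plain dict standing in for the keyspace):

```python
def incrby(db, key, amount=1):
    """INCR-style sketch: the stored value is always a string, parsed as a
    base 10 64 bit signed integer, incremented, and stored back as a string."""
    value = int(db.get(key, "0")) + amount     # missing key behaves like "0"
    if not -2**63 <= value < 2**63:
        raise OverflowError("increment or decrement would overflow 64 bits")
    db[key] = str(value)
    return value

db = {"counter": "100"}
print(incrby(db, "counter"))        # 101
print(db["counter"])                # '101' -- still stored as a string
print(incrby(db, "missing", -10))   # -10
```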
The info command returns different information and statistics about the server in an format that's simple to parse by computers and easy to red by huamns.-
-edis_version:0.07 -connected_clients:1 -connected_slaves:0 -used_memory:3187 -changes_since_last_save:0 -last_save_time:1237655729 -total_connections_received:1 -total_commands_processed:1 -uptime_in_seconds:25 -uptime_in_days:0 -All the fields are in the form
field:value
used_memory
is returned in bytes, and is the total number of bytes allocated by the program using malloc
.uptime_in_days
is redundant since the uptime in seconds contains already the full uptime information, this field is only mainly present for humans.changes_since_last_save
does not refer to the number of key changes, but to the number of operations that produced some kind of change in the dataset.-$ ./redis-cli set mykey "my binary safe value" -OK -$ ./redis-cli get mykey -my binary safe value -As you can see, using the SET command and the GET command it is trivial to set values to strings and have these strings returned back.
-$ ./redis-cli set counter 100 -OK -$ ./redis-cli incr counter -(integer) 101 -$ ./redis-cli incr counter -(integer) 102 -$ ./redis-cli incrby counter 10 -(integer) 112 -The INCR command parses the string value as an integer, increments it by one, and finally sets the obtained value as the new string value. There are other similar commands like INCRBY, DECR and DECRBY. Actually internally it's always the same command, acting in a slightly different way.
-$ ./redis-cli rpush messages "Hello how are you?" -OK -$ ./redis-cli rpush messages "Fine thanks. I'm having fun with Redis" -OK -$ ./redis-cli rpush messages "I should look into this NOSQL thing ASAP" -OK -$ ./redis-cli lrange messages 0 2 -1. Hello how are you? -2. Fine thanks. I'm having fun with Redis -3. I should look into this NOSQL thing ASAP -Note that LRANGE takes two indexes, the first and the last element of the range to return. Both the indexes can be negative to tell Redis to start counting from the end, so -1 is the last element, -2 is the penultimate element of the list, and so forth.
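The index arithmetic behind LRANGE can be sketched in Python. This is an illustrative simulation (the function name is ours, and a plain list stands in for a Redis list):

```python
def lrange(lst, start, end):
    n = len(lst)
    # Negative indexes count from the end: -1 is the last element.
    if start < 0:
        start = max(n + start, 0)
    if end < 0:
        end = max(n + end, -1)
    # Redis ranges are inclusive of both endpoints.
    return lst[start:end + 1]
```

Out-of-range indexes fall out naturally from Python slicing: a start past the end yields an empty list, and an end past the end is clamped to the last element.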
-$ ./redis-cli incr next.news.id -(integer) 1 -$ ./redis-cli set news:1:title "Redis is simple" -OK -$ ./redis-cli set news:1:url "http://code.google.com/p/redis" -OK -$ ./redis-cli lpush submitted.news 1 -OK -We obtained a unique incremental ID for our news object just by incrementing a key, then used this ID to create the object, setting a key for every field in the object. Finally the ID of the new object was pushed on the submitted.news list.
-$ ./redis-cli sadd myset 1 -(integer) 1 -$ ./redis-cli sadd myset 2 -(integer) 1 -$ ./redis-cli sadd myset 3 -(integer) 1 -$ ./redis-cli smembers myset -1. 3 -2. 1 -3. 2 -I added three elements to my set and told Redis to return back all the elements. As you can see they are not sorted.
-$ ./redis-cli sismember myset 3 -(integer) 1 -$ ./redis-cli sismember myset 30 -(integer) 0 -"3" is a member of the set, while "30" is not. Sets are very good in order to express relations between objects. For instance we can easily use Redis Sets in order to implement tags.
-$ ./redis-cli sadd news:1000:tags 1 -(integer) 1 -$ ./redis-cli sadd news:1000:tags 2 -(integer) 1 -$ ./redis-cli sadd news:1000:tags 5 -(integer) 1 -$ ./redis-cli sadd news:1000:tags 77 -(integer) 1 -$ ./redis-cli sadd tag:1:objects 1000 -(integer) 1 -$ ./redis-cli sadd tag:2:objects 1000 -(integer) 1 -$ ./redis-cli sadd tag:5:objects 1000 -(integer) 1 -$ ./redis-cli sadd tag:77:objects 1000 -(integer) 1 -To get all the tags for a given object is trivial:
-$ ./redis-cli sinter tag:1:objects tag:2:objects tag:10:objects tag:27:objects -... no result in our dataset composed of just one object ;) ... -Look at the Command Reference to discover other Set related commands, there are a bunch of interesting ones. Also make sure to check the SORT command as both Redis Sets and Lists are sortable.
-$ ./redis-cli zadd hackers 1940 "Alan Kay" -(integer) 1 -$ ./redis-cli zadd hackers 1953 "Richard Stallman" -(integer) 1 -$ ./redis-cli zadd hackers 1965 "Yukihiro Matsumoto" -(integer) 1 -$ ./redis-cli zadd hackers 1916 "Claude Shannon" -(integer) 1 -$ ./redis-cli zadd hackers 1969 "Linus Torvalds" -(integer) 1 -$ ./redis-cli zadd hackers 1912 "Alan Turing" -(integer) 1 -For sorted sets it's trivial to return these hackers sorted by their birth year because they are already sorted. Sorted sets are implemented via a dual-ported data structure containing both a skip list and a hash table, so every time we add an element Redis performs an O(log(N)) operation. That's good, but when we ask for sorted elements Redis does not have to do any work at all, it's already all sorted:
-$ ./redis-cli zrange hackers 0 -1 -1. Alan Turing -2. Claude Shannon -3. Alan Kay -4. Richard Stallman -5. Yukihiro Matsumoto -6. Linus Torvalds -Didn't know that Linus was younger than Yukihiro btw ;)
-$ ./redis-cli zrevrange hackers 0 -1 -1. Linus Torvalds -2. Yukihiro Matsumoto -3. Richard Stallman -4. Alan Kay -5. Claude Shannon -6. Alan Turing -A very important note, ZSets have just a "default" ordering but you are still free to call the SORT command against sorted sets to get a different ordering (but this time the server will waste CPU). An alternative for having multiple orders is to add every element in multiple sorted sets at the same time.
-$ ./redis-cli zrangebyscore hackers -inf 1950 -1. Alan Turing -2. Claude Shannon -3. Alan Kay -We asked Redis to return all the elements with a score between negative infinity and 1950 (both extremes are included).
-$ ./redis-cli zremrangebyscore hackers 1940 1960 -(integer) 2 -ZREMRANGEBYSCORE is not the best command name, but it can be very useful, and returns the number of removed elements.
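Because the elements are kept ordered by score, a range-by-score query is just two binary searches plus a slice. A hypothetical Python sketch, using a sorted list of (score, member) pairs instead of the skip list Redis actually uses:

```python
import bisect

def zrangebyscore(zset, lo, hi):
    # zset is a list of (score, member) pairs kept sorted by score.
    scores = [score for score, _ in zset]
    left = bisect.bisect_left(scores, lo)    # first index with score >= lo
    right = bisect.bisect_right(scores, hi)  # first index with score > hi
    return [member for _, member in zset[left:right]]

hackers = sorted([(1940, "Alan Kay"), (1953, "Richard Stallman"),
                  (1965, "Yukihiro Matsumoto"), (1916, "Claude Shannon"),
                  (1969, "Linus Torvalds"), (1912, "Alan Turing")])
```

Passing `float("-inf")` as the lower bound reproduces the `-inf` extreme of the redis-cli example above.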
Returns all the keys matching the glob-style pattern as space separated strings. For example if you have in the database the keys "foo" and "foobar" the command "KEYS foo*" will return "foo foobar".
-Note that while the time complexity for this operation is O(n), the constant times are pretty low. For example Redis running on an entry level laptop can scan a 1 million keys database in 40 milliseconds. Still it's better to consider this one of the slow commands that may ruin the DB performance if not used with care.
In other words this command is intended only for debugging and *special* operations like creating a script to change the DB schema. Don't use it in your normal code. Use Redis Sets in order to group together a subset of objects.
Glob style pattern examples:
* h?llo will match hello, hallo and hhllo
* h*llo will match hllo and heeeello
* h[ae]llo will match hello and hallo, but not hillo
Use \ to escape special chars if you want to match them verbatim.
Return value
-Multi bulk reply
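Python's fnmatch module implements a very similar glob dialect, so the patterns above can be tried out directly. This is a rough illustration: Redis's matcher is its own implementation, so edge cases may differ, and the `keys` helper is ours, operating on a dict standing in for the keyspace:

```python
from fnmatch import fnmatchcase

def keys(db, pattern):
    # Return every key in the simulated keyspace matching the glob pattern.
    return sorted(k for k in db if fnmatchcase(k, pattern))
```

For example, against a keyspace holding "foo", "foobar" and "bar", the pattern "foo*" selects "foo" and "foobar", as in the KEYS example above.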
Return the UNIX TIME of the last DB save executed with success. A client may check if a BGSAVE command succeeded by reading the LASTSAVE value, then issuing a BGSAVE command and checking at regular intervals every N seconds if LASTSAVE changed.-
Return the specified element of the list stored at the specified key. 0 is the first element, 1 the second and so on. Negative indexes are supported, for example -1 is the last element, -2 the penultimate and so on.
If the value stored at key is not of list type an error is returned. If the index is out of range a 'nil' reply is returned.
Note that even if the average time complexity is O(n), asking for the first or the last element of the list is O(1).
-LPUSH mylist a # now the list is "a" -LPUSH mylist b # now the list is "b","a" -RPUSH mylist c # now the list is "b","a","c" (RPUSH was used this time) --The resulting list stored at mylist will contain the elements "b","a","c".
Return the length of the list stored at the specified key. If the key does not exist zero is returned (the same behaviour as for empty lists). If the value stored at key is not a list an error is returned.-
-The length of the list. -- -
Atomically return and remove the first (LPOP) or last (RPOP) element of the list. For example if the list contains the elements "a","b","c", LPOP will return "a" and the list will become "b","c".
If the key does not exist or the list is already empty the special value 'nil' is returned.
Indexes out of range will not produce an error: if start is over the end of the list, or start > end, an empty list is returned.
If end is over the end of the list Redis will treat it just like the last element of the list.
Remove the first count occurrences of the value element from the list. If count is zero all the elements are removed. If count is negative, elements are removed from tail to head instead of from head to tail, which is the normal behaviour. So for example LREM with count -2 and _hello_ as value to remove against the list (a,b,c,hello,x,hello,hello) will leave the list (a,b,c,hello,x). The number of removed elements is returned as an integer, see below for more information about the returned value. Note that non existing keys are considered like empty lists by LREM, so LREM against non existing keys will always return 0.
-The number of removed elements if the operation succeeded -- -
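The count semantics are easy to get wrong, so here is a hypothetical Python simulation of LREM's three cases (positive, negative, and zero count), operating on a plain list:

```python
def lrem(lst, count, value):
    removed = 0
    if count >= 0:
        # Head-to-tail scan; count == 0 means "no limit".
        limit = count if count > 0 else len(lst)
        i = 0
        while i < len(lst) and removed < limit:
            if lst[i] == value:
                del lst[i]
                removed += 1
            else:
                i += 1
    else:
        # Negative count: scan from the tail instead.
        i = len(lst) - 1
        while i >= 0 and removed < -count:
            if lst[i] == value:
                del lst[i]
                removed += 1
            i -= 1
    return removed
```

Running it against the (a,b,c,hello,x,hello,hello) example from the text with count -2 removes the two trailing "hello" elements and returns 2.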
Set the list element at index (see LINDEX for information about the _index_ argument) with the new value. Out of range indexes will generate an error. Note that setting the first or last elements of the list is O(1).-
Similarly to other list commands accepting indexes, the index can be negative to access elements starting from the end of the list. So -1 is the last element, -2 is the penultimate, and so forth.
Trim an existing list so that it will contain only the specified range of elements. Start and end are zero-based indexes. 0 is the first element of the list (the list head), 1 the next element and so on.
For example LTRIM foobar 0 2 will modify the list stored at the foobar key so that only the first three elements of the list will remain.
_start_ and _end_ can also be negative numbers indicating offsets from the end of the list. For example -1 is the last element of the list, -2 the penultimate element and so on.
Indexes out of range will not produce an error: if start is over the end of the list, or start > end, an empty list is left as value. If end is over the end of the list Redis will treat it just like the last element of the list.
Hint: the obvious use of LTRIM is together with LPUSH/RPUSH. For example:-
- LPUSH mylist <someelement> - LTRIM mylist 0 99 -
The above two commands will push elements in the list taking care that the list will not grow without limits. This is very useful when using Redis to store logs for example. It is important to note that when used in this way LTRIM is an O(1) operation because in the average case just one element is removed from the tail of the list.-
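The LPUSH + LTRIM capped-log pattern can be simulated in a few lines of Python. This is illustrative only; in real code both commands would be sent to Redis, and the function name is ours:

```python
def log_push(lst, entry, maxlen=100):
    # LPUSH: new entries go to the head of the list.
    lst.insert(0, entry)
    # LTRIM 0 maxlen-1: keep only the newest maxlen entries.
    del lst[maxlen:]
```

After any number of pushes, the list never holds more than `maxlen` entries, with the most recent one at index 0.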
Get the values of all the specified keys. If one or more keys don't exist or are not of type String, a 'nil' value is returned instead of the value of the specified key, but the operation never fails.-
-$ ./redis-cli set foo 1000 -+OK -$ ./redis-cli set bar 2000 -+OK -$ ./redis-cli mget foo bar -1. 1000 -2. 2000 -$ ./redis-cli mget foo bar nokey -1. 1000 -2. 2000 -3. (nil) -$ -- -
MONITOR is a debugging command that outputs the whole sequence of commands received by the Redis server. It is very handy in order to understand what is happening in the database. This command can be used directly via telnet.-
-% telnet 127.0.0.1 6379 -Trying 127.0.0.1... -Connected to segnalo-local.com. -Escape character is '^]'. -MONITOR -+OK -monitor -keys * -dbsize -set x 6 -foobar -get x -del x -get x -set key_x 5 -hello -set key_y 5 -hello -set key_z 5 -hello -set foo_a 5 -hello -
The ability to see all the requests processed by the server is useful in order to spot bugs in the application, both when using Redis as a database and as a distributed caching system.-
In order to end a monitoring session just issue a QUIT command by hand.-
Move the specified key from the currently selected DB to the specified destination DB. Note that this command returns 1 only if the key was successfully moved, and 0 if the target key was already there or if the source key was not found at all, so it is possible to use MOVE as a locking primitive.-
-1 if the key was moved -0 if the key was not moved because already present on the target DB or was not found in the current DB. -- -
Set the respective keys to the respective values. MSET will replace old values with new values, while MSETNX will not perform any operation at all even if just a single key already exists.
Because of this semantic MSETNX can be used in order to set different keys representing different fields of a unique logic object in a way that ensures that either all the fields or none at all are set.
Both MSET and MSETNX are atomic operations. This means that for instance if the keys A and B are modified, another client talking to Redis can either see the changes to both A and B at once, or no modification at all.
-1 if all the keys were set -0 if no key was set (at least one key already existed) --
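A hypothetical Python sketch of the MSETNX check makes the all-or-nothing semantic concrete. Here `db` is a dict standing in for the keyspace; the atomicity comes for free because nothing runs between the check and the update, just as Redis's single-threaded command execution guarantees:

```python
def msetnx(db, pairs):
    # If even a single key already exists, nothing at all is written.
    if any(key in db for key in pairs):
        return 0
    db.update(pairs)
    return 1
```

A second call that overlaps any existing key returns 0 and leaves every key untouched, so partial objects can never be observed.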
-?> r.multi -=> "OK" ->> r.incr "foo" -=> "QUEUED" ->> r.incr "bar" -=> "QUEUED" ->> r.incr "bar" -=> "QUEUED" ->> r.exec -=> [1, 1, 2] --As it is possible to see from the session above, MULTI returns an "array" of -replies, where every element is the reply of a single command in the -transaction, in the same order the commands were queued.
-Trying 127.0.0.1... -Connected to localhost. -Escape character is '^]'. -MULTI -+OK -SET a 3 -abc -+QUEUED -LPOP a -+QUEUED -EXEC -*2 -+OK --ERR Operation against a key holding the wrong kind of value --MULTI returned a two elements bulk reply where one is an +OK -code and one is a -ERR reply. It's up to the client lib to find a sensible -way to provide the error to the user.
IMPORTANT: even when a command raises an error, all the other commands in the queue are processed. Redis will NOT stop the processing of commands once an error is found.-Another example, again using the write protocol with telnet, shows how -syntax errors are reported ASAP instead: -
-MULTI -+OK -INCR a b c --ERR wrong number of arguments for 'incr' command --This time due to the syntax error the "bad" INCR command is not queued -at all.
-?> r.set("foo",1) -=> true ->> r.multi -=> "OK" ->> r.incr("foo") -=> "QUEUED" ->> r.discard -=> "OK" ->> r.get("foo") -=> "1" -
-val = GET mykey -val = val + 1 -SET mykey $val --This will work reliably only if we have a single client performing the operation in a given time. -If multiple clients will try to increment the key about at the same time -there will be a race condition. For instance client A and B will read the -old value, for instance, 10. The value will be incremented to 11 by both -the clients, and finally SET as the value of the key. So the final value -will be "11" instead of "12".
-WATCH mykey -val = GET mykey -val = val + 1 -MULTI -SET mykey $val -EXEC --Using the above code, if there are race conditions and another client -modified the result of val in the time between our call to WATCH and -our call to EXEC, the transaction will fail.
-WATCH zset -ele = ZRANGE zset 0 0 -MULTI -ZREM zset ele -EXEC --If EXEC fails (returns a nil value) we just re-iterate the operation.
-The result of a MULTI/EXEC command is a multi bulk reply where every element is the return value of every command in the atomic transaction. -If a MULTI/EXEC transaction is aborted because of WATCH detected modified keys, a Null Multi Bulk reply is returned. -
-WATCH foo -old_value = HGET foo field -MULTI -HSET foo field new_value -EXEC -
-WATCH foo -score = ZSCORE foo bar -IF score != NIL - MULTI - ZADD foo 1 bar - EXEC -ENDIF --
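The WATCH/MULTI/EXEC pattern is essentially a compare-and-set retry loop. Below is a hypothetical single-process simulation in Python: each key carries a version number, WATCH records it, and EXEC applies the queued write only if the version is unchanged. In the real server the conflict comes from another client; here the check never actually fails, it only shows the shape of the loop:

```python
class MiniStore:
    """Toy keyspace where every write bumps a per-key version."""
    def __init__(self):
        self.data = {}
        self.version = {}

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value
        self.version[key] = self.version.get(key, 0) + 1

def watched_incr(store, key):
    while True:
        watched = store.version.get(key, 0)        # WATCH mykey
        value = int(store.get(key) or "0") + 1     # val = GET mykey; val += 1
        if store.version.get(key, 0) == watched:   # MULTI ... EXEC
            store.set(key, str(value))
            return value
        # EXEC returned nil: a writer touched the key, so retry.
```

The point of the loop is that a failed EXEC is not an error, just a signal to re-read and retry, exactly as in the ZRANGE/ZREM example above.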
-*<number of arguments> CR LF -$<number of bytes of argument 1> CR LF -<argument data> CR LF -... -$<number of bytes of argument N> CR LF -<argument data> CR LF -See the following example:
-*3 -$3 -SET -$5 -mykey -$7 -myvalue -This is how the above command looks as a quoted string, so that it is possible to see the exact value of every byte in the query:
-"*3\r\n$3\r\nSET\r\n$5\r\nmykey\r\n$7\r\nmyvalue\r\n" -As you will see in a moment this format is also used in Redis replies. -The format used for every argument "$6\r\nmydata\r\n" is called a Bulk Reply. -The exact same format used in the unified request protocol is also what Redis uses to return lists of items, and is called a Multi Bulk Reply. It is just the sum of N different -Bulk Replies prefixed by a
*<argc>\r\n
string where <argc>
is the number of arguments (Bulk Replies) that will follow.
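Building a unified-protocol request is mechanical. A minimal Python encoder (the function name is ours):

```python
def encode_command(*args):
    # *<argc>, then each argument as a $<len>-prefixed binary-safe blob.
    parts = [b"*%d\r\n" % len(args)]
    for arg in args:
        data = arg if isinstance(arg, bytes) else str(arg).encode()
        parts.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(parts)
```

Encoding `SET mykey myvalue` reproduces the quoted string shown above byte for byte, and because every argument is length-prefixed, arguments may contain spaces, newlines, or any other binary data.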
"-+OK -The client library should return everything after the "+", that is, the string "OK" in the example.
-C: GET mykey -S: $6 -S: foobar -The server sends as the first line a "$" byte followed by the number of bytes -of the actual reply, followed by CRLF, then the actual data bytes are sent, -followed by additional two bytes for the final CRLF. -The exact sequence sent by the server is:
-"$6\r\nfoobar\r\n" -If the requested value does not exist the bulk reply will use the special -value -1 as data length, example:
-C: GET nonexistingkey -S: $-1 -The client library API should not return an empty string, but a nil object, when the requested object does not exist. -For example a Ruby library should return 'nil' while a C library should return -NULL (or set a special flag in the reply object), and so forth.
*
. Example:-C: LRANGE mylist 0 3 -S: *4 -S: $3 -S: foo -S: $3 -S: bar -S: $5 -S: Hello -S: $5 -S: World -As you can see the multi bulk reply is exactly the same format used in order -to send commands to the Redis server using the unified protocol.
-C: LRANGE nokey 0 1 -S: *-1 -A client library API SHOULD return a nil object and not an empty list when this -happens. This makes possible to distinguish between empty list and other error conditions (for instance a timeout condition in the BLPOP command).
-S: *3 -S: $3 -S: foo -S: $-1 -S: $3 -S: bar -The second element is nil. The client library should return something like this:
-["foo",nil,"bar"] -
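All five reply types can be parsed with one small recursive function. A hypothetical Python sketch: it consumes one complete reply from a byte buffer and returns the value plus the unconsumed remainder (error replies are surfaced as values here rather than raised, to keep the sketch short):

```python
def parse_reply(buf):
    # Returns (value, remaining_bytes) for one complete reply.
    line, rest = buf.split(b"\r\n", 1)
    kind, payload = line[:1], line[1:]
    if kind == b"+":                      # status reply
        return payload.decode(), rest
    if kind == b"-":                      # error reply (kept as a value here)
        return ValueError(payload.decode()), rest
    if kind == b":":                      # integer reply
        return int(payload), rest
    if kind == b"$":                      # bulk reply; $-1 means nil
        n = int(payload)
        if n == -1:
            return None, rest
        return rest[:n], rest[n + 2:]
    if kind == b"*":                      # multi bulk reply; *-1 means nil
        n = int(payload)
        if n == -1:
            return None, rest
        items = []
        for _ in range(n):
            item, rest = parse_reply(rest)
            items.append(item)
        return items, rest
    raise ValueError("unknown reply type %r" % kind)
```

Fed the three-element reply above, it yields `[b"foo", None, b"bar"]`, matching the `["foo",nil,"bar"]` a client library is expected to return.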
* Inline commands: simple commands where arguments are just space separated strings. No binary safeness is possible. * Bulk commands: bulk commands are exactly like inline commands, but the last argument is handled in a special way in order to allow for a binary-safe last argument.-
-C: PING -S: +PONG -The following is another example of an INLINE command returning an integer:
-C: EXISTS somekey -S: :0 -Since 'somekey' does not exist the server returned ':0'.
-C: SET mykey 6 -C: foobar -S: +OK -The last argument of the command is '6'. This specifies the number of DATA -bytes that will follow, that is, the string "foobar". Note that even these bytes are terminated by two additional bytes of CRLF.
"SET mykey 6\r\nfoobar\r\n"-Redis has an internal list of which commands are inline and which are bulk, so you have to send these commands accordingly. It is strongly suggested to use the new Unified Request Protocol instead. -
SUBSCRIBE, UNSUBSCRIBE and PUBLISH commands implement thePublish/Subscribe messaging paradigm where (citing Wikipedia) senders (publishers) are not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be. Subscribers express interest in one or more channels, and only receive messages that are of interest, without knowledge of what (if any) publishers there are. This decoupling of publishers and subscribers can allow for greater scalability and a more dynamic network topology.-
For instance in order to subscribe to the channels foo and bar the client will issue the SUBSCRIBE command followed by the names of the channels.
-SUBSCRIBE foo bar --
All the messages sent by other clients to these channels will be pushed by the Redis server to all the subscribed clients, in the form of a three element multi bulk reply, where the first element is the message type, the second the originating channel, and the third argument the message payload.
A client subscribed to 1 or more channels should NOT issue commands other than SUBSCRIBE and UNSUBSCRIBE, but can subscribe or unsubscribe to other channels dynamically.
The replies to the SUBSCRIBE and UNSUBSCRIBE operations are sent in the form of messages, so that the client can just read a coherent stream of messages where the first element indicates the kind of message.
Messages are in the form of multi bulk replies with three elements. The first element is the kind of message:
-SUBSCRIBE first second -*3 -$9 -subscribe -$5 -first -:1 -*3 -$9 -subscribe -$6 -second -:2 --at this point from another client we issue a PUBLISH operation against the channel named "second". This is what the first client receives: -
-*3 -$7 -message -$6 -second -$5 -Hello --Now the client unsubscribes itself from all the channels using the UNSUBSCRIBE command without additional arguments: -
-UNSUBSCRIBE -*3 -$11 -unsubscribe -$6 -second -:1 -*3 -$11 -unsubscribe -$5 -first -:0 --
-PSUBSCRIBE news.* --Will receive all the messages sent to the channels news.art.figurative, news.music.jazz and so forth. All the glob style patterns are valid, so multiple wild cards are supported.
-SUBSCRIBE foo -PSUBSCRIBE f* --In the above example, if a message is sent to the foo channel, the client will receive two messages, one of type "message" and one of type "pmessage". -
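The overlap between plain subscriptions and pattern subscriptions can be sketched with a small dispatch function. This is a hypothetical simulation; Python's fnmatch stands in for Redis's glob matcher, and the function name is ours:

```python
from fnmatch import fnmatchcase

def deliver(channels, patterns, channel, payload):
    # One "message" per exact subscription, plus one "pmessage" per
    # matching pattern subscription: a client holding both gets both.
    out = []
    if channel in channels:
        out.append(("message", channel, payload))
    for pattern in patterns:
        if fnmatchcase(channel, pattern):
            out.append(("pmessage", pattern, channel, payload))
    return out
```

With the subscriptions from the example (channel "foo" plus pattern "f*"), publishing to "foo" yields exactly two deliveries, one of each kind.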
-$ wget http://redis.googlecode.com/files/redis-1.02.tar.gz -The unstable source code, with more features but not ready for production, can be downloaded using git:
-$ git clone git://github.com/antirez/redis.git -
-$ tar xvzf redis-1.02.tar.gz -$ cd redis-1.02 -$ make -In order to test if the Redis server is working well in your computer make sure to run
make test
and check that all the tests pass.-$ ./redis-server -With the default configuration Redis will log to the standard output so you can check what happens. Later, you can change the default settings.
make
and it is called redis-cli
For instance to set a key and read back the value use the following:-$ ./redis-cli set mykey somevalue -OK -$ ./redis-cli get mykey -somevalue -What about adding elements to a list:
-$ ./redis-cli lpush mylist firstvalue -OK -$ ./redis-cli lpush mylist secondvalue -OK -$ ./redis-cli lpush mylist thirdvalue -OK -$ ./redis-cli lrange mylist 0 -1 -1. thirdvalue -2. secondvalue -3. firstvalue -$ ./redis-cli rpop mylist -firstvalue -$ ./redis-cli lrange mylist 0 -1 -1. thirdvalue -2. secondvalue -
Ask the server to silently close the connection.
-redis> lpush programming_languages C -OK -redis> lpush programming_languages Ruby -OK -redis> rpush programming_languages Python -OK -redis> rpop programming_languages -Python -redis> lpop programming_languages -Ruby -More complex operations are available for each data type as well. Continuing with lists, you can get a range of elements with LRANGE (O(start+n)) or trim the list with LTRIM (O(n)):
-redis> lpush cities NYC -OK -redis> lpush cities SF -OK -redis> lpush cities Tokyo -OK -redis> lpush cities London -OK -redis> lpush cities Paris -OK -redis> lrange cities 0 2 -1. Paris -2. London -3. Tokyo -redis> ltrim cities 0 1 -OK -redis> lpop cities -Paris -redis> lpop cities -London -redis> lpop cities -(nil) -You can also add and remove elements from a set, and perform intersections, unions, and differences.
slaveof 192.168.1.100 6379
. We provide a Replication Howto if you want to know more about this feature../redis-server /etc/redis.conf-This is NOT required. The server will start even without a configuration file -using a default built-in configuration.
-$ telnet localhost 6379 -Trying 127.0.0.1... -Connected to localhost. -Escape character is '^]'. -SET foo 3 -bar -+OK -The first line we sent to the server is "set foo 3". This means "set the key -foo with the following three bytes I'll send you". The following line is -the "bar" string, that is, the three bytes. So the effect is to set the -key "foo" to the value "bar". Very simple!
-GET foo -$3 -bar -Ok that's very similar to 'set', just the other way around. We sent "get foo", -the server replied with a first line that is just the $ character followed by -the number of bytes the value stored at key contained, followed by the actual -bytes. Again "\r\n" are appended both to the bytes count and the actual data. In Redis slang this is called a bulk reply.
-GET blabla -$-1 -When the key does not exist instead of the length, just the "$-1" string is sent. Since a -1 length of a bulk reply has no meaning it is used in order to specify a 'nil' value and distinguish it from a zero length value. Another way to check if a given key exists or not is indeed the EXISTS command:
-EXISTS nokey -:0 -EXISTS foo -:1 -As you can see the server replied ':0' the first time since 'nokey' does not -exist, and ':1' for 'foo', a key that actually exists. Replies starting with the colon character are integer replies.
Return a randomly selected key from the currently selected DB.-
-- SUNION, SDIFF, SUNIONSTORE, SDIFFSTORE commands implemented. (Aman Gupta, antirez) -- Non blocking replication. Now while N slaves are synchronizing, the master will continue to ask to client queries. (antirez) -- PHP client ported to PHP5 (antirez) -- FLUSHALL/FLUSHDB no longer sync on disk. Just increment the dirty counter by the number of elements removed, that will probably trigger a background saving operation (antirez) -- INCRBY/DECRBY now support 64bit increments, with tests (antirez) -- New fields in INFO command, bgsave_in_progress and replication related (antirez) -- Ability to specify a different file name for the DB (... can't remember ...) -- GETSET command, atomic GET + SET (antirez) -- SMOVE command implemented, atomic move-element across sets operation (antirez) -- Ability to work with huge data sets, tested up to 350 million keys (antirez) -- Warns if /proc/sys/vm/overcommit_memory is set to 0 on Linux. Also make sure to don't resize the hash tables while the child process is saving in order to avoid copy-on-write of memory pages (antirez) -- Infinite number of arguments for MGET and all the other commands (antirez) -- CPP client (Brian Hammond) -- DEL is now a vararg, IMPORTANT: memory leak fixed in loading DB code (antirez) -- Benchmark utility now supports random keys (antirez) -- Timestamp in log lines (antirez) -- Fix SINTER/UNIONSTORE to allow for &=/|= style operations (i.e. SINTERSTORE set1 set1 set2) (Aman Gupta) -- Partial qsort implemented in SORT command, only when both BY and LIMIT is used (antirez) -- Allow timeout=0 config to disable client timeouts (Aman Gupta) -- Alternative (faster/simpler) ruby client API compatible with Redis-rb (antirez) -- S*STORE now return the cardinality of the resulting set (antirez) -- TTL command implemented (antirez) -- Critical bug about glueoutputbuffers=yes fixed. Under load and with pipelining and clients disconnecting on the middle of the chat with the server, Redis could block. 
(antirez) -- Different replication fixes (antirez) -- SLAVEOF command implemented for remote replication management (antirez) -- Issue with redis-client used in scripts solved, now to check if the latest argument must come from standard input we do not check that stdin is or not a tty but the command arity (antirez) -- Warns if using the default config (antirez) -- maxclients implemented, see redis.conf for details (antirez) -- max bytes of a received command enlarged from 1k to 32k (antirez) --
-2009-06-16 client libraries updated (antirez) -2009-06-16 Better handling of background saving process killed or crashed (antirez) -2009-06-14 number of keys info in INFO command (Diego Rosario Brogna) -2009-06-14 SPOP documented (antirez) -2009-06-14 Clojure library (Ragnar Dahlén) -2009-06-10 It is now possible to specify - as config file name to read it from stdin (antirez) -2009-06-10 max bytes in an inline command raised to 1024*1024 bytes, in order to allow for very large MGETs and still protect from client crashes (antirez) -2009-06-08 SPOP implemented. Hash table resizing for Sets and Expires too. Changed the resize policy to play better with RANDOMKEY and SPOP. (antirez) -2009-06-07 some minor changes to the backtrace code (antirez) -2009-06-07 enable backtrace capabilities only for Linux and MacOSX (antirez) -2009-06-07 Dump a backtrace on sigsegv/sigbus, original coded (Diego Rosario Brogna) -2009-06-05 Avoid a busy loop while sending very large replies against very fast links, this allows to be more responsive with other clients even under a KEY * against the loopback interface (antirez) -2009-06-05 Kill the background saving process before performing SHUTDOWN to avoid races (antirez) -2009-06-05 LREM now returns :0 for non existing keys (antirez) -2009-06-05 added config.h for #ifdef business isolation, added fstat64 for Mac OS X (antirez) -2009-06-04 macosx specific zmalloc.c, uses malloc_size function in order to avoid to waste memory and time to put an additional header (antirez) -2009-06-04 DEBUG OBJECT implemented (antirez) -2009-06-03 shareobjectspoolsize implemented in reds.conf, in order to control the pool size when object sharing is on (antirez) -2009-05-27 maxmemory implemented (antirez) --
-fork.c && ./a.out -allocated: 1 MB, fork() took 0.000 -allocated: 10 MB, fork() took 0.001 -allocated: 100 MB, fork() took 0.007 -allocated: 1000 MB, fork() took 0.059 -allocated: 10000 MB, fork() took 0.460 -allocated: 20000 MB, fork() took 0.895 -allocated: 30000 MB, fork() took 1.327 -allocated: 40000 MB, fork() took 1.759 -allocated: 50000 MB, fork() took 2.190 -allocated: 60000 MB, fork() took 2.621 -allocated: 70000 MB, fork() took 3.051 -allocated: 80000 MB, fork() took 3.483 -allocated: 90000 MB, fork() took 3.911 -allocated: 100000 MB, fork() took 4.340 -allocated: 110000 MB, fork() took 4.770 -allocated: 120000 MB, fork() took 5.202 --
initServer
function defined in redis.c initializes the numerous fields of the redisServer
structure variable. One such field is the Redis event loop el
:-aeEventLoop *el -
initServer
initializes server.el
field by calling aeCreateEventLoop
defined in ae.c. The definition of aeEventLoop
is below:
--typedef struct aeEventLoop -{ - int maxfd; - long long timeEventNextId; - aeFileEvent events[AE_SETSIZE]; /* Registered events */ - aeFiredEvent fired[AE_SETSIZE]; /* Fired events */ - aeTimeEvent *timeEventHead; - int stop; - void *apidata; /* This is used for polling API specific data */ - aeBeforeSleepProc *beforesleep; -} aeEventLoop; -
aeCreateEventLoop
first mallocs the aeEventLoop structure and then calls ae_epoll.c:aeApiCreate.
-
-
aeApiCreate mallocs
aeApiState that has two fields -
epfd that holds the epoll file descriptor returned by a call to [http://man.cx/epoll_create%282%29 epoll_create] and
events that is of type
struct epoll_event define by the Linux epoll library. The use of the
events field will be described later.
-
-Next is ae.c:aeCreateTimeEvent
. But before that, initServer
calls anet.c:anetTcpServer
that creates and returns a listening descriptor. The descriptor listens on port 6379 by default. The returned listening descriptor is stored in the server.fd
field.aeCreateTimeEvent
accepts the following as parameters:server.el
in redis.cinitServer
calls aeCreateTimeEvent
to add a timed event to timeEventHead
field of server.el
. timeEventHead
is a pointer to a list of such timed events. The call to aeCreateTimeEvent
from redis.c:initServer
function is given below:-aeCreateTimeEvent(server.el /*eventLoop*/, 1 /*milliseconds*/, serverCron /*proc*/, NULL /*clientData*/, NULL /*finalizerProc*/); -
redis.c:serverCron
performs many operations that help keep Redis running properly. The essence of the aeCreateFileEvent
function is to execute the epoll_ctl system call, which adds a watch for an EPOLLIN
event on the listening descriptor created by anetTcpServer
and associates it with the epoll descriptor created by a call to aeCreateEventLoop
. This is what aeCreateFileEvent
does when called from redis.c:initServer
.initServer
passes the following arguments to aeCreateFileEvent
:
-aeCreateEventLoop
. The epoll descriptor is got from server.el. eventLoop->events
table and store extra information like the callback function.eventLoop->events[server.fd]->rfileProc
. ae.c:aeMain
called from redis.c:main
does the job of processing the event loop that is initialized in the previous phase.ae.c:aeMain
calls ae.c:aeProcessEvents
in a while loop that processes pending time and file events.ae.c:aeProcessEvents
looks for the time event that will be pending in the smallest amount of time by calling ae.c:aeSearchNearestTimer
on the event loop. In our case there is only one timer event in the event loop that was created by ae.c:aeCreateTimeEvent
. aeCreateTimeEvent
has by now probably elapsed because it had an expiry time of one millisecond. Since the timer has already expired, the seconds and microseconds fields of the tvp
timeval structure variable is initialized to zero. tvp
structure variable along with the event loop variable is passed to ae_epoll.c:aeApiPoll
.aeApiPoll
functions does a epoll_wait on the epoll descriptor and populates the eventLoop->fired
table with the details:
-aeApiPoll returns the number of such file events ready for operation. To put things in context: if any client has requested a connection, aeApiPoll will have noticed it and populated the eventLoop->fired table with an entry whose descriptor is the listening descriptor and whose mask is AE_READABLE.
aeProcessEvents then calls redis.c:acceptHandler, registered as the callback. acceptHandler executes accept(2) on the listening descriptor, returning a connected descriptor for the client. redis.c:createClient adds a file event on the connected descriptor through a call to ae.c:aeCreateFileEvent, like below:- if (aeCreateFileEvent(server.el, c->fd, AE_READABLE, - readQueryFromClient, c) == AE_ERR) { - freeClient(c); - return NULL; - } -
Here c is the redisClient structure variable, and c->fd is the connected descriptor.
Next, ae.c:aeProcessEvents calls ae.c:processTimeEvents, which iterates over the list of time events starting at eventLoop->timeEventHead and calls the registered callback for each. In this case it calls the only time event callback registered, that is, redis.c:serverCron. The callback returns the time in milliseconds after which it must be called again. This change is recorded via a call to ae.c:aeAddMilliSeconds and will be handled on the next iteration of the ae.c:aeMain while loop.-$ (echo -en "PING\r\nPING\r\nPING\r\n"; sleep 1) | nc localhost 6379 -+PONG -+PONG -+PONG --This time we are not paying the cost of RTT for every call, but only once for the three commands.
-require 'rubygems' -require 'redis' - -def bench(descr) - start = Time.now - yield - puts "#{descr} #{Time.now-start} seconds" -end - -def without_pipelining - r = Redis.new - 10000.times { - r.ping - } -end - -def with_pipelining - r = Redis.new - r.pipelined { - 10000.times { - r.ping - } - } -end - -bench("without pipelining") { - without_pipelining -} -bench("with pipelining") { - with_pipelining -} --Running the above simple script will produce these figures on my Mac OS X system, running over the loopback interface, where pipelining provides the smallest improvement as the RTT is already pretty low: -
-without pipelining 1.185238 seconds -with pipelining 0.250783 seconds --As you can see, using pipelining improved the transfer speed by a factor of five. -
-$ git clone git://github.com/antirez/redis.git -Initialized empty Git repository in /tmp/redis/.git/ -... --Then you can list all the tags matching 2.2-alpha with: -
-cd redis -$ git tag | grep 2.2-alpha -2.2-alpha0 -2.2-alpha1 -2.2-alpha2 --At this point you can just use git checkout tagname, substituting tagname with 2.2-alphaX, where X is the greatest number you see in the listing.
Atomically renames the key oldkey to newkey. If the source and destination names are the same an error is returned. If newkey already exists it is overwritten.-
Rename oldkey into newkey but fails if the destination key newkey already exists.-
-1 if the key was renamed -0 if the target key already exists -- -
<->
slave link goes down for some reason. If the master receives multiple concurrent slave synchronization requests, it performs a single background saving in order to serve all of them.-slaveof 192.168.1.1 6379 --Of course you need to replace 192.168.1.1 6379 with your master's IP address (or hostname) and port. - -
Atomically return and remove the last (tail) element of the srckey list, and push the element as the first (head) element of the dstkey list. For example, if the source list contains the elements "a","b","c" and the destination list contains the elements "foo","bar", after an RPOPLPUSH command the content of the two lists will be "a","b" and "c","foo","bar" respectively.-
If the key does not exist or the list is already empty the special value 'nil' is returned. If srckey and dstkey are the same, the operation is equivalent to removing the last element from the list and pushing it as the first element of the same list, so it can be considered a "list rotation" command.-
Redis lists are often used as queues in order to exchange messages between different programs. A program can add a message performing an LPUSH operation against a Redis list (we call this program a Producer), while another program (that we call Consumer) can process the messages performing an RPOP command in order to start reading the messages from the oldest.-
Unfortunately, if a Consumer crashes just after an RPOP operation the message gets lost. RPOPLPUSH solves this problem since the returned message is added to another "backup" list. The Consumer can later remove the message from the backup list using the LREM command once the message has been correctly processed.-
Another process, called Helper, can monitor the "backup" list to check for timed-out entries to push back into the main queue.-
Using RPOPLPUSH with the same source and destination key, a process can visit all the elements of an N-element List in O(N) without transferring the full list from the server to the client in a single LRANGE operation. Note that a process can traverse the list even while other processes are actively RPUSHing against the list, and still no element will be skipped.-
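The rotation and move behavior described above can be sketched in plain Python, using a dict of lists as a stand-in for the Redis keyspace (rpoplpush here is a hypothetical local helper, not a network client):

```python
def rpoplpush(store, srckey, dstkey):
    """Simulate RPOPLPUSH: pop the tail of store[srckey] and push it
    as the head of store[dstkey].  Returns the element, or None
    (Redis returns nil) when the source is missing or empty."""
    src = store.get(srckey)
    if not src:
        return None
    elem = src.pop()                               # remove last (tail) element
    store.setdefault(dstkey, []).insert(0, elem)   # push at head
    return elem

store = {"src": ["a", "b", "c"], "dst": ["foo", "bar"]}
rpoplpush(store, "src", "dst")     # moves "c": src=["a","b"], dst=["c","foo","bar"]

# With srckey == dstkey the call rotates the list in place:
store2 = {"q": ["a", "b", "c"]}
rpoplpush(store2, "q", "q")        # "q" becomes ["c", "a", "b"]
```

Because the pop and the push happen in one step, a consumer crash never loses an element: it is always in exactly one of the two lists.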
Add the string value to the head (LPUSH) or tail (RPUSH) of the list stored at key. If the key does not exist, an empty list is created just before the append operation. If the key exists but is not a List, an error is returned.-
Add the specified member to the set value stored at key. If member is already a member of the set no operation is performed. If key does not exist, a new set with the specified member as sole member is created. If the key exists but does not hold a set value, an error is returned.-
-1 if the new element was added -0 if the element was already a member of the set --
Save the whole dataset on disk (this means that all the databases are saved, as well as keys with an EXPIRE set; the expire is preserved). The server hangs while the saving is not completed; no connection is served in the meanwhile. An OK code is returned when the DB has been fully stored on disk.-
The background variant of this command is BGSAVE that is able to perform the saving in the background while the server continues serving other clients.-
Return the set cardinality (number of elements). If the key does not exist, 0 is returned, as for empty sets.-
-the cardinality (number of elements) of the set as an integer. -- -
Return the members of a set resulting from the difference between the first set provided and all the successive sets. Example:-
-key1 = x,a,b,c -key2 = c -key3 = a,d -SDIFF key1 key2 key3 => x,b -
Non existing keys are considered like empty sets.-
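The example above maps directly onto Python's set difference operator; a quick sketch using the key contents from the example:

```python
key1 = {"x", "a", "b", "c"}
key2 = {"c"}
key3 = {"a", "d"}

# SDIFF key1 key2 key3: members of the first set minus all successive sets
sdiff = key1 - key2 - key3       # {"x", "b"}

# A non-existing key behaves like an empty set and removes nothing:
missing = set()
assert key1 - key2 - missing == key1 - key2
```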
This command works exactly like SDIFF but instead of being returned the resulting set is stored in dstkey.-
Select the DB having the specified zero-based numeric index. By default, every new client connection automatically selects DB 0.-
Set the string value as the value of the key. The string can't be longer than 1073741824 bytes (1 GB).-
Sets or clears the bit at offset in the string value stored at key.-The bit is either set or cleared depending on value, which can be either 0 or 1. When key does not exist, a new string value is created. The string is grown to make sure it can hold a bit at offset. The offset argument is required to be greater than or equal to 0, and is limited to 2^32-1 (which limits bitmaps to 512MB). -When the string at key is grown, added bits are set to 0.
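The growth and zero-padding behavior can be sketched over a plain bytearray standing in for the string value (setbit is a hypothetical local helper; Redis numbers bits most-significant-bit first within each byte, which the sketch follows):

```python
def setbit(buf: bytearray, offset: int, value: int) -> int:
    """Set or clear the bit at offset, growing the buffer with zero
    bytes when needed.  Returns the old bit value, like Redis does."""
    byte, bit = divmod(offset, 8)
    if byte >= len(buf):
        buf.extend(b"\x00" * (byte + 1 - len(buf)))  # added bits are 0
    mask = 1 << (7 - bit)                            # MSB-first numbering
    old = 1 if buf[byte] & mask else 0
    if value:
        buf[byte] |= mask
    else:
        buf[byte] &= ~mask & 0xFF
    return old

s = bytearray()
setbit(s, 7, 1)   # grows the string to one byte: b"\x01"
setbit(s, 0, 1)   # same byte, most significant bit: b"\x81"
```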
The command is exactly equivalent to the following group of commands:
-SET _key_ _value_ -EXPIRE _key_ _time_ --
The operation is atomic. An atomic SET+EXPIRE operation was already possible using MULTI/EXEC, but SETEX is a faster alternative, provided because this operation is very common when Redis is used as a cache.-
SETNX works exactly like SET, with the only difference that if the key already exists no operation is performed. SETNX actually means "SET if Not eXists".-
-1 if the key was set -0 if the key was not set -
SETNX can also be seen as a locking primitive. For instance, to acquire the lock of the key foo, the client could try the following:-
-SETNX lock.foo <current UNIX time + lock timeout + 1> -
If SETNX returns 1 the client acquired the lock, setting the lock.foo key to the UNIX time at which the lock should no longer be considered valid. The client will later use DEL lock.foo in order to release the lock.-
If SETNX returns 0 the key is already locked by some other client. We can either return to the caller if it's a non-blocking lock, or enter a loop retrying to acquire the lock until we succeed or some kind of timeout expires.-
In the above locking algorithm there is a problem: what happens if a client fails, crashes, or is otherwise not able to release the lock? It's possible to detect this condition because the lock key contains a UNIX timestamp. If such a timestamp is <= the current Unix time, the lock is no longer valid.-
When this happens we can't just call DEL against the key to remove the lock and then try to issue SETNX, as there is a race condition here when multiple clients detect the expired lock and try to release it.-
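The scheme described so far, including the stale-lock detection just discussed, can be sketched over a plain dict standing in for the keyspace (setnx and try_lock are hypothetical local helpers, the timeout value is an assumption, and the clock is passed in as now to make the expiry logic explicit):

```python
LOCK_TIMEOUT = 10  # seconds; an assumption for this sketch

def setnx(store, key, value):
    """SET-if-Not-eXists over a dict: returns 1 only when the key was absent."""
    if key in store:
        return 0
    store[key] = value
    return 1

def try_lock(store, key, now):
    """Attempt the lock as described above: SETNX the expiry timestamp.
    Returns (acquired, stale): stale reports whether the existing lock's
    timestamp has passed, i.e. its holder probably crashed."""
    if setnx(store, key, now + LOCK_TIMEOUT + 1):
        return True, False
    stale = store[key] <= now
    # NOTE: when stale is True we must NOT just DEL + SETNX here --
    # several clients may spot the same expired lock at once, which is
    # exactly the race condition discussed in the text.
    return False, stale

store = {}
acquired, _ = try_lock(store, "lock.foo", now=1000)             # first client wins
held, stale = try_lock(store, "lock.foo", now=1005)             # still valid
timed_out, stale_later = try_lock(store, "lock.foo", now=2000)  # expired by now
```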
Fortunately, it's possible to avoid this issue using the following algorithm. Let's see how C4, our sane client, uses the good algorithm:-
Overwrites part of the string at key, starting at the specified offset, for the entire length of value. If the offset is past the current length of the string, the string is padded with zero bytes as needed. Non-existing keys are considered as empty strings.-
-redis> set foo "Hello World" -OK -redis> setrange foo 6 "Redis" -(integer) 11 -redis> get foo -"Hello Redis" -Example of the zero padding behavior.
-redis> del foo -(integer) 1 -redis> setrange foo 10 bar -(integer) 13 -redis> get foo -"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00bar" -Note that the maximum offset that you can set is 536870911 as Redis Strings are limited to 512 megabytes. You can still create longer arrays of values using multiple keys.
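Both behaviors shown above, overwriting in place and zero padding, can be sketched over a dict of byte strings (setrange is a hypothetical local helper):

```python
def setrange(store, key, offset, value):
    """Overwrite part of the string at key starting at offset,
    zero-padding when the offset is past the end.  Returns the new
    length, as the Redis command does."""
    buf = bytearray(store.get(key, b""))
    if offset > len(buf):
        buf.extend(b"\x00" * (offset - len(buf)))   # pad with zero bytes
    buf[offset:offset + len(value)] = value
    store[key] = bytes(buf)
    return len(store[key])

store = {"foo": b"Hello World"}
setrange(store, "foo", 6, b"Redis")   # -> 11, store["foo"] == b"Hello Redis"
setrange(store, "bar", 10, b"bar")    # -> 13, ten zero bytes then "bar"
```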
Stop all the clients, save the DB, then quit the server. This command makes sure that the DB is saved without the loss of any data. This is not guaranteed if the client simply uses "SAVE" and then "QUIT", because other clients may alter the DB data between the two commands.-
Return the members of a set resulting from the intersection of all the sets held at the specified keys. Like in LRANGE, the result is sent to the client as a multi-bulk reply (see the protocol specification for more information). If just a single key is specified, then this command produces the same result as SMEMBERS. Actually SMEMBERS is just syntactic sugar for SINTER with a single key.-
Non-existing keys are considered like empty sets, so if one of the keys is missing an empty set is returned (since the intersection with an empty set is always an empty set).-
This command works exactly like SINTER, but instead of being returned the resulting set is stored in dstkey.-
Return 1 if member is a member of the set stored at key, otherwise 0 is returned.-
-1 if the element is a member of the set -0 if the element is not a member of the set OR if the key does not exist -- -
The SLAVEOF command can change the replication settings of a slave on the fly. If a Redis server is already acting as a slave, the command-SLAVEOF NO ONE
will turn off the replication, turning the Redis server into a MASTER. In the proper form-SLAVEOF hostname port
will make the server a slave of the specific server listening at the specified hostname and port.
If the server is already a slave of some master, SLAVEOF hostname port will stop the replication against the old server and start the synchronization against the new one, discarding the old dataset.
-The form SLAVEOF no one will stop replication, turning the server into a MASTER, but will not discard the already replicated dataset. So if the old master stops working, it is possible to turn the slave into a master and point the application to the new master for reads and writes. Later, when the old Redis server is fixed, it can be configured to work as a slave.
-Return all the members (elements) of the set value stored at key. This is just syntax glue for SINTER.-
Move the specified member from the set at srckey to the set at dstkey. This operation is atomic: at any given moment the element will appear to be in either the source or the destination set for accessing clients.-
If the source set does not exist or does not contain the specified element, no operation is performed and zero is returned; otherwise the element is removed from the source set and added to the destination set. On success one is returned, even if the element was already present in the destination set.-
An error is raised if the source or destination key contains a non-Set value.-
-1 if the element was moved -0 if the element was not found on the first set and no operation was performed -- -
SORT key [BY pattern] [LIMIT start count] [GET pattern] [ASC|DESC] [ALPHA] [STORE dstkey]
-Sort the elements contained in the List, Set, or Sorted Set value at key. By default sorting is numeric, with elements being compared as double precision floating point numbers. This is the simplest form of SORT:-
-SORT mylist -
Assuming mylist contains a list of numbers, the return value will be the list of numbers ordered from the smallest to the biggest. In order to get the sorting in reverse order, use DESC:-
-SORT mylist DESC -
The ASC option is also supported, but it's the default so you don't really need it. If you want to sort lexicographically, use ALPHA. Note that Redis is UTF-8 aware, assuming you set the right value for the LC_COLLATE environment variable.-
SORT is able to limit the number of returned elements using the LIMIT option:-
-SORT mylist LIMIT 0 10 -
In the above example SORT will return only 10 elements, starting from the first one (start is zero-based). Almost all the sort options can be mixed together. For example the command:-
-SORT mylist LIMIT 0 10 ALPHA DESC -
will sort mylist lexicographically, in descending order, returning only the first 10 elements.-
Sometimes you want to sort elements using external keys as weights, comparing those instead of the actual List, Set, or Sorted Set elements. For example, the list mylist may contain the elements 1, 2, 3, 4, which are just unique IDs of objects stored at object_1, object_2, object_3 and object_4, while the keys weight_1, weight_2, weight_3 and weight_4 can contain the weights we want to use to sort our list of object identifiers. We can use the following command:-
-SORT mylist BY weight_* -
The BY option takes a pattern (weight_* in our example) that is used in order to generate the key names of the weights used for sorting. Weight key names are obtained substituting the first occurrence of * with the actual value of the elements in the list (1, 2, 3, 4 in our example).
Our previous example will return just the sorted IDs. Often it is needed to get the actual objects sorted (object_1, ..., object_4 in the example). We can do it with the following command:-
-SORT mylist BY weight_* GET object_* -
The BY option can also take a "nosort" specifier. This is useful if you want to retrieve external keys (using GET) but you don't want the sorting overhead:-
-SORT mylist BY nosort GET object_* -
Note that GET can be used multiple times in order to get more keys for every element of the original List, Set, or Sorted Set sorted.-
Since Redis >= 1.1 it's possible to also GET the list elements themselves using the special # pattern:-
-SORT mylist BY weight_* GET object_* GET # -
By default SORT returns the sorted elements as its return value. Using the STORE option, instead of returning the elements, SORT will store them as a Redis List at the specified key. An example:-
-SORT mylist BY weight_* STORE resultkey -
An interesting pattern using SORT ... STORE consists in associating an EXPIRE timeout with the resulting key, so that in applications where the result of a sort operation can be cached for some time, other clients will use the cached list instead of calling SORT for every request. When the key times out, an updated version of the cache can be created using SORT ... STORE again.-
Note that when implementing this pattern it is important to avoid multiple clients trying to rebuild the cache at the same time, so some form of locking should be implemented (for instance using SETNX).-
It's possible to use BY and GET options against Hash fields using the following syntax:
-SORT mylist BY weight_*->fieldname -SORT mylist GET object_*->fieldname --
The two-character string -> is used in order to signal the name of the Hash field. The key is substituted as documented above for SORT BY and GET against normal keys, and the Hash stored at the resulting key is accessed in order to retrieve the specified field.
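The core BY / GET substitution described above can be sketched in Python over a dict standing in for the keyspace (sort_by_get is a hypothetical helper; it assumes the numeric default sort and single-* patterns, and does not cover the hash-field syntax):

```python
def sort_by_get(store, key, by=None, get=None):
    """Minimal SORT sketch: sort store[key] by external weight keys
    (BY pattern) and optionally dereference a GET pattern per element."""
    elems = list(store[key])
    if by:
        # Substitute the first '*' with each element to build weight keys.
        elems.sort(key=lambda e: store[by.replace("*", e, 1)])
    else:
        elems.sort(key=float)          # default SORT is numeric
    if get is None:
        return elems
    # GET # returns the element itself; any other pattern is substituted.
    return [e if get == "#" else store[get.replace("*", e, 1)]
            for e in elems]

store = {
    "mylist": ["1", "2", "3"],
    "weight_1": 30, "weight_2": 10, "weight_3": 20,
    "object_1": "a", "object_2": "b", "object_3": "c",
}
sort_by_get(store, "mylist", by="weight_*")                  # ['2', '3', '1']
sort_by_get(store, "mylist", by="weight_*", get="object_*")  # ['b', 'c', 'a']
```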
Remove a random element from a Set, returning it as the return value. If the Set is empty or the key does not exist, a nil object is returned.-
The SRANDMEMBER command does similar work, but the returned element is not removed from the Set.-
Return a random element from a Set, without removing the element. If the Set is empty or the key does not exist, a nil object is returned.-
The SPOP command does similar work, but the returned element is popped (removed) from the Set.-
Remove the specified member from the set value stored at key. If member was not a member of the set, no operation is performed. If key does not hold a set value, an error is returned.-
-1 if the element was removed -0 if the element was not a member of the set -- -
sds.c
(simple dynamic strings). This library caches the current length of the string, so obtaining the length of a Redis string is an O(1) operation (but currently there is no such STRLEN command; it will likely be added later). Returns the length of the string stored at the specified key.-
Return a subset of the string from offset start to offset end (both offsets are inclusive). Negative offsets can be used in order to provide an offset starting from the end of the string. So -1 means the last char, -2 the penultimate, and so forth.-
The function handles out-of-range requests without raising an error, just limiting the resulting range to the actual length of the string.-
-redis> set s "This is a string" -OK -redis> substr s 0 3 -"This" -redis> substr s -3 -1 -"ing" -redis> substr s 0 -1 -"This is a string" -redis> substr s 9 100000 -" string" -- -
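Since both offsets are inclusive, unlike Python slices, the clamping and negative-offset rules can be sketched as follows (substr is a hypothetical helper reproducing the transcript above):

```python
def substr(s: str, start: int, end: int) -> str:
    """SUBSTR semantics: both offsets inclusive, negatives count from
    the end, and out-of-range requests are clamped instead of raising."""
    n = len(s)
    if start < 0:
        start = max(n + start, 0)
    if end < 0:
        end = n + end
    end = min(end, n - 1)          # clamp to the actual string length
    if start > end:
        return ""
    return s[start:end + 1]        # +1: Python's end offset is exclusive

s = "This is a string"
substr(s, 0, 3)       # 'This'
substr(s, -3, -1)     # 'ing'
substr(s, 9, 100000)  # ' string'
```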
Return the members of a set resulting from the union of all the sets held at the specified keys. Like in LRANGE, the result is sent to the client as a multi-bulk reply (see the protocol specification for more information). If just a single key is specified, then this command produces the same result as SMEMBERS.-
Non existing keys are considered like empty sets.-
This command works exactly like SUNION, but instead of being returned the resulting set is stored in dstkey. Any existing value in dstkey will be overwritten.-
Language | Name | Sharding | Pipelining | Redis 1.1 | Redis 1.0 |
ActionScript 3 | as3redis | No | Yes | Yes | Yes |
Clojure | redis-clojure | No | No | Partial | Yes |
Common Lisp | CL-Redis | No | No | No | Yes |
Erlang | erldis | No | Looks like | No | Looks like |
Go | Go-Redis | No | Yes | Yes | Yes |
Haskell | haskell-redis | No | No | No | Yes |
Java | JDBC-Redis | No | No | No | Yes |
Java | JRedis | No | Yes | Yes | Yes |
Java | Jedis | No | Yes | Yes | Yes |
LUA | redis-lua | No | No | Yes | Yes |
Perl | Redis Client | No | No | No | Yes |
Perl | AnyEvent::Redis | No | No | No | Yes |
PHP | Redis PHP Bindings | No | No | No | Yes |
PHP | phpredis (C) | No | No | No | Yes |
PHP | Predis | Yes | Yes | Yes | Yes |
PHP | Redisent | Yes | No | No | Yes |
Python | Python Client | No | No | No | Yes |
Python | py-redis | No | No | Partial | Yes |
Python | txredis | No | No | No | Yes |
Ruby | redis-rb | Yes | Yes | Yes | Yes |
Scala | scala-redis | Yes | No | No | Yes |
TCL | TCL | No | No | Yes | Yes |
The TTL command returns the remaining time to live in seconds of a key that has an EXPIRE set. This introspection capability allows a Redis client to check how many seconds a given key will continue to be part of the dataset. If the key does not exist or does not have an associated expire, -1 is returned.-
-SET foo bar -Redis will store our data permanently, so we can later ask for "What is the value stored at key foo?" and Redis will reply with bar:
-GET foo => bar -Other common operations provided by key-value stores are DEL, used to delete a given key and its associated value, SET-if-not-exists (called SETNX on Redis), which sets a key only if it does not already exist, and INCR, which is able to atomically increment a number stored at a given key:
-SET foo 10 -INCR foo => 11 -INCR foo => 12 -INCR foo => 13 -
-x = GET foo -x = x + 1 -SET foo x -The problem is that incrementing this way only works as long as there is a single client working with the key at a time. See what happens if two computers are accessing this data at the same time:
-x = GET foo (yields 10) -y = GET foo (yields 10) -x = x + 1 (x is now 11) -y = y + 1 (y is now 11) -SET foo x (foo is now 11) -SET foo y (foo is now 11) -Something is wrong with that! We incremented the value two times, but instead of going from 10 to 12 our key holds 11. This is because an increment done as GET / increment / SET is not an atomic operation. Instead, the INCR provided by Redis, Memcached and so on is atomic: the server takes care of protecting the get-increment-set sequence for all the time needed to complete, in order to prevent simultaneous accesses.-LPUSH mylist a (now mylist holds one element list 'a') -LPUSH mylist b (now mylist holds 'b,a') -LPUSH mylist c (now mylist holds 'c,b,a') -LPUSH means Left Push, that is, add an element to the left (or to the head) of the list stored at mylist. If the key mylist does not exist, it is automatically created by Redis as an empty list before the PUSH operation. As you can imagine, there is also the RPUSH operation that adds the element on the right of the list (on the tail).
username:updates
for instance. There are operations to get data or information from Lists, of course. For instance, LRANGE returns a range of the list, or the whole list.-LRANGE mylist 0 1 => c,b -LRANGE uses zero-based indexes, that is, the first element is 0, the second 1, and so on. The command arguments are
LRANGE key first-index last-index
. The last index argument can be negative, with a special meaning: -1 is the last element of the list, -2 the penultimate, and so on. So in order to get the whole list we can use:-LRANGE mylist 0 -1 => c,b,a -Other important operations are LLEN, which returns the length of the list, and LTRIM, which is like LRANGE but instead of returning the specified range trims the list to it (like "get range from mylist, set this range as the new value", but atomic). We will use only these List operations, but make sure to check the Redis documentation to discover all the List operations supported by Redis. -
-SADD myset a -SADD myset b -SADD myset foo -SADD myset bar -SCARD myset => 4 -SMEMBERS myset => bar,a,foo,b -Note that SMEMBERS does not return the elements in the same order we added them, since Sets are unsorted collections of elements. When you want to store the order it is better to use Lists instead. Some more operations against Sets:
-SADD mynewset b -SADD mynewset foo -SADD mynewset hello -SINTER myset mynewset => foo,b -SINTER can return the intersection between Sets but it is not limited to two sets, you may ask for intersection of 4,5 or 10000 Sets. Finally let's check how SISMEMBER works:
-SISMEMBER myset foo => 1 -SISMEMBER myset notamember => 0 -Ok I think we are ready to start coding! -
-INCR global:nextUserId => 1000 -SET uid:1000:username antirez -SET uid:1000:password p1pp0 -We use the global:nextUserId key in order to always get a unique ID for every new user. Then we use this unique ID to populate all the other keys holding our user data. This is a design pattern with key-value stores! Keep it in mind. -Besides the fields already defined, we need some more stuff in order to fully define a user. For example, sometimes it can be useful to be able to get the user ID from the username, so we set this key too:
-SET username:antirez:uid 1000 -This may appear strange at first, but remember that we are only able to access data by key! It's not possible to tell Redis to return the key that holds a specific value. This is also our strength: this new paradigm forces us to organize the data so that everything is accessible by primary key, to speak in relational DB terms. -
-uid:1000:followers => Set of uids of all the follower users -uid:1000:following => Set of uids of all the followed users -Another important thing we need is a place where we can add the updates to display in the user home page. We'll need to access this data in chronological order later, from the most recent update to the oldest ones, so the perfect kind of value for this job is a List. Basically every new update will be LPUSHed into the user updates key, and thanks to LRANGE we can implement pagination and so on. Note that we use the words updates and posts interchangeably, since updates are actually "little posts" in some way.
-uid:1000:posts => a List of post ids, every new post is LPUSHed here. --
-SET uid:1000:auth fea5e81ac8ca77622bed1c2132a021f9 -SET auth:fea5e81ac8ca77622bed1c2132a021f9 1000 -In order to authenticate an user we'll do this simple work (login.php): -
<username>
:uid key actually exists-include("retwis.php"); - -# Form sanity checks -if (!gt("username") || !gt("password")) - goback("You need to enter both username and password to login."); - -# The form is ok, look up the user by username -$username = gt("username"); -$password = gt("password"); -$r = redisLink(); -$userid = $r->get("username:$username:id"); -if (!$userid) - goback("Wrong username or password"); -$realpassword = $r->get("uid:$userid:password"); -if ($realpassword != $password) - goback("Wrong username or password"); - -# Username / password OK, set the cookie and redirect to index.php -$authsecret = $r->get("uid:$userid:auth"); -setcookie("auth",$authsecret,time()+3600*24*365); -header("Location: index.php"); -This happens every time a user logs in, but we also need a function isLoggedIn in order to check if a given user is already authenticated or not. These are the logical steps performed by the
isLoggedIn
function:
-If the auth cookie is set, check that auth:<authcookie>
exists, and what the value (the user id) is (1000 in the example).-function isLoggedIn() { - global $User, $_COOKIE; - - if (isset($User)) return true; - - if (isset($_COOKIE['auth'])) { - $r = redisLink(); - $authcookie = $_COOKIE['auth']; - if ($userid = $r->get("auth:$authcookie")) { - if ($r->get("uid:$userid:auth") != $authcookie) return false; - loadUserInfo($userid); - return true; - } - } - return false; -} - -function loadUserInfo($userid) { - global $User; - - $r = redisLink(); - $User['id'] = $userid; - $User['username'] = $r->get("uid:$userid:username"); - return true; -} -
loadUserInfo
as a separate function is overkill for our application, but it's a good template for a more complex application. The only thing missing from the authentication is the logout. What do we do on logout? That's simple: we'll just change the random string in uid:1000:auth, remove the old auth:<oldauthstring>
and add a new auth:<newauthstring>
.<randomstring>
, but double check it against uid:1000:auth. The true authentication string is the latter, the auth:<randomstring>
is just an authentication key that may even be volatile, or if there are bugs in the program or a script gets interrupted we may even end with multiple auth:<something>
keys pointing to the same user id. The logout code is the following (logout.php):-include("retwis.php"); - -if (!isLoggedIn()) { - header("Location: index.php"); - exit; -} - -$r = redisLink(); -$newauthsecret = getrand(); -$userid = $User['id']; -$oldauthsecret = $r->get("uid:$userid:auth"); - -$r->set("uid:$userid:auth",$newauthsecret); -$r->set("auth:$newauthsecret",$userid); -$r->delete("auth:$oldauthsecret"); - -header("Location: index.php"); -That is just what we described and should be simple to understand. -
-INCR global:nextPostId => 10343 -SET post:10343 "$owner_id|$time|I'm having fun with Retwis" -As you can see, the user id and time of the post are stored directly inside the string. We don't need to look up posts by time or user id in the example application, so it is better to compact everything inside the post string.
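Packing and parsing that post string can be sketched in Python (hypothetical helpers; since the status text may itself contain |, parsing splits only on the first two separators):

```python
def pack_post(owner_id, time, status):
    """Store user id and time directly inside the post string."""
    return f"{owner_id}|{time}|{status}"

def parse_post(post):
    """Split back into (owner_id, time, status); maxsplit=2 keeps any
    '|' characters inside the status text intact."""
    owner_id, time, status = post.split("|", 2)
    return int(owner_id), int(time), status

p = pack_post(1000, 1262300000, "I'm having fun with Retwis")
parse_post(p)   # (1000, 1262300000, "I'm having fun with Retwis")
```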
-include("retwis.php"); - -if (!isLoggedIn() || !gt("status")) { - header("Location:index.php"); - exit; -} - -$r = redisLink(); -$postid = $r->incr("global:nextPostId"); -$status = str_replace("\n"," ",gt("status")); -$post = $User['id']."|".time()."|".$status; -$r->set("post:$postid",$post); -$followers = $r->smembers("uid:".$User['id'].":followers"); -if ($followers === false) $followers = Array(); -$followers[] = $User['id']; /* Add the post to our own posts too */ - -foreach($followers as $fid) { - $r->push("uid:$fid:posts",$postid,false); -} -# Push the post on the timeline, and trim the timeline to the -# newest 1000 elements. -$r->push("global:timeline",$postid,false); -$r->ltrim("global:timeline",0,1000); - -header("Location: index.php"); -The core of the function is the
foreach
. Using SMEMBERS we get all the followers of the current user, then the loop LPUSHes the post ID against uid:<userid>
:posts of every follower.-function showPost($id) { - $r = redisLink(); - $postdata = $r->get("post:$id"); - if (!$postdata) return false; - - $aux = explode("|",$postdata); - $id = $aux[0]; - $time = $aux[1]; - $username = $r->get("uid:$id:username"); - $post = join(array_splice($aux,2,count($aux)-2),"|"); - $elapsed = strElapsed($time); - $userlink = "<a class=\"username\" href=\"profile.php?u=".urlencode($username)."\">".utf8entities($username)."</a>"; - - echo('<div class="post">'.$userlink.' '.utf8entities($post)."<br>"); - echo('<i>posted '.$elapsed.' ago via web</i></div>'); - return true; -} - -function showUserPosts($userid,$start,$count) { - $r = redisLink(); - $key = ($userid == -1) ? "global:timeline" : "uid:$userid:posts"; - $posts = $r->lrange($key,$start,$start+$count); - $c = 0; - foreach($posts as $p) { - if (showPost($p)) $c++; - if ($c == $count) break; - } - return count($posts) == $count+1; -} -
showPost
will simply convert and print a Post in HTML while showUserPosts
gets a range of posts, passing them to showPost
.-SADD uid:1000:following 1001 -SADD uid:1001:followers 1000 -Note the same pattern again and again. In theory, with a relational database, the list of following and followers would be a single table with fields like
following_id
and follower_id
. With queries you can extract the followers or following of every user. With a key-value DB that's a bit different as we need to set both the 1000 is following 1001
and 1001 is followed by 1000
relations. This is the price to pay, but on the other side accessing the data is simpler and ultra-fast. And having this things as separated sets allows us to do interesting stuff, for example using SINTER we can have the intersection of 'following' of two different users, so we may add a feature to our Twitter clone so that it is able to say you at warp speed, when you visit somebody' else profile, "you and foobar have 34 followers in common" and things like that.-server_id = crc32(key) % number_of_servers -This has a lot of problems since if you add one server you need to move too much keys and so on, but this is the general idea even if you use a better hashing scheme like consistent hashing.
global:nextPostId
key. How to fix this problem? A single server will get a lot of increments. The simplest way to handle this is to have a dedicated server just for increments. This is probably overkill, though, unless you really have a lot of traffic. There is another trick: the ID does not really need to be an incremental number, it just needs to be unique. So you can use a random string long enough to be unlikely (almost impossible, if it's md5-sized) to collide, and you are done. We have successfully eliminated our main problem and made the design really horizontally scalable!
Return the type of the value stored at key in form of a string. The type can be one of "none", "string", "list", "set". "none" is returned if the key does not exist.-
-"none" if the key does not exist -"string" if the key contains a String value -"list" if the key contains a List value -"set" if the key contains a Set value -"zset" if the key contains a Sorted Set value -"hash" if the key contains a Hash value -
-redis> set foo bar -OK -redis> debug object foo -Key at:0x100101d00 refcount:1, value at:0x100101ce0 refcount:1 encoding:raw serializedlength:4 --As you can see from the above output, the Redis top level hash table maps Redis Objects (keys) to other Redis Objects (values). The Virtual Memory is only able to swap values to disk; the objects associated with keys are always kept in memory. This trade-off guarantees very good lookup performance, as one of the main design goals of the Redis VM is to have performance similar to Redis with VM disabled when the frequently used part of the dataset fits in RAM.
-/* The actual Redis Object */ -typedef struct redisObject { - void *ptr; - unsigned char type; - unsigned char encoding; - unsigned char storage; /* If this object is a key, where is the value? - * REDIS_VM_MEMORY, REDIS_VM_SWAPPED, ... */ - unsigned char vtype; /* If this object is a key, and value is swapped out, - * this is the type of the swapped out object. */ - int refcount; - /* VM fields, these are only allocated if VM is active, otherwise the - * object allocation function will just allocate - * sizeof(redisObject) minus sizeof(redisObjectVM), so using - * Redis without VM active will not have any overhead. */ - struct redisObjectVM vm; -} robj; --As you can see there are a few fields about VM. The most important one is storage, which can be one of these values: -
-/* The VM object structure */ -struct redisObjectVM { - off_t page; /* the page at which the object is stored on disk */ - off_t usedpages; /* number of pages used on disk */ - time_t atime; /* Last access time */ -} vm; --As you can see the structure contains the page at which the object is located in the swap file, the number of pages used, and the last access time of the object (this is very useful for the algorithm that selects good candidates for swapping, as we want to transfer to disk objects that are rarely accessed).
-... some code ... - if (server.vm_enabled) { - pthread_mutex_unlock(&server.obj_freelist_mutex); - o = zmalloc(sizeof(*o)); - } else { - o = zmalloc(sizeof(*o)-sizeof(struct redisObjectVM)); - } -... some code ... --As you can see if the VM system is not enabled we allocate just
sizeof(*o)-sizeof(struct redisObjectVM)
of memory. Given that the vm field is the last in the object structure, and that these fields are never accessed if VM is disabled, we are safe and Redis without VM does not pay the memory overhead.-vm-page-size 32 -vm-pages 134217728 --Redis keeps a "bitmap" (a contiguous array of bits set to zero or one) in memory; every bit represents a page of the swap file on disk: if a given bit is set to 1, it represents a page that is already used (some Redis Object is stored there), while if the corresponding bit is zero, the page is free.
rdbSavedObjectPages
that returns the number of pages used by an object on disk. Note that this function does not duplicate the .rdb saving code just to compute the length the object will have once saved on disk: we use the trick of opening /dev/null and writing the object there, finally calling ftello
in order to check the number of bytes required. What we basically do is save the object to a very fast virtual file, that is, /dev/null.vmFindContiguousPages
function. As you can guess this function may fail if the swap is full, or so fragmented that we can't easily find the required number of contiguous free pages. When this happens we simply abort the swapping of the object, which will continue to live in memory.vmWriteObjectOnSwap
.vmLoadObject
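A toy version of the contiguous-pages search over the swap-file bitmap can make the idea concrete (the real vmFindContiguousPages is more elaborate; this Python sketch only shows the core scan):

```python
def find_contiguous_pages(bitmap, n):
    # Scan the page bitmap (1 = used, 0 = free) for a run of n free
    # pages, returning the index of the first page of the run, or
    # None if the swap is full or too fragmented.
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == n:
                return run_start
        else:
            run_len = 0
    return None  # caller aborts the swap; the object stays in memory

pages = [1, 1, 0, 0, 1, 0, 0, 0, 1]
print(find_contiguous_pages(pages, 3))  # -> 5
```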
passing the key object associated with the value object we want to load back is enough. The function will also take care of fixing the storage type of the key (that will be REDIS_VM_MEMORY), marking the pages as freed in the page table, and so forth.server.vm_max_threads
must be set to zero.
-We'll see later how this maximum number of threads is used by the threaded VM; for now, all you need to know is that Redis reverts to fully blocking VM when this is set to zero.server.vm_max_memory
. This parameter is very important as it is used in order to trigger swapping: Redis will try to swap objects only if it is using more memory than the max memory setting, otherwise there is no need to swap as we are matching the user requested memory usage.
-vmSwapOneObject
. This function takes just one argument: if it is 0 objects are swapped in a blocking way, while if it is 1 I/O threads are used. In the blocking scenario we simply call it with zero as the argument.-swappability = age*log(size_in_memory) --The age is the number of seconds the key was not requested, while size_in_memory is a fast estimation of the amount of memory (in bytes) used by the object in memory. So we try to swap out objects that are rarely accessed, and we try to swap bigger objects over smaller ones, but the latter is a less important factor (because of the logarithmic function used). This is because we don't want bigger objects to be swapped out and in too often, as the bigger the object the more I/O and CPU is required in order to transfer it. -
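The swappability formula is easy to experiment with in isolation; in this Python sketch the candidate keys, ages and sizes are invented for illustration:

```python
import math

def swappability(age_seconds, size_in_memory):
    # The formula from the text: older and (logarithmically) bigger
    # objects make better swap candidates.
    return age_seconds * math.log(size_in_memory)

# Hypothetical candidates: (key, seconds since last access, bytes used).
candidates = [("small:hot", 2, 64), ("big:cold", 600, 4096), ("big:hot", 2, 4096)]
best = max(candidates, key=lambda c: swappability(c[1], c[2]))
print(best[0])  # the rarely accessed big object wins
```

Note how the logarithm keeps size from dominating: the recently accessed 4096-byte object scores barely above the recently accessed 64-byte one, while the rarely accessed object scores orders of magnitude higher.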
-GET foo --If the value object of the
foo
key is swapped we need to load it back in memory before processing the operation. In Redis the key lookup process is centralized in the lookupKeyRead
and lookupKeyWrite
functions. These two functions are used in the implementation of all the Redis commands accessing the keyspace, so we have a single point in the code where the loading of a value from the swap file into memory can be handled.server.io_newjobs
queue (that is, just a linked list). If there are no active I/O threads, one is started. At this point some I/O thread will process the I/O job, and the result of the processing is pushed in the server.io_processed
queue. The I/O thread will send a byte using a UNIX pipe to the main thread in order to signal that a new job was processed and the result is ready to be collected.iojob
structure looks like:
--typedef struct iojob { - int type; /* Request type, REDIS_IOJOB_* */ - redisDb *db;/* Redis database */ - robj *key; /* This I/O request is about swapping this key */ - robj *val; /* the value to swap for REDIS_IOREQ_*_SWAP, otherwise this - * field is populated by the I/O thread for REDIS_IOREQ_LOAD. */ - off_t page; /* Swap page where to read/write the object */ - off_t pages; /* Swap pages needed to save object. PREPARE_SWAP return val */ - int canceled; /* True if this command was canceled by blocking side of VM */ - pthread_t thread; /* ID of the thread processing this entry */ -} iojob; --There are just three type of jobs that an I/O thread can perform (the type is specified by the
type
field of the structure):
-page
, the object type is key->vtype
. The result of this operation will populate the val
field of the structure.val
into the swap. The result of this operation will populate the pages
field.val
to the swap file, at page offset page
.vmThreadedIOCompletedJob
. If this function detects that all the values needed for a blocked client were loaded, the client is restarted and the original command is called again.vmCancelThreadedIOJob
, and this is what it does:
-canceled
field to 1 in the iojob structure. The function processing completed jobs will simply ignore and free the job instead of really processing it.
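The queue-plus-pipe handshake between the I/O threads and the main thread, described above, can be sketched in Python; the names mirror the Redis queues, but the code is only an illustration of the signaling pattern:

```python
import os
import queue
import threading

io_newjobs, io_processed = queue.Queue(), queue.Queue()
read_fd, write_fd = os.pipe()

def io_thread():
    job = io_newjobs.get()          # take a job from the "new jobs" queue
    io_processed.put(job.upper())   # stand-in for the real swap I/O work
    os.write(write_fd, b"x")        # one byte on the pipe wakes the main thread

threading.Thread(target=io_thread, daemon=True).start()
io_newjobs.put("load key foo")

os.read(read_fd, 1)                 # the main thread's event loop wakes up here
result = io_processed.get()
print(result)
```

The pipe matters because the main thread is sitting in an event loop multiplexing client sockets: a file descriptor it can select on is the natural way for a worker thread to get its attention.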
-# The default vm-max-threads configuration -vm-max-threads 4 -This is the maximum number of threads used in order to perform I/O from/to the swap file. A good value is just to match the number of cores in your system.
-$ ./redis-stat vmstat - --------------- objects --------------- ------ pages ------ ----- memory ----- - load-in swap-out swapped delta used delta used delta - 138837 1078936 800402 +800402 807620 +807620 209.50M +209.50M - 4277 38011 829802 +29400 837441 +29821 206.47M -3.03M - 3347 39508 862619 +32817 870340 +32899 202.96M -3.51M - 4445 36943 890646 +28027 897925 +27585 199.92M -3.04M - 10391 16902 886783 -3863 894104 -3821 200.22M +309.56K - 8888 19507 888371 +1588 895678 +1574 200.05M -171.81K - 8377 20082 891664 +3293 899850 +4172 200.10M +53.55K - 9671 20210 892586 +922 899917 +67 199.82M -285.30K - 10861 16723 887638 -4948 895003 -4914 200.13M +312.35K - 9541 21945 890618 +2980 898004 +3001 199.94M -197.11K - 9689 17257 888345 -2273 896405 -1599 200.27M +337.77K - 10087 18784 886771 -1574 894577 -1828 200.36M +91.60K - 9330 19350 887411 +640 894817 +240 200.17M -189.72K -The above output is about a redis-server with VM enabled, around 1 million keys inside, and a lot of simulated load generated using the redis-load utility.
Add the specified member having the specified score to the sorted set stored at key. If member is already a member of the sorted set the score is updated, and the element reinserted in the right position to ensure sorting. If key does not exist a new sorted set with the specified _member_ as sole member is created. If the key exists but does not hold a sorted set value an error is returned.-
The score value can be the string representation of a double precision floatingpoint number.-
For an introduction to sorted sets check the Introduction to Redis data types page.-
-1 if the new element was added -0 if the element was already a member of the sorted set and the score was updated --
Return the sorted set cardinality (number of elements). If the key does not exist 0 is returned, like for empty sorted sets.-
-the cardinality (number of elements) of the set as an integer. -- -
If member already exists in the sorted set, adds the increment to its score and updates the position of the element in the sorted set accordingly. If member does not already exist in the sorted set it is added with _increment_ as its score (that is, as if its previous score was virtually zero). If key does not exist a new sorted set with the specified _member_ as sole member is created. If the key exists but does not hold a sorted set value an error is returned.-
The score value can be the string representation of a double precision floatingpoint number. It's possible to provide a negative value to perform a decrement.-
For an introduction to sorted sets check the Introduction to Redis data types page.-
-The new score (a double precision floating point number) represented as string. --
Return the specified elements of the sorted set at the specified key. The elements are considered sorted from the lowest to the highest score when using ZRANGE, and in the reverse order when using ZREVRANGE. Start and end are zero-based indexes. 0 is the first element of the sorted set (the one with the lowest score when using ZRANGE), 1 the next element by score and so on.-
_start_ and end can also be negative numbers indicating offsets from the end of the sorted set. For example -1 is the last element of the sorted set, -2 the penultimate element and so on.-
Indexes out of range will not produce an error: if start is over the end of the sorted set, or start > end, an empty list is returned. If end is over the end of the sorted set Redis will treat it just like the last element of the sorted set.
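The index handling described above (negative offsets, clamping instead of errors) can be modeled with a small Python helper; zrange_slice is a hypothetical function operating on a plain list of members already sorted by score, not a real client API:

```python
def zrange_slice(sorted_members, start, end):
    # Mimic ZRANGE index handling: negative indexes count from the end,
    # out-of-range indexes are clamped rather than raising an error.
    n = len(sorted_members)
    if start < 0:
        start = max(n + start, 0)
    if end < 0:
        end = n + end
    end = min(end, n - 1)
    if start > end or start >= n:
        return []
    return sorted_members[start:end + 1]

zset = ["a", "b", "c", "d"]
print(zrange_slice(zset, 0, -1))   # -> ['a', 'b', 'c', 'd']
print(zrange_slice(zset, -2, -1))  # -> ['c', 'd']
print(zrange_slice(zset, 1, 100))  # -> ['b', 'c', 'd']
print(zrange_slice(zset, 5, 10))   # -> []
```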
-It's possible to pass the WITHSCORES option to the command in order to return not only the values but also the scores of the elements. Redis will return the data as a single list composed of value1,score1,value2,score2,...,valueN,scoreN but client libraries are free to return a more appropriate data type (what we think is that the best return type for this command is an Array of two-element Arrays / Tuples in order to preserve sorting).-
Return all the elements in the sorted set at key with a score between _min_ and max (including elements with score equal to min or max).-
The elements having the same score are returned sorted lexicographically as ASCII strings (this follows from a property of Redis sorted sets and does not involve further computation).-
Using the optional LIMIT it's possible to get only a range of the matching elements in an SQL-like way. Note that if offset is large the command needs to traverse the list for offset elements and this adds up to the O(M) figure.-
The ZCOUNT command is similar to ZRANGEBYSCORE but instead of returningthe actual elements in the specified interval, it just returns the numberof matching elements.
-ZRANGEBYSCORE zset (1.3 5 --Will return all the values with score > 1.3 and <= 5, while for instance: -
-ZRANGEBYSCORE zset (5 (10 --Will return all the values with score > 5 and < 10 (5 and 10 excluded). -
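The interval syntax can be captured with a tiny parser; the zrangebyscore below is a toy Python model over (member, score) pairs, not a client API:

```python
def parse_bound(spec):
    # Parse a ZRANGEBYSCORE bound: a leading "(" means exclusive,
    # and -inf / +inf are accepted as open-ended bounds.
    if spec.startswith("("):
        return float(spec[1:]), True   # (value, exclusive)
    return float(spec), False

def zrangebyscore(scored, min_spec, max_spec):
    # scored: list of (member, score) pairs already sorted by score.
    lo, lo_excl = parse_bound(min_spec)
    hi, hi_excl = parse_bound(max_spec)
    def in_range(s):
        above = s > lo if lo_excl else s >= lo
        below = s < hi if hi_excl else s <= hi
        return above and below
    return [m for m, s in scored if in_range(s)]

zset = [("foo", 1), ("bar", 2), ("biz", 3), ("foz", 4)]
print(zrangebyscore(zset, "(1", "2"))     # -> ['bar']
print(zrangebyscore(zset, "(1", "(2"))    # -> []
print(zrangebyscore(zset, "-inf", "+inf"))  # -> ['foo', 'bar', 'biz', 'foz']
```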
-redis> zadd zset 1 foo -(integer) 1 -redis> zadd zset 2 bar -(integer) 1 -redis> zadd zset 3 biz -(integer) 1 -redis> zadd zset 4 foz -(integer) 1 -redis> zrangebyscore zset -inf +inf -1. "foo" -2. "bar" -3. "biz" -4. "foz" -redis> zcount zset 1 2 -(integer) 2 -redis> zrangebyscore zset 1 2 -1. "foo" -2. "bar" -redis> zrangebyscore zset (1 2 -1. "bar" -redis> zrangebyscore zset (1 (2 -(empty list or set) --
ZRANK returns the rank of the member in the sorted set, with scores ordered from low to high. ZREVRANK returns the rank with scores ordered from high to low. When the given member does not exist in the sorted set, the special value 'nil' is returned. The returned rank (or index) of the member is 0-based for both commands.-
-the rank of the element as an integer reply if the element exists. -A nil bulk reply if there is no such element. --
Remove the specified member from the sorted set value stored at key. If _member_ was not a member of the set no operation is performed. If key does not hold a sorted set value an error is returned.-
-1 if the element was removed -0 if the element was not a member of the set -- -
Remove all elements in the sorted set at key with rank between start and end. Start and end are 0-based with rank 0 being the element with the lowest score. Both start and end can be negative numbers, where they indicate offsets starting at the element with the highest rank. For example: -1 is the element with the highest score, -2 the element with the second highest score and so forth.-
Remove all the elements in the sorted set at key with a score between _min_ and max (including elements with score equal to min or max).-
Return the score of the specified element of the sorted set at key.If the specified element does not exist in the sorted set, or the keydoes not exist at all, a special 'nil' value is returned.-
-the score (a double precision floating point number) represented as string. -- -
Creates a union or intersection of N sorted sets given by keys k1 through kN, and stores it at dstkey. It is mandatory to provide the number of input keys N, before passing the input keys and the other (optional) arguments.-
As the terms imply, the ZINTERSTORE command requires an element to be present in each of the given inputs to be inserted in the result. The ZUNIONSTORE command inserts all elements across all inputs.-
Using the WEIGHTS option, it is possible to add weight to each input sorted set. This means that the score of each element in the sorted set is first multiplied by this weight before being passed to the aggregation. When this option is not given, all weights default to 1.-
With the AGGREGATE option, it's possible to specify how the results of the union or intersection are aggregated. This option defaults to SUM, where the score of an element is summed across the inputs where it exists. When this option is set to be either MIN or MAX, the resulting set will contain the minimum or maximum score of an element across the inputs where it exists.-
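The WEIGHTS and AGGREGATE semantics can be modeled in a few lines of Python; zunionstore here is a toy over dicts mapping member to score, not a client API (a ZINTERSTORE model would additionally keep only the members present in every input):

```python
def zunionstore(inputs, weights=None, aggregate="SUM"):
    # inputs: list of dicts mapping member -> score, one per source set.
    # Each score is multiplied by its set's weight before aggregation,
    # mirroring the WEIGHTS / AGGREGATE behavior described above.
    weights = weights or [1] * len(inputs)
    agg = {"SUM": lambda x, y: x + y, "MIN": min, "MAX": max}[aggregate]
    result = {}
    for zset, w in zip(inputs, weights):
        for member, score in zset.items():
            weighted = score * w
            result[member] = agg(result[member], weighted) if member in result else weighted
    return result

a = {"x": 1, "y": 2}
b = {"y": 3, "z": 4}
print(zunionstore([a, b]))                                  # -> {'x': 1, 'y': 5, 'z': 4}
print(zunionstore([a, b], weights=[2, 1], aggregate="MAX"))  # -> {'x': 2, 'y': 4, 'z': 4}
```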
<->
slave replication works.