diff --git a/doc/AppendOnlyFileHowto.html b/doc/AppendOnlyFileHowto.html deleted file mode 100644 index b30a27ca..00000000 --- a/doc/AppendOnlyFileHowto.html +++ /dev/null @@ -1,40 +0,0 @@ - - - -
- - - -Request for authentication in a password protected Redis server. A Redis server can be instructed to require a password before allowing clients to issue commands. This is done using the requirepass directive in the Redis configuration file.-
If the password given by the client is correct the server replies with an OK status code reply and starts accepting commands from the client. Otherwise an error is returned and the client needs to try a new password. Note that because of the high performance nature of Redis it is possible to try a lot of passwords in parallel in a very short time, so make sure to generate a strong and very long password so that this attack is infeasible.-
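A minimal session sketch, assuming the server was configured with requirepass foobared (the password and the exact error text are illustrative):
$ telnet localhost 6379
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
PING
-ERR operation not permitted
AUTH foobared
+OK
PING
+PONG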
redis-benchmark
utility that simulates SETs/GETs done by N clients at the same time sending M total queries (it is similar to Apache's ab
utility). Below you'll find the full output of the benchmark executed against a Linux box.-./redis-benchmark -n 100000 - -====== SET ====== - 100007 requests completed in 0.88 seconds - 50 parallel clients - 3 bytes payload - keep alive: 1 - -58.50% <= 0 milliseconds -99.17% <= 1 milliseconds -99.58% <= 2 milliseconds -99.85% <= 3 milliseconds -99.90% <= 6 milliseconds -100.00% <= 9 milliseconds -114293.71 requests per second - -====== GET ====== - 100000 requests completed in 1.23 seconds - 50 parallel clients - 3 bytes payload - keep alive: 1 - -43.12% <= 0 milliseconds -96.82% <= 1 milliseconds -98.62% <= 2 milliseconds -100.00% <= 3 milliseconds -81234.77 requests per second - -====== INCR ====== - 100018 requests completed in 1.46 seconds - 50 parallel clients - 3 bytes payload - keep alive: 1 - -32.32% <= 0 milliseconds -96.67% <= 1 milliseconds -99.14% <= 2 milliseconds -99.83% <= 3 milliseconds -99.88% <= 4 milliseconds -99.89% <= 5 milliseconds -99.96% <= 9 milliseconds -100.00% <= 18 milliseconds -68458.59 requests per second - -====== LPUSH ====== - 100004 requests completed in 1.14 seconds - 50 parallel clients - 3 bytes payload - keep alive: 1 - -62.27% <= 0 milliseconds -99.74% <= 1 milliseconds -99.85% <= 2 milliseconds -99.86% <= 3 milliseconds -99.89% <= 5 milliseconds -99.93% <= 7 milliseconds -99.96% <= 9 milliseconds -100.00% <= 22 milliseconds -100.00% <= 208 milliseconds -88109.25 requests per second - -====== LPOP ====== - 100001 requests completed in 1.39 seconds - 50 parallel clients - 3 bytes payload - keep alive: 1 - -54.83% <= 0 milliseconds -97.34% <= 1 milliseconds -99.95% <= 2 milliseconds -99.96% <= 3 milliseconds -99.96% <= 4 milliseconds -100.00% <= 9 milliseconds -100.00% <= 208 milliseconds -71994.96 requests per second -Notes: changing the payload from 256 to 1024 or 4096 bytes does not change the numbers significantly (but reply packets are glued together up to 1024 bytes so GETs may be slower with big payloads). The same for the number of clients, from 50 to 256 clients I got the same numbers. With only 10 clients it starts to get a bit slower.
- ./redis-benchmark -q -n 100000 -SET: 53684.38 requests per second -GET: 45497.73 requests per second -INCR: 39370.47 requests per second -LPUSH: 34803.41 requests per second -LPOP: 37367.20 requests per second -Another one using a 64 bit box, a Xeon L5420 clocked at 2.5 Ghz:
- ./redis-benchmark -q -n 100000 -PING: 111731.84 requests per second -SET: 108114.59 requests per second -GET: 98717.67 requests per second -INCR: 95241.91 requests per second -LPUSH: 104712.05 requests per second -LPOP: 93722.59 requests per second --
For detailed information about the Redis Append Only File please check the Append Only File Howto.-
BGREWRITEAOF rewrites the Append Only File in background when it gets too big. The Redis Append Only File is a journal, so every operation modifying the dataset is logged in the Append Only File (and replayed at startup). This means that the Append Only File always grows. In order to rebuild its content BGREWRITEAOF creates a new version of the append only file starting directly from the dataset in memory, in order to guarantee the generation of the minimal number of commands needed to rebuild the database.-
The Append Only File Howto contains further details.-
Save the DB in background. The OK code is immediately returned. Redis forks, the parent continues to serve the clients, the child saves the DB on disk then exits. A client may check if the operation succeeded using the LASTSAVE command.-
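A brief sketch of how a client can verify that the background save completed, by comparing LASTSAVE before and after (the timestamps and the exact reply text are illustrative):
$ ./redis-cli lastsave
(integer) 1237655729
$ ./redis-cli bgsave
Background saving started
$ ./redis-cli lastsave
(integer) 1237655741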
test if a key exists
delete a key
return the type of the value stored at key
return all the keys matching a given pattern
return a random key from the key space
rename the old key to the new one, destroying the newname key if it already exists
rename the old key to the new one, if the newname key does not already exist
return the number of keys in the current db
set a time to live in seconds on a key
get the time to live in seconds of a key
Select the DB having the specified index
Move the key from the currently selected DB to the DB having as index dbindex
Remove all the keys of the currently selected DB
Remove all the keys from all the databases
set a key to a string value
return the string value of the key
set a key to a string returning the old value of the key
multi-get, return the strings values of the keys
set a key to a string value if the key does not exist
set multiple keys to multiple values in a single atomic operation
set multiple keys to multiple values in a single atomic operation if none of the keys already exist
increment the integer value of key
increment the integer value of key by integer
decrement the integer value of key
decrement the integer value of key by integer
Append an element to the tail of the List value at key
Append an element to the head of the List value at key
Return the length of the List value at key
Return a range of elements from the List at key
Trim the list at key to the specified range of elements
Return the element at index position from the List at key
Set a new value as the element at index position of the List at key
Remove the first-N, last-N, or all the elements matching value from the List at key
Return and remove (atomically) the first element of the List at key
Return and remove (atomically) the last element of the List at key
Return and remove (atomically) the last element of the source List stored at _srckey_ and push the same element to the destination List stored at _dstkey_
Add the specified member to the Set value at key
Remove the specified member from the Set value at key
Remove and return (pop) a random element from the Set value at key
Move the specified member from one Set to another atomically
Return the number of elements (the cardinality) of the Set at key
Test if the specified value is a member of the Set at key
Return the intersection between the Sets stored at key1, key2, ..., keyN
Compute the intersection between the Sets stored at key1, key2, ..., keyN, and store the resulting Set at dstkey
Return the union between the Sets stored at key1, key2, ..., keyN
Compute the union between the Sets stored at key1, key2, ..., keyN, and store the resulting Set at dstkey
Return the difference between the Set stored at key1 and all the Sets key2, ..., keyN
Compute the difference between the Set key1 and all the Sets key2, ..., keyN, and store the resulting Set at dstkey
Return all the members of the Set value at key
Return a random member of the Set value at key
Add the specified member to the Sorted Set value at key or update the score if it already exists
Remove the specified member from the Sorted Set value at key
If the member already exists increment its score by _increment_, otherwise add the member setting _increment_ as score
Return a range of elements from the sorted set at key
Return a range of elements from the sorted set at key, exactly like ZRANGE, but the sorted set is traversed in reverse order, from the greatest to the smallest score
Return all the elements with score >= min and score <= max (a range query) from the sorted set
Return the cardinality (number of elements) of the sorted set at key
Return the score associated with the specified element of the sorted set at key
Remove all the elements with score >= min and score <= max from the sorted set
Sort a Set or a List accordingly to the specified parameters
Synchronously save the DB on disk
Asynchronously save the DB on disk
Return the UNIX time stamp of the last successful saving of the dataset on disk
Synchronously save the DB on disk, then shutdown the server
Rewrite the append only file in background when it gets too big
redis.conf
file included in the source code distribution is a starting point; you should be able to adapt it to your needs without trouble by reading the comments inside the file.-$ ./redis-server redis.conf --
Return the number of keys in the currently selected database.-
Remove the specified keys. If a given key does not exist no operation is performed for this key. The command returns the number of keys removed.-
-an integer greater than 0 if one or more keys were removed -0 if none of the specified keys existed -- -
Test if the specified key exists. The command returns "1" if the key exists, otherwise "0" is returned. Note that even keys set with an empty string as value will return "1".-
-1 if the key exists. -0 if the key does not exist. -- -
Set a timeout on the specified key. After the timeout the key will be automatically deleted by the server. A key with an associated timeout is said to be volatile in Redis terminology.-
Volatile keys are stored on disk like the other keys, and the timeout is persistent too like all the other aspects of the dataset. Saving a dataset containing volatile keys and stopping the server does not stop the flow of time, as Redis registers on disk the Unix time at which the key will no longer be available, and not the remaining seconds.-
EXPIREAT works exactly like EXPIRE, but instead of taking as second argument the number of seconds representing the Time To Live of the key (that is a relative way of specifying the TTL), it takes an absolute one in the form of a UNIX timestamp (number of seconds elapsed since 1 January 1970).-
EXPIREAT was introduced in order to implement [Persistence append only saving mode] so that EXPIRE commands are automatically translated into EXPIREAT commands for the append only file. Of course EXPIREAT can also be used by programmers that need a way to simply specify that a given key should expire at a given time in the future.-
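A quick sketch of the absolute form (the key name is illustrative; 1293840000 is the UNIX time for January 1st, 2011):
$ ./redis-cli set mykey somevalue
OK
$ ./redis-cli expireat mykey 1293840000
(integer) 1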
When the key is set to a new value using the SET command, or the INCR command or any other command that modifies the value stored at key, the timeout is removed from the key and the key becomes non volatile.-
Write operations like LPUSH, LSET and every other command that has the effect of modifying the value stored at a volatile key have a special semantic: basically a volatile key is destroyed when it is the target of a write operation. See for example the following usage pattern:-
-% ./redis-cli lpush mylist foobar -OK -% ./redis-cli lpush mylist hello -OK -% ./redis-cli expire mylist 10000 -1 -% ./redis-cli lpush mylist newelement -OK -% ./redis-cli lrange mylist 0 -1 -1. newelement -
What happened here is that LPUSH against the key with a timeout set deleted the key before performing the operation. So there is a simple rule: write operations against volatile keys will destroy the key before performing the operation. Why does Redis use this behavior? In order to retain an important property: a server that receives a given number of commands in the same sequence will end with the same dataset in memory. Without the delete-on-write semantic the state of the server would also depend on the timing of the commands. This is not a desirable property in a distributed database that supports replication.-
Trying to call EXPIRE against a key that already has an associated timeout will not change the timeout of the key, but will just return 0. If instead the key does not have a timeout associated, the timeout will be set and EXPIRE will return 1.-
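A short sketch of this behavior, with illustrative key name and timeouts:
$ ./redis-cli set mykey somevalue
OK
$ ./redis-cli expire mykey 100
(integer) 1
$ ./redis-cli expire mykey 200
(integer) 0
$ ./redis-cli ttl mykey
(integer) 100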
Redis does not constantly monitor keys that are going to be expired. Keys are expired simply when some client tries to access a key, and the key is found to be timed out.-
Of course this is not enough as there are expired keys that will never be accessed again. These keys should be expired anyway, so once every second Redis tests a few keys at random among keys with an expire set. All the keys that are already expired are deleted from the keyspace.-
In the past a fixed number of keys was tested each time (100 by default). So if you had a client setting keys with a very short expire faster than 100 per second the memory continued to grow. When you stopped inserting new keys the memory started to be freed, 100 keys every second in the best conditions. Under a peak Redis would continue to use more and more RAM even if most keys were expired in each sweep.-
Now instead, each time Redis: 1) tests a few random keys (100 by default) among the keys with an expire set; 2) deletes all the keys found already expired; 3) if more than 25% of the tested keys were expired, starts again from step 1.-
This is a trivial probabilistic algorithm: basically the assumption is that our sample is representative of the whole key space, and we continue to expire until the percentage of keys that are likely to be expired is under 25%.-
This means that at any given moment the maximum amount of keys already expired that are using memory is at most equal to the maximum number of write operations per second divided by 4.-
-1: the timeout was set. -0: the timeout was not set since the key already has an associated timeout, or the key does not exist. -- -
RPUSH data to the computer_ID key
. Don't want to save more than 1000 log lines per computer? Just issue a LTRIM computer_ID 0 999
command to trim the list after every push.
echo 1 > /proc/sys/vm/overcommit_memory
:)overcommit_memory
setting is set to zero, fork will fail unless there is as much free RAM as required to really duplicate all the parent memory pages, with the result that if you have a Redis dataset of 3 GB and just 2 GB of free memory it will fail.overcommit_memory
to 1 tells Linux to relax and perform the fork in a more optimistic allocation fashion, and this is indeed what you want for Redis.
Delete all the keys of all the existing databases, not just the currently selected one. This command never fails.-
Delete all the keys of the currently selected DB. This command never fails.-
Get the value of the specified key. If the key does not exist the special value 'nil' is returned. If the value stored at key is not a string an error is returned because GET can only handle string values.-
GETSET is an atomic "set this value and return the old value" command. Set key to the string value and return the old value stored at key. The string can't be longer than 1073741824 bytes (1 GB).-
GETSET can be used together with INCR for counting with atomic reset when a given condition arises. For example a process may call INCR against the key mycounter every time some event occurs, but from time to time we need to get the value of the counter and reset it to zero atomically using GETSET mycounter 0
.
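A minimal sketch of this atomic reset pattern (the key name is illustrative):
$ ./redis-cli incr mycounter
(integer) 1
$ ./redis-cli incr mycounter
(integer) 2
$ ./redis-cli getset mycounter 0
2
$ ./redis-cli get mycounter
0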
- Increment or decrement the number stored at key by one. If the key does not exist or contains a value of a wrong type, set the key to the value of "0" before performing the increment or decrement operation.-
INCRBY and DECRBY work just like INCR and DECR but instead of incrementing/decrementing by 1 they increment/decrement by the given integer.-
INCR commands are limited to 64 bit signed integers.-
The info command returns different information and statistics about the server in a format that's simple to parse by computers and easy to read by humans.-
-redis_version:0.07 -connected_clients:1 -connected_slaves:0 -used_memory:3187 -changes_since_last_save:0 -last_save_time:1237655729 -total_connections_received:1 -total_commands_processed:1 -uptime_in_seconds:25 -uptime_in_days:0 -All the fields are in the form
field:value
used_memory
is returned in bytes, and is the total number of bytes allocated by the program using malloc
.uptime_in_days
is redundant since the uptime in seconds already contains the full uptime information; this field is mainly present for humans.changes_since_last_save
does not refer to the number of key changes, but to the number of operations that produced some kind of change in the dataset.-$ ./redis-cli set mykey "my binary safe value" -OK -$ ./redis-cli get mykey -my binary safe value -As you can see, using the SET command and the GET command it is trivial to set values to strings and have these strings returned back.
-$ ./redis-cli set counter 100 -OK -$ ./redis-cli incr counter -(integer) 101 -$ ./redis-cli incr counter -(integer) 102 -$ ./redis-cli incrby counter 10 -(integer) 112 -The INCR command parses the string value as an integer, increments it by one, and finally sets the obtained value as the new string value. There are other similar commands like INCRBY, DECR and DECRBY. Actually internally it's always the same command, acting in a slightly different way.
-$ ./redis-cli rpush messages "Hello how are you?" -OK -$ ./redis-cli rpush messages "Fine thanks. I'm having fun with Redis" -OK -$ ./redis-cli rpush messages "I should look into this NOSQL thing ASAP" -OK -$ ./redis-cli lrange messages 0 2 -1. Hello how are you? -2. Fine thanks. I'm having fun with Redis -3. I should look into this NOSQL thing ASAP -Note that LRANGE takes two indexes, the first and the last element of the range to return. Both the indexes can be negative to tell Redis to start to count from the end, so -1 is the last element, -2 is the penultimate element of the list, and so forth.
-$ ./redis-cli incr next.news.id -(integer) 1 -$ ./redis-cli set news:1:title "Redis is simple" -OK -$ ./redis-cli set news:1:url "http://code.google.com/p/redis" -OK -$ ./redis-cli lpush submitted.news 1 -OK -We obtained a unique incremental ID for our news object just incrementing a key, then used this ID to create the object setting a key for every field in the object. Finally the ID of the new object was pushed on the submitted.news list.
-$ ./redis-cli sadd myset 1 -(integer) 1 -$ ./redis-cli sadd myset 2 -(integer) 1 -$ ./redis-cli sadd myset 3 -(integer) 1 -$ ./redis-cli smembers myset -1. 3 -2. 1 -3. 2 -I added three elements to my set and told Redis to return back all the elements. As you can see they are not sorted.
-$ ./redis-cli sismember myset 3 -(integer) 1 -$ ./redis-cli sismember myset 30 -(integer) 0 -"3" is a member of the set, while "30" is not. Sets are very good in order to express relations between objects. For instance we can easily use Redis Sets in order to implement tags.
-$ ./redis-cli sadd news:1000:tags 1 -(integer) 1 -$ ./redis-cli sadd news:1000:tags 2 -(integer) 1 -$ ./redis-cli sadd news:1000:tags 5 -(integer) 1 -$ ./redis-cli sadd news:1000:tags 77 -(integer) 1 -$ ./redis-cli sadd tag:1:objects 1000 -(integer) 1 -$ ./redis-cli sadd tag:2:objects 1000 -(integer) 1 -$ ./redis-cli sadd tag:5:objects 1000 -(integer) 1 -$ ./redis-cli sadd tag:77:objects 1000 -(integer) 1 -To get all the tags for a given object is trivial:
-$ ./redis-cli sinter tag:1:objects tag:2:objects tag:10:objects tag:27:objects -... no result in our dataset composed of just one object ;) ... -Look at the Command Reference to discover other Set related commands, there are a bunch of interesting ones. Also make sure to check the SORT command as both Redis Sets and Lists are sortable.
-$ ./redis-cli zadd hackers 1940 "Alan Kay" -(integer) 1 -$ ./redis-cli zadd hackers 1953 "Richard Stallman" -(integer) 1 -$ ./redis-cli zadd hackers 1965 "Yukihiro Matsumoto" -(integer) 1 -$ ./redis-cli zadd hackers 1916 "Claude Shannon" -(integer) 1 -$ ./redis-cli zadd hackers 1969 "Linus Torvalds" -(integer) 1 -$ ./redis-cli zadd hackers 1912 "Alan Turing" -(integer) 1 -For sorted sets it's a joke to return these hackers sorted by their birth year because actually they are already sorted. Sorted sets are implemented via a dual-ported data structure containing both a skip list and an hash table, so every time we add an element Redis performs an O(log(N)) operation, that's good, but when we ask for sorted elements Redis does not have to do any work at all, it's already all sorted:
-$ ./redis-cli zrange hackers 0 -1 -1. Alan Turing -2. Claude Shannon -3. Alan Kay -4. Richard Stallman -5. Yukihiro Matsumoto -6. Linus Torvalds -Didn't know that Linus was younger than Yukihiro btw ;)
-$ ./redis-cli zrevrange hackers 0 -1 -1. Linus Torvalds -2. Yukihiro Matsumoto -3. Richard Stallman -4. Alan Kay -5. Claude Shannon -6. Alan Turing -A very important note, ZSets have just a "default" ordering but you are still free to call the SORT command against sorted sets to get a different ordering (but this time the server will waste CPU). An alternative for having multiple orders is to add every element in multiple sorted sets at the same time.
-$ ./redis-cli zrangebyscore hackers -inf 1950 -1. Alan Turing -2. Claude Shannon -3. Alan Kay -We asked Redis to return all the elements with a score between negative infinity and 1950 (both extremes are included).
-$ ./redis-cli zremrangebyscore hackers 1940 1960 -(integer) 2 -ZREMRANGEBYSCORE is not the best command name, but it can be very useful, and returns the number of removed elements.
Returns all the keys matching the glob-style pattern as space separated strings. For example if you have in the database the keys "foo" and "foobar" the command "KEYS foo*
"will return "foo foobar".
-Note that while the time complexity for this operation is O(n) the constant times are pretty low. For example Redis running on an entry level laptop can scan a 1 million keys database in 40 milliseconds. Still it's better to consider this one of the slow commands that may ruin the DB performance if not used with care.
-In other words this command is intended only for debugging and special operations like creating a script to change the DB schema. Don't use it in your normal code. Use Redis Sets in order to group together a subset of objects.
-Glob style patterns examples:
-* h?llo will match hello hallo hhllo
-* h*llo will match hllo heeeello
-* h[ae]llo will match hello and hallo, but not hillo
-Use \ to escape special chars if you want to match them verbatim.
Return value
Bulk reply, specifically a string in the form of space separated list of keys. Note that most client libraries will return an Array of keys and not a single string with space separated keys (that is, split by " " is performed in the client library usually).
Return the UNIX TIME of the last DB save executed with success. A client may check if a BGSAVE command succeeded reading the LASTSAVE value, then issuing a BGSAVE command and checking at regular intervals every N seconds if LASTSAVE changed.-
Return the specified element of the list stored at the specified key. 0 is the first element, 1 the second and so on. Negative indexes are supported, for example -1 is the last element, -2 the penultimate and so on.-
If the value stored at key is not of list type an error is returned. If the index is out of range an empty string is returned.-
Note that even if the average time complexity is O(n) asking for the first or the last element of the list is O(1).-
-LPUSH mylist a # now the list is "a" -LPUSH mylist b # now the list is "b","a" -RPUSH mylist c # now the list is "b","a","c" (RPUSH was used this time) --The resulting list stored at mylist will contain the elements "b","a","c".
Return the length of the list stored at the specified key. If the key does not exist zero is returned (the same behaviour as for empty lists). If the value stored at key is not a list an error is returned.-
-The length of the list. -- -
Atomically return and remove the first (LPOP) or last (RPOP) element of the list. For example if the list contains the elements "a","b","c" LPOP will return "a" and the list will become "b","c".-
If the key does not exist or the list is already empty the special value 'nil' is returned.-
Return the specified elements of the list stored at the specified key. Start and end are zero-based indexes. 0 is the first element of the list (the list head), 1 the next element and so on.-
For example LRANGE foobar 0 2 will return the first three elements of the list.-
_start_ and end can also be negative numbers indicating offsets from the end of the list. For example -1 is the last element of the list, -2 the penultimate element and so on.-
Indexes out of range will not produce an error: if start is over the end of the list, or start > end, an empty list is returned. If end is over the end of the list Redis will treat it just like the last element of the list.
-Remove the first count occurrences of the value element from the list. If count is zero all the elements are removed. If count is negative elements are removed from tail to head, instead of from head to tail which is the normal behaviour. So for example LREM with count -2 and _hello_ as value to remove against the list (a,b,c,hello,x,hello,hello) will leave the list (a,b,c,hello,x). The number of removed elements is returned as an integer, see below for more information about the returned value. Note that non existing keys are considered like empty lists by LREM, so LREM against non existing keys will always return 0.-
-The number of removed elements if the operation succeeded -- -
Set the list element at index (see LINDEX for information about the _index_ argument) with the new value. Out of range indexes will generate an error. Note that setting the first or last elements of the list is O(1).-
Trim an existing list so that it will contain only the specified range of elements. Start and end are zero-based indexes. 0 is the first element of the list (the list head), 1 the next element and so on.-
For example LTRIM foobar 0 2 will modify the list stored at the foobar key so that only the first three elements of the list will remain.-
_start_ and end can also be negative numbers indicating offsets from the end of the list. For example -1 is the last element of the list, -2 the penultimate element and so on.-
Indexes out of range will not produce an error: if start is over the end of the list, or start > end, an empty list is left as value. If end is over the end of the list Redis will treat it just like the last element of the list.-
Hint: the obvious use of LTRIM is together with LPUSH/RPUSH. For example:-
- LPUSH mylist <someelement> - LTRIM mylist 0 99 -
The above two commands will push elements into the list taking care that the list will not grow without limits. This is very useful when using Redis to store logs, for example. It is important to note that when used in this way LTRIM is an O(1) operation because in the average case just one element is removed from the tail of the list.-
Get the values of all the specified keys. If one or more keys don't exist or are not of type String, a 'nil' value is returned instead of the value of the specified key, but the operation never fails.-
-$ ./redis-cli set foo 1000 -+OK -$ ./redis-cli set bar 2000 -+OK -$ ./redis-cli mget foo bar -1. 1000 -2. 2000 -$ ./redis-cli mget foo bar nokey -1. 1000 -2. 2000 -3. (nil) -$ -- -
MONITOR is a debugging command that outputs the whole sequence of commands received by the Redis server. It is very handy in order to understand what is happening in the database. This command is used directly via telnet.-
-% telnet 127.0.0.1 6379 -Trying 127.0.0.1... -Connected to segnalo-local.com. -Escape character is '^]'. -MONITOR -+OK -monitor -keys * -dbsize -set x 6 -foobar -get x -del x -get x -set key_x 5 -hello -set key_y 5 -hello -set key_z 5 -hello -set foo_a 5 -hello -
The ability to see all the requests processed by the server is useful in order to spot bugs in the application both when using Redis as a database and as a distributed caching system.-
In order to end a monitoring session just issue a QUIT command by hand.-
Move the specified key from the currently selected DB to the specified destination DB. Note that this command returns 1 only if the key was successfully moved, and 0 if the target key was already there or if the source key was not found at all, so it is possible to use MOVE as a locking primitive.-
-1 if the key was moved -0 if the key was not moved because already present on the target DB or was not found in the current DB. -- -
Set the respective keys to the respective values. MSET will replace old values with new values, while MSETNX will not perform any operation at all even if just a single key already exists.-
Because of this semantic MSETNX can be used in order to set different keys representing different fields of a unique logic object in a way that ensures that either all the fields or none at all are set.-
Both MSET and MSETNX are atomic operations. This means that for instance if the keys A and B are modified, another client talking to Redis can either see the changes to both A and B at once, or no modification at all.-
-1 if all the keys were set -0 if no key was set (at least one key already existed) --
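A small sketch of the all-or-nothing behavior of MSETNX (key names and values are illustrative):
$ ./redis-cli msetnx key1 Hello key2 World
(integer) 1
$ ./redis-cli msetnx key2 Again key3 Third
(integer) 0
$ ./redis-cli exists key3
(integer) 0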
-C: PING -S: +PONG -An inline command is a CRLF-terminated string sent by the client to the server. The server can reply to commands in different ways: -
*
")-C: EXISTS somekey -S: :0 -Since 'somekey' does not exist the server returned ':0'.
-C: SET mykey 6 -C: foobar -S: +OK -The last argument of the command is '6'. This specifies the number of DATA -bytes that will follow (note that even these bytes are terminated by two -additional bytes of CRLF).
"SET mykey 6\r\nfoobar\r\n"-
-C: GET mykey -S: $6 -S: foobar -A bulk reply is very similar to the last argument of a bulk command. The -server sends as the first line a "$" byte followed by the number of bytes -of the actual reply followed by CRLF, then the bytes are sent followed by -additional two bytes for the final CRLF. The exact sequence sent by the -server is:
"$6\r\nfoobar\r\n"-If the requested value does not exist the bulk reply will use the special -value -1 as data length, example:
-C: GET nonexistingkey -S: $-1 -The client library API should not return an empty string, but a nil object, when the requested object does not exist. -For example a Ruby library should return 'nil' while a C library should return -NULL, and so forth.
*
. Example:-C: LRANGE mylist 0 3 -S: *4 -S: $3 -S: foo -S: $3 -S: bar -S: $5 -S: Hello -S: $5 -S: World -The first line the server sent is "*4\r\n" in order to specify that four bulk -writes will follow. Then every bulk write is transmitted.
-C: LRANGE nokey 0 1 -S: *-1 -A client library API SHOULD return a nil object and not an empty list when this -happens. This makes it possible to distinguish between empty lists and non existing ones.
-S: *3 -S: $3 -S: foo -S: $-1 -S: $3 -S: bar -The second element is nil. The client library should return something like this:
-["foo",nil,"bar"] -
-+OK -The client library should return everything after the "+", that is, the string "OK" in the example.
-SET mykey 7 -myvalue -While the following uses the multi bulk command protocol:
-*3 -$3 -SET -$5 -mykey -$7 -myvalue -Commands sent in this format are longer, so currently they are used only in -order to transmit commands containing multiple binary-safe arguments, but -actually this protocol can be used to send every kind of command, without the need to -know if it's an inline, bulk or multi-bulk command.
-$ wget http://redis.googlecode.com/files/redis-1.02.tar.gz -The unstable source code, with more features but not ready for production, can be downloaded using git:
-$ git clone git://github.com/antirez/redis.git -
-$ tar xvzf redis-1.02.tar.gz -$ cd redis-1.02 -$ make -In order to test if the Redis server is working well in your computer make sure to run
make test
and check that all the tests pass.-$ ./redis-server -With the default configuration Redis will log to the standard output so you can check what happens. Later, you can change the default settings.
make
and it is called redis-cli
For instance to set a key and read back the value use the following:-$ ./redis-cli set mykey somevalue -OK -$ ./redis-cli get mykey -somevalue -What about adding elements to a list:
-$ ./redis-cli lpush mylist firstvalue -OK -$ ./redis-cli lpush mylist secondvalue -OK -$ ./redis-cli lpush mylist thirdvalue -OK -$ ./redis-cli lrange mylist 0 -1 -1. thirdvalue -2. secondvalue -3. firstvalue -$ ./redis-cli rpop mylist -firstvalue -$ ./redis-cli lrange mylist 0 -1 -1. thirdvalue -2. secondvalue -
Ask the server to silently close the connection.-
slaveof 192.168.1.100 6379
. We provide a Replication Howto if you want to know more about this feature.
./redis-server /etc/redis.conf
-This is NOT required. The server will start even without a configuration file -using a default built-in configuration.
-$ telnet localhost 6379 -Trying 127.0.0.1... -Connected to localhost. -Escape character is '^]'. -SET foo 3 -bar -+OK -The first line we sent to the server is "set foo 3". This means "set the key -foo with the following three bytes I'll send you". The following line is -the "bar" string, that is, the three bytes. So the effect is to set the -key "foo" to the value "bar". Very simple!
-GET foo -$3 -bar -Ok that's very similar to 'set', just the other way around. We sent "get foo", -the server replied with a first line that is just the $ character followed by -the number of bytes the value stored at key contained, followed by the actual -bytes. Again "\r\n" are appended both to the bytes count and the actual data. In Redis slang this is called a bulk reply.
-GET blabla -$-1 -When the key does not exist instead of the length, just the "$-1" string is sent. Since a -1 length of a bulk reply has no meaning it is used in order to specify a 'nil' value and distinguish it from a zero length value. Another way to check if a given key exists or not is indeed the EXISTS command:
-EXISTS nokey -:0 -EXISTS foo -:1 -As you can see the server replied ':0' the first time since 'nokey' does not -exist, and ':1' for 'foo', a key that actually exists. Replies starting with the colon character are integer reply.
Return a randomly selected key from the currently selected DB.-
-- SUNION, SDIFF, SUNIONSTORE, SDIFFSTORE commands implemented. (Aman Gupta, antirez) -- Non blocking replication. Now while N slaves are synchronizing, the master will continue to ask to client queries. (antirez) -- PHP client ported to PHP5 (antirez) -- FLUSHALL/FLUSHDB no longer sync on disk. Just increment the dirty counter by the number of elements removed, that will probably trigger a background saving operation (antirez) -- INCRBY/DECRBY now support 64bit increments, with tests (antirez) -- New fields in INFO command, bgsave_in_progress and replication related (antirez) -- Ability to specify a different file name for the DB (... can't remember ...) -- GETSET command, atomic GET + SET (antirez) -- SMOVE command implemented, atomic move-element across sets operation (antirez) -- Ability to work with huge data sets, tested up to 350 million keys (antirez) -- Warns if /proc/sys/vm/overcommit_memory is set to 0 on Linux. Also make sure to don't resize the hash tables while the child process is saving in order to avoid copy-on-write of memory pages (antirez) -- Infinite number of arguments for MGET and all the other commands (antirez) -- CPP client (Brian Hammond) -- DEL is now a vararg, IMPORTANT: memory leak fixed in loading DB code (antirez) -- Benchmark utility now supports random keys (antirez) -- Timestamp in log lines (antirez) -- Fix SINTER/UNIONSTORE to allow for &=/|= style operations (i.e. SINTERSTORE set1 set1 set2) (Aman Gupta) -- Partial qsort implemented in SORT command, only when both BY and LIMIT is used (antirez) -- Allow timeout=0 config to disable client timeouts (Aman Gupta) -- Alternative (faster/simpler) ruby client API compatible with Redis-rb (antirez) -- S*STORE now return the cardinality of the resulting set (antirez) -- TTL command implemented (antirez) -- Critical bug about glueoutputbuffers=yes fixed. Under load and with pipelining and clients disconnecting on the middle of the chat with the server, Redis could block. (antirez) -- Different replication fixes (antirez) -- SLAVEOF command implemented for remote replication management (antirez) -- Issue with redis-client used in scripts solved, now to check if the latest argument must come from standard input we do not check that stdin is or not a tty but the command arity (antirez) -- Warns if using the default config (antirez) -- maxclients implemented, see redis.conf for details (antirez) -- max bytes of a received command enlarged from 1k to 32k (antirez) --
-2009-06-16 client libraries updated (antirez) -2009-06-16 Better handling of background saving process killed or crashed (antirez) -2009-06-14 number of keys info in INFO command (Diego Rosario Brogna) -2009-06-14 SPOP documented (antirez) -2009-06-14 Clojure library (Ragnar Dahlén) -2009-06-10 It is now possible to specify - as config file name to read it from stdin (antirez) -2009-06-10 max bytes in an inline command raised to 1024*1024 bytes, in order to allow for very large MGETs and still protect from client crashes (antirez) -2009-06-08 SPOP implemented. Hash table resizing for Sets and Expires too. Changed the resize policy to play better with RANDOMKEY and SPOP. (antirez) -2009-06-07 some minor changes to the backtrace code (antirez) -2009-06-07 enable backtrace capabilities only for Linux and MacOSX (antirez) -2009-06-07 Dump a backtrace on sigsegv/sigbus, original coded (Diego Rosario Brogna) -2009-06-05 Avoid a busy loop while sending very large replies against very fast links, this allows to be more responsive with other clients even under a KEY * against the loopback interface (antirez) -2009-06-05 Kill the background saving process before performing SHUTDOWN to avoid races (antirez) -2009-06-05 LREM now returns :0 for non existing keys (antirez) -2009-06-05 added config.h for #ifdef business isolation, added fstat64 for Mac OS X (antirez) -2009-06-04 macosx specific zmalloc.c, uses malloc_size function in order to avoid to waste memory and time to put an additional header (antirez) -2009-06-04 DEBUG OBJECT implemented (antirez) -2009-06-03 shareobjectspoolsize implemented in reds.conf, in order to control the pool size when object sharing is on (antirez) -2009-05-27 maxmemory implemented (antirez) --
Atomically renames the key oldkey to newkey. If the source and destination name are the same an error is returned. If newkey already exists it is overwritten.-
Rename oldkey into newkey but fails if the destination key newkey already exists.-
-1 if the key was renamed -0 if the target key already exists -- -
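A short sketch of the difference between the two commands (key names and values are illustrative):
$ ./redis-cli set oldkey Hello
OK
$ ./redis-cli set newkey World
OK
$ ./redis-cli renamenx oldkey newkey
(integer) 0
$ ./redis-cli rename oldkey newkey
OK
$ ./redis-cli get newkey
Hello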
<->
slave link goes down for some reason. If the master receives multiple concurrent slave synchronization requests it performs a single background saving in order to serve them all.-slaveof 192.168.1.1 6379 --Of course you need to replace 192.168.1.1 6379 with your master ip address (or hostname) and port. -
Atomically return and remove the last (tail) element of the srckey list, and push the element as the first (head) element of the dstkey list. For example if the source list contains the elements "a","b","c" and the destination list contains the elements "foo","bar" after an RPOPLPUSH command the content of the two lists will be "a","b" and "c","foo","bar".-
If the key does not exist or the list is already empty the special value 'nil' is returned. If the srckey and dstkey are the same the operation is equivalent to removing the last element from the list and pushing it as the first element of the list, so it's a "list rotation" command.-
Redis lists are often used as queues in order to exchange messages between different programs. A program can add a message performing an LPUSH operation against a Redis list (we call this program a Producer), while another program (that we call Consumer) can process the messages performing an RPOP command in order to start reading the messages from the oldest.-
Unfortunately if a Consumer crashes just after an RPOP operation the message gets lost. RPOPLPUSH solves this problem since the returned message is added to another "backup" list. The Consumer can later remove the message from the backup list using the LREM command when the message was correctly processed.-
Another process, called Helper, can monitor the "backup" list to check fortimed out entries to repush against the main queue.-
Using RPOPLPUSH with the same source and destination key a process can visit all the elements of an N-elements List in O(N) without transferring the full list from the server to the client in a single LRANGE operation. Note that a process can traverse the list even while other processes are actively RPUSHing against the list, and still no element will be skipped.-
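A minimal sketch of the rotation use case described above (list name and elements are illustrative):
$ ./redis-cli rpush mylist a
OK
$ ./redis-cli rpush mylist b
OK
$ ./redis-cli rpush mylist c
OK
$ ./redis-cli rpoplpush mylist mylist
c
$ ./redis-cli lrange mylist 0 -1
1. c
2. a
3. b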
Add the string value to the head (LPUSH) or tail (RPUSH) of the list stored at key. If the key does not exist an empty list is created just before the append operation. If the key exists but is not a List an error is returned.-
Add the specified member to the set value stored at key. If member is already a member of the set no operation is performed. If key does not exist a new set with the specified member as sole member is created. If the key exists but does not hold a set value an error is returned.-
-1 if the new element was added -0 if the element was already a member of the set --
Save the DB on disk. The server hangs while the saving is not completed, no connection is served in the meanwhile. An OK code is returned when the DB was fully stored on disk.-
Return the set cardinality (number of elements). If the key does not exist 0 is returned, like for empty sets.-
-the cardinality (number of elements) of the set as an integer. -- -
Return the members of a set resulting from the difference between the first set provided and all the successive sets. Example:-
-key1 = x,a,b,c -key2 = c -key3 = a,d -SDIFF key1,key2,key3 => x,b -
Non existing keys are considered like empty sets.-
This command works exactly like SDIFF but instead of being returned the resulting set is stored in dstkey.-
Select the DB having the specified zero-based numeric index. By default every new client connection is automatically selected to DB 0.-
Set the string value as value of the key. The string can't be longer than 1073741824 bytes (1 GB).-
SETNX works exactly like SET with the only difference that if the key already exists no operation is performed. SETNX actually means "SET if Not eXists".-
-1 if the key was set -0 if the key was not set -
SETNX can also be seen as a locking primitive. For instance to acquire the lock of the key foo, the client could try the following:-
-SETNX lock.foo <current UNIX time + lock timeout + 1> -
If SETNX returns 1 the client acquired the lock, setting the lock.foo key to the UNIX time at which the lock should no longer be considered valid. The client will later use DEL lock.foo in order to release the lock.-
If SETNX returns 0 the key is already locked by some other client. We can either return to the caller if it's a non blocking lock, or enter a loop retrying to hold the lock until we succeed or some kind of timeout expires.-
In the above locking algorithm there is a problem: what happens if a client fails, crashes, or is otherwise not able to release the lock? It's possible to detect this condition because the lock key contains a UNIX timestamp. If such a timestamp is <= the current Unix time the lock is no longer valid.-
When this happens we can't just call DEL against the key to remove the lock and then try to issue a SETNX, as there is a race condition here, when multiple clients detected an expired lock and are trying to release it.-
Fortunately it's possible to avoid this issue using the following algorithm. Let's see how C4, our sane client, uses the good algorithm:-
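A minimal sketch of that algorithm as a redis-cli session (the key name and timestamps are illustrative; the decisions between the steps are normally taken by the client code):
$ ./redis-cli setnx lock.foo 1291061361
(integer) 0
$ ./redis-cli get lock.foo
1291061331
$ ./redis-cli getset lock.foo 1291061362
1291061331
C4 first tries SETNX: the 0 reply means the lock is held by someone else. C4 then GETs the key and finds a timestamp in the past, so the lock is expired. Instead of deleting the key, C4 issues GETSET with its own new timestamp: since the old value returned by GETSET is still the expired timestamp, C4 has acquired the lock. If another client had been faster, GETSET would have returned a non-expired timestamp and C4 would simply retry, having only pushed the other client's expire time a few seconds into the future.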
Stop all the clients, save the DB, then quit the server. This command makes sure that the DB is switched off without the loss of any data. This is not guaranteed if the client simply uses "SAVE" and then "QUIT" because other clients may alter the DB data between the two commands.-
Return the members of a set resulting from the intersection of all the sets held at the specified keys. Like in LRANGE the result is sent to the client as a multi-bulk reply (see the protocol specification for more information). If just a single key is specified, then this command produces the same result as SMEMBERS. Actually SMEMBERS is just syntax sugar for SINTER.-
Non existing keys are considered like empty sets, so if one of the keys is missing an empty set is returned (since the intersection with an empty set always is an empty set).-
This command works exactly like SINTER but instead of being returned the resulting set is stored as dstkey.-
Return 1 if member is a member of the set stored at key, otherwise 0 is returned.-
-1 if the element is a member of the set -0 if the element is not a member of the set OR if the key does not exist -- -
The SLAVEOF command can change the replication settings of a slave on the fly. If a Redis server is already acting as slave, the command-SLAVEOF NO ONE
will turn off the replication turning the Redis server into a MASTER. In the proper form SLAVEOF hostname port
will make the server a slave of the specific server listening at the specified hostname and port.
If a server is already a slave of some master, SLAVEOF hostname port
will stop the replication against the old server and start the synchronization against the new one, discarding the old dataset.
-The form SLAVEOF no one
will stop replication turning the server into a MASTER but will not discard the already replicated dataset. So if the old master stops working it is possible to turn the slave into a master and set the application to use the new master in read/write. Later when the other Redis server is fixed it can be configured in order to work as slave.
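Both forms can be sketched with redis-cli as follows (the master address is illustrative):
$ ./redis-cli slaveof 192.168.1.1 6379
OK
$ ./redis-cli slaveof no one
OK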
-Return all the members (elements) of the set value stored at key. This is just syntax glue for SINTER.-
Move the specified member from the set at srckey to the set at dstkey. This operation is atomic, in every given moment the element will appear to be in the source or destination set for accessing clients.-
If the source set does not exist or does not contain the specified element no operation is performed and zero is returned, otherwise the element is removed from the source set and added to the destination set. On success one is returned, even if the element was already present in the destination set.-
An error is raised if the source or destination keys contain a non Set value.-
-1 if the element was moved -0 if the element was not found on the first set and no operation was performed -- -
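A brief sketch of the atomic move (set names and the member are illustrative; the order of SMEMBERS output may differ):
$ ./redis-cli sadd myset one
(integer) 1
$ ./redis-cli sadd myotherset two
(integer) 1
$ ./redis-cli smove myset myotherset one
(integer) 1
$ ./redis-cli smembers myotherset
1. two
2. one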
[
BY pattern]
[
LIMIT start count]
[
GET pattern]
[
ASC|DESC]
[
ALPHA]
[
STORE dstkey]
=
-Sort the elements contained in the List, Set, or Sorted Set value at key. By default sorting is numeric with elements being compared as double precision floating point numbers. This is the simplest form of SORT:-
-SORT mylist -
Assuming mylist contains a list of numbers, the return value will be the list of numbers ordered from the smallest to the biggest number. In order to get the sorting in reverse order use DESC:-
-SORT mylist DESC -
The ASC option is also supported but it's the default so you don't really need it. If you want to sort lexicographically use ALPHA. Note that Redis is utf-8 aware assuming you set the right value for the LC_COLLATE environment variable.-
Sort is able to limit the number of returned elements using the LIMIT option:-
-SORT mylist LIMIT 0 10 -
In the above example SORT will return only 10 elements, starting from the first one (start is zero-based). Almost all the sort options can be mixed together. For example the command:-
-SORT mylist LIMIT 0 10 ALPHA DESC -
Will sort mylist lexicographically, in descending order, returning only the first 10 elements.-
Sometimes you want to sort elements using external keys as weights to compare instead of comparing the actual List, Set or Sorted Set elements. For example the list mylist may contain the elements 1, 2, 3, 4, that are just unique IDs of objects stored at object_1, object_2, object_3 and object_4, while the keys weight_1, weight_2, weight_3 and weight_4 can contain weights we want to use to sort our list of object identifiers. We can use the following command:-
-SORT mylist BY weight_* -
the BY option takes a pattern (-weight_*
in our example) that is used in order to generate the key names of the weights used for sorting. Weight key names are obtained substituting the first occurrence of *
with the actual value of the elements on the list (1,2,3,4 in our example).
Our previous example will return just the sorted IDs. Often it is needed to get the actual objects sorted (object_1, ..., object_4 in the example). We can do it with the following command:-
-SORT mylist BY weight_* GET object_* -
Note that GET can be used multiple times in order to get more keys for every element of the original List, Set or Sorted Set sorted.-
Since Redis >= 1.1 it's possible to also GET the list elements themselves using the special # pattern:-
-SORT mylist BY weight_* GET object_* GET # -
By default SORT returns the sorted elements as its return value. Using the STORE option, instead of returning the elements SORT will store them as a Redis List in the specified key. An example:-
-SORT mylist BY weight_* STORE resultkey -
An interesting pattern using SORT ... STORE consists in associating an EXPIRE timeout with the resulting key so that in applications where the result of a sort operation can be cached for some time other clients will use the cached list instead of calling SORT for every request. When the key times out an updated version of the cache can be created using SORT ... STORE again.-
Note that when implementing this pattern it is important to avoid having multiple clients try to rebuild the cached version of the cache at the same time, so some form of locking should be implemented (for instance using SETNX).-
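A short sketch of the caching pattern described above, reusing the mylist/weight_* example (the destination key name, timeout and replies are illustrative):
$ ./redis-cli sort mylist BY weight_* STORE mylist.cached
(integer) 4
$ ./redis-cli expire mylist.cached 60
(integer) 1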
Remove a random element from a Set returning it as return value. If the Set is empty or the key does not exist, a nil object is returned.-
The SRANDMEMBER command does a similar work but the returned element is not removed from the Set.-
Return a random element from a Set, without removing the element. If the Set is empty or the key does not exist, a nil object is returned.-
The SPOP command does a similar work but the returned element is popped (removed) from the Set.-
Remove the specified member from the set value stored at key. If _member_ was not a member of the set no operation is performed. If key does not hold a set value an error is returned.-
-1 if the element was removed -0 if the element was not a member of the set -- -
sds.c
(simple dynamic strings). This library caches the current length of the string, so obtaining the length of a Redis string is an O(1) operation (but currently there is no such STRLEN command. It will likely be added later).
Return the members of a set resulting from the union of all the sets held at the specified keys. Like in LRANGE the result is sent to the client as a multi-bulk reply (see the protocol specification for more information). If just a single key is specified, then this command produces the same result as SMEMBERS.-
Non existing keys are considered like empty sets.-
This command works exactly like SUNION but instead of being returned the resulting set is stored as dstkey. Any existing value in dstkey will be over-written.-
Language | Name | Sharding | Pipelining | 1.1 | 1.0 |
ActionScript 3 | as3redis | No | Yes | Yes | Yes |
Clojure | redis-clojure | No | No | Partial | Yes |
Common Lisp | CL-Redis | No | No | No | Yes |
Erlang | erldis | No | Looks like | No | Looks like |
Go | Go-Redis | No | Yes | Yes | Yes |
Haskell | haskell-redis | No | No | No | Yes |
Java | JDBC-Redis | No | No | No | Yes |
Java | JRedis | No | Yes | Yes | Yes |
LUA | redis-lua | No | No | Yes | Yes |
Perl | Redis Client | No | No | No | Yes |
Perl | AnyEvent::Redis | No | No | No | Yes |
PHP | Redis PHP Bindings | No | No | No | Yes |
PHP | phpredis (C) | No | No | No | Yes |
PHP | Predis | Yes | Yes | Yes | Yes |
PHP | Redisent | Yes | No | No | Yes |
Python | Python Client | No | No | No | Yes |
Python | py-redis | No | No | Partial | Yes |
Python | txredis | No | No | No | Yes |
Ruby | redis-rb | Yes | Yes | Yes | Yes |
Scala | scala-redis | Yes | No | No | Yes |
TCL | TCL | No | No | Yes | Yes |
The TTL command returns the remaining time to live in seconds of a key that has an EXPIRE set. This introspection capability allows a Redis client to check how many seconds a given key will continue to be part of the dataset. If the key does not exist or does not have an associated expire, -1 is returned.-
-SET foo bar -Redis will store our data permanently, so we can later ask for "What is the value stored at key foo?" and Redis will reply with bar:
-GET foo => bar -Other common operations provided by key-value stores are DEL used to delete a given key, and the associated value, SET-if-not-exists (called SETNX on Redis) that sets a key only if it does not already exist, and INCR that is able to atomically increment a number stored at a given key:
-SET foo 10 -INCR foo => 11 -INCR foo => 12 -INCR foo => 13 -
-x = GET foo -x = x + 1 -SET foo x -The problem is that doing the increment this way will work as long as there is only a client working with the value x at a time. See what happens if two computers are accessing this data at the same time:
-x = GET foo (yields 10) -y = GET foo (yields 10) -x = x + 1 (x is now 11) -y = y + 1 (y is now 11) -SET foo x (foo is now 11) -SET foo y (foo is now 11) -Something is wrong with that! We incremented the value two times, but instead of going from 10 to 12 our key holds 11. This is because the INCR operation done with
GET / increment / SET
is not an atomic operation. Instead the INCR provided by Redis, Memcached, and similar systems is an atomic implementation: the server takes care of protecting the get-increment-set for all the time needed to complete in order to prevent simultaneous accesses.-LPUSH mylist a (now mylist holds one element list 'a') -LPUSH mylist b (now mylist holds 'b,a') -LPUSH mylist c (now mylist holds 'c,b,a') -LPUSH means Left Push, that is, add an element to the left (or to the head) of the list stored at mylist. If the key mylist does not exist it is automatically created by Redis as an empty list before the PUSH operation. As you can imagine, there is also the RPUSH operation that adds the element on the right of the list (on the tail).
username:updates
for instance. There are operations to get data or information from Lists, of course. For instance LRANGE returns a range of the list, or the whole list.-LRANGE mylist 0 1 => c,b -LRANGE uses zero-based indexes, that is the first element is 0, the second 1, and so on. The command arguments are
LRANGE key first-index last-index
. The last index argument can be negative, with a special meaning: -1 is the last element of the list, -2 the penultimate, and so on. So in order to get the whole list we can use:-LRANGE mylist 0 -1 => c,b,a -Other important operations are LLEN that returns the length of the list, and LTRIM that is like LRANGE but instead of returning the specified range trims the list, so it is like Get range from mylist, Set this range as new value but atomic. We will use only these List operations, but make sure to check the Redis documentation to discover all the List operations supported by Redis. -
-SADD myset a -SADD myset b -SADD myset foo -SADD myset bar -SCARD myset => 4 -SMEMBERS myset => bar,a,foo,b -Note that SMEMBERS does not return the elements in the same order we added them, since Sets are unsorted collections of elements. When you want to store the order it is better to use Lists instead. Some more operations against Sets:
-SADD mynewset b -SADD mynewset foo -SADD mynewset hello -SINTER myset mynewset => foo,b -SINTER can return the intersection between Sets but it is not limited to two sets, you may ask for intersection of 4,5 or 10000 Sets. Finally let's check how SISMEMBER works:
-SISMEMBER myset foo => 1 -SISMEMBER myset notamember => 0 -Ok I think we are ready to start coding! -
-INCR global:nextUserId => 1000 -SET uid:1000:username antirez -SET uid:1000:password p1pp0 -We use the global:nextUserId key in order to always get a unique ID for every new user. Then we use this unique ID to populate all the other keys holding our user data. This is a Design Pattern with key-value stores! Keep it in mind. -Besides the fields already defined, we need some more stuff in order to fully define a User. For example sometimes it can be useful to be able to get the user ID from the username, so we set this key too:
-SET username:antirez:uid 1000 -This may appear strange at first, but remember that we are only able to access data by key! It's not possible to tell Redis to return the key that holds a specific value. This is also our strength, this new paradigm is forcing us to organize the data so that everything is accessible by primary key, speaking with relational DBs language. -
uid:1000:followers => Set of uids of all the followers
uid:1000:following => Set of uids of all the users this user is following
Another important thing we need is a place where we can add the updates to display in the user home page. We'll need to access this data in chronological order later, from the most recent update to the oldest ones, so the perfect kind of Value for this job is a List. Basically every new update will be LPUSHed into the user updates key, and thanks to LRANGE we can implement pagination and so on. Note that we use the words updates and posts interchangeably, since updates are actually "little posts" in some way.
uid:1000:posts => a List of post ids, every new post is LPUSHed here.
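For instance, fetching the first two "pages" of a user's updates might look like this (the post id and the page size of 10 are purely illustrative):
LPUSH uid:1000:posts 10343   (the newest post id goes to the head of the list)
LRANGE uid:1000:posts 0 9    => the 10 most recent post ids (first page)
LRANGE uid:1000:posts 10 19  => the next 10 post ids (second page)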
SET uid:1000:auth fea5e81ac8ca77622bed1c2132a021f9
SET auth:fea5e81ac8ca77622bed1c2132a021f9 1000
In order to authenticate a user we'll do this simple work (login.php):
First of all we check whether the username:<username>:uid key actually exists; if it does we have the user id, we verify the password stored at uid:<userid>:password, and finally we hand the authentication secret to the browser as a cookie:

include("retwis.php");

# Form sanity checks
if (!gt("username") || !gt("password"))
    goback("You need to enter both username and password to login.");

# The form is ok, check if the username exists
$username = gt("username");
$password = gt("password");
$r = redisLink();
$userid = $r->get("username:$username:uid");
if (!$userid)
    goback("Wrong username or password");
$realpassword = $r->get("uid:$userid:password");
if ($realpassword != $password)
    goback("Wrong username or password");

# Username / password OK, set the cookie and redirect to index.php
$authsecret = $r->get("uid:$userid:auth");
setcookie("auth",$authsecret,time()+3600*24*365);
header("Location: index.php");

This happens every time the user logs in, but we also need a function isLoggedIn in order to check if a given user is already authenticated or not. These are the logical steps performed by the
isLoggedIn
function:
Check whether the browser sent an "auth" cookie at all: if there is no cookie the user is simply not logged in.
If the cookie is present and holds the random string <authcookie>, check whether the key auth:<authcookie> exists, and what the value (the user id) is (1000 in the example).
To be safe, also verify that uid:<userid>:auth still holds the same <authcookie>, and finally load the user info into the global $User array.

function isLoggedIn() {
    global $User, $_COOKIE;

    if (isset($User)) return true;

    if (isset($_COOKIE['auth'])) {
        $r = redisLink();
        $authcookie = $_COOKIE['auth'];
        if ($userid = $r->get("auth:$authcookie")) {
            if ($r->get("uid:$userid:auth") != $authcookie) return false;
            loadUserInfo($userid);
            return true;
        }
    }
    return false;
}

function loadUserInfo($userid) {
    global $User;

    $r = redisLink();
    $User['id'] = $userid;
    $User['username'] = $r->get("uid:$userid:username");
    return true;
}

Having
loadUserInfo
as a separate function is overkill for our application, but it's a good template for a more complex one. The only thing still missing from the authentication is the logout. What do we do on logout? That's simple: we just change the random string stored at uid:1000:auth, remove the old auth:<oldauthstring> key, and add a new auth:<newauthstring> one.
This is also why, in isLoggedIn, we don't blindly trust the user id found at auth:<randomstring>, but double check it against uid:1000:auth. The true authentication string is the latter; auth:<randomstring> is just an authentication key that may even be volatile, and if there are bugs in the program or a script gets interrupted we may even end up with multiple auth:<something> keys pointing to the same user id. The logout code is the following (logout.php):

include("retwis.php");

if (!isLoggedIn()) {
    header("Location: index.php");
    exit;
}

$r = redisLink();
$newauthsecret = getrand();
$userid = $User['id'];
$oldauthsecret = $r->get("uid:$userid:auth");

$r->set("uid:$userid:auth",$newauthsecret);
$r->set("auth:$newauthsecret",$userid);
$r->delete("auth:$oldauthsecret");

header("Location: index.php");

That is just what we described and should be simple to understand.
INCR global:nextPostId => 10343
SET post:10343 "$owner_id|$time|I'm having fun with Retwis"
As you can see, the user id and the time of the post are stored directly inside the string; we don't need to look up posts by time or user id in the example application, so it is better to compact everything inside the post string.
-include("retwis.php"); - -if (!isLoggedIn() || !gt("status")) { - header("Location:index.php"); - exit; -} - -$r = redisLink(); -$postid = $r->incr("global:nextPostId"); -$status = str_replace("\n"," ",gt("status")); -$post = $User['id']."|".time()."|".$status; -$r->set("post:$postid",$post); -$followers = $r->smembers("uid:".$User['id'].":followers"); -if ($followers === false) $followers = Array(); -$followers[] = $User['id']; /* Add the post to our own posts too */ - -foreach($followers as $fid) { - $r->push("uid:$fid:posts",$postid,false); -} -# Push the post on the timeline, and trim the timeline to the -# newest 1000 elements. -$r->push("global:timeline",$postid,false); -$r->ltrim("global:timeline",0,1000); - -header("Location: index.php"); -The core of the function is the
foreach
loop. Using SMEMBERS we get all the followers of the current user, then the loop LPUSHes the post id into the uid:<userid>:posts list of every follower.

function showPost($id) {
    $r = redisLink();
    $postdata = $r->get("post:$id");
    if (!$postdata) return false;

    $aux = explode("|",$postdata);
    $id = $aux[0];
    $time = $aux[1];
    $username = $r->get("uid:$id:username");
    $post = join("|",array_splice($aux,2,count($aux)-2));
    $elapsed = strElapsed($time);
    $userlink = "<a class=\"username\" href=\"profile.php?u=".urlencode($username)."\">".utf8entities($username)."</a>";

    echo('<div class="post">'.$userlink.' '.utf8entities($post)."<br>");
    echo('<i>posted '.$elapsed.' ago via web</i></div>');
    return true;
}

function showUserPosts($userid,$start,$count) {
    $r = redisLink();
    $key = ($userid == -1) ? "global:timeline" : "uid:$userid:posts";
    $posts = $r->lrange($key,$start,$start+$count);
    $c = 0;
    foreach($posts as $p) {
        if (showPost($p)) $c++;
        if ($c == $count) break;
    }
    return count($posts) == $count+1;
}
showPost
will simply convert and print a post in HTML, while showUserPosts
gets a range of posts, passing each of them to showPost.
SADD uid:1000:following 1001
SADD uid:1001:followers 1000
Note the same pattern again and again: in theory with a relational database the list of following and followers would be a single table with fields like
following_id
and follower_id
. With queries you can extract the followers or following of every user. With a key-value DB that's a bit different as we need to set both the 1000 is following 1001
and 1001 is followed by 1000
relations. This is the price to pay, but on the other hand accessing the data is simpler and ultra-fast. And having these things as separate Sets allows us to do interesting stuff: for example using SINTER we can compute the intersection of the 'following' Sets of two different users, so we may add a feature to our Twitter clone that, when you visit somebody else's profile, instantly tells you things like "you and foobar have 34 followers in common".
When a single server can no longer hold all the data, the classic way to distribute keys among N Redis servers is to hash the key name:
server_id = crc32(key) % number_of_servers
This has a lot of problems, since if you add one server you need to move too many keys around, but this is the general idea even if you use a better hashing scheme like consistent hashing.
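A minimal PHP sketch of this idea (purely illustrative: the $servers list and the serverForKey() helper are hypothetical and not part of Retwis):

# Hypothetical list of Redis servers.
$servers = array(
    array("host" => "10.0.0.1", "port" => 6379),
    array("host" => "10.0.0.2", "port" => 6379),
    array("host" => "10.0.0.3", "port" => 6379),
);

# Map a key name to one of the servers using the hashing scheme above.
# abs() keeps the index non-negative on platforms where crc32() returns
# a signed integer.
function serverForKey($key, $servers) {
    return $servers[abs(crc32($key)) % count($servers)];
}

$s = serverForKey("uid:1000:username", $servers);
# ...connect to $s["host"]:$s["port"] and send the command there.

The one kind of key that does not distribute well this way is a shared counter such as the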
global:nextPostId
key. How do we fix this problem? A single server will receive a lot of INCR operations. The simplest way to handle this is to have a dedicated server just for increments; this is probably overkill unless you have really a lot of traffic. There is another trick: the ID does not really need to be an incremental number, it just needs to be unique. So you can generate a random string long enough to make a collision unlikely (practically impossible, if it is md5-sized) and you are done. We have successfully eliminated our main obstacle to making Retwis really horizontally scalable!
Return the type of the value stored at key in the form of a string. The type can be one of "none", "string", "list" or "set". "none" is returned if the key does not exist.
-"none" if the key does not exist -"string" if the key contains a String value -"list" if the key contains a List value -"set" if the key contains a Set value -
Add the specified member with the specified score to the sorted set stored at key. If member is already a member of the sorted set the score is updated, and the element reinserted in the right position to ensure sorting. If key does not exist a new sorted set with the specified member as sole member is created. If the key exists but does not hold a sorted set value an error is returned.
The score value can be the string representation of a double precision floating point number.
For an introduction to sorted sets check the Introduction to Redis data types page.
1 if the new element was added
0 if the element was already a member of the sorted set and the score was updated
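A quick illustrative example of ZADD and its return value (the myzset key is made up for the example):
ZADD myzset 1 "a" => 1 (new element added)
ZADD myzset 2 "b" => 1 (new element added)
ZADD myzset 5 "a" => 0 ("a" was already present, its score is now 5)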
Return the sorted set cardinality (number of elements). If the key does not exist 0 is returned, like for empty sorted sets.
the cardinality (number of elements) of the set as an integer.
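Continuing the illustrative myzset example from above:
ZCARD myzset    => 2
ZCARD nosuchkey => 0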
If member already exists in the sorted set, add the increment to its score and update the position of the element in the sorted set accordingly. If member does not already exist in the sorted set it is added with increment as its score (that is, as if the previous score was virtually zero). If key does not exist a new sorted set with the specified member as sole member is created. If the key exists but does not hold a sorted set value an error is returned.
The score value can be the string representation of a double precision floating point number. It's possible to provide a negative value to perform a decrement.
For an introduction to sorted sets check the Introduction to Redis data types page.
The score of the member after the increment is performed.
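For instance, with the illustrative myzset built above (the syntax is ZINCRBY key increment member):
ZINCRBY myzset 3 "b"  => 5 (the score of "b" was 2, now it is 5)
ZINCRBY myzset -1 "b" => 4
ZINCRBY myzset 10 "c" => 10 ("c" did not exist, so it is added with score 10)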
Return the specified elements of the sorted set at the specified key. The elements are considered sorted from the lowest to the highest score when using ZRANGE, and in the reverse order when using ZREVRANGE. Start and end are zero-based indexes. 0 is the first element of the sorted set (the one with the lowest score when using ZRANGE), 1 the next element by score, and so on.
start and end can also be negative numbers indicating offsets from the end of the sorted set. For example -1 is the last element of the sorted set, -2 the penultimate element and so on.
Indexes out of range will not produce an error: if start is over the end of the sorted set, or start > end, an empty list is returned. If end is over the end of the sorted set Redis will treat it just like the last element of the sorted set.
It's possible to pass the WITHSCORES option to the command in order to return not only the values but also the scores of the elements. Redis will return the data as a single list composed of value1,score1,value2,score2,...,valueN,scoreN, but client libraries are free to return a more appropriate data type (we think that the best return type for this command is an Array of two-element Arrays / Tuples, in order to preserve the sorting).
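An illustrative example, still using the myzset key built above (after the ZINCRBY calls the scores are b=4, a=5, c=10):
ZRANGE myzset 0 -1           => b,a,c
ZREVRANGE myzset 0 -1        => c,a,b
ZRANGE myzset 0 1 WITHSCORES => b,4,a,5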
Return all the elements in the sorted set at key with a score between min and max (including elements with score equal to min or max).
The elements having the same score are returned sorted lexicographically as ASCII strings (this follows from a property of Redis sorted sets and does not involve further computation).
Using the optional LIMIT it's possible to get only a range of the matching elements in an SQL-like way. Note that if offset is large the command needs to traverse the list for offset elements, and this adds up to the O(M) figure.
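For example, with the same illustrative myzset (b=4, a=5, c=10):
ZRANGEBYSCORE myzset 4 5            => b,a
ZRANGEBYSCORE myzset 4 10 LIMIT 0 2 => b,a
ZRANGEBYSCORE myzset 4 10 LIMIT 2 1 => c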
Remove the specified member from the sorted set value stored at key. If member was not a member of the set no operation is performed. If key does not hold a sorted set value an error is returned.
1 if the element was removed
0 if the element was not a member of the set
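Continuing the illustrative example:
ZREM myzset "b"  => 1 ("b" removed)
ZREM myzset "zz" => 0 ("zz" was not in the sorted set)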
Remove all the elements in the sorted set at key with a score between min and max (including elements with score equal to min or max).
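For instance, in the illustrative example above this removes "a" (score 5), leaving only "c":
ZREMRANGEBYSCORE myzset 0 5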
Return the score of the specified element of the sorted set at key. If the specified element does not exist in the sorted set, or the key does not exist at all, a special 'nil' value is returned.
the score (a double precision floating point number) represented as a string.
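For example, assuming "c" is still in the illustrative myzset with score 10:
ZSCORE myzset "c"       => 10
ZSCORE myzset "missing" => nil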
master <-> slave replication works.