diff --git a/doc/QuickStart.html b/doc/QuickStart.html
new file mode 100644
index 00000000..be0e4424
--- /dev/null
+++ b/doc/QuickStart.html
@@ -0,0 +1,51 @@
+git clone git://github.com/antirez/redis.git
+tar xvzf redis-1.0.0.tar.gz
+cd redis-1.0.0
+make
+To check that the Redis server is working correctly on your computer, make sure to run
make test
and check that all tests pass. Then start the server:
+./redis-server
+With the default configuration Redis will log to standard output, so you can check what is happening. Later, when you're ready to install Redis in production, you may want to use a configuration file. The
redis.conf
file included in the source code distribution is a good starting point: you should be able to adapt it to your needs without trouble by reading the comments inside the file. To start Redis using a configuration file, just pass the file name as the sole argument when starting the server:
+./redis-server redis.conf
You can start playing with Redis using the redis-cli
utility included in the source distribution (and automatically compiled when you compile Redis). For instance, to set a key and read back its value, use the following:
+./redis-cli set mykey somevalue
+OK
+./redis-cli get mykey
+somevalue
SET foo bar
Redis will store our data permanently, so we can later ask "What is the value stored at key foo?" and Redis will reply with bar:
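For instance, continuing the same session (a sketch, assuming the same redis-cli style reply format as the earlier example):
GET foo
bar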
@@ -242,7 +241,6 @@ Gentle reader, if you reached this point you are already a hero, thank you. Bef
The first thing to do is to hash the key and issue the request to different servers based on the key hash. There are a lot of well known algorithms to do so; for example, check the Redis Ruby client library, which implements consistent hashing. The general idea is that you can turn your key into a number and then take the remainder of the division of this number by the number of servers you have:
server_id = crc32(key) % number_of_servers
This has a lot of problems, since if you add one server you need to move too many keys, and so on, but this is the general idea even if you use a better hashing scheme like consistent hashing.
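A minimal sketch of that rule (Python and the server addresses below are purely illustrative; any client library with a CRC32 function works the same way):
import zlib

# Pick the server that owns a given key: crc32(key) % number_of_servers.
def server_for_key(key, servers):
    server_id = zlib.crc32(key.encode()) % len(servers)
    return servers[server_id]

servers = ["10.0.0.1:6379", "10.0.0.2:6379", "10.0.0.3:6379"]  # hypothetical shard addresses
print(server_for_key("uid:1000:followers", servers))
Adding or removing a server changes the result of the modulo for most keys, which is exactly the rehashing problem the text mentions.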
OK, but are key accesses evenly distributed across the key space? All the user data will be partitioned among the different servers, and there are no inter-key operations used (like SINTER; otherwise you would need to make sure the keys you want to intersect end up on the same server. This is why Redis, unlike memcached, does not force a specific hashing scheme: it's application specific). However, there are keys that are accessed more frequently.
Special keys
For example, every time we post a new message, we need to increment the global:nextPostId
key: a single server will receive a lot of increments. How do we fix this problem? The simplest way is to have a dedicated server just for increments. This is probably overkill unless you have really a lot of traffic. There is another trick: the ID does not really need to be an incremental number, it just needs to be unique. So you can use a random string long enough to be unlikely (almost impossible, if it's md5-sized) to collide, and you are done. We have successfully eliminated our main obstacle to making this really horizontally scalable!
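As a rough sketch of the random-ID trick (Python just for illustration; 16 random bytes mirror the md5-sized string mentioned above):
import secrets

# Generate a unique post ID without any central counter: 128 random bits
# (the size of an md5 digest) rendered as a hex string.
def new_post_id():
    return secrets.token_hex(16)

print(new_post_id())  # e.g. 'f3c0a2d41b7e49c59d2a8b6e7c1d0a93'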
There is another one: global:timeline. There is no real fix for this: if you need to keep something in order, you can either split it among different servers and merge the pieces when you need to read the data back, or keep it ordered under a single key. Again, if you really have that many posts per second, you can use a single server just for this. Remember that with commodity hardware Redis is able to handle 100,000 writes per second, and that's enough even for Twitter, I guess.
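A minimal sketch of the split-and-merge idea, assuming each shard keeps its slice of the timeline sorted newest-first as (timestamp, post_id) pairs; the shard contents below are made up for illustration:
import heapq
from itertools import islice

# Lazily merge per-shard timeline slices (each already sorted newest-first)
# and keep only the newest `count` posts overall.
def merged_timeline(shard_timelines, count=10):
    merged = heapq.merge(*shard_timelines, key=lambda entry: entry[0], reverse=True)
    return list(islice(merged, count))

shard_a = [(1700000300, "post:42"), (1700000100, "post:40")]
shard_b = [(1700000200, "post:41")]
print(merged_timeline([shard_a, shard_b], count=3))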
Please feel free to use the comments below for questions and feedback.
-
master <-> slave replication works.