Redis VS Memcached (slightly better bench)

Sep 21, 2010 15:13

Hello! First read this if you haven't yet.

Comments 4

Use four instances for Redis to maximize throughput. ext_263764 September 22 2010, 07:27:09 UTC
Hello,
I'm not sure why memcached can't saturate all the threads with the async benchmark, but if you want to maximize everything in a test involving multiple processes, you should also run four Redis instances at the same time and point each redis-benchmark at a different instance.
We tried this, and you do get very high numbers for Redis, but it is still flawed because the results become very dependent on which core runs the benchmark and whether it is the same core as the server. A better setup is two boxes linked by gigabit Ethernet: run the N clients on one box and the N server threads (whether a single memcached process with N threads or N Redis processes) on the other.
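In case it helps, here is a minimal sketch of the single-box, multi-instance variant in Python. It assumes redis-server and redis-benchmark are on the PATH and that ports 6380-6383 are free; the port, request, and client counts are placeholders, not the numbers from the original test:

    #!/usr/bin/env python3
    # Sketch: start N Redis instances on separate ports and run one
    # redis-benchmark per instance in parallel (SETs only), then print
    # each benchmark's report so per-instance throughput can be summed.
    import subprocess

    N = 4
    BASE_PORT = 6380  # assumed-free ports; placeholder values

    # Start one daemonized redis-server per port.
    for i in range(N):
        subprocess.check_call(
            ["redis-server", "--port", str(BASE_PORT + i), "--daemonize", "yes"])

    # Launch all the benchmarks at the same time, one per instance.
    benchmarks = [
        subprocess.Popen(
            ["redis-benchmark", "-p", str(BASE_PORT + i),
             "-t", "set", "-n", "100000", "-c", "50"],
            stdout=subprocess.PIPE)
        for i in range(N)
    ]

    # Wait for completion and show each report.
    for i, proc in enumerate(benchmarks):
        out, _ = proc.communicate()
        print("--- instance on port %d ---" % (BASE_PORT + i))
        print(out.decode())

Each benchmark talks to its own Redis instance, so there is no shared lock or socket to contend on; whether the client/server pairs actually land on distinct cores still depends on the scheduler unless you pin them explicitly, which is exactly why the two-box setup gives cleaner numbers.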


Just verified ext_263764 September 22 2010, 09:11:49 UTC
Hello again,

OK, just verified: using N servers and M instances of the benchmark, where you have M+N different cores (8 in your box, so you can use 4 servers and 4 benchmarks), you'll get 100k requests/s per instance, for a total of 400k requests/s.

This is for *SETs*; I did not try GETs. I think this shows how important it is to have a design with no contention, and this is why Redis is single threaded.


jayp39 September 27 2010, 02:02:37 UTC
Another thing I haven't seen taken into account or mentioned anywhere is that Redis doesn't have an LRU-style eviction algorithm like memcached does: Redis will only discard data once it expires. That means that if you want to use Redis like memcached, you have to give every key an expiration short enough to keep Redis from exceeding memory, and entries will be discarded as they expire without regard for popularity.
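As a rough illustration of that workaround, here is a minimal sketch assuming the redis-py client; the key name and the 300-second TTL are placeholder choices, not recommendations:

    # Sketch: emulating memcached-style caching with Redis by giving every
    # key a TTL, since (as of 2010) Redis only discards data once it expires.
    # Assumes the redis-py client; key name and TTL are placeholders.
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def cache_set(key, value, ttl=300):
        # SETEX stores the value and its expiration in one command.
        r.setex(key, ttl, value)

    def cache_get(key):
        # Returns None once the key has expired and been removed.
        return r.get(key)

    cache_set("page:/home", "<html>...</html>")
    print(cache_get("page:/home"))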

In practice Redis would be a poor substitute for memcached for any site with a large amount of cacheable data, because memcached makes more efficient use of available memory: you can keep issuing SETs without worrying about running out of memory, and it automatically keeps the hottest content in memory while discarding unpopular content.

As you said, ultimately we are comparing two products designed for different purposes.


Denis TRUFFAUT ext_965543 January 3 2012, 21:10:01 UTC
The last graph's legend reads '4 parallel' when it should read 'no parallelization'.

That's a bit confusing given the text.

BTW, excellent benchmark :)
