From 02fdd5ab4c8691156c150bc3948cbebbe8b208ad Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 28 Apr 2009 16:33:35 +0200 Subject: [PATCH] GETSET command doc added --- Changelog | 3 +++ TODO | 1 - doc/CommandReference.html | 2 +- doc/FAQ.html | 4 ++-- doc/GetsetCommand.html | 40 +++++++++++++++++++++++++++++++++++++++ doc/README.html | 10 ++++++---- 6 files changed, 52 insertions(+), 8 deletions(-) create mode 100644 doc/GetsetCommand.html diff --git a/Changelog b/Changelog index 50f10d2c5..bdd99731f 100644 --- a/Changelog +++ b/Changelog @@ -1,3 +1,6 @@ +2009-04-28 GETSET tests +2009-04-28 GETSET implemented +2009-04-27 ability to specify a different file name for the DB 2009-04-27 log file parsing code improved a bit 2009-04-27 bgsave_in_progress field in INFO output 2009-04-27 INCRBY/DECRBY now support 64bit increments, with tests diff --git a/TODO b/TODO index 9b2fb3535..bb162494b 100644 --- a/TODO +++ b/TODO @@ -3,7 +3,6 @@ BEFORE REDIS 1.0.0-rc1 - What happens if the saving child gets killed instead to end normally? Handle this. - Make sinterstore / unionstore / sdiffstore returning the cardinality of the resulting set. - Remove max number of args limit -- GETSET - network layer stresser in test in demo, make sure to set/get random streams of data and check that what we read back is byte-by-byte the same. - maxclients directive - check 'server.dirty' everywere diff --git a/doc/CommandReference.html b/doc/CommandReference.html index 791278369..5ade73b96 100644 --- a/doc/CommandReference.html +++ b/doc/CommandReference.html @@ -27,7 +27,7 @@

Redis Command Reference

Every command name links to a specific wiki page describing the behavior of the command.

Connection handling

-

Commands operating on string values

+

Commands operating on string values

Commands operating on the key space

Commands operating on lists

Commands operating on sets

diff --git a/doc/FAQ.html b/doc/FAQ.html index 87be5de59..84320715e 100644 --- a/doc/FAQ.html +++ b/doc/FAQ.html @@ -16,7 +16,7 @@

FAQ

@@ -34,7 +34,7 @@ So Redis offers more features:

  • Keys can store different data types
    • We wrote a simple Twitter Clone using just Redis as database. Download the source code from the download section and imagine to write it with a plain key-value DB without support for lists and sets... it's much harder.
    • Multiple DBs. Using the SELECT command the client can select different datasets. This is useful because Redis provides a MOVE atomic primitive that moves a key from one DB to another; if the target DB already contains such a key it returns an error. This basically gives you a way to perform locking in distributed processing (a minimal sketch follows this list).
    • So what is Redis really about? The user interface with the programmer. Redis aims to export to the programmer the right tools to model a wide range of problems. Sets, Lists with O(1) push operation, lrange and ltrim, server-side fast intersection between sets, are primitives that allow you to model complex problems with a key-value database.
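A minimal sketch of that MOVE-based locking idea, assuming the redis-py Python client (any client exposing MOVE works the same way); the DB numbers and the job_key name are purely illustrative:

```python
import redis

r = redis.Redis(db=0)  # hypothetical setup: unclaimed job keys live in DB 0

def try_claim(job_key):
    # MOVE is atomic and fails if the key already exists in the target DB,
    # so only one worker can successfully move a given job key into DB 1
    # and thereby "own" it; no explicit lock is required.
    return bool(r.move(job_key, 1))
```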
    -


    Isn't this key-value thing just hype?

    I imagine key-value DBs, in the short term future, to be used like you use memory in a program, with lists, hashes, and so on. With Redis it's like this, but this special kind of memory containing your data structures is shared, atomic, persistent.

    When we write code it is obvious, when we take data in memory, to use the most sensible data structure for the work, right? Incredibly, when data is put inside a relational DB this is no longer true, and we create an absurd data model even if our need is to put data and get this data back in the same order we put it inside (an ORDER BY is required when the data should be already sorted. Strange, don't you think?).

    Key-value DBs bring this back home: create sensible data models and use the right data structures for the problem we are trying to solve.

    Can I backup a Redis DB while the server is working?

    Yes you can. When Redis saves the DB it actually creates a temp file, then uses rename(2) to move that temp file to the destination file name. So even while the server is working it is safe to save the database file just with the cp unix command. Note that you can use master-slave replication in order to have redundancy of data, but if all you need is backups, cp or scp will do the work pretty well.
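A minimal backup sketch in Python that does the same thing as cp; the dump file path and the destination are assumptions to adapt to your setup:

```python
import shutil
import time

DUMP = "/var/lib/redis/dump.rdb"  # assumed location of the Redis dump file
dest = "/backups/dump-%s.rdb" % time.strftime("%Y%m%d-%H%M%S")

# Safe even while the server is running: Redis writes to a temp file and
# rename(2)s it into place, so the dump on disk is always a complete snapshot.
shutil.copyfile(DUMP, dest)
print("backup written to", dest)
```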

    What's the Redis memory footprint?

    Worst case scenario: 1 Million keys with the key being the natural numbers from 0 to 999999 and the string "Hello World" as value use 100MB on my Intel macbook (32bit). Note that the same data stored linearly in a unique string takes something like 16MB; this is the norm because with small keys and values there is a lot of overhead. Memcached will perform similarly.

    With large keys/values the ratio is much better of course.

    64 bit systems will use much more memory than 32 bit systems to store the same keys, especially if the keys and values are small. This is because pointers take 8 bytes in 64 bit systems. But of course the advantage is that you can have a lot of memory in 64 bit systems, so to run large Redis servers a 64 bit system is more or less required.

    I like Redis high level operations and features, but I don't like that it takes everything in memory and that I can't have a dataset larger than memory. Plans to change this?

    The whole key-value hype started for a reason: performance. Redis takes the whole dataset in memory and writes asynchronously on disk in order to be very fast, so you have the best of both worlds: hyper-speed and persistence of data. The price to pay is exactly this: the dataset must fit in your computer's RAM.

    If the data is larger than memory, and this data is stored on disk, what happens is that the bottleneck of the disk I/O speed will start to ruin the performance. Maybe not in benchmarks, but once you have real load from multiple clients with distributed key accesses the data must come from disk, and the disk is damn slow. Moreover, Redis supports higher level data structures than plain values, and implementing these things on disk is even slower.

    Redis will always continue to hold the whole dataset in memory because these days scalability requires using RAM as storage media, and RAM is getting cheaper and cheaper. Today it is common for an entry level server to have 16 GB of RAM! And in the 64-bit era there are no longer limits to the amount of RAM you can have in theory.

    Ok but I absolutely need to have a DB larger than memory, still I need the Redis features

    You may try to load a dataset larger than your memory in Redis and see what happens. Basically, if you are using a modern Operating System, and you have a lot of data in the DB that is rarely accessed, the OS's virtual memory implementation will try to swap rarely used pages of memory to the disk, recalling these pages only when they are needed. If you have many large values that are rarely used this will work. If your DB is big because you have tons of little values accessed at random without a specific pattern this will not work (at low level a page is usually 4096 bytes, and different keys/values can be stored in a single page; the OS can't swap a page to disk if it contains even a few frequently used keys).

    Another possible solution is to use both MySQL and Redis at the same time: keep the state in Redis, together with all the things that get accessed very frequently: user auth tokens, Redis Lists with chronologically ordered IDs of the last N comments, N posts, and so on. Then use MySQL as a simple storage engine for larger data: just create a table with an auto-incrementing ID as primary key and a large BLOB field as data field, and access MySQL data only by primary key (the ID). The application will run the high traffic queries against Redis, but when it needs the big data it will ask MySQL for the specific resource IDs.
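A hedged sketch of this split, using the redis-py Python client (an assumption; any client works) and a hypothetical fetch_post_from_mysql() helper standing in for the SELECT-by-primary-key on the BLOB table:

```python
import redis

r = redis.Redis()  # assumed local Redis server holding the "hot" state

def fetch_post_from_mysql(post_id):
    # hypothetical helper: SELECT data FROM posts WHERE id = <post_id>
    raise NotImplementedError

def add_post(user_id, post_id):
    # the full post body goes to MySQL (not shown); Redis only keeps the
    # chronologically ordered IDs of the last 100 posts for fast access
    key = "posts:%s" % user_id
    r.lpush(key, post_id)
    r.ltrim(key, 0, 99)

def latest_posts(user_id, count=10):
    ids = r.lrange("posts:%s" % user_id, 0, count - 1)
    return [fetch_post_from_mysql(post_id) for post_id in ids]
```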

    I have an empty Redis server but INFO and logs are reporting megabytes of memory in use!

    This may happen and it's perfectly ok. Redis objects are small C structures that are allocated and freed a lot of times. This costs a lot of CPU, so instead of being freed, released objects are put into a free list and reused when needed. This memory is taken exactly by these free objects ready to be reused.

    What happens if Redis runs out of memory?

    With modern operating systems malloc() returning NULL is not common; usually the server will start swapping and Redis performance will be disastrous, so you'll know it's time to use more Redis servers or get more RAM.

    However it is planned to add a configuration directive to tell Redis to stop accepting queries, and instead to SAVE the latest data and quit, if it is using more than a given amount of memory. Also the new INFO command (a work in progress these days) will report the amount of memory Redis is using, so you can write scripts that monitor your Redis servers checking for critical conditions.

    Update: redis SVN is able to know how much memory it is using and report it via the INFO command.
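A small monitoring sketch along these lines, assuming the redis-py Python client (whose info() call returns the parsed INFO reply as a dictionary); the 2 GB threshold is just an arbitrary example:

```python
import redis

r = redis.Redis()
info = r.info()  # parsed INFO reply, including the used_memory field

used = int(info.get("used_memory", 0))
LIMIT = 2 * 1024 ** 3  # arbitrary alert threshold: 2 GB

if used > LIMIT:
    print("WARNING: Redis is using %d bytes of memory" % used)
```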

    How much time does it take to load a big database at server startup?

    Just an example on normal hardware: It takes about 45 seconds to restore a 2 GB database on a fairly standard system, no RAID. This can give you some kind of feeling about the order of magnitude of the time needed to load data when you restart the server.

    What does Redis actually mean?

    Redis means two things:
    • it's a joke on the word Redistribute (instead of using just a Relational DB, redistribute your workload among Redis servers)
    • it means REmote DIctionary Server

    Why did you start the Redis project?

    In order to scale LLOOGG. But after I got the basic server working I liked the idea to share the work with other guys, and Redis was turned into an open source project. diff --git a/doc/GetsetCommand.html b/doc/GetsetCommand.html new file mode 100644 index 000000000..e1d8d5386 --- /dev/null +++ b/doc/GetsetCommand.html @@ -0,0 +1,40 @@ + + + + + + + +
    + + + +
    +
    + +GetsetCommand: Contents
      GETSET _key_ _value_
        Return value
        Design patterns
        See also +
    + +

    GetsetCommand

    + +
    + +
    + +
    +

    GETSET _key_ _value_

    +Time complexity: O(1)
    GETSET is an atomic "set this value and return the old value" command. Set key to the string value and return the old value stored at key. The string can't be longer than 1073741824 bytes (1 GB).
    +
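For example, with the redis-py Python client (an assumption; any client exposing GETSET behaves the same way):

```python
import redis

r = redis.Redis()             # assumes a local server on the default port

r.set("foo", "bar")
old = r.getset("foo", "baz")  # atomically writes "baz" and returns the previous value
print(old)                    # the old value, "bar" (a bulk reply)
print(r.get("foo"))           # now "baz"
```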

    Return value

    Bulk reply

    Design patterns

    GETSET can be used together with INCR for counting with atomic reset when a given condition arises. For example a process may call INCR against the key mycounter every time some event occurs, but from time to time we need to get the value of the counter and reset it to zero atomically using GETSET mycounter 0.
    +
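A sketch of that pattern, again assuming the redis-py Python client; the key and function names are only illustrative:

```python
import redis

r = redis.Redis()

def record_event():
    r.incr("mycounter")  # INCR is atomic, so many processes can call this safely

def harvest_and_reset():
    # GETSET reads and resets in a single atomic step, so no increment
    # can be lost between the read and the reset.
    old = r.getset("mycounter", 0)
    return int(old or 0)
```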

    See also

    + +
    + +
    +
    + + + diff --git a/doc/README.html b/doc/README.html index 402038d7f..ed6d56ac5 100644 --- a/doc/README.html +++ b/doc/README.html @@ -29,11 +29,13 @@

    Introduction

    Redis is a database. To be more specific, Redis is a very simple database implementing a dictionary where keys are associated with values. For example I can set the key "surname_1992" to the string "Smith".
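For instance, with the redis-py Python client (just one possible client; the protocol is easy to speak from any language):

```python
import redis

r = redis.Redis()               # assumes a server running on localhost
r.set("surname_1992", "Smith")  # associate the key with a string value
print(r.get("surname_1992"))    # returns the stored value, "Smith"
```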

    Redis takes the whole dataset in memory, but the dataset is persistent -since from time to time Redis writes a dump of the dataset on disk asynchronously. The dump is loaded every time the server is restarted. This means that if a system crash occurs the last few queries can get lost (that is acceptable in many applications), so we supported master-slave replication from the early days.

    Beyond key-value databases

    In most key-value databases keys and values are simple strings. In Redis keys are just strings too, but the associated values can be Strings, Lists and Sets, and there are commands to perform complex atomic operations against this data types, so you can think at Redis as a data structures server.

    For example you can append elements to a list stored at the key "mylist" using the LPUSH or RPUSH operation in O(1). Later you'll be able to get a range of elements with LRANGE or trim the list with LTRIM. Sets are very flexible too, it is possible to add and remove elements from Sets (unsorted collections of strings), and then ask for server-side intersection of Sets.

    All this features, the support for sorting Lists and Sets, allow to use Redis as the sole DB for your scalable application without the need of any relational database. We wrote a simple Twitter clone in PHP + Redis to show a real world example, the link points to an article explaining the design and internals in very simple words.

    What are the differences between Redis and Memcached?

    In the following ways:

    • Memcached is not persistent, it just holds everything in memory without saving since its main goal is to be used as a cache. Redis instead can be used as the main DB for the application. We wrote a simple Twitter clone using only Redis as database.
    +since from time to time Redis writes a dump of the dataset on disk asynchronously. The dump is loaded every time the server is restarted. This means that if a system crash occurs the last few queries can get lost (that is acceptable in many applications). Redis has supported master-slave replication from the early days in order to improve performance and reliability.

    Beyond key-value databases

    In most key-value databases keys and values are simple strings. In Redis keys are just strings too, but the associated values can be Strings, Lists and Sets, and there are commands to perform complex atomic operations against these data types, so you can think of Redis as a data structures server.

    For example you can append elements to a list stored at the key "mylist" using the LPUSH or RPUSH operation in O(1). Later you'll be able to get a range of elements with LRANGE or trim the list with LTRIM. Sets are very flexible too: it is possible to add and remove elements from Sets (unsorted collections of strings), and then ask for server-side intersection, union and difference of Sets.
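A short sketch of these operations with the redis-py Python client (the key names are just examples):

```python
import redis

r = redis.Redis()

# O(1) pushes, then range and trim on the List
r.rpush("mylist", "a")
r.rpush("mylist", "b")
r.rpush("mylist", "c")
print(r.lrange("mylist", 0, -1))     # the three elements a, b, c
r.ltrim("mylist", 0, 1)              # keep only the first two elements

# Sets with a server-side intersection
r.sadd("tags:1", "redis")
r.sadd("tags:1", "db")
r.sadd("tags:2", "redis")
r.sadd("tags:2", "cache")
print(r.sinter("tags:1", "tags:2"))  # the common member, redis
```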

    All these features, plus the support for sorting Lists and Sets, allow you to use Redis as the sole DB for your scalable application without the need of any relational database. We wrote a simple Twitter clone in PHP + Redis to show a real world example; the link points to an article explaining the design and internals in very simple words.

    What are the differences between Redis and Memcached?

    In the following ways:

    • Memcached is not persistent, it just holds everything in memory without saving since its main goal is to be used as a cache. Redis instead can be used as the main DB for the application. We wrote a simple Twitter clone using only Redis as database.
    • Like memcached Redis uses a key-value model, but while keys can just be strings, values in Redis can be lists and sets, and complex operations like intersections, set/get n-th element of lists, pop/push of elements, can be performed against sets and lists. It is possible to use lists as message queues.
    -

    What are the differences between Redis and Tokyo Cabinet / Tyrant?

    Redis and Tokyo can be used for the same applications, but actually they are ery different beasts:

    • Tokyo is purely key-value, everything beyond key-value storing of strings is delegated to an embedded Lua interpreter. AFAIK there is no way to guarantee atomicity of operations like pushing into a list, and every time you want to have data structures inside a Tokyo key you have to perform some kind of object serialization/de-serialization.
    -
    • Tokyo stores data on disk, synchronously, this means you can have datasets bigger than memory, but that under load, like every kind of process that relay on the disk I/O for speed, the performances may start to degrade. With Redis you don't have this problems but you have another problem: the dataset in every single server must fit in your memory.
    -
    • Redis is generally an higher level beast in the operations supported. Things like SORTing, Server-side set-intersections, can't be done with Tokyo. But Redis is not an on-disk DB engine like Tokyo: the latter can be used as a fast DB engine in your C project without the networking overhead just linking to the library. Still remember that in many scalable applications you need multiple servers talking with multiple servers, so the server-client model is almost always needed.
    +

    What are the differences between Redis and Tokyo Cabinet / Tyrant?

    Redis and Tokyo Cabinet can be used for the same applications, but actually they are very different beasts:

    • Tokyo Cabinet writes synchronously on disk, Redis takes the whole dataset on memory and writes on disk asynchronously. Tokyo Cabinet is safer, Redis faster (but note that Redis supports master-slave replication that is trivial to setup, so you are safe anyway if you want a setup where data can't be lost even after a disaster).
    +
    • Redis supports higher level operations and data structures. Tokyo Cabinet supports a kind of database that is able to organize data into rows with named fields (in a way very similar to Berkeley DB), but it can't do the things Redis is able to do server side against Lists and Sets: pushing or popping from Lists in an atomic way, in O(1) time complexity, server side Set intersections, SORTing of schema free data in complex ways (by the way, TC supports sorting in the table-based database format).
    +
    • Tokyo Cabinet does not implement a networking layer. You have to use a networking layer called Tokyo Tyrant that interfaces to Tokyo Cabinet so you can talk to Tokyo Cabinet in a client-server fashion. In Redis the networking support is built-in inside the server, and is basically the only interface between the external world and the dataset.
    +
    • Redis is reported to be much faster, especially if you plan to access Tokyo Cabinet via Tokyo Tyrant. From the informal numbers I saw around on the net you can expect Redis to be 10 times faster than Tokyo Cabinet + Tyrant.
    +
    • Redis is not an on-disk DB engine like Tokyo: the latter can be used as a fast DB engine in your C project without the networking overhead just linking to the library. Still remember that in many scalable applications you need multiple servers talking with multiple clients, so the client-server model is almost always needed.

    Does Redis support locking?

    No, the idea is to provide atomic primitives in order to let the programmer use Redis with lock-free algorithms. For example imagine you have 10 computers and 1 Redis server. You want to count words in a very large text.
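A hedged sketch of the idea with the redis-py Python client: every worker processes its own slice of the text and relies on the atomicity of INCR, so no locking is needed anywhere.

```python
import redis

r = redis.Redis()  # the single shared Redis server

def count_words(text_slice):
    # run by each of the 10 workers on its own chunk of the text;
    # INCR is atomic on the server, so concurrent workers never clash
    for word in text_slice.split():
        r.incr("wordcount:" + word)

count_words("the quick brown fox jumps over the lazy dog the")
print(r.get("wordcount:the"))  # 3 from this chunk, plus whatever other workers added
```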