Commit Graph

46 Commits

Author SHA1 Message Date
Moti Cohen 4df037962d
Change FLUSHALL/FLUSHDB SYNC to run as blocking ASYNC (#13167)
# Overview
Users utilize the `FLUSHDB SYNC` and `FLUSHALL SYNC` commands for a variety of 
reasons. The main issue with this command is that if the database becomes 
substantial in size, the server will be unresponsive for an extended period. 
Other than freezing application traffic, this may also lead some clients to make
incorrect judgments about the server's availability. For instance, a watchdog may 
erroneously decide to terminate the process, resulting in potential adverse 
outcomes. While a `FLUSH* ASYNC` can address these issues, it might not be used 
for two reasons: firstly, it's not the default, and secondly, in some cases, the 
client issuing the flush wants to wait for its completion before repopulating the 
database.

Between the option of triggering FLUSH* asynchronously in the background without 
indication for completion versus running it synchronously in the foreground by 
the main thread, there is another more appealing option. We can block the
client that requested the flush, execute the flush command in the background, and 
once done, unblock the client and return notification for completion. This approach 
ensures the server remains responsive to other clients, and the blocked client 
receives the expected response only after the flush operation has been successfully 
carried out.

# Implementation details
Instead of defining yet another flavor of the flush command, we can modify
`FLUSHALL SYNC` and `FLUSHDB SYNC` to always run in this new mode.

## Extending BIO Threads capabilities
Today, jobs that are carried out by BIO threads have no way to indicate
completion to the main thread. We can add this infrastructure by introducing
an additional dummy job, coined a completion-job, that will eventually be written
by the BIO threads to a response-queue. The main thread will consume items
from the response-queue and call the provided callback function of each
completion-job.
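
A minimal sketch of that hand-off, assuming a simple mutex-protected queue; all names here (`bioCompletionJob`, `bioPushCompletion`, `mainThreadDrainCompletions`) are illustrative, not the actual Redis identifiers.
```c
#include <pthread.h>
#include <stdlib.h>

typedef void (*completion_cb)(void *arg);

typedef struct bioCompletionJob {
    completion_cb cb;               /* Callback to run on the main thread.   */
    void *arg;                      /* E.g. the blocked client to unblock.   */
    struct bioCompletionJob *next;
} bioCompletionJob;

static pthread_mutex_t resp_lock = PTHREAD_MUTEX_INITIALIZER;
static bioCompletionJob *resp_head = NULL;   /* The "response-queue". */

/* Called by a BIO thread once the completion-job reaches the front of its queue. */
void bioPushCompletion(completion_cb cb, void *arg) {
    bioCompletionJob *job = malloc(sizeof(*job));
    job->cb = cb; job->arg = arg;
    pthread_mutex_lock(&resp_lock);
    job->next = resp_head;
    resp_head = job;
    pthread_mutex_unlock(&resp_lock);
}

/* Called periodically by the main thread to consume completion-jobs and run
 * their callbacks (this is where a blocked client would be unblocked). */
void mainThreadDrainCompletions(void) {
    pthread_mutex_lock(&resp_lock);
    bioCompletionJob *job = resp_head;
    resp_head = NULL;
    pthread_mutex_unlock(&resp_lock);
    while (job) {
        bioCompletionJob *next = job->next;
        job->cb(job->arg);
        free(job);
        job = next;
    }
}
```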

## FLUSH* SYNC to run as blocking ASYNC
The `FLUSH* SYNC` commands will be modified to create one or more async jobs to flush
the DB(s) and afterward push an additional completion-job request. By sending the
completion-job request only at the end, the main thread will be called back only
after all the preceding jobs have completed their work in the background. During that
time, the client that issued the command is suspended and marked as `BLOCKED_LAZYFREE`,
whereas any other client can keep communicating with the server without any
issue.
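
A minimal sketch of that command path, reusing the completion-queue sketch above; the client type, `flushallSyncCommandSketch`, and the commented-out calls are illustrative stand-ins, since the surrounding server code is not reproduced here.
```c
typedef struct client client;               /* Opaque stand-in for the client. */
void bioPushCompletion(void (*cb)(void *), void *arg);  /* From the sketch above. */

/* Runs on the main thread once every preceding async flush job has finished:
 * this is where the blocked client would be unblocked and get its +OK. */
static void unblockFlushClient(void *arg) {
    client *c = arg;
    /* unblockClient(c); addReply(c, shared.ok); */
    (void)c;
}

void flushallSyncCommandSketch(client *c) {
    /* 1. Mark the caller as blocked (BLOCKED_LAZYFREE in the PR).            */
    /* 2. Queue one async job per DB to free its contents in a BIO thread.    */
    /* 3. Queue the completion-job last, so its callback runs only after all
     *    the preceding flush jobs have completed in the background.          */
    bioPushCompletion(unblockFlushClient, c);
}
```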
2024-04-02 15:09:52 +03:00
Binbin e04d41d78d
Prevent lua error_reply abuse from causing errorstats to become larger (#13141)
Users who abuse lua error_reply will generate a new error object on each
error call, which can make server.errors get bigger and bigger. This will
cause the server to block when calling INFO (we also return errorstats
by default).

To prevent the damage it can cause, when a misuse is detected, we will
print a warning log and disable the errorstats to avoid adding more new
errors. It can be re-enabled via CONFIG RESETSTAT.

Because server.errors may be very large (it may be better now since we
have the limit), config resetstat may block for a while. So in
resetErrorTableStats, we will try to lazyfree server.errors.
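
A minimal sketch of the guard described above, under the assumption that misuse is detected by the number of distinct error entries crossing a threshold; the threshold value, the `errstats` holder, and `incrementErrorCount()` are illustrative names, not the actual Redis code.
```c
#define ERROR_STATS_LIMIT 128           /* Illustrative abuse threshold. */

static struct {
    int errors_enabled;                 /* Cleared when abuse is detected;
                                         * re-enabled by CONFIG RESETSTAT. */
    unsigned long nerrors;              /* Distinct entries in server.errors. */
} errstats = { 1, 0 };

void incrementErrorCount(const char *err) {
    if (!errstats.errors_enabled) return;        /* Stats disabled: drop it. */
    if (errstats.nerrors >= ERROR_STATS_LIMIT) {
        /* Log a warning once and stop collecting new error names. */
        errstats.errors_enabled = 0;
        return;
    }
    /* ... look up / create the entry for 'err' in server.errors ... */
    (void)err;
    errstats.nerrors++;
}
```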

See the related discussion at the end of #8217.
2024-03-19 08:18:22 +02:00
Binbin 7b070423b8
Fix dictionary use-after-free in active expire and make kvstore iter to respect EMPTY flag (#13135)
After #13072, there is a use-after-free error. In expireScanCallback, we
may delete the dict, and then in dictScan we continue to use the dict,
e.g. calling `dictResumeRehashing(d)` at the end, which caused an error.

In this PR, in freeDictIfNeeded, if the dict's pauserehash is set, we don't
delete the dict yet; then, when the scan returns, we try to delete it again.

At the same time, we noticed that there are similar problems with iterators.
We may also delete elements during iteration, causing the dict
to be deleted, so the iterator-related parts of the PR have also been modified.
dictResetIterator was also missing from the previous kvstoreIteratorNextDict;
we currently have no scenario in which elements are deleted during a kvstoreIterator
run, but we handle it together to avoid future problems. Added some simple
tests to verify the changes.
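
A minimal sketch of the deferred-delete rule described above, assuming a simplified `dict` with a `pauserehash` counter; `freeDictIfNeeded` is named after the function in the text, but the body below is illustrative, not the real kvstore code.
```c
typedef struct dict { int pauserehash; unsigned long used; } dict;

static int dictIsEmpty(dict *d) { return d->used == 0; }

/* Returns 1 if the dict was freed, 0 if deletion was deferred. */
int freeDictIfNeeded(dict **slot) {
    dict *d = *slot;
    if (d == NULL || !dictIsEmpty(d)) return 0;
    if (d->pauserehash != 0) return 0;   /* A scan/iterator is still using it:
                                          * try again when it returns.        */
    /* dictRelease(d); */
    *slot = NULL;
    return 1;
}

/* Call site sketch: the expire-cycle scan callback may empty the dict, but
 * the actual free happens here, after dictScan() has resumed rehashing. */
void expireScanDone(dict **slot) {
    freeDictIfNeeded(slot);
}
```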

In addition, the modification in #13072 omitted initTempDb and emptyDbAsync,
and they were also added. This PR also removes the slow flag from the expire
test (consumes 1.3s) so that problems can be found in CI in the future.
2024-03-18 17:41:54 +02:00
Binbin ad28d222ed
Lua eval scripts first in first out LRU eviction (#13108)
In some cases, users will abuse lua eval. Each EVAL call generates
a new lua script, which is added to the lua interpreter and cached
to redis-server, consuming a large amount of memory over time.

Since EVAL is mostly the one that abuses the lua cache, and these
won't have pipeline issues (i.e. the script won't disappear unexpectedly
and cause errors like it would with SCRIPT LOAD and EVALSHA),
we implement a plain FIFO LRU eviction only for these (not for
scripts loaded with SCRIPT LOAD).

### Implementation notes:
When not abused we'll probably have less than 100 scripts, and when
abused we'll have many thousands, so we use a hard coded value of 500
scripts. And since we don't have many scripts, then unlike
keys, we don't need to worry about the memory usage of keeping a true
sorted LRU linked list. We compute the SHA of each script anyway
and put the script in a dict; we can store a listNode there and use
it for quick removal and re-insertion into an LRU list each time the
script is used.
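
A minimal sketch of that bookkeeping: an LRU list whose node is reachable from the dict value, so touching a script is an O(1) unlink plus tail re-insert and eviction pops from the head. The 500 limit comes from the description above; the types, field names, and functions are simplified stand-ins for the real dict/adlist code.
```c
#define LRU_LIST_MAX_SIZE 500

typedef struct scriptEntry {
    char sha[41];                       /* SHA1 hex digest of the script.     */
    struct scriptEntry *prev, *next;    /* Node in the doubly linked LRU list. */
} scriptEntry;

typedef struct { scriptEntry *head, *tail; unsigned long len; } scriptLru;

static void lruUnlink(scriptLru *l, scriptEntry *e) {
    if (e->prev) e->prev->next = e->next; else l->head = e->next;
    if (e->next) e->next->prev = e->prev; else l->tail = e->prev;
    l->len--;
}

static void lruAppend(scriptLru *l, scriptEntry *e) {
    e->prev = l->tail; e->next = NULL;
    if (l->tail) l->tail->next = e; else l->head = e;
    l->tail = e;
    l->len++;
}

/* On every EVAL/EVALSHA hit: move the script to the most-recent end. */
void scriptTouched(scriptLru *l, scriptEntry *e) {
    lruUnlink(l, e);
    lruAppend(l, e);
}

/* Before caching a new EVAL script: if at capacity, return the oldest entry,
 * which would also be removed from the script dict and the Lua interpreter,
 * and evicted_scripts would be incremented. */
scriptEntry *scriptMaybeEvict(scriptLru *l) {
    if (l->len < LRU_LIST_MAX_SIZE) return NULL;
    scriptEntry *victim = l->head;
    lruUnlink(l, victim);
    return victim;
}
```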

### New interfaces:
At the same time, a new `evicted_scripts` field is added to
INFO, which represents the number of evicted eval scripts. Users
can check it to see if they are abusing EVAL.

### benchmark:
`./src/redis-benchmark -P 10 -n 1000000 -r 10000000000 eval "return
__rand_int__" 0`

This simple abuse-of-eval benchmark creates 1 million EVAL
scripts. The performance has been improved by 50%, and the max latency
has dropped from 500ms to 13ms (this may be caused by table expansion
inside Lua when the number of scripts is large). And in the INFO memory,
it used to consume 120MB (server cache) + 310MB (lua engine), but now
it only consumes 70KB (server cache) + 210KB (lua_engine) because of
the scripts eviction.

For non-abusive case of about 100 EVAL scripts, there's no noticeable
change in performance or memory usage.

### unlikely potentially breaking change:
In theory, a user can load a
script with EVAL and then use EVALSHA to call it (by calculating the
SHA1 value on the client side); if we read the docs
carefully we might realize this is a valid scenario, but we suppose it's
extremely rare. So it may happen that EVALSHA acts on a script created
by EVAL, the script is evicted, and EVALSHA returns a NOSCRIPT error.
That is, if you have more than 500 scripts being used in the same
transaction / pipeline.

This solves the second point in #13102.
2024-03-13 08:27:41 +02:00
Binbin a7abc2f067
SCRIPT FLUSH run truly async, close lua interpreter in bio (#13087)
Even if we have SCRIPT FLUSH ASYNC now, when there are a lot of
lua scripts, SCRIPT FLUSH ASYNC will still block the main thread.
This is because lua_close is executed in the main thread, and the lua
heap needs to release a lot of memory.

In this PR, we take the current lua instance on lctx.lua and call
lua_close on it in a background thread, to close it in an async way.
This is MeirShpilraien's idea.
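
A minimal sketch of the idea, using a plain pthread instead of the real BIO machinery and assuming Lua 5.1 headers; `scriptFlushAsyncSketch` and `closeLuaInBackground` are illustrative names.
```c
#include <pthread.h>
#include <lua.h>
#include <lauxlib.h>

static void *closeLuaInBackground(void *arg) {
    lua_State *old = arg;
    lua_close(old);              /* Potentially slow: runs off the main thread. */
    return NULL;
}

void scriptFlushAsyncSketch(lua_State **lctx_lua) {
    lua_State *old = *lctx_lua;  /* Detach the old interpreter...               */
    *lctx_lua = luaL_newstate(); /* ...and continue with a fresh one right away. */
    pthread_t tid;
    pthread_create(&tid, NULL, closeLuaInBackground, old);
    pthread_detach(tid);         /* The real code hands this to a BIO job instead. */
}
```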
2024-02-28 17:57:29 +02:00
guybe7 8cd62f82ca
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
a class that represents an array of dictionaries.

# Motivation
The main motivation is code cleanliness; the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
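
A minimal sketch of what such a self-contained structure could look like; the field and function names below are illustrative, not the actual kvstore.h API.
```c
typedef struct dict dict;

typedef struct kvstore {
    dict **dicts;                   /* One dict per slot (16384 in cluster mode,
                                     * a single dict in standalone mode).       */
    int num_dicts;
    unsigned long long key_count;   /* Aggregate size, kept for O(1) DBSIZE.    */
} kvstore;

/* The db code asks the kvstore for "the dict of slot N" instead of looping
 * over parallel per-slot arrays for keys and expires. */
dict *kvstoreGetDict(kvstore *kvs, int didx) {
    return kvs->dicts[didx];        /* May be NULL until the first key lands. */
}

unsigned long long kvstoreSize(kvstore *kvs) {
    return kvs->key_count;
}
```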

# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.

Worth mentioning that due to this change, the reply of DEBUG
HTSTATS changed in case no keys were ever added to the db.

before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```

after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 17:21:35 +02:00
Binbin 628c0dea1b
Some cleanups around function (#12940)
This PR did some cleanups around function:
- drop the comment about Libraries Ctx, since we do have a comment
  in functionsLibCtx, no need to maintain multiple copies.
- remove outdated comment about the dropped Library description.
- remove unused desc and code vars in functionExtractLibMetaData.
- fix engines_nemory typo, changed it to engines_memory.
- remove outdated comment about FUNCTION CREATE and FUNCTION INFO,
  FUNCTION CREATE was renamed to FUNCTION LOAD.
- Check in initServer whether the return of functionsInit is OK.
2024-01-23 14:26:33 +02:00
zhaozhao.zz d8a21c5767
Unified db rehash method for both standalone and cluster (#12848)
After #11695, we added two functions `rehashingStarted` and
`rehashingCompleted` to the dict structure. We also registered two
handlers for the main database's dict and expire structures. This allows
the main database to record the dict in `rehashing` list when rehashing
starts. Later, in `serverCron`, the `incrementallyRehash` function is
continuously called to perform the rehashing operation. However,
currently, when rehashing is completed, `rehashingCompleted` does not
remove the dict from the `rehashing` list. This results in the
`rehashing` list containing many invalid dicts. Although subsequent cron
checks and removes dicts that don't require rehashing, it is still
inefficient.

This PR implements the functionality to remove the dict from the
`rehashing` list in `rehashingCompleted`. This is achieved by adding
`metadata` to the dict structure, which keeps track of its position in
the `rehashing` list, allowing for quick removal. This approach avoids
storing duplicate dicts in the `rehashing` list.
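
A minimal sketch of that O(1)-removal idea, with a plain doubly linked list standing in for the adlist type; the node returned by `rehashingStarted()` is what would be stored in the dict's metadata, and all names are illustrative.
```c
#include <stdlib.h>

typedef struct dict dict;

/* List node remembered in the dict's metadata so removal is O(1). */
typedef struct rehashNode {
    dict *d;
    struct rehashNode *prev, *next;
} rehashNode;

static rehashNode *rehashing_head = NULL;   /* Server-level 'rehashing' list. */

/* dict callback: rehashing started, link the dict in and return the node
 * for the caller to store in the dict metadata. */
rehashNode *rehashingStarted(dict *d) {
    rehashNode *n = malloc(sizeof(*n));
    n->d = d; n->prev = NULL; n->next = rehashing_head;
    if (rehashing_head) rehashing_head->prev = n;
    rehashing_head = n;
    return n;
}

/* dict callback: rehashing completed, unlink in O(1) using the stored node,
 * so the list never accumulates stale entries. */
void rehashingCompleted(rehashNode *n) {
    if (n->prev) n->prev->next = n->next; else rehashing_head = n->next;
    if (n->next) n->next->prev = n->prev;
    free(n);
}
```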

Additionally, there are other modifications:

1. Whether in standalone or cluster mode, the dict in database is
inserted into the rehashing linked list when rehashing starts. This
eliminates the need to distinguish between standalone and cluster mode
in `incrementallyRehash`. The function only needs to focus on the dicts
in the `rehashing` list that require rehashing.
2. `rehashing` list is moved from per-database to Redis server level.
This decouples `incrementallyRehash` from the database ID, and in
standalone mode, there is no need to iterate over all databases,
avoiding unnecessary access to databases that do not require rehashing.
In the future, even if unsharded-cluster mode supports multiple
databases, there will be no risk involved.
3. The insertion and removal operations of dict structures in the
`rehashing` list are decoupled from `activerehashing` config.
`activerehashing` only controls whether `incrementallyRehash` is
executed in serverCron. There is no need for additional steps when
modifying the `activerehashing` switch, as in #12705.
2023-12-15 10:42:53 +08:00
Vitaly 0270abda82
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot.  The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.

## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also to select a random dictionary first. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree (see the sketch after this list). With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find the slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between the client and the server. This has an interesting side effect: now you'll be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the slot id cached in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly, while loading RDB, it's not enough to know total number of keys (of course we could approximate number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into RDB that contains number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places; in order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of the `key_count`. This way we can keep the DBSIZE operation O(1). The same is kept for O(1) expires computation as well.
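
A minimal sketch of the binary index tree (Fenwick tree) technique referenced in the getRandomKey and iteration points above, assuming 16384 slots; the function names are illustrative, not the actual Redis code. Finding "the slot that holds the k-th key" is a binary search over monotone prefix sums, each prefix sum itself costing O(log n), which gives the O(log^2(slot count)) bound mentioned above.
```c
#define NUM_SLOTS 16384

static long long bit[NUM_SLOTS + 1];   /* 1-based Fenwick tree of key counts. */

/* Add 'delta' keys to 'slot' (called when keys are added or removed). */
void slotCountUpdate(int slot, long long delta) {
    for (int i = slot + 1; i <= NUM_SLOTS; i += i & -i) bit[i] += delta;
}

/* Number of keys in slots [0..slot]. */
long long slotCountPrefix(int slot) {
    long long sum = 0;
    for (int i = slot + 1; i > 0; i -= i & -i) sum += bit[i];
    return sum;
}

/* Smallest slot whose cumulative count is >= k (1-based key index). Used both
 * to pick a random key fairly (k = 1 + random index below the total key count)
 * and to jump over empty slots during iteration. */
int slotForKeyIndex(long long k) {
    int lo = 0, hi = NUM_SLOTS - 1;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (slotCountPrefix(mid) >= k) hi = mid; else lo = mid + 1;
    }
    return lo;
}
```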

## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from us not having to maintain linked lists for keys in a slot, and non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict.

RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.

## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information. 

---------

Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2023-10-14 23:58:26 -07:00
Madelyn Olson 5e3be1be09
Remove prototypes with empty declarations (#12020)
Technically, declaring a prototype with an empty declaration has been deprecated since the early days of C, but we never got a warning for it. C2x will apparently be introducing a breaking change if you are using this type of declarator, so Clang 15 has started issuing a warning with -pedantic. Although not apparently a problem for any of the compilers we build with, it feels like the right thing to do is to properly adhere to the C standard and use (void).
2023-05-02 17:31:32 -07:00
sundb 2168ccc661
Add listpack encoding for list (#11303)
Improve memory efficiency of list keys

## Description of the feature
The new listpack encoding uses the old `list-max-listpack-size` config
to perform the conversion; we can think of it as a node inside a
quicklist, but without the 80 bytes of overhead (internal fragmentation included)
of the quicklist and quicklistNode structs.
For example, a list key with 5 items of 10 chars each now takes 128 bytes
instead of the 208 it used to take.

## Conversion rules
* Convert listpack to quicklist
  When the listpack length or size reaches the `list-max-listpack-size` limit,
  it will be converted to a quicklist.
* Convert quicklist to listpack
  When a quicklist has only one node, and its length or size is reduced to half
  of the `list-max-listpack-size` limit, it will be converted to a listpack.
  This is done to avoid frequent conversions when we add or remove at the bounding size or length (see the sketch below).
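
A minimal sketch of that hysteresis, tracking only the entry count for brevity (the real rule also checks the total byte size); the encoding constants, fields, and `listMaybeConvert()` are illustrative stand-ins.
```c
typedef struct listObject {
    int encoding;          /* Stand-ins for OBJ_ENCODING_LISTPACK/_QUICKLIST. */
    long entries;          /* Current number of elements.                     */
    int single_node;       /* 1 if the quicklist currently has one node.      */
} listObject;

enum { ENC_LISTPACK, ENC_QUICKLIST };

void listMaybeConvert(listObject *o, long list_max_listpack_size) {
    if (o->encoding == ENC_LISTPACK &&
        o->entries >= list_max_listpack_size) {
        o->encoding = ENC_QUICKLIST;          /* Grow: listpack -> quicklist. */
    } else if (o->encoding == ENC_QUICKLIST &&
               o->single_node &&
               o->entries <= list_max_listpack_size / 2) {
        o->encoding = ENC_LISTPACK;           /* Shrink back only at half the
                                               * limit, avoiding flip-flopping
                                               * around the boundary.         */
    }
}
```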
    
## Interface changes
1. add list entry param to listTypeSetIteratorDirection
    When list encoding is listpack, `listTypeIterator->lpi` points to the next entry of the current entry,
    so when changing the direction, we need to use the current node (listTypeEntry->p) to 
    update `listTypeIterator->lpi` to the next node in the reverse direction.

## Benchmark
### Listpack VS Quicklist with one node
* LPUSH - roughly 0.3% improvement
* LRANGE - roughly 13% improvement

### Both are quicklist
* LRANGE - roughly 3% improvement
* LRANGE without pipeline - roughly 3% improvement

From the benchmark results we can see that:
1. When list is quicklist encoding, LRANGE improves performance by <5%.
2. When list is listpack encoding, LRANGE improves performance by ~13%,
   the main enhancement is brought by `addListListpackRangeReply()`.

## Memory usage
1M lists(key:0~key:1000000) with 5 items of 10 chars ("hellohello") each.
shows memory usage down by 35.49%, from 214MB to 138MB.

## Note
1. Add conversion callback to support doing some work before conversion
    Since the quicklist iterator decompresses the current node when it is released, we can 
    no longer decompress the quicklist after we convert the list.
2022-11-16 20:29:46 +02:00
Meir Shpilraien (Spielrein) 885f6b5ceb
Redis Function Libraries (#10004)
# Redis Function Libraries

This PR implements Redis Functions Libraries as described in: https://github.com/redis/redis/issues/9906.

The purpose of libraries is to provide better code sharing between functions by allowing the creation of multiple
functions in a single command. Functions that were created together can safely share code with
each other without worrying about compatibility issues and versioning.

Creating a new library is done using the 'FUNCTION LOAD' command (the full API is described below).

This PR introduces a new struct called libraryInfo, which holds information about a library:
* name - name of the library
* engine - engine used to create the library
* code - library code
* description - library description
* functions - the functions exposed by the library

When Redis gets the `FUNCTION LOAD` command it creates a new empty libraryInfo.
Redis passes the `CODE` to the relevant engine alongside the empty libraryInfo.
As a result, the engine will create one or more functions by calling 'libraryCreateFunction'.
The new function will be added to the newly created libraryInfo. So far, everything is happening
locally on the libraryInfo, so it is easy to abort the operation (in case of an error) by simply
freeing the libraryInfo. After the library info is fully constructed we start the joining phase, in
which we join the new library to the other libraries that currently exist on Redis.
The joining phase makes sure there is no function collision and adds the library to the
librariesCtx (renamed from functionCtx). LibrariesCtx is used all around the code in the exact
same way as functionCtx was used (with respect to RDB loading, replication, ...).
The only difference is that apart from the function dictionary (which maps function name to functionInfo
object), the librariesCtx also contains a libraries dictionary that maps library name to libraryInfo object.
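
A minimal sketch of what a struct along the lines of libraryInfo could look like, based only on the field list above; the field types are simplified (the real code uses sds strings and engine-specific types).
```c
typedef struct dict dict;

typedef struct libraryInfo {
    char *name;          /* Name of the library.                            */
    void *engine;        /* Engine used to create the library.              */
    char *code;          /* Library source code as given to FUNCTION LOAD.  */
    char *description;   /* Optional library description.                   */
    dict *functions;     /* function name -> functionInfo, populated by the
                          * engine via libraryCreateFunction().             */
} libraryInfo;
```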

## New API
### FUNCTION LOAD
`FUNCTION LOAD <ENGINE> <LIBRARY NAME> [REPLACE] [DESCRIPTION <DESCRIPTION>] <CODE>`
Create a new library with the given parameters:
* ENGINE - Engine name to use to create the library.
* LIBRARY NAME - The new library name.
* REPLACE - If the library already exists, replace it.
* DESCRIPTION - Library description.
* CODE - Library code.

Return "OK" on success, or an error in the following cases:
* Library name already taken and REPLACE was not used
* Name collision with another existing library (even if REPLACE was used)
* Library registration failed by the engine (usually compilation error)

## Changed API
### FUNCTION LIST
`FUNCTION LIST [LIBRARYNAME <LIBRARY NAME PATTERN>] [WITHCODE]`
Command was modified to also allow getting the libraries' code (so the `FUNCTION INFO` command is no longer
needed and was removed). In addition, the command gets an optional argument, `LIBRARYNAME`, which allows you to
only get libraries that match the given `LIBRARYNAME` pattern. By default, it returns all libraries.

### INFO MEMORY
Added number of libraries to `INFO MEMORY`

### Commands flags
`DENYOOM` flag was set on `FUNCTION LOAD` and `FUNCTION RESTORE`. We consider those commands
as commands that add new data to the dataset (functions are data) and so we want to disallow
running those commands on OOM.

## Removed API
* FUNCTION CREATE - Decided on https://github.com/redis/redis/issues/9906
* FUNCTION INFO - Decided on https://github.com/redis/redis/issues/9899

## Lua engine changes
When the Lua engine gets the code given to the `FUNCTION LOAD` command, it immediately runs it; we call
this run the loading run. The loading run is not a usual script run: it is not possible to invoke any
Redis command from within the load run.
Instead there is a new API provided by the `library` object. The new APIs:
* `redis.log` - behaves the same as `redis.log`
* `redis.register_function` - registers a new function to the library

The loading run's purpose is to register functions using the new `redis.register_function` API.
Any attempt to use any other API will result in an error. In addition, the load run has a time
limit of 500ms; an error is raised on timeout and the entire operation is aborted.

### `redis.register_function`
`redis.register_function(<function_name>, <callback>, [<description>])`
This new API allows users to register a new function that will be linked to the newly created library.
This API can only be called during the load run (see definition above). Any attempt to use it outside
of the load run will result in an error.
The parameters passed to the API are:
* function_name - Function name (must be a Lua string)
* callback - Lua function object that will be called when the function is invoked using fcall/fcall_ro
* description - Function description, optional (must be a Lua string).

### Example
The following example creates a library called `lib` with 2 functions, `f1` and `f2`, returning 1 and 2 respectively:
```
local function f1(keys, args)
    return 1
end

local function f2(keys, args)
    return 2
end

redis.register_function('f1', f1)
redis.register_function('f2', f2)
```

Notice: Unlike `eval`, functions inside a library get the KEYS and ARGV as arguments to the
functions and not as globals.

### Technical Details

On the load run we only want the user to be able to call a whitelist of APIs. This way, in
the future, if new APIs are added, they will not be available to the load run
unless specifically added to this whitelist. We put the whitelist on the `library` object and
make sure the `library` object is only available to the load run by using the [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) API. This API allows us to set
the `globals` of a function (and all the functions it creates). Before starting the load run we
create a new fresh Lua table (call it `g`) that only contains the `library` API (we make sure
to set global protection on this table just like the general global protection that already exists
today), then we use [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv)
to set `g` as the global table of the load run. After the load run finishes we update `g`'s
metatable and set the `__index` and `__newindex` functions to be `_G` (Lua default globals);
we also pop out the `library` object as we do not need it anymore.
This way, any function that was created during the load run (and will be invoked using `fcall`) will
see the default globals as it expects to see them and will not have the `library` API anymore.

An important outcome of this new approach is that now we can achieve a distinct global table
for each library (it is not yet like that, but it is very easy to achieve now). In the future we can
decide to remove global protection, because globals of different libraries will not collide, or we
can choose to give a different API to different libraries based on some configuration or input.

Notice that this technique was meant to prevent errors and was not meant to prevent a malicious
user from exploiting it. For example, the load run can still save the `library` object in some local
variable and then use it in the `fcall` context. To prevent such malicious use, the C code also makes
sure it is running in the right context and raises an error if not.
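
A minimal sketch of that environment swap using the Lua 5.1 C API, which Redis embeds; `load_func_idx` and `registerLibraryApi()` are illustrative assumptions, and the metatable fix-up after the load run is only hinted at in a comment.
```c
#include <lua.h>

/* Illustrative helper: would push the whitelisted 'library' API (e.g.
 * redis.register_function, redis.log) into the table on top of the stack. */
static void registerLibraryApi(lua_State *L) { (void)L; }

/* 'load_func_idx' is assumed to be the stack index of the compiled library
 * chunk that is about to be executed as the loading run. */
void setLoadRunEnv(lua_State *L, int load_func_idx) {
    lua_newtable(L);                /* g: a fresh, almost empty global table.  */
    registerLibraryApi(L);          /* Only the whitelisted API lives in g.    */
    lua_setfenv(L, load_func_idx);  /* Pops g and makes it the chunk's env, so
                                     * the loading run cannot reach _G.        */
    /* After the load run, the real code updates g's metatable so __index and
     * __newindex fall through to _G, and drops the 'library' object. */
}
```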
2022-01-06 13:39:38 +02:00
Meir Shpilraien (Spielrein) 687210f155
Add FUNCTION FLUSH command to flush all functions (#9936)
Added the `FUNCTION FLUSH` command. The new sub-command allows deleting all the functions.
An optional `[SYNC|ASYNC]` argument can be given to control whether to flush the
functions synchronously or asynchronously. If not given, the default flush mode is chosen by
the `lazyfree-lazy-user-flush` configuration value.

Add the missing `functions.tcl` test to the list of tests that are executed in test_helper.tcl,
and call FUNCTION FLUSH in between servers in external mode.
2021-12-16 17:58:25 +02:00
Wang Yuan c1718f9d86
Replication backlog and replicas use one global shared replication buffer (#9166)
## Background
For a redis master, each replica uses its own copy of the replication buffer, which is a big waste of memory:
more replicas, more waste, and allocating/freeing memory for every reply list also costs a lot.
If we set client-output-buffer-limit small and write traffic is heavy, the master may disconnect
replicas and fail to finish synchronization with them. If we set client-output-buffer-limit big,
the master may go OOM when there are many replicas that each keep a lot of memory.
Because the replication buffers of different replica clients are the same, one simple idea is that
all replicas use only one replication buffer, which will effectively save memory.

Since replication backlog content is the same as replicas' output buffer, now we
can discard replication backlog memory and use global shared replication buffer
to implement replication backlog mechanism.

## Implementation
I create one global "replication buffer" which contains the content of the replication stream.
The structure of the "replication buffer" is similar to the reply list that exists in every client.
But the node of the list is `replBufBlock`, which has `id, repl_offset, refcount` fields.
```c
/* Replication buffer blocks is the list of replBufBlock.
 *
 * +--------------+       +--------------+       +--------------+
 * | refcount = 1 |  ...  | refcount = 0 |  ...  | refcount = 2 |
 * +--------------+       +--------------+       +--------------+
 *      |                                            /       \
 *      |                                           /         \
 *      |                                          /           \
 *  Repl Backlog                               Replica_A     Replica_B
 * 
 * Each replica or replication backlog increments only the refcount of the
 * 'ref_repl_buf_node' which it points to. So when replica walks to the next
 * node, it should first increase the next node's refcount, and when we trim
 * the replication buffer nodes, we remove node always from the head node which
 * refcount is 0. If the refcount of the head node is not 0, we must stop
 * trimming and never iterate the next node. */

/* Similar with 'clientReplyBlock', it is used for shared buffers between
 * all replica clients and replication backlog. */
typedef struct replBufBlock {
    int refcount;           /* Number of replicas or repl backlog using. */
    long long id;           /* The unique incremental number. */
    long long repl_offset;  /* Start replication offset of the block. */
    size_t size, used;
    char buf[];
} replBufBlock;
```
So now when we feed the replication stream into the replication backlog and all replicas, we only need
to feed the stream into the replication buffer via `feedReplicationBuffer`. In this function, we set some fields of
the replication backlog and replicas to references of the global replication buffer blocks. We also
need to check the replicas' output buffer limit and free replicas exceeding `client-output-buffer-limit`, and trim
the replication backlog if it exceeds `repl-backlog-size`.

When sending replies to replicas, we also need to iterate the replication buffer blocks and send their
content; when one block has been fully sent to a replica, we decrement the current node's refcount and
increment the next node's, and then free the blocks whose refcount is 0 from the
head of the replication buffer blocks.

Since we now use a linked list to manage the replication backlog, it may cost much time to iterate
all the linked list nodes to find the corresponding replication buffer node. So we create a rax tree to
index some of the nodes, but to avoid the rax tree occupying too much memory, we record
one node per 64 for the index.

Currently, to make partial resynchronization possible as much as we can, we always keep the replication
backlog as the last reference to the replication buffer blocks; the backlog size may exceed our setting
if slow replicas reference vast numbers of replication buffer blocks, and this method doesn't increase
memory usage since they share the replication buffer. To avoid freezing the server while freeing unreferenced
replication buffer blocks when we need to trim the backlog for exceeding the backlog size setting,
we trim the backlog incrementally (free 64 blocks per call now), and make it faster in
`beforeSleep` (free 640 blocks).
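
A minimal sketch of that incremental, head-only trimming, with the block list simplified to a singly linked chain of `replBufBlock`s and the per-call limit taken from the text above; the function name and signature are illustrative.
```c
#include <stdlib.h>

typedef struct replBufBlock {
    int refcount;                   /* Number of replicas / backlog using it. */
    long long id;
    long long repl_offset;
    size_t size, used;
    struct replBufBlock *next;      /* Simplified singly linked list.         */
} replBufBlock;

#define REPL_BACKLOG_TRIM_BLOCKS_PER_CALL 64

/* Free at most a fixed number of unreferenced blocks from the head while the
 * backlog memory is still above its configured size; stop immediately at the
 * first block that is still referenced. Returns the number of blocks freed. */
int incrementalTrimReplicationBacklog(replBufBlock **head, size_t *mem_used,
                                      size_t backlog_size_limit) {
    int freed = 0;
    while (*head && freed < REPL_BACKLOG_TRIM_BLOCKS_PER_CALL &&
           *mem_used > backlog_size_limit) {
        replBufBlock *b = *head;
        if (b->refcount != 0) break;    /* Still referenced: stop trimming. */
        *head = b->next;
        *mem_used -= b->size;
        free(b);
        freed++;
    }
    return freed;
}
```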

### Other changes
- `mem_total_replication_buffers`: we add this field to the INFO command; it is the total
  memory of the replication buffers used.
- `mem_clients_slaves`: now, even if a replica is slow to replicate and its output buffer memory
  is not 0, this field may still be 0, since the replication backlog and replicas share one global replication
  buffer. Only if the replication buffer memory is more than the repl backlog setting size do we consider
  the excess as the replicas' memory. Otherwise, we consider the replication buffer memory the consumption
  of the repl backlog.
- Key eviction
  Since all replicas and the replication backlog share the global replication buffer, we consider only the
  part exceeding the backlog size as the extra separate consumption of the replicas.
  Because we trim the backlog incrementally in the background, the backlog size may exceed our
  setting if slow replicas that reference vast numbers of replication buffer blocks disconnect.
  To avoid a massive eviction loop, we don't count the delayed-freed replication backlog into
  used memory even if there are no replicas, i.e. we also regard this memory as the replicas' memory.
- `client-output-buffer-limit` check for replica clients
  It doesn't make sense to set the replica clients output buffer limit lower than the repl-backlog-size
  config (partial sync will succeed and then replica will get disconnected). Such a configuration is
  ignored (the size of repl-backlog-size will be used). This doesn't have memory consumption
  implications since the replica client will share the backlog buffers memory.
- Drop replication backlog after loading data if needed
  We always create a replication backlog if the server is a master; we need it because we put DELs in
  it when loading expired keys from the RDB. But if the RDB doesn't have replication info or there is no RDB,
  it is not possible to support partial resynchronization, so to avoid the extra memory of the replication
  backlog, we drop it.
- Multi IO threads
  Since all replicas and the replication backlog use the global replication buffer, if I/O threads are enabled,
  then to guarantee thread-safe data access, we must let the main thread handle sending the output buffer
  to all replicas. Previously, other IO threads could handle sending the output buffers of all replicas.

## Other optimizations
This solution resolves some other problems:
- When replicas are disconnected from the master for exceeding the output buffer limit, releasing the output
  buffers of the replicas could freeze the server if we set a big `client-output-buffer-limit` for replicas;
  now it doesn't cause freezing.
- This implementation may mitigate the reply-list copy cost (which also freezes the server) when one replica
  has a huge reply buffer and another replica copies that buffer for full synchronization; now we just copy
  the reference info, which is very light.
- If we set the replication backlog size big, it could also cost much time to copy the replication backlog
  into a replica's output buffer. This commit eliminates that problem.
- Resizing the replication backlog doesn't empty the current replication backlog content.
2021-10-25 09:24:31 +03:00
Viktor Söderqvist 9a3bd07e9f
Unify dbSyncDelete and dbAsyncDelete (#9573)
Just a cleanup to make the code easier to maintain and reduce the risk of something being overlooked.
2021-10-01 15:49:33 +03:00
Viktor Söderqvist f24c63a292
Slot-to-keys using dict entry metadata (#9356)
* Enhance dict to support arbitrary metadata carried in dictEntry

Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>

* Rewrite slot-to-keys mapping to linked lists using dict entry metadata

This is a memory enhancement for Redis Cluster.

The radix tree slots_to_keys (which duplicates all key names prefixed with their
slot number) is replaced with a linked list for each slot. The dict entries of
the same cluster slot form a linked list and the pointers are stored as metadata
in each dict entry of the main DB dict.
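
A minimal sketch of the shape of this mapping, based only on the description above; the struct names and the fixed 16384-slot array are illustrative stand-ins for the real cluster code.
```c
typedef struct dictEntry dictEntry;

/* Metadata carried at the end of each main-dict entry: prev/next pointers
 * chaining together all entries that belong to the same cluster slot. */
typedef struct clusterDictEntryMetadata {
    dictEntry *prev;     /* Previous entry in the same cluster slot. */
    dictEntry *next;     /* Next entry in the same cluster slot.     */
} clusterDictEntryMetadata;

/* One list head per slot replaces the old slots_to_keys radix tree that
 * duplicated every key name prefixed with its slot number. */
typedef struct clusterSlotToKeyMapping {
    struct {
        dictEntry *head;         /* First entry of the slot's linked list. */
        unsigned long count;     /* Number of keys in this slot.           */
    } by_slot[16384];
} clusterSlotToKeyMapping;
```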

This commit also moves the slot-to-key API from db.c to cluster.c.

Co-authored-by: Jim Brunner <brunnerj@amazon.com>
2021-08-30 23:25:36 -07:00
yoav-steinberg 5e908a290c
dict struct memory optimizations (#9228)
Reduce dict struct memory overhead
on 64bit dict size goes down from jemalloc's 96 byte bin to its 56 byte bin.

summary of changes:
- Remove `privdata` from callbacks and dict creation. (this affects many files, see "Interface change" below).
- Meld `dictht` struct into the `dict` struct to eliminate struct padding. (this affects just dict.c and defrag.c)
- Eliminate the `sizemask` field, can be calculated from size when needed.
- Convert the `size` field into `size_exp` (exponent), utilizes one byte instead of 8.

Interface change: pass dict pointer to dict type call back functions.
This is instead of passing the removed privdata field. In the future if
we'd like to have private data in the callbacks we can extract it from
the dict type. We can extend dictType to include a custom dict struct
allocator and use it to allocate more data at the end of the dict
struct. This data can then be used to store private data later accessed
by the callbacks.
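
A minimal sketch of a dict struct along the lines of the changes listed above (melded tables, no sizemask, exponent-based size, dict pointer passed to callbacks); field names are illustrative and not copied from the real dict.h.
```c
typedef struct dictType dictType;
typedef struct dictEntry dictEntry;

typedef struct dict {
    dictType *type;              /* Callbacks now receive the dict pointer
                                  * itself instead of a privdata field.     */
    dictEntry **ht_table[2];     /* The two hash tables (the old ht[0]/ht[1]
                                  * melded into the dict struct itself).    */
    unsigned long ht_used[2];
    long rehashidx;              /* -1 when not rehashing.                  */
    signed char ht_size_exp[2];  /* Table size = 1 << ht_size_exp[i]; the
                                  * sizemask is derived on demand, so the
                                  * field is gone.                          */
} dict;
```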
2021-08-05 08:25:58 +03:00
chenyang8094 e0cd3ad0de
Enhance mem_usage/free_effort/unlink/copy callbacks and add GetDbFromIO api. (#8999)
Create new module type enhanced callbacks: mem_usage2, free_effort2, unlink2, copy2.
These will be given a context pointer from which the module can obtain the key name and database id.
In addition the digest and defrag context can now be used to obtain the key name and database id.
2021-06-16 09:45:49 +03:00
Binbin 01495c674a
Extend freeSlotsToKeysMapAsync and freeTrackingRadixTreeAsync to check LAZYFREE_THRESHOLD. (#8969)
Without this fix, FLUSHALL ASYNC would have freed these in a background thread,
even if they didn't contain many elements (unlike how it works with other structures), which could be inefficient.
2021-05-30 17:56:04 +03:00
Oran Agra fbc0e2b834
Reset lazyfreed_objects info field with RESETSTAT, test for stream lazyfree (#8934)
And also add tests to cover lazy free of streams with various types of
metadata (see #8932)
2021-05-17 16:54:37 +03:00
Oran Agra 97108845e2
Fix crash unlinking a stream with groups rax and no groups (#8932)
When estimating the effort for unlink, we try to compute the effort of
the first group and extrapolate.
If there's a groups rax that's empty, there's an assertion.

reproduce:
xadd s * a b
xgroup create s bla $
xgroup destroy s bla
unlink s
2021-05-10 17:08:43 +03:00
Madelyn Olson c73b4ddfd9
Fix memory leak when doing lazyfreeing client tracking table (#8822)
Interior rax pointers were not being freed
2021-04-19 22:16:27 -07:00
Yang Bodong 294f93af97
Add lazyfree-lazy-user-flush config to control default behavior of FLUSH[ALL|DB], SCRIPT FLUSH (#8258)
* Adds ASYNC and SYNC arguments to SCRIPT FLUSH
* Adds SYNC argument to FLUSHDB and FLUSHALL
* Adds new config to control the default behavior of FLUSHDB, FLUSHALL and SCRIPT FLUSH.

the new behavior is as follows:
* FLUSH[ALL|DB],SCRIPT FLUSH: Determine sync or async according to the
  value of lazyfree-lazy-user-flush.
* FLUSH[ALL|DB],SCRIPT FLUSH ASYNC: Always flushes the database in an async manner.
* FLUSH[ALL|DB],SCRIPT FLUSH SYNC: Always flushes the database in a sync manner.
2021-01-15 15:32:58 +02:00
Madelyn Olson 59ff42c421
Cleanup key tracking documentation and table management (#8039)
Cleanup key tracking documentation, always cleanup the tracking table, and free the tracking table in an async manner when applicable.
2020-12-23 19:13:12 -08:00
Wang Yuan 75f9dec644
Limit the main db and expires dictionaries to expand (#7954)
As we know, redis may reject users' requests or evict some keys if
used memory is over maxmemory. Dictionary expansion may make
things worse: some big dictionaries, such as the main db and expires dicts,
may eat a huge amount of memory at once to allocate a new big hash table and
end up far over maxmemory after expanding.
There are related issues: #4213 #4583

In more detail: when expanding a dict in redis, we allocate a new big
ht[1] that generally is double the size of ht[0], so the size of ht[1] will be
very big if ht[0] is already big. For the db dict, if we have more than
64 million keys, we need to spend 1GB on ht[1] when the dict expands.

If the sum of used memory and the new hash table the dict needs exceeds
maxmemory, we shouldn't allow the dict to expand. Because, even if we
enable keys eviction, we still couldn't add many more keys after
eviction and rehashing; what's worse, redis will keep fewer keys when it
only has a little memory left, spent on the new hash table instead
of users' data. Moreover, users can't write data to redis at all if
keys eviction is disabled.

What this commit changed:

Add a new member function expandAllowed to the dict type; it provides a way
for the caller to allow expanding or not. We expose two parameters for this
function: the extra memory needed for expanding and the dict's current load factor;
users can implement a function that makes a decision based on them.
For the main db dict and expires dict types, these dictionaries may be very
big and cost huge memory to expand, so we implement a judgement
function: we provisionally stop the dict from expanding if used memory would
be over maxmemory after the dict expands, but to guarantee the performance
of redis, we still allow the dict to expand if its load factor exceeds the
safe load factor.
Add test cases to verify we don't allow the main db to expand when the remaining
memory is not enough, so as to avoid keys eviction.
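
A minimal sketch of such a judgement function. The description exposes two parameters (extra memory needed, current load factor); `used_memory`/`maxmemory` are passed explicitly here only to keep the sketch self-contained instead of reading global server state, and the safe-load-factor value is an illustrative assumption.
```c
#include <stddef.h>

#define SAFE_LOAD_FACTOR 1.618   /* Illustrative "safe" load factor. */

int dictExpandAllowedSketch(size_t moreMem, double usedRatio,
                            size_t used_memory, size_t maxmemory) {
    if (usedRatio <= SAFE_LOAD_FACTOR) {
        /* Normal case: only allow the expand if it keeps us under maxmemory
         * (maxmemory == 0 means no limit is configured). */
        return maxmemory == 0 || used_memory + moreMem <= maxmemory;
    }
    /* Load factor already past the safe bound: allow the expand anyway to
     * protect lookup performance. */
    return 1;
}
```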

Other changes:

Regarding the new hash table size when expanding: before this commit, the size
was double the dict's used count, later passed through _dictNextPower. Actually we aim to
keep the dict load factor between 0.5 and 1.0. Now we replace *2 with
+1; since the first check is that used >= size, the outcome
will usually be the same as _dictNextPower(used+1). The only case where
it'll differ is when dict_can_resize is false during a fork, so that later
_dictNextPower(used*2) would cause the dict to jump to *4 (i.e.
_dictNextPower(1025*2) would return 4096).
Fix rehash test cases due to the changed algorithm for the new hash table size
when expanding.
2020-12-06 11:53:04 +02:00
Wang Yuan b55a827ea2
Backup keys to slots map and restore when fail to sync if diskless-load type is swapdb in cluster mode (#8108)
When the replica diskless-load type is swapdb in cluster mode, we didn't back up
the keys-to-slots map, so we would lose it if the sync failed.
Now we back up the keys-to-slots map first, and restore it properly on failure.

This commit includes a refactoring/cleanup of the backups mechanism (moving it to db.c and re-structuring it a bit).

Co-authored-by: Oran Agra <oran@redislabs.com>
2020-12-02 13:56:11 +02:00
chenyangyang c1aaad06d8
Modules callbacks for lazy free effort, and unlink (#7912)
Add two optional callbacks to the RedisModuleTypeMethods structure: `free_effort`
and `unlink`. The `free_effort` callback indicates the effort required to free the module's memory.
Currently, if the effort exceeds LAZYFREE_THRESHOLD, the module memory may be released
asynchronously. The `unlink` callback indicates the key has been removed from the DB by redis, and
may soon be freed by a background thread.

Add `lazyfreed_objects` info field, which represents the number of objects that have been
lazyfreed since redis was started.

Add `RM_GetTypeMethodVersion` API, which returns the current redis-server runtime value of
`REDISMODULE_TYPE_METHOD_VERSION`. You can use that when calling `RM_CreateDataType` to know
which fields of RedisModuleTypeMethods are gonna be supported and which will be ignored.
2020-11-16 10:34:04 +02:00
Wang Yuan 445a4b669a
Implement redisAtomic to replace _Atomic C11 builtin (#7707)
Redis 6.0 introduced I/O threads; they are cool and efficient, and we use C11
_Atomic to establish inter-thread synchronization without a mutex. But only
compilers that support C11 _Atomic can compile the redis code, which brings a
lot of inconvenience since some common platforms, such as CentOS7, can't support
it by default, so we want to implement a redis atomic type to make it more portable.

We had implemented our atomic variable for redis with only 'relaxed'
operations in src/atomicvar.h, so we now implement some operations with
'sequentially-consistent' ordering, just like the default behavior of C11 _Atomic,
which can establish inter-thread synchronization. And we replace all uses of C11
_Atomic with the redis atomic variable.

Our implementation of the redis atomic variable uses C11 _Atomic, __atomic or
__sync macros if available; it supports most common platforms, and we
automatically detect which feature to use. In the Makefile we use a dummy file to
detect if the compiler supports C11 _Atomic. Now, for gcc, we can theoretically
compile the redis code if your gcc version is not less than 4.1.2 (which starts to
support the __sync_xxx operations). Otherwise, since we removed the mutex fallback
previously used to implement the redis atomic variable (for performance and testing),
you will get compile errors if your compiler doesn't support any of the features above.
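
A minimal sketch of that feature-detection layering, preferring C11 _Atomic, then GCC's __atomic builtins, then the older __sync builtins; the macro names and the ordering below are illustrative, not the exact atomicvar.h contents.
```c
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L && \
    !defined(__STDC_NO_ATOMICS__)
/* C11 _Atomic available. */
#include <stdatomic.h>
#define redisAtomic _Atomic
#define atomicIncr(var, count) \
    atomic_fetch_add_explicit(&(var), (count), memory_order_relaxed)
#elif defined(__ATOMIC_RELAXED)
/* GCC/Clang __atomic builtins. */
#define redisAtomic
#define atomicIncr(var, count) __atomic_add_fetch(&(var), (count), __ATOMIC_RELAXED)
#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
/* Older gcc (>= 4.1.2): legacy __sync builtins. */
#define redisAtomic
#define atomicIncr(var, count) __sync_add_and_fetch(&(var), (count))
#else
#error "No C11 _Atomic, __atomic or __sync support in this compiler."
#endif
```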

To cover redis atomic variable tests, we add other CI jobs that build redis on
CentOS6 and CentOS7, and daily workflow jobs that run the tests on them.
For them, we just install gcc by default in order to cover different compiler
versions; gcc is 4.4.7 by default on CentOS6 and 4.8.5 on CentOS7.

We restore the ability to test redis with Helgrind to find data race
errors. But you need to install Valgrind in the default path configuration
before running your tests, since we use macros in helgrind.h to tell Helgrind
about inter-thread happens-before relationships explicitly, to avoid false positives.
Please open an issue on github if you find data race errors related to this commit.

Unrelated:
- Fix redefinition of typedef 'RedisModuleUserChangedFunc'
  Some old compilers will report errors or warnings if we
  re-define a function type.
2020-09-17 16:01:45 +03:00
Oran Agra 1c71038540
Squash merging 125 typo/grammar/comment/doc PRs (#7773)
List of squashed commits or PRs
===============================

commit 66801ea
Author: hwware <wen.hui.ware@gmail.com>
Date:   Mon Jan 13 00:54:31 2020 -0500

    typo fix in acl.c

commit 46f55db
Author: Itamar Haber <itamar@redislabs.com>
Date:   Sun Sep 6 18:24:11 2020 +0300

    Updates a couple of comments

    Specifically:

    * RM_AutoMemory completed instead of pointing to docs
    * Updated link to custom type doc

commit 61a2aa0
Author: xindoo <xindoo@qq.com>
Date:   Tue Sep 1 19:24:59 2020 +0800

    Correct errors in code comments

commit a5871d1
Author: yz1509 <pro-756@qq.com>
Date:   Tue Sep 1 18:36:06 2020 +0800

    fix typos in module.c

commit 41eede7
Author: bookug <bookug@qq.com>
Date:   Sat Aug 15 01:11:33 2020 +0800

    docs: fix typos in comments

commit c303c84
Author: lazy-snail <ws.niu@outlook.com>
Date:   Fri Aug 7 11:15:44 2020 +0800

    fix spelling in redis.conf

commit 1eb76bf
Author: zhujian <zhujianxyz@gmail.com>
Date:   Thu Aug 6 15:22:10 2020 +0800

    add a missing 'n' in comment

commit 1530ec2
Author: Daniel Dai <764122422@qq.com>
Date:   Mon Jul 27 00:46:35 2020 -0400

    fix spelling in tracking.c

commit e517b31
Author: Hunter-Chen <huntcool001@gmail.com>
Date:   Fri Jul 17 22:33:32 2020 +0800

    Update redis.conf

    Co-authored-by: Itamar Haber <itamar@redislabs.com>

commit c300eff
Author: Hunter-Chen <huntcool001@gmail.com>
Date:   Fri Jul 17 22:33:23 2020 +0800

    Update redis.conf

    Co-authored-by: Itamar Haber <itamar@redislabs.com>

commit 4c058a8
Author: 陈浩鹏 <chenhaopeng@heytea.com>
Date:   Thu Jun 25 19:00:56 2020 +0800

    Grammar fix and clarification

commit 5fcaa81
Author: bodong.ybd <bodong.ybd@alibaba-inc.com>
Date:   Fri Jun 19 10:09:00 2020 +0800

    Fix typos

commit 4caca9a
Author: Pruthvi P <pruthvi@ixigo.com>
Date:   Fri May 22 00:33:22 2020 +0530

    Fix typo eviciton => eviction

commit b2a25f6
Author: Brad Dunbar <dunbarb2@gmail.com>
Date:   Sun May 17 12:39:59 2020 -0400

    Fix a typo.

commit 12842ae
Author: hwware <wen.hui.ware@gmail.com>
Date:   Sun May 3 17:16:59 2020 -0400

    fix spelling in redis conf

commit ddba07c
Author: Chris Lamb <chris@chris-lamb.co.uk>
Date:   Sat May 2 23:25:34 2020 +0100

    Correct a "conflicts" spelling error.

commit 8fc7bf2
Author: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
Date:   Thu Apr 30 10:25:27 2020 +0900

    docs: fix EXPIRE_FAST_CYCLE_DURATION to ACTIVE_EXPIRE_CYCLE_FAST_DURATION

commit 9b2b67a
Author: Brad Dunbar <dunbarb2@gmail.com>
Date:   Fri Apr 24 11:46:22 2020 -0400

    Fix a typo.

commit 0746f10
Author: devilinrust <63737265+devilinrust@users.noreply.github.com>
Date:   Thu Apr 16 00:17:53 2020 +0200

    Fix typos in server.c

commit 92b588d
Author: benjessop12 <56115861+benjessop12@users.noreply.github.com>
Date:   Mon Apr 13 13:43:55 2020 +0100

    Fix spelling mistake in lazyfree.c

commit 1da37aa
Merge: 2d4ba28 af347a8
Author: hwware <wen.hui.ware@gmail.com>
Date:   Thu Mar 5 22:41:31 2020 -0500

    Merge remote-tracking branch 'upstream/unstable' into expiretypofix

commit 2d4ba28
Author: hwware <wen.hui.ware@gmail.com>
Date:   Mon Mar 2 00:09:40 2020 -0500

    fix typo in expire.c

commit 1a746f7
Author: SennoYuki <minakami1yuki@gmail.com>
Date:   Thu Feb 27 16:54:32 2020 +0800

    fix typo

commit 8599b1a
Author: dongheejeong <donghee950403@gmail.com>
Date:   Sun Feb 16 20:31:43 2020 +0000

    Fix typo in server.c

commit f38d4e8
Author: hwware <wen.hui.ware@gmail.com>
Date:   Sun Feb 2 22:58:38 2020 -0500

    fix typo in evict.c

commit fe143fc
Author: Leo Murillo <leonardo.murillo@gmail.com>
Date:   Sun Feb 2 01:57:22 2020 -0600

    Fix a few typos in redis.conf

commit 1ab4d21
Author: viraja1 <anchan.viraj@gmail.com>
Date:   Fri Dec 27 17:15:58 2019 +0530

    Fix typo in Latency API docstring

commit ca1f70e
Author: gosth <danxuedexing@qq.com>
Date:   Wed Dec 18 15:18:02 2019 +0800

    fix typo in sort.c

commit a57c06b
Author: ZYunH <zyunhjob@163.com>
Date:   Mon Dec 16 22:28:46 2019 +0800

    fix-zset-typo

commit b8c92b5
Author: git-hulk <hulk.website@gmail.com>
Date:   Mon Dec 16 15:51:42 2019 +0800

    FIX: typo in cluster.c, onformation->information

commit 9dd981c
Author: wujm2007 <jim.wujm@gmail.com>
Date:   Mon Dec 16 09:37:52 2019 +0800

    Fix typo

commit e132d7a
Author: Sebastien Williams-Wynn <s.williamswynn.mail@gmail.com>
Date:   Fri Nov 15 00:14:07 2019 +0000

    Minor typo change

commit 47f44d5
Author: happynote3966 <01ssrmikururudevice01@gmail.com>
Date:   Mon Nov 11 22:08:48 2019 +0900

    fix comment typo in redis-cli.c

commit b8bdb0d
Author: fulei <fulei@kuaishou.com>
Date:   Wed Oct 16 18:00:17 2019 +0800

    Fix a spelling mistake of comments  in defragDictBucketCallback

commit 0def46a
Author: fulei <fulei@kuaishou.com>
Date:   Wed Oct 16 13:09:27 2019 +0800

    fix some spelling mistakes of comments in defrag.c

commit f3596fd
Author: Phil Rajchgot <tophil@outlook.com>
Date:   Sun Oct 13 02:02:32 2019 -0400

    Typo and grammar fixes

    Redis and its documentation are great -- just wanted to submit a few corrections in the spirit of Hacktoberfest. Thanks for all your work on this project. I use it all the time and it works beautifully.

commit 2b928cd
Author: KangZhiDong <worldkzd@gmail.com>
Date:   Sun Sep 1 07:03:11 2019 +0800

    fix typos

commit 33aea14
Author: Axlgrep <axlgrep@gmail.com>
Date:   Tue Aug 27 11:02:18 2019 +0800

    Fixed eviction spelling issues

commit e282a80
Author: Simen Flatby <simen@oms.no>
Date:   Tue Aug 20 15:25:51 2019 +0200

    Update comments to reflect prop name

    In the comments the prop is referenced as replica-validity-factor,
    but it is really named cluster-replica-validity-factor.

commit 74d1f9a
Author: Jim Green <jimgreen2013@qq.com>
Date:   Tue Aug 20 20:00:31 2019 +0800

    fix comment error, the code is ok

commit eea1407
Author: Liao Tonglang <liaotonglang@gmail.com>
Date:   Fri May 31 10:16:18 2019 +0800

    typo fix

    fix cna't to can't

commit 0da553c
Author: KAWACHI Takashi <tkawachi@gmail.com>
Date:   Wed Jul 17 00:38:16 2019 +0900

    Fix typo

commit 7fc8fb6
Author: Michael Prokop <mika@grml.org>
Date:   Tue May 28 17:58:42 2019 +0200

    Typo fixes

    s/familar/familiar/
    s/compatiblity/compatibility/
    s/ ot / to /
    s/itsef/itself/

commit 5f46c9d
Author: zhumoing <34539422+zhumoing@users.noreply.github.com>
Date:   Tue May 21 21:16:50 2019 +0800

    typo-fixes

    typo-fixes

commit 321dfe1
Author: wxisme <850885154@qq.com>
Date:   Sat Mar 16 15:10:55 2019 +0800

    typo fix

commit b4fb131
Merge: 267e0e6 3df1eb8
Author: Nikitas Bastas <nikitasbst@gmail.com>
Date:   Fri Feb 8 22:55:45 2019 +0200

    Merge branch 'unstable' of antirez/redis into unstable

commit 267e0e6
Author: Nikitas Bastas <nikitasbst@gmail.com>
Date:   Wed Jan 30 21:26:04 2019 +0200

    Minor typo fix

commit 30544e7
Author: inshal96 <39904558+inshal96@users.noreply.github.com>
Date:   Fri Jan 4 16:54:50 2019 +0500

    remove an extra 'a' in the comments

commit 337969d
Author: BrotherGao <yangdongheng11@gmail.com>
Date:   Sat Dec 29 12:37:29 2018 +0800

    fix typo in redis.conf

commit 9f4b121
Merge: 423a030 e504583
Author: BrotherGao <yangdongheng@xiaomi.com>
Date:   Sat Dec 29 11:41:12 2018 +0800

    Merge branch 'unstable' of antirez/redis into unstable

commit 423a030
Merge: 42b02b7 46a51cd
Author: 杨东衡 <yangdongheng@xiaomi.com>
Date:   Tue Dec 4 23:56:11 2018 +0800

    Merge branch 'unstable' of antirez/redis into unstable

commit 42b02b7
Merge: 68c0e6e b8febe6
Author: Dongheng Yang <yangdongheng11@gmail.com>
Date:   Sun Oct 28 15:54:23 2018 +0800

    Merge pull request #1 from antirez/unstable

    update local data

commit 714b589
Author: Christian <crifei93@gmail.com>
Date:   Fri Dec 28 01:17:26 2018 +0100

    fix typo "resulution"

commit e23259d
Author: garenchan <1412950785@qq.com>
Date:   Wed Dec 26 09:58:35 2018 +0800

    fix typo: segfauls -> segfault

commit a9359f8
Author: xjp <jianping_xie@aliyun.com>
Date:   Tue Dec 18 17:31:44 2018 +0800

    Fixed REDISMODULE_H spell bug

commit a12c3e4
Author: jdiaz <jrd.palacios@gmail.com>
Date:   Sat Dec 15 23:39:52 2018 -0600

    Fixes hyperloglog hash function comment block description

commit 770eb11
Author: 林上耀 <1210tom@163.com>
Date:   Sun Nov 25 17:16:10 2018 +0800

    fix typo

commit fd97fbb
Author: Chris Lamb <chris@chris-lamb.co.uk>
Date:   Fri Nov 23 17:14:01 2018 +0100

    Correct "unsupported" typo.

commit a85522d
Author: Jungnam Lee <jungnam.lee@oracle.com>
Date:   Thu Nov 8 23:01:29 2018 +0900

    fix typo in test comments

commit ade8007
Author: Arun Kumar <palerdot@users.noreply.github.com>
Date:   Tue Oct 23 16:56:35 2018 +0530

    Fixed grammatical typo

    Fixed typo for word 'dictionary'

commit 869ee39
Author: Hamid Alaei <hamid.a85@gmail.com>
Date:   Sun Aug 12 16:40:02 2018 +0430

    fix documentations: (ThreadSafeContextStart/Stop -> ThreadSafeContextLock/Unlock), minor typo

commit f89d158
Author: Mayank Jain <mayankjain255@gmail.com>
Date:   Tue Jul 31 23:01:21 2018 +0530

    Updated README.md with some spelling corrections.

    Made correction in spelling of some misspelled words.

commit 892198e
Author: dsomeshwar <someshwar.dhayalan@gmail.com>
Date:   Sat Jul 21 23:23:04 2018 +0530

    typo fix

commit 8a4d780
Author: Itamar Haber <itamar@redislabs.com>
Date:   Mon Apr 30 02:06:52 2018 +0300

    Fixes some typos

commit e3acef6
Author: Noah Rosamilia <ivoahivoah@gmail.com>
Date:   Sat Mar 3 23:41:21 2018 -0500

    Fix typo in /deps/README.md

commit 04442fb
Author: WuYunlong <xzsyeb@126.com>
Date:   Sat Mar 3 10:32:42 2018 +0800

    Fix typo in readSyncBulkPayload() comment.

commit 9f36880
Author: WuYunlong <xzsyeb@126.com>
Date:   Sat Mar 3 10:20:37 2018 +0800

    replication.c comment: run_id -> replid.

commit f866b4a
Author: Francesco 'makevoid' Canessa <makevoid@gmail.com>
Date:   Thu Feb 22 22:01:56 2018 +0000

    fix comment typo in server.c

commit 0ebc69b
Author: 줍 <jubee0124@gmail.com>
Date:   Mon Feb 12 16:38:48 2018 +0900

    Fix typo in redis.conf

    Fix `five behaviors` to `eight behaviors` in [this sentence ](antirez/redis@unstable/redis.conf#L564)

commit b50a620
Author: martinbroadhurst <martinbroadhurst@users.noreply.github.com>
Date:   Thu Dec 28 12:07:30 2017 +0000

    Fix typo in valgrind.sup

commit 7d8f349
Author: Peter Boughton <peter@sorcerersisle.com>
Date:   Mon Nov 27 19:52:19 2017 +0000

    Update CONTRIBUTING; refer doc updates to redis-doc repo.

commit 02dec7e
Author: Klauswk <klauswk1@hotmail.com>
Date:   Tue Oct 24 16:18:38 2017 -0200

    Fix typo in comment

commit e1efbc8
Author: chenshi <baiwfg2@gmail.com>
Date:   Tue Oct 3 18:26:30 2017 +0800

    Correct two spelling errors of comments

commit 93327d8
Author: spacewander <spacewanderlzx@gmail.com>
Date:   Wed Sep 13 16:47:24 2017 +0800

    Update the comment for OBJ_ENCODING_EMBSTR_SIZE_LIMIT's value

    The value of OBJ_ENCODING_EMBSTR_SIZE_LIMIT is 44 now instead of 39.

commit 63d361f
Author: spacewander <spacewanderlzx@gmail.com>
Date:   Tue Sep 12 15:06:42 2017 +0800

    Fix <prevlen> related doc in ziplist.c

    According to the definition of ZIP_BIG_PREVLEN and other related code,
    the guard for a single-byte <prevlen> should be 254 instead of 255.
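
For illustration, here is a minimal self-contained C sketch of that guard. The value 254 matches the ZIP_BIG_PREVLEN mentioned above, but `store_prevlen` is a hypothetical helper written for this example, not the actual ziplist.c encoder (which also normalizes byte order).

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical illustration of the <prevlen> rule: lengths 0..253 fit in a
 * single byte, 254 (ZIP_BIG_PREVLEN) flags the 5-byte form, and 255 is never
 * written here because it marks the end of the ziplist. Sketch only. */
#define ZIP_BIG_PREVLEN 254

/* Encode the previous entry's length into 'p'; returns the bytes written.
 * The 5-byte form stores the length in host byte order for simplicity. */
static unsigned int store_prevlen(unsigned char *p, uint32_t prevlen) {
    if (prevlen < ZIP_BIG_PREVLEN) {
        p[0] = (unsigned char)prevlen;            /* single-byte form */
        return 1;
    }
    p[0] = ZIP_BIG_PREVLEN;                       /* 254 = big prevlen follows */
    memcpy(p + 1, &prevlen, sizeof(prevlen));     /* 4-byte length */
    return 5;
}

int main(void) {
    unsigned char buf[5];
    printf("%u byte(s) for prevlen 253\n", store_prevlen(buf, 253)); /* 1 */
    printf("%u byte(s) for prevlen 254\n", store_prevlen(buf, 254)); /* 5 */
    return 0;
}
```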

commit ebe228d
Author: hanael80 <hanael80@gmail.com>
Date:   Tue Aug 15 09:09:40 2017 +0900

    Fix typo

commit 6b696e6
Author: Matt Robenolt <matt@ydekproductions.com>
Date:   Mon Aug 14 14:50:47 2017 -0700

    Fix typo in LATENCY DOCTOR output

commit a2ec6ae
Author: caosiyang <caosiyang@qiyi.com>
Date:   Tue Aug 15 14:15:16 2017 +0800

    Fix a typo: form => from

commit 3ab7699
Author: caosiyang <caosiyang@qiyi.com>
Date:   Thu Aug 10 18:40:33 2017 +0800

    Fix a typo: replicationFeedSlavesFromMaster() => replicationFeedSlavesFromMasterStream()

commit 72d43ef
Author: caosiyang <caosiyang@qiyi.com>
Date:   Tue Aug 8 15:57:25 2017 +0800

    fix a typo: servewr => server

commit 707c958
Author: Bo Cai <charpty@gmail.com>
Date:   Wed Jul 26 21:49:42 2017 +0800

    redis-cli.c typo: conut -> count.

    Signed-off-by: Bo Cai <charpty@gmail.com>

commit b9385b2
Author: JackDrogon <jack.xsuperman@gmail.com>
Date:   Fri Jun 30 14:22:31 2017 +0800

    Fix some spelling problems

commit 20d9230
Author: akosel <aaronjkosel@gmail.com>
Date:   Sun Jun 4 19:35:13 2017 -0500

    Fix typo

commit b167bfc
Author: Krzysiek Witkowicz <krzysiekwitkowicz@gmail.com>
Date:   Mon May 22 21:32:27 2017 +0100

    Fix #4008 small typo in comment

commit 2b78ac8
Author: Jake Clarkson <jacobwclarkson@gmail.com>
Date:   Wed Apr 26 15:49:50 2017 +0100

    Correct typo in tests/unit/hyperloglog.tcl

commit b0f1cdb
Author: Qi Luo <qiluo-msft@users.noreply.github.com>
Date:   Wed Apr 19 14:25:18 2017 -0700

    Fix typo

commit a90b0f9
Author: charsyam <charsyam@naver.com>
Date:   Thu Mar 16 18:19:53 2017 +0900

    fix typos

commit 8430a79
Author: Richard Hart <richardhart92@gmail.com>
Date:   Mon Mar 13 22:17:41 2017 -0400

    Fixed log message typo in listenToPort.

commit 481a1c2
Author: Vinod Kumar <kumar003vinod@gmail.com>
Date:   Sun Jan 15 23:04:51 2017 +0530

    src/db.c: Correct "save" -> "safe" typo

commit 586b4d3
Author: wangshaonan <wshn13@gmail.com>
Date:   Wed Dec 21 20:28:27 2016 +0800

    Fix typo they->the in helloworld.c

commit c1c4b5e
Author: Jenner <hypxm@qq.com>
Date:   Mon Dec 19 16:39:46 2016 +0800

    fix typo

commit 1ee1a3f
Author: tielei <43289893@qq.com>
Date:   Mon Jul 18 13:52:25 2016 +0800

    fix some comments

commit 11a41fb
Author: Otto Kekäläinen <otto@seravo.fi>
Date:   Sun Jul 3 10:23:55 2016 +0100

    Fix spelling in documentation and comments

commit 5fb5d82
Author: francischan <f1ancis621@gmail.com>
Date:   Tue Jun 28 00:19:33 2016 +0800

    Fix outdated comments about the redis.c file.
    They should now refer to the server.c file.

commit 6b254bc
Author: lmatt-bit <lmatt123n@gmail.com>
Date:   Thu Apr 21 21:45:58 2016 +0800

    Refine the comment of dictRehashMilliseconds func

SLAVECONF->REPLCONF in comment - by andyli029

commit ee9869f
Author: clark.kang <charsyam@naver.com>
Date:   Tue Mar 22 11:09:51 2016 +0900

    fix typos

commit f7b3b11
Author: Harisankar H <harisankarh@gmail.com>
Date:   Wed Mar 9 11:49:42 2016 +0530

    Typo correction: "faield" --> "failed"

commit 3fd40fc
Author: Itamar Haber <itamar@redislabs.com>
Date:   Thu Feb 25 10:31:51 2016 +0200

    Fixes a typo in comments

commit 621c160
Author: Prayag Verma <prayag.verma@gmail.com>
Date:   Mon Feb 1 12:36:20 2016 +0530

    Fix typo in Readme.md

    Spelling mistakes -
    `eviciton` > `eviction`
    `familar` > `familiar`

commit d7d07d6
Author: WonCheol Lee <toctoc21c@gmail.com>
Date:   Wed Dec 30 15:11:34 2015 +0900

    Typo fixed

commit a4dade7
Author: Felix Bünemann <buenemann@louis.info>
Date:   Mon Dec 28 11:02:55 2015 +0100

    [ci skip] Improve supervised upstart config docs

    This mentions that "expect stop" is required for supervised upstart
    to work correctly. See http://upstart.ubuntu.com/cookbook/#expect-stop
    for an explanation.

commit d9caba9
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:30:03 2015 +1100

    README: Remove trailing whitespace

commit 72d42e5
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:29:32 2015 +1100

    README: Fix typo. th => the

commit dd6e957
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:29:20 2015 +1100

    README: Fix typo. familar => familiar

commit 3a12b23
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:28:54 2015 +1100

    README: Fix typo. eviciton => eviction

commit 2d1d03b
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:21:45 2015 +1100

    README: Fix typo. sever => server

commit 3973b06
Author: Itamar Haber <itamar@garantiadata.com>
Date:   Sat Dec 19 17:01:20 2015 +0200

    Typo fix

commit 4f2e460
Author: Steve Gao <fu@2token.com>
Date:   Fri Dec 4 10:22:05 2015 +0800

    Update README - fix typos

commit b21667c
Author: binyan <binbin.yan@nokia.com>
Date:   Wed Dec 2 22:48:37 2015 +0800

    remove redundant color check in sdscatcolor

commit 88894c7
Author: binyan <binbin.yan@nokia.com>
Date:   Wed Dec 2 22:14:42 2015 +0800

    the example output should be HelloWorld

commit 2763470
Author: binyan <binbin.yan@nokia.com>
Date:   Wed Dec 2 17:41:39 2015 +0800

    fix the misspelled word keyevente

    Signed-off-by: binyan <binbin.yan@nokia.com>

commit 0847b3d
Author: Bruno Martins <bscmartins@gmail.com>
Date:   Wed Nov 4 11:37:01 2015 +0000

    typo

commit bbb9e9e
Author: dawedawe <dawedawe@gmx.de>
Date:   Fri Mar 27 00:46:41 2015 +0100

    typo: zimap -> zipmap

commit 5ed297e
Author: Axel Advento <badwolf.bloodseeker.rev@gmail.com>
Date:   Tue Mar 3 15:58:29 2015 +0800

    Fix 'salve' typos to 'slave'

commit edec9d6
Author: LudwikJaniuk <ludvig.janiuk@gmail.com>
Date:   Wed Jun 12 14:12:47 2019 +0200

    Update README.md

    Co-Authored-By: Qix <Qix-@users.noreply.github.com>

commit 692a7af
Author: LudwikJaniuk <ludvig.janiuk@gmail.com>
Date:   Tue May 28 14:32:04 2019 +0200

    grammar

commit d962b0a
Author: Nick Frost <nickfrostatx@gmail.com>
Date:   Wed Jul 20 15:17:12 2016 -0700

    Minor grammar fix

commit 24fff01aaccaf5956973ada8c50ceb1462e211c6 (typos)
Author: Chad Miller <chadm@squareup.com>
Date:   Tue Sep 8 13:46:11 2020 -0400

    Fix faulty comment about operation of unlink()

commit 3cd5c1f3326c52aa552ada7ec797c6bb16452355
Author: Kevin <kevin.xgr@gmail.com>
Date:   Wed Nov 20 00:13:50 2019 +0800

    Fix typo in server.c.

From a83af59 Mon Sep 17 00:00:00 2001
From: wuwo <wuwo@wacai.com>
Date: Fri, 17 Mar 2017 20:37:45 +0800
Subject: [PATCH] falure to failure

From c961896 Mon Sep 17 00:00:00 2001
From: 左懶 <veficos@gmail.com>
Date: Sat, 27 May 2017 15:33:04 +0800
Subject: [PATCH] fix typo

From e600ef2 Mon Sep 17 00:00:00 2001
From: "rui.zou" <rui.zou@yunify.com>
Date: Sat, 30 Sep 2017 12:38:15 +0800
Subject: [PATCH] fix a typo

From c7d07fa Mon Sep 17 00:00:00 2001
From: Alexandre Perrin <alex@kaworu.ch>
Date: Thu, 16 Aug 2018 10:35:31 +0200
Subject: [PATCH] deps README.md typo

From b25cb67 Mon Sep 17 00:00:00 2001
From: Guy Korland <gkorland@gmail.com>
Date: Wed, 26 Sep 2018 10:55:37 +0300
Subject: [PATCH 1/2] fix typos in header

From ad28ca6 Mon Sep 17 00:00:00 2001
From: Guy Korland <gkorland@gmail.com>
Date: Wed, 26 Sep 2018 11:02:36 +0300
Subject: [PATCH 2/2] fix typos

commit 34924cdedd8552466fc22c1168d49236cb7ee915
Author: Adrian Lynch <adi_ady_ade@hotmail.com>
Date:   Sat Apr 4 21:59:15 2015 +0100

    Typos fixed

commit fd2a1e7
Author: Jan <jsteemann@users.noreply.github.com>
Date:   Sat Oct 27 19:13:01 2018 +0200

    Fix typos

commit e14e47c1a234b53b0e103c5f6a1c61481cbcbb02
Author: Andy Lester <andy@petdance.com>
Date:   Fri Aug 2 22:30:07 2019 -0500

    Fix multiple misspellings of "following"

commit 79b948ce2dac6b453fe80995abbcaac04c213d5a
Author: Andy Lester <andy@petdance.com>
Date:   Fri Aug 2 22:24:28 2019 -0500

    Fix misspelling of create-cluster

commit 1fffde52666dc99ab35efbd31071a4c008cb5a71
Author: Andy Lester <andy@petdance.com>
Date:   Wed Jul 31 17:57:56 2019 -0500

    Fix typos

commit 204c9ba9651e9e05fd73936b452b9a30be456cfe
Author: Xiaobo Zhu <xiaobo.zhu@shopee.com>
Date:   Tue Aug 13 22:19:25 2019 +0800

    fix typos

Squashed commit of the following:

commit 1d9aaf8
Author: danmedani <danmedani@gmail.com>
Date:   Sun Aug 2 11:40:26 2015 -0700

README typo fix.

Squashed commit of the following:

commit 32bfa7c
Author: Erik Dubbelboer <erik@dubbelboer.com>
Date:   Mon Jul 6 21:15:08 2015 +0200

Fixed grammar

Squashed commit of the following:

commit b24f69c
Author: Sisir Koppaka <sisir.koppaka@gmail.com>
Date:   Mon Mar 2 22:38:45 2015 -0500

utils/hashtable/rehashing.c: Fix typos

Squashed commit of the following:

commit 4e04082
Author: Erik Dubbelboer <erik@dubbelboer.com>
Date:   Mon Mar 23 08:22:21 2015 +0000

Small config file documentation improvements

Squashed commit of the following:

commit acb8773
Author: ctd1500 <ctd1500@gmail.com>
Date:   Fri May 8 01:52:48 2015 -0700

Typo and grammar fixes in readme

commit 2eb75b6
Author: ctd1500 <ctd1500@gmail.com>
Date:   Fri May 8 01:36:18 2015 -0700

fixed redis.conf comment

Squashed commit of the following:

commit a8249a2
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Date:   Fri Dec 11 11:39:52 2015 +0530

Revise correction of typos.

Squashed commit of the following:

commit 3c02028
Author: zhaojun11 <zhaojun11@jd.com>
Date:   Wed Jan 17 19:05:28 2018 +0800

Fix typos, including two code typos in cluster.c and latency.c

Squashed commit of the following:

commit 9dba47c
Author: q191201771 <191201771@qq.com>
Date:   Sat Jan 4 11:31:04 2020 +0800

fix function listCreate comment in adlist.c

Update src/server.c

commit 2c7c2cb536e78dd211b1ac6f7bda00f0f54faaeb
Author: charpty <charpty@gmail.com>
Date:   Tue May 1 23:16:59 2018 +0800

    server.c typo: modules system dictionary type comment

    Signed-off-by: charpty <charpty@gmail.com>

commit a8395323fb63cb59cb3591cb0f0c8edb7c29a680
Author: Itamar Haber <itamar@redislabs.com>
Date:   Sun May 6 00:25:18 2018 +0300

    Updates test_helper.tcl's help with undocumented options

    Specifically:

    * Host
    * Port
    * Client

commit bde6f9ced15755cd6407b4af7d601b030f36d60b
Author: wxisme <850885154@qq.com>
Date:   Wed Aug 8 15:19:19 2018 +0800

    fix comments in deps files

commit 3172474ba991532ab799ee1873439f3402412331
Author: wxisme <850885154@qq.com>
Date:   Wed Aug 8 14:33:49 2018 +0800

    fix some comments

commit 01b6f2b6858b5cf2ce4ad5092d2c746e755f53f0
Author: Thor Juhasz <thor@juhasz.pro>
Date:   Sun Nov 18 14:37:41 2018 +0100

    Minor fixes to comments

    Found some parts a little unclear on a first read, which prompted me to take a closer look at the file and fix some minor things I noticed.
    Fixing minor typos and grammar. There are no changes to configuration options.
    These changes are only meant to help the user better understand the explanations of the various configuration options.
2020-09-10 13:43:38 +03:00
Itamar Haber 5b0a06af48
Expands lazyfree's effort estimate to include Streams (#5794)
Otherwise, it is treated as a single allocation and freed synchronously. The following logic is used for estimating the effort in constant-ish time complexity:

1. Check the number of nodes.
2. Add an allocation for each consumer group registered inside the stream.
3. Check the number of PELs in the first CG, and then add this count times the number of CGs.
4. Check the number of consumers in the first CG, and then add this count times the number of CGs.
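
As a rough illustration of that arithmetic, a minimal C sketch follows. The `stream_counts` struct and `stream_free_effort` helper are hypothetical stand-ins for the real stream and rax structures, and the numbers in `main` are made up.

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical summary of a stream, holding only the counts the estimate
 * needs; these fields are stand-ins, not Redis's real stream/rax types. */
typedef struct {
    size_t nodes;               /* macro nodes holding entries             */
    size_t cgroups;             /* registered consumer groups              */
    size_t first_cg_pel;        /* PEL entries in the first consumer group */
    size_t first_cg_consumers;  /* consumers in the first consumer group   */
} stream_counts;

/* Constant-time effort estimate following the four steps above: every node
 * and every consumer group counts as one allocation, and the first group's
 * PEL and consumer counts are extrapolated to all groups. */
static size_t stream_free_effort(const stream_counts *s) {
    size_t effort = s->nodes;
    effort += s->cgroups;
    effort += s->first_cg_pel * s->cgroups;
    effort += s->first_cg_consumers * s->cgroups;
    return effort;
}

int main(void) {
    stream_counts s = { .nodes = 1000, .cgroups = 3,
                        .first_cg_pel = 50, .first_cg_consumers = 4 };
    printf("estimated effort: %zu\n", stream_free_effort(&s));
    return 0;
}
```

The point of the estimate is that a stream whose computed effort crosses the lazy-free threshold can be handed to a background thread instead of being freed synchronously by the main thread, as described above.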
2020-08-25 15:58:50 +03:00
antirez 30adc62232 RDB: load files faster avoiding useless free+realloc.
Reloading of the RDB generated by

    DEBUG POPULATE 5000000
    SAVE

is now 25% faster.

This commit also prepares for more flexibility when loading data from
the RDB, since we no longer use dbAdd() but can control exactly how
things are added to the database.
2020-04-09 10:24:46 +02:00
zhaozhao.zz fddeeae724 refactor dbOverwrite to make lazyfree work 2018-07-31 12:07:57 +08:00
Jack Drogon 93238575f7 Fix typo 2018-07-03 18:19:46 +02:00
antirez c45366be0a Put more details in the comment introduced by #4601. 2018-01-15 12:50:08 +01:00
zhaozhao.zz 0517ab8397 lazyfree: fix memory leak for lazyfree-lazy-server-del 2018-01-15 00:45:37 +08:00
antirez 2a51bac44e Simplify atomicvar.h usage by having the mutex name implicit. 2017-05-04 17:01:00 +02:00
antirez 52bc74f221 Lazyfree: fix lazyfreeGetPendingObjectsCount() race reading counter. 2017-05-04 10:35:40 +02:00
antirez 1409c545da Cluster: hash slots tracking using a radix tree. 2017-03-27 16:37:22 +02:00
antirez a636aeac07 Apply the new dictUnlink() where possible.
Optimizations suggested and originally implemented by @oranagra.
Re-applied by @antirez using the modified API.
2016-09-14 16:37:53 +02:00
root 28e80bf96d fix linux compile bug 2016-01-13 00:49:28 -08:00
antirez 1f26a9468f Lazyfree: pending objects count in INFO output. 2015-10-01 13:02:26 +02:00
antirez c69c6c80fb Lazyfree: ability to free whole DBs in background. 2015-10-01 13:02:26 +02:00
antirez b08c36c5f2 Lazyfree: keep count of objects to free. 2015-10-01 13:02:25 +02:00
antirez 7af4eeb745 Lazyfree: incremental removed, only threaded survived. 2015-10-01 13:02:25 +02:00
antirez 9253d85073 Threaded lazyfree WIP #1. 2015-10-01 13:02:25 +02:00
antirez 0c05436cef Lazyfree: a first implementation of non blocking DEL. 2015-10-01 13:00:19 +02:00