Improve test suite to handle external servers better. (#9033)

This commit revives and improves the ability to run the test suite against
external servers, instead of launching and managing `redis-server` processes as
part of the test fixture.

This capability existed in the past, using the `--host` and `--port` options.
However, it was quite limited and mostly useful when running specific tests.
Attempting to run larger chunks of the test suite exposed many issues:

* Many tests depend on being able to start and control `redis-server`
themselves, and there is no clear distinction between tests that are compatible
with an external server and those that are not.
* Cluster mode is not supported (resulting in `CROSSSLOT` errors).

This PR cleans up many things and makes it possible to run the entire test suite
against an external server. It also provides finer-grained controls for cases
where the external server supports only a subset of the Redis commands, a
limited number of databases, cluster mode, etc.

The tests directory now contains a `README.md` file that describes how this
works.

This commit also includes additional cleanups and fixes:

* Tests can now be tagged.
* Tag-based selection is now unified across `start_server`, `tags` and `test`.
* More information is provided about skipped or ignored tests.
* Repeated patterns in tests have been extracted to common procedures, both at a
  global level and on a per-test file basis.
* Cleaned up some cases where test setup depended on a previous test having
  executed (a major anti-pattern that repeats itself in many places).
* Cleaned up some cases where test teardown was not part of a test (in the
  future we should have dedicated teardown code that executes even when tests
  fail).
* Fixed some tests that were flaky when running against external servers.
Yossi Gottlieb 2021-06-09 15:13:24 +03:00 committed by GitHub
parent c396fd91a0
commit 8a86bca5ed
66 changed files with 1648 additions and 1358 deletions

@@ -15,7 +15,7 @@ jobs:
     - name: test
       run: |
         sudo apt-get install tcl8.6 tclx
-        ./runtest --verbose
+        ./runtest --verbose --tags -slow
     - name: module api test
       run: ./runtest-moduleapi --verbose

.github/workflows/external.yml (new file, 42 lines)

@ -0,0 +1,42 @@
name: External Server Tests
on:
pull_request:
push:
schedule:
- cron: '0 0 * * *'
jobs:
test-external-standalone:
runs-on: ubuntu-latest
timeout-minutes: 14400
steps:
- uses: actions/checkout@v2
- name: Build
run: make REDIS_CFLAGS=-Werror
- name: Start redis-server
run: ./src/redis-server --daemonize yes
- name: Run external test
run: |
./runtest \
--host 127.0.0.1 --port 6379 \
--tags -slow
test-external-cluster:
runs-on: ubuntu-latest
timeout-minutes: 14400
steps:
- uses: actions/checkout@v2
- name: Build
run: make REDIS_CFLAGS=-Werror
- name: Start redis-server
run: ./src/redis-server --cluster-enabled yes --daemonize yes
- name: Create a single node cluster
run: ./src/redis-cli cluster addslots $(for slot in {0..16383}; do echo $slot; done); sleep 5
- name: Run external test
run: |
./runtest \
--host 127.0.0.1 --port 6379 \
--cluster-mode \
--tags -slow

tests/README.md (new file, 57 lines)

@@ -0,0 +1,57 @@
Redis Test Suite
================

The normal execution mode of the test suite involves starting and manipulating
local `redis-server` instances, inspecting process state, log files, etc.

The test suite also supports execution against an external server, which is
enabled using the `--host` and `--port` parameters. When executing against an
external server, tests tagged `external:skip` are skipped.

There are additional runtime options that can further adjust the test suite to
match different external server configurations:

| Option               | Impact                                                   |
| -------------------- | -------------------------------------------------------- |
| `--singledb`         | Only use database 0, don't assume others are supported.  |
| `--ignore-encoding`  | Skip all checks for specific encodings.                  |
| `--ignore-digest`    | Skip key value digest validations.                       |
| `--cluster-mode`     | Run in strict Redis Cluster compatibility mode.          |
Tags
----

Tags are applied to tests to classify them according to the subsystem they test,
but also to indicate compatibility with different run modes and required
capabilities.

Tags can be applied in different context levels:

* `start_server` context
* `tags` context that bundles several tests together
* A single test context.
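As an illustrative sketch of the three levels (the test names and bodies below are hypothetical, not taken from the suite; only the tagging mechanics reflect the framework):

```tcl
# start_server level: every test in the block inherits these tags.
start_server {tags {"repl" "external:skip"}} {

    # tags level: bundle several tests under additional tags.
    tags {"needs:debug"} {
        test "Hypothetical bundled test" {
            r debug sleep 0
        }
    }

    # Single test level: per-test tags go in the fourth argument,
    # after the expected-result pattern.
    test "Hypothetical tagged test" {
        r ping
    } {} {needs:repl}
}
```

Tags from the enclosing contexts and the test itself are concatenated and checked together by `tags_acceptable` when deciding whether to run or skip the test.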
The following compatibility and capability tags are currently used:

| Tag                        | Indicates |
| -------------------------- | --------- |
| `external:skip`            | Not compatible with external servers. |
| `cluster:skip`             | Not compatible with `--cluster-mode`. |
| `needs:repl`               | Uses replication and needs to be able to `SYNC` from the server. |
| `needs:debug`              | Uses the `DEBUG` command or other debugging focused commands (like `OBJECT`). |
| `needs:pfdebug`            | Uses the `PFDEBUG` command. |
| `needs:config-maxmemory`   | Uses `CONFIG SET` to manipulate memory limit, eviction policies, etc. |
| `needs:config-resetstat`   | Uses `CONFIG RESETSTAT` to reset statistics. |
| `needs:reset`              | Uses `RESET` to reset client connections. |
| `needs:save`               | Uses `SAVE` to create an RDB file. |
When using an external server (`--host` and `--port`), filtering using the
`external:skip` tag is done automatically.

When using `--cluster-mode`, filtering using the `cluster:skip` tag is done
automatically.

In addition, it is possible to specify additional configuration. For example, to
run tests on a server that does not permit `SYNC`, use:

    ./runtest --host <host> --port <port> --tags -needs:repl
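These options can be combined. For example, a hypothetical invocation (host and port are placeholders) for an external cluster that should also skip slow and replication-dependent tests:

```sh
# Placeholders: substitute the real host/port of the external server.
./runtest --host <host> --port <port> \
    --cluster-mode --ignore-digest \
    --tags "-slow -needs:repl"
```

Note that `--cluster-mode` also implies `--singledb`, as can be seen in the option parsing in `test_helper.tcl`.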

@@ -22,7 +22,7 @@ proc start_server_aof {overrides code} {
     kill_server $srv
 }

-tags {"aof"} {
+tags {"aof external:skip"} {
     ## Server can start when aof-load-truncated is set to yes and AOF
     ## is truncated, with an incomplete MULTI block.
     create_aof {

@@ -11,7 +11,7 @@ proc stop_bg_block_op {handle} {
     catch {exec /bin/kill -9 $handle}
 }

-start_server {tags {"repl"}} {
+start_server {tags {"repl" "external:skip"}} {
     start_server {} {
         set master [srv -1 client]
         set master_host [srv -1 host]

@@ -1,3 +1,5 @@
+tags {"external:skip"} {
+
 # Copy RDB with zipmap encoded hash to server path
 set server_path [tmpdir "server.convert-zipmap-hash-on-load"]
@@ -33,3 +35,5 @@ start_server [list overrides [list "dir" $server_path "dbfilename" "hash-zipmap.
         assert_match {v1 v2} [r hmget hash f1 f2]
     }
 }
+
+}

@@ -1,6 +1,6 @@
 # tests of corrupt ziplist payload with valid CRC

-tags {"dump" "corruption"} {
+tags {"dump" "corruption" "external:skip"} {
     # catch sigterm so that in case one of the random command hangs the test,
     # usually due to redis not putting a response in the output buffers,

@@ -6,7 +6,7 @@
 # * some tests set sanitize-dump-payload to no and some to yes, depending on
 #   what we want to test

-tags {"dump" "corruption"} {
+tags {"dump" "corruption" "external:skip"} {
 set corrupt_payload_7445 "\x0E\x01\x1D\x1D\x00\x00\x00\x16\x00\x00\x00\x03\x00\x00\x04\x43\x43\x43\x43\x06\x04\x42\x42\x42\x42\x06\x3F\x41\x41\x41\x41\xFF\x09\x00\x88\xA5\xCA\xA8\xC5\x41\xF4\x35"

@@ -1,4 +1,4 @@
-start_server {tags {"failover"}} {
+start_server {tags {"failover external:skip"}} {
 start_server {} {
 start_server {} {
     set node_0 [srv 0 client]

@@ -1,3 +1,5 @@
+tags {"external:skip"} {
+
 set system_name [string tolower [exec uname -s]]
 set system_supported 0
@@ -49,3 +51,5 @@ if {$system_supported} {
     }
 }
+
+}

@@ -5,7 +5,7 @@
 # We keep these tests just because they reproduce edge cases in the replication
 # logic in hope they'll be able to spot some problem in the future.

-start_server {tags {"psync2"}} {
+start_server {tags {"psync2 external:skip"}} {
 start_server {} {
     # Config
     set debug_msg 0 ; # Enable additional debug messages
@@ -74,7 +74,7 @@ start_server {} {
 }}

-start_server {tags {"psync2"}} {
+start_server {tags {"psync2 external:skip"}} {
 start_server {} {
 start_server {} {
 start_server {} {
@@ -180,7 +180,7 @@ start_server {} {
     }
 }}}}}

-start_server {tags {"psync2"}} {
+start_server {tags {"psync2 external:skip"}} {
 start_server {} {
 start_server {} {

@@ -4,7 +4,7 @@
 # redis-benchmark. At the end we check that the data is the same
 # everywhere.

-start_server {tags {"psync2"}} {
+start_server {tags {"psync2 external:skip"}} {
 start_server {} {
 start_server {} {
     # Config

@@ -70,7 +70,7 @@ proc show_cluster_status {} {
     }
 }

-start_server {tags {"psync2"}} {
+start_server {tags {"psync2 external:skip"}} {
 start_server {} {
 start_server {} {
 start_server {} {

@@ -1,4 +1,4 @@
-tags {"rdb"} {
+tags {"rdb external:skip"} {
     set server_path [tmpdir "server.rdb-encoding-test"]

@@ -5,7 +5,7 @@ proc cmdstat {cmd} {
     return [cmdrstat $cmd r]
 }

-start_server {tags {"benchmark network"}} {
+start_server {tags {"benchmark network external:skip"}} {
 start_server {} {
     set master_host [srv 0 host]
     set master_port [srv 0 port]

@@ -1,7 +1,16 @@
 source tests/support/cli.tcl

+if {$::singledb} {
+    set ::dbnum 0
+} else {
+    set ::dbnum 9
+}
+
 start_server {tags {"cli"}} {
-    proc open_cli {{opts "-n 9"} {infile ""}} {
+    proc open_cli {{opts ""} {infile ""}} {
+        if { $opts == "" } {
+            set opts "-n $::dbnum"
+        }
         set ::env(TERM) dumb
         set cmdline [rediscli [srv host] [srv port] $opts]
         if {$infile ne ""} {
@@ -65,7 +74,7 @@ start_server {tags {"cli"}} {
     }

     proc _run_cli {opts args} {
-        set cmd [rediscli [srv host] [srv port] [list -n 9 {*}$args]]
+        set cmd [rediscli [srv host] [srv port] [list -n $::dbnum {*}$args]]
         foreach {key value} $opts {
             if {$key eq "pipe"} {
                 set cmd "sh -c \"$value | $cmd\""
@@ -269,7 +278,7 @@ start_server {tags {"cli"}} {
         assert_match "OK" [r config set repl-diskless-sync yes]
         assert_match "OK" [r config set repl-diskless-sync-delay 0]
         test_redis_cli_rdb_dump
-    }
+    } {} {needs:repl}

     test "Scan mode" {
         r flushdb
@@ -302,13 +311,18 @@ start_server {tags {"cli"}} {
         }
         close_cli $fd
-    }
+    } {} {needs:repl}

     test "Piping raw protocol" {
         set cmds [tmpfile "cli_cmds"]
         set cmds_fd [open $cmds "w"]
-        puts $cmds_fd [formatCommand select 9]
+        set cmds_count 2101
+        if {!$::singledb} {
+            puts $cmds_fd [formatCommand select 9]
+            incr cmds_count
+        }
         puts $cmds_fd [formatCommand del test-counter]
         for {set i 0} {$i < 1000} {incr i} {
@@ -326,7 +340,7 @@ start_server {tags {"cli"}} {
         set output [read_cli $cli_fd]
         assert_equal {1000} [r get test-counter]
-        assert_match {*All data transferred*errors: 0*replies: 2102*} $output
+        assert_match "*All data transferred*errors: 0*replies: ${cmds_count}*" $output
         file delete $cmds
     }

@@ -1,4 +1,4 @@
-start_server {tags {"repl"}} {
+start_server {tags {"repl external:skip"}} {
     start_server {} {
         test {First server should have role slave after SLAVEOF} {
             r -1 slaveof [srv 0 host] [srv 0 port]

@@ -1,4 +1,4 @@
-start_server {tags {"repl"}} {
+start_server {tags {"repl external:skip"}} {
     start_server {} {
         test {First server should have role slave after SLAVEOF} {
             r -1 slaveof [srv 0 host] [srv 0 port]
@@ -45,7 +45,7 @@ start_server {tags {"repl"}} {
     }
 }

-start_server {tags {"repl"}} {
+start_server {tags {"repl external:skip"}} {
     start_server {} {
         test {First server should have role slave after SLAVEOF} {
             r -1 slaveof [srv 0 host] [srv 0 port]

@@ -1,4 +1,4 @@
-start_server {tags {"repl network"}} {
+start_server {tags {"repl network external:skip"}} {
     start_server {} {
         set master [srv -1 client]
@@ -39,7 +39,7 @@ start_server {tags {"repl network"}} {
     }
 }

-start_server {tags {"repl"}} {
+start_server {tags {"repl external:skip"}} {
     start_server {} {
         set master [srv -1 client]
         set master_host [srv -1 host]
@@ -85,7 +85,7 @@ start_server {tags {"repl"}} {
     }
 }

-start_server {tags {"repl"}} {
+start_server {tags {"repl external:skip"}} {
     start_server {} {
         set master [srv -1 client]
         set master_host [srv -1 host]

@@ -117,6 +117,7 @@ proc test_psync {descr duration backlog_size backlog_ttl delay cond mdl sdl reco
     }
 }

+tags {"external:skip"} {
 foreach mdl {no yes} {
     foreach sdl {disabled swapdb} {
         test_psync {no reconnection, just sync} 6 1000000 3600 0 {
@@ -139,3 +140,4 @@ foreach mdl {no yes} {
         } $mdl $sdl 1
     }
 }
+}

@@ -5,7 +5,7 @@ proc log_file_matches {log pattern} {
     string match $pattern $content
 }

-start_server {tags {"repl network"}} {
+start_server {tags {"repl network external:skip"}} {
     set slave [srv 0 client]
     set slave_host [srv 0 host]
     set slave_port [srv 0 port]
@@ -51,7 +51,7 @@ start_server {tags {"repl network"}} {
     }
 }

-start_server {tags {"repl"}} {
+start_server {tags {"repl external:skip"}} {
     set A [srv 0 client]
     set A_host [srv 0 host]
     set A_port [srv 0 port]
@@ -187,7 +187,7 @@ start_server {tags {"repl"}} {
     }
 }

-start_server {tags {"repl"}} {
+start_server {tags {"repl external:skip"}} {
     r set mykey foo

     start_server {} {
@@ -252,7 +252,7 @@ start_server {tags {"repl"}} {
 foreach mdl {no yes} {
     foreach sdl {disabled swapdb} {
-        start_server {tags {"repl"}} {
+        start_server {tags {"repl external:skip"}} {
             set master [srv 0 client]
             $master config set repl-diskless-sync $mdl
             $master config set repl-diskless-sync-delay 1
@@ -340,7 +340,7 @@ foreach mdl {no yes} {
     }
 }

-start_server {tags {"repl"}} {
+start_server {tags {"repl external:skip"}} {
     set master [srv 0 client]
     set master_host [srv 0 host]
     set master_port [srv 0 port]
@@ -448,7 +448,7 @@ test {slave fails full sync and diskless load swapdb recovers it} {
             assert_equal [$slave dbsize] 2000
         }
     }
-}
+} {} {external:skip}

 test {diskless loading short read} {
     start_server {tags {"repl"}} {
@@ -547,7 +547,7 @@ test {diskless loading short read} {
             $master config set rdb-key-save-delay 0
         }
     }
-}
+} {} {external:skip}

 # get current stime and utime metrics for a thread (since its creation)
 proc get_cpu_metrics { statfile } {
@@ -581,7 +581,7 @@ proc compute_cpu_usage {start end} {

 # test diskless rdb pipe with multiple replicas, which may drop half way
-start_server {tags {"repl"}} {
+start_server {tags {"repl external:skip"}} {
     set master [srv 0 client]
     $master config set repl-diskless-sync yes
     $master config set repl-diskless-sync-delay 1
@@ -769,7 +769,7 @@ test "diskless replication child being killed is collected" {
             }
         }
     }
-}
+} {} {external:skip}

 test "diskless replication read pipe cleanup" {
     # In diskless replication, we create a read pipe for the RDB, between the child and the parent.
@@ -808,7 +808,7 @@ test "diskless replication read pipe cleanup" {
             $master ping
         }
     }
-}
+} {} {external:skip}

 test {replicaof right after disconnection} {
     # this is a rare race condition that was reproduced sporadically by the psync2 unit.
@@ -860,7 +860,7 @@ test {replicaof right after disconnection} {
         }
     }
 }
-}
+} {} {external:skip}

 test {Kill rdb child process if its dumping RDB is not useful} {
     start_server {tags {"repl"}} {
@@ -925,4 +925,4 @@ test {Kill rdb child process if its dumping RDB is not useful} {
         }
     }
 }
-}
+} {} {external:skip}

@@ -158,14 +158,14 @@ proc server_is_up {host port retrynum} {
 # there must be some intersection. If ::denytags are used, no intersection
 # is allowed. Returns 1 if tags are acceptable or 0 otherwise, in which
 # case err_return names a return variable for the message to be logged.
-proc tags_acceptable {err_return} {
+proc tags_acceptable {tags err_return} {
     upvar $err_return err

     # If tags are whitelisted, make sure there's match
     if {[llength $::allowtags] > 0} {
         set matched 0
         foreach tag $::allowtags {
-            if {[lsearch $::tags $tag] >= 0} {
+            if {[lsearch $tags $tag] >= 0} {
                 incr matched
             }
         }
@@ -176,12 +176,27 @@ proc tags_acceptable {err_return} {
     }

     foreach tag $::denytags {
-        if {[lsearch $::tags $tag] >= 0} {
+        if {[lsearch $tags $tag] >= 0} {
             set err "Tag: $tag denied"
             return 0
         }
     }

+    if {$::external && [lsearch $tags "external:skip"] >= 0} {
+        set err "Not supported on external server"
+        return 0
+    }
+
+    if {$::singledb && [lsearch $tags "singledb:skip"] >= 0} {
+        set err "Not supported on singledb"
+        return 0
+    }
+
+    if {$::cluster_mode && [lsearch $tags "cluster:skip"] >= 0} {
+        set err "Not supported in cluster mode"
+        return 0
+    }
+
     return 1
 }
@@ -191,7 +206,7 @@ proc tags {tags code} {
     # we want to get rid of the quotes in order to have a proper list
     set tags [string map { \" "" } $tags]
     set ::tags [concat $::tags $tags]
-    if {![tags_acceptable err]} {
+    if {![tags_acceptable $::tags err]} {
         incr ::num_aborted
         send_data_packet $::test_server_fd ignore $err
         set ::tags [lrange $::tags 0 end-[llength $tags]]
@@ -268,6 +283,68 @@ proc dump_server_log {srv} {
     puts "===== End of server log (pid $pid) =====\n"
 }

+proc run_external_server_test {code overrides} {
+    set srv {}
+    dict set srv "host" $::host
+    dict set srv "port" $::port
+    set client [redis $::host $::port 0 $::tls]
+    dict set srv "client" $client
+    if {!$::singledb} {
+        $client select 9
+    }
+
+    set config {}
+    dict set config "port" $::port
+    dict set srv "config" $config
+
+    # append the server to the stack
+    lappend ::servers $srv
+
+    if {[llength $::servers] > 1} {
+        if {$::verbose} {
+            puts "Notice: nested start_server statements in external server mode, test must be aware of that!"
+        }
+    }
+
+    r flushall
+
+    # store overrides
+    set saved_config {}
+    foreach {param val} $overrides {
+        dict set saved_config $param [lindex [r config get $param] 1]
+        r config set $param $val
+
+        # If we enable appendonly, wait for the rewrite to complete. This is
+        # required for tests that begin with a bg* command which will fail if
+        # the rewriteaof operation is not completed at this point.
+        if {$param == "appendonly" && $val == "yes"} {
+            waitForBgrewriteaof r
+        }
+    }
+
+    if {[catch {set retval [uplevel 2 $code]} error]} {
+        if {$::durable} {
+            set msg [string range $error 10 end]
+            lappend details $msg
+            lappend details $::errorInfo
+            lappend ::tests_failed $details
+
+            incr ::num_failed
+            send_data_packet $::test_server_fd err [join $details "\n"]
+        } else {
+            # Re-raise, let handler up the stack take care of this.
+            error $error $::errorInfo
+        }
+    }
+
+    # restore overrides
+    dict for {param val} $saved_config {
+        r config set $param $val
+    }
+
+    lpop ::servers
+}
+
 proc start_server {options {code undefined}} {
     # setup defaults
     set baseconfig "default.conf"
@@ -304,7 +381,7 @@ proc start_server {options {code undefined}} {
     }

     # We skip unwanted tags
-    if {![tags_acceptable err]} {
+    if {![tags_acceptable $::tags err]} {
         incr ::num_aborted
         send_data_packet $::test_server_fd ignore $err
         set ::tags [lrange $::tags 0 end-[llength $tags]]
@@ -314,36 +391,8 @@ proc start_server {options {code undefined}} {
     # If we are running against an external server, we just push the
     # host/port pair in the stack the first time
     if {$::external} {
-        if {[llength $::servers] == 0} {
-            set srv {}
-            dict set srv "host" $::host
-            dict set srv "port" $::port
-            set client [redis $::host $::port 0 $::tls]
-            dict set srv "client" $client
-            $client select 9
-
-            set config {}
-            dict set config "port" $::port
-            dict set srv "config" $config
-
-            # append the server to the stack
-            lappend ::servers $srv
-        }
-        r flushall
-        if {[catch {set retval [uplevel 1 $code]} error]} {
-            if {$::durable} {
-                set msg [string range $error 10 end]
-                lappend details $msg
-                lappend details $::errorInfo
-                lappend ::tests_failed $details
-
-                incr ::num_failed
-                send_data_packet $::test_server_fd err [join $details "\n"]
-            } else {
-                # Re-raise, let handler up the stack take care of this.
-                error $error $::errorInfo
-            }
-        }
+        run_external_server_test $code $overrides
+
         set ::tags [lrange $::tags 0 end-[llength $tags]]
         return
     }

@@ -85,6 +85,9 @@ proc assert_error {pattern code} {
 }

 proc assert_encoding {enc key} {
+    if {$::ignoreencoding} {
+        return
+    }
     set dbg [r debug object $key]
     assert_match "* encoding:$enc *" $dbg
 }
@@ -112,7 +115,7 @@ proc wait_for_condition {maxtries delay e _else_ elsescript} {
     }
 }

-proc test {name code {okpattern undefined} {options undefined}} {
+proc test {name code {okpattern undefined} {tags {}}} {
     # abort if test name in skiptests
     if {[lsearch $::skiptests $name] >= 0} {
         incr ::num_skipped
@@ -127,20 +130,11 @@ proc test {name code {okpattern undefined} {options undefined}} {
         return
     }

-    # check if tagged with at least 1 tag to allow when there *is* a list
-    # of tags to allow, because default policy is to run everything
-    if {[llength $::allowtags] > 0} {
-        set matched 0
-        foreach tag $::allowtags {
-            if {[lsearch $::tags $tag] >= 0} {
-                incr matched
-            }
-        }
-        if {$matched < 1} {
-            incr ::num_aborted
-            send_data_packet $::test_server_fd ignore $name
-            return
-        }
+    set tags [concat $::tags $tags]
+    if {![tags_acceptable $tags err]} {
+        incr ::num_aborted
+        send_data_packet $::test_server_fd ignore "$name: $err"
+        return
     }

     incr ::num_tests

@@ -250,13 +250,19 @@ proc findKeyWithType {r type} {
 }

 proc createComplexDataset {r ops {opt {}}} {
+    set useexpire [expr {[lsearch -exact $opt useexpire] != -1}]
+    if {[lsearch -exact $opt usetag] != -1} {
+        set tag "{t}"
+    } else {
+        set tag ""
+    }
     for {set j 0} {$j < $ops} {incr j} {
-        set k [randomKey]
-        set k2 [randomKey]
+        set k [randomKey]$tag
+        set k2 [randomKey]$tag
         set f [randomValue]
         set v [randomValue]
-        if {[lsearch -exact $opt useexpire] != -1} {
+        if {$useexpire} {
             if {rand() < 0.1} {
                 {*}$r expire [randomKey] [randomInt 2]
             }
@@ -353,8 +359,15 @@ proc formatCommand {args} {

 proc csvdump r {
     set o {}
-    for {set db 0} {$db < 16} {incr db} {
-        {*}$r select $db
+    if {$::singledb} {
+        set maxdb 1
+    } else {
+        set maxdb 16
+    }
+    for {set db 0} {$db < $maxdb} {incr db} {
+        if {!$::singledb} {
+            {*}$r select $db
+        }
         foreach k [lsort [{*}$r keys *]] {
             set type [{*}$r type $k]
             append o [csvstring $db] , [csvstring $k] , [csvstring $type] ,
@@ -396,7 +409,9 @@ proc csvdump r {
             }
         }
     }
-    {*}$r select 9
+    if {!$::singledb} {
+        {*}$r select 9
+    }
     return $o
 }
@@ -540,7 +555,7 @@ proc stop_bg_complex_data {handle} {
     catch {exec /bin/kill -9 $handle}
 }

-proc populate {num prefix size} {
+proc populate {num {prefix key:} {size 3}} {
     set rd [redis_deferring_client]
     for {set j 0} {$j < $num} {incr j} {
         $rd set $prefix$j [string repeat A $size]
@@ -777,6 +792,30 @@ proc punsubscribe {client {channels {}}} {
     consume_subscribe_messages $client punsubscribe $channels
 }

+proc debug_digest_value {key} {
+    if {!$::ignoredigest} {
+        r debug digest-value $key
+    } else {
+        return "dummy-digest-value"
+    }
+}
+
+proc wait_for_blocked_client {} {
+    wait_for_condition 50 100 {
+        [s blocked_clients] ne 0
+    } else {
+        fail "no blocked clients"
+    }
+}
+
+proc wait_for_blocked_clients_count {count {maxtries 100} {delay 10}} {
+    wait_for_condition $maxtries $delay {
+        [s blocked_clients] == $count
+    } else {
+        fail "Timeout waiting for blocked clients"
+    }
+}
+
 proc read_from_aof {fp} {
     # Input fp is a blocking binary file descriptor of an opened AOF file.
     if {[gets $fp count] == -1} return ""
@@ -802,3 +841,27 @@ proc assert_aof_content {aof_path patterns} {
         assert_match [lindex $patterns $j] [read_from_aof $fp]
     }
 }
+
+proc config_set {param value {options {}}} {
+    set mayfail 0
+    foreach option $options {
+        switch $option {
+            "mayfail" {
+                set mayfail 1
+            }
+            default {
+                error "Unknown option $option"
+            }
+        }
+    }
+
+    if {[catch {r config set $param $value} err]} {
+        if {!$mayfail} {
+            error $err
+        } else {
+            if {$::verbose} {
+                puts "Ignoring CONFIG SET $param $value failure: $err"
+            }
+        }
+    }
+}

View File

@ -114,6 +114,10 @@ set ::stop_on_failure 0
set ::dump_logs 0 set ::dump_logs 0
set ::loop 0 set ::loop 0
set ::tlsdir "tests/tls" set ::tlsdir "tests/tls"
set ::singledb 0
set ::cluster_mode 0
set ::ignoreencoding 0
set ::ignoredigest 0
# Set to 1 when we are running in client mode. The Redis test uses a # Set to 1 when we are running in client mode. The Redis test uses a
# server-client model to run tests simultaneously. The server instance # server-client model to run tests simultaneously. The server instance
@ -191,7 +195,7 @@ proc reconnect {args} {
dict set srv "client" $client dict set srv "client" $client
# select the right db when we don't have to authenticate # select the right db when we don't have to authenticate
if {![dict exists $config "requirepass"]} { if {![dict exists $config "requirepass"] && !$::singledb} {
$client select 9 $client select 9
} }
@ -210,8 +214,14 @@ proc redis_deferring_client {args} {
set client [redis [srv $level "host"] [srv $level "port"] 1 $::tls] set client [redis [srv $level "host"] [srv $level "port"] 1 $::tls]
# select the right db and read the response (OK) # select the right db and read the response (OK)
$client select 9 if {!$::singledb} {
$client read $client select 9
$client read
} else {
# For timing/symmetry with the above select
$client ping
$client read
}
return $client return $client
} }
@ -225,8 +235,13 @@ proc redis_client {args} {
# create client that defers reading reply # create client that defers reading reply
set client [redis [srv $level "host"] [srv $level "port"] 0 $::tls] set client [redis [srv $level "host"] [srv $level "port"] 0 $::tls]
# select the right db and read the response (OK) # select the right db and read the response (OK), or at least ping
$client select 9 # the server if we're in a singledb mode.
if {$::singledb} {
$client ping
} else {
$client select 9
}
return $client return $client
} }
@ -552,6 +567,7 @@ proc print_help_screen {} {
"--config <k> <v> Extra config file argument." "--config <k> <v> Extra config file argument."
"--skipfile <file> Name of a file containing test names that should be skipped (one per line)." "--skipfile <file> Name of a file containing test names that should be skipped (one per line)."
"--skiptest <name> Name of a file containing test names that should be skipped (one per line)." "--skiptest <name> Name of a file containing test names that should be skipped (one per line)."
"--tags <tags> Run only tests having specified tags or not having '-' prefixed tags."
"--dont-clean Don't delete redis log files after the run." "--dont-clean Don't delete redis log files after the run."
"--no-latency Skip latency measurements and validation by some tests." "--no-latency Skip latency measurements and validation by some tests."
"--stop Blocks once the first test fails." "--stop Blocks once the first test fails."
@ -563,6 +579,10 @@ proc print_help_screen {} {
"--port <port> TCP port to use against external host." "--port <port> TCP port to use against external host."
"--baseport <port> Initial port number for spawned redis servers." "--baseport <port> Initial port number for spawned redis servers."
"--portcount <num> Port range for spawned redis servers." "--portcount <num> Port range for spawned redis servers."
"--singledb Use a single database, avoid SELECT."
"--cluster-mode Run tests in cluster protocol compatible mode."
"--ignore-encoding Don't validate object encoding."
"--ignore-digest Don't use debug digest validations."
"--help Print this help screen." "--help Print this help screen."
} "\n"] } "\n"]
} }
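The unified tag selection applied by these options works roughly as follows (a sketch of the rule, not the suite's Tcl code; the function name is illustrative): a test is rejected if it carries any denied ('-'-prefixed) tag, and when positive tags are requested it must carry at least one of them.

```python
def tags_acceptable(test_tags, allow, deny):
    # Sketch of the selection rule: any denied tag rejects the test; if an
    # allow-list was given, at least one of its tags must be present.
    if any(tag in test_tags for tag in deny):
        return False
    if allow and not any(tag in test_tags for tag in allow):
        return False
    return True

# Tests tagged "external:skip" are filtered out when that tag is denied.
assert tags_acceptable({"acl", "external:skip"}, allow=[], deny=["external:skip"]) is False
assert tags_acceptable({"bitops"}, allow=[], deny=["external:skip"]) is True
assert tags_acceptable({"repl", "needs:repl"}, allow=["repl"], deny=[]) is True
```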
@ -669,6 +689,15 @@ for {set j 0} {$j < [llength $argv]} {incr j} {
} elseif {$opt eq {--timeout}} { } elseif {$opt eq {--timeout}} {
set ::timeout $arg set ::timeout $arg
incr j incr j
} elseif {$opt eq {--singledb}} {
set ::singledb 1
} elseif {$opt eq {--cluster-mode}} {
set ::cluster_mode 1
set ::singledb 1
} elseif {$opt eq {--ignore-encoding}} {
set ::ignoreencoding 1
} elseif {$opt eq {--ignore-digest}} {
set ::ignoredigest 1
} elseif {$opt eq {--help}} { } elseif {$opt eq {--help}} {
print_help_screen print_help_screen
exit 0 exit 0
@ -782,6 +811,7 @@ proc assert_replication_stream {s patterns} {
proc close_replication_stream {s} { proc close_replication_stream {s} {
close $s close $s
r config set repl-ping-replica-period 10 r config set repl-ping-replica-period 10
return
} }
# With the parallel test running multiple Redis instances at the same time # With the parallel test running multiple Redis instances at the same time

View File

@ -1,4 +1,4 @@
start_server {tags {"acl"}} { start_server {tags {"acl external:skip"}} {
test {Connections start with the default user} { test {Connections start with the default user} {
r ACL WHOAMI r ACL WHOAMI
} {default} } {default}
@ -482,7 +482,7 @@ start_server {tags {"acl"}} {
set server_path [tmpdir "server.acl"] set server_path [tmpdir "server.acl"]
exec cp -f tests/assets/user.acl $server_path exec cp -f tests/assets/user.acl $server_path
start_server [list overrides [list "dir" $server_path "aclfile" "user.acl"]] { start_server [list overrides [list "dir" $server_path "aclfile" "user.acl"] tags [list "external:skip"]] {
# user alice on allcommands allkeys >alice # user alice on allcommands allkeys >alice
# user bob on -@all +@set +acl ~set* >bob # user bob on -@all +@set +acl ~set* >bob
# user default on nopass ~* +@all # user default on nopass ~* +@all
@ -570,7 +570,7 @@ start_server [list overrides [list "dir" $server_path "aclfile" "user.acl"]] {
set server_path [tmpdir "resetchannels.acl"] set server_path [tmpdir "resetchannels.acl"]
exec cp -f tests/assets/nodefaultuser.acl $server_path exec cp -f tests/assets/nodefaultuser.acl $server_path
exec cp -f tests/assets/default.conf $server_path exec cp -f tests/assets/default.conf $server_path
start_server [list overrides [list "dir" $server_path "acl-pubsub-default" "resetchannels" "aclfile" "nodefaultuser.acl"]] { start_server [list overrides [list "dir" $server_path "acl-pubsub-default" "resetchannels" "aclfile" "nodefaultuser.acl"] tags [list "external:skip"]] {
test {Default user has access to all channels irrespective of flag} { test {Default user has access to all channels irrespective of flag} {
set channelinfo [dict get [r ACL getuser default] channels] set channelinfo [dict get [r ACL getuser default] channels]
@ -609,7 +609,7 @@ start_server [list overrides [list "dir" $server_path "acl-pubsub-default" "rese
set server_path [tmpdir "resetchannels.acl"] set server_path [tmpdir "resetchannels.acl"]
exec cp -f tests/assets/nodefaultuser.acl $server_path exec cp -f tests/assets/nodefaultuser.acl $server_path
exec cp -f tests/assets/default.conf $server_path exec cp -f tests/assets/default.conf $server_path
start_server [list overrides [list "dir" $server_path "acl-pubsub-default" "resetchannels" "aclfile" "nodefaultuser.acl"]] { start_server [list overrides [list "dir" $server_path "acl-pubsub-default" "resetchannels" "aclfile" "nodefaultuser.acl"] tags [list "external:skip"]] {
test {Only default user has access to all channels irrespective of flag} { test {Only default user has access to all channels irrespective of flag} {
set channelinfo [dict get [r ACL getuser default] channels] set channelinfo [dict get [r ACL getuser default] channels]
@ -620,7 +620,7 @@ start_server [list overrides [list "dir" $server_path "acl-pubsub-default" "rese
} }
start_server {overrides {user "default on nopass ~* +@all"}} { start_server {overrides {user "default on nopass ~* +@all"} tags {"external:skip"}} {
test {default: load from config file, can access any channels} { test {default: load from config file, can access any channels} {
r SUBSCRIBE foo r SUBSCRIBE foo
r PSUBSCRIBE bar* r PSUBSCRIBE bar*

View File

@ -1,4 +1,4 @@
start_server {tags {"aofrw"}} { start_server {tags {"aofrw external:skip"}} {
# Enable the AOF # Enable the AOF
r config set appendonly yes r config set appendonly yes
r config set auto-aof-rewrite-percentage 0 ; # Disable auto-rewrite. r config set auto-aof-rewrite-percentage 0 ; # Disable auto-rewrite.
@ -57,7 +57,7 @@ start_server {tags {"aofrw"}} {
} }
} }
start_server {tags {"aofrw"} overrides {aof-use-rdb-preamble no}} { start_server {tags {"aofrw external:skip"} overrides {aof-use-rdb-preamble no}} {
test {Turning off AOF kills the background writing child if any} { test {Turning off AOF kills the background writing child if any} {
r config set appendonly yes r config set appendonly yes
waitForBgrewriteaof r waitForBgrewriteaof r

View File

@ -1,11 +1,11 @@
start_server {tags {"auth"}} { start_server {tags {"auth external:skip"}} {
test {AUTH fails if there is no password configured server side} { test {AUTH fails if there is no password configured server side} {
catch {r auth foo} err catch {r auth foo} err
set _ $err set _ $err
} {ERR*any password*} } {ERR*any password*}
} }
start_server {tags {"auth"} overrides {requirepass foobar}} { start_server {tags {"auth external:skip"} overrides {requirepass foobar}} {
test {AUTH fails when a wrong password is given} { test {AUTH fails when a wrong password is given} {
catch {r auth wrong!} err catch {r auth wrong!} err
set _ $err set _ $err
@ -26,7 +26,7 @@ start_server {tags {"auth"} overrides {requirepass foobar}} {
} {101} } {101}
} }
start_server {tags {"auth_binary_password"}} { start_server {tags {"auth_binary_password external:skip"}} {
test {AUTH fails when binary password is wrong} { test {AUTH fails when binary password is wrong} {
r config set requirepass "abc\x00def" r config set requirepass "abc\x00def"
catch {r auth abc} err catch {r auth abc} err

View File

@ -200,7 +200,7 @@ start_server {tags {"bitops"}} {
} }
} }
start_server {tags {"repl"}} { start_server {tags {"repl external:skip"}} {
start_server {} { start_server {} {
set master [srv -1 client] set master [srv -1 client]
set master_host [srv -1 host] set master_host [srv -1 host]

View File

@ -121,15 +121,15 @@ start_server {tags {"bitops"}} {
} {74} } {74}
test {BITOP NOT (empty string)} { test {BITOP NOT (empty string)} {
r set s "" r set s{t} ""
r bitop not dest s r bitop not dest{t} s{t}
r get dest r get dest{t}
} {} } {}
test {BITOP NOT (known string)} { test {BITOP NOT (known string)} {
r set s "\xaa\x00\xff\x55" r set s{t} "\xaa\x00\xff\x55"
r bitop not dest s r bitop not dest{t} s{t}
r get dest r get dest{t}
} "\x55\xff\x00\xaa" } "\x55\xff\x00\xaa"
test {BITOP where dest and target are the same key} { test {BITOP where dest and target are the same key} {
@ -139,28 +139,28 @@ start_server {tags {"bitops"}} {
} "\x55\xff\x00\xaa" } "\x55\xff\x00\xaa"
test {BITOP AND|OR|XOR don't change the string with single input key} { test {BITOP AND|OR|XOR don't change the string with single input key} {
r set a "\x01\x02\xff" r set a{t} "\x01\x02\xff"
r bitop and res1 a r bitop and res1{t} a{t}
r bitop or res2 a r bitop or res2{t} a{t}
r bitop xor res3 a r bitop xor res3{t} a{t}
list [r get res1] [r get res2] [r get res3] list [r get res1{t}] [r get res2{t}] [r get res3{t}]
} [list "\x01\x02\xff" "\x01\x02\xff" "\x01\x02\xff"] } [list "\x01\x02\xff" "\x01\x02\xff" "\x01\x02\xff"]
test {BITOP missing key is considered a stream of zero} { test {BITOP missing key is considered a stream of zero} {
r set a "\x01\x02\xff" r set a{t} "\x01\x02\xff"
r bitop and res1 no-suck-key a r bitop and res1{t} no-suck-key{t} a{t}
r bitop or res2 no-suck-key a no-such-key r bitop or res2{t} no-suck-key{t} a{t} no-such-key{t}
r bitop xor res3 no-such-key a r bitop xor res3{t} no-such-key{t} a{t}
list [r get res1] [r get res2] [r get res3] list [r get res1{t}] [r get res2{t}] [r get res3{t}]
} [list "\x00\x00\x00" "\x01\x02\xff" "\x01\x02\xff"] } [list "\x00\x00\x00" "\x01\x02\xff" "\x01\x02\xff"]
test {BITOP shorter keys are zero-padded to the key with max length} { test {BITOP shorter keys are zero-padded to the key with max length} {
r set a "\x01\x02\xff\xff" r set a{t} "\x01\x02\xff\xff"
r set b "\x01\x02\xff" r set b{t} "\x01\x02\xff"
r bitop and res1 a b r bitop and res1{t} a{t} b{t}
r bitop or res2 a b r bitop or res2{t} a{t} b{t}
r bitop xor res3 a b r bitop xor res3{t} a{t} b{t}
list [r get res1] [r get res2] [r get res3] list [r get res1{t}] [r get res2{t}] [r get res3{t}]
} [list "\x01\x02\xff\x00" "\x01\x02\xff\xff" "\x00\x00\x00\xff"] } [list "\x01\x02\xff\x00" "\x01\x02\xff\xff" "\x00\x00\x00\xff"]
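The `{t}` suffixes added throughout this file are cluster hash tags: when a key contains `{...}`, only the bracketed substring is hashed, so every `{t}` key maps to the same slot and multi-key commands such as BITOP no longer fail with CROSSSLOT in cluster mode. A self-contained sketch of the slot computation, following the Redis Cluster specification:

```python
def crc16(data: bytes) -> int:
    # CRC16-CCITT (XMODEM variant), the checksum Redis Cluster uses for slots.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    # Hash-tag rule: if the key contains "{" followed by "}" with at least
    # one character between them, only that substring is hashed.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

assert crc16(b"123456789") == 0x31C3     # standard XMODEM check value
# All {t}-tagged keys land on the slot of "t", so BITOP's keys are colocated.
assert hash_slot("res1{t}") == hash_slot("a{t}") == hash_slot("t")
```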
foreach op {and or xor} { foreach op {and or xor} {
@ -173,11 +173,11 @@ start_server {tags {"bitops"}} {
for {set j 0} {$j < $numvec} {incr j} { for {set j 0} {$j < $numvec} {incr j} {
set str [randstring 0 1000] set str [randstring 0 1000]
lappend vec $str lappend vec $str
lappend veckeys vector_$j lappend veckeys vector_$j{t}
r set vector_$j $str r set vector_$j{t} $str
} }
r bitop $op target {*}$veckeys r bitop $op target{t} {*}$veckeys
assert_equal [r get target] [simulate_bit_op $op {*}$vec] assert_equal [r get target{t}] [simulate_bit_op $op {*}$vec]
} }
} }
} }
@ -186,32 +186,32 @@ start_server {tags {"bitops"}} {
for {set i 0} {$i < 10} {incr i} { for {set i 0} {$i < 10} {incr i} {
r flushall r flushall
set str [randstring 0 1000] set str [randstring 0 1000]
r set str $str r set str{t} $str
r bitop not target str r bitop not target{t} str{t}
assert_equal [r get target] [simulate_bit_op not $str] assert_equal [r get target{t}] [simulate_bit_op not $str]
} }
} }
test {BITOP with integer encoded source objects} { test {BITOP with integer encoded source objects} {
r set a 1 r set a{t} 1
r set b 2 r set b{t} 2
r bitop xor dest a b a r bitop xor dest{t} a{t} b{t} a{t}
r get dest r get dest{t}
} {2} } {2}
test {BITOP with non string source key} { test {BITOP with non string source key} {
r del c r del c{t}
r set a 1 r set a{t} 1
r set b 2 r set b{t} 2
r lpush c foo r lpush c{t} foo
catch {r bitop xor dest a b c d} e catch {r bitop xor dest{t} a{t} b{t} c{t} d{t}} e
set e set e
} {WRONGTYPE*} } {WRONGTYPE*}
test {BITOP with empty string after non empty string (issue #529)} { test {BITOP with empty string after non empty string (issue #529)} {
r flushdb r flushdb
r set a "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" r set a{t} "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
r bitop or x a b r bitop or x{t} a{t} b{t}
} {32} } {32}
test {BITPOS bit=0 with empty key returns 0} { test {BITPOS bit=0 with empty key returns 0} {

View File

@ -46,7 +46,7 @@ start_server {tags {"dump"}} {
catch {r debug object foo} e catch {r debug object foo} e
r debug set-active-expire 1 r debug set-active-expire 1
set e set e
} {ERR no such key} } {ERR no such key} {needs:debug}
test {RESTORE can set LRU} { test {RESTORE can set LRU} {
r set foo bar r set foo bar
@ -56,8 +56,9 @@ start_server {tags {"dump"}} {
r restore foo 0 $encoded idletime 1000 r restore foo 0 $encoded idletime 1000
set idle [r object idletime foo] set idle [r object idletime foo]
assert {$idle >= 1000 && $idle <= 1010} assert {$idle >= 1000 && $idle <= 1010}
r get foo assert_equal [r get foo] {bar}
} {bar} r config set maxmemory-policy noeviction
} {OK} {needs:config-maxmemory}
test {RESTORE can set LFU} { test {RESTORE can set LFU} {
r set foo bar r set foo bar
@ -68,7 +69,9 @@ start_server {tags {"dump"}} {
set freq [r object freq foo] set freq [r object freq foo]
assert {$freq == 100} assert {$freq == 100}
r get foo r get foo
} {bar} assert_equal [r get foo] {bar}
r config set maxmemory-policy noeviction
} {OK} {needs:config-maxmemory}
test {RESTORE returns an error of the key already exists} { test {RESTORE returns an error of the key already exists} {
r set foo bar r set foo bar
@ -111,7 +114,7 @@ start_server {tags {"dump"}} {
r -1 migrate $second_host $second_port key 9 1000 r -1 migrate $second_host $second_port key 9 1000
assert_match {*migrate_cached_sockets:1*} [r -1 info] assert_match {*migrate_cached_sockets:1*} [r -1 info]
} }
} } {} {external:skip}
test {MIGRATE cached connections are released after some time} { test {MIGRATE cached connections are released after some time} {
after 15000 after 15000
@ -135,7 +138,7 @@ start_server {tags {"dump"}} {
assert {[$second get key] eq {Some Value}} assert {[$second get key] eq {Some Value}}
assert {[$second ttl key] == -1} assert {[$second ttl key] == -1}
} }
} } {} {external:skip}
test {MIGRATE is able to copy a key between two instances} { test {MIGRATE is able to copy a key between two instances} {
set first [srv 0 client] set first [srv 0 client]
@ -154,7 +157,7 @@ start_server {tags {"dump"}} {
assert {[$second exists list] == 1} assert {[$second exists list] == 1}
assert {[$first lrange list 0 -1] eq [$second lrange list 0 -1]} assert {[$first lrange list 0 -1] eq [$second lrange list 0 -1]}
} }
} } {} {external:skip}
test {MIGRATE will not overwrite existing keys, unless REPLACE is used} { test {MIGRATE will not overwrite existing keys, unless REPLACE is used} {
set first [srv 0 client] set first [srv 0 client]
@ -176,7 +179,7 @@ start_server {tags {"dump"}} {
assert {[$second exists list] == 1} assert {[$second exists list] == 1}
assert {[$first lrange list 0 -1] eq [$second lrange list 0 -1]} assert {[$first lrange list 0 -1] eq [$second lrange list 0 -1]}
} }
} } {} {external:skip}
test {MIGRATE propagates TTL correctly} { test {MIGRATE propagates TTL correctly} {
set first [srv 0 client] set first [srv 0 client]
@ -196,7 +199,7 @@ start_server {tags {"dump"}} {
assert {[$second get key] eq {Some Value}} assert {[$second get key] eq {Some Value}}
assert {[$second ttl key] >= 7 && [$second ttl key] <= 10} assert {[$second ttl key] >= 7 && [$second ttl key] <= 10}
} }
} } {} {external:skip}
test {MIGRATE can correctly transfer large values} { test {MIGRATE can correctly transfer large values} {
set first [srv 0 client] set first [srv 0 client]
@ -221,7 +224,7 @@ start_server {tags {"dump"}} {
assert {[$second ttl key] == -1} assert {[$second ttl key] == -1}
assert {[$second llen key] == 40000*20} assert {[$second llen key] == 40000*20}
} }
} } {} {external:skip}
test {MIGRATE can correctly transfer hashes} { test {MIGRATE can correctly transfer hashes} {
set first [srv 0 client] set first [srv 0 client]
@ -241,7 +244,7 @@ start_server {tags {"dump"}} {
assert {[$second exists key] == 1} assert {[$second exists key] == 1}
assert {[$second ttl key] == -1} assert {[$second ttl key] == -1}
} }
} } {} {external:skip}
test {MIGRATE timeout actually works} { test {MIGRATE timeout actually works} {
set first [srv 0 client] set first [srv 0 client]
@ -260,7 +263,7 @@ start_server {tags {"dump"}} {
catch {r -1 migrate $second_host $second_port key 9 500} e catch {r -1 migrate $second_host $second_port key 9 500} e
assert_match {IOERR*} $e assert_match {IOERR*} $e
} }
} } {} {external:skip}
test {MIGRATE can migrate multiple keys at once} { test {MIGRATE can migrate multiple keys at once} {
set first [srv 0 client] set first [srv 0 client]
@ -283,12 +286,12 @@ start_server {tags {"dump"}} {
assert {[$second get key2] eq {v2}} assert {[$second get key2] eq {v2}}
assert {[$second get key3] eq {v3}} assert {[$second get key3] eq {v3}}
} }
} } {} {external:skip}
test {MIGRATE with multiple keys must have empty key arg} { test {MIGRATE with multiple keys must have empty key arg} {
catch {r MIGRATE 127.0.0.1 6379 NotEmpty 9 5000 keys a b c} e catch {r MIGRATE 127.0.0.1 6379 NotEmpty 9 5000 keys a b c} e
set e set e
} {*empty string*} } {*empty string*} {external:skip}
test {MIGRATE with multiple keys migrate just existing ones} { test {MIGRATE with multiple keys migrate just existing ones} {
set first [srv 0 client] set first [srv 0 client]
@ -314,7 +317,7 @@ start_server {tags {"dump"}} {
assert {[$second get key2] eq {v2}} assert {[$second get key2] eq {v2}}
assert {[$second get key3] eq {v3}} assert {[$second get key3] eq {v3}}
} }
} } {} {external:skip}
test {MIGRATE with multiple keys: stress command rewriting} { test {MIGRATE with multiple keys: stress command rewriting} {
set first [srv 0 client] set first [srv 0 client]
@ -330,7 +333,7 @@ start_server {tags {"dump"}} {
assert {[$first dbsize] == 0} assert {[$first dbsize] == 0}
assert {[$second dbsize] == 15} assert {[$second dbsize] == 15}
} }
} } {} {external:skip}
test {MIGRATE with multiple keys: delete just ack keys} { test {MIGRATE with multiple keys: delete just ack keys} {
set first [srv 0 client] set first [srv 0 client]
@ -350,7 +353,7 @@ start_server {tags {"dump"}} {
assert {[$first exists c] == 1} assert {[$first exists c] == 1}
assert {[$first exists d] == 1} assert {[$first exists d] == 1}
} }
} } {} {external:skip}
test {MIGRATE AUTH: correct and wrong password cases} { test {MIGRATE AUTH: correct and wrong password cases} {
set first [srv 0 client] set first [srv 0 client]
@ -375,5 +378,5 @@ start_server {tags {"dump"}} {
catch {r -1 migrate $second_host $second_port list 9 5000 AUTH foobar} err catch {r -1 migrate $second_host $second_port list 9 5000 AUTH foobar} err
assert_match {*WRONGPASS*} $err assert_match {*WRONGPASS*} $err
} }
} } {} {external:skip}
} }

View File

@ -96,7 +96,9 @@ start_server {tags {"expire"}} {
# server is under pressure, so if it does not work give it a few more # server is under pressure, so if it does not work give it a few more
# chances. # chances.
for {set j 0} {$j < 30} {incr j} { for {set j 0} {$j < 30} {incr j} {
r del x y z r del x
r del y
r del z
r psetex x 100 somevalue r psetex x 100 somevalue
after 80 after 80
set a [r get x] set a [r get x]
@ -172,32 +174,34 @@ start_server {tags {"expire"}} {
r psetex key1 500 a r psetex key1 500 a
r psetex key2 500 a r psetex key2 500 a
r psetex key3 500 a r psetex key3 500 a
set size1 [r dbsize] assert_equal 3 [r dbsize]
# Redis expires random keys ten times every second so we are # Redis expires random keys ten times every second so we are
# fairly sure that all the three keys should be evicted after # fairly sure that all the three keys should be evicted after
# one second. # two seconds.
after 1000 wait_for_condition 20 100 {
set size2 [r dbsize] [r dbsize] eq 0
list $size1 $size2 } fail {
} {3 0} "Keys did not actively expire."
}
}
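The fixed `after 1000` sleep is replaced with the suite's retry idiom, which polls until the condition holds or the attempts run out. The same pattern in Python (function and variable names are illustrative):

```python
import time

def wait_for_condition(max_tries, delay_ms, condition, fail_msg):
    # Poll up to max_tries times, sleeping delay_ms between attempts,
    # mirroring Tcl's: wait_for_condition 20 100 { ... } fail { ... }
    for _ in range(max_tries):
        if condition():
            return
        time.sleep(delay_ms / 1000.0)
    raise AssertionError(fail_msg)

# Example: a condition that only becomes true on the third poll.
polls = {"n": 0}
def dbsize_empty():
    polls["n"] += 1
    return polls["n"] >= 3

wait_for_condition(20, 1, dbsize_empty, "Keys did not actively expire.")
assert polls["n"] == 3
```

Polling makes the test robust on loaded or external servers, where a one-shot sleep of a fixed length is flaky.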
test {Redis should lazy expire keys} { test {Redis should lazy expire keys} {
r flushdb r flushdb
r debug set-active-expire 0 r debug set-active-expire 0
r psetex key1 500 a r psetex key1{t} 500 a
r psetex key2 500 a r psetex key2{t} 500 a
r psetex key3 500 a r psetex key3{t} 500 a
set size1 [r dbsize] set size1 [r dbsize]
# Redis expires random keys ten times every second so we are # Redis expires random keys ten times every second so we are
# fairly sure that all the three keys should be evicted after # fairly sure that all the three keys should be evicted after
# one second. # one second.
after 1000 after 1000
set size2 [r dbsize] set size2 [r dbsize]
r mget key1 key2 key3 r mget key1{t} key2{t} key3{t}
set size3 [r dbsize] set size3 [r dbsize]
r debug set-active-expire 1 r debug set-active-expire 1
list $size1 $size2 $size3 list $size1 $size2 $size3
} {3 3 0} } {3 3 0} {needs:debug}
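Lazy expiry means a key past its TTL still counts toward DBSIZE until something touches it; only the access removes it, which is exactly what the `{3 3 0}` expectation encodes. A toy model of that behavior (a hypothetical class, not the suite's code):

```python
import time

class LazyExpireDict:
    # Minimal model of lazy (access-time) expiry: nothing is removed until
    # the key is touched, so the reported size stays stale after TTL elapses.
    def __init__(self):
        self.data = {}

    def psetex(self, key, ms, val):
        self.data[key] = (val, time.monotonic() + ms / 1000.0)

    def get(self, key):
        item = self.data.get(key)
        if item is None:
            return None
        val, deadline = item
        if time.monotonic() >= deadline:
            del self.data[key]   # expired lazily, on access
            return None
        return val

    def dbsize(self):
        return len(self.data)

d = LazyExpireDict()
d.psetex("key1", 10, "a")
time.sleep(0.05)
assert d.dbsize() == 1       # TTL elapsed, but no access yet
assert d.get("key1") is None
assert d.dbsize() == 0       # removed only when touched
```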
test {EXPIRE should not resurrect keys (issue #1026)} { test {EXPIRE should not resurrect keys (issue #1026)} {
r debug set-active-expire 0 r debug set-active-expire 0
@ -207,7 +211,7 @@ start_server {tags {"expire"}} {
r expire foo 10 r expire foo 10
r debug set-active-expire 1 r debug set-active-expire 1
r exists foo r exists foo
} {0} } {0} {needs:debug}
test {5 keys in, 5 keys out} { test {5 keys in, 5 keys out} {
r flushdb r flushdb
@ -279,7 +283,7 @@ start_server {tags {"expire"}} {
} {-2} } {-2}
# Start a new server with empty data and AOF file. # Start a new server with empty data and AOF file.
start_server {overrides {appendonly {yes} appendfilename {appendonly.aof} appendfsync always}} { start_server {overrides {appendonly {yes} appendfilename {appendonly.aof} appendfsync always} tags {external:skip}} {
test {All time-to-live(TTL) in commands are propagated as absolute timestamp in milliseconds in AOF} { test {All time-to-live(TTL) in commands are propagated as absolute timestamp in milliseconds in AOF} {
# This test makes sure that expire times are propagated as absolute # This test makes sure that expire times are propagated as absolute
# times to the AOF file and not as relative time, so that when the AOF # times to the AOF file and not as relative time, so that when the AOF
@ -417,7 +421,7 @@ start_server {tags {"expire"}} {
assert_equal [r pexpiretime foo16] "-1" ; # foo16 has no TTL assert_equal [r pexpiretime foo16] "-1" ; # foo16 has no TTL
assert_equal [r pexpiretime foo17] $ttl17 assert_equal [r pexpiretime foo17] $ttl17
assert_equal [r pexpiretime foo18] $ttl18 assert_equal [r pexpiretime foo18] $ttl18
} } {} {needs:debug}
} }
test {All TTL in commands are propagated as absolute timestamp in replication stream} { test {All TTL in commands are propagated as absolute timestamp in replication stream} {
@ -484,7 +488,7 @@ start_server {tags {"expire"}} {
} }
# Start another server to test replication of TTLs # Start another server to test replication of TTLs
start_server {} { start_server {tags {needs:repl external:skip}} {
# Set the outer layer server as primary # Set the outer layer server as primary
set primary [srv -1 client] set primary [srv -1 client]
set primary_host [srv -1 host] set primary_host [srv -1 host]
@ -566,7 +570,7 @@ start_server {tags {"expire"}} {
r debug loadaof r debug loadaof
set ttl [r ttl foo] set ttl [r ttl foo]
assert {$ttl <= 98 && $ttl > 90} assert {$ttl <= 98 && $ttl > 90}
} } {} {needs:debug}
test {GETEX use of PERSIST option should remove TTL} { test {GETEX use of PERSIST option should remove TTL} {
r set foo bar EX 100 r set foo bar EX 100
@ -580,7 +584,7 @@ start_server {tags {"expire"}} {
after 2000 after 2000
r debug loadaof r debug loadaof
r ttl foo r ttl foo
} {-1} } {-1} {needs:debug}
test {GETEX propagate as to replica as PERSIST, DEL, or nothing} { test {GETEX propagate as to replica as PERSIST, DEL, or nothing} {
set repl [attach_to_replication_stream] set repl [attach_to_replication_stream]
@ -594,5 +598,5 @@ start_server {tags {"expire"}} {
{persist foo} {persist foo}
{del foo} {del foo}
} }
} } {} {needs:repl}
} }

View File

@ -325,74 +325,74 @@ start_server {tags {"geo"}} {
} }
test {GEORADIUS STORE option: syntax error} { test {GEORADIUS STORE option: syntax error} {
r del points r del points{t}
r geoadd points 13.361389 38.115556 "Palermo" \ r geoadd points{t} 13.361389 38.115556 "Palermo" \
15.087269 37.502669 "Catania" 15.087269 37.502669 "Catania"
catch {r georadius points 13.361389 38.115556 50 km store} e catch {r georadius points{t} 13.361389 38.115556 50 km store} e
set e set e
} {*ERR*syntax*} } {*ERR*syntax*}
test {GEOSEARCHSTORE STORE option: syntax error} { test {GEOSEARCHSTORE STORE option: syntax error} {
catch {r geosearchstore abc points fromlonlat 13.361389 38.115556 byradius 50 km store abc} e catch {r geosearchstore abc{t} points{t} fromlonlat 13.361389 38.115556 byradius 50 km store abc{t}} e
set e set e
} {*ERR*syntax*} } {*ERR*syntax*}
test {GEORANGE STORE option: incompatible options} { test {GEORANGE STORE option: incompatible options} {
r del points r del points{t}
r geoadd points 13.361389 38.115556 "Palermo" \ r geoadd points{t} 13.361389 38.115556 "Palermo" \
15.087269 37.502669 "Catania" 15.087269 37.502669 "Catania"
catch {r georadius points 13.361389 38.115556 50 km store points2 withdist} e catch {r georadius points{t} 13.361389 38.115556 50 km store points2{t} withdist} e
assert_match {*ERR*} $e assert_match {*ERR*} $e
catch {r georadius points 13.361389 38.115556 50 km store points2 withhash} e catch {r georadius points{t} 13.361389 38.115556 50 km store points2{t} withhash} e
assert_match {*ERR*} $e assert_match {*ERR*} $e
catch {r georadius points 13.361389 38.115556 50 km store points2 withcoords} e catch {r georadius points{t} 13.361389 38.115556 50 km store points2{t} withcoords} e
assert_match {*ERR*} $e assert_match {*ERR*} $e
} }
test {GEORANGE STORE option: plain usage} { test {GEORANGE STORE option: plain usage} {
r del points r del points{t}
r geoadd points 13.361389 38.115556 "Palermo" \ r geoadd points{t} 13.361389 38.115556 "Palermo" \
15.087269 37.502669 "Catania" 15.087269 37.502669 "Catania"
r georadius points 13.361389 38.115556 500 km store points2 r georadius points{t} 13.361389 38.115556 500 km store points2{t}
assert_equal [r zrange points 0 -1] [r zrange points2 0 -1] assert_equal [r zrange points{t} 0 -1] [r zrange points2{t} 0 -1]
} }
test {GEOSEARCHSTORE STORE option: plain usage} { test {GEOSEARCHSTORE STORE option: plain usage} {
r geosearchstore points2 points fromlonlat 13.361389 38.115556 byradius 500 km r geosearchstore points2{t} points{t} fromlonlat 13.361389 38.115556 byradius 500 km
assert_equal [r zrange points 0 -1] [r zrange points2 0 -1] assert_equal [r zrange points{t} 0 -1] [r zrange points2{t} 0 -1]
} }
test {GEORANGE STOREDIST option: plain usage} { test {GEORANGE STOREDIST option: plain usage} {
r del points r del points{t}
r geoadd points 13.361389 38.115556 "Palermo" \ r geoadd points{t} 13.361389 38.115556 "Palermo" \
15.087269 37.502669 "Catania" 15.087269 37.502669 "Catania"
r georadius points 13.361389 38.115556 500 km storedist points2 r georadius points{t} 13.361389 38.115556 500 km storedist points2{t}
set res [r zrange points2 0 -1 withscores] set res [r zrange points2{t} 0 -1 withscores]
assert {[lindex $res 1] < 1} assert {[lindex $res 1] < 1}
assert {[lindex $res 3] > 166} assert {[lindex $res 3] > 166}
assert {[lindex $res 3] < 167} assert {[lindex $res 3] < 167}
} }
test {GEOSEARCHSTORE STOREDIST option: plain usage} { test {GEOSEARCHSTORE STOREDIST option: plain usage} {
r geosearchstore points2 points fromlonlat 13.361389 38.115556 byradius 500 km storedist r geosearchstore points2{t} points{t} fromlonlat 13.361389 38.115556 byradius 500 km storedist
set res [r zrange points2 0 -1 withscores] set res [r zrange points2{t} 0 -1 withscores]
assert {[lindex $res 1] < 1} assert {[lindex $res 1] < 1}
assert {[lindex $res 3] > 166} assert {[lindex $res 3] > 166}
assert {[lindex $res 3] < 167} assert {[lindex $res 3] < 167}
} }
test {GEORANGE STOREDIST option: COUNT ASC and DESC} { test {GEORANGE STOREDIST option: COUNT ASC and DESC} {
r del points r del points{t}
r geoadd points 13.361389 38.115556 "Palermo" \ r geoadd points{t} 13.361389 38.115556 "Palermo" \
15.087269 37.502669 "Catania" 15.087269 37.502669 "Catania"
r georadius points 13.361389 38.115556 500 km storedist points2 asc count 1 r georadius points{t} 13.361389 38.115556 500 km storedist points2{t} asc count 1
assert {[r zcard points2] == 1} assert {[r zcard points2{t}] == 1}
set res [r zrange points2 0 -1 withscores] set res [r zrange points2{t} 0 -1 withscores]
assert {[lindex $res 0] eq "Palermo"} assert {[lindex $res 0] eq "Palermo"}
r georadius points 13.361389 38.115556 500 km storedist points2 desc count 1 r georadius points{t} 13.361389 38.115556 500 km storedist points2{t} desc count 1
assert {[r zcard points2] == 1} assert {[r zcard points2{t}] == 1}
set res [r zrange points2 0 -1 withscores] set res [r zrange points2{t} 0 -1 withscores]
assert {[lindex $res 0] eq "Catania"} assert {[lindex $res 0] eq "Catania"}
} }

View File

@ -2,7 +2,7 @@ start_server {tags {"hll"}} {
test {HyperLogLog self test passes} { test {HyperLogLog self test passes} {
catch {r pfselftest} e catch {r pfselftest} e
set e set e
} {OK} } {OK} {needs:pfdebug}
test {PFADD without arguments creates an HLL value} { test {PFADD without arguments creates an HLL value} {
r pfadd hll r pfadd hll
@ -57,11 +57,12 @@ start_server {tags {"hll"}} {
assert {[r pfdebug encoding hll] eq {dense}} assert {[r pfdebug encoding hll] eq {dense}}
} }
} }
} } {} {needs:pfdebug}
test {HyperLogLog sparse encoding stress test} { test {HyperLogLog sparse encoding stress test} {
for {set x 0} {$x < 1000} {incr x} { for {set x 0} {$x < 1000} {incr x} {
r del hll1 hll2 r del hll1
r del hll2
set numele [randomInt 100] set numele [randomInt 100]
set elements {} set elements {}
for {set j 0} {$j < $numele} {incr j} { for {set j 0} {$j < $numele} {incr j} {
@ -77,7 +78,7 @@ start_server {tags {"hll"}} {
# Cardinality estimated should match exactly. # Cardinality estimated should match exactly.
assert {[r pfcount hll1] eq [r pfcount hll2]} assert {[r pfcount hll1] eq [r pfcount hll2]}
} }
} } {} {needs:pfdebug}
test {Corrupted sparse HyperLogLogs are detected: Additional at tail} { test {Corrupted sparse HyperLogLogs are detected: Additional at tail} {
r del hll r del hll
@ -144,34 +145,34 @@ start_server {tags {"hll"}} {
} }
test {PFADD, PFCOUNT, PFMERGE type checking works} { test {PFADD, PFCOUNT, PFMERGE type checking works} {
r set foo bar r set foo{t} bar
catch {r pfadd foo 1} e catch {r pfadd foo{t} 1} e
assert_match {*WRONGTYPE*} $e assert_match {*WRONGTYPE*} $e
catch {r pfcount foo} e catch {r pfcount foo{t}} e
assert_match {*WRONGTYPE*} $e assert_match {*WRONGTYPE*} $e
catch {r pfmerge bar foo} e catch {r pfmerge bar{t} foo{t}} e
assert_match {*WRONGTYPE*} $e assert_match {*WRONGTYPE*} $e
catch {r pfmerge foo bar} e catch {r pfmerge foo{t} bar{t}} e
assert_match {*WRONGTYPE*} $e assert_match {*WRONGTYPE*} $e
} }
test {PFMERGE results on the cardinality of union of sets} { test {PFMERGE results on the cardinality of union of sets} {
r del hll hll1 hll2 hll3 r del hll{t} hll1{t} hll2{t} hll3{t}
r pfadd hll1 a b c r pfadd hll1{t} a b c
r pfadd hll2 b c d r pfadd hll2{t} b c d
r pfadd hll3 c d e r pfadd hll3{t} c d e
r pfmerge hll hll1 hll2 hll3 r pfmerge hll{t} hll1{t} hll2{t} hll3{t}
r pfcount hll r pfcount hll{t}
} {5} } {5}
test {PFCOUNT multiple-keys merge returns cardinality of union #1} { test {PFCOUNT multiple-keys merge returns cardinality of union #1} {
r del hll1 hll2 hll3 r del hll1{t} hll2{t} hll3{t}
for {set x 1} {$x < 10000} {incr x} { for {set x 1} {$x < 10000} {incr x} {
r pfadd hll1 "foo-$x" r pfadd hll1{t} "foo-$x"
r pfadd hll2 "bar-$x" r pfadd hll2{t} "bar-$x"
r pfadd hll3 "zap-$x" r pfadd hll3{t} "zap-$x"
set card [r pfcount hll1 hll2 hll3] set card [r pfcount hll1{t} hll2{t} hll3{t}]
set realcard [expr {$x*3}] set realcard [expr {$x*3}]
set err [expr {abs($card-$realcard)}] set err [expr {abs($card-$realcard)}]
assert {$err < (double($card)/100)*5} assert {$err < (double($card)/100)*5}
@ -179,17 +180,17 @@ start_server {tags {"hll"}} {
} }
test {PFCOUNT multiple-keys merge returns cardinality of union #2} { test {PFCOUNT multiple-keys merge returns cardinality of union #2} {
r del hll1 hll2 hll3 r del hll1{t} hll2{t} hll3{t}
set elements {} set elements {}
for {set x 1} {$x < 10000} {incr x} { for {set x 1} {$x < 10000} {incr x} {
for {set j 1} {$j <= 3} {incr j} { for {set j 1} {$j <= 3} {incr j} {
set rint [randomInt 20000] set rint [randomInt 20000]
r pfadd hll$j $rint r pfadd hll$j{t} $rint
lappend elements $rint lappend elements $rint
} }
} }
set realcard [llength [lsort -unique $elements]] set realcard [llength [lsort -unique $elements]]
set card [r pfcount hll1 hll2 hll3] set card [r pfcount hll1{t} hll2{t} hll3{t}]
set err [expr {abs($card-$realcard)}] set err [expr {abs($card-$realcard)}]
assert {$err < (double($card)/100)*5} assert {$err < (double($card)/100)*5}
} }
@ -198,7 +199,7 @@ start_server {tags {"hll"}} {
r del hll r del hll
r pfadd hll 1 2 3 r pfadd hll 1 2 3
llength [r pfdebug getreg hll] llength [r pfdebug getreg hll]
} {16384} } {16384} {needs:pfdebug}
test {PFADD / PFCOUNT cache invalidation works} { test {PFADD / PFCOUNT cache invalidation works} {
r del hll r del hll

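The hunks above consistently add a `{t}` hash tag to keys used in multi-key commands (`PFMERGE`, `PFCOUNT` with several keys, and so on). In Redis Cluster, a multi-key command fails with `CROSSSLOT` unless every key maps to the same hash slot; when a key contains a non-empty `{...}` tag, only the substring inside the first such tag is hashed, so `hll1{t}` and `hll2{t}` are guaranteed to land in the same slot. A minimal Python sketch of the slot computation (CRC16/XMODEM modulo 16384, mirroring Redis's `keyHashSlot` logic):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16, XMODEM variant (poly 0x1021, init 0) -- the CRC used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """Map a key to one of 16384 cluster slots, honoring {...} hash tags."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Every key tagged {t} hashes like the bare key "t", so PFMERGE/MGET/DEL
# over them are legal in cluster mode.
assert key_hash_slot("hll1{t}") == key_hash_slot("hll2{t}") == key_hash_slot("t")
```

This is why the new `Untagged multi-key commands` test further down is tagged `cluster:skip`: without the tag, `foo1`..`foo4` hash independently and the `MGET` would be rejected on a cluster.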
@ -6,7 +6,7 @@ proc errorstat {cmd} {
return [errorrstat $cmd r] return [errorrstat $cmd r]
} }
start_server {tags {"info"}} { start_server {tags {"info" "external:skip"}} {
start_server {} { start_server {} {
test {errorstats: failed call authentication error} { test {errorstats: failed call authentication error} {

@ -21,9 +21,9 @@ start_server {tags {"introspection"}} {
test {TOUCH returns the number of existing keys specified} { test {TOUCH returns the number of existing keys specified} {
r flushdb r flushdb
r set key1 1 r set key1{t} 1
r set key2 2 r set key2{t} 2
r touch key0 key1 key2 key3 r touch key0{t} key1{t} key2{t} key3{t}
} 2 } 2
test {command stats for GEOADD} { test {command stats for GEOADD} {
@ -31,7 +31,7 @@ start_server {tags {"introspection"}} {
r GEOADD foo 0 0 bar r GEOADD foo 0 0 bar
assert_match {*calls=1,*} [cmdstat geoadd] assert_match {*calls=1,*} [cmdstat geoadd]
assert_match {} [cmdstat zadd] assert_match {} [cmdstat zadd]
} } {} {needs:config-resetstat}
test {command stats for EXPIRE} { test {command stats for EXPIRE} {
r config resetstat r config resetstat
@ -39,7 +39,7 @@ start_server {tags {"introspection"}} {
r EXPIRE foo 0 r EXPIRE foo 0
assert_match {*calls=1,*} [cmdstat expire] assert_match {*calls=1,*} [cmdstat expire]
assert_match {} [cmdstat del] assert_match {} [cmdstat del]
} } {} {needs:config-resetstat}
test {command stats for BRPOP} { test {command stats for BRPOP} {
r config resetstat r config resetstat
@ -47,21 +47,21 @@ start_server {tags {"introspection"}} {
r BRPOP list 0 r BRPOP list 0
assert_match {*calls=1,*} [cmdstat brpop] assert_match {*calls=1,*} [cmdstat brpop]
assert_match {} [cmdstat rpop] assert_match {} [cmdstat rpop]
} } {} {needs:config-resetstat}
test {command stats for MULTI} { test {command stats for MULTI} {
r config resetstat r config resetstat
r MULTI r MULTI
r set foo bar r set foo{t} bar
r GEOADD foo2 0 0 bar r GEOADD foo2{t} 0 0 bar
r EXPIRE foo2 0 r EXPIRE foo2{t} 0
r EXEC r EXEC
assert_match {*calls=1,*} [cmdstat multi] assert_match {*calls=1,*} [cmdstat multi]
assert_match {*calls=1,*} [cmdstat exec] assert_match {*calls=1,*} [cmdstat exec]
assert_match {*calls=1,*} [cmdstat set] assert_match {*calls=1,*} [cmdstat set]
assert_match {*calls=1,*} [cmdstat expire] assert_match {*calls=1,*} [cmdstat expire]
assert_match {*calls=1,*} [cmdstat geoadd] assert_match {*calls=1,*} [cmdstat geoadd]
} } {} {needs:config-resetstat}
test {command stats for scripts} { test {command stats for scripts} {
r config resetstat r config resetstat
@ -75,5 +75,5 @@ start_server {tags {"introspection"}} {
assert_match {*calls=2,*} [cmdstat set] assert_match {*calls=2,*} [cmdstat set]
assert_match {*calls=1,*} [cmdstat expire] assert_match {*calls=1,*} [cmdstat expire]
assert_match {*calls=1,*} [cmdstat geoadd] assert_match {*calls=1,*} [cmdstat geoadd]
} } {} {needs:config-resetstat}
} }
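The tags appended to tests in this file (`needs:config-resetstat`, `needs:repl`, `external:skip`) drive the new unified tag-based selection: a test is skipped when the target server lacks a required capability. The actual Tcl filter in the harness is not part of this diff; purely as an illustration, a hypothetical deny/allow check might look like:

```python
def should_run(test_tags: set, deny: set, allow: set) -> bool:
    """Hypothetical tag filter (illustrative, not the harness's real code):
    skip if any tag is denied; if an allow list is given, require a match."""
    if test_tags & deny:
        return False
    if allow and not (test_tags & allow):
        return False
    return True

# Against an external server that cannot CONFIG RESETSTAT we might deny:
deny = {"external:skip", "needs:config-resetstat", "needs:repl"}
assert not should_run({"introspection", "needs:config-resetstat"}, deny, set())
assert should_run({"introspection"}, deny, set())
```

The tag names are the real ones from this commit; the filtering function itself is an assumption about how such selection could work.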

@ -1,7 +1,7 @@
start_server {tags {"introspection"}} { start_server {tags {"introspection"}} {
test {CLIENT LIST} { test {CLIENT LIST} {
r client list r client list
} {*addr=*:* fd=* age=* idle=* flags=N db=9 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=* argv-mem=* obl=0 oll=0 omem=0 tot-mem=* events=r cmd=client*} } {*addr=*:* fd=* age=* idle=* flags=N db=* sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=* argv-mem=* obl=0 oll=0 omem=0 tot-mem=* events=r cmd=client*}
test {CLIENT LIST with IDs} { test {CLIENT LIST with IDs} {
set myid [r client id] set myid [r client id]
@ -11,7 +11,7 @@ start_server {tags {"introspection"}} {
test {CLIENT INFO} { test {CLIENT INFO} {
r client info r client info
} {*addr=*:* fd=* age=* idle=* flags=N db=9 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=* argv-mem=* obl=0 oll=0 omem=0 tot-mem=* events=r cmd=client*} } {*addr=*:* fd=* age=* idle=* flags=N db=* sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=* argv-mem=* obl=0 oll=0 omem=0 tot-mem=* events=r cmd=client*}
test {MONITOR can log executed commands} { test {MONITOR can log executed commands} {
set rd [redis_deferring_client] set rd [redis_deferring_client]
@ -50,7 +50,7 @@ start_server {tags {"introspection"}} {
assert_match {*"auth"*"(redacted)"*"(redacted)"*} [$rd read] assert_match {*"auth"*"(redacted)"*"(redacted)"*} [$rd read]
assert_match {*"hello"*"2"*"AUTH"*"(redacted)"*"(redacted)"*} [$rd read] assert_match {*"hello"*"2"*"AUTH"*"(redacted)"*"(redacted)"*} [$rd read]
$rd close $rd close
} } {0} {needs:repl}
test {MONITOR correctly handles multi-exec cases} { test {MONITOR correctly handles multi-exec cases} {
set rd [redis_deferring_client] set rd [redis_deferring_client]
@ -125,7 +125,7 @@ start_server {tags {"introspection"}} {
# Defaults # Defaults
assert_match [r config get save] {save {100 100}} assert_match [r config get save] {save {100 100}}
} }
} } {} {external:skip}
test {CONFIG sanity} { test {CONFIG sanity} {
# Do CONFIG GET, CONFIG SET and then CONFIG GET again # Do CONFIG GET, CONFIG SET and then CONFIG GET again
@ -178,6 +178,11 @@ start_server {tags {"introspection"}} {
} }
} }
# TODO: Remove this when CONFIG SET bind "" is fixed.
if {$::external} {
append skip_configs bind
}
set configs {} set configs {}
foreach {k v} [r config get *] { foreach {k v} [r config get *] {
if {[lsearch $skip_configs $k] != -1} { if {[lsearch $skip_configs $k] != -1} {
@ -224,7 +229,7 @@ start_server {tags {"introspection"}} {
dict for {k v} $configs { dict for {k v} $configs {
assert_equal $v [lindex [r config get $k] 1] assert_equal $v [lindex [r config get $k] 1]
} }
} } {} {external:skip}
test {CONFIG REWRITE handles save properly} { test {CONFIG REWRITE handles save properly} {
r config set save "3600 1 300 100 60 10000" r config set save "3600 1 300 100 60 10000"
@ -244,7 +249,7 @@ start_server {tags {"introspection"}} {
restart_server 0 true false restart_server 0 true false
assert_equal [r config get save] {save {}} assert_equal [r config get save] {save {}}
} }
} } {} {external:skip}
# Config file at this point is at a weird state, and includes all # known keywords. Might be a good idea to avoid adding tests here.
# known keywords. Might be a good idea to avoid adding tests here. # known keywords. Might be a good idea to avoid adding tests here.

@ -7,12 +7,18 @@ start_server {tags {"keyspace"}} {
} {} } {}
test {Vararg DEL} { test {Vararg DEL} {
r set foo1 a r set foo1{t} a
r set foo2 b r set foo2{t} b
r set foo3 c r set foo3{t} c
list [r del foo1 foo2 foo3 foo4] [r mget foo1 foo2 foo3] list [r del foo1{t} foo2{t} foo3{t} foo4{t}] [r mget foo1{t} foo2{t} foo3{t}]
} {3 {{} {} {}}} } {3 {{} {} {}}}
test {Untagged multi-key commands} {
r mset foo1 a foo2 b foo3 c
assert_equal {a b c {}} [r mget foo1 foo2 foo3 foo4]
r del foo1 foo2 foo3 foo4
} {3} {cluster:skip}
test {KEYS with pattern} { test {KEYS with pattern} {
foreach key {key_x key_y key_z foo_a foo_b foo_c} { foreach key {key_x key_y key_z foo_a foo_b foo_c} {
r set $key hello r set $key hello
@ -39,7 +45,7 @@ start_server {tags {"keyspace"}} {
after 1100 after 1100
assert_equal 0 [r del keyExpire] assert_equal 0 [r del keyExpire]
r debug set-active-expire 1 r debug set-active-expire 1
} } {OK} {needs:debug}
test {EXISTS} { test {EXISTS} {
set res {} set res {}
@ -74,10 +80,10 @@ start_server {tags {"keyspace"}} {
} {1} } {1}
test {RENAME basic usage} { test {RENAME basic usage} {
r set mykey hello r set mykey{t} hello
r rename mykey mykey1 r rename mykey{t} mykey1{t}
r rename mykey1 mykey2 r rename mykey1{t} mykey2{t}
r get mykey2 r get mykey2{t}
} {hello} } {hello}
test {RENAME source key should no longer exist} { test {RENAME source key should no longer exist} {
@ -85,35 +91,35 @@ start_server {tags {"keyspace"}} {
} {0} } {0}
test {RENAME against already existing key} { test {RENAME against already existing key} {
r set mykey a r set mykey{t} a
r set mykey2 b r set mykey2{t} b
r rename mykey2 mykey r rename mykey2{t} mykey{t}
set res [r get mykey] set res [r get mykey{t}]
append res [r exists mykey2] append res [r exists mykey2{t}]
} {b0} } {b0}
test {RENAMENX basic usage} { test {RENAMENX basic usage} {
r del mykey r del mykey{t}
r del mykey2 r del mykey2{t}
r set mykey foobar r set mykey{t} foobar
r renamenx mykey mykey2 r renamenx mykey{t} mykey2{t}
set res [r get mykey2] set res [r get mykey2{t}]
append res [r exists mykey] append res [r exists mykey{t}]
} {foobar0} } {foobar0}
test {RENAMENX against already existing key} { test {RENAMENX against already existing key} {
r set mykey foo r set mykey{t} foo
r set mykey2 bar r set mykey2{t} bar
r renamenx mykey mykey2 r renamenx mykey{t} mykey2{t}
} {0} } {0}
test {RENAMENX against already existing key (2)} { test {RENAMENX against already existing key (2)} {
set res [r get mykey] set res [r get mykey{t}]
append res [r get mykey2] append res [r get mykey2{t}]
} {foobar} } {foobar}
test {RENAME against non existing source key} { test {RENAME against non existing source key} {
catch {r rename nokey foobar} err catch {r rename nokey{t} foobar{t}} err
format $err format $err
} {ERR*} } {ERR*}
@ -134,22 +140,22 @@ start_server {tags {"keyspace"}} {
} {ERR*} } {ERR*}
test {RENAME with volatile key, should move the TTL as well} { test {RENAME with volatile key, should move the TTL as well} {
r del mykey mykey2 r del mykey{t} mykey2{t}
r set mykey foo r set mykey{t} foo
r expire mykey 100 r expire mykey{t} 100
assert {[r ttl mykey] > 95 && [r ttl mykey] <= 100} assert {[r ttl mykey{t}] > 95 && [r ttl mykey{t}] <= 100}
r rename mykey mykey2 r rename mykey{t} mykey2{t}
assert {[r ttl mykey2] > 95 && [r ttl mykey2] <= 100} assert {[r ttl mykey2{t}] > 95 && [r ttl mykey2{t}] <= 100}
} }
test {RENAME with volatile key, should not inherit TTL of target key} { test {RENAME with volatile key, should not inherit TTL of target key} {
r del mykey mykey2 r del mykey{t} mykey2{t}
r set mykey foo r set mykey{t} foo
r set mykey2 bar r set mykey2{t} bar
r expire mykey2 100 r expire mykey2{t} 100
assert {[r ttl mykey] == -1 && [r ttl mykey2] > 0} assert {[r ttl mykey{t}] == -1 && [r ttl mykey2{t}] > 0}
r rename mykey mykey2 r rename mykey{t} mykey2{t}
r ttl mykey2 r ttl mykey2{t}
} {-1} } {-1}
test {DEL all keys again (DB 0)} { test {DEL all keys again (DB 0)} {
@ -167,212 +173,216 @@ start_server {tags {"keyspace"}} {
set res [r dbsize] set res [r dbsize]
r select 9 r select 9
format $res format $res
} {0} } {0} {singledb:skip}
test {COPY basic usage for string} { test {COPY basic usage for string} {
r set mykey foobar r set mykey{t} foobar
set res {} set res {}
r copy mykey mynewkey r copy mykey{t} mynewkey{t}
lappend res [r get mynewkey] lappend res [r get mynewkey{t}]
lappend res [r dbsize] lappend res [r dbsize]
r copy mykey mynewkey DB 10 if {$::singledb} {
r select 10 assert_equal [list foobar 2] [format $res]
lappend res [r get mynewkey] } else {
lappend res [r dbsize] r copy mykey{t} mynewkey{t} DB 10
r select 9 r select 10
format $res lappend res [r get mynewkey{t}]
} [list foobar 2 foobar 1] lappend res [r dbsize]
r select 9
assert_equal [list foobar 2 foobar 1] [format $res]
}
}
test {COPY for string does not replace an existing key without REPLACE option} { test {COPY for string does not replace an existing key without REPLACE option} {
r set mykey2 hello r set mykey2{t} hello
catch {r copy mykey2 mynewkey DB 10} e catch {r copy mykey2{t} mynewkey{t} DB 10} e
set e set e
} {0} } {0} {singledb:skip}
test {COPY for string can replace an existing key with REPLACE option} { test {COPY for string can replace an existing key with REPLACE option} {
r copy mykey2 mynewkey DB 10 REPLACE r copy mykey2{t} mynewkey{t} DB 10 REPLACE
r select 10 r select 10
r get mynewkey r get mynewkey{t}
} {hello} } {hello} {singledb:skip}
test {COPY for string ensures that copied data is independent of copying data} { test {COPY for string ensures that copied data is independent of copying data} {
r flushdb r flushdb
r select 9 r select 9
r set mykey foobar r set mykey{t} foobar
set res {} set res {}
r copy mykey mynewkey DB 10 r copy mykey{t} mynewkey{t} DB 10
r select 10 r select 10
lappend res [r get mynewkey] lappend res [r get mynewkey{t}]
r set mynewkey hoge r set mynewkey{t} hoge
lappend res [r get mynewkey] lappend res [r get mynewkey{t}]
r select 9 r select 9
lappend res [r get mykey] lappend res [r get mykey{t}]
r select 10 r select 10
r flushdb r flushdb
r select 9 r select 9
format $res format $res
} [list foobar hoge foobar] } [list foobar hoge foobar] {singledb:skip}
test {COPY for string does not copy data to no-integer DB} { test {COPY for string does not copy data to no-integer DB} {
r set mykey foobar r set mykey{t} foobar
catch {r copy mykey mynewkey DB notanumber} e catch {r copy mykey{t} mynewkey{t} DB notanumber} e
set e set e
} {ERR value is not an integer or out of range} } {ERR value is not an integer or out of range}
test {COPY can copy key expire metadata as well} { test {COPY can copy key expire metadata as well} {
r set mykey foobar ex 100 r set mykey{t} foobar ex 100
r copy mykey mynewkey REPLACE r copy mykey{t} mynewkey{t} REPLACE
assert {[r ttl mynewkey] > 0 && [r ttl mynewkey] <= 100} assert {[r ttl mynewkey{t}] > 0 && [r ttl mynewkey{t}] <= 100}
assert {[r get mynewkey] eq "foobar"} assert {[r get mynewkey{t}] eq "foobar"}
} }
test {COPY does not create an expire if it does not exist} { test {COPY does not create an expire if it does not exist} {
r set mykey foobar r set mykey{t} foobar
assert {[r ttl mykey] == -1} assert {[r ttl mykey{t}] == -1}
r copy mykey mynewkey REPLACE r copy mykey{t} mynewkey{t} REPLACE
assert {[r ttl mynewkey] == -1} assert {[r ttl mynewkey{t}] == -1}
assert {[r get mynewkey] eq "foobar"} assert {[r get mynewkey{t}] eq "foobar"}
} }
test {COPY basic usage for list} { test {COPY basic usage for list} {
r del mylist mynewlist r del mylist{t} mynewlist{t}
r lpush mylist a b c d r lpush mylist{t} a b c d
r copy mylist mynewlist r copy mylist{t} mynewlist{t}
set digest [r debug digest-value mylist] set digest [debug_digest_value mylist{t}]
assert_equal $digest [r debug digest-value mynewlist] assert_equal $digest [debug_digest_value mynewlist{t}]
assert_equal 1 [r object refcount mylist] assert_equal 1 [r object refcount mylist{t}]
assert_equal 1 [r object refcount mynewlist] assert_equal 1 [r object refcount mynewlist{t}]
r del mylist r del mylist{t}
assert_equal $digest [r debug digest-value mynewlist] assert_equal $digest [debug_digest_value mynewlist{t}]
} }
test {COPY basic usage for intset set} { test {COPY basic usage for intset set} {
r del set1 newset1 r del set1{t} newset1{t}
r sadd set1 1 2 3 r sadd set1{t} 1 2 3
assert_encoding intset set1 assert_encoding intset set1{t}
r copy set1 newset1 r copy set1{t} newset1{t}
set digest [r debug digest-value set1] set digest [debug_digest_value set1{t}]
assert_equal $digest [r debug digest-value newset1] assert_equal $digest [debug_digest_value newset1{t}]
assert_equal 1 [r object refcount set1] assert_equal 1 [r object refcount set1{t}]
assert_equal 1 [r object refcount newset1] assert_equal 1 [r object refcount newset1{t}]
r del set1 r del set1{t}
assert_equal $digest [r debug digest-value newset1] assert_equal $digest [debug_digest_value newset1{t}]
} }
test {COPY basic usage for hashtable set} { test {COPY basic usage for hashtable set} {
r del set2 newset2 r del set2{t} newset2{t}
r sadd set2 1 2 3 a r sadd set2{t} 1 2 3 a
assert_encoding hashtable set2 assert_encoding hashtable set2{t}
r copy set2 newset2 r copy set2{t} newset2{t}
set digest [r debug digest-value set2] set digest [debug_digest_value set2{t}]
assert_equal $digest [r debug digest-value newset2] assert_equal $digest [debug_digest_value newset2{t}]
assert_equal 1 [r object refcount set2] assert_equal 1 [r object refcount set2{t}]
assert_equal 1 [r object refcount newset2] assert_equal 1 [r object refcount newset2{t}]
r del set2 r del set2{t}
assert_equal $digest [r debug digest-value newset2] assert_equal $digest [debug_digest_value newset2{t}]
} }
test {COPY basic usage for ziplist sorted set} { test {COPY basic usage for ziplist sorted set} {
r del zset1 newzset1 r del zset1{t} newzset1{t}
r zadd zset1 123 foobar r zadd zset1{t} 123 foobar
assert_encoding ziplist zset1 assert_encoding ziplist zset1{t}
r copy zset1 newzset1 r copy zset1{t} newzset1{t}
set digest [r debug digest-value zset1] set digest [debug_digest_value zset1{t}]
assert_equal $digest [r debug digest-value newzset1] assert_equal $digest [debug_digest_value newzset1{t}]
assert_equal 1 [r object refcount zset1] assert_equal 1 [r object refcount zset1{t}]
assert_equal 1 [r object refcount newzset1] assert_equal 1 [r object refcount newzset1{t}]
r del zset1 r del zset1{t}
assert_equal $digest [r debug digest-value newzset1] assert_equal $digest [debug_digest_value newzset1{t}]
} }
test {COPY basic usage for skiplist sorted set} { test {COPY basic usage for skiplist sorted set} {
r del zset2 newzset2 r del zset2{t} newzset2{t}
set original_max [lindex [r config get zset-max-ziplist-entries] 1] set original_max [lindex [r config get zset-max-ziplist-entries] 1]
r config set zset-max-ziplist-entries 0 r config set zset-max-ziplist-entries 0
for {set j 0} {$j < 130} {incr j} { for {set j 0} {$j < 130} {incr j} {
r zadd zset2 [randomInt 50] ele-[randomInt 10] r zadd zset2{t} [randomInt 50] ele-[randomInt 10]
} }
assert_encoding skiplist zset2 assert_encoding skiplist zset2{t}
r copy zset2 newzset2 r copy zset2{t} newzset2{t}
set digest [r debug digest-value zset2] set digest [debug_digest_value zset2{t}]
assert_equal $digest [r debug digest-value newzset2] assert_equal $digest [debug_digest_value newzset2{t}]
assert_equal 1 [r object refcount zset2] assert_equal 1 [r object refcount zset2{t}]
assert_equal 1 [r object refcount newzset2] assert_equal 1 [r object refcount newzset2{t}]
r del zset2 r del zset2{t}
assert_equal $digest [r debug digest-value newzset2] assert_equal $digest [debug_digest_value newzset2{t}]
r config set zset-max-ziplist-entries $original_max r config set zset-max-ziplist-entries $original_max
} }
test {COPY basic usage for ziplist hash} { test {COPY basic usage for ziplist hash} {
r del hash1 newhash1 r del hash1{t} newhash1{t}
r hset hash1 tmp 17179869184 r hset hash1{t} tmp 17179869184
assert_encoding ziplist hash1 assert_encoding ziplist hash1{t}
r copy hash1 newhash1 r copy hash1{t} newhash1{t}
set digest [r debug digest-value hash1] set digest [debug_digest_value hash1{t}]
assert_equal $digest [r debug digest-value newhash1] assert_equal $digest [debug_digest_value newhash1{t}]
assert_equal 1 [r object refcount hash1] assert_equal 1 [r object refcount hash1{t}]
assert_equal 1 [r object refcount newhash1] assert_equal 1 [r object refcount newhash1{t}]
r del hash1 r del hash1{t}
assert_equal $digest [r debug digest-value newhash1] assert_equal $digest [debug_digest_value newhash1{t}]
} }
test {COPY basic usage for hashtable hash} { test {COPY basic usage for hashtable hash} {
r del hash2 newhash2 r del hash2{t} newhash2{t}
set original_max [lindex [r config get hash-max-ziplist-entries] 1] set original_max [lindex [r config get hash-max-ziplist-entries] 1]
r config set hash-max-ziplist-entries 0 r config set hash-max-ziplist-entries 0
for {set i 0} {$i < 64} {incr i} { for {set i 0} {$i < 64} {incr i} {
r hset hash2 [randomValue] [randomValue] r hset hash2{t} [randomValue] [randomValue]
} }
assert_encoding hashtable hash2 assert_encoding hashtable hash2{t}
r copy hash2 newhash2 r copy hash2{t} newhash2{t}
set digest [r debug digest-value hash2] set digest [debug_digest_value hash2{t}]
assert_equal $digest [r debug digest-value newhash2] assert_equal $digest [debug_digest_value newhash2{t}]
assert_equal 1 [r object refcount hash2] assert_equal 1 [r object refcount hash2{t}]
assert_equal 1 [r object refcount newhash2] assert_equal 1 [r object refcount newhash2{t}]
r del hash2 r del hash2{t}
assert_equal $digest [r debug digest-value newhash2] assert_equal $digest [debug_digest_value newhash2{t}]
r config set hash-max-ziplist-entries $original_max r config set hash-max-ziplist-entries $original_max
} }
test {COPY basic usage for stream} { test {COPY basic usage for stream} {
r del mystream mynewstream r del mystream{t} mynewstream{t}
for {set i 0} {$i < 1000} {incr i} { for {set i 0} {$i < 1000} {incr i} {
r XADD mystream * item 2 value b r XADD mystream{t} * item 2 value b
} }
r copy mystream mynewstream r copy mystream{t} mynewstream{t}
set digest [r debug digest-value mystream] set digest [debug_digest_value mystream{t}]
assert_equal $digest [r debug digest-value mynewstream] assert_equal $digest [debug_digest_value mynewstream{t}]
assert_equal 1 [r object refcount mystream] assert_equal 1 [r object refcount mystream{t}]
assert_equal 1 [r object refcount mynewstream] assert_equal 1 [r object refcount mynewstream{t}]
r del mystream r del mystream{t}
assert_equal $digest [r debug digest-value mynewstream] assert_equal $digest [debug_digest_value mynewstream{t}]
} }
test {COPY basic usage for stream-cgroups} { test {COPY basic usage for stream-cgroups} {
r del x r del x{t}
r XADD x 100 a 1 r XADD x{t} 100 a 1
set id [r XADD x 101 b 1] set id [r XADD x{t} 101 b 1]
r XADD x 102 c 1 r XADD x{t} 102 c 1
r XADD x 103 e 1 r XADD x{t} 103 e 1
r XADD x 104 f 1 r XADD x{t} 104 f 1
r XADD x 105 g 1 r XADD x{t} 105 g 1
r XGROUP CREATE x g1 0 r XGROUP CREATE x{t} g1 0
r XGROUP CREATE x g2 0 r XGROUP CREATE x{t} g2 0
r XREADGROUP GROUP g1 Alice COUNT 1 STREAMS x > r XREADGROUP GROUP g1 Alice COUNT 1 STREAMS x{t} >
r XREADGROUP GROUP g1 Bob COUNT 1 STREAMS x > r XREADGROUP GROUP g1 Bob COUNT 1 STREAMS x{t} >
r XREADGROUP GROUP g1 Bob NOACK COUNT 1 STREAMS x > r XREADGROUP GROUP g1 Bob NOACK COUNT 1 STREAMS x{t} >
r XREADGROUP GROUP g2 Charlie COUNT 4 STREAMS x > r XREADGROUP GROUP g2 Charlie COUNT 4 STREAMS x{t} >
r XGROUP SETID x g1 $id r XGROUP SETID x{t} g1 $id
r XREADGROUP GROUP g1 Dave COUNT 3 STREAMS x > r XREADGROUP GROUP g1 Dave COUNT 3 STREAMS x{t} >
r XDEL x 103 r XDEL x{t} 103
r copy x newx r copy x{t} newx{t}
set info [r xinfo stream x full] set info [r xinfo stream x{t} full]
assert_equal $info [r xinfo stream newx full] assert_equal $info [r xinfo stream newx{t} full]
assert_equal 1 [r object refcount x] assert_equal 1 [r object refcount x{t}]
assert_equal 1 [r object refcount newx] assert_equal 1 [r object refcount newx{t}]
r del x r del x{t}
assert_equal $info [r xinfo stream newx full] assert_equal $info [r xinfo stream newx{t} full]
r flushdb r flushdb
} }
@ -387,18 +397,18 @@ start_server {tags {"keyspace"}} {
lappend res [r dbsize] lappend res [r dbsize]
r select 9 r select 9
format $res format $res
} [list 0 0 foobar 1] } [list 0 0 foobar 1] {singledb:skip}
test {MOVE against key existing in the target DB} { test {MOVE against key existing in the target DB} {
r set mykey hello r set mykey hello
r move mykey 10 r move mykey 10
} {0} } {0} {singledb:skip}
test {MOVE against non-integer DB (#1428)} { test {MOVE against non-integer DB (#1428)} {
r set mykey hello r set mykey hello
catch {r move mykey notanumber} e catch {r move mykey notanumber} e
set e set e
} {ERR value is not an integer or out of range} } {ERR value is not an integer or out of range} {singledb:skip}
test {MOVE can move key expire metadata as well} { test {MOVE can move key expire metadata as well} {
r select 10 r select 10
@ -411,7 +421,7 @@ start_server {tags {"keyspace"}} {
assert {[r ttl mykey] > 0 && [r ttl mykey] <= 100} assert {[r ttl mykey] > 0 && [r ttl mykey] <= 100}
assert {[r get mykey] eq "foo"} assert {[r get mykey] eq "foo"}
r select 9 r select 9
} } {OK} {singledb:skip}
test {MOVE does not create an expire if it does not exist} { test {MOVE does not create an expire if it does not exist} {
r select 10 r select 10
@ -424,7 +434,7 @@ start_server {tags {"keyspace"}} {
assert {[r ttl mykey] == -1} assert {[r ttl mykey] == -1}
assert {[r get mykey] eq "foo"} assert {[r get mykey] eq "foo"}
r select 9 r select 9
} } {OK} {singledb:skip}
test {SET/GET keys in different DBs} { test {SET/GET keys in different DBs} {
r set a hello r set a hello
@ -441,7 +451,7 @@ start_server {tags {"keyspace"}} {
lappend res [r get b] lappend res [r get b]
r select 9 r select 9
format $res format $res
} {hello world foo bared} } {hello world foo bared} {singledb:skip}
test {RANDOMKEY} { test {RANDOMKEY} {
r flushdb r flushdb

@ -1,4 +1,4 @@
start_server {tags {"latency-monitor"}} { start_server {tags {"latency-monitor needs:latency"}} {
# Set a threshold high enough to avoid spurious latency events. # Set a threshold high enough to avoid spurious latency events.
r config set latency-monitor-threshold 200 r config set latency-monitor-threshold 200
r latency reset r latency reset

@ -67,7 +67,7 @@ start_server {tags {"lazyfree"}} {
fail "lazyfree isn't done" fail "lazyfree isn't done"
} }
assert_equal [s lazyfreed_objects] 1 assert_equal [s lazyfreed_objects] 1
} } {} {needs:config-resetstat}
test "lazy free a stream with deleted cgroup" { test "lazy free a stream with deleted cgroup" {
r config resetstat r config resetstat
@ -83,5 +83,5 @@ start_server {tags {"lazyfree"}} {
fail "lazyfree isn't done" fail "lazyfree isn't done"
} }
assert_equal [s lazyfreed_objects] 0 assert_equal [s lazyfreed_objects] 0
} } {} {needs:config-resetstat}
} }

@ -1,4 +1,4 @@
start_server {tags {"limits network"} overrides {maxclients 10}} { start_server {tags {"limits network external:skip"} overrides {maxclients 10}} {
if {$::tls} { if {$::tls} {
set expected_code "*I/O error*" set expected_code "*I/O error*"
} else { } else {

@ -1,4 +1,4 @@
start_server {tags {"maxmemory"}} { start_server {tags {"maxmemory external:skip"}} {
test "Without maxmemory small integers are shared" { test "Without maxmemory small integers are shared" {
r config set maxmemory 0 r config set maxmemory 0
r set a 1 r set a 1
@ -144,7 +144,7 @@ start_server {tags {"maxmemory"}} {
} }
proc test_slave_buffers {test_name cmd_count payload_len limit_memory pipeline} { proc test_slave_buffers {test_name cmd_count payload_len limit_memory pipeline} {
start_server {tags {"maxmemory"}} { start_server {tags {"maxmemory external:skip"}} {
start_server {} { start_server {} {
set slave_pid [s process_id] set slave_pid [s process_id]
test "$test_name" { test "$test_name" {
@ -241,7 +241,7 @@ test_slave_buffers {slave buffer are counted correctly} 1000000 10 0 1
# test again with fewer (and bigger) commands without pipeline, but with eviction # test again with fewer (and bigger) commands without pipeline, but with eviction
test_slave_buffers "replica buffer don't induce eviction" 100000 100 1 0 test_slave_buffers "replica buffer don't induce eviction" 100000 100 1 0
start_server {tags {"maxmemory"}} { start_server {tags {"maxmemory external:skip"}} {
test {Don't rehash if used memory exceeds maxmemory after rehash} { test {Don't rehash if used memory exceeds maxmemory after rehash} {
r config set maxmemory 0 r config set maxmemory 0
r config set maxmemory-policy allkeys-random r config set maxmemory-policy allkeys-random
@ -261,7 +261,7 @@ start_server {tags {"maxmemory"}} {
} {4098} } {4098}
} }
start_server {tags {"maxmemory"}} { start_server {tags {"maxmemory external:skip"}} {
test {client tracking don't cause eviction feedback loop} { test {client tracking don't cause eviction feedback loop} {
r config set maxmemory 0 r config set maxmemory 0
r config set maxmemory-policy allkeys-lru r config set maxmemory-policy allkeys-lru

@ -21,7 +21,7 @@ proc test_memory_efficiency {range} {
return $efficiency return $efficiency
} }
start_server {tags {"memefficiency"}} { start_server {tags {"memefficiency external:skip"}} {
foreach {size_range expected_min_efficiency} { foreach {size_range expected_min_efficiency} {
32 0.15 32 0.15
64 0.25 64 0.25
@ -37,7 +37,7 @@ start_server {tags {"memefficiency"}} {
} }
run_solo {defrag} { run_solo {defrag} {
start_server {tags {"defrag"} overrides {appendonly yes auto-aof-rewrite-percentage 0 save ""}} { start_server {tags {"defrag external:skip"} overrides {appendonly yes auto-aof-rewrite-percentage 0 save ""}} {
if {[string match {*jemalloc*} [s mem_allocator]] && [r debug mallctl arenas.page] <= 8192} { if {[string match {*jemalloc*} [s mem_allocator]] && [r debug mallctl arenas.page] <= 8192} {
test "Active defrag" { test "Active defrag" {
r config set hz 100 r config set hz 100

@ -1,3 +1,13 @@
proc wait_for_dbsize {size} {
set r2 [redis_client]
wait_for_condition 50 100 {
[$r2 dbsize] == $size
} else {
fail "Target dbsize not reached"
}
$r2 close
}
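The new `wait_for_dbsize` proc above replaces a fixed `after 1100` sleep (see the `WATCH will consider touched expired keys` hunk further down) with bounded polling from a separate client, which is what makes the expiry test stable against slower external servers. The same retry-until-deadline pattern, sketched in Python with illustrative timings:

```python
import time

def wait_for_condition(max_tries: int, delay_s: float, predicate) -> None:
    """Poll predicate up to max_tries times, sleeping delay_s between tries,
    and fail loudly if it never holds -- mirroring the test suite's
    wait_for_condition idiom instead of a fixed sleep."""
    for _ in range(max_tries):
        if predicate():
            return
        time.sleep(delay_s)
    raise TimeoutError("condition not reached within deadline")

# Example: wait until a simulated dbsize drains to zero.
state = {"dbsize": 3}

def drained() -> bool:
    state["dbsize"] = max(0, state["dbsize"] - 1)  # stand-in for [$r2 dbsize]
    return state["dbsize"] == 0

wait_for_condition(50, 0.001, drained)
assert state["dbsize"] == 0
```

Note the Tcl helper deliberately opens its own client (`redis_client`) so that polling `DBSIZE` does not disturb the `WATCH`/`MULTI` state of the connection under test.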
start_server {tags {"multi"}} { start_server {tags {"multi"}} {
test {MULTI / EXEC basics} { test {MULTI / EXEC basics} {
r del mylist r del mylist
@ -47,47 +57,47 @@ start_server {tags {"multi"}} {
} {*ERR WATCH*} } {*ERR WATCH*}
     test {EXEC fails if there are errors while queueing commands #1} {
-        r del foo1 foo2
+        r del foo1{t} foo2{t}
         r multi
-        r set foo1 bar1
+        r set foo1{t} bar1
         catch {r non-existing-command}
-        r set foo2 bar2
+        r set foo2{t} bar2
         catch {r exec} e
         assert_match {EXECABORT*} $e
-        list [r exists foo1] [r exists foo2]
+        list [r exists foo1{t}] [r exists foo2{t}]
     } {0 0}
     test {EXEC fails if there are errors while queueing commands #2} {
         set rd [redis_deferring_client]
-        r del foo1 foo2
+        r del foo1{t} foo2{t}
         r multi
-        r set foo1 bar1
+        r set foo1{t} bar1
         $rd config set maxmemory 1
         assert {[$rd read] eq {OK}}
-        catch {r lpush mylist myvalue}
+        catch {r lpush mylist{t} myvalue}
         $rd config set maxmemory 0
         assert {[$rd read] eq {OK}}
-        r set foo2 bar2
+        r set foo2{t} bar2
         catch {r exec} e
         assert_match {EXECABORT*} $e
         $rd close
-        list [r exists foo1] [r exists foo2]
+        list [r exists foo1{t}] [r exists foo2{t}]
-    } {0 0}
+    } {0 0} {needs:config-maxmemory}
     test {If EXEC aborts, the client MULTI state is cleared} {
-        r del foo1 foo2
+        r del foo1{t} foo2{t}
         r multi
-        r set foo1 bar1
+        r set foo1{t} bar1
         catch {r non-existing-command}
-        r set foo2 bar2
+        r set foo2{t} bar2
         catch {r exec} e
         assert_match {EXECABORT*} $e
         r ping
     } {PONG}
     test {EXEC works on WATCHed key not modified} {
-        r watch x y z
+        r watch x{t} y{t} z{t}
-        r watch k
+        r watch k{t}
         r multi
         r ping
         r exec
@@ -103,9 +113,9 @@ start_server {tags {"multi"}} {
     } {}
     test {EXEC fail on WATCHed key modified (1 key of 5 watched)} {
-        r set x 30
+        r set x{t} 30
-        r watch a b x k z
+        r watch a{t} b{t} x{t} k{t} z{t}
-        r set x 40
+        r set x{t} 40
         r multi
         r ping
         r exec
@@ -119,7 +129,7 @@ start_server {tags {"multi"}} {
         r multi
         r ping
         r exec
-    } {}
+    } {} {cluster:skip}
     test {After successful EXEC key is no longer watched} {
         r set x 30
@@ -205,7 +215,7 @@ start_server {tags {"multi"}} {
         r multi
         r ping
         r exec
-    } {}
+    } {} {singledb:skip}
     test {SWAPDB is able to touch the watched keys that do not exist} {
         r flushall
@@ -217,7 +227,7 @@ start_server {tags {"multi"}} {
         r multi
         r ping
         r exec
-    } {}
+    } {} {singledb:skip}
     test {WATCH is able to remember the DB a key belongs to} {
         r select 5
@@ -232,7 +242,7 @@ start_server {tags {"multi"}} {
         # Restore original DB
         r select 9
         set res
-    } {PONG}
+    } {PONG} {singledb:skip}
     test {WATCH will consider touched keys target of EXPIRE} {
         r del x
@@ -245,11 +255,15 @@ start_server {tags {"multi"}} {
     } {}
     test {WATCH will consider touched expired keys} {
+        r flushall
         r del x
         r set x foo
         r expire x 1
         r watch x
-        after 1100
+        # Wait for the keys to expire.
+        wait_for_dbsize 0
         r multi
         r ping
         r exec
@@ -288,7 +302,7 @@ start_server {tags {"multi"}} {
             {exec}
         }
         close_replication_stream $repl
-    }
+    } {} {needs:repl}
     test {MULTI / EXEC is propagated correctly (empty transaction)} {
         set repl [attach_to_replication_stream]
@@ -300,7 +314,7 @@ start_server {tags {"multi"}} {
             {set foo bar}
         }
         close_replication_stream $repl
-    }
+    } {} {needs:repl}
     test {MULTI / EXEC is propagated correctly (read-only commands)} {
         r set foo value1
@@ -314,10 +328,11 @@ start_server {tags {"multi"}} {
             {set foo value2}
         }
         close_replication_stream $repl
-    }
+    } {} {needs:repl}
     test {MULTI / EXEC is propagated correctly (write command, no effect)} {
-        r del bar foo bar
+        r del bar
+        r del foo
         set repl [attach_to_replication_stream]
         r multi
         r del foo
@@ -332,7 +347,7 @@ start_server {tags {"multi"}} {
             {incr foo}
         }
         close_replication_stream $repl
-    }
+    } {} {needs:repl}
     test {DISCARD should not fail during OOM} {
         set rd [redis_deferring_client]
@@ -346,7 +361,7 @@ start_server {tags {"multi"}} {
         assert {[$rd read] eq {OK}}
         $rd close
         r ping
-    } {PONG}
+    } {PONG} {needs:config-maxmemory}
     test {MULTI and script timeout} {
         # check that if MULTI arrives during timeout, it is either refused, or
@@ -460,8 +475,8 @@ start_server {tags {"multi"}} {
         # make sure that the INCR wasn't executed
         assert { $xx == 1}
         $r1 config set min-replicas-to-write 0
-        $r1 close;
+        $r1 close
-    }
+    } {0} {needs:repl}
     test {exec with read commands and stale replica state change} {
         # check that exec that contains read commands fails if server state changed since they were queued
@@ -491,8 +506,8 @@ start_server {tags {"multi"}} {
         set xx [r exec]
         # make sure that the INCR was executed
         assert { $xx == 1 }
-        $r1 close;
+        $r1 close
-    }
+    } {0} {needs:repl cluster:skip}
     test {EXEC with only read commands should not be rejected when OOM} {
         set r2 [redis_client]
@@ -511,7 +526,7 @@ start_server {tags {"multi"}} {
         # releasing OOM
         $r2 config set maxmemory 0
         $r2 close
-    }
+    } {0} {needs:config-maxmemory}
     test {EXEC with at least one use-memory command should fail} {
         set r2 [redis_client]
@@ -530,20 +545,20 @@ start_server {tags {"multi"}} {
         # releasing OOM
         $r2 config set maxmemory 0
         $r2 close
-    }
+    } {0} {needs:config-maxmemory}
     test {Blocking commands ignores the timeout} {
-        r xgroup create s g $ MKSTREAM
+        r xgroup create s{t} g $ MKSTREAM
         set m [r multi]
-        r blpop empty_list 0
+        r blpop empty_list{t} 0
-        r brpop empty_list 0
+        r brpop empty_list{t} 0
-        r brpoplpush empty_list1 empty_list2 0
+        r brpoplpush empty_list1{t} empty_list2{t} 0
-        r blmove empty_list1 empty_list2 LEFT LEFT 0
+        r blmove empty_list1{t} empty_list2{t} LEFT LEFT 0
-        r bzpopmin empty_zset 0
+        r bzpopmin empty_zset{t} 0
-        r bzpopmax empty_zset 0
+        r bzpopmax empty_zset{t} 0
-        r xread BLOCK 0 STREAMS s $
+        r xread BLOCK 0 STREAMS s{t} $
-        r xreadgroup group g c BLOCK 0 STREAMS s >
+        r xreadgroup group g c BLOCK 0 STREAMS s{t} >
         set res [r exec]
         list $m $res
@@ -564,7 +579,7 @@ start_server {tags {"multi"}} {
             {exec}
         }
         close_replication_stream $repl
-    }
+    } {} {needs:repl cluster:skip}
     test {MULTI propagation of SCRIPT LOAD} {
         set repl [attach_to_replication_stream]
@@ -582,7 +597,7 @@ start_server {tags {"multi"}} {
             {exec}
         }
         close_replication_stream $repl
-    }
+    } {} {needs:repl}
     test {MULTI propagation of SCRIPT LOAD} {
         set repl [attach_to_replication_stream]
@@ -600,7 +615,7 @@ start_server {tags {"multi"}} {
             {exec}
         }
         close_replication_stream $repl
-    }
+    } {} {needs:repl}
     tags {"stream"} {
         test {MULTI propagation of XREADGROUP} {
@@ -624,7 +639,7 @@ start_server {tags {"multi"}} {
                 {exec}
             }
             close_replication_stream $repl
-        }
+        } {} {needs:repl}
     }
 }
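The `{t}` suffixes added throughout this file are Redis Cluster hash tags: when a key name contains a `{...}` section, only the text inside the braces is hashed, so multi-key commands such as `EXISTS foo1{t} foo2{t}` land on a single slot and no longer fail with `CROSSSLOT` when the external server runs in cluster mode. As a rough standalone sketch (plain Python, not part of the test suite), the slot mapping works like this:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster slots, honoring {...} hash tags."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # only a non-empty tag is used
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# All keys sharing the {t} tag hash to the same slot as the bare key "t":
assert key_slot("foo1{t}") == key_slot("foo2{t}") == key_slot("t")
```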

@@ -20,7 +20,7 @@ test {CONFIG SET port number} {
         $rd PING
         $rd close
     }
-}
+} {} {external:skip}
 test {CONFIG SET bind address} {
     start_server {} {
@@ -33,4 +33,4 @@ test {CONFIG SET bind address} {
         $rd PING
         $rd close
     }
-}
+} {} {external:skip}

@@ -1,4 +1,4 @@
-start_server {tags {"obuf-limits"}} {
+start_server {tags {"obuf-limits external:skip"}} {
     test {Client output buffer hard limit is enforced} {
         r config set client-output-buffer-limit {pubsub 100000 0 0}
         set rd1 [redis_deferring_client]

@@ -2,7 +2,7 @@ set system_name [string tolower [exec uname -s]]
 set user_id [exec id -u]
 if {$system_name eq {linux}} {
-    start_server {tags {"oom-score-adj"}} {
+    start_server {tags {"oom-score-adj external:skip"}} {
         proc get_oom_score_adj {{pid ""}} {
             if {$pid == ""} {
                 set pid [srv 0 pid]

@@ -1,4 +1,4 @@
-start_server {overrides {save ""} tags {"other"}} {
+start_server {tags {"other"}} {
     if {$::force_failure} {
         # This is used just for test suite development purposes.
         test {Failing test} {
@@ -17,7 +17,7 @@ start_server {overrides {save ""} tags {"other"}} {
         r zadd mytestzset 20 b
         r zadd mytestzset 30 c
         r save
-    } {OK}
+    } {OK} {needs:save}
     tags {slow} {
         if {$::accurate} {set iterations 10000} else {set iterations 1000}
@@ -47,65 +47,65 @@ start_server {overrides {save ""} tags {"other"}} {
         waitForBgsave r
         r debug reload
         r get x
-    } {10}
+    } {10} {needs:save}
     test {SELECT an out of range DB} {
         catch {r select 1000000} err
         set _ $err
-    } {*index is out of range*}
+    } {*index is out of range*} {cluster:skip}
     tags {consistency} {
-        if {true} {
-            if {$::accurate} {set numops 10000} else {set numops 1000}
-            test {Check consistency of different data types after a reload} {
-                r flushdb
-                createComplexDataset r $numops
-                set dump [csvdump r]
-                set sha1 [r debug digest]
-                r debug reload
-                set sha1_after [r debug digest]
-                if {$sha1 eq $sha1_after} {
-                    set _ 1
-                } else {
-                    set newdump [csvdump r]
-                    puts "Consistency test failed!"
-                    puts "You can inspect the two dumps in /tmp/repldump*.txt"
-                    set fd [open /tmp/repldump1.txt w]
-                    puts $fd $dump
-                    close $fd
-                    set fd [open /tmp/repldump2.txt w]
-                    puts $fd $newdump
-                    close $fd
-                    set _ 0
-                }
-            } {1}
-            test {Same dataset digest if saving/reloading as AOF?} {
-                r config set aof-use-rdb-preamble no
-                r bgrewriteaof
-                waitForBgrewriteaof r
-                r debug loadaof
-                set sha1_after [r debug digest]
-                if {$sha1 eq $sha1_after} {
-                    set _ 1
-                } else {
-                    set newdump [csvdump r]
-                    puts "Consistency test failed!"
-                    puts "You can inspect the two dumps in /tmp/aofdump*.txt"
-                    set fd [open /tmp/aofdump1.txt w]
-                    puts $fd $dump
-                    close $fd
-                    set fd [open /tmp/aofdump2.txt w]
-                    puts $fd $newdump
-                    close $fd
-                    set _ 0
-                }
-            } {1}
-        }
+        proc check_consistency {dumpname code} {
+            set dump [csvdump r]
+            set sha1 [r debug digest]
+
+            uplevel 1 $code
+
+            set sha1_after [r debug digest]
+            if {$sha1 eq $sha1_after} {
+                return 1
+            }
+
+            # Failed
+            set newdump [csvdump r]
+            puts "Consistency test failed!"
+            puts "You can inspect the two dumps in /tmp/${dumpname}*.txt"
+
+            set fd [open /tmp/${dumpname}1.txt w]
+            puts $fd $dump
+            close $fd
+            set fd [open /tmp/${dumpname}2.txt w]
+            puts $fd $newdump
+            close $fd
+
+            return 0
+        }
+
+        if {$::accurate} {set numops 10000} else {set numops 1000}
+        test {Check consistency of different data types after a reload} {
+            r flushdb
+            createComplexDataset r $numops usetag
+            if {$::ignoredigest} {
+                set _ 1
+            } else {
+                check_consistency {repldump} {
+                    r debug reload
+                }
+            }
+        } {1}
+        test {Same dataset digest if saving/reloading as AOF?} {
+            if {$::ignoredigest} {
+                set _ 1
+            } else {
+                check_consistency {aofdump} {
+                    r config set aof-use-rdb-preamble no
+                    r bgrewriteaof
+                    waitForBgrewriteaof r
+                    r debug loadaof
+                }
+            }
+        } {1} {needs:debug}
     }
     test {EXPIRES after a reload (snapshot + append only file rewrite)} {
@@ -122,7 +122,7 @@ start_server {overrides {save ""} tags {"other"}} {
         set ttl [r ttl x]
         set e2 [expr {$ttl > 900 && $ttl <= 1000}]
         list $e1 $e2
-    } {1 1}
+    } {1 1} {needs:debug needs:save}
     test {EXPIRES after AOF reload (without rewrite)} {
         r flushdb
@@ -162,7 +162,7 @@ start_server {overrides {save ""} tags {"other"}} {
         set ttl [r ttl pz]
         assert {$ttl > 2900 && $ttl <= 3000}
         r config set appendonly no
-    }
+    } {OK} {needs:debug}
     tags {protocol} {
         test {PIPELINING stresser (also a regression for the old epoll bug)} {
@@ -237,18 +237,23 @@ start_server {overrides {save ""} tags {"other"}} {
     # Leave the user with a clean DB before to exit
     test {FLUSHDB} {
         set aux {}
-        r select 9
-        r flushdb
-        lappend aux [r dbsize]
-        r select 10
-        r flushdb
-        lappend aux [r dbsize]
+        if {$::singledb} {
+            r flushdb
+            lappend aux 0 [r dbsize]
+        } else {
+            r select 9
+            r flushdb
+            lappend aux [r dbsize]
+            r select 10
+            r flushdb
+            lappend aux [r dbsize]
+        }
     } {0 0}
     test {Perform a final SAVE to leave a clean DB on disk} {
         waitForBgsave r
         r save
-    } {OK}
+    } {OK} {needs:save}
     test {RESET clears client state} {
         r client setname test-client
@@ -258,7 +263,7 @@ start_server {overrides {save ""} tags {"other"}} {
         set client [r client list]
         assert_match {*name= *} $client
         assert_match {*flags=N *} $client
-    }
+    } {} {needs:reset}
     test {RESET clears MONITOR state} {
         set rd [redis_deferring_client]
@@ -269,7 +274,7 @@ start_server {overrides {save ""} tags {"other"}} {
         assert_equal [$rd read] "RESET"
         assert_no_match {*flags=O*} [r client list]
-    }
+    } {} {needs:reset}
     test {RESET clears and discards MULTI state} {
         r multi
@@ -278,7 +283,7 @@ start_server {overrides {save ""} tags {"other"}} {
         r reset
         catch {r exec} err
         assert_match {*EXEC without MULTI*} $err
-    }
+    } {} {needs:reset}
     test {RESET clears Pub/Sub state} {
         r subscribe channel-1
@@ -286,7 +291,7 @@ start_server {overrides {save ""} tags {"other"}} {
         # confirm we're not subscribed by executing another command
         r set key val
-    }
+    } {OK} {needs:reset}
     test {RESET clears authenticated state} {
         r acl setuser user1 on >secret +@all
@@ -296,10 +301,10 @@ start_server {overrides {save ""} tags {"other"}} {
         r reset
         assert_equal [r acl whoami] default
-    }
+    } {} {needs:reset}
 }
-start_server {tags {"other"}} {
+start_server {tags {"other external:skip"}} {
     test {Don't rehash if redis has child proecess} {
         r config set save ""
         r config set rdb-key-save-delay 1000000
@@ -322,7 +327,7 @@ start_server {tags {"other"}} {
         # size is power of two and over 4098, so it is 8192
         r set k3 v3
         assert_match "*table size: 8192*" [r debug HTSTATS 9]
-    }
+    } {} {needs:local-process}
 }
 proc read_proc_title {pid} {
@@ -333,7 +338,7 @@ proc read_proc_title {pid} {
     return $cmdline
 }
-start_server {tags {"other"}} {
+start_server {tags {"other external:skip"}} {
     test {Process title set as expected} {
         # Test only on Linux where it's easy to get cmdline without relying on tools.
         # Skip valgrind as it messes up the arguments.
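The repeated dump/digest/compare blocks above were factored into a single `check_consistency` helper that snapshots the dataset, runs the caller's reload code, and compares digests afterwards, writing both dumps to `/tmp` on mismatch. The shape of that pattern, sketched in standalone Python (the `snapshot`/`reload_fn` names are illustrative stand-ins, not the Tcl helper's API):

```python
import hashlib
import json
import pathlib
import tempfile

def check_consistency(dumpname, dataset, reload_fn):
    """Digest the dataset, run a reload round-trip, and re-digest.

    On mismatch, write both dumps to temp files for inspection and
    return False; on a clean round-trip, return True.
    """
    def digest(d):
        return hashlib.sha1(json.dumps(d, sort_keys=True).encode()).hexdigest()

    before = digest(dataset)
    after_data = reload_fn(dataset)
    if digest(after_data) == before:
        return True

    # Failed: leave both dumps behind so the difference can be inspected.
    tmp = pathlib.Path(tempfile.gettempdir())
    (tmp / f"{dumpname}1.txt").write_text(json.dumps(dataset))
    (tmp / f"{dumpname}2.txt").write_text(json.dumps(after_data))
    return False

# A lossless round-trip (serialize + parse) keeps the digest stable:
assert check_consistency("repldump", {"x": 1}, lambda d: json.loads(json.dumps(d)))
# A lossy "reload" is detected:
assert not check_consistency("aofdump", {"x": 1}, lambda d: {})
```

Parameterizing on `dumpname` is what lets the same helper serve both the RDB-reload (`repldump`) and AOF-reload (`aofdump`) tests.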

@@ -12,7 +12,7 @@ proc prepare_value {size} {
     return $_v
 }
-start_server {tags {"wait"}} {
+start_server {tags {"wait external:skip"}} {
 start_server {} {
     set slave [srv 0 client]
     set slave_host [srv 0 host]

@@ -1,4 +1,10 @@
 start_server {tags {"pubsub network"}} {
+    if {$::singledb} {
+        set db 0
+    } else {
+        set db 9
+    }
     test "Pub/Sub PING" {
         set rd1 [redis_deferring_client]
         subscribe $rd1 somechannel
@@ -182,7 +188,7 @@ start_server {tags {"pubsub network"}} {
         set rd1 [redis_deferring_client]
         assert_equal {1} [psubscribe $rd1 *]
         r set foo bar
-        assert_equal {pmessage * __keyspace@9__:foo set} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:foo set" [$rd1 read]
         $rd1 close
     }
@@ -191,7 +197,7 @@ start_server {tags {"pubsub network"}} {
         set rd1 [redis_deferring_client]
         assert_equal {1} [psubscribe $rd1 *]
         r set foo bar
-        assert_equal {pmessage * __keyevent@9__:set foo} [$rd1 read]
+        assert_equal "pmessage * __keyevent@${db}__:set foo" [$rd1 read]
         $rd1 close
     }
@@ -200,8 +206,8 @@ start_server {tags {"pubsub network"}} {
         set rd1 [redis_deferring_client]
         assert_equal {1} [psubscribe $rd1 *]
         r set foo bar
-        assert_equal {pmessage * __keyspace@9__:foo set} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:foo set" [$rd1 read]
-        assert_equal {pmessage * __keyevent@9__:set foo} [$rd1 read]
+        assert_equal "pmessage * __keyevent@${db}__:set foo" [$rd1 read]
         $rd1 close
     }
@@ -213,8 +219,8 @@ start_server {tags {"pubsub network"}} {
         r set foo bar
         r lpush mylist a
         # No notification for set, because only list commands are enabled.
-        assert_equal {pmessage * __keyspace@9__:mylist lpush} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:mylist lpush" [$rd1 read]
-        assert_equal {pmessage * __keyevent@9__:lpush mylist} [$rd1 read]
+        assert_equal "pmessage * __keyevent@${db}__:lpush mylist" [$rd1 read]
         $rd1 close
     }
@@ -225,10 +231,10 @@ start_server {tags {"pubsub network"}} {
         r set foo bar
         r expire foo 1
         r del foo
-        assert_equal {pmessage * __keyspace@9__:foo expire} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:foo expire" [$rd1 read]
-        assert_equal {pmessage * __keyevent@9__:expire foo} [$rd1 read]
+        assert_equal "pmessage * __keyevent@${db}__:expire foo" [$rd1 read]
-        assert_equal {pmessage * __keyspace@9__:foo del} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:foo del" [$rd1 read]
-        assert_equal {pmessage * __keyevent@9__:del foo} [$rd1 read]
+        assert_equal "pmessage * __keyevent@${db}__:del foo" [$rd1 read]
         $rd1 close
     }
@@ -240,12 +246,12 @@ start_server {tags {"pubsub network"}} {
         r lpush mylist a
         r rpush mylist a
         r rpop mylist
-        assert_equal {pmessage * __keyspace@9__:mylist lpush} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:mylist lpush" [$rd1 read]
-        assert_equal {pmessage * __keyevent@9__:lpush mylist} [$rd1 read]
+        assert_equal "pmessage * __keyevent@${db}__:lpush mylist" [$rd1 read]
-        assert_equal {pmessage * __keyspace@9__:mylist rpush} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:mylist rpush" [$rd1 read]
-        assert_equal {pmessage * __keyevent@9__:rpush mylist} [$rd1 read]
+        assert_equal "pmessage * __keyevent@${db}__:rpush mylist" [$rd1 read]
-        assert_equal {pmessage * __keyspace@9__:mylist rpop} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:mylist rpop" [$rd1 read]
-        assert_equal {pmessage * __keyevent@9__:rpop mylist} [$rd1 read]
+        assert_equal "pmessage * __keyevent@${db}__:rpop mylist" [$rd1 read]
         $rd1 close
     }
@@ -258,9 +264,9 @@ start_server {tags {"pubsub network"}} {
         r srem myset x
         r sadd myset x y z
         r srem myset x
-        assert_equal {pmessage * __keyspace@9__:myset sadd} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:myset sadd" [$rd1 read]
-        assert_equal {pmessage * __keyspace@9__:myset sadd} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:myset sadd" [$rd1 read]
-        assert_equal {pmessage * __keyspace@9__:myset srem} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:myset srem" [$rd1 read]
         $rd1 close
     }
@@ -273,9 +279,9 @@ start_server {tags {"pubsub network"}} {
         r zrem myzset x
         r zadd myzset 3 x 4 y 5 z
         r zrem myzset x
-        assert_equal {pmessage * __keyspace@9__:myzset zadd} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:myzset zadd" [$rd1 read]
-        assert_equal {pmessage * __keyspace@9__:myzset zadd} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:myzset zadd" [$rd1 read]
-        assert_equal {pmessage * __keyspace@9__:myzset zrem} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:myzset zrem" [$rd1 read]
         $rd1 close
     }
@@ -286,8 +292,8 @@ start_server {tags {"pubsub network"}} {
         assert_equal {1} [psubscribe $rd1 *]
         r hmset myhash yes 1 no 0
         r hincrby myhash yes 10
-        assert_equal {pmessage * __keyspace@9__:myhash hset} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:myhash hset" [$rd1 read]
-        assert_equal {pmessage * __keyspace@9__:myhash hincrby} [$rd1 read]
+        assert_equal "pmessage * __keyspace@${db}__:myhash hincrby" [$rd1 read]
         $rd1 close
     }
@@ -302,7 +308,7 @@ start_server {tags {"pubsub network"}} {
         } else {
             fail "Key does not expire?!"
         }
-        assert_equal {pmessage * __keyevent@9__:expired foo} [$rd1 read]
+        assert_equal "pmessage * __keyevent@${db}__:expired foo" [$rd1 read]
         $rd1 close
     }
@@ -312,7 +318,7 @@ start_server {tags {"pubsub network"}} {
         set rd1 [redis_deferring_client]
         assert_equal {1} [psubscribe $rd1 *]
         r psetex foo 100 1
-        assert_equal {pmessage * __keyevent@9__:expired foo} [$rd1 read]
+        assert_equal "pmessage * __keyevent@${db}__:expired foo" [$rd1 read]
         $rd1 close
     }
@@ -324,10 +330,11 @@ start_server {tags {"pubsub network"}} {
         assert_equal {1} [psubscribe $rd1 *]
         r set foo bar
         r config set maxmemory 1
-        assert_equal {pmessage * __keyevent@9__:evicted foo} [$rd1 read]
+        assert_equal "pmessage * __keyevent@${db}__:evicted foo" [$rd1 read]
         r config set maxmemory 0
         $rd1 close
-    }
+        r config set maxmemory-policy noeviction
+    } {OK} {needs:config-maxmemory}
     test "Keyspace notifications: test CONFIG GET/SET of event flags" {
         r config set notify-keyspace-events gKE

@@ -1,7 +1,7 @@
 start_server {tags {"scan network"}} {
     test "SCAN basic" {
         r flushdb
-        r debug populate 1000
+        populate 1000
         set cur 0
         set keys {}
@@ -19,7 +19,7 @@ start_server {tags {"scan network"}} {
     test "SCAN COUNT" {
         r flushdb
-        r debug populate 1000
+        populate 1000
         set cur 0
         set keys {}
@@ -37,7 +37,7 @@ start_server {tags {"scan network"}} {
     test "SCAN MATCH" {
         r flushdb
-        r debug populate 1000
+        populate 1000
         set cur 0
         set keys {}
@@ -56,7 +56,7 @@ start_server {tags {"scan network"}} {
     test "SCAN TYPE" {
         r flushdb
         # populate only creates strings
-        r debug populate 1000
+        populate 1000
         # Check non-strings are excluded
         set cur 0
@@ -114,7 +114,7 @@ start_server {tags {"scan network"}} {
         r sadd set {*}$elements
         # Verify that the encoding matches.
-        assert {[r object encoding set] eq $enc}
+        assert_encoding $enc set
         # Test SSCAN
         set cur 0
@@ -148,7 +148,7 @@ start_server {tags {"scan network"}} {
         r hmset hash {*}$elements
         # Verify that the encoding matches.
-        assert {[r object encoding hash] eq $enc}
+        assert_encoding $enc hash
         # Test HSCAN
         set cur 0
@@ -188,7 +188,7 @@ start_server {tags {"scan network"}} {
         r zadd zset {*}$elements
         # Verify that the encoding matches.
-        assert {[r object encoding zset] eq $enc}
+        assert_encoding $enc zset
         # Test ZSCAN
         set cur 0
@@ -214,7 +214,7 @@ start_server {tags {"scan network"}} {
     test "SCAN guarantees check under write load" {
         r flushdb
-        r debug populate 100
+        populate 100
         # We start scanning here, so keys from 0 to 99 should all be
         # reported at the end of the iteration.

@ -35,8 +35,8 @@ start_server {tags {"scripting"}} {
} {1 2 3 ciao {1 2}} } {1 2 3 ciao {1 2}}
test {EVAL - Are the KEYS and ARGV arrays populated correctly?} { test {EVAL - Are the KEYS and ARGV arrays populated correctly?} {
r eval {return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}} 2 a b c d r eval {return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}} 2 a{t} b{t} c{t} d{t}
} {a b c d} } {a{t} b{t} c{t} d{t}}
test {EVAL - is Lua able to call Redis API?} { test {EVAL - is Lua able to call Redis API?} {
r set mykey myval r set mykey myval
@ -116,7 +116,7 @@ start_server {tags {"scripting"}} {
r select 10 r select 10
r set mykey "this is DB 10" r set mykey "this is DB 10"
r eval {return redis.pcall('get',KEYS[1])} 1 mykey r eval {return redis.pcall('get',KEYS[1])} 1 mykey
} {this is DB 10} } {this is DB 10} {singledb:skip}
test {EVAL - SELECT inside Lua should not affect the caller} { test {EVAL - SELECT inside Lua should not affect the caller} {
# here we DB 10 is selected # here we DB 10 is selected
@ -125,7 +125,7 @@ start_server {tags {"scripting"}} {
set res [r get mykey] set res [r get mykey]
r select 9 r select 9
set res set res
} {original value} } {original value} {singledb:skip}
if 0 { if 0 {
test {EVAL - Script can't run more than configured time limit} { test {EVAL - Script can't run more than configured time limit} {
@ -195,7 +195,7 @@ start_server {tags {"scripting"}} {
} e } e
r debug lua-always-replicate-commands 1 r debug lua-always-replicate-commands 1
set e set e
} {*not allowed after*} } {*not allowed after*} {needs:debug}
test {EVAL - No arguments to redis.call/pcall is considered an error} { test {EVAL - No arguments to redis.call/pcall is considered an error} {
set e {} set e {}
@ -368,25 +368,25 @@ start_server {tags {"scripting"}} {
set res [r eval {return redis.call('smembers',KEYS[1])} 1 myset] set res [r eval {return redis.call('smembers',KEYS[1])} 1 myset]
r debug lua-always-replicate-commands 1 r debug lua-always-replicate-commands 1
set res set res
} {a aa aaa azz b c d e f g h i l m n o p q r s t u v z} } {a aa aaa azz b c d e f g h i l m n o p q r s t u v z} {needs:debug}
test "SORT is normally not alpha re-ordered for the scripting engine" { test "SORT is normally not alpha re-ordered for the scripting engine" {
r del myset r del myset
r sadd myset 1 2 3 4 10 r sadd myset 1 2 3 4 10
r eval {return redis.call('sort',KEYS[1],'desc')} 1 myset r eval {return redis.call('sort',KEYS[1],'desc')} 1 myset
} {10 4 3 2 1} } {10 4 3 2 1} {cluster:skip}
test "SORT BY <constant> output gets ordered for scripting" { test "SORT BY <constant> output gets ordered for scripting" {
r del myset r del myset
r sadd myset a b c d e f g h i l m n o p q r s t u v z aa aaa azz r sadd myset a b c d e f g h i l m n o p q r s t u v z aa aaa azz
r eval {return redis.call('sort',KEYS[1],'by','_')} 1 myset r eval {return redis.call('sort',KEYS[1],'by','_')} 1 myset
} {a aa aaa azz b c d e f g h i l m n o p q r s t u v z} } {a aa aaa azz b c d e f g h i l m n o p q r s t u v z} {cluster:skip}
test "SORT BY <constant> with GET gets ordered for scripting" { test "SORT BY <constant> with GET gets ordered for scripting" {
r del myset r del myset
r sadd myset a b c r sadd myset a b c
r eval {return redis.call('sort',KEYS[1],'by','_','get','#','get','_:*')} 1 myset r eval {return redis.call('sort',KEYS[1],'by','_','get','#','get','_:*')} 1 myset
} {a {} b {} c {}} } {a {} b {} c {}} {cluster:skip}
test "redis.sha1hex() implementation" { test "redis.sha1hex() implementation" {
list [r eval {return redis.sha1hex('')} 0] \ list [r eval {return redis.sha1hex('')} 0] \
@ -477,9 +477,9 @@ start_server {tags {"scripting"}} {
r debug loadaof r debug loadaof
set res [r get foo] set res [r get foo]
r slaveof no one r slaveof no one
r config set aof-use-rdb-preamble yes
set res set res
} {102} } {102} {external:skip}
r config set aof-use-rdb-preamble yes
test {EVAL timeout from AOF} { test {EVAL timeout from AOF} {
# generate a long running script that is propagated to the AOF as script # generate a long running script that is propagated to the AOF as script
@ -516,16 +516,16 @@ start_server {tags {"scripting"}} {
if {$::verbose} { puts "loading took $elapsed milliseconds" } if {$::verbose} { puts "loading took $elapsed milliseconds" }
$rd close $rd close
r get x r get x
} {y} } {y} {external:skip}
test {We can call scripts rewriting client->argv from Lua} { test {We can call scripts rewriting client->argv from Lua} {
r del myset r del myset
r sadd myset a b c r sadd myset a b c
r mset a 1 b 2 c 3 d 4 r mset a{t} 1 b{t} 2 c{t} 3 d{t} 4
assert {[r spop myset] ne {}} assert {[r spop myset] ne {}}
assert {[r spop myset 1] ne {}} assert {[r spop myset 1] ne {}}
assert {[r spop myset] ne {}} assert {[r spop myset] ne {}}
assert {[r mget a b c d] eq {1 2 3 4}} assert {[r mget a{t} b{t} c{t} d{t}] eq {1 2 3 4}}
assert {[r spop myset] eq {}} assert {[r spop myset] eq {}}
} }
@ -539,7 +539,7 @@ start_server {tags {"scripting"}} {
end end
redis.call('rpush','mylist',unpack(x)) redis.call('rpush','mylist',unpack(x))
return redis.call('lrange','mylist',0,-1) return redis.call('lrange','mylist',0,-1)
} 0 } 1 mylist
} {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100} } {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100}
test {Number conversion precision test (issue #1118)} { test {Number conversion precision test (issue #1118)} {
@ -547,14 +547,14 @@ start_server {tags {"scripting"}} {
local value = 9007199254740991 local value = 9007199254740991
redis.call("set","foo",value) redis.call("set","foo",value)
return redis.call("get","foo") return redis.call("get","foo")
} 0 } 1 foo
} {9007199254740991} } {9007199254740991}
test {String containing number precision test (regression of issue #1118)} { test {String containing number precision test (regression of issue #1118)} {
r eval { r eval {
redis.call("set", "key", "12039611435714932082") redis.call("set", "key", "12039611435714932082")
return redis.call("get", "key") return redis.call("get", "key")
} 0 } 1 key
} {12039611435714932082} } {12039611435714932082}
test {Verify negative arg count is error instead of crash (issue #1842)} { test {Verify negative arg count is error instead of crash (issue #1842)} {
@ -565,13 +565,13 @@ start_server {tags {"scripting"}} {
test {Correct handling of reused argv (issue #1939)} { test {Correct handling of reused argv (issue #1939)} {
r eval { r eval {
for i = 0, 10 do for i = 0, 10 do
redis.call('SET', 'a', '1') redis.call('SET', 'a{t}', '1')
redis.call('MGET', 'a', 'b', 'c') redis.call('MGET', 'a{t}', 'b{t}', 'c{t}')
redis.call('EXPIRE', 'a', 0) redis.call('EXPIRE', 'a{t}', 0)
redis.call('GET', 'a') redis.call('GET', 'a{t}')
redis.call('MGET', 'a', 'b', 'c') redis.call('MGET', 'a{t}', 'b{t}', 'c{t}')
end end
} 0 } 3 a{t} b{t} c{t}
} }
test {Functions in the Redis namespace are able to report errors} { test {Functions in the Redis namespace are able to report errors} {
@ -705,7 +705,7 @@ start_server {tags {"scripting"}} {
assert_match {UNKILLABLE*} $e assert_match {UNKILLABLE*} $e
catch {r ping} e catch {r ping} e
assert_match {BUSY*} $e assert_match {BUSY*} $e
} } {} {external:skip}
# Note: keep this test at the end of this server stanza because it # Note: keep this test at the end of this server stanza because it
# kills the server. # kills the server.
@ -717,11 +717,11 @@ start_server {tags {"scripting"}} {
# Make sure the server was killed # Make sure the server was killed
catch {set rd [redis_deferring_client]} e catch {set rd [redis_deferring_client]} e
assert_match {*connection refused*} $e assert_match {*connection refused*} $e
} } {} {external:skip}
} }
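Tags such as `external:skip`, `needs:debug`, `cluster:skip` and `singledb:skip` feed the unified tag-based selection this commit introduces: running against an external server effectively adds the corresponding deny tags, filtering out tests that must launch their own `redis-server`. A simplified Python model of that filtering decision (helper and variable names are ours, not the suite's):

```python
def should_skip(test_tags, denytags):
    """Return True if any of the test's tags appears in the deny set.

    Simplified model of the suite's tag filtering: e.g. an external-server
    run denies 'external:skip', so tests carrying that tag are skipped.
    """
    return any(tag in denytags for tag in test_tags)

# A test tagged external:skip is filtered out on an external-server run.
assert should_skip({"scripting", "external:skip"}, {"external:skip"})
# A plain slowlog test still runs.
assert not should_skip({"slowlog"}, {"external:skip"})
```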
foreach cmdrepl {0 1} { foreach cmdrepl {0 1} {
start_server {tags {"scripting repl"}} { start_server {tags {"scripting repl needs:debug external:skip"}} {
start_server {} { start_server {} {
if {$cmdrepl == 1} { if {$cmdrepl == 1} {
set rt "(commands replication)" set rt "(commands replication)"
@ -817,12 +817,12 @@ foreach cmdrepl {0 1} {
} else { } else {
fail "Master-Replica desync after Lua script using SELECT." fail "Master-Replica desync after Lua script using SELECT."
} }
} } {} {singledb:skip}
} }
} }
} }
start_server {tags {"scripting repl"}} { start_server {tags {"scripting repl external:skip"}} {
start_server {overrides {appendonly yes aof-use-rdb-preamble no}} { start_server {overrides {appendonly yes aof-use-rdb-preamble no}} {
test "Connect a replica to the master instance" { test "Connect a replica to the master instance" {
r -1 slaveof [srv 0 host] [srv 0 port] r -1 slaveof [srv 0 host] [srv 0 port]
@ -929,7 +929,7 @@ start_server {tags {"scripting repl"}} {
} }
} }
start_server {tags {"scripting"}} { start_server {tags {"scripting external:skip"}} {
r script debug sync r script debug sync
r eval {return 'hello'} 0 r eval {return 'hello'} 0
r eval {return 'hello'} 0 r eval {return 'hello'} 0


@ -1,4 +1,4 @@
start_server {tags {"shutdown"}} { start_server {tags {"shutdown external:skip"}} {
test {Temp rdb will be deleted if we use bg_unlink when shutdown} { test {Temp rdb will be deleted if we use bg_unlink when shutdown} {
for {set i 0} {$i < 20} {incr i} { for {set i 0} {$i < 20} {incr i} {
r set $i $i r set $i $i
@ -25,7 +25,7 @@ start_server {tags {"shutdown"}} {
} }
} }
start_server {tags {"shutdown"}} { start_server {tags {"shutdown external:skip"}} {
test {Temp rdb will be deleted in signal handle} { test {Temp rdb will be deleted in signal handle} {
for {set i 0} {$i < 20} {incr i} { for {set i 0} {$i < 20} {incr i} {
r set $i $i r set $i $i


@ -1,5 +1,8 @@
start_server {tags {"slowlog"} overrides {slowlog-log-slower-than 1000000}} { start_server {tags {"slowlog"} overrides {slowlog-log-slower-than 1000000}} {
test {SLOWLOG - check that it starts with an empty log} { test {SLOWLOG - check that it starts with an empty log} {
if {$::external} {
r slowlog reset
}
r slowlog len r slowlog len
} {0} } {0}
@ -9,7 +12,7 @@ start_server {tags {"slowlog"} overrides {slowlog-log-slower-than 1000000}} {
assert_equal [r slowlog len] 0 assert_equal [r slowlog len] 0
r debug sleep 0.2 r debug sleep 0.2
assert_equal [r slowlog len] 1 assert_equal [r slowlog len] 1
} } {} {needs:debug}
test {SLOWLOG - max entries is correctly handled} { test {SLOWLOG - max entries is correctly handled} {
r config set slowlog-log-slower-than 0 r config set slowlog-log-slower-than 0
@ -35,11 +38,13 @@ start_server {tags {"slowlog"} overrides {slowlog-log-slower-than 1000000}} {
r debug sleep 0.2 r debug sleep 0.2
set e [lindex [r slowlog get] 0] set e [lindex [r slowlog get] 0]
assert_equal [llength $e] 6 assert_equal [llength $e] 6
assert_equal [lindex $e 0] 105 if {!$::external} {
assert_equal [lindex $e 0] 105
}
assert_equal [expr {[lindex $e 2] > 100000}] 1 assert_equal [expr {[lindex $e 2] > 100000}] 1
assert_equal [lindex $e 3] {debug sleep 0.2} assert_equal [lindex $e 3] {debug sleep 0.2}
assert_equal {foobar} [lindex $e 5] assert_equal {foobar} [lindex $e 5]
} } {} {needs:debug}
test {SLOWLOG - Certain commands are omitted that contain sensitive information} { test {SLOWLOG - Certain commands are omitted that contain sensitive information} {
r config set slowlog-log-slower-than 0 r config set slowlog-log-slower-than 0
@ -57,7 +62,7 @@ start_server {tags {"slowlog"} overrides {slowlog-log-slower-than 1000000}} {
assert_equal {config set masterauth (redacted)} [lindex [lindex [r slowlog get] 2] 3] assert_equal {config set masterauth (redacted)} [lindex [lindex [r slowlog get] 2] 3]
assert_equal {acl setuser (redacted) (redacted) (redacted)} [lindex [lindex [r slowlog get] 1] 3] assert_equal {acl setuser (redacted) (redacted) (redacted)} [lindex [lindex [r slowlog get] 1] 3]
assert_equal {config set slowlog-log-slower-than 0} [lindex [lindex [r slowlog get] 0] 3] assert_equal {config set slowlog-log-slower-than 0} [lindex [lindex [r slowlog get] 0] 3]
} } {} {needs:repl}
test {SLOWLOG - Some commands can redact sensitive fields} { test {SLOWLOG - Some commands can redact sensitive fields} {
r config set slowlog-log-slower-than 0 r config set slowlog-log-slower-than 0
@ -72,7 +77,7 @@ start_server {tags {"slowlog"} overrides {slowlog-log-slower-than 1000000}} {
assert_match {* key 9 5000} [lindex [lindex [r slowlog get] 2] 3] assert_match {* key 9 5000} [lindex [lindex [r slowlog get] 2] 3]
assert_match {* key 9 5000 AUTH (redacted)} [lindex [lindex [r slowlog get] 1] 3] assert_match {* key 9 5000 AUTH (redacted)} [lindex [lindex [r slowlog get] 1] 3]
assert_match {* key 9 5000 AUTH2 (redacted) (redacted)} [lindex [lindex [r slowlog get] 0] 3] assert_match {* key 9 5000 AUTH2 (redacted) (redacted)} [lindex [lindex [r slowlog get] 0] 3]
} } {} {needs:repl}
test {SLOWLOG - Rewritten commands are logged as their original command} { test {SLOWLOG - Rewritten commands are logged as their original command} {
r config set slowlog-log-slower-than 0 r config set slowlog-log-slower-than 0
@ -111,11 +116,7 @@ start_server {tags {"slowlog"} overrides {slowlog-log-slower-than 1000000}} {
# blocked BLPOP is replicated as LPOP # blocked BLPOP is replicated as LPOP
set rd [redis_deferring_client] set rd [redis_deferring_client]
$rd blpop l 0 $rd blpop l 0
wait_for_condition 50 100 { wait_for_blocked_clients_count 1 50 100
[s blocked_clients] eq {1}
} else {
fail "Clients are not blocked"
}
r multi r multi
r lpush l foo r lpush l foo
r slowlog reset r slowlog reset
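The hunk above replaces an inline `wait_for_condition` loop with the shared `wait_for_blocked_clients_count` helper, one of the repeated patterns this commit extracts into common procedures. The underlying contract is poll-with-retry: check a condition up to N times with a fixed delay, and fail loudly if it never holds. A minimal Python sketch of that contract (names are ours, mirroring the Tcl helper's semantics):

```python
import time

def fail(msg):
    raise AssertionError(msg)

def wait_for_condition(maxtries, delay_ms, cond, on_fail):
    """Poll cond() up to maxtries times, sleeping delay_ms between tries;
    invoke on_fail() if the condition never becomes true."""
    for _ in range(maxtries):
        if cond():
            return True
        time.sleep(delay_ms / 1000.0)
    on_fail()
    return False

# Example: one client is already blocked, so the first poll succeeds.
blocked_clients = 1
ok = wait_for_condition(50, 100,
                        lambda: blocked_clients == 1,
                        lambda: fail("Clients are not blocked"))
assert ok is True
```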
@ -129,7 +130,7 @@ start_server {tags {"slowlog"} overrides {slowlog-log-slower-than 1000000}} {
r config set slowlog-log-slower-than 0 r config set slowlog-log-slower-than 0
r slowlog reset r slowlog reset
r sadd set 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 r sadd set 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33
set e [lindex [r slowlog get] 0] set e [lindex [r slowlog get] end-1]
lindex $e 3 lindex $e 3
} {sadd set 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 {... (2 more arguments)}} } {sadd set 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 {... (2 more arguments)}}
@ -138,7 +139,7 @@ start_server {tags {"slowlog"} overrides {slowlog-log-slower-than 1000000}} {
r slowlog reset r slowlog reset
set arg [string repeat A 129] set arg [string repeat A 129]
r sadd set foo $arg r sadd set foo $arg
set e [lindex [r slowlog get] 0] set e [lindex [r slowlog get] end-1]
lindex $e 3 lindex $e 3
} {sadd set foo {AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA... (1 more bytes)}} } {sadd set foo {AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA... (1 more bytes)}}
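The two slowlog tests above exercise entry truncation: as their expected output shows, a logged command keeps at most 31 arguments plus a `... (N more arguments)` trailer (32 slots total), and each argument is cut at 128 bytes with a `... (N more bytes)` marker. A hedged Python sketch of that behavior (constants inferred from the test output above; helper name is ours):

```python
MAX_ARGC = 32     # total slots; the last one holds the trailer
MAX_STRING = 128  # bytes of an argument kept verbatim

def truncate_args(args):
    """Model slowlog truncation as observed in the expected test output."""
    out = []
    for i, arg in enumerate(args):
        if len(args) > MAX_ARGC and i == MAX_ARGC - 1:
            out.append(f"... ({len(args) - i} more arguments)")
            break
        if len(arg) > MAX_STRING:
            arg = arg[:MAX_STRING] + f"... ({len(arg) - MAX_STRING} more bytes)"
        out.append(arg)
    return out

# 33 arguments: sadd, set, and the numbers 3..33 -> 31 kept + trailer.
args = ["sadd", "set"] + [str(n) for n in range(3, 34)]
t = truncate_args(args)
assert len(t) == 32 and t[-1] == "... (2 more arguments)"
# A 129-byte argument is cut to 128 bytes plus a marker.
assert truncate_args(["sadd", "set", "foo", "A" * 129])[3].endswith("(1 more bytes)")
```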
@ -152,7 +153,7 @@ start_server {tags {"slowlog"} overrides {slowlog-log-slower-than 1000000}} {
assert_equal [r slowlog len] 1 assert_equal [r slowlog len] 1
set e [lindex [r slowlog get] 0] set e [lindex [r slowlog get] 0]
assert_equal [lindex $e 3] {debug sleep 0.2} assert_equal [lindex $e 3] {debug sleep 0.2}
} } {} {needs:debug}
test {SLOWLOG - can clean older entries} { test {SLOWLOG - can clean older entries} {
r client setname lastentry_client r client setname lastentry_client
@ -161,7 +162,7 @@ start_server {tags {"slowlog"} overrides {slowlog-log-slower-than 1000000}} {
assert {[llength [r slowlog get]] == 1} assert {[llength [r slowlog get]] == 1}
set e [lindex [r slowlog get] 0] set e [lindex [r slowlog get] 0]
assert_equal {lastentry_client} [lindex $e 5] assert_equal {lastentry_client} [lindex $e 5]
} } {} {needs:debug}
test {SLOWLOG - can be disabled} { test {SLOWLOG - can be disabled} {
r config set slowlog-max-len 1 r config set slowlog-max-len 1
@ -173,5 +174,5 @@ start_server {tags {"slowlog"} overrides {slowlog-log-slower-than 1000000}} {
r slowlog reset r slowlog reset
r debug sleep 0.2 r debug sleep 0.2
assert_equal [r slowlog len] 0 assert_equal [r slowlog len] 0
} } {} {needs:debug}
} }


@ -47,28 +47,28 @@ start_server {
test "$title: SORT BY key" { test "$title: SORT BY key" {
assert_equal $result [r sort tosort BY weight_*] assert_equal $result [r sort tosort BY weight_*]
} } {} {cluster:skip}
test "$title: SORT BY key with limit" { test "$title: SORT BY key with limit" {
assert_equal [lrange $result 5 9] [r sort tosort BY weight_* LIMIT 5 5] assert_equal [lrange $result 5 9] [r sort tosort BY weight_* LIMIT 5 5]
} } {} {cluster:skip}
test "$title: SORT BY hash field" { test "$title: SORT BY hash field" {
assert_equal $result [r sort tosort BY wobj_*->weight] assert_equal $result [r sort tosort BY wobj_*->weight]
} } {} {cluster:skip}
} }
set result [create_random_dataset 16 lpush] set result [create_random_dataset 16 lpush]
test "SORT GET #" { test "SORT GET #" {
assert_equal [lsort -integer $result] [r sort tosort GET #] assert_equal [lsort -integer $result] [r sort tosort GET #]
} } {} {cluster:skip}
test "SORT GET <const>" { test "SORT GET <const>" {
r del foo r del foo
set res [r sort tosort GET foo] set res [r sort tosort GET foo]
assert_equal 16 [llength $res] assert_equal 16 [llength $res]
foreach item $res { assert_equal {} $item } foreach item $res { assert_equal {} $item }
} } {} {cluster:skip}
test "SORT GET (key and hash) with sanity check" { test "SORT GET (key and hash) with sanity check" {
set l1 [r sort tosort GET # GET weight_*] set l1 [r sort tosort GET # GET weight_*]
@ -78,21 +78,21 @@ start_server {
assert_equal $w1 [r get weight_$id1] assert_equal $w1 [r get weight_$id1]
assert_equal $w2 [r get weight_$id1] assert_equal $w2 [r get weight_$id1]
} }
} } {} {cluster:skip}
test "SORT BY key STORE" { test "SORT BY key STORE" {
r sort tosort BY weight_* store sort-res r sort tosort BY weight_* store sort-res
assert_equal $result [r lrange sort-res 0 -1] assert_equal $result [r lrange sort-res 0 -1]
assert_equal 16 [r llen sort-res] assert_equal 16 [r llen sort-res]
assert_encoding quicklist sort-res assert_encoding quicklist sort-res
} } {} {cluster:skip}
test "SORT BY hash field STORE" { test "SORT BY hash field STORE" {
r sort tosort BY wobj_*->weight store sort-res r sort tosort BY wobj_*->weight store sort-res
assert_equal $result [r lrange sort-res 0 -1] assert_equal $result [r lrange sort-res 0 -1]
assert_equal 16 [r llen sort-res] assert_equal 16 [r llen sort-res]
assert_encoding quicklist sort-res assert_encoding quicklist sort-res
} } {} {cluster:skip}
test "SORT extracts STORE correctly" { test "SORT extracts STORE correctly" {
r command getkeys sort abc store def r command getkeys sort abc store def
@ -188,21 +188,21 @@ start_server {
test "SORT with STORE returns zero if result is empty (github issue 224)" { test "SORT with STORE returns zero if result is empty (github issue 224)" {
r flushdb r flushdb
r sort foo store bar r sort foo{t} store bar{t}
} {0} } {0}
test "SORT with STORE does not create empty lists (github issue 224)" { test "SORT with STORE does not create empty lists (github issue 224)" {
r flushdb r flushdb
r lpush foo bar r lpush foo{t} bar
r sort foo alpha limit 10 10 store zap r sort foo{t} alpha limit 10 10 store zap{t}
r exists zap r exists zap{t}
} {0} } {0}
test "SORT with STORE removes key if result is empty (github issue 227)" { test "SORT with STORE removes key if result is empty (github issue 227)" {
r flushdb r flushdb
r lpush foo bar r lpush foo{t} bar
r sort emptylist store foo r sort emptylist{t} store foo{t}
r exists foo r exists foo{t}
} {0} } {0}
test "SORT with BY <constant> and STORE should still order output" { test "SORT with BY <constant> and STORE should still order output" {
@ -210,7 +210,7 @@ start_server {
r sadd myset a b c d e f g h i l m n o p q r s t u v z aa aaa azz r sadd myset a b c d e f g h i l m n o p q r s t u v z aa aaa azz
r sort myset alpha by _ store mylist r sort myset alpha by _ store mylist
r lrange mylist 0 -1 r lrange mylist 0 -1
} {a aa aaa azz b c d e f g h i l m n o p q r s t u v z} } {a aa aaa azz b c d e f g h i l m n o p q r s t u v z} {cluster:skip}
test "SORT will complain with numerical sorting and bad doubles (1)" { test "SORT will complain with numerical sorting and bad doubles (1)" {
r del myset r del myset
@ -227,7 +227,7 @@ start_server {
set e {} set e {}
catch {r sort myset by score:*} e catch {r sort myset by score:*} e
set e set e
} {*ERR*double*} } {*ERR*double*} {cluster:skip}
test "SORT BY sub-sorts lexicographically if score is the same" { test "SORT BY sub-sorts lexicographically if score is the same" {
r del myset r del myset
@ -236,32 +236,32 @@ start_server {
set score:$ele 100 set score:$ele 100
} }
r sort myset by score:* r sort myset by score:*
} {a aa aaa azz b c d e f g h i l m n o p q r s t u v z} } {a aa aaa azz b c d e f g h i l m n o p q r s t u v z} {cluster:skip}
test "SORT GET with pattern ending with just -> does not get hash field" { test "SORT GET with pattern ending with just -> does not get hash field" {
r del mylist r del mylist
r lpush mylist a r lpush mylist a
r set x:a-> 100 r set x:a-> 100
r sort mylist by num get x:*-> r sort mylist by num get x:*->
} {100} } {100} {cluster:skip}
test "SORT by nosort retains native order for lists" { test "SORT by nosort retains native order for lists" {
r del testa r del testa
r lpush testa 2 1 4 3 5 r lpush testa 2 1 4 3 5
r sort testa by nosort r sort testa by nosort
} {5 3 4 1 2} } {5 3 4 1 2} {cluster:skip}
test "SORT by nosort plus store retains native order for lists" { test "SORT by nosort plus store retains native order for lists" {
r del testa r del testa
r lpush testa 2 1 4 3 5 r lpush testa 2 1 4 3 5
r sort testa by nosort store testb r sort testa by nosort store testb
r lrange testb 0 -1 r lrange testb 0 -1
} {5 3 4 1 2} } {5 3 4 1 2} {cluster:skip}
test "SORT by nosort with limit returns based on original list order" { test "SORT by nosort with limit returns based on original list order" {
r sort testa by nosort limit 0 3 store testb r sort testa by nosort limit 0 3 store testb
r lrange testb 0 -1 r lrange testb 0 -1
} {5 3 4} } {5 3 4} {cluster:skip}
tags {"slow"} { tags {"slow"} {
set num 100 set num 100
@ -277,7 +277,7 @@ start_server {
puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds " puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds "
flush stdout flush stdout
} }
} } {} {cluster:skip}
test "SORT speed, $num element list BY hash field, 100 times" { test "SORT speed, $num element list BY hash field, 100 times" {
set start [clock clicks -milliseconds] set start [clock clicks -milliseconds]
@ -289,7 +289,7 @@ start_server {
puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds " puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds "
flush stdout flush stdout
} }
} } {} {cluster:skip}
test "SORT speed, $num element list directly, 100 times" { test "SORT speed, $num element list directly, 100 times" {
set start [clock clicks -milliseconds] set start [clock clicks -milliseconds]
@ -313,6 +313,6 @@ start_server {
puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds " puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds "
flush stdout flush stdout
} }
} } {} {cluster:skip}
} }
} }


@ -40,20 +40,20 @@ start_server {tags {"tracking network"}} {
} {*OK} } {*OK}
test {The other connection is able to get invalidations} { test {The other connection is able to get invalidations} {
r SET a 1 r SET a{t} 1
r SET b 1 r SET b{t} 1
r GET a r GET a{t}
r INCR b ; # This key should not be notified, since it wasn't fetched. r INCR b{t} ; # This key should not be notified, since it wasn't fetched.
r INCR a r INCR a{t}
set keys [lindex [$rd_redirection read] 2] set keys [lindex [$rd_redirection read] 2]
assert {[llength $keys] == 1} assert {[llength $keys] == 1}
assert {[lindex $keys 0] eq {a}} assert {[lindex $keys 0] eq {a{t}}}
} }
test {The client is now able to disable tracking} { test {The client is now able to disable tracking} {
# Make sure to add a few more keys in the tracking list # Make sure to add a few more keys in the tracking list
# so that we can check for leaks, as a side effect. # so that we can check for leaks, as a side effect.
r MGET a b c d e f g r MGET a{t} b{t} c{t} d{t} e{t} f{t} g{t}
r CLIENT TRACKING off r CLIENT TRACKING off
} {*OK} } {*OK}
@ -62,28 +62,28 @@ start_server {tags {"tracking network"}} {
} {*OK*} } {*OK*}
test {The connection gets invalidation messages about all the keys} { test {The connection gets invalidation messages about all the keys} {
r MSET a 1 b 2 c 3 r MSET a{t} 1 b{t} 2 c{t} 3
set keys [lsort [lindex [$rd_redirection read] 2]] set keys [lsort [lindex [$rd_redirection read] 2]]
assert {$keys eq {a b c}} assert {$keys eq {a{t} b{t} c{t}}}
} }
test {Clients can enable the BCAST mode with prefixes} { test {Clients can enable the BCAST mode with prefixes} {
r CLIENT TRACKING off r CLIENT TRACKING off
r CLIENT TRACKING on BCAST REDIRECT $redir_id PREFIX a: PREFIX b: r CLIENT TRACKING on BCAST REDIRECT $redir_id PREFIX a: PREFIX b:
r MULTI r MULTI
r INCR a:1 r INCR a:1{t}
r INCR a:2 r INCR a:2{t}
r INCR b:1 r INCR b:1{t}
r INCR b:2 r INCR b:2{t}
# we should not get this key # we should not get this key
r INCR c:1 r INCR c:1{t}
r EXEC r EXEC
# Because of the internals, we know we are going to receive # Because of the internals, we know we are going to receive
# two separated notifications for the two different prefixes. # two separated notifications for the two different prefixes.
set keys1 [lsort [lindex [$rd_redirection read] 2]] set keys1 [lsort [lindex [$rd_redirection read] 2]]
set keys2 [lsort [lindex [$rd_redirection read] 2]] set keys2 [lsort [lindex [$rd_redirection read] 2]]
set keys [lsort [list {*}$keys1 {*}$keys2]] set keys [lsort [list {*}$keys1 {*}$keys2]]
assert {$keys eq {a:1 a:2 b:1 b:2}} assert {$keys eq {a:1{t} a:2{t} b:1{t} b:2{t}}}
} }
test {Adding prefixes to BCAST mode works} { test {Adding prefixes to BCAST mode works} {
@ -96,16 +96,16 @@ start_server {tags {"tracking network"}} {
test {Tracking NOLOOP mode in standard mode works} { test {Tracking NOLOOP mode in standard mode works} {
r CLIENT TRACKING off r CLIENT TRACKING off
r CLIENT TRACKING on REDIRECT $redir_id NOLOOP r CLIENT TRACKING on REDIRECT $redir_id NOLOOP
r MGET otherkey1 loopkey otherkey2 r MGET otherkey1{t} loopkey{t} otherkey2{t}
$rd_sg SET otherkey1 1; # We should get this $rd_sg SET otherkey1{t} 1; # We should get this
r SET loopkey 1 ; # We should not get this r SET loopkey{t} 1 ; # We should not get this
$rd_sg SET otherkey2 1; # We should get this $rd_sg SET otherkey2{t} 1; # We should get this
# Because of the internals, we know we are going to receive # Because of the internals, we know we are going to receive
# two separated notifications for the two different keys. # two separated notifications for the two different keys.
set keys1 [lsort [lindex [$rd_redirection read] 2]] set keys1 [lsort [lindex [$rd_redirection read] 2]]
set keys2 [lsort [lindex [$rd_redirection read] 2]] set keys2 [lsort [lindex [$rd_redirection read] 2]]
set keys [lsort [list {*}$keys1 {*}$keys2]] set keys [lsort [list {*}$keys1 {*}$keys2]]
assert {$keys eq {otherkey1 otherkey2}} assert {$keys eq {otherkey1{t} otherkey2{t}}}
} }
test {Tracking NOLOOP mode in BCAST mode works} { test {Tracking NOLOOP mode in BCAST mode works} {
@ -220,16 +220,16 @@ start_server {tags {"tracking network"}} {
r CLIENT TRACKING on REDIRECT $redir_id r CLIENT TRACKING on REDIRECT $redir_id
$rd CLIENT TRACKING on REDIRECT $redir_id $rd CLIENT TRACKING on REDIRECT $redir_id
assert_equal OK [$rd read] ; # Consume the TRACKING reply assert_equal OK [$rd read] ; # Consume the TRACKING reply
$rd_sg MSET key1 1 key2 1 $rd_sg MSET key1{t} 1 key2{t} 1
r GET key1 r GET key1{t}
$rd GET key2 $rd GET key2{t}
assert_equal 1 [$rd read] ; # Consume the GET reply assert_equal 1 [$rd read] ; # Consume the GET reply
$rd_sg INCR key1 $rd_sg INCR key1{t}
$rd_sg INCR key2 $rd_sg INCR key2{t}
set res1 [lindex [$rd_redirection read] 2] set res1 [lindex [$rd_redirection read] 2]
set res2 [lindex [$rd_redirection read] 2] set res2 [lindex [$rd_redirection read] 2]
assert {$res1 eq {key1}} assert {$res1 eq {key1{t}}}
assert {$res2 eq {key2}} assert {$res2 eq {key2{t}}}
} }
test {Different clients using different protocols can track the same key} { test {Different clients using different protocols can track the same key} {
@ -356,9 +356,9 @@ start_server {tags {"tracking network"}} {
test {Tracking gets notification on tracking table key eviction} { test {Tracking gets notification on tracking table key eviction} {
r CLIENT TRACKING off r CLIENT TRACKING off
r CLIENT TRACKING on REDIRECT $redir_id NOLOOP r CLIENT TRACKING on REDIRECT $redir_id NOLOOP
r MSET key1 1 key2 2 r MSET key1{t} 1 key2{t} 2
# Let the server track the two keys for us # Let the server track the two keys for us
r MGET key1 key2 r MGET key1{t} key2{t}
# Force the eviction of all the keys but one: # Force the eviction of all the keys but one:
r config set tracking-table-max-keys 1 r config set tracking-table-max-keys 1
# Note that we may have other keys in the table for this client, # Note that we may have other keys in the table for this client,
@ -368,11 +368,11 @@ start_server {tags {"tracking network"}} {
# otherwise the test will die for timeout. # otherwise the test will die for timeout.
while 1 { while 1 {
set keys [lindex [$rd_redirection read] 2] set keys [lindex [$rd_redirection read] 2]
if {$keys eq {key1} || $keys eq {key2}} break if {$keys eq {key1{t}} || $keys eq {key2{t}}} break
} }
# We should receive an expire notification for one of # We should receive an expire notification for one of
# the two keys (only one must remain) # the two keys (only one must remain)
assert {$keys eq {key1} || $keys eq {key2}} assert {$keys eq {key1{t}} || $keys eq {key2{t}}}
} }
test {Invalidation message received for flushall} { test {Invalidation message received for flushall} {


@ -61,8 +61,8 @@ start_server {tags {"hash"}} {
set res [r hrandfield myhash 3] set res [r hrandfield myhash 3]
assert_equal [llength $res] 3 assert_equal [llength $res] 3
assert_equal [llength [lindex $res 1]] 1 assert_equal [llength [lindex $res 1]] 1
r hello 2
} }
r hello 2
test "HRANDFIELD count of 0 is handled correctly" { test "HRANDFIELD count of 0 is handled correctly" {
r hrandfield myhash 0 r hrandfield myhash 0
@ -445,7 +445,7 @@ start_server {tags {"hash"}} {
test {Is a ziplist encoded Hash promoted on big payload?} { test {Is a ziplist encoded Hash promoted on big payload?} {
r hset smallhash foo [string repeat a 1024] r hset smallhash foo [string repeat a 1024]
r debug object smallhash r debug object smallhash
} {*hashtable*} } {*hashtable*} {needs:debug}
test {HINCRBY against non existing database key} { test {HINCRBY against non existing database key} {
r del htest r del htest
@ -709,7 +709,7 @@ start_server {tags {"hash"}} {
for {set i 0} {$i < 64} {incr i} { for {set i 0} {$i < 64} {incr i} {
r hset myhash [randomValue] [randomValue] r hset myhash [randomValue] [randomValue]
} }
assert {[r object encoding myhash] eq {hashtable}} assert_encoding hashtable myhash
} }
} }
@ -733,8 +733,8 @@ start_server {tags {"hash"}} {
test {Hash ziplist of various encodings} { test {Hash ziplist of various encodings} {
r del k r del k
r config set hash-max-ziplist-entries 1000000000 config_set hash-max-ziplist-entries 1000000000
r config set hash-max-ziplist-value 1000000000 config_set hash-max-ziplist-value 1000000000
r hset k ZIP_INT_8B 127 r hset k ZIP_INT_8B 127
r hset k ZIP_INT_16B 32767 r hset k ZIP_INT_16B 32767
r hset k ZIP_INT_32B 2147483647 r hset k ZIP_INT_32B 2147483647
@ -748,8 +748,8 @@ start_server {tags {"hash"}} {
set dump [r dump k] set dump [r dump k]
# will be converted to dict at RESTORE # will be converted to dict at RESTORE
r config set hash-max-ziplist-entries 2 config_set hash-max-ziplist-entries 2
r config set sanitize-dump-payload no config_set sanitize-dump-payload no mayfail
r restore kk 0 $dump r restore kk 0 $dump
set kk [r hgetall kk] set kk [r hgetall kk]
@ -765,7 +765,7 @@ start_server {tags {"hash"}} {
} {ZIP_INT_8B 127 ZIP_INT_16B 32767 ZIP_INT_32B 2147483647 ZIP_INT_64B 9223372036854775808 ZIP_INT_IMM_MIN 0 ZIP_INT_IMM_MAX 12} } {ZIP_INT_8B 127 ZIP_INT_16B 32767 ZIP_INT_32B 2147483647 ZIP_INT_64B 9223372036854775808 ZIP_INT_IMM_MIN 0 ZIP_INT_IMM_MAX 12}
test {Hash ziplist of various encodings - sanitize dump} { test {Hash ziplist of various encodings - sanitize dump} {
r config set sanitize-dump-payload yes config_set sanitize-dump-payload yes mayfail
r restore kk 0 $dump replace r restore kk 0 $dump replace
set k [r hgetall k] set k [r hgetall k]
set kk [r hgetall kk] set kk [r hgetall kk]


@ -63,7 +63,7 @@ start_server {tags {"incr"}} {
assert {[r object refcount foo] > 1} assert {[r object refcount foo] > 1}
r incr foo r incr foo
assert {[r object refcount foo] == 1} assert {[r object refcount foo] == 1}
} } {} {needs:debug}
test {INCR can modify objects in-place} { test {INCR can modify objects in-place} {
r set foo 20000 r set foo 20000
@ -75,7 +75,7 @@ start_server {tags {"incr"}} {
assert {[string range $old 0 2] eq "at:"} assert {[string range $old 0 2] eq "at:"}
assert {[string range $new 0 2] eq "at:"} assert {[string range $new 0 2] eq "at:"}
assert {$old eq $new} assert {$old eq $new}
} } {} {needs:debug}
test {INCRBYFLOAT against non existing key} { test {INCRBYFLOAT against non existing key} {
r del novar r del novar


@ -1,5 +1,5 @@
start_server { start_server {
tags {list ziplist} tags {"list ziplist"}
overrides { overrides {
"list-max-ziplist-size" 16 "list-max-ziplist-size" 16
} }


@ -1,11 +1,3 @@
proc wait_for_blocked_client {} {
wait_for_condition 50 100 {
[s blocked_clients] ne 0
} else {
fail "no blocked clients"
}
}
start_server { start_server {
tags {"list"} tags {"list"}
overrides { overrides {
@ -172,75 +164,75 @@ start_server {
test "BLPOP, BRPOP: multiple existing lists - $type" { test "BLPOP, BRPOP: multiple existing lists - $type" {
set rd [redis_deferring_client] set rd [redis_deferring_client]
create_list blist1 "a $large c" create_list blist1{t} "a $large c"
create_list blist2 "d $large f" create_list blist2{t} "d $large f"
$rd blpop blist1 blist2 1 $rd blpop blist1{t} blist2{t} 1
assert_equal {blist1 a} [$rd read] assert_equal {blist1{t} a} [$rd read]
$rd brpop blist1 blist2 1 $rd brpop blist1{t} blist2{t} 1
assert_equal {blist1 c} [$rd read] assert_equal {blist1{t} c} [$rd read]
assert_equal 1 [r llen blist1] assert_equal 1 [r llen blist1{t}]
assert_equal 3 [r llen blist2] assert_equal 3 [r llen blist2{t}]
$rd blpop blist2 blist1 1 $rd blpop blist2{t} blist1{t} 1
assert_equal {blist2 d} [$rd read] assert_equal {blist2{t} d} [$rd read]
$rd brpop blist2 blist1 1 $rd brpop blist2{t} blist1{t} 1
assert_equal {blist2 f} [$rd read] assert_equal {blist2{t} f} [$rd read]
assert_equal 1 [r llen blist1] assert_equal 1 [r llen blist1{t}]
assert_equal 1 [r llen blist2] assert_equal 1 [r llen blist2{t}]
} }
test "BLPOP, BRPOP: second list has an entry - $type" { test "BLPOP, BRPOP: second list has an entry - $type" {
set rd [redis_deferring_client] set rd [redis_deferring_client]
r del blist1 r del blist1{t}
create_list blist2 "d $large f" create_list blist2{t} "d $large f"
$rd blpop blist1 blist2 1 $rd blpop blist1{t} blist2{t} 1
assert_equal {blist2 d} [$rd read] assert_equal {blist2{t} d} [$rd read]
$rd brpop blist1 blist2 1 $rd brpop blist1{t} blist2{t} 1
assert_equal {blist2 f} [$rd read] assert_equal {blist2{t} f} [$rd read]
assert_equal 0 [r llen blist1] assert_equal 0 [r llen blist1{t}]
assert_equal 1 [r llen blist2] assert_equal 1 [r llen blist2{t}]
} }
test "BRPOPLPUSH - $type" { test "BRPOPLPUSH - $type" {
r del target r del target{t}
r rpush target bar r rpush target{t} bar
set rd [redis_deferring_client] set rd [redis_deferring_client]
create_list blist "a b $large c d" create_list blist{t} "a b $large c d"
$rd brpoplpush blist target 1 $rd brpoplpush blist{t} target{t} 1
assert_equal d [$rd read] assert_equal d [$rd read]
assert_equal d [r lpop target] assert_equal d [r lpop target{t}]
assert_equal "a b $large c" [r lrange blist 0 -1] assert_equal "a b $large c" [r lrange blist{t} 0 -1]
} }
foreach wherefrom {left right} { foreach wherefrom {left right} {
foreach whereto {left right} { foreach whereto {left right} {
test "BLMOVE $wherefrom $whereto - $type" { test "BLMOVE $wherefrom $whereto - $type" {
r del target r del target{t}
r rpush target bar r rpush target{t} bar
set rd [redis_deferring_client] set rd [redis_deferring_client]
create_list blist "a b $large c d" create_list blist{t} "a b $large c d"
$rd blmove blist target $wherefrom $whereto 1 $rd blmove blist{t} target{t} $wherefrom $whereto 1
set poppedelement [$rd read] set poppedelement [$rd read]
if {$wherefrom eq "right"} { if {$wherefrom eq "right"} {
assert_equal d $poppedelement assert_equal d $poppedelement
assert_equal "a b $large c" [r lrange blist 0 -1] assert_equal "a b $large c" [r lrange blist{t} 0 -1]
} else { } else {
assert_equal a $poppedelement assert_equal a $poppedelement
assert_equal "b $large c d" [r lrange blist 0 -1] assert_equal "b $large c d" [r lrange blist{t} 0 -1]
} }
if {$whereto eq "right"} { if {$whereto eq "right"} {
assert_equal $poppedelement [r rpop target] assert_equal $poppedelement [r rpop target{t}]
} else { } else {
assert_equal $poppedelement [r lpop target] assert_equal $poppedelement [r lpop target{t}]
} }
} }
} }
@@ -280,23 +272,23 @@ start_server {
 test "BLPOP with same key multiple times should work (issue #801)" {
 set rd [redis_deferring_client]
-r del list1 list2
+r del list1{t} list2{t}
 # Data arriving after the BLPOP.
-$rd blpop list1 list2 list2 list1 0
-r lpush list1 a
-assert_equal [$rd read] {list1 a}
-$rd blpop list1 list2 list2 list1 0
-r lpush list2 b
-assert_equal [$rd read] {list2 b}
+$rd blpop list1{t} list2{t} list2{t} list1{t} 0
+r lpush list1{t} a
+assert_equal [$rd read] {list1{t} a}
+$rd blpop list1{t} list2{t} list2{t} list1{t} 0
+r lpush list2{t} b
+assert_equal [$rd read] {list2{t} b}
 # Data already there.
-r lpush list1 a
-r lpush list2 b
-$rd blpop list1 list2 list2 list1 0
-assert_equal [$rd read] {list1 a}
-$rd blpop list1 list2 list2 list1 0
-assert_equal [$rd read] {list2 b}
+r lpush list1{t} a
+r lpush list2{t} b
+$rd blpop list1{t} list2{t} list2{t} list1{t} 0
+assert_equal [$rd read] {list1{t} a}
+$rd blpop list1{t} list2{t} list2{t} list1{t} 0
+assert_equal [$rd read] {list2{t} b}
 }
 test "MULTI/EXEC is isolated from the point of view of BLPOP" {
@@ -313,7 +305,7 @@ start_server {
 test "BLPOP with variadic LPUSH" {
 set rd [redis_deferring_client]
-r del blist target
+r del blist
 if {$::valgrind} {after 100}
 $rd blpop blist 0
 if {$::valgrind} {after 100}
@@ -325,37 +317,29 @@ start_server {
 test "BRPOPLPUSH with zero timeout should block indefinitely" {
 set rd [redis_deferring_client]
-r del blist target
-r rpush target bar
-$rd brpoplpush blist target 0
-wait_for_condition 100 10 {
-[s blocked_clients] == 1
-} else {
-fail "Timeout waiting for blocked clients"
-}
-r rpush blist foo
+r del blist{t} target{t}
+r rpush target{t} bar
+$rd brpoplpush blist{t} target{t} 0
+wait_for_blocked_clients_count 1
+r rpush blist{t} foo
 assert_equal foo [$rd read]
-assert_equal {foo bar} [r lrange target 0 -1]
+assert_equal {foo bar} [r lrange target{t} 0 -1]
 }
 foreach wherefrom {left right} {
 foreach whereto {left right} {
 test "BLMOVE $wherefrom $whereto with zero timeout should block indefinitely" {
 set rd [redis_deferring_client]
-r del blist target
-r rpush target bar
-$rd blmove blist target $wherefrom $whereto 0
-wait_for_condition 100 10 {
-[s blocked_clients] == 1
-} else {
-fail "Timeout waiting for blocked clients"
-}
-r rpush blist foo
+r del blist{t} target{t}
+r rpush target{t} bar
+$rd blmove blist{t} target{t} $wherefrom $whereto 0
+wait_for_blocked_clients_count 1
+r rpush blist{t} foo
 assert_equal foo [$rd read]
 if {$whereto eq "right"} {
-assert_equal {bar foo} [r lrange target 0 -1]
+assert_equal {bar foo} [r lrange target{t} 0 -1]
 } else {
-assert_equal {foo bar} [r lrange target 0 -1]
+assert_equal {foo bar} [r lrange target{t} 0 -1]
 }
 }
 }
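The repeated `wait_for_condition 100 10 { [s blocked_clients] == N } else { fail ... }` boilerplate in the old code is collapsed into the `wait_for_blocked_clients_count` helper. A hypothetical sketch of how such a helper can wrap the old pattern (the actual helper in the suite's support code may differ); the optional arguments are inferred from the `wait_for_blocked_clients_count 0 500 10` call that appears further down:

```tcl
# Sketch only: wraps the repeated wait_for_condition pattern.
# Defaults mirror the old inline values (100 tries, 10 ms apart).
proc wait_for_blocked_clients_count {count {maxtries 100} {delay 10}} {
    wait_for_condition $maxtries $delay {
        [s blocked_clients] == $count
    } else {
        fail "Timeout waiting for blocked clients count to reach $count"
    }
}
```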
@@ -366,146 +350,138 @@ start_server {
 test "BLMOVE ($wherefrom, $whereto) with a client BLPOPing the target list" {
 set rd [redis_deferring_client]
 set rd2 [redis_deferring_client]
-r del blist target
-$rd2 blpop target 0
-$rd blmove blist target $wherefrom $whereto 0
-wait_for_condition 100 10 {
-[s blocked_clients] == 2
-} else {
-fail "Timeout waiting for blocked clients"
-}
-r rpush blist foo
+r del blist{t} target{t}
+$rd2 blpop target{t} 0
+$rd blmove blist{t} target{t} $wherefrom $whereto 0
+wait_for_blocked_clients_count 2
+r rpush blist{t} foo
 assert_equal foo [$rd read]
-assert_equal {target foo} [$rd2 read]
-assert_equal 0 [r exists target]
+assert_equal {target{t} foo} [$rd2 read]
+assert_equal 0 [r exists target{t}]
 }
 }
 }
 test "BRPOPLPUSH with wrong source type" {
 set rd [redis_deferring_client]
-r del blist target
-r set blist nolist
-$rd brpoplpush blist target 1
+r del blist{t} target{t}
+r set blist{t} nolist
+$rd brpoplpush blist{t} target{t} 1
 assert_error "WRONGTYPE*" {$rd read}
 }
 test "BRPOPLPUSH with wrong destination type" {
 set rd [redis_deferring_client]
-r del blist target
-r set target nolist
-r lpush blist foo
-$rd brpoplpush blist target 1
+r del blist{t} target{t}
+r set target{t} nolist
+r lpush blist{t} foo
+$rd brpoplpush blist{t} target{t} 1
 assert_error "WRONGTYPE*" {$rd read}
 set rd [redis_deferring_client]
-r del blist target
-r set target nolist
-$rd brpoplpush blist target 0
-wait_for_condition 100 10 {
-[s blocked_clients] == 1
-} else {
-fail "Timeout waiting for blocked clients"
-}
-r rpush blist foo
+r del blist{t} target{t}
+r set target{t} nolist
+$rd brpoplpush blist{t} target{t} 0
+wait_for_blocked_clients_count 1
+r rpush blist{t} foo
 assert_error "WRONGTYPE*" {$rd read}
-assert_equal {foo} [r lrange blist 0 -1]
+assert_equal {foo} [r lrange blist{t} 0 -1]
 }
 test "BRPOPLPUSH maintains order of elements after failure" {
 set rd [redis_deferring_client]
-r del blist target
-r set target nolist
-$rd brpoplpush blist target 0
-r rpush blist a b c
+r del blist{t} target{t}
+r set target{t} nolist
+$rd brpoplpush blist{t} target{t} 0
+r rpush blist{t} a b c
 assert_error "WRONGTYPE*" {$rd read}
-r lrange blist 0 -1
+r lrange blist{t} 0 -1
 } {a b c}
 test "BRPOPLPUSH with multiple blocked clients" {
 set rd1 [redis_deferring_client]
 set rd2 [redis_deferring_client]
-r del blist target1 target2
-r set target1 nolist
-$rd1 brpoplpush blist target1 0
-$rd2 brpoplpush blist target2 0
-r lpush blist foo
+r del blist{t} target1{t} target2{t}
+r set target1{t} nolist
+$rd1 brpoplpush blist{t} target1{t} 0
+$rd2 brpoplpush blist{t} target2{t} 0
+r lpush blist{t} foo
 assert_error "WRONGTYPE*" {$rd1 read}
 assert_equal {foo} [$rd2 read]
-assert_equal {foo} [r lrange target2 0 -1]
+assert_equal {foo} [r lrange target2{t} 0 -1]
 }
 test "Linked LMOVEs" {
 set rd1 [redis_deferring_client]
 set rd2 [redis_deferring_client]
-r del list1 list2 list3
-$rd1 blmove list1 list2 right left 0
-$rd2 blmove list2 list3 left right 0
-r rpush list1 foo
-assert_equal {} [r lrange list1 0 -1]
-assert_equal {} [r lrange list2 0 -1]
-assert_equal {foo} [r lrange list3 0 -1]
+r del list1{t} list2{t} list3{t}
+$rd1 blmove list1{t} list2{t} right left 0
+$rd2 blmove list2{t} list3{t} left right 0
+r rpush list1{t} foo
+assert_equal {} [r lrange list1{t} 0 -1]
+assert_equal {} [r lrange list2{t} 0 -1]
+assert_equal {foo} [r lrange list3{t} 0 -1]
 }
 test "Circular BRPOPLPUSH" {
 set rd1 [redis_deferring_client]
 set rd2 [redis_deferring_client]
-r del list1 list2
-$rd1 brpoplpush list1 list2 0
-$rd2 brpoplpush list2 list1 0
-r rpush list1 foo
-assert_equal {foo} [r lrange list1 0 -1]
-assert_equal {} [r lrange list2 0 -1]
+r del list1{t} list2{t}
+$rd1 brpoplpush list1{t} list2{t} 0
+$rd2 brpoplpush list2{t} list1{t} 0
+r rpush list1{t} foo
+assert_equal {foo} [r lrange list1{t} 0 -1]
+assert_equal {} [r lrange list2{t} 0 -1]
 }
 test "Self-referential BRPOPLPUSH" {
 set rd [redis_deferring_client]
-r del blist
-$rd brpoplpush blist blist 0
-r rpush blist foo
-assert_equal {foo} [r lrange blist 0 -1]
+r del blist{t}
+$rd brpoplpush blist{t} blist{t} 0
+r rpush blist{t} foo
+assert_equal {foo} [r lrange blist{t} 0 -1]
 }
 test "BRPOPLPUSH inside a transaction" {
-r del xlist target
-r lpush xlist foo
-r lpush xlist bar
+r del xlist{t} target{t}
+r lpush xlist{t} foo
+r lpush xlist{t} bar
 r multi
-r brpoplpush xlist target 0
-r brpoplpush xlist target 0
-r brpoplpush xlist target 0
-r lrange xlist 0 -1
-r lrange target 0 -1
+r brpoplpush xlist{t} target{t} 0
+r brpoplpush xlist{t} target{t} 0
+r brpoplpush xlist{t} target{t} 0
+r lrange xlist{t} 0 -1
+r lrange target{t} 0 -1
 r exec
 } {foo bar {} {} {bar foo}}
 test "PUSH resulting from BRPOPLPUSH affect WATCH" {
 set blocked_client [redis_deferring_client]
 set watching_client [redis_deferring_client]
-r del srclist dstlist somekey
-r set somekey somevalue
-$blocked_client brpoplpush srclist dstlist 0
-$watching_client watch dstlist
+r del srclist{t} dstlist{t} somekey{t}
+r set somekey{t} somevalue
+$blocked_client brpoplpush srclist{t} dstlist{t} 0
+$watching_client watch dstlist{t}
 $watching_client read
 $watching_client multi
 $watching_client read
-$watching_client get somekey
+$watching_client get somekey{t}
 $watching_client read
-r lpush srclist element
+r lpush srclist{t} element
 $watching_client exec
 $watching_client read
 } {}
@@ -513,60 +489,52 @@ start_server {
 test "BRPOPLPUSH does not affect WATCH while still blocked" {
 set blocked_client [redis_deferring_client]
 set watching_client [redis_deferring_client]
-r del srclist dstlist somekey
-r set somekey somevalue
-$blocked_client brpoplpush srclist dstlist 0
-$watching_client watch dstlist
+r del srclist{t} dstlist{t} somekey{t}
+r set somekey{t} somevalue
+$blocked_client brpoplpush srclist{t} dstlist{t} 0
+$watching_client watch dstlist{t}
 $watching_client read
 $watching_client multi
 $watching_client read
-$watching_client get somekey
+$watching_client get somekey{t}
 $watching_client read
 $watching_client exec
 # Blocked BLPOPLPUSH may create problems, unblock it.
-r lpush srclist element
+r lpush srclist{t} element
 $watching_client read
 } {somevalue}
 test {BRPOPLPUSH timeout} {
 set rd [redis_deferring_client]
-$rd brpoplpush foo_list bar_list 1
-wait_for_condition 100 10 {
-[s blocked_clients] == 1
-} else {
-fail "Timeout waiting for blocked client"
-}
-wait_for_condition 500 10 {
-[s blocked_clients] == 0
-} else {
-fail "Timeout waiting for client to unblock"
-}
+$rd brpoplpush foo_list{t} bar_list{t} 1
+wait_for_blocked_clients_count 1
+wait_for_blocked_clients_count 0 500 10
 $rd read
 } {}
 test "BLPOP when new key is moved into place" {
 set rd [redis_deferring_client]
-$rd blpop foo 5
-r lpush bob abc def hij
-r rename bob foo
+$rd blpop foo{t} 5
+r lpush bob{t} abc def hij
+r rename bob{t} foo{t}
 $rd read
-} {foo hij}
+} {foo{t} hij}
 test "BLPOP when result key is created by SORT..STORE" {
 set rd [redis_deferring_client]
 # zero out list from previous test without explicit delete
-r lpop foo
-r lpop foo
-r lpop foo
-$rd blpop foo 5
-r lpush notfoo hello hola aguacate konichiwa zanzibar
-r sort notfoo ALPHA store foo
+r lpop foo{t}
+r lpop foo{t}
+r lpop foo{t}
+$rd blpop foo{t} 5
+r lpush notfoo{t} hello hola aguacate konichiwa zanzibar
+r sort notfoo{t} ALPHA store foo{t}
 $rd read
-} {foo aguacate}
+} {foo{t} aguacate}
 foreach {pop} {BLPOP BRPOP} {
 test "$pop: with single empty list argument" {
@@ -605,34 +573,34 @@ start_server {
 test "$pop: second argument is not a list" {
 set rd [redis_deferring_client]
-r del blist1 blist2
-r set blist2 nolist
-$rd $pop blist1 blist2 1
+r del blist1{t} blist2{t}
+r set blist2{t} nolist{t}
+$rd $pop blist1{t} blist2{t} 1
 assert_error "WRONGTYPE*" {$rd read}
 }
 test "$pop: timeout" {
 set rd [redis_deferring_client]
-r del blist1 blist2
-$rd $pop blist1 blist2 1
+r del blist1{t} blist2{t}
+$rd $pop blist1{t} blist2{t} 1
 assert_equal {} [$rd read]
 }
 test "$pop: arguments are empty" {
 set rd [redis_deferring_client]
-r del blist1 blist2
-$rd $pop blist1 blist2 1
-r rpush blist1 foo
-assert_equal {blist1 foo} [$rd read]
-assert_equal 0 [r exists blist1]
-assert_equal 0 [r exists blist2]
-$rd $pop blist1 blist2 1
-r rpush blist2 foo
-assert_equal {blist2 foo} [$rd read]
-assert_equal 0 [r exists blist1]
-assert_equal 0 [r exists blist2]
+r del blist1{t} blist2{t}
+$rd $pop blist1{t} blist2{t} 1
+r rpush blist1{t} foo
+assert_equal {blist1{t} foo} [$rd read]
+assert_equal 0 [r exists blist1{t}]
+assert_equal 0 [r exists blist2{t}]
+$rd $pop blist1{t} blist2{t} 1
+r rpush blist2{t} foo
+assert_equal {blist2{t} foo} [$rd read]
+assert_equal 0 [r exists blist1{t}]
+assert_equal 0 [r exists blist2{t}]
 }
 }
@@ -726,7 +694,7 @@ start_server {
 assert_encoding $type mylist
 check_numbered_list_consistency mylist
 check_random_access_consistency mylist
-}
+} {} {needs:debug}
 }
 test {LLEN against non-list value error} {
@@ -757,60 +725,60 @@ start_server {
 foreach {type large} [array get largevalue] {
 test "RPOPLPUSH base case - $type" {
-r del mylist1 mylist2
-create_list mylist1 "a $large c d"
-assert_equal d [r rpoplpush mylist1 mylist2]
-assert_equal c [r rpoplpush mylist1 mylist2]
-assert_equal "a $large" [r lrange mylist1 0 -1]
-assert_equal "c d" [r lrange mylist2 0 -1]
-assert_encoding quicklist mylist2
+r del mylist1{t} mylist2{t}
+create_list mylist1{t} "a $large c d"
+assert_equal d [r rpoplpush mylist1{t} mylist2{t}]
+assert_equal c [r rpoplpush mylist1{t} mylist2{t}]
+assert_equal "a $large" [r lrange mylist1{t} 0 -1]
+assert_equal "c d" [r lrange mylist2{t} 0 -1]
+assert_encoding quicklist mylist2{t}
 }
 foreach wherefrom {left right} {
 foreach whereto {left right} {
 test "LMOVE $wherefrom $whereto base case - $type" {
-r del mylist1 mylist2
+r del mylist1{t} mylist2{t}
 if {$wherefrom eq "right"} {
-create_list mylist1 "c d $large a"
+create_list mylist1{t} "c d $large a"
 } else {
-create_list mylist1 "a $large c d"
+create_list mylist1{t} "a $large c d"
 }
-assert_equal a [r lmove mylist1 mylist2 $wherefrom $whereto]
-assert_equal $large [r lmove mylist1 mylist2 $wherefrom $whereto]
-assert_equal "c d" [r lrange mylist1 0 -1]
+assert_equal a [r lmove mylist1{t} mylist2{t} $wherefrom $whereto]
+assert_equal $large [r lmove mylist1{t} mylist2{t} $wherefrom $whereto]
+assert_equal "c d" [r lrange mylist1{t} 0 -1]
 if {$whereto eq "right"} {
-assert_equal "a $large" [r lrange mylist2 0 -1]
+assert_equal "a $large" [r lrange mylist2{t} 0 -1]
 } else {
-assert_equal "$large a" [r lrange mylist2 0 -1]
+assert_equal "$large a" [r lrange mylist2{t} 0 -1]
 }
-assert_encoding quicklist mylist2
+assert_encoding quicklist mylist2{t}
 }
 }
 }
 test "RPOPLPUSH with the same list as src and dst - $type" {
-create_list mylist "a $large c"
-assert_equal "a $large c" [r lrange mylist 0 -1]
-assert_equal c [r rpoplpush mylist mylist]
-assert_equal "c a $large" [r lrange mylist 0 -1]
+create_list mylist{t} "a $large c"
+assert_equal "a $large c" [r lrange mylist{t} 0 -1]
+assert_equal c [r rpoplpush mylist{t} mylist{t}]
+assert_equal "c a $large" [r lrange mylist{t} 0 -1]
 }
 foreach wherefrom {left right} {
 foreach whereto {left right} {
 test "LMOVE $wherefrom $whereto with the same list as src and dst - $type" {
 if {$wherefrom eq "right"} {
-create_list mylist "a $large c"
-assert_equal "a $large c" [r lrange mylist 0 -1]
+create_list mylist{t} "a $large c"
+assert_equal "a $large c" [r lrange mylist{t} 0 -1]
 } else {
-create_list mylist "c a $large"
-assert_equal "c a $large" [r lrange mylist 0 -1]
+create_list mylist{t} "c a $large"
+assert_equal "c a $large" [r lrange mylist{t} 0 -1]
 }
-assert_equal c [r lmove mylist mylist $wherefrom $whereto]
+assert_equal c [r lmove mylist{t} mylist{t} $wherefrom $whereto]
 if {$whereto eq "right"} {
-assert_equal "a $large c" [r lrange mylist 0 -1]
+assert_equal "a $large c" [r lrange mylist{t} 0 -1]
 } else {
-assert_equal "c a $large" [r lrange mylist 0 -1]
+assert_equal "c a $large" [r lrange mylist{t} 0 -1]
 }
 }
 }
@@ -818,44 +786,44 @@ start_server {
 foreach {othertype otherlarge} [array get largevalue] {
 test "RPOPLPUSH with $type source and existing target $othertype" {
-create_list srclist "a b c $large"
-create_list dstlist "$otherlarge"
-assert_equal $large [r rpoplpush srclist dstlist]
-assert_equal c [r rpoplpush srclist dstlist]
-assert_equal "a b" [r lrange srclist 0 -1]
-assert_equal "c $large $otherlarge" [r lrange dstlist 0 -1]
+create_list srclist{t} "a b c $large"
+create_list dstlist{t} "$otherlarge"
+assert_equal $large [r rpoplpush srclist{t} dstlist{t}]
+assert_equal c [r rpoplpush srclist{t} dstlist{t}]
+assert_equal "a b" [r lrange srclist{t} 0 -1]
+assert_equal "c $large $otherlarge" [r lrange dstlist{t} 0 -1]
 # When we rpoplpush'ed a large value, dstlist should be
 # converted to the same encoding as srclist.
 if {$type eq "linkedlist"} {
-assert_encoding quicklist dstlist
+assert_encoding quicklist dstlist{t}
 }
 }
 foreach wherefrom {left right} {
 foreach whereto {left right} {
 test "LMOVE $wherefrom $whereto with $type source and existing target $othertype" {
-create_list dstlist "$otherlarge"
+create_list dstlist{t} "$otherlarge"
 if {$wherefrom eq "right"} {
-create_list srclist "a b c $large"
+create_list srclist{t} "a b c $large"
 } else {
-create_list srclist "$large c a b"
+create_list srclist{t} "$large c a b"
 }
-assert_equal $large [r lmove srclist dstlist $wherefrom $whereto]
-assert_equal c [r lmove srclist dstlist $wherefrom $whereto]
-assert_equal "a b" [r lrange srclist 0 -1]
+assert_equal $large [r lmove srclist{t} dstlist{t} $wherefrom $whereto]
+assert_equal c [r lmove srclist{t} dstlist{t} $wherefrom $whereto]
+assert_equal "a b" [r lrange srclist{t} 0 -1]
 if {$whereto eq "right"} {
-assert_equal "$otherlarge $large c" [r lrange dstlist 0 -1]
+assert_equal "$otherlarge $large c" [r lrange dstlist{t} 0 -1]
 } else {
-assert_equal "c $large $otherlarge" [r lrange dstlist 0 -1]
+assert_equal "c $large $otherlarge" [r lrange dstlist{t} 0 -1]
 }
 # When we lmoved a large value, dstlist should be
 # converted to the same encoding as srclist.
 if {$type eq "linkedlist"} {
-assert_encoding quicklist dstlist
+assert_encoding quicklist dstlist{t}
 }
 }
 }
@@ -864,31 +832,31 @@ start_server {
 }
 test {RPOPLPUSH against non existing key} {
-r del srclist dstlist
-assert_equal {} [r rpoplpush srclist dstlist]
-assert_equal 0 [r exists srclist]
-assert_equal 0 [r exists dstlist]
+r del srclist{t} dstlist{t}
+assert_equal {} [r rpoplpush srclist{t} dstlist{t}]
+assert_equal 0 [r exists srclist{t}]
+assert_equal 0 [r exists dstlist{t}]
 }
 test {RPOPLPUSH against non list src key} {
-r del srclist dstlist
-r set srclist x
-assert_error WRONGTYPE* {r rpoplpush srclist dstlist}
-assert_type string srclist
-assert_equal 0 [r exists newlist]
+r del srclist{t} dstlist{t}
+r set srclist{t} x
+assert_error WRONGTYPE* {r rpoplpush srclist{t} dstlist{t}}
+assert_type string srclist{t}
+assert_equal 0 [r exists newlist{t}]
 }
 test {RPOPLPUSH against non list dst key} {
-create_list srclist {a b c d}
-r set dstlist x
-assert_error WRONGTYPE* {r rpoplpush srclist dstlist}
-assert_type string dstlist
-assert_equal {a b c d} [r lrange srclist 0 -1]
+create_list srclist{t} {a b c d}
+r set dstlist{t} x
+assert_error WRONGTYPE* {r rpoplpush srclist{t} dstlist{t}}
+assert_type string dstlist{t}
+assert_equal {a b c d} [r lrange srclist{t} 0 -1]
 }
 test {RPOPLPUSH against non existing src key} {
-r del srclist dstlist
-assert_equal {} [r rpoplpush srclist dstlist]
+r del srclist{t} dstlist{t}
+assert_equal {} [r rpoplpush srclist{t} dstlist{t}]
 } {}
 foreach {type large} [array get largevalue] {
@@ -1121,7 +1089,7 @@ start_server {
 set k [r lrange k 0 -1]
 set dump [r dump k]
-r config set sanitize-dump-payload no
+config_set sanitize-dump-payload no mayfail
 r restore kk 0 $dump
 set kk [r lrange kk 0 -1]
@@ -1141,7 +1109,7 @@ start_server {
 } {12 0 9223372036854775808 2147483647 32767 127}
 test {List ziplist of various encodings - sanitize dump} {
-r config set sanitize-dump-payload yes
+config_set sanitize-dump-payload yes mayfail
 r restore kk 0 $dump replace
 set k [r lrange k 0 -1]
 set kk [r lrange kk 0 -1]
@@ -97,7 +99,9 @@ start_server {
 }
 test "Set encoding after DEBUG RELOAD" {
-r del myintset myhashset mylargeintset
+r del myintset
+r del myhashset
+r del mylargeintset
 for {set i 0} {$i < 100} {incr i} { r sadd myintset $i }
 for {set i 0} {$i < 1280} {incr i} { r sadd mylargeintset $i }
 for {set i 0} {$i < 256} {incr i} { r sadd myhashset [format "i%03d" $i] }
@@ -109,7 +111,7 @@ start_server {
 assert_encoding intset myintset
 assert_encoding hashtable mylargeintset
 assert_encoding hashtable myhashset
-}
+} {} {needs:debug}
 test {SREM basics - regular set} {
 create_set myset {foo bar ciao}
@@ -143,19 +145,19 @@ start_server {
 foreach {type} {hashtable intset} {
 for {set i 1} {$i <= 5} {incr i} {
-r del [format "set%d" $i]
+r del [format "set%d{t}" $i]
 }
 for {set i 0} {$i < 200} {incr i} {
-r sadd set1 $i
-r sadd set2 [expr $i+195]
+r sadd set1{t} $i
+r sadd set2{t} [expr $i+195]
 }
 foreach i {199 195 1000 2000} {
-r sadd set3 $i
+r sadd set3{t} $i
 }
 for {set i 5} {$i < 200} {incr i} {
-r sadd set4 $i
+r sadd set4{t} $i
 }
-r sadd set5 0
+r sadd set5{t} 0
 # To make sure the sets are encoded as the type we are testing -- also
 # when the VM is enabled and the values may be swapped in and out
@@ -167,81 +169,81 @@ start_server {
 }
 for {set i 1} {$i <= 5} {incr i} {
-r sadd [format "set%d" $i] $large
+r sadd [format "set%d{t}" $i] $large
 }
 test "Generated sets must be encoded as $type" {
 for {set i 1} {$i <= 5} {incr i} {
-assert_encoding $type [format "set%d" $i]
+assert_encoding $type [format "set%d{t}" $i]
 }
 }
 test "SINTER with two sets - $type" {
-assert_equal [list 195 196 197 198 199 $large] [lsort [r sinter set1 set2]]
+assert_equal [list 195 196 197 198 199 $large] [lsort [r sinter set1{t} set2{t}]]
 }
 test "SINTERSTORE with two sets - $type" {
-r sinterstore setres set1 set2
-assert_encoding $type setres
-assert_equal [list 195 196 197 198 199 $large] [lsort [r smembers setres]]
+r sinterstore setres{t} set1{t} set2{t}
+assert_encoding $type setres{t}
+assert_equal [list 195 196 197 198 199 $large] [lsort [r smembers setres{t}]]
 }
 test "SINTERSTORE with two sets, after a DEBUG RELOAD - $type" {
 r debug reload
-r sinterstore setres set1 set2
-assert_encoding $type setres
-assert_equal [list 195 196 197 198 199 $large] [lsort [r smembers setres]]
-}
+r sinterstore setres{t} set1{t} set2{t}
+assert_encoding $type setres{t}
+assert_equal [list 195 196 197 198 199 $large] [lsort [r smembers setres{t}]]
+} {} {needs:debug}
 test "SUNION with two sets - $type" {
-set expected [lsort -uniq "[r smembers set1] [r smembers set2]"]
-assert_equal $expected [lsort [r sunion set1 set2]]
+set expected [lsort -uniq "[r smembers set1{t}] [r smembers set2{t}]"]
+assert_equal $expected [lsort [r sunion set1{t} set2{t}]]
 }
 test "SUNIONSTORE with two sets - $type" {
-r sunionstore setres set1 set2
-assert_encoding $type setres
-set expected [lsort -uniq "[r smembers set1] [r smembers set2]"]
-assert_equal $expected [lsort [r smembers setres]]
+r sunionstore setres{t} set1{t} set2{t}
+assert_encoding $type setres{t}
+set expected [lsort -uniq "[r smembers set1{t}] [r smembers set2{t}]"]
+assert_equal $expected [lsort [r smembers setres{t}]]
 }
 test "SINTER against three sets - $type" {
-assert_equal [list 195 199 $large] [lsort [r sinter set1 set2 set3]]
+assert_equal [list 195 199 $large] [lsort [r sinter set1{t} set2{t} set3{t}]]
 }
 test "SINTERSTORE with three sets - $type" {
-r sinterstore setres set1 set2 set3
-assert_equal [list 195 199 $large] [lsort [r smembers setres]]
+r sinterstore setres{t} set1{t} set2{t} set3{t}
+assert_equal [list 195 199 $large] [lsort [r smembers setres{t}]]
 }
 test "SUNION with non existing keys - $type" {
-set expected [lsort -uniq "[r smembers set1] [r smembers set2]"]
-assert_equal $expected [lsort [r sunion nokey1 set1 set2 nokey2]]
+set expected [lsort -uniq "[r smembers set1{t}] [r smembers set2{t}]"]
+assert_equal $expected [lsort [r sunion nokey1{t} set1{t} set2{t} nokey2{t}]]
 }
 test "SDIFF with two sets - $type" {
-assert_equal {0 1 2 3 4} [lsort [r sdiff set1 set4]]
+assert_equal {0 1 2 3 4} [lsort [r sdiff set1{t} set4{t}]]
 }
 test "SDIFF with three sets - $type" {
-assert_equal {1 2 3 4} [lsort [r sdiff set1 set4 set5]]
+assert_equal {1 2 3 4} [lsort [r sdiff set1{t} set4{t} set5{t}]]
 }
 test "SDIFFSTORE with three sets - $type" {
-r sdiffstore setres set1 set4 set5
+r sdiffstore setres{t} set1{t} set4{t} set5{t}
 # When we start with intsets, we should always end with intsets.
 if {$type eq {intset}} {
-assert_encoding intset setres
+assert_encoding intset setres{t}
 }
-assert_equal {1 2 3 4} [lsort [r smembers setres]]
+assert_equal {1 2 3 4} [lsort [r smembers setres{t}]]
 }
 }
 test "SDIFF with first set empty" {
-r del set1 set2 set3
-r sadd set2 1 2 3 4
-r sadd set3 a b c d
-r sdiff set1 set2 set3
+r del set1{t} set2{t} set3{t}
+r sadd set2{t} 1 2 3 4
+r sadd set3{t} a b c d
+r sdiff set1{t} set2{t} set3{t}
 } {}
 test "SDIFF with same set two times" {
@@ -258,11 +260,11 @@ start_server {
 set num_sets [expr {[randomInt 10]+1}]
 for {set i 0} {$i < $num_sets} {incr i} {
 set num_elements [randomInt 100]
-r del set_$i
-lappend args set_$i
+r del set_$i{t}
+lappend args set_$i{t}
 while {$num_elements} {
 set ele [randomValue]
-r sadd set_$i $ele
+r sadd set_$i{t} $ele
 if {$i == 0} {
 set s($ele) x
 } else {
@ -277,42 +279,42 @@ start_server {
} }
test "SINTER against non-set should throw error" { test "SINTER against non-set should throw error" {
r set key1 x r set key1{t} x
assert_error "WRONGTYPE*" {r sinter key1 noset} assert_error "WRONGTYPE*" {r sinter key1{t} noset{t}}
} }
test "SUNION against non-set should throw error" { test "SUNION against non-set should throw error" {
r set key1 x r set key1{t} x
assert_error "WRONGTYPE*" {r sunion key1 noset} assert_error "WRONGTYPE*" {r sunion key1{t} noset{t}}
} }
test "SINTER should handle non existing key as empty" { test "SINTER should handle non existing key as empty" {
r del set1 set2 set3 r del set1{t} set2{t} set3{t}
r sadd set1 a b c r sadd set1{t} a b c
r sadd set2 b c d r sadd set2{t} b c d
r sinter set1 set2 set3 r sinter set1{t} set2{t} set3{t}
} {} } {}
test "SINTER with same integer elements but different encoding" { test "SINTER with same integer elements but different encoding" {
r del set1 set2 r del set1{t} set2{t}
r sadd set1 1 2 3 r sadd set1{t} 1 2 3
r sadd set2 1 2 3 a r sadd set2{t} 1 2 3 a
r srem set2 a r srem set2{t} a
assert_encoding intset set1 assert_encoding intset set1{t}
assert_encoding hashtable set2 assert_encoding hashtable set2{t}
lsort [r sinter set1 set2] lsort [r sinter set1{t} set2{t}]
} {1 2 3} } {1 2 3}
test "SINTERSTORE against non existing keys should delete dstkey" { test "SINTERSTORE against non existing keys should delete dstkey" {
r set setres xxx r set setres{t} xxx
assert_equal 0 [r sinterstore setres foo111 bar222] assert_equal 0 [r sinterstore setres{t} foo111{t} bar222{t}]
assert_equal 0 [r exists setres] assert_equal 0 [r exists setres{t}]
} }
test "SUNIONSTORE against non existing keys should delete dstkey" { test "SUNIONSTORE against non existing keys should delete dstkey" {
r set setres xxx r set setres{t} xxx
assert_equal 0 [r sunionstore setres foo111 bar222] assert_equal 0 [r sunionstore setres{t} foo111{t} bar222{t}]
assert_equal 0 [r exists setres] assert_equal 0 [r exists setres{t}]
} }
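[Editorial note] The `{t}` suffixes added throughout this diff are Redis Cluster hash tags: when a key name contains a non-empty `{...}` section, only the substring inside the braces is hashed, so multi-key commands such as `SINTER` and `SUNIONSTORE` see all of their keys on one slot instead of failing with `CROSSSLOT` when the suite runs against a cluster. A minimal Python sketch of the slot computation (CRC16/XMODEM mod 16384, per the cluster specification):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM, the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash only the first non-empty {...} tag if present, then mod 16384."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # the tag must be non-empty
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# All {t}-tagged keys land on the same slot, so `SINTER key1{t} noset{t}` is legal.
print(key_slot("key1{t}") == key_slot("noset{t}") == key_slot("{t}"))  # True
```

This is why the rewrite mechanically appends `{t}` to every key touched by a multi-key command: untagged names like `key1` and `noset` would hash to different slots.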
foreach {type contents} {hashtable {a b c} intset {1 2 3}} { foreach {type contents} {hashtable {a b c} intset {1 2 3}} {
@@ -555,81 +557,81 @@ start_server {
} }
proc setup_move {} { proc setup_move {} {
r del myset3 myset4 r del myset3{t} myset4{t}
create_set myset1 {1 a b} create_set myset1{t} {1 a b}
create_set myset2 {2 3 4} create_set myset2{t} {2 3 4}
assert_encoding hashtable myset1 assert_encoding hashtable myset1{t}
assert_encoding intset myset2 assert_encoding intset myset2{t}
} }
test "SMOVE basics - from regular set to intset" { test "SMOVE basics - from regular set to intset" {
# move a non-integer element to an intset should convert encoding # move a non-integer element to an intset should convert encoding
setup_move setup_move
assert_equal 1 [r smove myset1 myset2 a] assert_equal 1 [r smove myset1{t} myset2{t} a]
assert_equal {1 b} [lsort [r smembers myset1]] assert_equal {1 b} [lsort [r smembers myset1{t}]]
assert_equal {2 3 4 a} [lsort [r smembers myset2]] assert_equal {2 3 4 a} [lsort [r smembers myset2{t}]]
assert_encoding hashtable myset2 assert_encoding hashtable myset2{t}
# move an integer element should not convert the encoding # move an integer element should not convert the encoding
setup_move setup_move
assert_equal 1 [r smove myset1 myset2 1] assert_equal 1 [r smove myset1{t} myset2{t} 1]
assert_equal {a b} [lsort [r smembers myset1]] assert_equal {a b} [lsort [r smembers myset1{t}]]
assert_equal {1 2 3 4} [lsort [r smembers myset2]] assert_equal {1 2 3 4} [lsort [r smembers myset2{t}]]
assert_encoding intset myset2 assert_encoding intset myset2{t}
} }
test "SMOVE basics - from intset to regular set" { test "SMOVE basics - from intset to regular set" {
setup_move setup_move
assert_equal 1 [r smove myset2 myset1 2] assert_equal 1 [r smove myset2{t} myset1{t} 2]
assert_equal {1 2 a b} [lsort [r smembers myset1]] assert_equal {1 2 a b} [lsort [r smembers myset1{t}]]
assert_equal {3 4} [lsort [r smembers myset2]] assert_equal {3 4} [lsort [r smembers myset2{t}]]
} }
test "SMOVE non existing key" { test "SMOVE non existing key" {
setup_move setup_move
assert_equal 0 [r smove myset1 myset2 foo] assert_equal 0 [r smove myset1{t} myset2{t} foo]
assert_equal 0 [r smove myset1 myset1 foo] assert_equal 0 [r smove myset1{t} myset1{t} foo]
assert_equal {1 a b} [lsort [r smembers myset1]] assert_equal {1 a b} [lsort [r smembers myset1{t}]]
assert_equal {2 3 4} [lsort [r smembers myset2]] assert_equal {2 3 4} [lsort [r smembers myset2{t}]]
} }
test "SMOVE non existing src set" { test "SMOVE non existing src set" {
setup_move setup_move
assert_equal 0 [r smove noset myset2 foo] assert_equal 0 [r smove noset{t} myset2{t} foo]
assert_equal {2 3 4} [lsort [r smembers myset2]] assert_equal {2 3 4} [lsort [r smembers myset2{t}]]
} }
test "SMOVE from regular set to non existing destination set" { test "SMOVE from regular set to non existing destination set" {
setup_move setup_move
assert_equal 1 [r smove myset1 myset3 a] assert_equal 1 [r smove myset1{t} myset3{t} a]
assert_equal {1 b} [lsort [r smembers myset1]] assert_equal {1 b} [lsort [r smembers myset1{t}]]
assert_equal {a} [lsort [r smembers myset3]] assert_equal {a} [lsort [r smembers myset3{t}]]
assert_encoding hashtable myset3 assert_encoding hashtable myset3{t}
} }
test "SMOVE from intset to non existing destination set" { test "SMOVE from intset to non existing destination set" {
setup_move setup_move
assert_equal 1 [r smove myset2 myset3 2] assert_equal 1 [r smove myset2{t} myset3{t} 2]
assert_equal {3 4} [lsort [r smembers myset2]] assert_equal {3 4} [lsort [r smembers myset2{t}]]
assert_equal {2} [lsort [r smembers myset3]] assert_equal {2} [lsort [r smembers myset3{t}]]
assert_encoding intset myset3 assert_encoding intset myset3{t}
} }
test "SMOVE wrong src key type" { test "SMOVE wrong src key type" {
r set x 10 r set x{t} 10
assert_error "WRONGTYPE*" {r smove x myset2 foo} assert_error "WRONGTYPE*" {r smove x{t} myset2{t} foo}
} }
test "SMOVE wrong dst key type" { test "SMOVE wrong dst key type" {
r set x 10 r set x{t} 10
assert_error "WRONGTYPE*" {r smove myset2 x foo} assert_error "WRONGTYPE*" {r smove myset2{t} x{t} foo}
} }
test "SMOVE with identical source and destination" { test "SMOVE with identical source and destination" {
r del set r del set{t}
r sadd set a b c r sadd set{t} a b c
r smove set set b r smove set{t} set{t} b
lsort [r smembers set] lsort [r smembers set{t}]
} {a b c} } {a b c}
tags {slow} { tags {slow} {


@@ -214,24 +214,24 @@ start_server {
} }
test {RENAME can unblock XREADGROUP with data} { test {RENAME can unblock XREADGROUP with data} {
r del mystream r del mystream{t}
r XGROUP CREATE mystream mygroup $ MKSTREAM r XGROUP CREATE mystream{t} mygroup $ MKSTREAM
set rd [redis_deferring_client] set rd [redis_deferring_client]
$rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS mystream ">" $rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS mystream{t} ">"
r XGROUP CREATE mystream2 mygroup $ MKSTREAM r XGROUP CREATE mystream2{t} mygroup $ MKSTREAM
r XADD mystream2 100 f1 v1 r XADD mystream2{t} 100 f1 v1
r RENAME mystream2 mystream r RENAME mystream2{t} mystream{t}
assert_equal "{mystream {{100-0 {f1 v1}}}}" [$rd read] ;# mystream2 had mygroup before RENAME assert_equal "{mystream{t} {{100-0 {f1 v1}}}}" [$rd read] ;# mystream2{t} had mygroup before RENAME
} }
test {RENAME can unblock XREADGROUP with -NOGROUP} { test {RENAME can unblock XREADGROUP with -NOGROUP} {
r del mystream r del mystream{t}
r XGROUP CREATE mystream mygroup $ MKSTREAM r XGROUP CREATE mystream{t} mygroup $ MKSTREAM
set rd [redis_deferring_client] set rd [redis_deferring_client]
$rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS mystream ">" $rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS mystream{t} ">"
r XADD mystream2 100 f1 v1 r XADD mystream2{t} 100 f1 v1
r RENAME mystream2 mystream r RENAME mystream2{t} mystream{t}
assert_error "*NOGROUP*" {$rd read} ;# mystream2 didn't have mygroup before RENAME assert_error "*NOGROUP*" {$rd read} ;# mystream2{t} didn't have mygroup before RENAME
} }
test {XCLAIM can claim PEL items from another consumer} { test {XCLAIM can claim PEL items from another consumer} {
@@ -548,7 +548,7 @@ start_server {
assert_error "*NOGROUP*" {r XGROUP CREATECONSUMER mystream mygroup consumer} assert_error "*NOGROUP*" {r XGROUP CREATECONSUMER mystream mygroup consumer}
} }
start_server {tags {"stream"} overrides {appendonly yes aof-use-rdb-preamble no appendfsync always}} { start_server {tags {"stream needs:debug"} overrides {appendonly yes aof-use-rdb-preamble no appendfsync always}} {
test {XREADGROUP with NOACK creates consumer} { test {XREADGROUP with NOACK creates consumer} {
r del mystream r del mystream
r XGROUP CREATE mystream mygroup $ MKSTREAM r XGROUP CREATE mystream mygroup $ MKSTREAM
@@ -596,7 +596,7 @@ start_server {
} }
} }
start_server {} { start_server {tags {"external:skip"}} {
set master [srv -1 client] set master [srv -1 client]
set master_host [srv -1 host] set master_host [srv -1 host]
set master_port [srv -1 port] set master_port [srv -1 port]
@@ -647,7 +647,7 @@ start_server {
} }
} }
start_server {tags {"stream"} overrides {appendonly yes aof-use-rdb-preamble no}} { start_server {tags {"stream needs:debug"} overrides {appendonly yes aof-use-rdb-preamble no}} {
test {Empty stream with no lastid can be rewrite into AOF correctly} { test {Empty stream with no lastid can be rewrite into AOF correctly} {
r XGROUP CREATE mystream group-name $ MKSTREAM r XGROUP CREATE mystream group-name $ MKSTREAM
assert {[dict get [r xinfo stream mystream] length] == 0} assert {[dict get [r xinfo stream mystream] length] == 0}


@@ -117,6 +117,7 @@ start_server {
test {XADD with MAXLEN option and the '~' argument} { test {XADD with MAXLEN option and the '~' argument} {
r DEL mystream r DEL mystream
r config set stream-node-max-entries 100
for {set j 0} {$j < 1000} {incr j} { for {set j 0} {$j < 1000} {incr j} {
if {rand() < 0.9} { if {rand() < 0.9} {
r XADD mystream MAXLEN ~ 555 * xitem $j r XADD mystream MAXLEN ~ 555 * xitem $j
@@ -172,19 +173,23 @@ start_server {
assert_equal [r XRANGE mystream - +] {{3-0 {f v}} {4-0 {f v}} {5-0 {f v}}} assert_equal [r XRANGE mystream - +] {{3-0 {f v}} {4-0 {f v}} {5-0 {f v}}}
} }
test {XADD mass insertion and XLEN} { proc insert_into_stream_key {key {count 10000}} {
r DEL mystream
r multi r multi
for {set j 0} {$j < 10000} {incr j} { for {set j 0} {$j < $count} {incr j} {
# From time to time insert a field with a different set # From time to time insert a field with a different set
# of fields in order to stress the stream compression code. # of fields in order to stress the stream compression code.
if {rand() < 0.9} { if {rand() < 0.9} {
r XADD mystream * item $j r XADD $key * item $j
} else { } else {
r XADD mystream * item $j otherfield foo r XADD $key * item $j otherfield foo
} }
} }
r exec r exec
}
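[Editorial note] The new `insert_into_stream_key` helper wraps its `XADD` loop in `MULTI`/`EXEC`, so the whole mass insertion is queued on the server and applied as one batch instead of thousands of individual round trips. A rough Python sketch of that queue-then-flush behavior (the `FakePipeline` class is a hypothetical toy, not part of the suite):

```python
class FakePipeline:
    """Toy stand-in for MULTI/EXEC: commands queue locally until execute()."""
    def __init__(self):
        self.queued = []

    def xadd(self, key, **fields):
        # Like XADD inside MULTI: nothing is applied yet, just queued.
        self.queued.append(("XADD", key, fields))

    def execute(self):
        # Like EXEC: the whole batch is released at once.
        batch, self.queued = self.queued, []
        return batch

pipe = FakePipeline()
for j in range(3):
    pipe.xadd("mystream", item=j)
print(len(pipe.execute()))  # 3
print(pipe.queued)          # []
```

Extracting this into a shared proc also removes a test anti-pattern the commit message calls out: later tests no longer depend on an earlier test having populated the stream.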
test {XADD mass insertion and XLEN} {
r DEL mystream
insert_into_stream_key mystream
set items [r XRANGE mystream - +] set items [r XRANGE mystream - +]
for {set j 0} {$j < 10000} {incr j} { for {set j 0} {$j < 10000} {incr j} {
@@ -267,32 +272,33 @@ start_server {
} }
test {Non blocking XREAD with empty streams} { test {Non blocking XREAD with empty streams} {
set res [r XREAD STREAMS s1 s2 0-0 0-0] set res [r XREAD STREAMS s1{t} s2{t} 0-0 0-0]
assert {$res eq {}} assert {$res eq {}}
} }
test {XREAD with non empty second stream} { test {XREAD with non empty second stream} {
set res [r XREAD COUNT 1 STREAMS nostream mystream 0-0 0-0] insert_into_stream_key mystream{t}
assert {[lindex $res 0 0] eq {mystream}} set res [r XREAD COUNT 1 STREAMS nostream{t} mystream{t} 0-0 0-0]
assert {[lindex $res 0 0] eq {mystream{t}}}
assert {[lrange [lindex $res 0 1 0 1] 0 1] eq {item 0}} assert {[lrange [lindex $res 0 1 0 1] 0 1] eq {item 0}}
} }
test {Blocking XREAD waiting new data} { test {Blocking XREAD waiting new data} {
r XADD s2 * old abcd1234 r XADD s2{t} * old abcd1234
set rd [redis_deferring_client] set rd [redis_deferring_client]
$rd XREAD BLOCK 20000 STREAMS s1 s2 s3 $ $ $ $rd XREAD BLOCK 20000 STREAMS s1{t} s2{t} s3{t} $ $ $
r XADD s2 * new abcd1234 r XADD s2{t} * new abcd1234
set res [$rd read] set res [$rd read]
assert {[lindex $res 0 0] eq {s2}} assert {[lindex $res 0 0] eq {s2{t}}}
assert {[lindex $res 0 1 0 1] eq {new abcd1234}} assert {[lindex $res 0 1 0 1] eq {new abcd1234}}
} }
test {Blocking XREAD waiting old data} { test {Blocking XREAD waiting old data} {
set rd [redis_deferring_client] set rd [redis_deferring_client]
$rd XREAD BLOCK 20000 STREAMS s1 s2 s3 $ 0-0 $ $rd XREAD BLOCK 20000 STREAMS s1{t} s2{t} s3{t} $ 0-0 $
r XADD s2 * foo abcd1234 r XADD s2{t} * foo abcd1234
set res [$rd read] set res [$rd read]
assert {[lindex $res 0 0] eq {s2}} assert {[lindex $res 0 0] eq {s2{t}}}
assert {[lindex $res 0 1 0 1] eq {old abcd1234}} assert {[lindex $res 0 1 0 1] eq {old abcd1234}}
} }
@@ -410,12 +416,13 @@ start_server {
} }
test {XRANGE fuzzing} { test {XRANGE fuzzing} {
set items [r XRANGE mystream{t} - +]
set low_id [lindex $items 0 0] set low_id [lindex $items 0 0]
set high_id [lindex $items end 0] set high_id [lindex $items end 0]
for {set j 0} {$j < 100} {incr j} { for {set j 0} {$j < 100} {incr j} {
set start [streamRandomID $low_id $high_id] set start [streamRandomID $low_id $high_id]
set end [streamRandomID $low_id $high_id] set end [streamRandomID $low_id $high_id]
set range [r xrange mystream $start $end] set range [r xrange mystream{t} $start $end]
set tcl_range [streamSimulateXRANGE $items $start $end] set tcl_range [streamSimulateXRANGE $items $start $end]
if {$range ne $tcl_range} { if {$range ne $tcl_range} {
puts "*** WARNING *** - XRANGE fuzzing mismatch: $start - $end" puts "*** WARNING *** - XRANGE fuzzing mismatch: $start - $end"
@@ -546,7 +553,7 @@ start_server {
} }
} }
start_server {tags {"stream"} overrides {appendonly yes}} { start_server {tags {"stream needs:debug"} overrides {appendonly yes}} {
test {XADD with MAXLEN > xlen can propagate correctly} { test {XADD with MAXLEN > xlen can propagate correctly} {
for {set j 0} {$j < 100} {incr j} { for {set j 0} {$j < 100} {incr j} {
r XADD mystream * xitem v r XADD mystream * xitem v
@@ -561,7 +568,7 @@ start_server {tags {"stream"} overrides {appendonly yes}} {
} }
} }
start_server {tags {"stream"} overrides {appendonly yes}} { start_server {tags {"stream needs:debug"} overrides {appendonly yes}} {
test {XADD with MINID > lastid can propagate correctly} { test {XADD with MINID > lastid can propagate correctly} {
for {set j 0} {$j < 100} {incr j} { for {set j 0} {$j < 100} {incr j} {
set id [expr {$j+1}] set id [expr {$j+1}]
@@ -577,7 +584,7 @@ start_server {tags {"stream"} overrides {appendonly yes}} {
} }
} }
start_server {tags {"stream"} overrides {appendonly yes}} { start_server {tags {"stream needs:debug"} overrides {appendonly yes stream-node-max-entries 100}} {
test {XADD with ~ MAXLEN can propagate correctly} { test {XADD with ~ MAXLEN can propagate correctly} {
for {set j 0} {$j < 100} {incr j} { for {set j 0} {$j < 100} {incr j} {
r XADD mystream * xitem v r XADD mystream * xitem v
@@ -593,7 +600,7 @@ start_server {tags {"stream"} overrides {appendonly yes}} {
} }
} }
start_server {tags {"stream"} overrides {appendonly yes stream-node-max-entries 10}} { start_server {tags {"stream needs:debug"} overrides {appendonly yes stream-node-max-entries 10}} {
test {XADD with ~ MAXLEN and LIMIT can propagate correctly} { test {XADD with ~ MAXLEN and LIMIT can propagate correctly} {
for {set j 0} {$j < 100} {incr j} { for {set j 0} {$j < 100} {incr j} {
r XADD mystream * xitem v r XADD mystream * xitem v
@@ -607,7 +614,7 @@ start_server {tags {"stream"} overrides {appendonly yes stream-node-max-entries
} }
} }
start_server {tags {"stream"} overrides {appendonly yes}} { start_server {tags {"stream needs:debug"} overrides {appendonly yes stream-node-max-entries 100}} {
test {XADD with ~ MINID can propagate correctly} { test {XADD with ~ MINID can propagate correctly} {
for {set j 0} {$j < 100} {incr j} { for {set j 0} {$j < 100} {incr j} {
set id [expr {$j+1}] set id [expr {$j+1}]
@@ -624,7 +631,7 @@ start_server {tags {"stream"} overrides {appendonly yes}} {
} }
} }
start_server {tags {"stream"} overrides {appendonly yes stream-node-max-entries 10}} { start_server {tags {"stream needs:debug"} overrides {appendonly yes stream-node-max-entries 10}} {
test {XADD with ~ MINID and LIMIT can propagate correctly} { test {XADD with ~ MINID and LIMIT can propagate correctly} {
for {set j 0} {$j < 100} {incr j} { for {set j 0} {$j < 100} {incr j} {
set id [expr {$j+1}] set id [expr {$j+1}]
@@ -639,7 +646,7 @@ start_server {tags {"stream"} overrides {appendonly yes stream-node-max-entries
} }
} }
start_server {tags {"stream"} overrides {appendonly yes stream-node-max-entries 10}} { start_server {tags {"stream needs:debug"} overrides {appendonly yes stream-node-max-entries 10}} {
test {XTRIM with ~ MAXLEN can propagate correctly} { test {XTRIM with ~ MAXLEN can propagate correctly} {
for {set j 0} {$j < 100} {incr j} { for {set j 0} {$j < 100} {incr j} {
r XADD mystream * xitem v r XADD mystream * xitem v
@@ -678,7 +685,7 @@ start_server {tags {"stream xsetid"}} {
} {ERR no such key} } {ERR no such key}
} }
start_server {tags {"stream"} overrides {appendonly yes aof-use-rdb-preamble no}} { start_server {tags {"stream needs:debug"} overrides {appendonly yes aof-use-rdb-preamble no}} {
test {Empty stream can be rewrite into AOF correctly} { test {Empty stream can be rewrite into AOF correctly} {
r XADD mystream MAXLEN 0 * a b r XADD mystream MAXLEN 0 * a b
assert {[dict get [r xinfo stream mystream] length] == 0} assert {[dict get [r xinfo stream mystream] length] == 0}


@@ -173,7 +173,7 @@ start_server {tags {"string"}} {
{set foo bar} {set foo bar}
{del foo} {del foo}
} }
} } {} {needs:repl}
test {GETEX without argument does not propagate to replica} { test {GETEX without argument does not propagate to replica} {
set repl [attach_to_replication_stream] set repl [attach_to_replication_stream]
@@ -185,23 +185,23 @@ start_server {tags {"string"}} {
{set foo bar} {set foo bar}
{del foo} {del foo}
} }
} } {} {needs:repl}
test {MGET} { test {MGET} {
r flushdb r flushdb
r set foo BAR r set foo{t} BAR
r set bar FOO r set bar{t} FOO
r mget foo bar r mget foo{t} bar{t}
} {BAR FOO} } {BAR FOO}
test {MGET against non existing key} { test {MGET against non existing key} {
r mget foo baazz bar r mget foo{t} baazz{t} bar{t}
} {BAR {} FOO} } {BAR {} FOO}
test {MGET against non-string key} { test {MGET against non-string key} {
r sadd myset ciao r sadd myset{t} ciao
r sadd myset bau r sadd myset{t} bau
r mget foo baazz bar myset r mget foo{t} baazz{t} bar{t} myset{t}
} {BAR {} FOO {}} } {BAR {} FOO {}}
test {GETSET (set new value)} { test {GETSET (set new value)} {
@@ -215,21 +215,21 @@ start_server {tags {"string"}} {
} {bar xyz} } {bar xyz}
test {MSET base case} { test {MSET base case} {
r mset x 10 y "foo bar" z "x x x x x x x\n\n\r\n" r mset x{t} 10 y{t} "foo bar" z{t} "x x x x x x x\n\n\r\n"
r mget x y z r mget x{t} y{t} z{t}
} [list 10 {foo bar} "x x x x x x x\n\n\r\n"] } [list 10 {foo bar} "x x x x x x x\n\n\r\n"]
test {MSET wrong number of args} { test {MSET wrong number of args} {
catch {r mset x 10 y "foo bar" z} err catch {r mset x{t} 10 y{t} "foo bar" z{t}} err
format $err format $err
} {*wrong number*} } {*wrong number*}
test {MSETNX with already existent key} { test {MSETNX with already existent key} {
list [r msetnx x1 xxx y2 yyy x 20] [r exists x1] [r exists y2] list [r msetnx x1{t} xxx y2{t} yyy x{t} 20] [r exists x1{t}] [r exists y2{t}]
} {0 0 0} } {0 0 0}
test {MSETNX with not existing keys} { test {MSETNX with not existing keys} {
list [r msetnx x1 xxx y2 yyy] [r get x1] [r get y2] list [r msetnx x1{t} xxx y2{t} yyy] [r get x1{t}] [r get y2{t}]
} {1 xxx yyy} } {1 xxx yyy}
test "STRLEN against non-existing key" { test "STRLEN against non-existing key" {
@@ -582,20 +582,20 @@ start_server {tags {"string"}} {
} [string length $rnalcs] } [string length $rnalcs]
test {LCS with KEYS option} { test {LCS with KEYS option} {
r set virus1 $rna1 r set virus1{t} $rna1
r set virus2 $rna2 r set virus2{t} $rna2
r STRALGO LCS KEYS virus1 virus2 r STRALGO LCS KEYS virus1{t} virus2{t}
} $rnalcs } $rnalcs
test {LCS indexes} { test {LCS indexes} {
dict get [r STRALGO LCS IDX KEYS virus1 virus2] matches dict get [r STRALGO LCS IDX KEYS virus1{t} virus2{t}] matches
} {{{238 238} {239 239}} {{236 236} {238 238}} {{229 230} {236 237}} {{224 224} {235 235}} {{1 222} {13 234}}} } {{{238 238} {239 239}} {{236 236} {238 238}} {{229 230} {236 237}} {{224 224} {235 235}} {{1 222} {13 234}}}
test {LCS indexes with match len} { test {LCS indexes with match len} {
dict get [r STRALGO LCS IDX KEYS virus1 virus2 WITHMATCHLEN] matches dict get [r STRALGO LCS IDX KEYS virus1{t} virus2{t} WITHMATCHLEN] matches
} {{{238 238} {239 239} 1} {{236 236} {238 238} 1} {{229 230} {236 237} 2} {{224 224} {235 235} 1} {{1 222} {13 234} 222}} } {{{238 238} {239 239} 1} {{236 236} {238 238} 1} {{229 230} {236 237} 2} {{224 224} {235 235} 1} {{1 222} {13 234} 222}}
test {LCS indexes with match len and minimum match len} { test {LCS indexes with match len and minimum match len} {
dict get [r STRALGO LCS IDX KEYS virus1 virus2 WITHMATCHLEN MINMATCHLEN 5] matches dict get [r STRALGO LCS IDX KEYS virus1{t} virus2{t} WITHMATCHLEN MINMATCHLEN 5] matches
} {{{1 222} {13 234} 222}} } {{{1 222} {13 234} 222}}
} }


@@ -642,9 +642,9 @@ start_server {tags {"zset"}} {
} }
test "ZUNIONSTORE against non-existing key doesn't set destination - $encoding" { test "ZUNIONSTORE against non-existing key doesn't set destination - $encoding" {
r del zseta r del zseta{t}
assert_equal 0 [r zunionstore dst_key 1 zseta] assert_equal 0 [r zunionstore dst_key{t} 1 zseta{t}]
assert_equal 0 [r exists dst_key] assert_equal 0 [r exists dst_key{t}]
} }
test "ZUNION/ZINTER/ZDIFF against non-existing key - $encoding" { test "ZUNION/ZINTER/ZDIFF against non-existing key - $encoding" {
@@ -655,214 +655,214 @@ start_server {tags {"zset"}} {
} }
test "ZUNIONSTORE with empty set - $encoding" { test "ZUNIONSTORE with empty set - $encoding" {
r del zseta zsetb r del zseta{t} zsetb{t}
r zadd zseta 1 a r zadd zseta{t} 1 a
r zadd zseta 2 b r zadd zseta{t} 2 b
r zunionstore zsetc 2 zseta zsetb r zunionstore zsetc{t} 2 zseta{t} zsetb{t}
r zrange zsetc 0 -1 withscores r zrange zsetc{t} 0 -1 withscores
} {a 1 b 2} } {a 1 b 2}
test "ZUNION/ZINTER/ZDIFF with empty set - $encoding" { test "ZUNION/ZINTER/ZDIFF with empty set - $encoding" {
r del zseta zsetb r del zseta{t} zsetb{t}
r zadd zseta 1 a r zadd zseta{t} 1 a
r zadd zseta 2 b r zadd zseta{t} 2 b
assert_equal {a 1 b 2} [r zunion 2 zseta zsetb withscores] assert_equal {a 1 b 2} [r zunion 2 zseta{t} zsetb{t} withscores]
assert_equal {} [r zinter 2 zseta zsetb withscores] assert_equal {} [r zinter 2 zseta{t} zsetb{t} withscores]
assert_equal {a 1 b 2} [r zdiff 2 zseta zsetb withscores] assert_equal {a 1 b 2} [r zdiff 2 zseta{t} zsetb{t} withscores]
} }
test "ZUNIONSTORE basics - $encoding" { test "ZUNIONSTORE basics - $encoding" {
r del zseta zsetb zsetc r del zseta{t} zsetb{t} zsetc{t}
r zadd zseta 1 a r zadd zseta{t} 1 a
r zadd zseta 2 b r zadd zseta{t} 2 b
r zadd zseta 3 c r zadd zseta{t} 3 c
r zadd zsetb 1 b r zadd zsetb{t} 1 b
r zadd zsetb 2 c r zadd zsetb{t} 2 c
r zadd zsetb 3 d r zadd zsetb{t} 3 d
assert_equal 4 [r zunionstore zsetc 2 zseta zsetb] assert_equal 4 [r zunionstore zsetc{t} 2 zseta{t} zsetb{t}]
assert_equal {a 1 b 3 d 3 c 5} [r zrange zsetc 0 -1 withscores] assert_equal {a 1 b 3 d 3 c 5} [r zrange zsetc{t} 0 -1 withscores]
} }
test "ZUNION/ZINTER/ZDIFF with integer members - $encoding" { test "ZUNION/ZINTER/ZDIFF with integer members - $encoding" {
r del zsetd zsetf r del zsetd{t} zsetf{t}
r zadd zsetd 1 1 r zadd zsetd{t} 1 1
r zadd zsetd 2 2 r zadd zsetd{t} 2 2
r zadd zsetd 3 3 r zadd zsetd{t} 3 3
r zadd zsetf 1 1 r zadd zsetf{t} 1 1
r zadd zsetf 3 3 r zadd zsetf{t} 3 3
r zadd zsetf 4 4 r zadd zsetf{t} 4 4
assert_equal {1 2 2 2 4 4 3 6} [r zunion 2 zsetd zsetf withscores] assert_equal {1 2 2 2 4 4 3 6} [r zunion 2 zsetd{t} zsetf{t} withscores]
assert_equal {1 2 3 6} [r zinter 2 zsetd zsetf withscores] assert_equal {1 2 3 6} [r zinter 2 zsetd{t} zsetf{t} withscores]
assert_equal {2 2} [r zdiff 2 zsetd zsetf withscores] assert_equal {2 2} [r zdiff 2 zsetd{t} zsetf{t} withscores]
} }
test "ZUNIONSTORE with weights - $encoding" { test "ZUNIONSTORE with weights - $encoding" {
assert_equal 4 [r zunionstore zsetc 2 zseta zsetb weights 2 3] assert_equal 4 [r zunionstore zsetc{t} 2 zseta{t} zsetb{t} weights 2 3]
assert_equal {a 2 b 7 d 9 c 12} [r zrange zsetc 0 -1 withscores] assert_equal {a 2 b 7 d 9 c 12} [r zrange zsetc{t} 0 -1 withscores]
} }
test "ZUNION with weights - $encoding" { test "ZUNION with weights - $encoding" {
assert_equal {a 2 b 7 d 9 c 12} [r zunion 2 zseta zsetb weights 2 3 withscores] assert_equal {a 2 b 7 d 9 c 12} [r zunion 2 zseta{t} zsetb{t} weights 2 3 withscores]
assert_equal {b 7 c 12} [r zinter 2 zseta zsetb weights 2 3 withscores] assert_equal {b 7 c 12} [r zinter 2 zseta{t} zsetb{t} weights 2 3 withscores]
} }
test "ZUNIONSTORE with a regular set and weights - $encoding" { test "ZUNIONSTORE with a regular set and weights - $encoding" {
r del seta r del seta{t}
r sadd seta a r sadd seta{t} a
r sadd seta b r sadd seta{t} b
r sadd seta c r sadd seta{t} c
assert_equal 4 [r zunionstore zsetc 2 seta zsetb weights 2 3] assert_equal 4 [r zunionstore zsetc{t} 2 seta{t} zsetb{t} weights 2 3]
assert_equal {a 2 b 5 c 8 d 9} [r zrange zsetc 0 -1 withscores] assert_equal {a 2 b 5 c 8 d 9} [r zrange zsetc{t} 0 -1 withscores]
} }
test "ZUNIONSTORE with AGGREGATE MIN - $encoding" { test "ZUNIONSTORE with AGGREGATE MIN - $encoding" {
assert_equal 4 [r zunionstore zsetc 2 zseta zsetb aggregate min] assert_equal 4 [r zunionstore zsetc{t} 2 zseta{t} zsetb{t} aggregate min]
assert_equal {a 1 b 1 c 2 d 3} [r zrange zsetc 0 -1 withscores] assert_equal {a 1 b 1 c 2 d 3} [r zrange zsetc{t} 0 -1 withscores]
} }
test "ZUNION/ZINTER with AGGREGATE MIN - $encoding" { test "ZUNION/ZINTER with AGGREGATE MIN - $encoding" {
assert_equal {a 1 b 1 c 2 d 3} [r zunion 2 zseta zsetb aggregate min withscores] assert_equal {a 1 b 1 c 2 d 3} [r zunion 2 zseta{t} zsetb{t} aggregate min withscores]
assert_equal {b 1 c 2} [r zinter 2 zseta zsetb aggregate min withscores] assert_equal {b 1 c 2} [r zinter 2 zseta{t} zsetb{t} aggregate min withscores]
} }
test "ZUNIONSTORE with AGGREGATE MAX - $encoding" { test "ZUNIONSTORE with AGGREGATE MAX - $encoding" {
assert_equal 4 [r zunionstore zsetc 2 zseta zsetb aggregate max] assert_equal 4 [r zunionstore zsetc{t} 2 zseta{t} zsetb{t} aggregate max]
assert_equal {a 1 b 2 c 3 d 3} [r zrange zsetc 0 -1 withscores] assert_equal {a 1 b 2 c 3 d 3} [r zrange zsetc{t} 0 -1 withscores]
} }
test "ZUNION/ZINTER with AGGREGATE MAX - $encoding" { test "ZUNION/ZINTER with AGGREGATE MAX - $encoding" {
assert_equal {a 1 b 2 c 3 d 3} [r zunion 2 zseta zsetb aggregate max withscores] assert_equal {a 1 b 2 c 3 d 3} [r zunion 2 zseta{t} zsetb{t} aggregate max withscores]
assert_equal {b 2 c 3} [r zinter 2 zseta zsetb aggregate max withscores] assert_equal {b 2 c 3} [r zinter 2 zseta{t} zsetb{t} aggregate max withscores]
} }
test "ZINTERSTORE basics - $encoding" { test "ZINTERSTORE basics - $encoding" {
assert_equal 2 [r zinterstore zsetc 2 zseta zsetb] assert_equal 2 [r zinterstore zsetc{t} 2 zseta{t} zsetb{t}]
assert_equal {b 3 c 5} [r zrange zsetc 0 -1 withscores] assert_equal {b 3 c 5} [r zrange zsetc{t} 0 -1 withscores]
} }
test "ZINTER basics - $encoding" { test "ZINTER basics - $encoding" {
assert_equal {b 3 c 5} [r zinter 2 zseta zsetb withscores] assert_equal {b 3 c 5} [r zinter 2 zseta{t} zsetb{t} withscores]
} }
test "ZINTER RESP3 - $encoding" { test "ZINTER RESP3 - $encoding" {
r hello 3 r hello 3
assert_equal {{b 3.0} {c 5.0}} [r zinter 2 zseta zsetb withscores] assert_equal {{b 3.0} {c 5.0}} [r zinter 2 zseta{t} zsetb{t} withscores]
r hello 2
} }
r hello 2
test "ZINTERSTORE with weights - $encoding" { test "ZINTERSTORE with weights - $encoding" {
assert_equal 2 [r zinterstore zsetc 2 zseta zsetb weights 2 3] assert_equal 2 [r zinterstore zsetc{t} 2 zseta{t} zsetb{t} weights 2 3]
assert_equal {b 7 c 12} [r zrange zsetc 0 -1 withscores] assert_equal {b 7 c 12} [r zrange zsetc{t} 0 -1 withscores]
} }
test "ZINTER with weights - $encoding" { test "ZINTER with weights - $encoding" {
assert_equal {b 7 c 12} [r zinter 2 zseta zsetb weights 2 3 withscores] assert_equal {b 7 c 12} [r zinter 2 zseta{t} zsetb{t} weights 2 3 withscores]
} }
test "ZINTERSTORE with a regular set and weights - $encoding" { test "ZINTERSTORE with a regular set and weights - $encoding" {
r del seta r del seta{t}
r sadd seta a r sadd seta{t} a
r sadd seta b r sadd seta{t} b
r sadd seta c r sadd seta{t} c
assert_equal 2 [r zinterstore zsetc 2 seta zsetb weights 2 3] assert_equal 2 [r zinterstore zsetc{t} 2 seta{t} zsetb{t} weights 2 3]
assert_equal {b 5 c 8} [r zrange zsetc 0 -1 withscores] assert_equal {b 5 c 8} [r zrange zsetc{t} 0 -1 withscores]
} }
test "ZINTERSTORE with AGGREGATE MIN - $encoding" { test "ZINTERSTORE with AGGREGATE MIN - $encoding" {
assert_equal 2 [r zinterstore zsetc 2 zseta zsetb aggregate min] assert_equal 2 [r zinterstore zsetc{t} 2 zseta{t} zsetb{t} aggregate min]
assert_equal {b 1 c 2} [r zrange zsetc 0 -1 withscores] assert_equal {b 1 c 2} [r zrange zsetc{t} 0 -1 withscores]
} }
test "ZINTERSTORE with AGGREGATE MAX - $encoding" { test "ZINTERSTORE with AGGREGATE MAX - $encoding" {
assert_equal 2 [r zinterstore zsetc 2 zseta zsetb aggregate max] assert_equal 2 [r zinterstore zsetc{t} 2 zseta{t} zsetb{t} aggregate max]
assert_equal {b 2 c 3} [r zrange zsetc 0 -1 withscores] assert_equal {b 2 c 3} [r zrange zsetc{t} 0 -1 withscores]
} }
foreach cmd {ZUNIONSTORE ZINTERSTORE} { foreach cmd {ZUNIONSTORE ZINTERSTORE} {
test "$cmd with +inf/-inf scores - $encoding" { test "$cmd with +inf/-inf scores - $encoding" {
r del zsetinf1 zsetinf2 r del zsetinf1{t} zsetinf2{t}
r zadd zsetinf1 +inf key r zadd zsetinf1{t} +inf key
r zadd zsetinf2 +inf key r zadd zsetinf2{t} +inf key
r $cmd zsetinf3 2 zsetinf1 zsetinf2 r $cmd zsetinf3{t} 2 zsetinf1{t} zsetinf2{t}
assert_equal inf [r zscore zsetinf3 key] assert_equal inf [r zscore zsetinf3{t} key]
r zadd zsetinf1 -inf key r zadd zsetinf1{t} -inf key
r zadd zsetinf2 +inf key r zadd zsetinf2{t} +inf key
r $cmd zsetinf3 2 zsetinf1 zsetinf2 r $cmd zsetinf3{t} 2 zsetinf1{t} zsetinf2{t}
assert_equal 0 [r zscore zsetinf3 key] assert_equal 0 [r zscore zsetinf3{t} key]
r zadd zsetinf1 +inf key r zadd zsetinf1{t} +inf key
r zadd zsetinf2 -inf key r zadd zsetinf2{t} -inf key
r $cmd zsetinf3 2 zsetinf1 zsetinf2 r $cmd zsetinf3{t} 2 zsetinf1{t} zsetinf2{t}
assert_equal 0 [r zscore zsetinf3 key] assert_equal 0 [r zscore zsetinf3{t} key]
r zadd zsetinf1 -inf key r zadd zsetinf1{t} -inf key
r zadd zsetinf2 -inf key r zadd zsetinf2{t} -inf key
r $cmd zsetinf3 2 zsetinf1 zsetinf2 r $cmd zsetinf3{t} 2 zsetinf1{t} zsetinf2{t}
assert_equal -inf [r zscore zsetinf3 key] assert_equal -inf [r zscore zsetinf3{t} key]
} }
test "$cmd with NaN weights - $encoding" { test "$cmd with NaN weights - $encoding" {
r del zsetinf1 zsetinf2 r del zsetinf1{t} zsetinf2{t}
r zadd zsetinf1 1.0 key r zadd zsetinf1{t} 1.0 key
r zadd zsetinf2 1.0 key r zadd zsetinf2{t} 1.0 key
assert_error "*weight*not*float*" { assert_error "*weight*not*float*" {
r $cmd zsetinf3 2 zsetinf1 zsetinf2 weights nan nan r $cmd zsetinf3{t} 2 zsetinf1{t} zsetinf2{t} weights nan nan
} }
} }
} }
test "ZDIFFSTORE basics - $encoding" { test "ZDIFFSTORE basics - $encoding" {
assert_equal 1 [r zdiffstore zsetc 2 zseta zsetb] assert_equal 1 [r zdiffstore zsetc{t} 2 zseta{t} zsetb{t}]
-assert_equal {a 1} [r zrange zsetc 0 -1 withscores]
+assert_equal {a 1} [r zrange zsetc{t} 0 -1 withscores]
 }
 test "ZDIFF basics - $encoding" {
-assert_equal {a 1} [r zdiff 2 zseta zsetb withscores]
+assert_equal {a 1} [r zdiff 2 zseta{t} zsetb{t} withscores]
 }
 test "ZDIFFSTORE with a regular set - $encoding" {
-r del seta
-r sadd seta a
-r sadd seta b
-r sadd seta c
-assert_equal 1 [r zdiffstore zsetc 2 seta zsetb]
-assert_equal {a 1} [r zrange zsetc 0 -1 withscores]
+r del seta{t}
+r sadd seta{t} a
+r sadd seta{t} b
+r sadd seta{t} c
+assert_equal 1 [r zdiffstore zsetc{t} 2 seta{t} zsetb{t}]
+assert_equal {a 1} [r zrange zsetc{t} 0 -1 withscores]
 }
 test "ZDIFF subtracting set from itself - $encoding" {
-assert_equal 0 [r zdiffstore zsetc 2 zseta zseta]
-assert_equal {} [r zrange zsetc 0 -1 withscores]
+assert_equal 0 [r zdiffstore zsetc{t} 2 zseta{t} zseta{t}]
+assert_equal {} [r zrange zsetc{t} 0 -1 withscores]
 }
 test "ZDIFF algorithm 1 - $encoding" {
-r del zseta zsetb zsetc
-r zadd zseta 1 a
-r zadd zseta 2 b
-r zadd zseta 3 c
-r zadd zsetb 1 b
-r zadd zsetb 2 c
-r zadd zsetb 3 d
-assert_equal 1 [r zdiffstore zsetc 2 zseta zsetb]
-assert_equal {a 1} [r zrange zsetc 0 -1 withscores]
+r del zseta{t} zsetb{t} zsetc{t}
+r zadd zseta{t} 1 a
+r zadd zseta{t} 2 b
+r zadd zseta{t} 3 c
+r zadd zsetb{t} 1 b
+r zadd zsetb{t} 2 c
+r zadd zsetb{t} 3 d
+assert_equal 1 [r zdiffstore zsetc{t} 2 zseta{t} zsetb{t}]
+assert_equal {a 1} [r zrange zsetc{t} 0 -1 withscores]
 }
 test "ZDIFF algorithm 2 - $encoding" {
-r del zseta zsetb zsetc zsetd zsete
-r zadd zseta 1 a
-r zadd zseta 2 b
-r zadd zseta 3 c
-r zadd zseta 5 e
-r zadd zsetb 1 b
-r zadd zsetc 1 c
-r zadd zsetd 1 d
-assert_equal 2 [r zdiffstore zsete 4 zseta zsetb zsetc zsetd]
-assert_equal {a 1 e 5} [r zrange zsete 0 -1 withscores]
+r del zseta{t} zsetb{t} zsetc{t} zsetd{t} zsete{t}
+r zadd zseta{t} 1 a
+r zadd zseta{t} 2 b
+r zadd zseta{t} 3 c
+r zadd zseta{t} 5 e
+r zadd zsetb{t} 1 b
+r zadd zsetc{t} 1 c
+r zadd zsetd{t} 1 d
+assert_equal 2 [r zdiffstore zsete{t} 4 zseta{t} zsetb{t} zsetc{t} zsetd{t}]
+assert_equal {a 1 e 5} [r zrange zsete{t} 0 -1 withscores]
 }
 test "ZDIFF fuzzing - $encoding" {
@@ -873,11 +873,11 @@ start_server {tags {"zset"}} {
 set num_sets [expr {[randomInt 10]+1}]
 for {set i 0} {$i < $num_sets} {incr i} {
 set num_elements [randomInt 100]
-r del zset_$i
-lappend args zset_$i
+r del zset_$i{t}
+lappend args zset_$i{t}
 while {$num_elements} {
 set ele [randomValue]
-r zadd zset_$i [randomInt 100] $ele
+r zadd zset_$i{t} [randomInt 100] $ele
 if {$i == 0} {
 set s($ele) x
 } else {
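The change repeated throughout these hunks, suffixing every key touched by a multi-key command with the `{t}` hash tag, is what makes the tests runnable against a cluster-mode external server. Redis Cluster assigns each key to one of 16384 slots via CRC16, and a multi-key command fails with `CROSSSLOT` unless all its keys map to the same slot; when a key contains a non-empty `{...}` section, only that substring is hashed. A minimal sketch of the slot computation (the helper names here are illustrative, not part of the test suite):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of 16384 cluster slots. If the key contains a
    non-empty {tag}, only the tag is hashed, so keys sharing a tag
    always land in the same slot."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # non-empty tag found
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Standard CRC16/XMODEM check value.
assert crc16_xmodem(b"123456789") == 0x31C3
# All keys tagged {t} share a slot, so ZDIFFSTORE/ZUNIONSTORE etc. succeed.
assert hash_slot("zseta{t}") == hash_slot("zsetb{t}") == hash_slot("t")
```

Without the tags, `zseta` and `zsetb` would usually hash to different slots, which is exactly why the old tests broke on cluster deployments.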
@@ -906,7 +906,10 @@ start_server {tags {"zset"}} {
 }
 test "ZPOP with count - $encoding" {
-r del z1 z2 z3 foo
+r del z1
+r del z2
+r del z3
+r del foo
 r set foo bar
 assert_equal {} [r zpopmin z1 2]
 assert_error "*WRONGTYPE*" {r zpopmin foo 2}
@@ -930,34 +933,34 @@ start_server {tags {"zset"}} {
 test "BZPOP with multiple existing sorted sets - $encoding" {
 set rd [redis_deferring_client]
-create_zset z1 {0 a 1 b 2 c}
-create_zset z2 {3 d 4 e 5 f}
-$rd bzpopmin z1 z2 5
-assert_equal {z1 a 0} [$rd read]
-$rd bzpopmax z1 z2 5
-assert_equal {z1 c 2} [$rd read]
-assert_equal 1 [r zcard z1]
-assert_equal 3 [r zcard z2]
-$rd bzpopmax z2 z1 5
-assert_equal {z2 f 5} [$rd read]
-$rd bzpopmin z2 z1 5
-assert_equal {z2 d 3} [$rd read]
-assert_equal 1 [r zcard z1]
-assert_equal 1 [r zcard z2]
+create_zset z1{t} {0 a 1 b 2 c}
+create_zset z2{t} {3 d 4 e 5 f}
+$rd bzpopmin z1{t} z2{t} 5
+assert_equal {z1{t} a 0} [$rd read]
+$rd bzpopmax z1{t} z2{t} 5
+assert_equal {z1{t} c 2} [$rd read]
+assert_equal 1 [r zcard z1{t}]
+assert_equal 3 [r zcard z2{t}]
+$rd bzpopmax z2{t} z1{t} 5
+assert_equal {z2{t} f 5} [$rd read]
+$rd bzpopmin z2{t} z1{t} 5
+assert_equal {z2{t} d 3} [$rd read]
+assert_equal 1 [r zcard z1{t}]
+assert_equal 1 [r zcard z2{t}]
 }
 test "BZPOP second sorted set has members - $encoding" {
 set rd [redis_deferring_client]
-r del z1
-create_zset z2 {3 d 4 e 5 f}
-$rd bzpopmax z1 z2 5
-assert_equal {z2 f 5} [$rd read]
-$rd bzpopmin z2 z1 5
-assert_equal {z2 d 3} [$rd read]
-assert_equal 0 [r zcard z1]
-assert_equal 1 [r zcard z2]
+r del z1{t}
+create_zset z2{t} {3 d 4 e 5 f}
+$rd bzpopmax z1{t} z2{t} 5
+assert_equal {z2{t} f 5} [$rd read]
+$rd bzpopmin z2{t} z1{t} 5
+assert_equal {z2{t} d 3} [$rd read]
+assert_equal 0 [r zcard z1{t}]
+assert_equal 1 [r zcard z2{t}]
 }
 r config set zset-max-ziplist-entries $original_max_entries
@@ -968,52 +971,52 @@ start_server {tags {"zset"}} {
 basics skiplist
 test {ZINTERSTORE regression with two sets, intset+hashtable} {
-r del seta setb setc
-r sadd set1 a
-r sadd set2 10
-r zinterstore set3 2 set1 set2
+r del seta{t} setb{t} setc{t}
+r sadd set1{t} a
+r sadd set2{t} 10
+r zinterstore set3{t} 2 set1{t} set2{t}
 } {0}
 test {ZUNIONSTORE regression, should not create NaN in scores} {
-r zadd z -inf neginf
-r zunionstore out 1 z weights 0
-r zrange out 0 -1 withscores
+r zadd z{t} -inf neginf
+r zunionstore out{t} 1 z{t} weights 0
+r zrange out{t} 0 -1 withscores
 } {neginf 0}
 test {ZINTERSTORE #516 regression, mixed sets and ziplist zsets} {
-r sadd one 100 101 102 103
-r sadd two 100 200 201 202
-r zadd three 1 500 1 501 1 502 1 503 1 100
-r zinterstore to_here 3 one two three WEIGHTS 0 0 1
-r zrange to_here 0 -1
+r sadd one{t} 100 101 102 103
+r sadd two{t} 100 200 201 202
+r zadd three{t} 1 500 1 501 1 502 1 503 1 100
+r zinterstore to_here{t} 3 one{t} two{t} three{t} WEIGHTS 0 0 1
+r zrange to_here{t} 0 -1
 } {100}
 test {ZUNIONSTORE result is sorted} {
 # Create two sets with common and not common elements, perform
 # the UNION, check that elements are still sorted.
-r del one two dest
-set cmd1 [list r zadd one]
-set cmd2 [list r zadd two]
+r del one{t} two{t} dest{t}
+set cmd1 [list r zadd one{t}]
+set cmd2 [list r zadd two{t}]
 for {set j 0} {$j < 1000} {incr j} {
 lappend cmd1 [expr rand()] [randomInt 1000]
 lappend cmd2 [expr rand()] [randomInt 1000]
 }
 {*}$cmd1
 {*}$cmd2
-assert {[r zcard one] > 100}
-assert {[r zcard two] > 100}
-r zunionstore dest 2 one two
+assert {[r zcard one{t}] > 100}
+assert {[r zcard two{t}] > 100}
+r zunionstore dest{t} 2 one{t} two{t}
 set oldscore 0
-foreach {ele score} [r zrange dest 0 -1 withscores] {
+foreach {ele score} [r zrange dest{t} 0 -1 withscores] {
 assert {$score >= $oldscore}
 set oldscore $score
 }
 }
 test "ZUNIONSTORE/ZINTERSTORE/ZDIFFSTORE error if using WITHSCORES " {
-assert_error "*ERR*syntax*" {r zunionstore foo 2 zsetd zsetf withscores}
-assert_error "*ERR*syntax*" {r zinterstore foo 2 zsetd zsetf withscores}
-assert_error "*ERR*syntax*" {r zdiffstore foo 2 zsetd zsetf withscores}
+assert_error "*ERR*syntax*" {r zunionstore foo{t} 2 zsetd{t} zsetf{t} withscores}
+assert_error "*ERR*syntax*" {r zinterstore foo{t} 2 zsetd{t} zsetf{t} withscores}
+assert_error "*ERR*syntax*" {r zdiffstore foo{t} 2 zsetd{t} zsetf{t} withscores}
 }
 test {ZMSCORE retrieve} {
@@ -1119,7 +1122,7 @@ start_server {tags {"zset"}} {
 for {set i 0} {$i < $elements} {incr i} {
 assert_equal [lindex $aux $i] [r zscore zscoretest $i]
 }
-}
+} {} {needs:debug}
 test "ZSET sorting stresser - $encoding" {
 set delta 0
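The `needs:debug` tag added in the hunk above feeds the unified tag-based selection the commit message describes: tests that depend on the `DEBUG` command are skipped automatically when the target server cannot or should not run it. As a simplified illustration of how such filtering behaves (this is a sketch in Python, not the suite's actual Tcl implementation, and `should_run` is a hypothetical helper):

```python
def should_run(test_tags, denytags, allowtags=None):
    """A test is skipped if it carries any denied tag, or if an
    allow-list is given and the test matches none of its tags."""
    test_tags = set(test_tags)
    if test_tags & set(denytags):
        return False
    if allowtags is not None and not (test_tags & set(allowtags)):
        return False
    return True

# Running against an external server: deny DEBUG-dependent tests and
# tests explicitly marked incompatible with external servers.
deny = {"needs:debug", "external:skip"}
assert not should_run({"zset", "needs:debug"}, deny)          # skipped
assert not should_run({"wait", "network", "external:skip"}, deny)
assert should_run({"zset"}, deny)                             # runs
```

This is why the changes in this diff are largely additive tags rather than deleted tests: the same files serve both the managed-server and external-server configurations.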
@@ -1318,16 +1321,16 @@ start_server {tags {"zset"}} {
 test "ZREMRANGEBYLEX fuzzy test, 100 ranges in $elements element sorted set - $encoding" {
 set lexset {}
-r del zset zsetcopy
+r del zset{t} zsetcopy{t}
 for {set j 0} {$j < $elements} {incr j} {
 set e [randstring 0 30 alpha]
 lappend lexset $e
-r zadd zset 0 $e
+r zadd zset{t} 0 $e
 }
 set lexset [lsort -unique $lexset]
 for {set j 0} {$j < 100} {incr j} {
 # Copy...
-r zunionstore zsetcopy 1 zset
+r zunionstore zsetcopy{t} 1 zset{t}
 set lexsetcopy $lexset
 set min [randstring 0 30 alpha]
@@ -1338,13 +1341,13 @@ start_server {tags {"zset"}} {
 if {$maxinc} {set cmax "\[$max"} else {set cmax "($max"}
 # Make sure data is the same in both sides
-assert {[r zrange zset 0 -1] eq $lexset}
+assert {[r zrange zset{t} 0 -1] eq $lexset}
 # Get the range we are going to remove
-set torem [r zrangebylex zset $cmin $cmax]
-set toremlen [r zlexcount zset $cmin $cmax]
-r zremrangebylex zsetcopy $cmin $cmax
-set output [r zrange zsetcopy 0 -1]
+set torem [r zrangebylex zset{t} $cmin $cmax]
+set toremlen [r zlexcount zset{t} $cmin $cmax]
+r zremrangebylex zsetcopy{t} $cmin $cmax
+set output [r zrange zsetcopy{t} 0 -1]
 # Remove the range with Tcl from the original list
 if {$toremlen} {
@@ -1434,23 +1437,23 @@ start_server {tags {"zset"}} {
 test "BZPOPMIN with same key multiple times should work" {
 set rd [redis_deferring_client]
-r del z1 z2
+r del z1{t} z2{t}
 # Data arriving after the BZPOPMIN.
-$rd bzpopmin z1 z2 z2 z1 0
-r zadd z1 0 a
-assert_equal [$rd read] {z1 a 0}
-$rd bzpopmin z1 z2 z2 z1 0
-r zadd z2 1 b
-assert_equal [$rd read] {z2 b 1}
+$rd bzpopmin z1{t} z2{t} z2{t} z1{t} 0
+r zadd z1{t} 0 a
+assert_equal [$rd read] {z1{t} a 0}
+$rd bzpopmin z1{t} z2{t} z2{t} z1{t} 0
+r zadd z2{t} 1 b
+assert_equal [$rd read] {z2{t} b 1}
 # Data already there.
-r zadd z1 0 a
-r zadd z2 1 b
-$rd bzpopmin z1 z2 z2 z1 0
-assert_equal [$rd read] {z1 a 0}
-$rd bzpopmin z1 z2 z2 z1 0
-assert_equal [$rd read] {z2 b 1}
+r zadd z1{t} 0 a
+r zadd z2{t} 1 b
+$rd bzpopmin z1{t} z2{t} z2{t} z1{t} 0
+assert_equal [$rd read] {z1{t} a 0}
+$rd bzpopmin z1{t} z2{t} z2{t} z1{t} 0
+assert_equal [$rd read] {z2{t} b 1}
 }
 test "MULTI/EXEC is isolated from the point of view of BZPOPMIN" {
@@ -1522,89 +1525,89 @@ start_server {tags {"zset"}} {
 test {ZRANGESTORE basic} {
 r flushall
-r zadd z1 1 a 2 b 3 c 4 d
-set res [r zrangestore z2 z1 0 -1]
+r zadd z1{t} 1 a 2 b 3 c 4 d
+set res [r zrangestore z2{t} z1{t} 0 -1]
 assert_equal $res 4
-r zrange z2 0 -1 withscores
+r zrange z2{t} 0 -1 withscores
 } {a 1 b 2 c 3 d 4}
 test {ZRANGESTORE RESP3} {
 r hello 3
-r zrange z2 0 -1 withscores
-} {{a 1.0} {b 2.0} {c 3.0} {d 4.0}}
-r hello 2
+assert_equal [r zrange z2{t} 0 -1 withscores] {{a 1.0} {b 2.0} {c 3.0} {d 4.0}}
+r hello 2
+}
 test {ZRANGESTORE range} {
-set res [r zrangestore z2 z1 1 2]
+set res [r zrangestore z2{t} z1{t} 1 2]
 assert_equal $res 2
-r zrange z2 0 -1 withscores
+r zrange z2{t} 0 -1 withscores
 } {b 2 c 3}
 test {ZRANGESTORE BYLEX} {
-set res [r zrangestore z2 z1 \[b \[c BYLEX]
+set res [r zrangestore z2{t} z1{t} \[b \[c BYLEX]
 assert_equal $res 2
-r zrange z2 0 -1 withscores
+r zrange z2{t} 0 -1 withscores
 } {b 2 c 3}
 test {ZRANGESTORE BYSCORE} {
-set res [r zrangestore z2 z1 1 2 BYSCORE]
+set res [r zrangestore z2{t} z1{t} 1 2 BYSCORE]
 assert_equal $res 2
-r zrange z2 0 -1 withscores
+r zrange z2{t} 0 -1 withscores
 } {a 1 b 2}
 test {ZRANGESTORE BYSCORE LIMIT} {
-set res [r zrangestore z2 z1 0 5 BYSCORE LIMIT 0 2]
+set res [r zrangestore z2{t} z1{t} 0 5 BYSCORE LIMIT 0 2]
 assert_equal $res 2
-r zrange z2 0 -1 withscores
+r zrange z2{t} 0 -1 withscores
 } {a 1 b 2}
 test {ZRANGESTORE BYSCORE REV LIMIT} {
-set res [r zrangestore z2 z1 5 0 BYSCORE REV LIMIT 0 2]
+set res [r zrangestore z2{t} z1{t} 5 0 BYSCORE REV LIMIT 0 2]
 assert_equal $res 2
-r zrange z2 0 -1 withscores
+r zrange z2{t} 0 -1 withscores
 } {c 3 d 4}
 test {ZRANGE BYSCORE REV LIMIT} {
-r zrange z1 5 0 BYSCORE REV LIMIT 0 2 WITHSCORES
+r zrange z1{t} 5 0 BYSCORE REV LIMIT 0 2 WITHSCORES
 } {d 4 c 3}
 test {ZRANGESTORE - empty range} {
-set res [r zrangestore z2 z1 5 6]
+set res [r zrangestore z2{t} z1{t} 5 6]
 assert_equal $res 0
-r exists z2
+r exists z2{t}
 } {0}
 test {ZRANGESTORE BYLEX - empty range} {
-set res [r zrangestore z2 z1 \[f \[g BYLEX]
+set res [r zrangestore z2{t} z1{t} \[f \[g BYLEX]
 assert_equal $res 0
-r exists z2
+r exists z2{t}
 } {0}
 test {ZRANGESTORE BYSCORE - empty range} {
-set res [r zrangestore z2 z1 5 6 BYSCORE]
+set res [r zrangestore z2{t} z1{t} 5 6 BYSCORE]
 assert_equal $res 0
-r exists z2
+r exists z2{t}
 } {0}
 test {ZRANGE BYLEX} {
-r zrange z1 \[b \[c BYLEX
+r zrange z1{t} \[b \[c BYLEX
 } {b c}
 test {ZRANGESTORE invalid syntax} {
-catch {r zrangestore z2 z1 0 -1 limit 1 2} err
+catch {r zrangestore z2{t} z1{t} 0 -1 limit 1 2} err
 assert_match "*syntax*" $err
-catch {r zrangestore z2 z1 0 -1 WITHSCORES} err
+catch {r zrangestore z2{t} z1{t} 0 -1 WITHSCORES} err
 assert_match "*syntax*" $err
 }
 test {ZRANGE invalid syntax} {
-catch {r zrange z1 0 -1 limit 1 2} err
+catch {r zrange z1{t} 0 -1 limit 1 2} err
 assert_match "*syntax*" $err
-catch {r zrange z1 0 -1 BYLEX WITHSCORES} err
+catch {r zrange z1{t} 0 -1 BYLEX WITHSCORES} err
 assert_match "*syntax*" $err
-catch {r zrevrange z1 0 -1 BYSCORE} err
+catch {r zrevrange z1{t} 0 -1 BYSCORE} err
 assert_match "*syntax*" $err
-catch {r zrangebyscore z1 0 -1 REV} err
+catch {r zrangebyscore z1{t} 0 -1 REV} err
 assert_match "*syntax*" $err
 }
@@ -1643,8 +1646,8 @@ start_server {tags {"zset"}} {
 set res [r zrandmember myzset 3]
 assert_equal [llength $res] 3
 assert_equal [llength [lindex $res 1]] 1
+r hello 2
 }
-r hello 2
 test "ZRANDMEMBER count of 0 is handled correctly" {
 r zrandmember myzset 0
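The relocation of `r hello 2` into the test body in the hunk above reflects the cleanup called out in the commit message: teardown that resets shared state (here, the connection's protocol version after `r hello 3`) should belong to the test that changed it, rather than dangling outside any test. The pattern is essentially a try/finally around the test body; a hedged Python sketch of the idea (`FakeClient` and `run_resp3_test` are illustrative stand-ins, not suite APIs):

```python
class FakeClient:
    """Stand-in for the suite's Redis client; tracks the negotiated
    protocol version the way HELLO does on a real connection."""
    def __init__(self):
        self.proto = 2

    def hello(self, version):
        self.proto = version

def run_resp3_test(client, body):
    """Switch the connection to RESP3 for the test body and always
    restore RESP2, so a failing test cannot leak protocol state into
    later tests on the same connection."""
    client.hello(3)
    try:
        body(client)
    finally:
        client.hello(2)

def failing_body(client):
    assert client.proto == 3
    raise RuntimeError("simulated test failure")

c = FakeClient()
try:
    run_resp3_test(c, failing_body)
except RuntimeError:
    pass
assert c.proto == 2  # protocol restored despite the failure
```

With the original placement, a failure inside the RESP3 test would have left the shared connection speaking RESP3 for every subsequent test.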
@@ -1,6 +1,6 @@
 source tests/support/cli.tcl
-start_server {tags {"wait network"}} {
+start_server {tags {"wait network external:skip"}} {
 start_server {} {
 set slave [srv 0 client]
 set slave_host [srv 0 host]