Squash merging 125 typo/grammar/comment/doc PRs (#7773)

List of squashed commits or PRs
===============================

commit 66801ea
Author: hwware <wen.hui.ware@gmail.com>
Date:   Mon Jan 13 00:54:31 2020 -0500

    typo fix in acl.c

commit 46f55db
Author: Itamar Haber <itamar@redislabs.com>
Date:   Sun Sep 6 18:24:11 2020 +0300

    Updates a couple of comments

    Specifically:

    * RM_AutoMemory completed instead of pointing to docs
    * Updated link to custom type doc

commit 61a2aa0
Author: xindoo <xindoo@qq.com>
Date:   Tue Sep 1 19:24:59 2020 +0800

    Correct errors in code comments

commit a5871d1
Author: yz1509 <pro-756@qq.com>
Date:   Tue Sep 1 18:36:06 2020 +0800

    fix typos in module.c

commit 41eede7
Author: bookug <bookug@qq.com>
Date:   Sat Aug 15 01:11:33 2020 +0800

    docs: fix typos in comments

commit c303c84
Author: lazy-snail <ws.niu@outlook.com>
Date:   Fri Aug 7 11:15:44 2020 +0800

    fix spelling in redis.conf

commit 1eb76bf
Author: zhujian <zhujianxyz@gmail.com>
Date:   Thu Aug 6 15:22:10 2020 +0800

    add a missing 'n' in comment

commit 1530ec2
Author: Daniel Dai <764122422@qq.com>
Date:   Mon Jul 27 00:46:35 2020 -0400

    fix spelling in tracking.c

commit e517b31
Author: Hunter-Chen <huntcool001@gmail.com>
Date:   Fri Jul 17 22:33:32 2020 +0800

    Update redis.conf

    Co-authored-by: Itamar Haber <itamar@redislabs.com>

commit c300eff
Author: Hunter-Chen <huntcool001@gmail.com>
Date:   Fri Jul 17 22:33:23 2020 +0800

    Update redis.conf

    Co-authored-by: Itamar Haber <itamar@redislabs.com>

commit 4c058a8
Author: 陈浩鹏 <chenhaopeng@heytea.com>
Date:   Thu Jun 25 19:00:56 2020 +0800

    Grammar fix and clarification

commit 5fcaa81
Author: bodong.ybd <bodong.ybd@alibaba-inc.com>
Date:   Fri Jun 19 10:09:00 2020 +0800

    Fix typos

commit 4caca9a
Author: Pruthvi P <pruthvi@ixigo.com>
Date:   Fri May 22 00:33:22 2020 +0530

    Fix typo eviciton => eviction

commit b2a25f6
Author: Brad Dunbar <dunbarb2@gmail.com>
Date:   Sun May 17 12:39:59 2020 -0400

    Fix a typo.

commit 12842ae
Author: hwware <wen.hui.ware@gmail.com>
Date:   Sun May 3 17:16:59 2020 -0400

    fix spelling in redis conf

commit ddba07c
Author: Chris Lamb <chris@chris-lamb.co.uk>
Date:   Sat May 2 23:25:34 2020 +0100

    Correct a "conflicts" spelling error.

commit 8fc7bf2
Author: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
Date:   Thu Apr 30 10:25:27 2020 +0900

    docs: fix EXPIRE_FAST_CYCLE_DURATION to ACTIVE_EXPIRE_CYCLE_FAST_DURATION

commit 9b2b67a
Author: Brad Dunbar <dunbarb2@gmail.com>
Date:   Fri Apr 24 11:46:22 2020 -0400

    Fix a typo.

commit 0746f10
Author: devilinrust <63737265+devilinrust@users.noreply.github.com>
Date:   Thu Apr 16 00:17:53 2020 +0200

    Fix typos in server.c

commit 92b588d
Author: benjessop12 <56115861+benjessop12@users.noreply.github.com>
Date:   Mon Apr 13 13:43:55 2020 +0100

    Fix spelling mistake in lazyfree.c

commit 1da37aa
Merge: 2d4ba28 af347a8
Author: hwware <wen.hui.ware@gmail.com>
Date:   Thu Mar 5 22:41:31 2020 -0500

    Merge remote-tracking branch 'upstream/unstable' into expiretypofix

commit 2d4ba28
Author: hwware <wen.hui.ware@gmail.com>
Date:   Mon Mar 2 00:09:40 2020 -0500

    fix typo in expire.c

commit 1a746f7
Author: SennoYuki <minakami1yuki@gmail.com>
Date:   Thu Feb 27 16:54:32 2020 +0800

    fix typo

commit 8599b1a
Author: dongheejeong <donghee950403@gmail.com>
Date:   Sun Feb 16 20:31:43 2020 +0000

    Fix typo in server.c

commit f38d4e8
Author: hwware <wen.hui.ware@gmail.com>
Date:   Sun Feb 2 22:58:38 2020 -0500

    fix typo in evict.c

commit fe143fc
Author: Leo Murillo <leonardo.murillo@gmail.com>
Date:   Sun Feb 2 01:57:22 2020 -0600

    Fix a few typos in redis.conf

commit 1ab4d21
Author: viraja1 <anchan.viraj@gmail.com>
Date:   Fri Dec 27 17:15:58 2019 +0530

    Fix typo in Latency API docstring

commit ca1f70e
Author: gosth <danxuedexing@qq.com>
Date:   Wed Dec 18 15:18:02 2019 +0800

    fix typo in sort.c

commit a57c06b
Author: ZYunH <zyunhjob@163.com>
Date:   Mon Dec 16 22:28:46 2019 +0800

    fix-zset-typo

commit b8c92b5
Author: git-hulk <hulk.website@gmail.com>
Date:   Mon Dec 16 15:51:42 2019 +0800

    FIX: typo in cluster.c, onformation->information

commit 9dd981c
Author: wujm2007 <jim.wujm@gmail.com>
Date:   Mon Dec 16 09:37:52 2019 +0800

    Fix typo

commit e132d7a
Author: Sebastien Williams-Wynn <s.williamswynn.mail@gmail.com>
Date:   Fri Nov 15 00:14:07 2019 +0000

    Minor typo change

commit 47f44d5
Author: happynote3966 <01ssrmikururudevice01@gmail.com>
Date:   Mon Nov 11 22:08:48 2019 +0900

    fix comment typo in redis-cli.c

commit b8bdb0d
Author: fulei <fulei@kuaishou.com>
Date:   Wed Oct 16 18:00:17 2019 +0800

    Fix a spelling mistake of comments  in defragDictBucketCallback

commit 0def46a
Author: fulei <fulei@kuaishou.com>
Date:   Wed Oct 16 13:09:27 2019 +0800

    fix some spelling mistakes of comments in defrag.c

commit f3596fd
Author: Phil Rajchgot <tophil@outlook.com>
Date:   Sun Oct 13 02:02:32 2019 -0400

    Typo and grammar fixes

    Redis and its documentation are great -- just wanted to submit a few corrections in the spirit of Hacktoberfest. Thanks for all your work on this project. I use it all the time and it works beautifully.

commit 2b928cd
Author: KangZhiDong <worldkzd@gmail.com>
Date:   Sun Sep 1 07:03:11 2019 +0800

    fix typos

commit 33aea14
Author: Axlgrep <axlgrep@gmail.com>
Date:   Tue Aug 27 11:02:18 2019 +0800

    Fixed eviction spelling issues

commit e282a80
Author: Simen Flatby <simen@oms.no>
Date:   Tue Aug 20 15:25:51 2019 +0200

    Update comments to reflect prop name

    In the comments the prop is referenced as replica-validity-factor,
    but it is really named cluster-replica-validity-factor.

commit 74d1f9a
Author: Jim Green <jimgreen2013@qq.com>
Date:   Tue Aug 20 20:00:31 2019 +0800

    fix comment error, the code is ok

commit eea1407
Author: Liao Tonglang <liaotonglang@gmail.com>
Date:   Fri May 31 10:16:18 2019 +0800

    typo fix

    fix cna't to can't

commit 0da553c
Author: KAWACHI Takashi <tkawachi@gmail.com>
Date:   Wed Jul 17 00:38:16 2019 +0900

    Fix typo

commit 7fc8fb6
Author: Michael Prokop <mika@grml.org>
Date:   Tue May 28 17:58:42 2019 +0200

    Typo fixes

    s/familar/familiar/
    s/compatiblity/compatibility/
    s/ ot / to /
    s/itsef/itself/

commit 5f46c9d
Author: zhumoing <34539422+zhumoing@users.noreply.github.com>
Date:   Tue May 21 21:16:50 2019 +0800

    typo-fixes

    typo-fixes

commit 321dfe1
Author: wxisme <850885154@qq.com>
Date:   Sat Mar 16 15:10:55 2019 +0800

    typo fix

commit b4fb131
Merge: 267e0e6 3df1eb8
Author: Nikitas Bastas <nikitasbst@gmail.com>
Date:   Fri Feb 8 22:55:45 2019 +0200

    Merge branch 'unstable' of antirez/redis into unstable

commit 267e0e6
Author: Nikitas Bastas <nikitasbst@gmail.com>
Date:   Wed Jan 30 21:26:04 2019 +0200

    Minor typo fix

commit 30544e7
Author: inshal96 <39904558+inshal96@users.noreply.github.com>
Date:   Fri Jan 4 16:54:50 2019 +0500

    remove an extra 'a' in the comments

commit 337969d
Author: BrotherGao <yangdongheng11@gmail.com>
Date:   Sat Dec 29 12:37:29 2018 +0800

    fix typo in redis.conf

commit 9f4b121
Merge: 423a030 e504583
Author: BrotherGao <yangdongheng@xiaomi.com>
Date:   Sat Dec 29 11:41:12 2018 +0800

    Merge branch 'unstable' of antirez/redis into unstable

commit 423a030
Merge: 42b02b7 46a51cd
Author: 杨东衡 <yangdongheng@xiaomi.com>
Date:   Tue Dec 4 23:56:11 2018 +0800

    Merge branch 'unstable' of antirez/redis into unstable

commit 42b02b7
Merge: 68c0e6e b8febe6
Author: Dongheng Yang <yangdongheng11@gmail.com>
Date:   Sun Oct 28 15:54:23 2018 +0800

    Merge pull request #1 from antirez/unstable

    update local data

commit 714b589
Author: Christian <crifei93@gmail.com>
Date:   Fri Dec 28 01:17:26 2018 +0100

    fix typo "resulution"

commit e23259d
Author: garenchan <1412950785@qq.com>
Date:   Wed Dec 26 09:58:35 2018 +0800

    fix typo: segfauls -> segfault

commit a9359f8
Author: xjp <jianping_xie@aliyun.com>
Date:   Tue Dec 18 17:31:44 2018 +0800

    Fixed REDISMODULE_H spell bug

commit a12c3e4
Author: jdiaz <jrd.palacios@gmail.com>
Date:   Sat Dec 15 23:39:52 2018 -0600

    Fixes hyperloglog hash function comment block description

commit 770eb11
Author: 林上耀 <1210tom@163.com>
Date:   Sun Nov 25 17:16:10 2018 +0800

    fix typo

commit fd97fbb
Author: Chris Lamb <chris@chris-lamb.co.uk>
Date:   Fri Nov 23 17:14:01 2018 +0100

    Correct "unsupported" typo.

commit a85522d
Author: Jungnam Lee <jungnam.lee@oracle.com>
Date:   Thu Nov 8 23:01:29 2018 +0900

    fix typo in test comments

commit ade8007
Author: Arun Kumar <palerdot@users.noreply.github.com>
Date:   Tue Oct 23 16:56:35 2018 +0530

    Fixed grammatical typo

    Fixed typo for word 'dictionary'

commit 869ee39
Author: Hamid Alaei <hamid.a85@gmail.com>
Date:   Sun Aug 12 16:40:02 2018 +0430

    fix documentations: (ThreadSafeContextStart/Stop -> ThreadSafeContextLock/Unlock), minor typo

commit f89d158
Author: Mayank Jain <mayankjain255@gmail.com>
Date:   Tue Jul 31 23:01:21 2018 +0530

    Updated README.md with some spelling corrections.

    Made correction in spelling of some misspelled words.

commit 892198e
Author: dsomeshwar <someshwar.dhayalan@gmail.com>
Date:   Sat Jul 21 23:23:04 2018 +0530

    typo fix

commit 8a4d780
Author: Itamar Haber <itamar@redislabs.com>
Date:   Mon Apr 30 02:06:52 2018 +0300

    Fixes some typos

commit e3acef6
Author: Noah Rosamilia <ivoahivoah@gmail.com>
Date:   Sat Mar 3 23:41:21 2018 -0500

    Fix typo in /deps/README.md

commit 04442fb
Author: WuYunlong <xzsyeb@126.com>
Date:   Sat Mar 3 10:32:42 2018 +0800

    Fix typo in readSyncBulkPayload() comment.

commit 9f36880
Author: WuYunlong <xzsyeb@126.com>
Date:   Sat Mar 3 10:20:37 2018 +0800

    replication.c comment: run_id -> replid.

commit f866b4a
Author: Francesco 'makevoid' Canessa <makevoid@gmail.com>
Date:   Thu Feb 22 22:01:56 2018 +0000

    fix comment typo in server.c

commit 0ebc69b
Author: 줍 <jubee0124@gmail.com>
Date:   Mon Feb 12 16:38:48 2018 +0900

    Fix typo in redis.conf

    Fix `five behaviors` to `eight behaviors` in [this sentence ](antirez/redis@unstable/redis.conf#L564)

commit b50a620
Author: martinbroadhurst <martinbroadhurst@users.noreply.github.com>
Date:   Thu Dec 28 12:07:30 2017 +0000

    Fix typo in valgrind.sup

commit 7d8f349
Author: Peter Boughton <peter@sorcerersisle.com>
Date:   Mon Nov 27 19:52:19 2017 +0000

    Update CONTRIBUTING; refer doc updates to redis-doc repo.

commit 02dec7e
Author: Klauswk <klauswk1@hotmail.com>
Date:   Tue Oct 24 16:18:38 2017 -0200

    Fix typo in comment

commit e1efbc8
Author: chenshi <baiwfg2@gmail.com>
Date:   Tue Oct 3 18:26:30 2017 +0800

    Correct two spelling errors of comments

commit 93327d8
Author: spacewander <spacewanderlzx@gmail.com>
Date:   Wed Sep 13 16:47:24 2017 +0800

    Update the comment for OBJ_ENCODING_EMBSTR_SIZE_LIMIT's value

    The value of OBJ_ENCODING_EMBSTR_SIZE_LIMIT is 44 now instead of 39.

commit 63d361f
Author: spacewander <spacewanderlzx@gmail.com>
Date:   Tue Sep 12 15:06:42 2017 +0800

    Fix <prevlen> related doc in ziplist.c

    According to the definition of ZIP_BIG_PREVLEN and other related code,
    the guard of single byte <prevlen> should be 254 instead of 255.

commit ebe228d
Author: hanael80 <hanael80@gmail.com>
Date:   Tue Aug 15 09:09:40 2017 +0900

    Fix typo

commit 6b696e6
Author: Matt Robenolt <matt@ydekproductions.com>
Date:   Mon Aug 14 14:50:47 2017 -0700

    Fix typo in LATENCY DOCTOR output

commit a2ec6ae
Author: caosiyang <caosiyang@qiyi.com>
Date:   Tue Aug 15 14:15:16 2017 +0800

    Fix a typo: form => from

commit 3ab7699
Author: caosiyang <caosiyang@qiyi.com>
Date:   Thu Aug 10 18:40:33 2017 +0800

    Fix a typo: replicationFeedSlavesFromMaster() => replicationFeedSlavesFromMasterStream()

commit 72d43ef
Author: caosiyang <caosiyang@qiyi.com>
Date:   Tue Aug 8 15:57:25 2017 +0800

    fix a typo: servewr => server

commit 707c958
Author: Bo Cai <charpty@gmail.com>
Date:   Wed Jul 26 21:49:42 2017 +0800

    redis-cli.c typo: conut -> count.

    Signed-off-by: Bo Cai <charpty@gmail.com>

commit b9385b2
Author: JackDrogon <jack.xsuperman@gmail.com>
Date:   Fri Jun 30 14:22:31 2017 +0800

    Fix some spell problems

commit 20d9230
Author: akosel <aaronjkosel@gmail.com>
Date:   Sun Jun 4 19:35:13 2017 -0500

    Fix typo

commit b167bfc
Author: Krzysiek Witkowicz <krzysiekwitkowicz@gmail.com>
Date:   Mon May 22 21:32:27 2017 +0100

    Fix #4008 small typo in comment

commit 2b78ac8
Author: Jake Clarkson <jacobwclarkson@gmail.com>
Date:   Wed Apr 26 15:49:50 2017 +0100

    Correct typo in tests/unit/hyperloglog.tcl

commit b0f1cdb
Author: Qi Luo <qiluo-msft@users.noreply.github.com>
Date:   Wed Apr 19 14:25:18 2017 -0700

    Fix typo

commit a90b0f9
Author: charsyam <charsyam@naver.com>
Date:   Thu Mar 16 18:19:53 2017 +0900

    fix typos

    fix typos

    fix typos

commit 8430a79
Author: Richard Hart <richardhart92@gmail.com>
Date:   Mon Mar 13 22:17:41 2017 -0400

    Fixed log message typo in listenToPort.

commit 481a1c2
Author: Vinod Kumar <kumar003vinod@gmail.com>
Date:   Sun Jan 15 23:04:51 2017 +0530

    src/db.c: Correct "save" -> "safe" typo

commit 586b4d3
Author: wangshaonan <wshn13@gmail.com>
Date:   Wed Dec 21 20:28:27 2016 +0800

    Fix typo they->the in helloworld.c

commit c1c4b5e
Author: Jenner <hypxm@qq.com>
Date:   Mon Dec 19 16:39:46 2016 +0800

    typo error

commit 1ee1a3f
Author: tielei <43289893@qq.com>
Date:   Mon Jul 18 13:52:25 2016 +0800

    fix some comments

commit 11a41fb
Author: Otto Kekäläinen <otto@seravo.fi>
Date:   Sun Jul 3 10:23:55 2016 +0100

    Fix spelling in documentation and comments

commit 5fb5d82
Author: francischan <f1ancis621@gmail.com>
Date:   Tue Jun 28 00:19:33 2016 +0800

    Fix outdated comments about redis.c file.
    It should now refer to server.c file.

commit 6b254bc
Author: lmatt-bit <lmatt123n@gmail.com>
Date:   Thu Apr 21 21:45:58 2016 +0800

    Refine the comment of dictRehashMilliseconds func

SLAVECONF->REPLCONF in comment - by andyli029

commit ee9869f
Author: clark.kang <charsyam@naver.com>
Date:   Tue Mar 22 11:09:51 2016 +0900

    fix typos

commit f7b3b11
Author: Harisankar H <harisankarh@gmail.com>
Date:   Wed Mar 9 11:49:42 2016 +0530

    Typo correction: "faield" --> "failed"

    Typo correction: "faield" --> "failed"

commit 3fd40fc
Author: Itamar Haber <itamar@redislabs.com>
Date:   Thu Feb 25 10:31:51 2016 +0200

    Fixes a typo in comments

commit 621c160
Author: Prayag Verma <prayag.verma@gmail.com>
Date:   Mon Feb 1 12:36:20 2016 +0530

    Fix typo in Readme.md

    Spelling mistakes -
    `eviciton` > `eviction`
    `familar` > `familiar`

commit d7d07d6
Author: WonCheol Lee <toctoc21c@gmail.com>
Date:   Wed Dec 30 15:11:34 2015 +0900

    Typo fixed

commit a4dade7
Author: Felix Bünemann <buenemann@louis.info>
Date:   Mon Dec 28 11:02:55 2015 +0100

    [ci skip] Improve supervised upstart config docs

    This mentions that "expect stop" is required for supervised upstart
    to work correctly. See http://upstart.ubuntu.com/cookbook/#expect-stop
    for an explanation.

commit d9caba9
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:30:03 2015 +1100

    README: Remove trailing whitespace

commit 72d42e5
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:29:32 2015 +1100

    README: Fix typo. th => the

commit dd6e957
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:29:20 2015 +1100

    README: Fix typo. familar => familiar

commit 3a12b23
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:28:54 2015 +1100

    README: Fix typo. eviciton => eviction

commit 2d1d03b
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:21:45 2015 +1100

    README: Fix typo. sever => server

commit 3973b06
Author: Itamar Haber <itamar@garantiadata.com>
Date:   Sat Dec 19 17:01:20 2015 +0200

    Typo fix

commit 4f2e460
Author: Steve Gao <fu@2token.com>
Date:   Fri Dec 4 10:22:05 2015 +0800

    Update README - fix typos

commit b21667c
Author: binyan <binbin.yan@nokia.com>
Date:   Wed Dec 2 22:48:37 2015 +0800

    delete redundancy color judge in sdscatcolor

commit 88894c7
Author: binyan <binbin.yan@nokia.com>
Date:   Wed Dec 2 22:14:42 2015 +0800

    the example output shoule be HelloWorld

commit 2763470
Author: binyan <binbin.yan@nokia.com>
Date:   Wed Dec 2 17:41:39 2015 +0800

    modify error word keyevente

    Signed-off-by: binyan <binbin.yan@nokia.com>

commit 0847b3d
Author: Bruno Martins <bscmartins@gmail.com>
Date:   Wed Nov 4 11:37:01 2015 +0000

    typo

commit bbb9e9e
Author: dawedawe <dawedawe@gmx.de>
Date:   Fri Mar 27 00:46:41 2015 +0100

    typo: zimap -> zipmap

commit 5ed297e
Author: Axel Advento <badwolf.bloodseeker.rev@gmail.com>
Date:   Tue Mar 3 15:58:29 2015 +0800

    Fix 'salve' typos to 'slave'

commit edec9d6
Author: LudwikJaniuk <ludvig.janiuk@gmail.com>
Date:   Wed Jun 12 14:12:47 2019 +0200

    Update README.md

    Co-Authored-By: Qix <Qix-@users.noreply.github.com>

commit 692a7af
Author: LudwikJaniuk <ludvig.janiuk@gmail.com>
Date:   Tue May 28 14:32:04 2019 +0200

    grammar

commit d962b0a
Author: Nick Frost <nickfrostatx@gmail.com>
Date:   Wed Jul 20 15:17:12 2016 -0700

    Minor grammar fix

commit 24fff01aaccaf5956973ada8c50ceb1462e211c6 (typos)
Author: Chad Miller <chadm@squareup.com>
Date:   Tue Sep 8 13:46:11 2020 -0400

    Fix faulty comment about operation of unlink()

commit 3cd5c1f3326c52aa552ada7ec797c6bb16452355
Author: Kevin <kevin.xgr@gmail.com>
Date:   Wed Nov 20 00:13:50 2019 +0800

    Fix typo in server.c.

From a83af59 Mon Sep 17 00:00:00 2001
From: wuwo <wuwo@wacai.com>
Date: Fri, 17 Mar 2017 20:37:45 +0800
Subject: [PATCH] falure to failure

From c961896 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=B7=A6=E6=87=B6?= <veficos@gmail.com>
Date: Sat, 27 May 2017 15:33:04 +0800
Subject: [PATCH] fix typo

From e600ef2 Mon Sep 17 00:00:00 2001
From: "rui.zou" <rui.zou@yunify.com>
Date: Sat, 30 Sep 2017 12:38:15 +0800
Subject: [PATCH] fix a typo

From c7d07fa Mon Sep 17 00:00:00 2001
From: Alexandre Perrin <alex@kaworu.ch>
Date: Thu, 16 Aug 2018 10:35:31 +0200
Subject: [PATCH] deps README.md typo

From b25cb67 Mon Sep 17 00:00:00 2001
From: Guy Korland <gkorland@gmail.com>
Date: Wed, 26 Sep 2018 10:55:37 +0300
Subject: [PATCH 1/2] fix typos in header

From ad28ca6 Mon Sep 17 00:00:00 2001
From: Guy Korland <gkorland@gmail.com>
Date: Wed, 26 Sep 2018 11:02:36 +0300
Subject: [PATCH 2/2] fix typos

commit 34924cdedd8552466fc22c1168d49236cb7ee915
Author: Adrian Lynch <adi_ady_ade@hotmail.com>
Date:   Sat Apr 4 21:59:15 2015 +0100

    Typos fixed

commit fd2a1e7
Author: Jan <jsteemann@users.noreply.github.com>
Date:   Sat Oct 27 19:13:01 2018 +0200

    Fix typos

    Fix typos

commit e14e47c1a234b53b0e103c5f6a1c61481cbcbb02
Author: Andy Lester <andy@petdance.com>
Date:   Fri Aug 2 22:30:07 2019 -0500

    Fix multiple misspellings of "following"

commit 79b948ce2dac6b453fe80995abbcaac04c213d5a
Author: Andy Lester <andy@petdance.com>
Date:   Fri Aug 2 22:24:28 2019 -0500

    Fix misspelling of create-cluster

commit 1fffde52666dc99ab35efbd31071a4c008cb5a71
Author: Andy Lester <andy@petdance.com>
Date:   Wed Jul 31 17:57:56 2019 -0500

    Fix typos

commit 204c9ba9651e9e05fd73936b452b9a30be456cfe
Author: Xiaobo Zhu <xiaobo.zhu@shopee.com>
Date:   Tue Aug 13 22:19:25 2019 +0800

    fix typos

Squashed commit of the following:

commit 1d9aaf8
Author: danmedani <danmedani@gmail.com>
Date:   Sun Aug 2 11:40:26 2015 -0700

README typo fix.

Squashed commit of the following:

commit 32bfa7c
Author: Erik Dubbelboer <erik@dubbelboer.com>
Date:   Mon Jul 6 21:15:08 2015 +0200

Fixed grammer

Squashed commit of the following:

commit b24f69c
Author: Sisir Koppaka <sisir.koppaka@gmail.com>
Date:   Mon Mar 2 22:38:45 2015 -0500

utils/hashtable/rehashing.c: Fix typos

Squashed commit of the following:

commit 4e04082
Author: Erik Dubbelboer <erik@dubbelboer.com>
Date:   Mon Mar 23 08:22:21 2015 +0000

Small config file documentation improvements

Squashed commit of the following:

commit acb8773
Author: ctd1500 <ctd1500@gmail.com>
Date:   Fri May 8 01:52:48 2015 -0700

Typo and grammar fixes in readme

commit 2eb75b6
Author: ctd1500 <ctd1500@gmail.com>
Date:   Fri May 8 01:36:18 2015 -0700

fixed redis.conf comment

Squashed commit of the following:

commit a8249a2
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Date:   Fri Dec 11 11:39:52 2015 +0530

Revise correction of typos.

Squashed commit of the following:

commit 3c02028
Author: zhaojun11 <zhaojun11@jd.com>
Date:   Wed Jan 17 19:05:28 2018 +0800

Fix typos include two code typos in cluster.c and latency.c

Squashed commit of the following:

commit 9dba47c
Author: q191201771 <191201771@qq.com>
Date:   Sat Jan 4 11:31:04 2020 +0800

fix function listCreate comment in adlist.c

Update src/server.c

commit 2c7c2cb536e78dd211b1ac6f7bda00f0f54faaeb
Author: charpty <charpty@gmail.com>
Date:   Tue May 1 23:16:59 2018 +0800

    server.c typo: modules system dictionary type comment

    Signed-off-by: charpty <charpty@gmail.com>

commit a8395323fb63cb59cb3591cb0f0c8edb7c29a680
Author: Itamar Haber <itamar@redislabs.com>
Date:   Sun May 6 00:25:18 2018 +0300

    Updates test_helper.tcl's help with undocumented options

    Specifically:

    * Host
    * Port
    * Client

commit bde6f9ced15755cd6407b4af7d601b030f36d60b
Author: wxisme <850885154@qq.com>
Date:   Wed Aug 8 15:19:19 2018 +0800

    fix comments in deps files

commit 3172474ba991532ab799ee1873439f3402412331
Author: wxisme <850885154@qq.com>
Date:   Wed Aug 8 14:33:49 2018 +0800

    fix some comments

commit 01b6f2b6858b5cf2ce4ad5092d2c746e755f53f0
Author: Thor Juhasz <thor@juhasz.pro>
Date:   Sun Nov 18 14:37:41 2018 +0100

    Minor fixes to comments

    Found some parts a little unclear on a first read, which prompted me to have a better look at the file and fix some minor things I noticed.
    Fixing minor typos and grammar. There are no changes to configuration options.
    These changes are only meant to help the user better understand the explanations to the various configuration options

CONTRIBUTING

@@ -40,6 +40,10 @@ Redis follows a responsible disclosure process:
 4. We push a fix release and your bug can be posted publicly with credit in
 release notes and the version history (and our thanks!)
+Issues and pull requests for documentation belong on the redis-doc repo:
+https://github.com/redis/redis-doc
 # How to provide a patch for a new feature
 1. If it is a major feature or a semantical change, please don't start coding

README.md

@@ -3,22 +3,22 @@ This README is just a fast *quick start* document. You can find more detailed do
 What is Redis?
 --------------
-Redis is often referred as a *data structures* server. What this means is that Redis provides access to mutable data structures via a set of commands, which are sent using a *server-client* model with TCP sockets and a simple protocol. So different processes can query and modify the same data structures in a shared way.
+Redis is often referred to as a *data structures* server. What this means is that Redis provides access to mutable data structures via a set of commands, which are sent using a *server-client* model with TCP sockets and a simple protocol. So different processes can query and modify the same data structures in a shared way.
 Data structures implemented into Redis have a few special properties:
-* Redis cares to store them on disk, even if they are always served and modified into the server memory. This means that Redis is fast, but that is also non-volatile.
-* Implementation of data structures stress on memory efficiency, so data structures inside Redis will likely use less memory compared to the same data structure modeled using an high level programming language.
-* Redis offers a number of features that are natural to find in a database, like replication, tunable levels of durability, cluster, high availability.
+* Redis cares to store them on disk, even if they are always served and modified into the server memory. This means that Redis is fast, but that it is also non-volatile.
+* The implementation of data structures emphasizes memory efficiency, so data structures inside Redis will likely use less memory compared to the same data structure modelled using a high-level programming language.
+* Redis offers a number of features that are natural to find in a database, like replication, tunable levels of durability, clustering, and high availability.
-Another good example is to think of Redis as a more complex version of memcached, where the operations are not just SETs and GETs, but operations to work with complex data types like Lists, Sets, ordered data structures, and so forth.
+Another good example is to think of Redis as a more complex version of memcached, where the operations are not just SETs and GETs, but operations that work with complex data types like Lists, Sets, ordered data structures, and so forth.
 If you want to know more, this is a list of selected starting points:
 * Introduction to Redis data types. http://redis.io/topics/data-types-intro
 * Try Redis directly inside your browser. http://try.redis.io
 * The full list of Redis commands. http://redis.io/commands
-* There is much more inside the Redis official documentation. http://redis.io/documentation
+* There is much more inside the official Redis documentation. http://redis.io/documentation
 Building Redis
 --------------
@@ -29,7 +29,7 @@ and 64 bit systems.
 It may compile on Solaris derived systems (for instance SmartOS) but our
 support for this platform is *best effort* and Redis is not guaranteed to
-work as well as in Linux, OSX, and \*BSD there.
+work as well as in Linux, OSX, and \*BSD.
 It is as simple as:
@@ -63,7 +63,7 @@ installed):
 Fixing build problems with dependencies or cached build options
 ---------
-Redis has some dependencies which are included into the `deps` directory.
+Redis has some dependencies which are included in the `deps` directory.
 `make` does not automatically rebuild dependencies even if something in
 the source code of dependencies changes.
@@ -90,7 +90,7 @@ with a 64 bit target, or the other way around, you need to perform a
 In case of build errors when trying to build a 32 bit binary of Redis, try
 the following steps:
-* Install the packages libc6-dev-i386 (also try g++-multilib).
+* Install the package libc6-dev-i386 (also try g++-multilib).
 * Try using the following command line instead of `make 32bit`:
 `make CFLAGS="-m32 -march=native" LDFLAGS="-m32"`
@@ -126,15 +126,15 @@ To build with support for the processor's internal instruction clock, use:
 Verbose build
 -------------
-Redis will build with a user friendly colorized output by default.
-If you want to see a more verbose output use the following:
+Redis will build with a user-friendly colorized output by default.
+If you want to see a more verbose output, use the following:
 % make V=1
 Running Redis
 -------------
-To run Redis with the default configuration just type:
+To run Redis with the default configuration, just type:
 % cd src
 % ./redis-server
@@ -185,7 +185,7 @@ You can find the list of all the available commands at http://redis.io/commands.
 Installing Redis
 -----------------
-In order to install Redis binaries into /usr/local/bin just use:
+In order to install Redis binaries into /usr/local/bin, just use:
 % make install
@@ -194,8 +194,8 @@ different destination.
 Make install will just install binaries in your system, but will not configure
 init scripts and configuration files in the appropriate place. This is not
-needed if you want just to play a bit with Redis, but if you are installing
-it the proper way for a production system, we have a script doing this
+needed if you just want to play a bit with Redis, but if you are installing
+it the proper way for a production system, we have a script that does this
 for Ubuntu and Debian systems:
 % cd utils
@@ -213,7 +213,7 @@ You'll be able to stop and start Redis using the script named
 Code contributions
 -----------------
-Note: by contributing code to the Redis project in any form, including sending
+Note: By contributing code to the Redis project in any form, including sending
 a pull request via Github, a code fragment or patch via private email or
 public discussion groups, you agree to release your code under the terms
 of the BSD license that you can find in the [COPYING][1] file included in the Redis
@@ -263,7 +263,7 @@ of complexity incrementally.
 Note: lately Redis was refactored quite a bit. Function names and file
 names have been changed, so you may find that this documentation reflects the
-`unstable` branch more closely. For instance in Redis 3.0 the `server.c`
+`unstable` branch more closely. For instance, in Redis 3.0 the `server.c`
 and `server.h` files were named `redis.c` and `redis.h`. However the overall
 structure is the same. Keep in mind that all the new developments and pull
 requests should be performed against the `unstable` branch.
@@ -308,7 +308,7 @@ The client structure defines a *connected client*:
 * The `fd` field is the client socket file descriptor.
 * `argc` and `argv` are populated with the command the client is executing, so that functions implementing a given Redis command can read the arguments.
 * `querybuf` accumulates the requests from the client, which are parsed by the Redis server according to the Redis protocol and executed by calling the implementations of the commands the client is executing.
-* `reply` and `buf` are dynamic and static buffers that accumulate the replies the server sends to the client. These buffers are incrementally written to the socket as soon as the file descriptor is writable.
+* `reply` and `buf` are dynamic and static buffers that accumulate the replies the server sends to the client. These buffers are incrementally written to the socket as soon as the file descriptor is writeable.
 As you can see in the client structure above, arguments in a command
 are described as `robj` structures. The following is the full `robj`
@@ -341,13 +341,13 @@ This is the entry point of the Redis server, where the `main()` function
 is defined. The following are the most important steps in order to startup
 the Redis server.
-* `initServerConfig()` setups the default values of the `server` structure.
+* `initServerConfig()` sets up the default values of the `server` structure.
 * `initServer()` allocates the data structures needed to operate, setup the listening socket, and so forth.
 * `aeMain()` starts the event loop which listens for new connections.
 There are two special functions called periodically by the event loop:
-1. `serverCron()` is called periodically (according to `server.hz` frequency), and performs tasks that must be performed from time to time, like checking for timedout clients.
+1. `serverCron()` is called periodically (according to `server.hz` frequency), and performs tasks that must be performed from time to time, like checking for timed out clients.
 2. `beforeSleep()` is called every time the event loop fired, Redis served a few requests, and is returning back into the event loop.
 Inside server.c you can find code that handles other vital things of the Redis server:
@@ -364,16 +364,16 @@ This file defines all the I/O functions with clients, masters and replicas
 (which in Redis are just special clients):
 * `createClient()` allocates and initializes a new client.
-* the `addReply*()` family of functions are used by commands implementations in order to append data to the client structure, that will be transmitted to the client as a reply for a given command executed.
+* the `addReply*()` family of functions are used by command implementations in order to append data to the client structure, that will be transmitted to the client as a reply for a given command executed.
 * `writeToClient()` transmits the data pending in the output buffers to the client and is called by the *writable event handler* `sendReplyToClient()`.
-* `readQueryFromClient()` is the *readable event handler* and accumulates data from read from the client into the query buffer.
+* `readQueryFromClient()` is the *readable event handler* and accumulates data read from the client into the query buffer.
 * `processInputBuffer()` is the entry point in order to parse the client query buffer according to the Redis protocol. Once commands are ready to be processed, it calls `processCommand()` which is defined inside `server.c` in order to actually execute the command.
 * `freeClient()` deallocates, disconnects and removes a client.
 aof.c and rdb.c
 ---
-As you can guess from the names these files implement the RDB and AOF
+As you can guess from the names, these files implement the RDB and AOF
 persistence for Redis. Redis uses a persistence model based on the `fork()`
 system call in order to create a thread with the same (shared) memory
 content of the main Redis thread. This secondary thread dumps the content
@@ -385,13 +385,13 @@ The implementation inside `aof.c` has additional functions in order to
 implement an API that allows commands to append new commands into the AOF
 file as clients execute them.
-The `call()` function defined inside `server.c` is responsible to call
+The `call()` function defined inside `server.c` is responsible for calling
 the functions that in turn will write the commands into the AOF.
 db.c
 ---
-Certain Redis commands operate on specific data types, others are general.
+Certain Redis commands operate on specific data types; others are general.
 Examples of generic commands are `DEL` and `EXPIRE`. They operate on keys
 and not on their values specifically. All those generic commands are
 defined inside `db.c`.
@@ -399,7 +399,7 @@ defined inside `db.c`.
 Moreover `db.c` implements an API in order to perform certain operations
 on the Redis dataset without directly accessing the internal data structures.
-The most important functions inside `db.c` which are used in many commands
+The most important functions inside `db.c` which are used in many command
 implementations are the following:
 * `lookupKeyRead()` and `lookupKeyWrite()` are used in order to get a pointer to the value associated to a given key, or `NULL` if the key does not exist.
@@ -417,7 +417,7 @@ The `robj` structure defining Redis objects was already described. Inside
 a basic level, like functions to allocate new objects, handle the reference
 counting and so forth. Notable functions inside this file:
-* `incrRefcount()` and `decrRefCount()` are used in order to increment or decrement an object reference count. When it drops to 0 the object is finally freed.
+* `incrRefCount()` and `decrRefCount()` are used in order to increment or decrement an object reference count. When it drops to 0 the object is finally freed.
 * `createObject()` allocates a new object. There are also specialized functions to allocate string objects having a specific content, like `createStringObjectFromLongLong()` and similar functions.
 This file also implements the `OBJECT` command.
@@ -441,12 +441,12 @@ replicas, or to continue the replication after a disconnection.
 Other C files
 ---
-* `t_hash.c`, `t_list.c`, `t_set.c`, `t_string.c`, `t_zset.c` and `t_stream.c` contains the implementation of the Redis data types. They implement both an API to access a given data type, and the client commands implementations for these data types.
+* `t_hash.c`, `t_list.c`, `t_set.c`, `t_string.c`, `t_zset.c` and `t_stream.c` contains the implementation of the Redis data types. They implement both an API to access a given data type, and the client command implementations for these data types.
 * `ae.c` implements the Redis event loop, it's a self contained library which is simple to read and understand.
 * `sds.c` is the Redis string library, check http://github.com/antirez/sds for more information.
 * `anet.c` is a library to use POSIX networking in a simpler way compared to the raw interface exposed by the kernel.
 * `dict.c` is an implementation of a non-blocking hash table which rehashes incrementally.
-* `scripting.c` implements Lua scripting. It is completely self contained from the rest of the Redis implementation and is simple enough to understand if you are familiar with the Lua API.
+* `scripting.c` implements Lua scripting. It is completely self-contained and isolated from the rest of the Redis implementation and is simple enough to understand if you are familiar with the Lua API.
 * `cluster.c` implements the Redis Cluster. Probably a good read only after being very familiar with the rest of the Redis code base. If you want to read `cluster.c` make sure to read the [Redis Cluster specification][3].
 [3]: http://redis.io/topics/cluster-spec
@@ -472,12 +472,12 @@ top comment inside `server.c`.
 After the command operates in some way, it returns a reply to the client,
 usually using `addReply()` or a similar function defined inside `networking.c`.
-There are tons of commands implementations inside the Redis source code
-that can serve as examples of actual commands implementations. To write
-a few toy commands can be a good exercise to familiarize with the code base.
+There are tons of command implementations inside the Redis source code
+that can serve as examples of actual commands implementations. Writing
+a few toy commands can be a good exercise to get familiar with the code base.
 There are also many other files not described here, but it is useless to
-cover everything. We want to just help you with the first steps.
+cover everything. We just want to help you with the first steps.
 Eventually you'll find your way inside the Redis code base :-)
 Enjoy!

deps/README.md

@@ -21,7 +21,7 @@ just following these steps:
 1. Remove the jemalloc directory.
 2. Substitute it with the new jemalloc source tree.
-3. Edit the Makefile localted in the same directory as the README you are
+3. Edit the Makefile located in the same directory as the README you are
 reading, and change the --with-version in the Jemalloc configure script
 options with the version you are using. This is required because otherwise
 Jemalloc configuration script is broken and will not work nested in another
@@ -33,7 +33,7 @@ If you want to upgrade Jemalloc while also providing support for
 active defragmentation, in addition to the above steps you need to perform
 the following additional steps:
-5. In Jemalloc three, file `include/jemalloc/jemalloc_macros.h.in`, make sure
+5. In Jemalloc tree, file `include/jemalloc/jemalloc_macros.h.in`, make sure
 to add `#define JEMALLOC_FRAG_HINT`.
 6. Implement the function `je_get_defrag_hint()` inside `src/jemalloc.c`. You
 can see how it is implemented in the current Jemalloc source tree shipped
@@ -49,7 +49,7 @@ Hiredis uses the SDS string library, that must be the same version used inside R
 1. Check with diff if hiredis API changed and what impact it could have in Redis.
 2. Make sure that the SDS library inside Hiredis and inside Redis are compatible.
 3. After the upgrade, run the Redis Sentinel test.
-4. Check manually that redis-cli and redis-benchmark behave as expecteed, since we have no tests for CLI utilities currently.
+4. Check manually that redis-cli and redis-benchmark behave as expected, since we have no tests for CLI utilities currently.
 Linenoise
 ---
@@ -77,6 +77,6 @@ and our version:
 1. Makefile is modified to allow a different compiler than GCC.
 2. We have the implementation source code, and directly link to the following external libraries: `lua_cjson.o`, `lua_struct.o`, `lua_cmsgpack.o` and `lua_bit.o`.
-3. There is a security fix in `ldo.c`, line 498: The check for `LUA_SIGNATURE[0]` is removed in order toa void direct bytecode execution.
+3. There is a security fix in `ldo.c`, line 498: The check for `LUA_SIGNATURE[0]` is removed in order to avoid direct bytecode execution.

deps/linenoise/linenoise.c

@@ -625,7 +625,7 @@ static void refreshMultiLine(struct linenoiseState *l) {
 rpos2 = (plen+l->pos+l->cols)/l->cols; /* current cursor relative row. */
 lndebug("rpos2 %d", rpos2);
-/* Go up till we reach the expected positon. */
+/* Go up till we reach the expected position. */
 if (rows-rpos2 > 0) {
 lndebug("go-up %d", rows-rpos2);
 snprintf(seq,64,"\x1b[%dA", rows-rpos2);
@@ -767,7 +767,7 @@ void linenoiseEditBackspace(struct linenoiseState *l) {
 }
 }
-/* Delete the previosu word, maintaining the cursor at the start of the
+/* Delete the previous word, maintaining the cursor at the start of the
 * current word. */
 void linenoiseEditDeletePrevWord(struct linenoiseState *l) {
 size_t old_pos = l->pos;

redis.conf

@@ -24,7 +24,7 @@
 # to customize a few per-server settings. Include files can include
 # other files, so use this wisely.
 #
-# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
+# Note that option "include" won't be rewritten by command "CONFIG REWRITE"
 # from admin or Redis Sentinel. Since Redis always uses the last processed
 # line as value of a configuration directive, you'd better put includes
 # at the beginning of this file to avoid overwriting config change at runtime.
@@ -46,7 +46,7 @@
 ################################## NETWORK #####################################
 # By default, if no "bind" configuration directive is specified, Redis listens
-# for connections from all the network interfaces available on the server.
+# for connections from all available network interfaces on the host machine.
 # It is possible to listen to just one or multiple selected interfaces using
 # the "bind" configuration directive, followed by one or more IP addresses.
 #
@@ -58,13 +58,12 @@
 # ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
 # internet, binding to all the interfaces is dangerous and will expose the
 # instance to everybody on the internet. So by default we uncomment the
-# following bind directive, that will force Redis to listen only into
-# the IPv4 loopback interface address (this means Redis will be able to
-# accept connections only from clients running into the same computer it
-# is running).
+# following bind directive, that will force Redis to listen only on the
+# IPv4 loopback interface address (this means Redis will only be able to
+# accept client connections from the same host that it is running on).
 #
 # IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
-# JUST COMMENT THE FOLLOWING LINE.
+# JUST COMMENT OUT THE FOLLOWING LINE.
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 bind 127.0.0.1
@@ -93,8 +92,8 @@ port 6379
 # TCP listen() backlog.
 #
-# In high requests-per-second environments you need an high backlog in order
-# to avoid slow clients connections issues. Note that the Linux kernel
+# In high requests-per-second environments you need a high backlog in order
+# to avoid slow clients connection issues. Note that the Linux kernel
 # will silently truncate it to the value of /proc/sys/net/core/somaxconn so
 # make sure to raise both the value of somaxconn and tcp_max_syn_backlog
 # in order to get the desired effect.
@@ -118,8 +117,8 @@ timeout 0
 # of communication. This is useful for two reasons:
 #
 # 1) Detect dead peers.
-# 2) Take the connection alive from the point of view of network
-# equipment in the middle.
+# 2) Force network equipment in the middle to consider the connection to be
+# alive.
 #
 # On Linux, the specified value (in seconds) is the period used to send ACKs.
 # Note that to close the connection the double of the time is needed.
@@ -228,11 +227,12 @@ daemonize no
 # supervision tree. Options:
 # supervised no - no supervision interaction
 # supervised upstart - signal upstart by putting Redis into SIGSTOP mode
+# requires "expect stop" in your upstart job config
 # supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
 # supervised auto - detect upstart or systemd method based on
 # UPSTART_JOB or NOTIFY_SOCKET environment variables
 # Note: these supervision methods only signal "process is ready."
-# They do not enable continuous liveness pings back to your supervisor.
+# They do not enable continuous pings back to your supervisor.
 supervised no
 # If a pid file is specified, Redis writes it where specified at startup
@@ -302,7 +302,7 @@ always-show-logo no
 # Will save the DB if both the given number of seconds and the given
 # number of write operations against the DB occurred.
 #
-# In the example below the behaviour will be to save:
+# In the example below the behavior will be to save:
 # after 900 sec (15 min) if at least 1 key changed
 # after 300 sec (5 min) if at least 10 keys changed
 # after 60 sec if at least 10000 keys changed
@@ -335,7 +335,7 @@ save 60 10000
 stop-writes-on-bgsave-error yes
 # Compress string objects using LZF when dump .rdb databases?
-# For default that's set to 'yes' as it's almost always a win.
+# By default compression is enabled as it's almost always a win.
 # If you want to save some CPU in the saving child set it to 'no' but
 # the dataset will likely be bigger if you have compressible values or keys.
 rdbcompression yes
@@ -423,11 +423,11 @@ dir ./
 # still reply to client requests, possibly with out of date data, or the
 # data set may just be empty if this is the first synchronization.
 #
-# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
-# an error "SYNC with master in progress" to all the kind of commands
-# but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
-# SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
-# COMMAND, POST, HOST: and LATENCY.
+# 2) If replica-serve-stale-data is set to 'no' the replica will reply with
+# an error "SYNC with master in progress" to all commands except:
+# INFO, REPLICAOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,
+# UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,
+# HOST and LATENCY.
 #
 replica-serve-stale-data yes
@@ -498,7 +498,7 @@ repl-diskless-sync-delay 5
 #
 # Replica can load the RDB it reads from the replication link directly from the
 # socket, or store the RDB to a file and read that file after it was completely
-# recived from the master.
+# received from the master.
 #
 # In many cases the disk is slower than the network, and storing and loading
 # the RDB file may increase replication time (and even increase the master's
@@ -528,7 +528,8 @@ repl-diskless-load disabled
 #
 # It is important to make sure that this value is greater than the value
 # specified for repl-ping-replica-period otherwise a timeout will be detected
-# every time there is low traffic between the master and the replica.
+# every time there is low traffic between the master and the replica. The default
+# value is 60 seconds.
 #
 # repl-timeout 60
@@ -553,21 +554,21 @@ repl-disable-tcp-nodelay no
 # partial resync is enough, just passing the portion of data the replica
 # missed while disconnected.
 #
-# The bigger the replication backlog, the longer the time the replica can be
-# disconnected and later be able to perform a partial resynchronization.
+# The bigger the replication backlog, the longer the replica can endure the
+# disconnect and later be able to perform a partial resynchronization.
 #
-# The backlog is only allocated once there is at least a replica connected.
+# The backlog is only allocated if there is at least one replica connected.
 #
 # repl-backlog-size 1mb
-# After a master has no longer connected replicas for some time, the backlog
-# will be freed. The following option configures the amount of seconds that
-# need to elapse, starting from the time the last replica disconnected, for
-# the backlog buffer to be freed.
+# After a master has no connected replicas for some time, the backlog will be
+# freed. The following option configures the amount of seconds that need to
+# elapse, starting from the time the last replica disconnected, for the backlog
+# buffer to be freed.
 #
 # Note that replicas never free the backlog for timeout, since they may be
 # promoted to masters later, and should be able to correctly "partially
-# resynchronize" with the replicas: hence they should always accumulate backlog.
+# resynchronize" with other replicas: hence they should always accumulate backlog.
 #
 # A value of 0 means to never release the backlog.
 #
@@ -617,8 +618,8 @@ replica-priority 100
 # Another place where this info is available is in the output of the
 # "ROLE" command of a master.
 #
-# The listed IP and address normally reported by a replica is obtained
-# in the following way:
+# The listed IP address and port normally reported by a replica is
+# obtained in the following way:
 #
 # IP: The address is auto detected by checking the peer address
 # of the socket used by the replica to connect with the master.
@@ -628,7 +629,7 @@ replica-priority 100
 # listen for connections.
 #
 # However when port forwarding or Network Address Translation (NAT) is
-# used, the replica may be actually reachable via different IP and port
+# used, the replica may actually be reachable via different IP and port
 # pairs. The following two options can be used by a replica in order to
 # report to its master a specific set of IP and port, so that both INFO
 # and ROLE will report those values.
@@ -645,7 +646,7 @@ replica-priority 100
 # This is implemented using an invalidation table that remembers, using
 # 16 millions of slots, what clients may have certain subsets of keys. In turn
 # this is used in order to send invalidation messages to clients. Please
-# to understand more about the feature check this page:
+# check this page to understand more about the feature:
 #
 # https://redis.io/topics/client-side-caching
 #
@@ -677,7 +678,7 @@ replica-priority 100
 ################################## SECURITY ###################################
-# Warning: since Redis is pretty fast an outside user can try up to
+# Warning: since Redis is pretty fast, an outside user can try up to
 # 1 million passwords per second against a modern box. This means that you
 # should use very strong passwords, otherwise they will be very easy to break.
 # Note that because the password is really a shared secret between the client
@@ -701,7 +702,7 @@ replica-priority 100
 # AUTH (or the HELLO command AUTH option) in order to be authenticated and
 # start to work.
 #
-# The ACL rules that describe what an user can do are the following:
+# The ACL rules that describe what a user can do are the following:
 #
 # on Enable the user: it is possible to authenticate as this user.
 # off Disable the user: it's no longer possible to authenticate
@@ -729,7 +730,7 @@ replica-priority 100
 # It is possible to specify multiple patterns.
 # allkeys Alias for ~*
 # resetkeys Flush the list of allowed keys patterns.
-# ><password> Add this passowrd to the list of valid password for the user.
+# ><password> Add this password to the list of valid password for the user.
 # For example >mypass will add "mypass" to the list.
 # This directive clears the "nopass" flag (see later).
 # <<password> Remove this password from the list of valid passwords.
@@ -783,7 +784,7 @@ acllog-max-len 128
 #
 # Instead of configuring users here in this file, it is possible to use
 # a stand-alone file just listing users. The two methods cannot be mixed:
-# if you configure users here and at the same time you activate the exteranl
+# if you configure users here and at the same time you activate the external
 # ACL file, the server will refuse to start.
 #
 # The format of the external ACL user file is exactly the same as the
@@ -791,7 +792,7 @@ acllog-max-len 128
 #
 # aclfile /etc/redis/users.acl
-# IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatiblity
+# IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatibility
 # layer on top of the new ACL system. The option effect will be just setting
 # the password for the default user. Clients will still authenticate using
 # AUTH <password> as usually, or more explicitly with AUTH default <password>
@@ -902,8 +903,8 @@ acllog-max-len 128
 # LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
 # algorithms (in order to save memory), so you can tune it for speed or
-# accuracy. For default Redis will check five keys and pick the one that was
-# used less recently, you can change the sample size using the following
+# accuracy. By default Redis will check five keys and pick the one that was
+# used least recently, you can change the sample size using the following
 # configuration directive.
 #
 # The default of 5 produces good enough results. 10 Approximates very closely
@@ -943,8 +944,8 @@ acllog-max-len 128
 # it is possible to increase the expire "effort" that is normally set to
 # "1", to a greater value, up to the value "10". At its maximum value the
 # system will use more CPU, longer cycles (and technically may introduce
-# more latency), and will tollerate less already expired keys still present
-# in the system. It's a tradeoff betweeen memory, CPU and latecy.
+# more latency), and will tolerate less already expired keys still present
+# in the system. It's a tradeoff between memory, CPU and latency.
 #
 # active-expire-effort 1
@@ -1012,7 +1013,7 @@ lazyfree-lazy-user-del no
 #
 # Now it is also possible to handle Redis clients socket reads and writes
 # in different I/O threads. Since especially writing is so slow, normally
-# Redis users use pipelining in order to speedup the Redis performances per
+# Redis users use pipelining in order to speed up the Redis performances per
 # core, and spawn multiple instances in order to scale more. Using I/O
 # threads it is possible to easily speedup two times Redis without resorting
 # to pipelining nor sharding of the instance.
@@ -1030,7 +1031,7 @@ lazyfree-lazy-user-del no
 #
 # io-threads 4
 #
-# Setting io-threads to 1 will just use the main thread as usually.
+# Setting io-threads to 1 will just use the main thread as usual.
 # When I/O threads are enabled, we only use threads for writes, that is
 # to thread the write(2) syscall and transfer the client buffers to the
 # socket. However it is also possible to enable threading of reads and
@@ -1047,7 +1048,7 @@ lazyfree-lazy-user-del no
 #
 # NOTE 2: If you want to test the Redis speedup using redis-benchmark, make
 # sure you also run the benchmark itself in threaded mode, using the
-# --threads option to match the number of Redis theads, otherwise you'll not
+# --threads option to match the number of Redis threads, otherwise you'll not
 # be able to notice the improvements.
 ############################ KERNEL OOM CONTROL ##############################
@@ -1200,8 +1201,8 @@ aof-load-truncated yes
 #
 # [RDB file][AOF tail]
 #
-# When loading Redis recognizes that the AOF file starts with the "REDIS"
-# string and loads the prefixed RDB file, and continues loading the AOF
+# When loading, Redis recognizes that the AOF file starts with the "REDIS"
+# string and loads the prefixed RDB file, then continues loading the AOF
 # tail.
 aof-use-rdb-preamble yes
@@ -1215,7 +1216,7 @@ aof-use-rdb-preamble yes
 #
 # When a long running script exceeds the maximum execution time only the
 # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
-# used to stop a script that did not yet called write commands. The second
+# used to stop a script that did not yet call any write commands. The second
 # is the only way to shut down the server in the case a write command was
 # already issued by the script but the user doesn't want to wait for the natural
 # termination of the script.
@@ -1241,7 +1242,7 @@ lua-time-limit 5000
 # Cluster node timeout is the amount of milliseconds a node must be unreachable
 # for it to be considered in failure state.
-# Most other internal time limits are multiple of the node timeout.
+# Most other internal time limits are a multiple of the node timeout.
 #
 # cluster-node-timeout 15000
@ -1268,18 +1269,18 @@ lua-time-limit 5000
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * replica-validity-factor) + repl-ping-replica-period
# (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the replica-validity-factor
# So for example if node-timeout is 30 seconds, and the cluster-replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large replica-validity-factor may allow replicas with too old data to failover
# A large cluster-replica-validity-factor may allow replicas with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a replica at all.
#
# For maximum availability, it is possible to set the replica-validity-factor
# For maximum availability, it is possible to set the cluster-replica-validity-factor
# to a value of 0, which means that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
@ -1310,7 +1311,7 @@ lua-time-limit 5000
# cluster-migration-barrier 1
# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# is at least a hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) the whole cluster eventually becomes unavailable.
# It automatically becomes available again as soon as all the slots are covered.
@ -1365,7 +1366,7 @@ lua-time-limit 5000
# * cluster-announce-port
# * cluster-announce-bus-port
#
# Each instruct the node about its address, client port, and cluster message
# Each instructs the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
@ -1376,7 +1377,7 @@ lua-time-limit 5000
# Note that when remapped, the bus port may not be at the fixed offset of
# clients port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usually.
# 10000 will be used as usual.
#
# Example:
#
@ -1505,7 +1506,7 @@ notify-keyspace-events ""
# two kinds of inline requests that were anyway illegal: an empty request
# or any request that starts with "/" (there are no Redis commands starting
# with such a slash). Normal RESP2/RESP3 requests are completely out of the
# path of the Gopher protocol implementation and are served as usually as well.
# path of the Gopher protocol implementation and are served as usual as well.
#
# If you open a connection to Redis when Gopher is enabled and send it
# a string like "/foo", if there is a key named "/foo" it is served via the
@ -1677,7 +1678,7 @@ client-output-buffer-limit pubsub 32mb 8mb 60
# client-query-buffer-limit 1gb
# In the Redis protocol, bulk requests, that is, elements representing single
# strings, are normally limited ot 512 mb. However you can change this limit
# strings, are normally limited to 512 mb. However you can change this limit
# here, but it must be 1mb or greater
#
# proto-max-bulk-len 512mb
@ -1706,7 +1707,7 @@ hz 10
#
# Since the default HZ value is conservatively set to 10, Redis
# offers, and enables by default, the ability to use an adaptive HZ value
# which will temporary raise when there are many connected clients.
# which will temporarily raise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used
# as a baseline, but multiples of the configured HZ value will be actually
@ -1773,7 +1774,7 @@ rdb-save-incremental-fsync yes
# for the key counter to be divided by two (or decremented if it has a value
# less than or equal to 10).
#
# The default value for the lfu-decay-time is 1. A Special value of 0 means to
# The default value for the lfu-decay-time is 1. A special value of 0 means to
# decay the counter every time it happens to be scanned.
#
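# As an illustration of the rule above (numbers made up, not from this
# patch): with lfu-decay-time 1, a counter at 100 takes one idle minute
# to drop to 50, while a counter at or below 10 is just decremented.
#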
# lfu-log-factor 10
@ -1793,7 +1794,7 @@ rdb-save-incremental-fsync yes
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in an "hot" way, while the server is running.
# in a "hot" way, while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
@ -1870,3 +1871,4 @@ jemalloc-bg-thread yes
#
# Set bgsave child process to cpu affinity 1,10,11
# bgsave_cpulist 1,10-11


@ -259,6 +259,6 @@ sentinel deny-scripts-reconfig yes
# SENTINEL SET can also be used in order to perform this configuration at runtime.
#
# In order to set a command back to its original name (undo the renaming), it
# is possible to just rename a command to itsef:
# is possible to just rename a command to itself:
#
# SENTINEL rename-command mymaster CONFIG CONFIG


@ -289,7 +289,7 @@ void ACLFreeUserAndKillClients(user *u) {
while ((ln = listNext(&li)) != NULL) {
client *c = listNodeValue(ln);
if (c->user == u) {
/* We'll free the conenction asynchronously, so
/* We'll free the connection asynchronously, so
* in theory setting a different user is not needed.
* However if there are bugs in Redis, sooner or later
* this may result in some security hole: it's much


@ -34,8 +34,9 @@
#include "zmalloc.h"
/* Create a new list. The created list can be freed with
* AlFreeList(), but private value of every node need to be freed
* by the user before to call AlFreeList().
* listRelease(), but the private value of every node needs to be freed
* by the user before calling listRelease(), or by setting a free method using
* listSetFreeMethod.
*
* On error, NULL is returned. Otherwise the pointer to the new list. */
list *listCreate(void)
@ -217,8 +218,8 @@ void listRewindTail(list *list, listIter *li) {
* listDelNode(), but not to remove other elements.
*
* The function returns a pointer to the next element of the list,
* or NULL if there are no more elements, so the classical usage patter
* is:
* or NULL if there are no more elements, so the classical usage
* pattern is:
*
* iter = listGetIterator(list,<direction>);
* while ((node = listNext(iter)) != NULL) {
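*
* As a fuller, hedged sketch of the same pattern (the printf payload
* is made up for illustration; the adlist calls are the real API):
*
* listIter *iter = listGetIterator(list,AL_START_HEAD);
* listNode *node;
* while ((node = listNext(iter)) != NULL) {
*     printf("%s\n", (char*)listNodeValue(node));
* }
* listReleaseIterator(iter);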


@ -405,7 +405,7 @@ int aeProcessEvents(aeEventLoop *eventLoop, int flags)
int fired = 0; /* Number of events fired for current fd. */
/* Normally we execute the readable event first, and the writable
* event laster. This is useful as sometimes we may be able
* event later. This is useful as sometimes we may be able
* to serve the reply of a query immediately after processing the
* query.
*
@ -413,7 +413,7 @@ int aeProcessEvents(aeEventLoop *eventLoop, int flags)
* asking us to do the reverse: never fire the writable event
* after the readable. In such a case, we invert the calls.
* This is useful when, for instance, we want to do things
* in the beforeSleep() hook, like fsynching a file to disk,
* in the beforeSleep() hook, like fsyncing a file to disk,
* before replying to a client. */
int invert = fe->mask & AE_BARRIER;
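
A hedged usage sketch (the handler and private data names are
hypothetical): a caller opts into the inverted ordering with

    aeCreateFileEvent(server.el, fd, AE_WRITABLE|AE_BARRIER,
                      sendReplyToClient, privdata);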


@ -232,7 +232,7 @@ static void aeApiDelEvent(aeEventLoop *eventLoop, int fd, int mask) {
/*
* ENOMEM is a potentially transient condition, but the kernel won't
* generally return it unless things are really bad. EAGAIN indicates
* we've reached an resource limit, for which it doesn't make sense to
* we've reached a resource limit, for which it doesn't make sense to
* retry (counter-intuitively). All other errors indicate a bug. In any
* of these cases, the best we can do is to abort.
*/


@ -544,7 +544,7 @@ sds catAppendOnlyGenericCommand(sds dst, int argc, robj **argv) {
return dst;
}
/* Create the sds representation of an PEXPIREAT command, using
/* Create the sds representation of a PEXPIREAT command, using
* 'seconds' as time to live and 'cmd' to understand what command
* we are translating into a PEXPIREAT.
*
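*
* For instance (illustrative numbers, not from the patch): an
* EXPIRE mykey 10 received when the clock reads 1600000000000
* milliseconds becomes PEXPIREAT mykey 1600000010000 in the AOF,
* so replaying the file later keeps the same absolute expire time.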
@ -1818,7 +1818,7 @@ void backgroundRewriteDoneHandler(int exitcode, int bysignal) {
"Background AOF rewrite terminated with error");
} else {
/* SIGUSR1 is whitelisted, so we have a way to kill a child without
* tirggering an error condition. */
* triggering an error condition. */
if (bysignal != SIGUSR1)
server.aof_lastbgrewrite_status = C_ERR;


@ -21,7 +21,7 @@
*
* Never use return value from the macros, instead use the AtomicGetIncr()
* if you need to get the current value and increment it atomically, like
* in the followign example:
* in the following example:
*
* long oldvalue;
* atomicGetIncr(myvar,oldvalue,1);


@ -36,7 +36,7 @@
/* Count the number of bits set in the binary array pointed to by 's',
* 'count' bytes long. The implementation of this function is required to
* work with a input string length up to 512 MB. */
* work with an input string length up to 512 MB. */
size_t redisPopcount(void *s, long count) {
size_t bits = 0;
unsigned char *p = s;
@ -107,7 +107,7 @@ long redisBitpos(void *s, unsigned long count, int bit) {
int found;
/* Process whole words first, seeking for first word that is not
* all ones or all zeros respectively if we are lookig for zeros
* all ones or all zeros respectively if we are looking for zeros
* or ones. This is much faster with large strings having contiguous
* blocks of 1 or 0 bits compared to the vanilla bit per bit processing.
*
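*
* A rough sketch of that word-first strategy (illustrative only, not
* the actual redisBitpos code, and ignoring its MSB-first ordering):
*
* long firstSetBit(const uint64_t *words, long nwords) {
*     for (long w = 0; w < nwords; w++) {
*         if (words[w] == 0) continue;            (skip all-zero words)
*         for (int b = 0; b < 64; b++)
*             if (words[w] & (1ULL << b)) return w*64 + b;
*     }
*     return -1;                                  (no set bit found)
* }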
@ -496,7 +496,7 @@ robj *lookupStringForBitCommand(client *c, size_t maxbit) {
* in 'len'. The user is required to pass (likely stack allocated) buffer
* 'llbuf' of at least LONG_STR_SIZE bytes. Such a buffer is used in the case
* the object is integer encoded in order to provide the representation
* without usign heap allocation.
* without using heap allocation.
*
* The function returns the pointer to the object array of bytes representing
* the string it contains, that may be a pointer to 'llbuf' or to the


@ -53,7 +53,7 @@
* to 0, no timeout is processed).
* It usually just needs to send a reply to the client.
*
* When implementing a new type of blocking opeation, the implementation
* When implementing a new type of blocking operation, the implementation
* should modify unblockClient() and replyToBlockedClientTimedOut() in order
* to handle the btype-specific behavior of these two functions.
* If the blocking operation waits for certain keys to change state, the
@ -118,7 +118,7 @@ void processUnblockedClients(void) {
/* This function will schedule the client for reprocessing at a safe time.
*
* This is useful when a client was blocked for some reason (blocking opeation,
* This is useful when a client was blocked for some reason (blocking operation,
* CLIENT PAUSE, or whatever), because it may end with some accumulated query
* buffer that needs to be processed ASAP:
*


@ -377,7 +377,7 @@ void clusterSaveConfigOrDie(int do_fsync) {
}
}
/* Lock the cluster config using flock(), and leaks the file descritor used to
/* Lock the cluster config using flock(), and leaks the file descriptor used to
* acquire the lock so that the file will be locked forever.
*
* This works because we always update nodes.conf with a new version
@ -544,13 +544,13 @@ void clusterInit(void) {
/* Reset a node performing a soft or hard reset:
*
* 1) All other nodes are forget.
* 1) All other nodes are forgotten.
* 2) All the assigned / open slots are released.
* 3) If the node is a slave, it turns into a master.
* 5) Only for hard reset: a new Node ID is generated.
* 6) Only for hard reset: currentEpoch and configEpoch are set to 0.
* 7) The new configuration is saved and the cluster state updated.
* 8) If the node was a slave, the whole data set is flushed away. */
* 4) Only for hard reset: a new Node ID is generated.
* 5) Only for hard reset: currentEpoch and configEpoch are set to 0.
* 6) The new configuration is saved and the cluster state updated.
* 7) If the node was a slave, the whole data set is flushed away. */
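/* (Illustrative context, not part of the patch: this is the handler
 * behind the CLUSTER RESET [HARD|SOFT] command, e.g. "CLUSTER RESET
 * HARD" runs all of the steps above.) */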
void clusterReset(int hard) {
dictIterator *di;
dictEntry *de;
@ -646,7 +646,7 @@ static void clusterConnAcceptHandler(connection *conn) {
/* Create a link object we use to handle the connection.
* It gets passed to the readable handler when data is available.
* Initiallly the link->node pointer is set to NULL as we don't know
* Initially the link->node pointer is set to NULL as we don't know
* which node it is, but the right node is referenced once we know the
* node identity. */
link = createClusterLink(NULL);
@ -1060,7 +1060,7 @@ uint64_t clusterGetMaxEpoch(void) {
* 3) Persist the configuration on disk before sending packets with the
* new configuration.
*
* If the new config epoch is generated and assigend, C_OK is returned,
* If the new config epoch is generated and assigned, C_OK is returned,
* otherwise C_ERR is returned (since the node already has the greatest
* configuration around) and no operation is performed.
*
@ -1133,7 +1133,7 @@ int clusterBumpConfigEpochWithoutConsensus(void) {
*
* In general we want a system that eventually always ends with different
* masters having different configuration epochs whatever happened, since
* nothign is worse than a split-brain condition in a distributed system.
* nothing is worse than a split-brain condition in a distributed system.
*
* BEHAVIOR
*
@ -1192,7 +1192,7 @@ void clusterHandleConfigEpochCollision(clusterNode *sender) {
* entries from the black list. This is an O(N) operation but it is not a
* problem since add / exists operations are called very infrequently and
* the hash table is supposed to contain very few elements at max.
* However without the cleanup during long uptimes and with some automated
* However without the cleanup during long uptime and with some automated
* node add/removal procedures, entries could accumulate. */
void clusterBlacklistCleanup(void) {
dictIterator *di;
@ -1346,12 +1346,12 @@ int clusterHandshakeInProgress(char *ip, int port, int cport) {
return de != NULL;
}
/* Start an handshake with the specified address if there is not one
/* Start a handshake with the specified address if there is not one
* already in progress. Returns non-zero if the handshake was actually
* started. On error zero is returned and errno is set to one of the
* following values:
*
* EAGAIN - There is already an handshake in progress for this address.
* EAGAIN - There is already a handshake in progress for this address.
* EINVAL - IP or port are not valid. */
int clusterStartHandshake(char *ip, int port, int cport) {
clusterNode *n;
@ -1793,7 +1793,7 @@ int clusterProcessPacket(clusterLink *link) {
if (sender) sender->data_received = now;
if (sender && !nodeInHandshake(sender)) {
/* Update our curretEpoch if we see a newer epoch in the cluster. */
/* Update our currentEpoch if we see a newer epoch in the cluster. */
senderCurrentEpoch = ntohu64(hdr->currentEpoch);
senderConfigEpoch = ntohu64(hdr->configEpoch);
if (senderCurrentEpoch > server.cluster->currentEpoch)
@ -2480,7 +2480,7 @@ void clusterSetGossipEntry(clusterMsg *hdr, int i, clusterNode *n) {
}
/* Send a PING or PONG packet to the specified node, making sure to add enough
* gossip informations. */
* gossip information. */
void clusterSendPing(clusterLink *link, int type) {
unsigned char *buf;
clusterMsg *hdr;
@ -2500,7 +2500,7 @@ void clusterSendPing(clusterLink *link, int type) {
* node_timeout we exchange with each other node at least 4 packets
* (we ping in the worst case in node_timeout/2 time, and we also
* receive two pings from the host), we have a total of 8 packets
* in the node_timeout*2 falure reports validity time. So we have
* in the node_timeout*2 failure reports validity time. So we have
* that, for a single PFAIL node, we can expect to receive the following
* number of failure reports (in the specified window of time):
*
@ -2527,7 +2527,7 @@ void clusterSendPing(clusterLink *link, int type) {
* faster to propagate to go from PFAIL to FAIL state. */
int pfail_wanted = server.cluster->stats_pfail_nodes;
/* Compute the maxium totlen to allocate our buffer. We'll fix the totlen
/* Compute the maximum totlen to allocate our buffer. We'll fix the totlen
* later according to the number of gossip sections we really were able
* to put inside the packet. */
totlen = sizeof(clusterMsg)-sizeof(union clusterMsgData);
@ -2564,7 +2564,7 @@ void clusterSendPing(clusterLink *link, int type) {
if (this->flags & (CLUSTER_NODE_HANDSHAKE|CLUSTER_NODE_NOADDR) ||
(this->link == NULL && this->numslots == 0))
{
freshnodes--; /* Tecnically not correct, but saves CPU. */
freshnodes--; /* Technically not correct, but saves CPU. */
continue;
}
@ -3149,7 +3149,7 @@ void clusterHandleSlaveFailover(void) {
}
}
/* If the previous failover attempt timedout and the retry time has
/* If the previous failover attempt timed out and the retry time has
* elapsed, we can setup a new one. */
if (auth_age > auth_retry_time) {
server.cluster->failover_auth_time = mstime() +
@ -3255,7 +3255,7 @@ void clusterHandleSlaveFailover(void) {
*
* Slave migration is the process that allows a slave of a master that is
* already covered by at least another slave, to "migrate" to a master that
* is orpaned, that is, left with no working slaves.
* is orphaned, that is, left with no working slaves.
* ------------------------------------------------------------------------- */
/* This function is responsible to decide if this replica should be migrated
@ -3272,7 +3272,7 @@ void clusterHandleSlaveFailover(void) {
* the nodes anyway, so we spend time into clusterHandleSlaveMigration()
* if definitely needed.
*
* The fuction is called with a pre-computed max_slaves, that is the max
* The function is called with a pre-computed max_slaves, that is the max
* number of working (not in FAIL state) slaves for a single master.
*
* Additional conditions for migration are examined inside the function.
@ -3391,7 +3391,7 @@ void clusterHandleSlaveMigration(int max_slaves) {
* data loss due to the asynchronous master-slave replication.
* -------------------------------------------------------------------------- */
/* Reset the manual failover state. This works for both masters and slavesa
/* Reset the manual failover state. This works for both masters and slaves
* as all the state about manual failover is cleared.
*
* The function can be used both to initialize the manual failover state at
@ -3683,7 +3683,7 @@ void clusterCron(void) {
replicationSetMaster(myself->slaveof->ip, myself->slaveof->port);
}
/* Abourt a manual failover if the timeout is reached. */
/* Abort a manual failover if the timeout is reached. */
manualFailoverCheckTimeout();
if (nodeIsSlave(myself)) {
@ -3788,12 +3788,12 @@ int clusterNodeSetSlotBit(clusterNode *n, int slot) {
* target for replicas migration, if and only if at least one of
* the other masters has slaves right now.
*
* Normally masters are valid targerts of replica migration if:
* Normally masters are valid targets of replica migration if:
* 1. They used to have slaves (but no longer have).
* 2. They are slaves failing over a master that used to have slaves.
*
* However new masters with slots assigned are considered valid
* migration tagets if the rest of the cluster is not a slave-less.
* migration targets if the rest of the cluster is not slave-less.
*
* See https://github.com/antirez/redis/issues/3043 for more info. */
if (n->numslots == 1 && clusterMastersHaveSlaves())
@ -3977,7 +3977,7 @@ void clusterUpdateState(void) {
* A) If no other node is in charge according to the current cluster
* configuration, we add these slots to our node.
* B) If according to our config other nodes are already in charge for
* this lots, we set the slots as IMPORTING from our point of view
* these slots, we set the slots as IMPORTING from our point of view
* in order to justify we have those slots, and in order to make
* redis-trib aware of the issue, so that it can try to fix it.
* 2) If we find data in a DB different than DB0 we return C_ERR to
@ -4507,7 +4507,7 @@ NULL
}
/* If this slot is in migrating status but we have no keys
* for it, assigning the slot to another node will clear
* the migratig status. */
* the migrating status. */
if (countKeysInSlot(slot) == 0 &&
server.cluster->migrating_slots_to[slot])
server.cluster->migrating_slots_to[slot] = NULL;
@ -4856,7 +4856,7 @@ NULL
server.cluster->currentEpoch = epoch;
/* No need to fsync the config here since in the unlucky event
* of a failure to persist the config, the conflict resolution code
* will assign an unique config to this node. */
* will assign a unique config to this node. */
clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|
CLUSTER_TODO_SAVE_CONFIG);
addReply(c,shared.ok);
@ -4904,7 +4904,7 @@ void createDumpPayload(rio *payload, robj *o, robj *key) {
unsigned char buf[2];
uint64_t crc;
/* Serialize the object in a RDB-like format. It consist of an object type
/* Serialize the object in an RDB-like format. It consists of an object type
* byte followed by the serialized object. This is understood by RESTORE. */
rioInitWithBuffer(payload,sdsempty());
serverAssert(rdbSaveObjectType(payload,o));
@ -5571,7 +5571,7 @@ void readwriteCommand(client *c) {
* resharding in progress).
*
* On success the function returns the node that is able to serve the request.
* If the node is not 'myself' a redirection must be perfomed. The kind of
* If the node is not 'myself' a redirection must be performed. The kind of
* redirection is specified setting the integer passed by reference
* 'error_code', which will be set to CLUSTER_REDIR_ASK or
* CLUSTER_REDIR_MOVED.
@ -5698,7 +5698,7 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in
}
}
/* Migarting / Improrting slot? Count keys we don't have. */
/* Migrating / Importing slot? Count keys we don't have. */
if ((migrating_slot || importing_slot) &&
lookupKeyRead(&server.db[0],thiskey) == NULL)
{
@ -5767,7 +5767,7 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in
}
/* Handle the read-only client case reading from a slave: if this
* node is a slave and the request is about an hash slot our master
* node is a slave and the request is about a hash slot our master
* is serving, we can reply without redirection. */
int is_readonly_command = (c->cmd->flags & CMD_READONLY) ||
(c->cmd->proc == execCommand && !(c->mstate.cmd_inv_flags & CMD_READONLY));
@ -5781,7 +5781,7 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in
}
/* Base case: just return the right node. However if this node is not
* myself, set error_code to MOVED since we need to issue a rediretion. */
* myself, set error_code to MOVED since we need to issue a redirection. */
if (n != myself && error_code) *error_code = CLUSTER_REDIR_MOVED;
return n;
}
@ -5827,7 +5827,7 @@ void clusterRedirectClient(client *c, clusterNode *n, int hashslot, int error_co
* 3) The client may remain blocked forever (or up to the max timeout time)
* waiting for a key change that will never happen.
*
* If the client is found to be blocked into an hash slot this node no
* If the client is found to be blocked into a hash slot this node no
* longer handles, the client is sent a redirection error, and the function
* returns 1. Otherwise 0 is returned and no operation is performed. */
int clusterRedirectBlockedClientIfNeeded(client *c) {


@ -51,8 +51,8 @@ typedef struct clusterLink {
#define CLUSTER_NODE_HANDSHAKE 32 /* We have still to exchange the first ping */
#define CLUSTER_NODE_NOADDR 64 /* We don't know the address of this node */
#define CLUSTER_NODE_MEET 128 /* Send a MEET message to this node */
#define CLUSTER_NODE_MIGRATE_TO 256 /* Master elegible for replica migration. */
#define CLUSTER_NODE_NOFAILOVER 512 /* Slave will not try to failver. */
#define CLUSTER_NODE_MIGRATE_TO 256 /* Master eligible for replica migration. */
#define CLUSTER_NODE_NOFAILOVER 512 /* Slave will not try to failover. */
#define CLUSTER_NODE_NULL_NAME "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"
#define nodeIsMaster(n) ((n)->flags & CLUSTER_NODE_MASTER)
@ -164,10 +164,10 @@ typedef struct clusterState {
clusterNode *mf_slave; /* Slave performing the manual failover. */
/* Manual failover state of slave. */
long long mf_master_offset; /* Master offset the slave needs to start MF
or zero if stil not received. */
or zero if still not received. */
int mf_can_start; /* If non-zero signal that the manual failover
can start requesting masters vote. */
/* The followign fields are used by masters to take state on elections. */
/* The following fields are used by masters to take state on elections. */
uint64_t lastVoteEpoch; /* Epoch of the last vote granted. */
int todo_before_sleep; /* Things to do in clusterBeforeSleep(). */
/* Messages received and sent by type. */


@ -1288,7 +1288,7 @@ void rewriteConfigNumericalOption(struct rewriteConfigState *state, const char *
rewriteConfigRewriteLine(state,option,line,force);
}
/* Rewrite a octal option. */
/* Rewrite an octal option. */
void rewriteConfigOctalOption(struct rewriteConfigState *state, char *option, int value, int defvalue) {
int force = value != defvalue;
sds line = sdscatprintf(sdsempty(),"%s %o",option,value);
@ -2106,7 +2106,7 @@ static int isValidAOFfilename(char *val, char **err) {
static int updateHZ(long long val, long long prev, char **err) {
UNUSED(prev);
UNUSED(err);
/* Hz is more an hint from the user, so we accept values out of range
/* Hz is more a hint from the user, so we accept values out of range
* but cap them to reasonable values. */
server.config_hz = val;
if (server.config_hz < CONFIG_MIN_HZ) server.config_hz = CONFIG_MIN_HZ;
@ -2124,7 +2124,7 @@ static int updateJemallocBgThread(int val, int prev, char **err) {
static int updateReplBacklogSize(long long val, long long prev, char **err) {
/* resizeReplicationBacklog sets server.repl_backlog_size, and relies on
* being able to tell when the size changes, so restore prev becore calling it. */
* being able to tell when the size changes, so restore prev before calling it. */
UNUSED(err);
server.repl_backlog_size = prev;
resizeReplicationBacklog(val);


@ -166,7 +166,7 @@ void setproctitle(const char *fmt, ...);
#endif /* BYTE_ORDER */
/* Sometimes after including an OS-specific header that defines the
* endianess we end with __BYTE_ORDER but not with BYTE_ORDER that is what
* endianness we end with __BYTE_ORDER but not with BYTE_ORDER that is what
* the Redis code uses. In this case let's define everything without the
* underscores. */
#ifndef BYTE_ORDER


@ -106,7 +106,7 @@ static inline int connAccept(connection *conn, ConnectionCallbackFunc accept_han
}
/* Establish a connection. The connect_handler will be called when the connection
* is established, or if an error has occured.
* is established, or if an error has occurred.
*
* The connection handler will be responsible to set up any read/write handlers
* as needed.
@ -168,7 +168,7 @@ static inline int connSetReadHandler(connection *conn, ConnectionCallbackFunc fu
/* Set a write handler, and possibly enable a write barrier; this flag is
* cleared when write handler is changed or removed.
* With barroer enabled, we never fire the event if the read handler already
* With barrier enabled, we never fire the event if the read handler already
* fired in the same event loop iteration. Useful when you want to persist
* things to disk before sending replies, and want to do that in a group fashion. */
static inline int connSetWriteHandlerWithBarrier(connection *conn, ConnectionCallbackFunc func, int barrier) {
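
A hedged usage sketch (the handler name is hypothetical):

    /* Never fire the write handler after the read handler fired in the
     * same iteration, so data can be persisted before replying. */
    connSetWriteHandlerWithBarrier(conn, sendReplyHandler, 1);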


@ -116,7 +116,7 @@ robj *lookupKeyReadWithFlags(redisDb *db, robj *key, int flags) {
* However, if the command caller is not the master, and as additional
* safety measure, the command invoked is a read-only command, we can
* safely return NULL here, and provide a more consistent behavior
* to clients accessign expired values in a read-only fashion, that
* to clients accessing expired values in a read-only fashion, that
* will report the key as non-existing.
*
* Notably this covers GETs when slaves are used to scale reads. */
@ -371,7 +371,7 @@ robj *dbUnshareStringValue(redisDb *db, robj *key, robj *o) {
* firing module events.
* and the function to return ASAP.
*
* On success the fuction returns the number of keys removed from the
* On success the function returns the number of keys removed from the
* database(s). Otherwise -1 is returned in the specific case the
* DB number is out of range, and errno is set to EINVAL. */
long long emptyDbGeneric(redisDb *dbarray, int dbnum, int flags, void(callback)(void*)) {
@ -863,7 +863,7 @@ void scanGenericCommand(client *c, robj *o, unsigned long cursor) {
/* Filter element if it is an expired key. */
if (!filter && o == NULL && expireIfNeeded(c->db, kobj)) filter = 1;
/* Remove the element and its associted value if needed. */
/* Remove the element and its associated value if needed. */
if (filter) {
decrRefCount(kobj);
listDelNode(keys, node);
@ -1359,7 +1359,7 @@ int *getKeysUsingCommandTable(struct redisCommand *cmd,robj **argv, int argc, in
/* Return all the arguments that are keys in the command passed via argc / argv.
*
* The command returns the positions of all the key arguments inside the array,
* so the actual return value is an heap allocated array of integers. The
* so the actual return value is a heap allocated array of integers. The
* length of the array is returned by reference into *numkeys.
*
* 'cmd' must point to the corresponding entry in the redisCommand


@ -398,7 +398,7 @@ void debugCommand(client *c) {
"OOM -- Crash the server simulating an out-of-memory error.",
"PANIC -- Crash the server simulating a panic.",
"POPULATE <count> [prefix] [size] -- Create <count> string keys named key:<num>. If a prefix is specified is used instead of the 'key' prefix.",
"RELOAD [MERGE] [NOFLUSH] [NOSAVE] -- Save the RDB on disk and reload it back in memory. By default it will save the RDB file and load it back. With the NOFLUSH option the current database is not removed before loading the new one, but conficts in keys will kill the server with an exception. When MERGE is used, conflicting keys will be loaded (the key in the loaded RDB file will win). When NOSAVE is used, the server will not save the current dataset in the RDB file before loading. Use DEBUG RELOAD NOSAVE when you want just to load the RDB file you placed in the Redis working directory in order to replace the current dataset in memory. Use DEBUG RELOAD NOSAVE NOFLUSH MERGE when you want to add what is in the current RDB file placed in the Redis current directory, with the current memory content. Use DEBUG RELOAD when you want to verify Redis is able to persist the current dataset in the RDB file, flush the memory content, and load it back.",
"RELOAD [MERGE] [NOFLUSH] [NOSAVE] -- Save the RDB on disk and reload it back in memory. By default it will save the RDB file and load it back. With the NOFLUSH option the current database is not removed before loading the new one, but conflicts in keys will kill the server with an exception. When MERGE is used, conflicting keys will be loaded (the key in the loaded RDB file will win). When NOSAVE is used, the server will not save the current dataset in the RDB file before loading. Use DEBUG RELOAD NOSAVE when you want just to load the RDB file you placed in the Redis working directory in order to replace the current dataset in memory. Use DEBUG RELOAD NOSAVE NOFLUSH MERGE when you want to add what is in the current RDB file placed in the Redis current directory, with the current memory content. Use DEBUG RELOAD when you want to verify Redis is able to persist the current dataset in the RDB file, flush the memory content, and load it back.",
"RESTART -- Graceful restart: save config, db, restart.",
"SDSLEN <key> -- Show low level SDS string info representing key and value.",
"SEGFAULT -- Crash the server with sigsegv.",
@ -467,7 +467,7 @@ NULL
}
}
/* The default beahvior is to save the RDB file before loading
/* The default behavior is to save the RDB file before loading
* it back. */
if (save) {
rdbSaveInfo rsi, *rsiptr;
@ -1502,7 +1502,7 @@ void logCurrentClient(void) {
#define MEMTEST_MAX_REGIONS 128
/* A non destructive memory test executed during segfauls. */
/* A non destructive memory test executed during segfaults. */
int memtest_test_linux_anonymous_maps(void) {
FILE *fp;
char line[1024];


@ -47,11 +47,11 @@ int je_get_defrag_hint(void* ptr);
/* forward declarations*/
void defragDictBucketCallback(void *privdata, dictEntry **bucketref);
dictEntry* replaceSateliteDictKeyPtrAndOrDefragDictEntry(dict *d, sds oldkey, sds newkey, uint64_t hash, long *defragged);
dictEntry* replaceSatelliteDictKeyPtrAndOrDefragDictEntry(dict *d, sds oldkey, sds newkey, uint64_t hash, long *defragged);
/* Defrag helper for generic allocations.
*
* returns NULL in case the allocatoin wasn't moved.
* returns NULL in case the allocation wasn't moved.
* when it returns a non-null value, the old pointer was already released
* and should NOT be accessed. */
void* activeDefragAlloc(void *ptr) {
@ -74,7 +74,7 @@ void* activeDefragAlloc(void *ptr) {
/* Defrag helper for sds strings
*
* returns NULL in case the allocatoin wasn't moved.
* returns NULL in case the allocation wasn't moved.
* when it returns a non-null value, the old pointer was already released
* and should NOT be accessed. */
sds activeDefragSds(sds sdsptr) {
@ -90,7 +90,7 @@ sds activeDefragSds(sds sdsptr) {
/* Defrag helper for robj and/or string objects
*
* returns NULL in case the allocatoin wasn't moved.
* returns NULL in case the allocation wasn't moved.
* when it returns a non-null value, the old pointer was already released
* and should NOT be accessed. */
robj *activeDefragStringOb(robj* ob, long *defragged) {
@ -130,11 +130,11 @@ robj *activeDefragStringOb(robj* ob, long *defragged) {
}
/* Defrag helper for dictEntries to be used during dict iteration (called on
* each step). Teturns a stat of how many pointers were moved. */
* each step). Returns a stat of how many pointers were moved. */
long dictIterDefragEntry(dictIterator *iter) {
/* This function is a little bit dirty since it messes with the internals
* of the dict and its iterator, but the benefit is that it is very easy
* to use, and require no other chagnes in the dict.
* to use, and requires no other changes in the dict.
long defragged = 0;
dictht *ht;
/* Handle the next entry (if there is one), and update the pointer in the
@ -238,7 +238,7 @@ double *zslDefrag(zskiplist *zsl, double score, sds oldele, sds newele) {
return NULL;
}
/* Defrag helpler for sorted set.
/* Defrag helper for sorted set.
* Defrag a single dict entry key name, and corresponding skiplist struct */
long activeDefragZsetEntry(zset *zs, dictEntry *de) {
sds newsds;
@ -349,7 +349,7 @@ long activeDefragSdsListAndDict(list *l, dict *d, int dict_val_type) {
if ((newsds = activeDefragSds(sdsele))) {
/* When defragging an sds value, we need to update the dict key */
uint64_t hash = dictGetHash(d, newsds);
replaceSateliteDictKeyPtrAndOrDefragDictEntry(d, sdsele, newsds, hash, &defragged);
replaceSatelliteDictKeyPtrAndOrDefragDictEntry(d, sdsele, newsds, hash, &defragged);
ln->value = newsds;
defragged++;
}
@ -385,7 +385,7 @@ long activeDefragSdsListAndDict(list *l, dict *d, int dict_val_type) {
* moved. Return value is the dictEntry if found, or NULL if not found.
* NOTE: this is very ugly code, but it lets us avoid the complication of
* doing a scan on another dict. */
dictEntry* replaceSateliteDictKeyPtrAndOrDefragDictEntry(dict *d, sds oldkey, sds newkey, uint64_t hash, long *defragged) {
dictEntry* replaceSatelliteDictKeyPtrAndOrDefragDictEntry(dict *d, sds oldkey, sds newkey, uint64_t hash, long *defragged) {
dictEntry **deref = dictFindEntryRefByPtrAndHash(d, oldkey, hash);
if (deref) {
dictEntry *de = *deref;
@ -433,7 +433,7 @@ long activeDefragQuickListNodes(quicklist *ql) {
}
/* when the value has lots of elements, we want to handle it later and not as
* oart of the main dictionary scan. this is needed in order to prevent latency
* part of the main dictionary scan. this is needed in order to prevent latency
* spikes when handling large items */
void defragLater(redisDb *db, dictEntry *kde) {
sds key = sdsdup(dictGetKey(kde));
@ -814,7 +814,7 @@ long defragKey(redisDb *db, dictEntry *de) {
* I can't search in db->expires for that key after I already released
* the pointer it holds, since it won't be able to do the string compare */
uint64_t hash = dictGetHash(db->dict, de->key);
replaceSateliteDictKeyPtrAndOrDefragDictEntry(db->expires, keysds, newsds, hash, &defragged);
replaceSatelliteDictKeyPtrAndOrDefragDictEntry(db->expires, keysds, newsds, hash, &defragged);
}
/* Try to defrag robj and / or string value. */
@ -885,7 +885,7 @@ void defragScanCallback(void *privdata, const dictEntry *de) {
server.stat_active_defrag_scanned++;
}
/* Defrag scan callback for each hash table bicket,
/* Defrag scan callback for each hash table bucket,
* used in order to defrag the dictEntry allocations. */
void defragDictBucketCallback(void *privdata, dictEntry **bucketref) {
UNUSED(privdata); /* NOTE: this function is also used by both activeDefragCycle and scanLaterHash, etc. don't use privdata */
@ -919,7 +919,7 @@ float getAllocatorFragmentation(size_t *out_frag_bytes) {
return frag_pct;
}
/* We may need to defrag other globals, one small allcation can hold a full allocator run.
/* We may need to defrag other globals, one small allocation can hold a full allocator run.
* so although small, it is still important to defrag these */
long defragOtherGlobals() {
long defragged = 0;
@ -1090,7 +1090,7 @@ void activeDefragCycle(void) {
if (hasActiveChildProcess())
return; /* Defragging memory while there's a fork will just do damage. */
/* Once a second, check if we the fragmentation justfies starting a scan
/* Once a second, check if the fragmentation justifies starting a scan
* or making it more aggressive. */
run_with_period(1000) {
computeDefragCycles();
@ -1160,7 +1160,7 @@ void activeDefragCycle(void) {
* (if we have a lot of pointers in one hash bucket or rehashing),
* check if we reached the time limit.
* But regardless, don't start a new db in this loop, this is because after
* the last db we call defragOtherGlobals, which must be done in once cycle */
* the last db we call defragOtherGlobals, which must be done in one cycle */
if (!cursor || (++iterations > 16 ||
server.stat_active_defrag_hits - prev_defragged > 512 ||
server.stat_active_defrag_scanned - prev_scanned > 64)) {


@ -237,7 +237,9 @@ long long timeInMilliseconds(void) {
return (((long long)tv.tv_sec)*1000)+(tv.tv_usec/1000);
}
/* Rehash for an amount of time between ms milliseconds and ms+1 milliseconds */
/* Rehash in ms+"delta" milliseconds. The value of "delta" is larger
* than 0, and is smaller than 1 in most cases. The exact upper bound
* depends on the running time of dictRehash(d,100).*/
int dictRehashMilliseconds(dict *d, int ms) {
if (d->iterators > 0) return 0;
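/* (Illustrative call site, not from the patch: cron-style code can
 * bound incremental rehashing with dictRehashMilliseconds(d,1),
 * spending roughly one millisecond per call.) */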
@ -749,7 +751,7 @@ unsigned int dictGetSomeKeys(dict *d, dictEntry **des, unsigned int count) {
* this function instead what we do is to consider a "linear" range of the table
* that may be constituted of N buckets with chains of different lengths
* appearing one after the other. Then we report a random element in the range.
* In this way we smooth away the problem of different chain lenghts. */
* In this way we smooth away the problem of different chain lengths. */
#define GETFAIR_NUM_ENTRIES 15
dictEntry *dictGetFairRandomKey(dict *d) {
dictEntry *entries[GETFAIR_NUM_ENTRIES];
@ -1119,7 +1121,7 @@ size_t _dictGetStatsHt(char *buf, size_t bufsize, dictht *ht, int tableid) {
i, clvector[i], ((float)clvector[i]/ht->size)*100);
}
/* Unlike snprintf(), teturn the number of characters actually written. */
/* Unlike snprintf(), return the number of characters actually written. */
if (bufsize) buf[bufsize-1] = '\0';
return strlen(buf);
}


@ -8,7 +8,7 @@
* to be backward compatible are still in big endian) because most of the
* production environments are little endian, and we have a lot of conversions
* in a few places because ziplists, intsets, zipmaps, need to be endian-neutral
* even in memory, since they are serialied on RDB files directly with a single
* even in memory, since they are serialized on RDB files directly with a single
* write(2) without other additional steps.
*
* ----------------------------------------------------------------------------


@ -41,7 +41,7 @@
/* To improve the quality of the LRU approximation we take a set of keys
* that are good candidates for eviction across freeMemoryIfNeeded() calls.
*
* Entries inside the eviciton pool are taken ordered by idle time, putting
* Entries inside the eviction pool are taken ordered by idle time, putting
* greater idle times to the right (ascending order).
*
* When an LFU policy is used instead, a reverse frequency indication is used
@ -242,7 +242,7 @@ void evictionPoolPopulate(int dbid, dict *sampledict, dict *keydict, struct evic
/* Try to reuse the cached SDS string allocated in the pool entry,
* because allocating and deallocating this object is costly
* (according to the profiler, not my fantasy. Remember:
* premature optimizbla bla bla bla. */
* premature optimization bla bla bla. */
int klen = sdslen(key);
if (klen > EVPOOL_CACHED_SDS_SIZE) {
pool[k].key = sdsdup(key);
@ -342,7 +342,7 @@ unsigned long LFUDecrAndReturn(robj *o) {
}
/* ----------------------------------------------------------------------------
* The external API for eviction: freeMemroyIfNeeded() is called by the
* The external API for eviction: freeMemoryIfNeeded() is called by the
* server when there is data to add in order to make space if needed.
* --------------------------------------------------------------------------*/
@ -441,7 +441,7 @@ int getMaxmemoryState(size_t *total, size_t *logical, size_t *tofree, float *lev
*
* The function returns C_OK if we are under the memory limit or if we
* were over the limit, but the attempt to free memory was successful.
* Otehrwise if we are over the memory limit, but not enough memory
* Otherwise if we are over the memory limit, but not enough memory
* was freed to return back under the limit, the function returns C_ERR. */
int freeMemoryIfNeeded(void) {
int keys_freed = 0;


@ -97,7 +97,7 @@ int activeExpireCycleTryExpire(redisDb *db, dictEntry *de, long long now) {
* conditions:
*
* If type is ACTIVE_EXPIRE_CYCLE_FAST the function will try to run a
* "fast" expire cycle that takes no longer than EXPIRE_FAST_CYCLE_DURATION
* "fast" expire cycle that takes no longer than ACTIVE_EXPIRE_CYCLE_FAST_DURATION
* microseconds, and is not repeated again before the same amount of time.
* The cycle will also refuse to run at all if the latest slow cycle did not
* terminate because of a time limit condition.
@ -414,7 +414,7 @@ void expireSlaveKeys(void) {
else
dictDelete(slaveKeysWithExpire,keyname);
/* Stop conditions: found 3 keys we cna't expire in a row or
/* Stop conditions: found 3 keys we can't expire in a row or
* time limit was reached. */
cycles++;
if (noexpire > 3) break;
@ -466,7 +466,7 @@ size_t getSlaveKeyWithExpireCount(void) {
*
* Note: technically we should handle the case of a single DB being flushed
* but it is not worth it since anyway race conditions using the same set
* of key names in a wriatable slave and in its master will lead to
* of key names in a writable slave and in its master will lead to
* inconsistencies. This is just a best-effort thing we do. */
void flushSlaveKeysWithExpireList(void) {
if (slaveKeysWithExpire) {
@ -490,7 +490,7 @@ int checkAlreadyExpired(long long when) {
*----------------------------------------------------------------------------*/
/* This is the generic command implementation for EXPIRE, PEXPIRE, EXPIREAT
* and PEXPIREAT. Because the commad second argument may be relative or absolute
* and PEXPIREAT. Because the command's second argument may be relative or absolute
* the "basetime" argument is used to signal what the base time is (either 0
* for *AT variants of the command, or the current time for relative expires).
*


@ -143,8 +143,8 @@ double extractUnitOrReply(client *c, robj *unit) {
}
/* Input Argument Helper.
* Extract the dinstance from the specified two arguments starting at 'argv'
* that shouldbe in the form: <number> <unit> and return the dinstance in the
* Extract the distance from the specified two arguments starting at 'argv'
* that should be in the form: <number> <unit>, and return the distance in the
* specified unit on success. *conversions is populated with the coefficient
* to use in order to convert meters to the unit.
*
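* For example (an illustration, not from the patch): the argument pair
* "5 km" describes the same distance as 5000 meters, the meters-per-
* kilometer coefficient being 1000.
*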
@ -788,7 +788,7 @@ void geoposCommand(client *c) {
/* GEODIST key ele1 ele2 [unit]
*
* Return the distance, in meters by default, otherwise accordig to "unit",
* Return the distance, in meters by default, otherwise according to "unit",
* between points ele1 and ele2. If one or more elements are missing NULL
* is returned. */
void geodistCommand(client *c) {


@ -68,7 +68,7 @@ uint8_t geohashEstimateStepsByRadius(double range_meters, double lat) {
}
step -= 2; /* Make sure range is included in most of the base cases. */
/* Wider range torwards the poles... Note: it is possible to do better
/* Wider range towards the poles... Note: it is possible to do better
* than this approximation by computing the distance between meridians
* at this latitude, but this does the trick for now. */
if (lat > 66 || lat < -66) {
@ -84,7 +84,7 @@ uint8_t geohashEstimateStepsByRadius(double range_meters, double lat) {
/* Return the bounding box of the search area centered at latitude,longitude
* having a radius of radius_meter. bounds[0] - bounds[2] is the minimum
* and maxium longitude, while bounds[1] - bounds[3] is the minimum and
* and maximum longitude, while bounds[1] - bounds[3] is the minimum and
* maximum latitude.
*
* This function does not behave correctly with very large radius values, for


@ -36,9 +36,9 @@
/* The Redis HyperLogLog implementation is based on the following ideas:
*
* * The use of a 64 bit hash function as proposed in [1], in order to don't
* limited to cardinalities up to 10^9, at the cost of just 1 additional
* bit per register.
* * The use of a 64 bit hash function as proposed in [1], in order to estimate
* cardinalities larger than 10^9, at the cost of just 1 additional bit per
* register.
* * The use of 16384 6-bit registers for a great level of accuracy, using
* a total of 12k per key.
* * The use of the Redis string data type. No new type is introduced.
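*
* (As a quick check of the 12k figure above: 16384 registers * 6 bits
* = 98304 bits = 12288 bytes per key.)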
@ -279,7 +279,7 @@ static char *invalid_hll_err = "-INVALIDOBJ Corrupted HLL object detected\r\n";
* So we right shift of 0 bits (no shift in practice) and
* left shift the next byte of 8 bits, even if we don't use it,
* but this has the effect of clearing the bits so the result
* will not be affacted after the OR.
* will not be affected after the OR.
*
* -------------------------------------------------------------------------
*
@ -297,7 +297,7 @@ static char *invalid_hll_err = "-INVALIDOBJ Corrupted HLL object detected\r\n";
* |11000000| <- Our byte at b0
* +--------+
*
* To create a AND-mask to clear the bits about this position, we just
* To create an AND-mask to clear the bits about this position, we just
* initialize the mask with the value 63, left shift it of "fs" bits,
* and finally invert the result.
*
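* As a C sketch of those steps (reusing the comment's "fs" offset and
* byte "b0"; not the literal hyperloglog.c code):
*
*   mask = ~(63 << fs);     (six 1 bits, shifted into place, inverted)
*   b0 &= mask;             (clears the register's bits before the OR)
*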
@ -766,7 +766,7 @@ int hllSparseSet(robj *o, long index, uint8_t count) {
* by a ZERO opcode with len > 1, or by an XZERO opcode.
*
* In those cases the original opcode must be split into multiple
* opcodes. The worst case is an XZERO split in the middle resuling into
* opcodes. The worst case is an XZERO split in the middle resulting into
* XZERO - VAL - XZERO, so the resulting sequence max length is
* 5 bytes.
*
@ -899,7 +899,7 @@ promote: /* Promote to dense representation. */
* the element belongs to is incremented if needed.
*
* This function is actually a wrapper for hllSparseSet(), it only performs
* the hashshing of the elmenet to obtain the index and zeros run length. */
* the hashing of the element to obtain the index and zeros run length. */
int hllSparseAdd(robj *o, unsigned char *ele, size_t elesize) {
long index;
uint8_t count = hllPatLen(ele,elesize,&index);
@ -1014,7 +1014,7 @@ uint64_t hllCount(struct hllhdr *hdr, int *invalid) {
double m = HLL_REGISTERS;
double E;
int j;
/* Note that reghisto size could be just HLL_Q+2, becuase HLL_Q+1 is
/* Note that reghisto size could be just HLL_Q+2, because HLL_Q+1 is
* the maximum frequency of the "000...1" sequence the hash function is
* able to return. However it is slow to check for sanity of the
* input: instead we keep the history array at a safe size: overflows will


@ -85,7 +85,7 @@ int THPGetAnonHugePagesSize(void) {
/* ---------------------------- Latency API --------------------------------- */
/* Latency monitor initialization. We just need to create the dictionary
* of time series, each time serie is created on demand in order to avoid
* of time series, each time series is created on demand in order to avoid
* having a fixed list to maintain. */
void latencyMonitorInit(void) {
server.latency_events = dictCreate(&latencyTimeSeriesDictType,NULL);
@ -154,7 +154,7 @@ int latencyResetEvent(char *event_to_reset) {
/* Analyze the samples available for a given event and return a structure
* populated with different metrics: average, MAD, min, max, and so forth.
* Check latency.h definition of struct latenctStat for more info.
* Check latency.h definition of struct latencyStats for more info.
* If the specified event has no elements the structure is populated with
* zero values. */
void analyzeLatencyForEvent(char *event, struct latencyStats *ls) {
@ -343,7 +343,7 @@ sds createLatencyReport(void) {
}
if (!strcasecmp(event,"aof-fstat") ||
!strcasecmp(event,"rdb-unlik-temp-file")) {
!strcasecmp(event,"rdb-unlink-temp-file")) {
advise_disk_contention = 1;
advise_local_disk = 1;
advices += 2;
@ -396,7 +396,7 @@ sds createLatencyReport(void) {
/* Better VM. */
report = sdscat(report,"\nI have a few advices for you:\n\n");
if (advise_better_vm) {
report = sdscat(report,"- If you are using a virtual machine, consider upgrading it with a faster one using an hypervisior that provides less latency during fork() calls. Xen is known to have poor fork() performance. Even in the context of the same VM provider, certain kinds of instances can execute fork faster than others.\n");
report = sdscat(report,"- If you are using a virtual machine, consider upgrading it with a faster one using a hypervisor that provides less latency during fork() calls. Xen is known to have poor fork() performance. Even in the context of the same VM provider, certain kinds of instances can execute fork faster than others.\n");
}
/* Slow log. */
@ -416,7 +416,7 @@ sds createLatencyReport(void) {
if (advise_scheduler) {
report = sdscat(report,"- The system is slow to execute Redis code paths not containing system calls. This usually means the system does not provide Redis CPU time to run for long periods. You should try to:\n"
" 1) Lower the system load.\n"
" 2) Use a computer / VM just for Redis if you are running other softawre in the same system.\n"
" 2) Use a computer / VM just for Redis if you are running other software in the same system.\n"
" 3) Check if you have a \"noisy neighbour\" problem.\n"
" 4) Check with 'redis-cli --intrinsic-latency 100' what is the intrinsic latency in your system.\n"
" 5) Check if the problem is allocator-related by recompiling Redis with MALLOC=libc, if you are using Jemalloc. However this may create fragmentation problems.\n");
@ -432,7 +432,7 @@ sds createLatencyReport(void) {
}
if (advise_data_writeback) {
report = sdscat(report,"- Mounting ext3/4 filesystems with data=writeback can provide a performance boost compared to data=ordered, however this mode of operation provides less guarantees, and sometimes it can happen that after a hard crash the AOF file will have an half-written command at the end and will require to be repaired before Redis restarts.\n");
report = sdscat(report,"- Mounting ext3/4 filesystems with data=writeback can provide a performance boost compared to data=ordered, however this mode of operation provides less guarantees, and sometimes it can happen that after a hard crash the AOF file will have a half-written command at the end and will require to be repaired before Redis restarts.\n");
}
if (advise_disk_contention) {


@ -15,7 +15,7 @@ size_t lazyfreeGetPendingObjectsCount(void) {
/* Return the amount of work needed in order to free an object.
* The return value is not always the actual number of allocations the
* object is compoesd of, but a number proportional to it.
* object is composed of, but a number proportional to it.
*
* For strings the function always returns 1.
*
@ -137,7 +137,7 @@ void emptyDbAsync(redisDb *db) {
}
/* Empty the slots-keys map of Redis Cluster by creating a new empty one
* and scheduiling the old for lazy freeing. */
* and scheduling the old for lazy freeing. */
void slotToKeyFlushAsync(void) {
rax *old = server.cluster->slots_to_keys;
@ -156,7 +156,7 @@ void lazyfreeFreeObjectFromBioThread(robj *o) {
}
/* Release a database from the lazyfree thread. The 'db' pointer is the
* database which was substitutied with a fresh one in the main thread
* database which was substituted with a fresh one in the main thread
* when the database was logically deleted. 'sl' is a skiplist used by
* Redis Cluster in order to take the hash slots -> keys mapping. This
* may be NULL if Redis Cluster is disabled. */


@ -405,7 +405,7 @@ unsigned char *lpNext(unsigned char *lp, unsigned char *p) {
}
/* If 'p' points to an element of the listpack, calling lpPrev() will return
* the pointer to the preivous element (the one on the left), or NULL if 'p'
* the pointer to the previous element (the one on the left), or NULL if 'p'
* already pointed to the first element of the listpack. */
unsigned char *lpPrev(unsigned char *lp, unsigned char *p) {
if (p-lp == LP_HDR_SIZE) return NULL;


@ -85,7 +85,7 @@ void lolwutCommand(client *c) {
}
/* =========================== LOLWUT Canvas ===============================
* Many LOWUT versions will likely print some computer art to the screen.
* Many LOLWUT versions will likely print some computer art to the screen.
* This is the case with LOLWUT 5 and LOLWUT 6, so here there is a generic
* canvas implementation that can be reused. */
@ -106,7 +106,7 @@ void lwFreeCanvas(lwCanvas *canvas) {
}
/* Set a pixel to the specified color. Color is 0 or 1, where zero means no
* dot will be displyed, and 1 means dot will be displayed.
* dot will be displayed, and 1 means dot will be displayed.
* Coordinates are arranged so that left-top corner is 0,0. You can write
* out of the size of the canvas without issues. */
void lwDrawPixel(lwCanvas *canvas, int x, int y, int color) {


@ -156,7 +156,7 @@ void lolwut5Command(client *c) {
return;
/* Limits. We want LOLWUT to be always reasonably fast and cheap to execute
* so we have maximum number of columns, rows, and output resulution. */
* so we have maximum number of columns, rows, and output resolution. */
if (cols < 1) cols = 1;
if (cols > 1000) cols = 1000;
if (squares_per_row < 1) squares_per_row = 1;


@ -127,7 +127,7 @@
/*
* Whether to store pointers or offsets inside the hash table. On
* 64 bit architetcures, pointers take up twice as much space,
* 64 bit architectures, pointers take up twice as much space,
* and might also be slower. Default is to autodetect.
*/
/*#define LZF_USER_OFFSETS autodetect */


@ -46,7 +46,7 @@ typedef struct RedisModuleInfoCtx {
sds info; /* info string we collected so far */
int sections; /* number of sections we collected so far */
int in_section; /* indication if we're in an active section or not */
int in_dict_field; /* indication that we're curreintly appending to a dict */
int in_dict_field; /* indication that we're currently appending to a dict */
} RedisModuleInfoCtx;
typedef void (*RedisModuleInfoFunc)(RedisModuleInfoCtx *ctx, int for_crash_report);
@ -906,10 +906,21 @@ int RM_SignalModifiedKey(RedisModuleCtx *ctx, RedisModuleString *keyname) {
* Automatic memory management for modules
* -------------------------------------------------------------------------- */
/* Enable automatic memory management. See API.md for more information.
/* Enable automatic memory management.
*
* The function must be called as the first function of a command implementation
* that wants to use automatic memory. */
* that wants to use automatic memory.
*
* When enabled, automatic memory management tracks and automatically frees
* keys, call replies and Redis string objects once the command returns. In most
* cases this eliminates the need of calling the following functions:
*
* 1) RedisModule_CloseKey()
* 2) RedisModule_FreeCallReply()
* 3) RedisModule_FreeString()
*
* These functions can still be used with automatic memory management enabled,
* to optimize loops that make numerous allocations for example. */
void RM_AutoMemory(RedisModuleCtx *ctx) {
ctx->flags |= REDISMODULE_CTX_AUTO_MEMORY;
}
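
To make the expanded comment concrete, a minimal sketch of a command that leans on automatic memory management (the command name and logic are hypothetical):

    #include "redismodule.h"

    /* GETCOPY <key>: replies with the value of <key>. Nothing is freed
     * explicitly: the call reply is released when the command returns. */
    int GetCopy_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (argc != 2) return RedisModule_WrongArity(ctx);
        RedisModule_AutoMemory(ctx); /* First call of the implementation. */
        RedisModuleCallReply *reply = RedisModule_Call(ctx, "GET", "s", argv[1]);
        if (reply == NULL)
            return RedisModule_ReplyWithError(ctx, "ERR call failed");
        /* No RedisModule_FreeCallReply() needed here. */
        return RedisModule_ReplyWithCallReply(ctx, reply);
    }
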
@ -1045,7 +1056,7 @@ RedisModuleString *RM_CreateStringFromLongLong(RedisModuleCtx *ctx, long long ll
}
/* Like RedisModule_CreatString(), but creates a string starting from a double
* integer instead of taking a buffer and its length.
* instead of taking a buffer and its length.
*
* The returned string must be released with RedisModule_FreeString() or by
* enabling automatic memory management. */
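
Assuming the function documented here is RedisModule_CreateStringFromDouble(), a one-line sketch:

    /* Builds the string representation of 3.14; released by
     * RedisModule_FreeString() or by automatic memory management. */
    RedisModuleString *s = RedisModule_CreateStringFromDouble(ctx, 3.14);
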
@ -1922,7 +1933,7 @@ int RM_GetContextFlags(RedisModuleCtx *ctx) {
flags |= REDISMODULE_CTX_FLAGS_LUA;
if (ctx->client->flags & CLIENT_MULTI)
flags |= REDISMODULE_CTX_FLAGS_MULTI;
/* Module command recieved from MASTER, is replicated. */
/* Module command received from MASTER, is replicated. */
if (ctx->client->flags & CLIENT_MASTER)
flags |= REDISMODULE_CTX_FLAGS_REPLICATED;
}
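
A short sketch of how a module typically consumes these context flags:

    int flags = RedisModule_GetContextFlags(ctx);
    if (flags & REDISMODULE_CTX_FLAGS_MULTI) {
        /* Queued inside MULTI/EXEC: replies are not delivered immediately. */
    }
    if (flags & REDISMODULE_CTX_FLAGS_REPLICATED) {
        /* The command reached us over the replication link from the master. */
    }
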
@ -2921,7 +2932,7 @@ int RM_HashSet(RedisModuleKey *key, int flags, ...) {
/* Get fields from an hash value. This function is called using a variable
* number of arguments, alternating a field name (as a StringRedisModule
* pointer) with a pointer to a StringRedisModule pointer, that is set to the
* value of the field if the field exist, or NULL if the field did not exist.
* value of the field if the field exists, or NULL if the field does not exist.
* At the end of the field/value-ptr pairs, NULL must be specified as last
* argument to signal the end of the arguments in the variadic function.
*
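
The variadic convention reads more clearly in code; a sketch where f1 and f2 are hypothetical field names already held as RedisModuleString pointers:

    RedisModuleString *v1 = NULL, *v2 = NULL;
    /* Each value pointer is set to the field's value, or NULL if missing.
     * The trailing NULL terminates the field/value-ptr pairs. */
    if (RedisModule_HashGet(key, REDISMODULE_HASH_NONE,
                            f1, &v1, f2, &v2, NULL) == REDISMODULE_OK)
    {
        if (v1 != NULL) { /* field 'f1' existed */ }
    }
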
@ -3040,7 +3051,7 @@ void moduleParseCallReply_SimpleString(RedisModuleCallReply *reply);
void moduleParseCallReply_Array(RedisModuleCallReply *reply);
/* Do nothing if REDISMODULE_REPLYFLAG_TOPARSE is false, otherwise
* use the protcol of the reply in reply->proto in order to fill the
* use the protocol of the reply in reply->proto in order to fill the
* reply with parsed data according to the reply type. */
void moduleParseCallReply(RedisModuleCallReply *reply) {
if (!(reply->flags & REDISMODULE_REPLYFLAG_TOPARSE)) return;
@ -3599,7 +3610,7 @@ void moduleTypeNameByID(char *name, uint64_t moduleid) {
/* Register a new data type exported by the module. The parameters are the
* following. Please for in depth documentation check the modules API
* documentation, especially the TYPES.md file.
* documentation, especially https://redis.io/topics/modules-native-types.
*
* * **name**: A 9 characters data type name that MUST be unique in the Redis
* Modules ecosystem. Be creative... and there will be no collisions. Use
@ -3646,7 +3657,7 @@ void moduleTypeNameByID(char *name, uint64_t moduleid) {
* * **aux_load**: A callback function pointer that loads out of keyspace data from RDB files.
* Similar to aux_save, returns REDISMODULE_OK on success, and ERR otherwise.
*
* The **digest* and **mem_usage** methods should currently be omitted since
* The **digest** and **mem_usage** methods should currently be omitted since
* they are not yet implemented inside the Redis modules core.
*
* Note: the module name "AAAAAAAAA" is reserved and produces an error, it
@ -3656,7 +3667,7 @@ void moduleTypeNameByID(char *name, uint64_t moduleid) {
* and if the module name or encver is invalid, NULL is returned.
* Otherwise the new type is registered into Redis, and a reference of
* type RedisModuleType is returned: the caller of the function should store
* this reference into a gobal variable to make future use of it in the
* this reference into a global variable to make future use of it in the
* modules type API, since a single module may register multiple types.
* Example code fragment:
*
@ -3738,7 +3749,7 @@ moduleType *RM_ModuleTypeGetType(RedisModuleKey *key) {
/* Assuming RedisModule_KeyType() returned REDISMODULE_KEYTYPE_MODULE on
* the key, returns the module type low-level value stored at key, as
* it was set by the user via RedisModule_ModuleTypeSet().
* it was set by the user via RedisModule_ModuleTypeSetValue().
*
* If the key is NULL, is not associated with a module type, or is empty,
* then NULL is returned instead. */
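
The usual guard before dereferencing the value, sketched with a hypothetical registered type handle MyType:

    if (RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_MODULE &&
        RedisModule_ModuleTypeGetType(key) == MyType)
    {
        struct MyTypeObject *obj = RedisModule_ModuleTypeGetValue(key);
        /* ... safe to operate on 'obj' here ... */
    }
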
@ -3795,7 +3806,7 @@ int moduleAllDatatypesHandleErrors() {
/* Returns true if any previous IO API failed.
* for Load* APIs the REDISMODULE_OPTIONS_HANDLE_IO_ERRORS flag must be set with
* RediModule_SetModuleOptions first. */
* RedisModule_SetModuleOptions first. */
int RM_IsIOError(RedisModuleIO *io) {
return io->error;
}
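
A sketch of the opt-in during module load plus the check inside a type's rdb_load callback:

    /* In RedisModule_OnLoad(): opt in to graceful IO error handling. */
    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_IO_ERRORS);

    /* In the rdb_load callback: */
    uint64_t count = RedisModule_LoadUnsigned(io);
    if (RedisModule_IsIOError(io)) return NULL; /* Truncated/corrupt payload. */
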
@ -3928,7 +3939,7 @@ RedisModuleString *RM_LoadString(RedisModuleIO *io) {
*
* The size of the string is stored at '*lenptr' if not NULL.
* The returned string is not automatically NULL terminated, it is loaded
* exactly as it was stored inisde the RDB file. */
* exactly as it was stored inside the RDB file. */
char *RM_LoadStringBuffer(RedisModuleIO *io, size_t *lenptr) {
return moduleLoadString(io,1,lenptr);
}
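
Because the returned buffer is not NULL terminated, its length must travel with it; a minimal sketch (the consumer function is hypothetical):

    size_t len;
    char *buf = RedisModule_LoadStringBuffer(io, &len);
    /* 'buf' holds exactly 'len' bytes as stored in the RDB: never assume
     * a trailing '\0'. It is heap allocated, so free it when done. */
    do_something_with(buf, len);    /* hypothetical consumer */
    RedisModule_Free(buf);
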
@ -4517,14 +4528,14 @@ int moduleTryServeClientBlockedOnKey(client *c, robj *key) {
*
* The callbacks are called in the following contexts:
*
* reply_callback: called after a successful RedisModule_UnblockClient()
* call in order to reply to the client and unblock it.
*
* reply_timeout: called when the timeout is reached in order to send an
* error to the client.
* timeout_callback: called when the timeout is reached in order to send an
* error to the client.
*
* free_privdata: called in order to free the private data that is passed
* by RedisModule_UnblockClient() call.
*
* Note: RedisModule_UnblockClient should be called for every blocked client,
* even if client was killed, timed-out or disconnected. Failing to do so
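
Wiring the three callbacks together, a minimal sketch where MyReply, MyTimeout, MyFree and spawn_worker are hypothetical module-defined pieces:

    int Block_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        /* Block for at most 1000 milliseconds. */
        RedisModuleBlockedClient *bc =
            RedisModule_BlockClient(ctx, MyReply, MyTimeout, MyFree, 1000);
        /* Hand 'bc' to a worker: it must eventually call
         * RedisModule_UnblockClient(bc, privdata) exactly once, even if
         * the client was killed, timed out or disconnected. */
        spawn_worker(bc);   /* hypothetical */
        return REDISMODULE_OK;
    }
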
@ -4547,13 +4558,13 @@ RedisModuleBlockedClient *RM_BlockClient(RedisModuleCtx *ctx, RedisModuleCmdFunc
* once certain keys become "ready", that is, contain more data.
*
* Basically this is similar to what a typical Redis command usually does,
* like BLPOP or ZPOPMAX: the client blocks if it cannot be served ASAP,
* like BLPOP or BZPOPMAX: the client blocks if it cannot be served ASAP,
* and later when the key receives new data (a list push for instance), the
* client is unblocked and served.
*
* However in the case of this module API, when the client is unblocked?
*
* 1. If you block ok a key of a type that has blocking operations associated,
* 1. If you block on a key of a type that has blocking operations associated,
* like a list, a sorted set, a stream, and so forth, the client may be
* unblocked once the relevant key is targeted by an operation that normally
* unblocks the native blocking operations for that type. So if we block
@ -4948,7 +4959,7 @@ void moduleReleaseGIL(void) {
/* Subscribe to keyspace notifications. This is a low-level version of the
* keyspace-notifications API. A module can register callbacks to be notified
* when keyspce events occur.
* when keyspace events occur.
*
* Notification events are filtered by their type (string events, set events,
* etc), and the subscriber callback receives only events that match a specific
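
A sketch of a subscriber that logs string events (the callback name is made up):

    int OnStringEvent(RedisModuleCtx *ctx, int type, const char *event,
                      RedisModuleString *key)
    {
        REDISMODULE_NOT_USED(type);
        RedisModule_Log(ctx, "notice", "event '%s' fired for key '%s'",
                        event, RedisModule_StringPtrLen(key, NULL));
        return REDISMODULE_OK;
    }

    /* In RedisModule_OnLoad(): */
    RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_STRING,
                                          OnStringEvent);
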
@ -5659,7 +5670,7 @@ int RM_AuthenticateClientWithACLUser(RedisModuleCtx *ctx, const char *name, size
/* Deauthenticate and close the client. The client resources will not be
* be immediately freed, but will be cleaned up in a background job. This is
* the recommended way to deauthenicate a client since most clients can't
* handle users becomming deauthenticated. Returns REDISMODULE_ERR when the
* handle users becoming deauthenticated. Returns REDISMODULE_ERR when the
* client doesn't exist and REDISMODULE_OK when the operation was successful.
*
* The client ID is returned from the RM_AuthenticateClientWithUser and
@ -5779,14 +5790,14 @@ int RM_DictDel(RedisModuleDict *d, RedisModuleString *key, void *oldval) {
return RM_DictDelC(d,key->ptr,sdslen(key->ptr),oldval);
}
/* Return an interator, setup in order to start iterating from the specified
/* Return an iterator, setup in order to start iterating from the specified
* key by applying the operator 'op', which is just a string specifying the
* comparison operator to use in order to seek the first element. The
* operators avalable are:
* operators available are:
*
* "^" -- Seek the first (lexicographically smaller) key.
* "$" -- Seek the last (lexicographically biffer) key.
* ">" -- Seek the first element greter than the specified key.
* ">" -- Seek the first element greater than the specified key.
* ">=" -- Seek the first element greater or equal than the specified key.
* "<" -- Seek the first element smaller than the specified key.
* "<=" -- Seek the first element smaller or equal than the specified key.
@ -5913,7 +5924,7 @@ RedisModuleString *RM_DictPrev(RedisModuleCtx *ctx, RedisModuleDictIter *di, voi
* in the loop, as we iterate elements, we can also check if we are still
* on range.
*
* The function returne REDISMODULE_ERR if the iterator reached the
* The function return REDISMODULE_ERR if the iterator reached the
* end of elements condition as well. */
int RM_DictCompareC(RedisModuleDictIter *di, const char *op, void *key, size_t keylen) {
if (raxEOF(&di->ri)) return REDISMODULE_ERR;
@ -6294,7 +6305,7 @@ int RM_ExportSharedAPI(RedisModuleCtx *ctx, const char *apiname, void *func) {
* command that requires external APIs: if some API cannot be resolved, the
* command should return an error.
*
* Here is an exmaple:
* Here is an example:
*
* int ... myCommandImplementation() {
* if (getExternalAPIs() == 0) {
@ -6680,7 +6691,7 @@ void RM_ScanCursorDestroy(RedisModuleScanCursor *cursor) {
* RedisModule_ScanCursorDestroy(c);
*
* It is also possible to use this API from another thread while the lock
* is acquired durring the actuall call to RM_Scan:
* is acquired during the actuall call to RM_Scan:
*
* RedisModuleCursor *c = RedisModule_ScanCursorCreate();
* RedisModule_ThreadSafeContextLock(ctx);
@ -6694,7 +6705,7 @@ void RM_ScanCursorDestroy(RedisModuleScanCursor *cursor) {
* The function will return 1 if there are more elements to scan and
* 0 otherwise, possibly setting errno if the call failed.
*
* It is also possible to restart and existing cursor using RM_CursorRestart.
* It is also possible to restart an existing cursor using RM_ScanCursorRestart.
*
* IMPORTANT: This API is very similar to the Redis SCAN command from the
* point of view of the guarantees it provides. This means that the API
@ -6708,7 +6719,7 @@ void RM_ScanCursorDestroy(RedisModuleScanCursor *cursor) {
* Moreover playing with the Redis keyspace while iterating may have the
* effect of returning more duplicates. A safe pattern is to store the keys
* names you want to modify elsewhere, and perform the actions on the keys
* later when the iteration is complete. Howerver this can cost a lot of
* later when the iteration is complete. However this can cost a lot of
* memory, so it may make sense to just operate on the current key when
* possible during the iteration, given that this is safe. */
int RM_Scan(RedisModuleCtx *ctx, RedisModuleScanCursor *cursor, RedisModuleScanCB fn, void *privdata) {
@ -6773,8 +6784,8 @@ static void moduleScanKeyCallback(void *privdata, const dictEntry *de) {
* RedisModule_CloseKey(key);
* RedisModule_ScanCursorDestroy(c);
*
* It is also possible to use this API from another thread while the lock is acquired durring
* the actuall call to RM_Scan, and re-opening the key each time:
* It is also possible to use this API from another thread while the lock is acquired during
* the actuall call to RM_ScanKey, and re-opening the key each time:
* RedisModuleCursor *c = RedisModule_ScanCursorCreate();
* RedisModule_ThreadSafeContextLock(ctx);
* RedisModuleKey *key = RedisModule_OpenKey(...)
@ -6790,7 +6801,7 @@ static void moduleScanKeyCallback(void *privdata, const dictEntry *de) {
*
* The function will return 1 if there are more elements to scan and 0 otherwise,
* possibly setting errno if the call failed.
* It is also possible to restart and existing cursor using RM_CursorRestart.
* It is also possible to restart an existing cursor using RM_ScanCursorRestart.
*
* NOTE: Certain operations are unsafe while iterating the object. For instance
* while the API guarantees to return at least one time all the elements that
@ -6943,7 +6954,7 @@ int TerminateModuleForkChild(int child_pid, int wait) {
}
/* Can be used to kill the forked child process from the parent process.
* child_pid whould be the return value of RedisModule_Fork. */
* child_pid would be the return value of RedisModule_Fork. */
int RM_KillForkChild(int child_pid) {
/* Kill module child, wait for child exit. */
if (TerminateModuleForkChild(child_pid,1) == C_OK)
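
How the fork API fits together, sketched with a hypothetical done handler:

    void OnForkDone(int exitcode, int bysignal, void *user_data) { /* ... */ }

    /* Inside a command or timer handler: */
    int child_pid = RedisModule_Fork(OnForkDone, NULL);
    if (child_pid == 0) {
        /* Child process: do the heavy work, then exit via the module API. */
        RedisModule_ExitFromChild(0);
    } else if (child_pid > 0) {
        /* Parent: may later cancel the job with the call documented above. */
        RedisModule_KillForkChild(child_pid);
    }
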
@ -7081,7 +7092,7 @@ void ModuleForkDoneHandler(int exitcode, int bysignal) {
* REDISMODULE_SUBEVENT_LOADING_FAILED
*
* Note that AOF loading may start with an RDB data in case of
* rdb-preamble, in which case you'll only recieve an AOF_START event.
* rdb-preamble, in which case you'll only receive an AOF_START event.
*
*
* RedisModuleEvent_ClientChange
@ -7103,7 +7114,7 @@ void ModuleForkDoneHandler(int exitcode, int bysignal) {
* This event is called when the instance (that can be both a
* master or a replica) get a new online replica, or lose a
* replica since it gets disconnected.
* The following sub events are availble:
* The following sub events are available:
*
* REDISMODULE_SUBEVENT_REPLICA_CHANGE_ONLINE
* REDISMODULE_SUBEVENT_REPLICA_CHANGE_OFFLINE
@ -7141,7 +7152,7 @@ void ModuleForkDoneHandler(int exitcode, int bysignal) {
* RedisModuleEvent_ModuleChange
*
* This event is called when a new module is loaded or one is unloaded.
* The following sub events are availble:
* The following sub events are available:
*
* REDISMODULE_SUBEVENT_MODULE_LOADED
* REDISMODULE_SUBEVENT_MODULE_UNLOADED
@ -7168,7 +7179,7 @@ void ModuleForkDoneHandler(int exitcode, int bysignal) {
* int32_t progress; // Approximate progress between 0 and 1024,
* or -1 if unknown.
*
* The function returns REDISMODULE_OK if the module was successfully subscrived
* The function returns REDISMODULE_OK if the module was successfully subscribed
* for the specified event. If the API is called from a wrong context then
* REDISMODULE_ERR is returned. */
int RM_SubscribeToServerEvent(RedisModuleCtx *ctx, RedisModuleEvent event, RedisModuleEventCallback callback) {
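
A sketch of subscribing to one of the events listed above (the callback name is made up):

    void OnReplicaChange(RedisModuleCtx *ctx, RedisModuleEvent e,
                         uint64_t sub, void *data)
    {
        REDISMODULE_NOT_USED(e);
        REDISMODULE_NOT_USED(data);
        if (sub == REDISMODULE_SUBEVENT_REPLICA_CHANGE_ONLINE)
            RedisModule_Log(ctx, "notice", "a replica came online");
    }

    /* In RedisModule_OnLoad(): */
    RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_ReplicaChange,
                                       OnReplicaChange);
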
@ -7374,7 +7385,7 @@ void moduleInitModulesSystem(void) {
server.loadmodule_queue = listCreate();
modules = dictCreate(&modulesDictType,NULL);
/* Set up the keyspace notification susbscriber list and static client */
/* Set up the keyspace notification subscriber list and static client */
moduleKeyspaceSubscribers = listCreate();
/* Set up filter list */
@ -7735,7 +7746,7 @@ size_t moduleCount(void) {
return dictSize(modules);
}
/* Set the key last access time for LRU based eviction. not relevent if the
/* Set the key last access time for LRU based eviction. not relevant if the
* servers's maxmemory policy is LFU based. Value is idle time in milliseconds.
* returns REDISMODULE_OK if the LRU was updated, REDISMODULE_ERR otherwise. */
int RM_SetLRU(RedisModuleKey *key, mstime_t lru_idle) {

View File

@ -125,7 +125,7 @@ int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
cmd_KEYRANGE,"readonly",1,1,0) == REDISMODULE_ERR)
return REDISMODULE_ERR;
/* Create our global dictionray. Here we'll set our keys and values. */
/* Create our global dictionary. Here we'll set our keys and values. */
Keyspace = RedisModule_CreateDict(NULL);
return REDISMODULE_OK;

View File

@ -91,7 +91,7 @@ int HelloPushCall_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, in
}
/* HELLO.PUSH.CALL2
* This is exaxctly as HELLO.PUSH.CALL, but shows how we can reply to the
* This is exactly as HELLO.PUSH.CALL, but shows how we can reply to the
* client using directly a reply object that Call() returned. */
int HelloPushCall2_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
{
@ -345,7 +345,7 @@ int HelloToggleCase_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv,
/* HELLO.MORE.EXPIRE key milliseconds.
*
* If they key has already an associated TTL, extends it by "milliseconds"
* If the key has already an associated TTL, extends it by "milliseconds"
* milliseconds. Otherwise no operation is performed. */
int HelloMoreExpire_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
RedisModule_AutoMemory(ctx); /* Use automatic memory management. */

View File

@ -87,7 +87,7 @@ void discardTransaction(client *c) {
unwatchAllKeys(c);
}
/* Flag the transacation as DIRTY_EXEC so that EXEC will fail.
/* Flag the transaction as DIRTY_EXEC so that EXEC will fail.
* Should be called every time there is an error while queueing a command. */
void flagTransaction(client *c) {
if (c->flags & CLIENT_MULTI)

View File

@ -170,7 +170,7 @@ client *createClient(connection *conn) {
return c;
}
/* This funciton puts the client in the queue of clients that should write
/* This function puts the client in the queue of clients that should write
* their output buffers to the socket. Note that it does not *yet* install
* the write handler, to start clients are put in a queue of clients that need
* to write, so we try to do that before returning in the event loop (see the
@ -268,7 +268,7 @@ void _addReplyProtoToList(client *c, const char *s, size_t len) {
listNode *ln = listLast(c->reply);
clientReplyBlock *tail = ln? listNodeValue(ln): NULL;
/* Note that 'tail' may be NULL even if we have a tail node, becuase when
/* Note that 'tail' may be NULL even if we have a tail node, because when
* addReplyDeferredLen() is used, it sets a dummy node to NULL just
* fo fill it later, when the size of the bulk length is set. */
@ -1161,7 +1161,7 @@ void freeClient(client *c) {
listDelNode(server.clients_to_close,ln);
}
/* If it is our master that's beging disconnected we should make sure
/* If it is our master that's being disconnected we should make sure
* to cache the state to try a partial resynchronization later.
*
* Note that before doing this we make sure that the client is not in
@ -1491,7 +1491,7 @@ void resetClient(client *c) {
}
}
/* This funciton is used when we want to re-enter the event loop but there
/* This function is used when we want to re-enter the event loop but there
* is the risk that the client we are dealing with will be freed in some
* way. This happens for instance in:
*
@ -2050,7 +2050,7 @@ char *getClientPeerId(client *c) {
return c->peerid;
}
/* Concatenate a string representing the state of a client in an human
/* Concatenate a string representing the state of a client in a human
* readable format, into the sds string 's'. */
sds catClientInfoString(sds s, client *client) {
char flags[16], events[3], conninfo[CONN_INFO_LEN], *p;
@ -3057,7 +3057,7 @@ void stopThreadedIO(void) {
* we need to handle in parallel, however the I/O threading is disabled
* globally for reads as well if we have too little pending clients.
*
* The function returns 0 if the I/O threading should be used becuase there
* The function returns 0 if the I/O threading should be used because there
* are enough active threads, otherwise 1 is returned and the I/O threads
* could be possibly stopped (if already active) as a side effect. */
int stopThreadedIOIfNeeded(void) {

View File

@ -62,7 +62,7 @@ int keyspaceEventsStringToFlags(char *classes) {
return flags;
}
/* This function does exactly the revese of the function above: it gets
/* This function does exactly the reverse of the function above: it gets
* as input an integer with the xored flags and returns a string representing
* the selected classes. The string returned is an sds string that needs to
* be released with sdsfree(). */

View File

@ -126,7 +126,7 @@ robj *createStringObject(const char *ptr, size_t len) {
/* Create a string object from a long long value. When possible returns a
* shared integer object, or at least an integer encoded one.
*
* If valueobj is non zero, the function avoids returning a a shared
* If valueobj is non zero, the function avoids returning a shared
* integer, because the object is going to be used as value in the Redis key
* space (for instance when the INCR command is used), so we want LFU/LRU
* values specific for each key. */
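
The 'valueobj' distinction is surfaced through two thin wrappers; a sketch assuming the helpers present in this object.c:

    /* May return one of the pre-allocated shared integers: fine for
     * lookups and replies, where the object never enters the keyspace. */
    robj *reply_obj = createStringObjectFromLongLong(123);

    /* Never shared: safe to store as a key's value, so that LRU/LFU
     * metadata stays private to that key. */
    robj *value_obj = createStringObjectFromLongLongForValue(123);
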
@ -1224,7 +1224,7 @@ robj *objectCommandLookupOrReply(client *c, robj *key, robj *reply) {
return o;
}
/* Object command allows to inspect the internals of an Redis Object.
/* Object command allows to inspect the internals of a Redis Object.
* Usage: OBJECT <refcount|encoding|idletime|freq> <key> */
void objectCommand(client *c) {
robj *o;

View File

@ -40,7 +40,7 @@
* count: 16 bits, max 65536 (max zl bytes is 65k, so max count actually < 32k).
* encoding: 2 bits, RAW=1, LZF=2.
* container: 2 bits, NONE=1, ZIPLIST=2.
* recompress: 1 bit, bool, true if node is temporarry decompressed for usage.
* recompress: 1 bit, bool, true if node is temporary decompressed for usage.
* attempted_compress: 1 bit, boolean, used for verifying during testing.
* extra: 10 bits, free for future use; pads out the remainder of 32 bits */
typedef struct quicklistNode {
@ -97,7 +97,7 @@ typedef struct quicklistBookmark {
/* quicklist is a 40 byte struct (on 64-bit systems) describing a quicklist.
* 'count' is the number of total entries.
* 'len' is the number of quicklist nodes.
* 'compress' is: -1 if compression disabled, otherwise it's the number
* 'compress' is: 0 if compression disabled, otherwise it's the number
* of quicklistNodes to leave uncompressed at ends of quicklist.
* 'fill' is the user-requested (or default) fill factor.
* 'bookmakrs are an optional feature that is used by realloc this struct,

View File

@ -628,7 +628,7 @@ int raxGenericInsert(rax *rax, unsigned char *s, size_t len, void *data, void **
*
* 3b. IF $SPLITPOS != 0:
* Trim the compressed node (reallocating it as well) in order to
* contain $splitpos characters. Change chilid pointer in order to link
* contain $splitpos characters. Change child pointer in order to link
* to the split node. If new compressed node len is just 1, set
* iscompr to 0 (layout is the same). Fix parent's reference.
*
@ -1082,7 +1082,7 @@ int raxRemove(rax *rax, unsigned char *s, size_t len, void **old) {
}
} else if (h->size == 1) {
/* If the node had just one child, after the removal of the key
* further compression with adjacent nodes is pontentially possible. */
* further compression with adjacent nodes is potentially possible. */
trycompress = 1;
}
@ -1329,7 +1329,7 @@ int raxIteratorNextStep(raxIterator *it, int noup) {
if (!noup && children) {
debugf("GO DEEPER\n");
/* Seek the lexicographically smaller key in this subtree, which
* is the first one found always going torwards the first child
* is the first one found always going towards the first child
* of every successive node. */
if (!raxStackPush(&it->stack,it->node)) return 0;
raxNode **cp = raxNodeFirstChildPtr(it->node);
@ -1348,7 +1348,7 @@ int raxIteratorNextStep(raxIterator *it, int noup) {
return 1;
}
} else {
/* If we finished exporing the previous sub-tree, switch to the
/* If we finished exploring the previous sub-tree, switch to the
* new one: go upper until a node is found where there are
* children representing keys lexicographically greater than the
* current key. */
@ -1510,7 +1510,7 @@ int raxIteratorPrevStep(raxIterator *it, int noup) {
int raxSeek(raxIterator *it, const char *op, unsigned char *ele, size_t len) {
int eq = 0, lt = 0, gt = 0, first = 0, last = 0;
it->stack.items = 0; /* Just resetting. Intialized by raxStart(). */
it->stack.items = 0; /* Just resetting. Initialized by raxStart(). */
it->flags |= RAX_ITER_JUST_SEEKED;
it->flags &= ~RAX_ITER_EOF;
it->key_len = 0;
@ -1731,7 +1731,7 @@ int raxPrev(raxIterator *it) {
* tree, expect a disappointing distribution. A random walk produces good
* random elements if the tree is not sparse, however in the case of a radix
* tree certain keys will be reported much more often than others. At least
* this function should be able to expore every possible element eventually. */
* this function should be able to explore every possible element eventually. */
int raxRandomWalk(raxIterator *it, size_t steps) {
if (it->rt->numele == 0) {
it->flags |= RAX_ITER_EOF;
@ -1825,7 +1825,7 @@ uint64_t raxSize(rax *rax) {
/* ----------------------------- Introspection ------------------------------ */
/* This function is mostly used for debugging and learning purposes.
* It shows an ASCII representation of a tree on standard output, outling
* It shows an ASCII representation of a tree on standard output, outline
* all the nodes and the contained keys.
*
* The representation is as follow:
@ -1835,7 +1835,7 @@ uint64_t raxSize(rax *rax) {
* [abc]=0x12345678 (node is a key, pointing to value 0x12345678)
* [] (a normal empty node)
*
* Children are represented in new idented lines, each children prefixed by
* Children are represented in new indented lines, each children prefixed by
* the "`-(x)" string, where "x" is the edge byte.
*
* [abc]

View File

@ -58,7 +58,7 @@
* successive nodes having a single child are "compressed" into the node
* itself as a string of characters, each representing a next-level child,
* and only the link to the node representing the last character node is
* provided inside the representation. So the above representation is turend
* provided inside the representation. So the above representation is turned
* into:
*
* ["foo"] ""
@ -123,7 +123,7 @@ typedef struct raxNode {
* nodes).
*
* If the node has an associated key (iskey=1) and is not NULL
* (isnull=0), then after the raxNode pointers poiting to the
* (isnull=0), then after the raxNode pointers pointing to the
* children, an additional value pointer is present (as you can see
* in the representation above as "value-ptr" field).
*/

View File

@ -2170,7 +2170,7 @@ int rdbLoadRio(rio *rdb, int rdbflags, rdbSaveInfo *rsi) {
} else if (type == RDB_OPCODE_AUX) {
/* AUX: generic string-string fields. Use to add state to RDB
* which is backward compatible. Implementations of RDB loading
* are requierd to skip AUX fields they don't understand.
* are required to skip AUX fields they don't understand.
*
* An AUX field is composed of two strings: key and value. */
robj *auxkey, *auxval;
@ -2420,7 +2420,7 @@ void backgroundSaveDoneHandlerDisk(int exitcode, int bysignal) {
latencyEndMonitor(latency);
latencyAddSampleIfNeeded("rdb-unlink-temp-file",latency);
/* SIGUSR1 is whitelisted, so we have a way to kill a child without
* tirggering an error condition. */
* triggering an error condition. */
if (bysignal != SIGUSR1)
server.lastbgsave_status = C_ERR;
}

View File

@ -331,7 +331,7 @@ err:
return 1;
}
/* RDB check main: called form redis.c when Redis is executed with the
/* RDB check main: called form server.c when Redis is executed with the
* redis-check-rdb alias, on during RDB loading errors.
*
* The function works in two ways: can be called with argc/argv as a

View File

@ -313,7 +313,7 @@ static void cliRefreshPrompt(void) {
/* Return the name of the dotfile for the specified 'dotfilename'.
* Normally it just concatenates user $HOME to the file specified
* in 'dotfilename'. However if the environment varialbe 'envoverride'
* in 'dotfilename'. However if the environment variable 'envoverride'
* is set, its value is taken as the path.
*
* The function returns NULL (if the file is /dev/null or cannot be
@ -1794,7 +1794,7 @@ static void usage(void) {
" -a <password> Password to use when connecting to the server.\n"
" You can also use the " REDIS_CLI_AUTH_ENV " environment\n"
" variable to pass this password more safely\n"
" (if both are used, this argument takes predecence).\n"
" (if both are used, this argument takes precedence).\n"
" --user <username> Used to send ACL style 'AUTH username pass'. Needs -a.\n"
" --pass <password> Alias of -a for consistency with the new --user option.\n"
" --askpass Force user to input password with mask from STDIN.\n"
@ -2225,7 +2225,7 @@ static int evalMode(int argc, char **argv) {
argv2[2] = sdscatprintf(sdsempty(),"%d",keys);
/* Call it */
int eval_ldb = config.eval_ldb; /* Save it, may be reverteed. */
int eval_ldb = config.eval_ldb; /* Save it, may be reverted. */
retval = issueCommand(argc+3-got_comma, argv2);
if (eval_ldb) {
if (!config.eval_ldb) {
@ -6741,13 +6741,13 @@ struct distsamples {
* samples greater than the previous one, and is also the stop sentinel.
*
* "tot' is the total number of samples in the different buckets, so it
* is the SUM(samples[i].conut) for i to 0 up to the max sample.
* is the SUM(samples[i].count) for i to 0 up to the max sample.
*
* As a side effect the function sets all the buckets count to 0. */
void showLatencyDistSamples(struct distsamples *samples, long long tot) {
int j;
/* We convert samples into a index inside the palette
/* We convert samples into an index inside the palette
* proportional to the percentage a given bucket represents.
* This way intensity of the different parts of the spectrum
* don't change relative to the number of requests, which avoids to
@ -8054,7 +8054,7 @@ static void LRUTestMode(void) {
* Intrisic latency mode.
*
* Measure max latency of a running process that does not result from
* syscalls. Basically this software should provide an hint about how much
* syscalls. Basically this software should provide a hint about how much
* time the kernel leaves the process without a chance to run.
*--------------------------------------------------------------------------- */

View File

@ -963,4 +963,4 @@ static int RedisModule_Init(RedisModuleCtx *ctx, const char *name, int ver, int
#define RedisModuleString robj
#endif /* REDISMODULE_CORE */
#endif /* REDISMOUDLE_H */
#endif /* REDISMODULE_H */

View File

@ -83,16 +83,16 @@ char *replicationGetSlaveName(client *c) {
* the file deletion to the filesystem. This call removes the file in a
* background thread instead. We actually just do close() in the thread,
* by using the fact that if there is another instance of the same file open,
* the foreground unlink() will not really do anything, and deleting the
* file will only happen once the last reference is lost. */
* the foreground unlink() will only remove the fs name, and deleting the
* file's storage space will only happen once the last reference is lost. */
int bg_unlink(const char *filename) {
int fd = open(filename,O_RDONLY|O_NONBLOCK);
if (fd == -1) {
/* Can't open the file? Fall back to unlinking in the main thread. */
return unlink(filename);
} else {
/* The following unlink() will not do anything since file
* is still open. */
/* The following unlink() removes the name but doesn't free the
* file contents because a process still has it open. */
int retval = unlink(filename);
if (retval == -1) {
/* If we got an unlink error, we just return it, closing the
@ -204,7 +204,7 @@ void feedReplicationBacklogWithObject(robj *o) {
* as well. This function is used if the instance is a master: we use
* the commands received by our clients in order to create the replication
* stream. Instead if the instance is a slave and has sub-slaves attached,
* we use replicationFeedSlavesFromMaster() */
* we use replicationFeedSlavesFromMasterStream() */
void replicationFeedSlaves(list *slaves, int dictid, robj **argv, int argc) {
listNode *ln;
listIter li;
@ -535,7 +535,7 @@ int masterTryPartialResynchronization(client *c) {
(strcasecmp(master_replid, server.replid2) ||
psync_offset > server.second_replid_offset))
{
/* Run id "?" is used by slaves that want to force a full resync. */
/* Replid "?" is used by slaves that want to force a full resync. */
if (master_replid[0] != '?') {
if (strcasecmp(master_replid, server.replid) &&
strcasecmp(master_replid, server.replid2))
@ -707,7 +707,7 @@ int startBgsaveForReplication(int mincapa) {
return retval;
}
/* SYNC and PSYNC command implemenation. */
/* SYNC and PSYNC command implementation. */
void syncCommand(client *c) {
/* ignore SYNC if already slave or in monitor mode */
if (c->flags & CLIENT_SLAVE) return;
@ -1364,7 +1364,7 @@ void replicationEmptyDbCallback(void *privdata) {
replicationSendNewlineToMaster();
}
/* Once we have a link with the master and the synchroniziation was
/* Once we have a link with the master and the synchronization was
* performed, this function materializes the master client we store
* at server.master, starting from the specified file descriptor. */
void replicationCreateMasterClient(connection *conn, int dbid) {
@ -1441,7 +1441,7 @@ redisDb *disklessLoadMakeBackups(void) {
* the 'restore' argument (the number of DBs to replace) is non-zero.
*
* When instead the loading succeeded we want just to free our old backups,
* in that case the funciton will do just that when 'restore' is 0. */
* in that case the function will do just that when 'restore' is 0. */
void disklessLoadRestoreBackups(redisDb *backup, int restore, int empty_db_flags)
{
if (restore) {
@ -1475,7 +1475,7 @@ void readSyncBulkPayload(connection *conn) {
off_t left;
/* Static vars used to hold the EOF mark, and the last bytes received
* form the server: when they match, we reached the end of the transfer. */
* from the server: when they match, we reached the end of the transfer. */
static char eofmark[CONFIG_RUN_ID_SIZE];
static char lastbytes[CONFIG_RUN_ID_SIZE];
static int usemark = 0;
@ -1792,7 +1792,7 @@ void readSyncBulkPayload(connection *conn) {
REDISMODULE_SUBEVENT_MASTER_LINK_UP,
NULL);
/* After a full resynchroniziation we use the replication ID and
/* After a full resynchronization we use the replication ID and
* offset of the master. The secondary ID / offset are cleared since
* we are starting a new history. */
memcpy(server.replid,server.master->replid,sizeof(server.replid));
@ -1891,7 +1891,7 @@ char *sendSynchronousCommand(int flags, connection *conn, ...) {
/* Try a partial resynchronization with the master if we are about to reconnect.
* If there is no cached master structure, at least try to issue a
* "PSYNC ? -1" command in order to trigger a full resync using the PSYNC
* command in order to obtain the master run id and the master replication
* command in order to obtain the master replid and the master replication
* global offset.
*
* This function is designed to be called from syncWithMaster(), so the
@ -1919,7 +1919,7 @@ char *sendSynchronousCommand(int flags, connection *conn, ...) {
*
* PSYNC_CONTINUE: If the PSYNC command succeeded and we can continue.
* PSYNC_FULLRESYNC: If PSYNC is supported but a full resync is needed.
* In this case the master run_id and global replication
* In this case the master replid and global replication
* offset is saved.
* PSYNC_NOT_SUPPORTED: If the server does not understand PSYNC at all and
* the caller should fall back to SYNC.
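
For reference, the two reply shapes described above look roughly like this on the wire (the replid and offsets are made-up values):

    > PSYNC 36e2b1dcd9a6b2d1c3a9fcd8e6078a1c33b0e9d5 1024
    +CONTINUE                 (partial resync accepted, stream resumes)

    > PSYNC ? -1
    +FULLRESYNC 36e2b1dcd9a6b2d1c3a9fcd8e6078a1c33b0e9d5 0
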
@ -1950,7 +1950,7 @@ int slaveTryPartialResynchronization(connection *conn, int read_reply) {
/* Writing half */
if (!read_reply) {
/* Initially set master_initial_offset to -1 to mark the current
* master run_id and offset as not valid. Later if we'll be able to do
* master replid and offset as not valid. Later if we'll be able to do
* a FULL resync using the PSYNC command we'll set the offset at the
* right value, so that this information will be propagated to the
* client structure representing the master into server.master. */
@ -1991,7 +1991,7 @@ int slaveTryPartialResynchronization(connection *conn, int read_reply) {
if (!strncmp(reply,"+FULLRESYNC",11)) {
char *replid = NULL, *offset = NULL;
/* FULL RESYNC, parse the reply in order to extract the run id
/* FULL RESYNC, parse the reply in order to extract the replid
* and the replication offset. */
replid = strchr(reply,' ');
if (replid) {
@ -2283,7 +2283,7 @@ void syncWithMaster(connection *conn) {
/* Try a partial resynchonization. If we don't have a cached master
* slaveTryPartialResynchronization() will at least try to use PSYNC
* to start a full resynchronization so that we get the master run id
* to start a full resynchronization so that we get the master replid
* and the global offset, to try a partial resync at the next
* reconnection attempt. */
if (server.repl_state == REPL_STATE_SEND_PSYNC) {
@ -2446,7 +2446,7 @@ void replicationAbortSyncTransfer(void) {
* If there was a replication handshake in progress 1 is returned and
* the replication state (server.repl_state) set to REPL_STATE_CONNECT.
*
* Otherwise zero is returned and no operation is perforemd at all. */
* Otherwise zero is returned and no operation is performed at all. */
int cancelReplicationHandshake(int reconnect) {
if (server.repl_state == REPL_STATE_TRANSFER) {
replicationAbortSyncTransfer();
@ -2906,7 +2906,7 @@ void refreshGoodSlavesCount(void) {
*
* We don't care about taking a different cache for every different slave
* since to fill the cache again is not very costly, the goal of this code
* is to avoid that the same big script is trasmitted a big number of times
* is to avoid that the same big script is transmitted a big number of times
* per second wasting bandwidth and processor speed, but it is not a problem
* if we need to rebuild the cache from scratch from time to time, every used
* script will need to be transmitted a single time to reappear in the cache.
@ -2916,7 +2916,7 @@ void refreshGoodSlavesCount(void) {
* 1) Every time a new slave connects, we flush the whole script cache.
* 2) We only send as EVALSHA what was sent to the master as EVALSHA, without
* trying to convert EVAL into EVALSHA specifically for slaves.
* 3) Every time we trasmit a script as EVAL to the slaves, we also add the
* 3) Every time we transmit a script as EVAL to the slaves, we also add the
* corresponding SHA1 of the script into the cache as we are sure every
* slave knows about the script starting from now.
* 4) On SCRIPT FLUSH command, we replicate the command to all the slaves
@ -3007,7 +3007,7 @@ int replicationScriptCacheExists(sds sha1) {
/* This just set a flag so that we broadcast a REPLCONF GETACK command
* to all the slaves in the beforeSleep() function. Note that this way
* we "group" all the clients that want to wait for synchronouns replication
* we "group" all the clients that want to wait for synchronous replication
* in a given event loop iteration, and send a single GETACK for them all. */
void replicationRequestAckFromSlaves(void) {
server.get_ack_from_slaves = 1;

View File

@ -69,7 +69,7 @@ struct ldbState {
list *children; /* All forked debugging sessions pids. */
int bp[LDB_BREAKPOINTS_MAX]; /* An array of breakpoints line numbers. */
int bpcount; /* Number of valid entries inside bp. */
int step; /* Stop at next line ragardless of breakpoints. */
int step; /* Stop at next line regardless of breakpoints. */
int luabp; /* Stop at next line because redis.breakpoint() was called. */
sds *src; /* Lua script source code split by line. */
int lines; /* Number of lines in 'src'. */
@ -886,7 +886,7 @@ int luaRedisReplicateCommandsCommand(lua_State *lua) {
/* redis.breakpoint()
*
* Allows to stop execution during a debuggign session from within
* Allows to stop execution during a debugging session from within
* the Lua code implementation, like if a breakpoint was set in the code
* immediately after the function. */
int luaRedisBreakpointCommand(lua_State *lua) {
@ -1500,7 +1500,7 @@ void evalGenericCommand(client *c, int evalsha) {
/* Hash the code if this is an EVAL call */
sha1hex(funcname+2,c->argv[1]->ptr,sdslen(c->argv[1]->ptr));
} else {
/* We already have the SHA if it is a EVALSHA */
/* We already have the SHA if it is an EVALSHA */
int j;
char *sha = c->argv[1]->ptr;
@ -1630,7 +1630,7 @@ void evalGenericCommand(client *c, int evalsha) {
* To do so we use a cache of SHA1s of scripts that we already propagated
* as full EVAL, that's called the Replication Script Cache.
*
* For repliation, everytime a new slave attaches to the master, we need to
* For replication, everytime a new slave attaches to the master, we need to
* flush our cache of scripts that can be replicated as EVALSHA, while
* for AOF we need to do so every time we rewrite the AOF file. */
if (evalsha && !server.lua_replicate_commands) {
@ -1803,7 +1803,7 @@ void ldbLog(sds entry) {
}
/* A version of ldbLog() which prevents producing logs greater than
* ldb.maxlen. The first time the limit is reached an hint is generated
* ldb.maxlen. The first time the limit is reached a hint is generated
* to inform the user that reply trimming can be disabled using the
* debugger "maxlen" command. */
void ldbLogWithMaxLen(sds entry) {
@ -1844,7 +1844,7 @@ void ldbSendLogs(void) {
}
/* Start a debugging session before calling EVAL implementation.
* The techique we use is to capture the client socket file descriptor,
* The technique we use is to capture the client socket file descriptor,
* in order to perform direct I/O with it from within Lua hooks. This
* way we don't have to re-enter Redis in order to handle I/O.
*
@ -1927,7 +1927,7 @@ void ldbEndSession(client *c) {
connNonBlock(ldb.conn);
connSendTimeout(ldb.conn,0);
/* Close the client connectin after sending the final EVAL reply
/* Close the client connection after sending the final EVAL reply
* in order to signal the end of the debugging session. */
c->flags |= CLIENT_CLOSE_AFTER_REPLY;
@ -2096,7 +2096,7 @@ void ldbLogSourceLine(int lnum) {
/* Implement the "list" command of the Lua debugger. If around is 0
* the whole file is listed, otherwise only a small portion of the file
* around the specified line is shown. When a line number is specified
* the amonut of context (lines before/after) is specified via the
* the amount of context (lines before/after) is specified via the
* 'context' argument. */
void ldbList(int around, int context) {
int j;
@ -2107,7 +2107,7 @@ void ldbList(int around, int context) {
}
}
/* Append an human readable representation of the Lua value at position 'idx'
/* Append a human readable representation of the Lua value at position 'idx'
* on the stack of the 'lua' state, to the SDS string passed as argument.
* The new SDS string with the represented value attached is returned.
* Used in order to implement ldbLogStackValue().
@ -2351,7 +2351,7 @@ char *ldbRedisProtocolToHuman_Double(sds *o, char *reply) {
return p+2;
}
/* Log a Redis reply as debugger output, in an human readable format.
/* Log a Redis reply as debugger output, in a human readable format.
* If the resulting string is longer than 'len' plus a few more chars
* used as prefix, it gets truncated. */
void ldbLogRedisReply(char *reply) {
@ -2535,7 +2535,7 @@ void ldbTrace(lua_State *lua) {
}
}
/* Impleemnts the debugger "maxlen" command. It just queries or sets the
/* Implements the debugger "maxlen" command. It just queries or sets the
* ldb.maxlen variable. */
void ldbMaxlen(sds *argv, int argc) {
if (argc == 2) {
@ -2608,8 +2608,8 @@ ldbLog(sdsnew(" mode dataset changes will be retained."));
ldbLog(sdsnew(""));
ldbLog(sdsnew("Debugger functions you can call from Lua scripts:"));
ldbLog(sdsnew("redis.debug() Produce logs in the debugger console."));
ldbLog(sdsnew("redis.breakpoint() Stop execution like if there was a breakpoing."));
ldbLog(sdsnew(" in the next line of code."));
ldbLog(sdsnew("redis.breakpoint() Stop execution like if there was a breakpoint in the"));
ldbLog(sdsnew(" next line of code."));
ldbSendLogs();
} else if (!strcasecmp(argv[0],"s") || !strcasecmp(argv[0],"step") ||
!strcasecmp(argv[0],"n") || !strcasecmp(argv[0],"next")) {

View File

@ -405,7 +405,7 @@ sds sdscatlen(sds s, const void *t, size_t len) {
return s;
}
/* Append the specified null termianted C string to the sds string 's'.
/* Append the specified null terminated C string to the sds string 's'.
*
* After the call, the passed sds string is no longer valid and all the
* references must be substituted with the new pointer returned by the call. */
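
The reassignment rule above in practice, as a tiny sketch:

    sds s = sdsnew("Hello ");
    s = sdscat(s, "World"); /* Always keep the returned pointer: the old
                             * one may have been reallocated away. */
    printf("%s\n", s);
    sdsfree(s);
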
@ -453,7 +453,7 @@ int sdsll2str(char *s, long long value) {
size_t l;
/* Generate the string representation, this method produces
* an reversed string. */
* a reversed string. */
v = (value < 0) ? -value : value;
p = s;
do {
@ -484,7 +484,7 @@ int sdsull2str(char *s, unsigned long long v) {
size_t l;
/* Generate the string representation, this method produces
* an reversed string. */
* a reversed string. */
p = s;
do {
*p++ = '0'+(v%10);

View File

@ -131,13 +131,13 @@ typedef struct sentinelAddr {
/* The link to a sentinelRedisInstance. When we have the same set of Sentinels
* monitoring many masters, we have different instances representing the
* same Sentinels, one per master, and we need to share the hiredis connections
* among them. Oherwise if 5 Sentinels are monitoring 100 masters we create
* among them. Otherwise if 5 Sentinels are monitoring 100 masters we create
* 500 outgoing connections instead of 5.
*
* So this structure represents a reference counted link in terms of the two
* hiredis connections for commands and Pub/Sub, and the fields needed for
* failure detection, since the ping/pong time are now local to the link: if
* the link is available, the instance is avaialbe. This way we don't just
* the link is available, the instance is available. This way we don't just
* have 5 connections instead of 500, we also send 5 pings instead of 500.
*
* Links are shared only for Sentinels: master and slave instances have
@ -986,7 +986,7 @@ instanceLink *createInstanceLink(void) {
return link;
}
/* Disconnect an hiredis connection in the context of an instance link. */
/* Disconnect a hiredis connection in the context of an instance link. */
void instanceLinkCloseConnection(instanceLink *link, redisAsyncContext *c) {
if (c == NULL) return;
@ -1125,7 +1125,7 @@ int sentinelUpdateSentinelAddressInAllMasters(sentinelRedisInstance *ri) {
return reconfigured;
}
/* This function is called when an hiredis connection reported an error.
/* This function is called when a hiredis connection reported an error.
* We set it to NULL and mark the link as disconnected so that it will be
* reconnected again.
*
@ -2015,7 +2015,7 @@ void sentinelSendAuthIfNeeded(sentinelRedisInstance *ri, redisAsyncContext *c) {
* The connection type is "cmd" or "pubsub" as specified by 'type'.
*
* This makes it possible to list all the sentinel instances connected
* to a Redis servewr with CLIENT LIST, grepping for a specific name format. */
* to a Redis server with CLIENT LIST, grepping for a specific name format. */
void sentinelSetClientName(sentinelRedisInstance *ri, redisAsyncContext *c, char *type) {
char name[64];
@ -2470,7 +2470,7 @@ void sentinelPublishReplyCallback(redisAsyncContext *c, void *reply, void *privd
ri->last_pub_time = mstime();
}
/* Process an hello message received via Pub/Sub in master or slave instance,
/* Process a hello message received via Pub/Sub in master or slave instance,
* or sent directly to this sentinel via the (fake) PUBLISH command of Sentinel.
*
* If the master name specified in the message is not known, the message is
@ -2607,7 +2607,7 @@ void sentinelReceiveHelloMessages(redisAsyncContext *c, void *reply, void *privd
sentinelProcessHelloMessage(r->element[2]->str, r->element[2]->len);
}
/* Send an "Hello" message via Pub/Sub to the specified 'ri' Redis
/* Send a "Hello" message via Pub/Sub to the specified 'ri' Redis
* instance in order to broadcast the current configuration for this
* master, and to advertise the existence of this Sentinel at the same time.
*
@ -2661,7 +2661,7 @@ int sentinelSendHello(sentinelRedisInstance *ri) {
}
/* Reset last_pub_time in all the instances in the specified dictionary
* in order to force the delivery of an Hello update ASAP. */
* in order to force the delivery of a Hello update ASAP. */
void sentinelForceHelloUpdateDictOfRedisInstances(dict *instances) {
dictIterator *di;
dictEntry *de;
@ -2675,13 +2675,13 @@ void sentinelForceHelloUpdateDictOfRedisInstances(dict *instances) {
dictReleaseIterator(di);
}
/* This function forces the delivery of an "Hello" message (see
/* This function forces the delivery of a "Hello" message (see
* sentinelSendHello() top comment for further information) to all the Redis
* and Sentinel instances related to the specified 'master'.
*
* It is technically not needed since we send an update to every instance
* with a period of SENTINEL_PUBLISH_PERIOD milliseconds, however when a
* Sentinel upgrades a configuration it is a good idea to deliever an update
* Sentinel upgrades a configuration it is a good idea to deliver an update
* to the other Sentinels ASAP. */
int sentinelForceHelloUpdateForMaster(sentinelRedisInstance *master) {
if (!(master->flags & SRI_MASTER)) return C_ERR;
@ -3105,7 +3105,7 @@ NULL
* ip and port are the ip and port of the master we want to be
* checked by Sentinel. Note that the command will not check by
* name but just by master, in theory different Sentinels may monitor
* differnet masters with the same name.
* different masters with the same name.
*
* current-epoch is needed in order to understand if we are allowed
* to vote for a failover leader or not. Each Sentinel can vote just
@ -4017,7 +4017,7 @@ int sentinelSendSlaveOf(sentinelRedisInstance *ri, char *host, int port) {
* the following tasks:
* 1) Reconfigure the instance according to the specified host/port params.
* 2) Rewrite the configuration.
* 3) Disconnect all clients (but this one sending the commnad) in order
* 3) Disconnect all clients (but this one sending the command) in order
* to trigger the ask-master-on-reconnection protocol for connected
* clients.
*
@ -4569,7 +4569,7 @@ void sentinelHandleDictOfRedisInstances(dict *instances) {
* difference bigger than SENTINEL_TILT_TRIGGER milliseconds if one of the
* following conditions happen:
*
* 1) The Sentiel process for some time is blocked, for every kind of
* 1) The Sentinel process for some time is blocked, for every kind of
* random reason: the load is huge, the computer was frozen for some time
* in I/O or alike, the process was stopped by a signal. Everything.
* 2) The system clock was altered significantly.

View File

@ -116,7 +116,7 @@ volatile unsigned long lru_clock; /* Server global current LRU time. */
* write: Write command (may modify the key space).
*
* read-only: All the non special commands just reading from keys without
* changing the content, or returning other informations like
* changing the content, or returning other information like
* the TIME command. Special commands such administrative commands
* or transaction related commands (multi, exec, discard, ...)
* are not flagged as read-only commands, since they affect the
@ -1289,7 +1289,7 @@ dictType objectKeyHeapPointerValueDictType = {
dictVanillaFree /* val destructor */
};
/* Set dictionary type. Keys are SDS strings, values are ot used. */
/* Set dictionary type. Keys are SDS strings, values are not used. */
dictType setDictType = {
dictSdsHash, /* hash function */
NULL, /* key dup */
@ -1394,9 +1394,8 @@ dictType clusterNodesBlackListDictType = {
NULL /* val destructor */
};
/* Cluster re-addition blacklist. This maps node IDs to the time
* we can re-add this node. The goal is to avoid readding a removed
* node for some time. */
/* Modules system dictionary type. Keys are module name,
* values are pointer to RedisModule struct. */
dictType modulesDictType = {
dictSdsCaseHash, /* hash function */
NULL, /* key dup */
@ -1449,7 +1448,7 @@ void tryResizeHashTables(int dbid) {
/* Our hash table implementation performs rehashing incrementally while
* we write/read from the hash table. Still if the server is idle, the hash
* table will use two tables for a long time. So we try to use 1 millisecond
* of CPU time at every call of this function to perform some rehahsing.
* of CPU time at every call of this function to perform some rehashing.
*
* The function returns 1 if some rehashing was performed, otherwise 0
* is returned. */
@ -1471,8 +1470,8 @@ int incrementallyRehash(int dbid) {
* as we want to avoid resizing the hash tables when there is a child in order
* to play well with copy-on-write (otherwise when a resize happens lots of
* memory pages are copied). The goal of this function is to update the ability
* for dict.c to resize the hash tables accordingly to the fact we have o not
* running childs. */
* for dict.c to resize the hash tables accordingly to the fact we have an
* active fork child running. */
void updateDictResizePolicy(void) {
if (!hasActiveChildProcess())
dictEnableResize();
@ -1623,7 +1622,7 @@ int clientsCronTrackClientsMemUsage(client *c) {
mem += sdsAllocSize(c->querybuf);
mem += sizeof(client);
/* Now that we have the memory used by the client, remove the old
* value from the old categoty, and add it back. */
* value from the old category, and add it back. */
server.stat_clients_type_memory[c->client_cron_last_memory_type] -=
c->client_cron_last_memory_usage;
server.stat_clients_type_memory[type] += mem;
@ -1844,18 +1843,18 @@ void cronUpdateMemoryStats() {
* the frag ratio calculate may be off (ratio of two samples at different times) */
server.cron_malloc_stats.process_rss = zmalloc_get_rss();
server.cron_malloc_stats.zmalloc_used = zmalloc_used_memory();
/* Sampling the allcator info can be slow too.
* The fragmentation ratio it'll show is potentically more accurate
/* Sampling the allocator info can be slow too.
* The fragmentation ratio it'll show is potentially more accurate
* it excludes other RSS pages such as: shared libraries, LUA and other non-zmalloc
* allocations, and allocator reserved pages that can be pursed (all not actual frag) */
zmalloc_get_allocator_info(&server.cron_malloc_stats.allocator_allocated,
&server.cron_malloc_stats.allocator_active,
&server.cron_malloc_stats.allocator_resident);
/* in case the allocator isn't providing these stats, fake them so that
* fragmention info still shows some (inaccurate metrics) */
* fragmentation info still shows some (inaccurate metrics) */
if (!server.cron_malloc_stats.allocator_resident) {
/* LUA memory isn't part of zmalloc_used, but it is part of the process RSS,
* so we must desuct it in order to be able to calculate correct
* so we must deduct it in order to be able to calculate correct
* "allocator fragmentation" ratio */
size_t lua_memory = lua_gc(server.lua,LUA_GCCOUNT,0)*1024LL;
server.cron_malloc_stats.allocator_resident = server.cron_malloc_stats.process_rss - lua_memory;
@ -2038,7 +2037,7 @@ int serverCron(struct aeEventLoop *eventLoop, long long id, void *clientData) {
/* AOF write errors: in this case we have a buffer to flush as well and
* clear the AOF error in case of success to make the DB writable again,
* however to try every second is enough in case of 'hz' is set to
* an higher frequency. */
* a higher frequency. */
run_with_period(1000) {
if (server.aof_last_write_status == C_ERR)
flushAppendOnlyFile(0);
@ -2266,7 +2265,7 @@ void beforeSleep(struct aeEventLoop *eventLoop) {
if (moduleCount()) moduleReleaseGIL();
}
/* This function is called immadiately after the event loop multiplexing
/* This function is called immediately after the event loop multiplexing
* API returned, and the control is going to soon return to Redis by invoking
* the different events callbacks. */
void afterSleep(struct aeEventLoop *eventLoop) {
@ -2488,7 +2487,7 @@ void initServerConfig(void) {
R_NegInf = -1.0/R_Zero;
R_Nan = R_Zero/R_Zero;
/* Command table -- we initiialize it here as it is part of the
/* Command table -- we initialize it here as it is part of the
* initial configuration, since command names may be changed via
* redis.conf using the rename-command directive. */
server.commands = dictCreate(&commandTableDictType,NULL);
@ -3152,7 +3151,7 @@ int populateCommandTableParseFlags(struct redisCommand *c, char *strflags) {
}
/* Populates the Redis Command Table starting from the hard coded list
* we have on top of redis.c file. */
* we have on top of server.c file. */
void populateCommandTable(void) {
int j;
int numcommands = sizeof(redisCommandTable)/sizeof(struct redisCommand);
@ -3286,12 +3285,12 @@ void propagate(struct redisCommand *cmd, int dbid, robj **argv, int argc,
*
* 'cmd' must be a pointer to the Redis command to replicate, dbid is the
* database ID the command should be propagated into.
* Arguments of the command to propagte are passed as an array of redis
* Arguments of the command to propagate are passed as an array of redis
* objects pointers of len 'argc', using the 'argv' vector.
*
* The function does not take a reference to the passed 'argv' vector,
* so it is up to the caller to release the passed argv (but it is usually
* stack allocated). The function autoamtically increments ref count of
* stack allocated). The function automatically increments ref count of
* passed objects, so the caller does not need to. */
void alsoPropagate(struct redisCommand *cmd, int dbid, robj **argv, int argc,
int target)
@@ -3451,7 +3450,7 @@ void call(client *c, int flags) {
if (c->flags & CLIENT_FORCE_AOF) propagate_flags |= PROPAGATE_AOF;
/* However prevent AOF / replication propagation if the command
- * implementations called preventCommandPropagation() or similar,
+ * implementation called preventCommandPropagation() or similar,
* or if we don't have the call() flags to do so. */
if (c->flags & CLIENT_PREVENT_REPL_PROP ||
!(flags & CMD_CALL_PROPAGATE_REPL))
@@ -3706,7 +3705,7 @@ int processCommand(client *c) {
}
/* Save out_of_memory result at script start, otherwise if we check OOM
- * untill first write within script, memory used by lua stack and
+ * until first write within script, memory used by lua stack and
* arguments might interfere. */
if (c->cmd->proc == evalCommand || c->cmd->proc == evalShaCommand) {
server.lua_oom = out_of_memory;
@@ -3944,7 +3943,7 @@ int prepareForShutdown(int flags) {
/*================================== Commands =============================== */
- /* Sometimes Redis cannot accept write commands because there is a perstence
+ /* Sometimes Redis cannot accept write commands because there is a persistence
* error with the RDB or AOF file, and Redis is configured in order to stop
* accepting writes in such situation. This function returns if such a
* condition is active, and the type of the condition.


@@ -161,7 +161,7 @@ extern int configOOMScoreAdjValuesDefaults[CONFIG_OOM_COUNT];
/* Hash table parameters */
#define HASHTABLE_MIN_FILL 10 /* Minimal hash table fill 10% */
- /* Command flags. Please check the command table defined in the redis.c file
+ /* Command flags. Please check the command table defined in the server.c file
* for more information about the meaning of every flag. */
#define CMD_WRITE (1ULL<<0) /* "write" flag */
#define CMD_READONLY (1ULL<<1) /* "read-only" flag */
@@ -827,7 +827,7 @@ typedef struct client {
copying this slave output buffer
should use. */
char replid[CONFIG_RUN_ID_SIZE+1]; /* Master replication ID (if master). */
- int slave_listening_port; /* As configured with: SLAVECONF listening-port */
+ int slave_listening_port; /* As configured with: REPLCONF listening-port */
char slave_ip[NET_IP_STR_LEN]; /* Optionally given by REPLCONF ip-address */
int slave_capa; /* Slave capabilities: SLAVE_CAPA_* bitwise OR. */
multiState mstate; /* MULTI/EXEC state */
@@ -939,7 +939,7 @@ typedef struct redisOp {
} redisOp;
/* Defines an array of Redis operations. There is an API to add to this
- * structure in a easy way.
+ * structure in an easy way.
*
* redisOpArrayInit();
* redisOpArrayAppend();
@@ -1357,7 +1357,7 @@ struct redisServer {
unsigned int maxclients; /* Max number of simultaneous clients */
unsigned long long maxmemory; /* Max number of memory bytes to use */
int maxmemory_policy; /* Policy for key eviction */
- int maxmemory_samples; /* Pricision of random sampling */
+ int maxmemory_samples; /* Precision of random sampling */
int lfu_log_factor; /* LFU logarithmic counter factor. */
int lfu_decay_time; /* LFU counter decay factor. */
long long proto_max_bulk_len; /* Protocol bulk length maximum size. */
@@ -1438,7 +1438,7 @@ struct redisServer {
int lua_random_dirty; /* True if a random command was called during the
execution of the current script. */
int lua_replicate_commands; /* True if we are doing single commands repl. */
- int lua_multi_emitted;/* True if we already proagated MULTI. */
+ int lua_multi_emitted;/* True if we already propagated MULTI. */
int lua_repl; /* Script replication flags for redis.set_repl(). */
int lua_timedout; /* True if we reached the time limit for script
execution. */
@@ -1946,7 +1946,7 @@ void addACLLogEntry(client *c, int reason, int keypos, sds username);
/* Flags only used by the ZADD command but not by zsetAdd() API: */
#define ZADD_CH (1<<16) /* Return num of elements added or updated. */
- /* Struct to hold a inclusive/exclusive range spec by score comparison. */
+ /* Struct to hold an inclusive/exclusive range spec by score comparison. */
typedef struct {
double min, max;
int minex, maxex; /* are min or max exclusive? */


@@ -22,7 +22,7 @@
1. We use SipHash 1-2. This is not believed to be as strong as the
suggested 2-4 variant, but AFAIK there are not trivial attacks
against this reduced-rounds version, and it runs at the same speed
- as Murmurhash2 that we used previously, why the 2-4 variant slowed
+ as Murmurhash2 that we used previously, while the 2-4 variant slowed
down Redis by a 4% figure more or less.
2. Hard-code rounds in the hope the compiler can optimize it more
in this raw from. Anyway we always want the standard 2-4 variant.
@@ -36,7 +36,7 @@
perform a text transformation in some temporary buffer, which is costly.
5. Remove debugging code.
6. Modified the original test.c file to be a stand-alone function testing
- the function in the new form (returing an uint64_t) using just the
+ the function in the new form (returning an uint64_t) using just the
relevant test vector.
*/
#include <assert.h>
@@ -46,7 +46,7 @@
#include <ctype.h>
/* Fast tolower() alike function that does not care about locale
- * but just returns a-z insetad of A-Z. */
+ * but just returns a-z instead of A-Z. */
int siptlw(int c) {
if (c >= 'A' && c <= 'Z') {
return c+('a'-'A');


@@ -75,7 +75,7 @@ slowlogEntry *slowlogCreateEntry(client *c, robj **argv, int argc, long long dur
} else if (argv[j]->refcount == OBJ_SHARED_REFCOUNT) {
se->argv[j] = argv[j];
} else {
- /* Here we need to dupliacate the string objects composing the
+ /* Here we need to duplicate the string objects composing the
* argument vector of the command, because those may otherwise
* end shared with string objects stored into keys. Having
* shared objects between any part of Redis, and the data


@@ -115,7 +115,7 @@ robj *lookupKeyByPattern(redisDb *db, robj *pattern, robj *subst, int writeflag)
if (fieldobj) {
if (o->type != OBJ_HASH) goto noobj;
- /* Retrieve value from hash by the field name. The returend object
+ /* Retrieve value from hash by the field name. The returned object
* is a new object with refcount already incremented. */
o = hashTypeGetValueObject(o, fieldobj->ptr);
} else {


@@ -92,7 +92,7 @@ void freeSparklineSequence(struct sequence *seq) {
* ------------------------------------------------------------------------- */
/* Render part of a sequence, so that render_sequence() call call this function
- * with differnent parts in order to create the full output without overflowing
+ * with different parts in order to create the full output without overflowing
* the current terminal columns. */
sds sparklineRenderRange(sds output, struct sequence *seq, int rows, int offset, int len, int flags) {
int j;


@@ -74,7 +74,7 @@ typedef struct streamConsumer {
consumer not yet acknowledged. Keys are
big endian message IDs, while values are
the same streamNACK structure referenced
- in the "pel" of the conumser group structure
+ in the "pel" of the consumer group structure
itself, so the value is shared. */
} streamConsumer;


@@ -627,7 +627,7 @@ void hincrbyfloatCommand(client *c) {
server.dirty++;
/* Always replicate HINCRBYFLOAT as an HSET command with the final value
- * in order to make sure that differences in float pricision or formatting
+ * in order to make sure that differences in float precision or formatting
* will not create differences in replicas or after an AOF restart. */
robj *aux, *newobj;
aux = createStringObject("HSET",4);


@@ -722,7 +722,7 @@ void rpoplpushCommand(client *c) {
* Blocking POP operations
*----------------------------------------------------------------------------*/
- /* This is a helper function for handleClientsBlockedOnKeys(). It's work
+ /* This is a helper function for handleClientsBlockedOnKeys(). Its work
* is to serve a specific client (receiver) that is blocked on 'key'
* in the context of the specified 'db', doing the following:
*
@@ -807,7 +807,7 @@ void blockingPopGenericCommand(client *c, int where) {
return;
} else {
if (listTypeLength(o) != 0) {
- /* Non empty list, this is like a non normal [LR]POP. */
+ /* Non empty list, this is like a normal [LR]POP. */
char *event = (where == LIST_HEAD) ? "lpop" : "rpop";
robj *value = listTypePop(o,where);
serverAssert(value != NULL);
@@ -843,7 +843,7 @@ void blockingPopGenericCommand(client *c, int where) {
return;
}
- /* If the list is empty or the key does not exists we must block */
+ /* If the keys do not exist we must block */
blockForKeys(c,BLOCKED_LIST,c->argv + 1,c->argc - 2,timeout,NULL,NULL);
}


@@ -193,7 +193,7 @@ sds setTypeNextObject(setTypeIterator *si) {
}
/* Return random element from a non empty set.
- * The returned element can be a int64_t value if the set is encoded
+ * The returned element can be an int64_t value if the set is encoded
* as an "intset" blob of integers, or an SDS string if the set
* is a regular set.
*
@@ -458,7 +457,7 @@ void spopWithCountCommand(client *c) {
dbDelete(c->db,c->argv[1]);
notifyKeyspaceEvent(NOTIFY_GENERIC,"del",c->argv[1],c->db->id);
- /* Propagate this command as an DEL operation */
+ /* Propagate this command as a DEL operation */
rewriteClientCommandVector(c,2,shared.del,c->argv[1]);
signalModifiedKey(c,c->db,c->argv[1]);
server.dirty++;
@@ -692,7 +692,7 @@ void srandmemberWithCountCommand(client *c) {
* In this case we create a set from scratch with all the elements, and
* subtract random elements to reach the requested number of elements.
*
- * This is done because if the number of requsted elements is just
+ * This is done because if the number of requested elements is just
* a bit less than the number of elements in the set, the natural approach
* used into CASE 3 is highly inefficient. */
if (count*SRANDMEMBER_SUB_STRATEGY_MUL > size) {


@@ -1196,7 +1196,7 @@ void xaddCommand(client *c) {
int id_given = 0; /* Was an ID different than "*" specified? */
long long maxlen = -1; /* If left to -1 no trimming is performed. */
int approx_maxlen = 0; /* If 1 only delete whole radix tree nodes, so
- the maxium length is not applied verbatim. */
+ the maximum length is not applied verbatim. */
int maxlen_arg_idx = 0; /* Index of the count in MAXLEN, for rewriting. */
/* Parse options. */
@@ -1890,7 +1890,7 @@ NULL
}
}
- /* XSETID <stream> <groupname> <id>
+ /* XSETID <stream> <id>
*
* Set the internal "last ID" of a stream. */
void xsetidCommand(client *c) {
@@ -1982,7 +1982,7 @@ cleanup:
*
* If start and stop are omitted, the command just outputs information about
* the amount of pending messages for the key/group pair, together with
- * the minimum and maxium ID of pending messages.
+ * the minimum and maximum ID of pending messages.
*
* If start and stop are provided instead, the pending messages are returned
* with informations about the current owner, number of deliveries and last
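The two forms are easiest to see from the client side (hypothetical stream and group names):

    XPENDING mystream mygroup          summary form: pending count, smallest and
                                       greatest pending IDs, per-consumer counts
    XPENDING mystream mygroup - + 10   extended form: up to 10 entries, each with
                                       ID, owner, idle time and delivery count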


@@ -315,7 +315,7 @@ void msetGenericCommand(client *c, int nx) {
}
/* Handle the NX flag. The MSETNX semantic is to return zero and don't
- * set anything if at least one key alerady exists. */
+ * set anything if at least one key already exists. */
if (nx) {
for (j = 1; j < c->argc; j += 2) {
if (lookupKeyWrite(c->db,c->argv[j]) != NULL) {
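A concrete (hypothetical) session showing the all-or-nothing semantic:

    redis> MSETNX k1 "a" k2 "b"
    (integer) 1        <- neither key existed, both were set
    redis> MSETNX k2 "c" k3 "d"
    (integer) 0        <- k2 already exists, so nothing was set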


@@ -245,7 +245,7 @@ int zslDelete(zskiplist *zsl, double score, sds ele, zskiplistNode **node) {
return 0; /* not found */
}
- /* Update the score of an elmenent inside the sorted set skiplist.
+ /* Update the score of an element inside the sorted set skiplist.
* Note that the element must exist and must match 'score'.
* This function does not update the score in the hash table side, the
* caller should take care of it.


@@ -134,7 +134,7 @@ void enableTracking(client *c, uint64_t redirect_to, uint64_t options, robj **pr
CLIENT_TRACKING_NOLOOP);
c->client_tracking_redirection = redirect_to;
- /* This may be the first client we ever enable. Crete the tracking
+ /* This may be the first client we ever enable. Create the tracking
* table if it does not exist. */
if (TrackingTable == NULL) {
TrackingTable = raxNew();


@@ -1,17 +1,17 @@
{
- <lzf_unitialized_hash_table>
+ <lzf_uninitialized_hash_table>
Memcheck:Cond
fun:lzf_compress
}
{
- <lzf_unitialized_hash_table>
+ <lzf_uninitialized_hash_table>
Memcheck:Value4
fun:lzf_compress
}
{
- <lzf_unitialized_hash_table>
+ <lzf_uninitialized_hash_table>
Memcheck:Value8
fun:lzf_compress
}


@@ -99,7 +99,7 @@
* Integer encoded as 24 bit signed (3 bytes).
* |11111110| - 2 bytes
* Integer encoded as 8 bit signed (1 byte).
- * |1111xxxx| - (with xxxx between 0000 and 1101) immediate 4 bit integer.
+ * |1111xxxx| - (with xxxx between 0001 and 1101) immediate 4 bit integer.
* Unsigned integer from 0 to 12. The encoded value is actually from
* 1 to 13 because 0000 and 1111 can not be used, so 1 should be
* subtracted from the encoded 4 bit value to obtain the right value.
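The corrected range is easy to check against a small decoder sketch (helper name is hypothetical, not from ziplist.c):

    /* Decode a |1111xxxx| immediate-integer byte into its value 0..12:
     * valid encodings are 0xF1..0xFD, i.e. xxxx between 0001 and 1101. */
    int decodeImmediateInt(unsigned char b) {
        return (b & 0x0F) - 1;
    }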
@@ -191,10 +191,10 @@
#include "redisassert.h"
#define ZIP_END 255 /* Special "end of ziplist" entry. */
-#define ZIP_BIG_PREVLEN 254 /* Max number of bytes of the previous entry, for
- the "prevlen" field prefixing each entry, to be
- represented with just a single byte. Otherwise
- it is represented as FE AA BB CC DD, where
+#define ZIP_BIG_PREVLEN 254 /* ZIP_BIG_PREVLEN - 1 is the max number of bytes of
+ the previous entry, for the "prevlen" field prefixing
+ each entry, to be represented with just a single byte.
+ Otherwise it is represented as FE AA BB CC DD, where
AA BB CC DD are a 4 bytes unsigned integer
representing the previous entry len. */
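The reworded comment corresponds to roughly this prevlen encoder (a sketch with a hypothetical helper; byte-order handling and bounds checks elided):

    #include <stdint.h>
    #include <string.h>

    /* Encode 'len' as a ziplist "prevlen" field into p; return bytes used. */
    unsigned int encodePrevLength(unsigned char *p, uint32_t len) {
        if (len < 254) {                 /* up to ZIP_BIG_PREVLEN - 1 (253) */
            p[0] = (unsigned char)len;   /* single-byte form */
            return 1;
        }
        p[0] = 254;                      /* 0xFE marker, then 4-byte length */
        memcpy(p+1, &len, sizeof(len));
        return 5;
    }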
@@ -317,7 +317,7 @@ unsigned int zipIntSize(unsigned char encoding) {
return 0;
}
- /* Write the encoidng header of the entry in 'p'. If p is NULL it just returns
+ /* Write the encoding header of the entry in 'p'. If p is NULL it just returns
* the amount of bytes required to encode such a length. Arguments:
*
* 'encoding' is the encoding we are using for the entry. It could be
@@ -325,7 +325,7 @@ unsigned int zipIntSize(unsigned char encoding) {
* for single-byte small immediate integers.
*
* 'rawlen' is only used for ZIP_STR_* encodings and is the length of the
- * srting that this entry represents.
+ * string that this entry represents.
*
* The function returns the number of bytes used by the encoding/length
* header stored in 'p'. */
@@ -951,7 +951,7 @@ unsigned char *ziplistMerge(unsigned char **first, unsigned char **second) {
} else {
/* !append == prepending to target */
/* Move target *contents* exactly size of (source - [END]),
- * then copy source into vacataed space (source - [END]):
+ * then copy source into vacated space (source - [END]):
* [SOURCE - END, TARGET - HEADER] */
memmove(target + source_bytes - ZIPLIST_END_SIZE,
target + ZIPLIST_HEADER_SIZE,


@@ -133,7 +133,7 @@ static unsigned int zipmapEncodeLength(unsigned char *p, unsigned int len) {
* zipmap. Returns NULL if the key is not found.
*
* If NULL is returned, and totlen is not NULL, it is set to the entire
- * size of the zimap, so that the calling function will be able to
+ * size of the zipmap, so that the calling function will be able to
* reallocate the original zipmap to make room for more entries. */
static unsigned char *zipmapLookupRaw(unsigned char *zm, unsigned char *key, unsigned int klen, unsigned int *totlen) {
unsigned char *p = zm+1, *k = NULL;


@@ -1,7 +1,7 @@
# Failover stress test.
# In this test a different node is killed in a loop for N
# iterations. The test checks that certain properties
- # are preseved across iterations.
+ # are preserved across iterations.
source "../tests/includes/init-tests.tcl"
source "../../../tests/support/cli.tcl"
@@ -32,7 +32,7 @@ test "Enable AOF in all the instances" {
}
}
- # Return nno-zero if the specified PID is about a process still in execution,
+ # Return non-zero if the specified PID is about a process still in execution,
# otherwise 0 is returned.
proc process_is_running {pid} {
# PS should return with an error if PID is non existing,
@@ -45,7 +45,7 @@ proc process_is_running {pid} {
#
# - N commands are sent to the cluster in the course of the test.
# - Every command selects a random key from key:0 to key:MAX-1.
- # - The operation RPUSH key <randomvalue> is perforemd.
+ # - The operation RPUSH key <randomvalue> is performed.
# - Tcl remembers into an array all the values pushed to each list.
# - After N/2 commands, the resharding process is started in background.
# - The test continues while the resharding is in progress.


@@ -322,7 +322,7 @@ proc pause_on_error {} {
puts "S <id> cmd ... arg Call command in Sentinel <id>."
puts "R <id> cmd ... arg Call command in Redis <id>."
puts "SI <id> <field> Show Sentinel <id> INFO <field>."
- puts "RI <id> <field> Show Sentinel <id> INFO <field>."
+ puts "RI <id> <field> Show Redis <id> INFO <field>."
puts "continue Resume test."
} else {
set errcode [catch {eval $line} retval]


@@ -16,7 +16,7 @@ start_server {tags {"repl"}} {
s 0 role
} {slave}
- test {Test replication with parallel clients writing in differnet DBs} {
+ test {Test replication with parallel clients writing in different DBs} {
after 5000
stop_bg_complex_data $load_handle0
stop_bg_complex_data $load_handle1


@@ -108,7 +108,7 @@ proc test {name code {okpattern undefined} {options undefined}} {
return
}
- # abort if test name in skiptests
+ # abort if only_tests was set but test name is not included
if {[llength $::only_tests] > 0 && [lsearch $::only_tests $name] < 0} {
incr ::num_skipped
send_data_packet $::test_server_fd skip $name


@@ -471,7 +471,7 @@ proc signal_idle_client fd {
# The the_end function gets called when all the test units were already
# executed, so the test finished.
proc the_end {} {
- # TODO: print the status, exit with the rigth exit code.
+ # TODO: print the status, exit with the right exit code.
puts "\n The End\n"
puts "Execution time of different units:"
foreach {time name} $::clients_time_history {
@@ -526,9 +526,10 @@ proc print_help_screen {} {
"--stack-logging Enable OSX leaks/malloc stack logging."
"--accurate Run slow randomized tests for more iterations."
"--quiet Don't show individual tests."
- "--single <unit> Just execute the specified unit (see next option). this option can be repeated."
+ "--single <unit> Just execute the specified unit (see next option). This option can be repeated."
"--verbose Increases verbosity."
"--list-tests List all the available test units."
- "--only <test> Just execute the specified test by test name. this option can be repeated."
+ "--only <test> Just execute the specified test by test name. This option can be repeated."
"--skip-till <unit> Skip all units until (and including) the specified one."
"--skipunit <unit> Skip one unit."
"--clients <num> Number of test clients (default 16)."


@@ -72,7 +72,7 @@ start_server {tags {"expire"}} {
list [r persist foo] [r persist nokeyatall]
} {0 0}
- test {EXPIRE pricision is now the millisecond} {
+ test {EXPIRE precision is now the millisecond} {
# This test is very likely to do a false positive if the
# server is under pressure, so if it does not work give it a few more
# chances.


@@ -79,7 +79,7 @@ start_server {tags {"hll"}} {
}
}
- test {Corrupted sparse HyperLogLogs are detected: Additionl at tail} {
+ test {Corrupted sparse HyperLogLogs are detected: Additional at tail} {
r del hll
r pfadd hll a b c
r append hll "hello"


@@ -533,7 +533,7 @@ start_server {tags {"scripting"}} {
# Note: keep this test at the end of this server stanza because it
# kills the server.
test {SHUTDOWN NOSAVE can kill a timedout script anyway} {
- # The server could be still unresponding to normal commands.
+ # The server should be still unresponding to normal commands.
catch {r ping} e
assert_match {BUSY*} $e
catch {r shutdown nosave}


@@ -1,4 +1,4 @@
-Create-custer is a small script used to easily start a big number of Redis
+create-cluster is a small script used to easily start a big number of Redis
instances configured to run in cluster mode. Its main goal is to allow manual
testing in a condition which is not easy to replicate with the Redis cluster
unit tests, for example when a lot of instances are needed in order to trigger


@@ -5,7 +5,7 @@ rehashing.c
Visually show buckets in the two hash tables between rehashings. Also stress
test getRandomKeys() implementation, that may actually disappear from
-Redis soon, however visualization some code is reusable in new bugs
+Redis soon, However the visualization code is reusable in new bugs
investigation.
Compile with: