Spelling fixes.

Bruce Mitchener 2011-07-26 10:18:36 +07:00
parent 80e87a461a
commit 5215ab1418
6 changed files with 17 additions and 17 deletions

View File

@@ -9,7 +9,7 @@ implemented. You can read more about it here:
http://groups.google.com/group/redis-db/browse_thread/thread/d444bc786689bde9
-This Redis version is not intented for production environments.
+This Redis version is not intended for production environments.
Cheers,
Salvatore

View File

@@ -1,7 +1,7 @@
CLUSTER README
==============
-Redis Cluster is currenty a work in progress, however there are a few things
+Redis Cluster is currently a work in progress, however there are a few things
that you can do already with it to see how it works.
The following guide show you how to setup a three nodes cluster and issue some
@@ -21,7 +21,7 @@ basic command against it.
TODO
====
-*** WARNING: all the following problably has some meaning only for
+*** WARNING: all the following probably has some meaning only for
*** me (antirez), most info are not updated, so please consider this file
*** as a private TODO list / brainstorming.
@@ -72,7 +72,7 @@ With -MOVED the client should update its hash slots table to reflect the fact th
alive table if the received alive timestamp is more recent the
one present in the node local table.
-In the ping packet every node "gossip" information is somethig like
+In the ping packet every node "gossip" information is something like
this:
<ip>:<port>:<status>:<pingsent_timestamp>:<pongreceived_timestamp>
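The gossip entry format shown in this hunk can be made concrete with a small parser. This is an illustrative aside, not part of the patched file or of Redis itself: the `NodeInfo` name and the field handling are assumptions derived only from the `<ip>:<port>:<status>:<pingsent_timestamp>:<pongreceived_timestamp>` format string above.

```python
from dataclasses import dataclass

@dataclass
class NodeInfo:
    ip: str
    port: int
    status: str
    ping_sent: int       # timestamp of the last PING sent to this node
    pong_received: int   # timestamp of the last PONG received from it

def parse_gossip_entry(entry: str) -> NodeInfo:
    # Split from the right so only the last four ':'-separated fields are
    # peeled off, leaving the ip intact even if it contained a ':' itself.
    ip, port, status, ping_sent, pong_received = entry.rsplit(":", 4)
    return NodeInfo(ip, int(port), status, int(ping_sent), int(pong_received))

node = parse_gossip_entry("192.168.1.10:6379:ok:1311646800:1311646801")
```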

View File

@@ -1,4 +1,4 @@
-1. Enter irc.freenode.org #redis and start talking with 'antirez' and/or 'pietern' to check if there is interest for such a feature and to understand the probability of it being merged. We'll try hard to keep Redis simple... so you'll likely encounter an high resistence.
+1. Enter irc.freenode.org #redis and start talking with 'antirez' and/or 'pietern' to check if there is interest for such a feature and to understand the probability of it being merged. We'll try hard to keep Redis simple... so you'll likely encounter high resistance.
2. Drop a message to the Redis Google Group with a proposal of semantics/API.

TODO
View File

@@ -39,7 +39,7 @@ SCRIPTING
APPEND ONLY FILE
================
-* in AOF rewirte use HMSET to rewrite small hashes instead of multiple calls
+* in AOF rewrite use HMSET to rewrite small hashes instead of multiple calls
to HSET.
OPTIMIZATIONS
@@ -75,7 +75,7 @@ KNOWN BUGS
AOF is loading.
* #519: Slave may have expired keys that were never read in the master (so a DEL
is not sent in the replication channel) but are already expired since
-a lot of time. Maybe after a given delay that is undoubltly greater than
+a lot of time. Maybe after a given delay that is undoubtably greater than
the replication link latency we should expire this key on the slave on
access?

View File

@@ -12,7 +12,7 @@ sub-dictionaries (hashes) and so forth.
While Redis is very fast, currently it lacks scalability in the form of ability
to transparently run across different nodes. This is desirable mainly for the
-following three rasons:
+following three reasons:
A) Fault tolerance. Some node may go off line without affecting the operations.
B) Holding bigger datasets without using a single box with a lot of RAM.
@@ -33,7 +33,7 @@ Still a Dynamo alike DHT may not be the best fit for Redis.
Redis is very simple and fast at its core, so Redis cluster should try to
follow the same guidelines. The first problem with a Dynamo-alike DHT is that
-Redis supports complex data types. Merging complex values like lsits, where
+Redis supports complex data types. Merging complex values like lists, where
in the case of a netsplit may diverge in very complex ways, is not going to
be easy. The "most recent data" wins is not applicable and all the resolution
business should be in the application.
@@ -114,7 +114,7 @@ Configuration Node. This connections are keep alive with PING requests from time
to time if there is no traffic. This way Proxy Nodes can understand asap if
there is a problem in some Data Node or in the Configuration Node.
-When a Proxy Node is started it needs to know the Configuration node address in order to load the infomration about the Data nodes and the mapping between the key space and the nodes.
+When a Proxy Node is started it needs to know the Configuration node address in order to load the information about the Data nodes and the mapping between the key space and the nodes.
On startup a Proxy Node will also register itself in the Configuration node, and will make sure to refresh it's configuration every N seconds (via an EXPIREing key) so that it's possible to detect when a Proxy node fails.
@@ -126,8 +126,8 @@ The Proxy Node is also in charge of signaling failing Data nodes to the Configur
When a new Data node joins or leaves the cluster, and in general when the cluster configuration changes, all the Proxy nodes will receive a notification and will reload the configuration from the Configuration node.
-Proxy Nodes - how queries are submited
-======================================
+Proxy Nodes - how queries are submitted
+=======================================
This is how a query is processed:
@@ -140,7 +140,7 @@ WRITE QUERY:
3a) The Proxy Node forwards the query to M Data Nodes at the same time, waiting for replies.
3b) Once all the replies are received the Proxy Node checks that the replies are consistent. For instance all the M nodes need to reply with OK and so forth. If the query fails in a subset of nodes but succeeds in other nodes, the failing nodes are considered unreliable and are put off line notifying the configuration node.
-3c) The reply is transfered back to the client.
+3c) The reply is transferred back to the client.
READ QUERY:
@@ -189,7 +189,7 @@ When a Data node is added to the cluster, it is added via an LPUSH operation int
LPUSH newnodes 192.168.1.55:6379
-The Handling node will check from time to time for this new elements in the "newode" list. If there are new nodes pending to enter the cluster, they are processed one after the other in this way:
+The Handling node will check from time to time for this new elements in the "newnode" list. If there are new nodes pending to enter the cluster, they are processed one after the other in this way:
For instance let's assume there are already two Data nodes in the cluster:
@@ -198,7 +198,7 @@ For instance let's assume there are already two Data nodes in the cluster:
We add a new node 192.168.1.3:6379 via the LPUSH operation.
-We can imagine that the 1024 hash slots are assigned equally among the two inital nodes. In order to add the new (third) node what we have to do is to move incrementally 341 slots form the two old servers to the new one.
+We can imagine that the 1024 hash slots are assigned equally among the two initial nodes. In order to add the new (third) node what we have to do is to move incrementally 341 slots form the two old servers to the new one.
For now we can think that every hash slot is only stored in a single server, to generalize the idea later.
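The rebalancing walked through in this file section (1024 slots split between two nodes, a third joins, roughly 1024/3 = 341 slots migrate to it from the old servers) can be sketched as below. This is an illustrative aside, not part of the patched file or of Redis: the `rebalance` helper and the slot-selection policy are assumptions.

```python
from collections import Counter

NUM_SLOTS = 1024

def rebalance(slot_map: dict, new_node: str) -> dict:
    """Reassign slots to new_node until it owns its fair share."""
    nodes = sorted(set(slot_map.values())) + [new_node]
    target = NUM_SLOTS // len(nodes)      # 341 slots each for three nodes
    counts = Counter(slot_map.values())
    moved = 0
    for slot in range(NUM_SLOTS):
        if moved >= target:
            break
        owner = slot_map[slot]
        if counts[owner] > target:        # only take from over-loaded old nodes
            counts[owner] -= 1
            slot_map[slot] = new_node     # in reality this implies migrating data
            moved += 1
    return slot_map

# Two initial Data nodes own 512 slots each, as in the example above.
slots = {s: ("192.168.1.1:6379" if s < 512 else "192.168.1.2:6379")
         for s in range(NUM_SLOTS)}
slots = rebalance(slots, "192.168.1.3:6379")
```

After the call the new node owns 341 slots and the two old servers keep roughly a third each, matching the incremental move the text describes.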

View File

@@ -96,7 +96,7 @@ Clients of the cluster are required to have the cluster configuration loaded
into memory. The cluster configuration is the sum of the following info:
- Number of data nodes in the cluster, for instance, 10
-- A map between hash slots and nodes, so for instnace:
+- A map between hash slots and nodes, so for instance:
hash slot 1 -> node 0
hash slot 2 -> node 5
hash slot 3 -> node 3
@@ -140,7 +140,7 @@ to time is going to have no impact in the overall performance.
-------------
To perform a read query the client hashes the key argument from the command
-(in the intiial version of Redis Cluster only single-key commands are
+(in the initial version of Redis Cluster only single-key commands are
allowed). Using the in memory configuration it maps the hash key to the
node ID.
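The client-side lookup described in this last hunk (hash the key argument, map the hash slot to a node ID via the in-memory configuration) can be sketched as below. This is an illustrative aside, not part of the patched file or of Redis: the document only specifies the slot-to-node map, so the `crc32` hash function and the placeholder slot assignment are assumptions.

```python
import zlib

NUM_SLOTS = 1024   # slot count used earlier in this document
NUM_NODES = 10     # example node count from the configuration section above

# Placeholder slot -> node map; a real client loads this from the cluster
# configuration (e.g. hash slot 1 -> node 0, hash slot 2 -> node 5, ...).
slot_to_node = {slot: slot % NUM_NODES for slot in range(NUM_SLOTS)}

def key_to_slot(key: bytes) -> int:
    # The hash function is not named by the document; crc32 stands in here.
    return zlib.crc32(key) % NUM_SLOTS

def node_for_key(key: bytes) -> int:
    # Map the key's hash slot to a node ID using the in-memory configuration.
    return slot_to_node[key_to_slot(key)]
```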