This repository was archived by the owner on Oct 17, 2022. It is now read-only.
Merged
3 changes: 1 addition & 2 deletions src/api/server/configuration.rst
@@ -162,8 +162,7 @@ interact with the local node's configuration.
"default_handler": "{couch_httpd_db, handle_request}",
"enable_cors": "false",
"port": "5984",
"secure_rewrites": "true",
"vhost_global_handlers": "_utils, _uuids, _session, _users"
"secure_rewrites": "true"
}

.. _api/config/section/key:
7 changes: 4 additions & 3 deletions src/cluster/databases.rst
@@ -56,9 +56,10 @@ placement rules.
Use of the ``placement`` argument will **override** the standard
logic for shard replica cardinality (specified by ``[cluster] n``.)

First, each node must be labeled with a zone attribute. This defines which
zone each node is in. You do this by editing the node's document in the
``/nodes`` database, which is accessed through the "back-door" (5986) port.
First, each node must be labeled with a zone attribute. This defines which zone each node
is in. You do this by editing the node's document in the system ``_nodes`` database, which
is accessed node-local via the ``GET /_node/_local/_nodes/{node-name}`` endpoint.

Add a key value pair of the form:

.. code-block:: text
6 changes: 3 additions & 3 deletions src/cluster/nodes.rst
@@ -44,7 +44,7 @@ To add a node simply do:

.. code-block:: text

curl -X PUT "http://xxx.xxx.xxx.xxx:5986/_nodes/node2@yyy.yyy.yyy.yyy" -d {}
curl -X PUT "http://xxx.xxx.xxx.xxx/_node/_local/_nodes/node2@yyy.yyy.yyy.yyy" -d {}

Now look at ``http://server1:5984/_membership`` again.
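
The new node should now appear in both membership lists. A sketch of what the
response might look like (the node names and addresses here are placeholders):

.. code-block:: text

    curl "http://server1:5984/_membership"
    {"all_nodes":["node1@xxx.xxx.xxx.xxx","node2@yyy.yyy.yyy.yyy"],
     "cluster_nodes":["node1@xxx.xxx.xxx.xxx","node2@yyy.yyy.yyy.yyy"]}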

@@ -79,11 +79,11 @@ revision of the document that signifies that node’s existence:

.. code-block:: text

curl "http://xxx.xxx.xxx.xxx:5986/_nodes/node2@yyy.yyy.yyy.yyy"
curl "http://xxx.xxx.xxx.xxx/_node/_local/_nodes/node2@yyy.yyy.yyy.yyy"
{"_id":"node2@yyy.yyy.yyy.yyy","_rev":"1-967a00dff5e02add41820138abb3284d"}

With that ``_rev``, you can now proceed to delete the node document:

.. code-block:: text

curl -X DELETE "http://xxx.xxx.xxx.xxx:5986/_nodes/node2@yyy.yyy.yyy.yyy?rev=1-967a00dff5e02add41820138abb3284d"
curl -X DELETE "http://xxx.xxx.xxx.xxx/_node/_local/_nodes/node2@yyy.yyy.yyy.yyy?rev=1-967a00dff5e02add41820138abb3284d"
70 changes: 35 additions & 35 deletions src/cluster/sharding.rst
@@ -107,13 +107,13 @@ have responded:

.. code-block:: bash

$ curl "$COUCH_URL:5984/<db>/<doc>?r=2"
$ curl "$COUCH_URL:5984/{db}/{doc}?r=2"

Here is a similar example for writing a document:

.. code-block:: bash

$ curl -X PUT "$COUCH_URL:5984/<db>/<doc>?w=2" -d '{...}'
$ curl -X PUT "$COUCH_URL:5984/{db}/{doc}?w=2" -d '{...}'

Setting ``r`` or ``w`` to be equal to ``n`` (the number of replicas)
means you will only receive a response once all nodes with relevant
@@ -336,13 +336,13 @@ another. For example:
.. code-block:: bash

# on one machine
$ mkdir -p data/.shards/<range>
$ mkdir -p data/shards/<range>
$ mkdir -p data/.shards/{range}
$ mkdir -p data/shards/{range}
# on the other
$ scp <couch-dir>/data/.shards/<range>/<database>.<datecode>* \
<node>:<couch-dir>/data/.shards/<range>/
$ scp <couch-dir>/data/shards/<range>/<database>.<datecode>.couch \
<node>:<couch-dir>/data/shards/<range>/
$ scp {couch-dir}/data/.shards/{range}/{database}.{datecode}* \
{node}:{couch-dir}/data/.shards/{range}/
$ scp {couch-dir}/data/shards/{range}/{database}.{datecode}.couch \
{node}:{couch-dir}/data/shards/{range}/

.. note::
Remember to move view files before database files! If a view index
@@ -379,7 +379,7 @@ To enable maintenance mode:
.. code-block:: bash

$ curl -X PUT -H "Content-type: application/json" \
$COUCH_URL:5984/_node/<nodename>/_config/couchdb/maintenance_mode \
$COUCH_URL:5984/_node/{node-name}/_config/couchdb/maintenance_mode \
-d "\"true\""

Then, verify that the node is in maintenance mode by performing a ``GET
@@ -407,15 +407,14 @@ shard replicas for a given database.

To update the cluster metadata, use the special ``/_dbs`` database,
which is an internal CouchDB database that maps databases to shards and
nodes. This database is replicated between nodes. It is accessible only
via a node-local port, usually at port 5986. By default, this port is
only available on the localhost interface for security purposes.
nodes. This database is automatically replicated between nodes. It is accessible
only through the special ``/_node/_local/_dbs`` endpoint.

First, retrieve the database's current metadata:

.. code-block:: bash

$ curl http://localhost:5986/_dbs/{name}
$ curl http://localhost/_node/_local/_dbs/{name}
{
"_id": "{name}",
"_rev": "1-e13fb7e79af3b3107ed62925058bfa3a",
@@ -471,11 +470,11 @@ metadata's ``changelog`` attribute:

.. code-block:: javascript

["add", "<range>", "<node-name>"]
["add", "{range}", "{node-name}"]

The ``<range>`` is the specific shard range for the shard. The ``<node-
name>`` should match the name and address of the node as displayed in
``GET /_membership`` on the cluster.
The ``{range}`` is the specific shard range for the shard. The ``{node-name}``
should match the name and address of the node as displayed in ``GET
/_membership`` on the cluster.
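
For example, after adding a replica of one shard range on a third node, the
``changelog`` might read as follows (the ranges and node names are illustrative
and will differ in your cluster):

.. code-block:: javascript

    [
        ["add", "00000000-7fffffff", "node1@xxx.xxx.xxx.xxx"],
        ["add", "80000000-ffffffff", "node1@xxx.xxx.xxx.xxx"],
        ["add", "00000000-7fffffff", "node3@zzz.zzz.zzz.zzz"]
    ]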

.. note::
When removing a shard from a node, specify ``remove`` instead of ``add``.
@@ -526,7 +525,7 @@ Now you can ``PUT`` this new metadata:

.. code-block:: bash

$ curl -X PUT http://localhost:5986/_dbs/{name} -d '{...}'
$ curl -X PUT http://localhost/_node/_local/_dbs/{name} -d '{...}'

.. _cluster/sharding/sync:

@@ -541,7 +540,7 @@ CouchDB to synchronize all replicas of all shards in a database with the

.. code-block:: bash

$ curl -X POST $COUCH_URL:5984/{dbname}/_sync_shards
$ curl -X POST $COUCH_URL:5984/{db}/_sync_shards
{"ok":true}

This starts the synchronization process. Note that this will put
@@ -562,7 +561,7 @@ Monitor internal replication to ensure up-to-date shard(s)

After you complete the previous step, CouchDB will have started
synchronizing the shards. You can observe this happening by monitoring
the ``/_node/<nodename>/_system`` endpoint, which includes the
the ``/_node/{node-name}/_system`` endpoint, which includes the
``internal_replication_jobs`` metric.
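
A sketch of polling that metric from the command line (substitute a real node
name for ``{node-name}``; the use of ``jq`` to extract the field is an
assumption, not a requirement):

.. code-block:: bash

    $ curl -s $COUCH_URL:5984/_node/{node-name}/_system | jq .internal_replication_jobs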

Once this metric has returned to the baseline from before you started
@@ -591,7 +590,7 @@ Update cluster metadata again to remove the source shard

Now, remove the source shard from the shard map the same way that you
added the new target shard to the shard map in step 2. Be sure to add
the ``["remove", <range>, <source-shard>]`` entry to the end of the
the ``["remove", {range}, {source-shard}]`` entry to the end of the
changelog as well as modifying both the ``by_node`` and ``by_range`` sections of
the database metadata document.

@@ -605,8 +604,8 @@ command line on the source host, along with any view shard replicas:

.. code-block:: bash

$ rm <couch-dir>/data/shards/<range>/<dbname>.<datecode>.couch
$ rm -r <couch-dir>/data/.shards/<range>/<dbname>.<datecode>*
$ rm {couch-dir}/data/shards/{range}/{db}.{datecode}.couch
$ rm -r {couch-dir}/data/.shards/{range}/{db}.{datecode}*

Congratulations! You have moved a database shard replica. By adding and removing
database shard replicas in this way, you can change the cluster's shard layout,
@@ -623,10 +622,11 @@ database creation time using placement rules.
Use of the ``placement`` option will **override** the ``n`` option,
both in the ``.ini`` file as well as when specified in a ``URL``.

First, each node must be labeled with a zone attribute. This defines
which zone each node is in. You do this by editing the node’s document
in the ``/_nodes`` database, which is accessed through the node-local
port. Add a key value pair of the form:
First, each node must be labeled with a zone attribute. This defines which zone
each node is in. You do this by editing the node’s document in the special
``/_nodes`` database, which is accessed through the special node-local API
endpoint at ``/_node/_local/_nodes/{node-name}``. Add a key value pair of the
form:

::

@@ -636,11 +636,11 @@ Do this for all of the nodes in your cluster. For example:

.. code-block:: bash

$ curl -X PUT http://localhost:5986/_nodes/<node-name> \
$ curl -X PUT http://localhost/_node/_local/_nodes/{node-name} \
-d '{ \
"_id": "<node-name>",
"_rev": "<rev>",
"zone": "<zone-name>"
"_id": "{node-name}",
"_rev": "{rev}",
"zone": "{zone-name}"
}'

In the local config file (``local.ini``) of each node, define a
@@ -649,20 +649,20 @@ consistent cluster-wide setting like:
::

[cluster]
placement = <zone-name-1>:2,<zone-name-2>:1
placement = {zone-name-1}:2,{zone-name-2}:1

In this example, CouchDB will ensure that two replicas for a shard will
be hosted on nodes with the zone attribute set to ``<zone-name-1>`` and
be hosted on nodes with the zone attribute set to ``{zone-name-1}`` and
one replica will be hosted on a node with the zone attribute set to
``<zone-name-2>``.
``{zone-name-2}``.

This approach is flexible, since you can also specify zones on a per-
database basis by specifying the placement setting as a query parameter
when the database is created, using the same syntax as the ini file:

.. code-block:: bash

curl -X PUT $COUCH_URL:5984/<dbname>?zone=<zone>
curl -X PUT $COUCH_URL:5984/{db}?zone={zone}

The ``placement`` argument may also be specified. Note that this *will*
override the logic that determines the number of created replicas!
18 changes: 0 additions & 18 deletions src/config/auth.rst
@@ -119,11 +119,6 @@ Authentication Configuration
[chttpd]
require_valid_user = false

.. note::
This setting only affects the clustered-port (5984 by default).
To make the same change for the node-local port (5986 by default),
set the ``[couch_httpd_auth]`` setting of the same name.

.. config:option:: require_valid_user_except_for_up :: Force user auth (mostly)

When this option is set to ``true``, no requests are allowed from
@@ -180,10 +175,6 @@ Authentication Configuration
[couch_httpd_auth]
authentication_redirect = /_utils/session.html

.. note::
This setting affects both the clustered-port (5984 by default)
and the node-local port (5986 by default).

.. config:option:: iterations :: PBKDF2 iterations count

.. versionadded:: 1.3
@@ -252,11 +243,6 @@ Authentication Configuration
[couch_httpd_auth]
require_valid_user = false

.. warning::
This setting only affects the node-local port (5986 by default).
Most administrators want the ``[chttpd]`` setting of the same name
for clustered-port (5984) behaviour.

.. config:option:: secret :: Authentication secret token

The secret token is used for :ref:`api/auth/proxy` and for :ref:`api/auth/cookie`. ::
@@ -283,10 +269,6 @@ Authentication Configuration
[couch_httpd_auth]
users_db_public = false

.. note::
This setting affects both the clustered-port (5984 by default)
and the node-local port (5986 by default).

.. config:option:: x_auth_roles :: Proxy Auth roles header

The HTTP header name (``X-Auth-CouchDB-Roles`` by default) that