From 0828bc505fe237fe17728cf7cca2c8518eb74989 Mon Sep 17 00:00:00 2001 From: Joan Touzet Date: Mon, 19 Nov 2018 17:22:34 -0500 Subject: [PATCH] Improve CouchDB setup documentation, esp. for clustering --- src/cluster/index.rst | 7 +- src/cluster/setup.rst | 266 ------------- src/index.rst | 1 + src/install/docker.rst | 4 +- src/install/freebsd.rst | 4 +- src/install/index.rst | 7 +- src/install/mac.rst | 4 +- src/install/snap.rst | 4 +- src/install/troubleshooting.rst | 4 +- src/install/unix.rst | 13 +- src/install/upgrading.rst | 2 +- src/install/windows.rst | 6 +- src/setup/cluster.rst | 364 ++++++++++++++++++ src/setup/index.rst | 27 ++ .../setup.rst => setup/single-node.rst} | 48 +-- 15 files changed, 435 insertions(+), 326 deletions(-) delete mode 100644 src/cluster/setup.rst create mode 100644 src/setup/cluster.rst create mode 100644 src/setup/index.rst rename src/{install/setup.rst => setup/single-node.rst} (53%) diff --git a/src/cluster/index.rst b/src/cluster/index.rst index a66c6547..93973d01 100644 --- a/src/cluster/index.rst +++ b/src/cluster/index.rst @@ -16,17 +16,16 @@ Cluster Reference ================= -As of 2.0 CouchDB now have two modes of operations: +As of CouchDB 2.0.0, CouchDB can be run in two different modes of operation: * Standalone * Cluster -This part of the documentation is about setting up and maintain a CouchDB -cluster. +This section details the theory behind CouchDB clusters, and provides specific +operational instructions on node, database and shard management. .. toctree:: :maxdepth: 2 - setup theory nodes databases diff --git a/src/cluster/setup.rst b/src/cluster/setup.rst deleted file mode 100644 index d21095c3..00000000 --- a/src/cluster/setup.rst +++ /dev/null @@ -1,266 +0,0 @@ -.. Licensed under the Apache License, Version 2.0 (the "License"); you may not -.. use this file except in compliance with the License. You may obtain a copy of -.. the License at -.. -.. http://www.apache.org/licenses/LICENSE-2.0 -.. -.. 
Unless required by applicable law or agreed to in writing, software -.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -.. License for the specific language governing permissions and limitations under -.. the License. - -.. _cluster/setup: - -====== -Set Up -====== - -Everything you need to know to prepare the cluster for the installation of -CouchDB. - -Firewall -======== - -If you do not have a firewall between your servers, then you can skip this. - -CouchDB in cluster mode uses the port ``5984`` just as standalone, but it also -uses ``5986`` for node-local APIs. - -Erlang uses TCP port ``4369`` (EPMD) to find other nodes, so all servers must be -able to speak to each other on this port. In an Erlang Cluster, all nodes are -connected to all other nodes. A mesh. - -.. warning:: - If you expose the port ``4369`` to the Internet or any other untrusted - network, then the only thing protecting you is the - `cookie`_. - -.. _cookie: http://erlang.org/doc/reference_manual/distributed.html - -Every Erlang application then uses other ports for talking to each other. Yes, -this means random ports. This will obviously not work with a firewall, but it is -possible to force an Erlang application to use a specific port rage. - -This documentation will use the range TCP ``9100-9200``. Open up those ports in -your firewalls and it is time to test it. - -Configure and Test the Communication with Erlang -================================================ - -Make CouchDB use correct IP|FQDN and the open ports ----------------------------------------------------- - -In file ``etc/vm.args`` change the line ``-name couchdb@127.0.0.1`` to -``-name couchdb@`` which defines -the name of the node. Each node must have an identifier that allows remote -systems to talk to it. The node name is of the form -``@``. 
The name portion can -be couchdb on all nodes, unless you are running more than 1 CouchDB node on the -same server with the same IP address or domain name. In that case, we recommend -names of ``couchdb1``, ``couchdb2``, etc. The second portion of the node name -must be an identifier by which other nodes can access this node -- either the -node's fully qualified domain name (FQDN) or the node's IP address. The FQDN is -preferred so that you can renumber the node without disruption to the cluster. - -Open ``vm.args``, on all nodes, and add ``-kernel inet_dist_listen_min 9100`` -and ``-kernel inet_dist_listen_max 9200`` like below: - -.. code-block:: erlang - - -name ... - -setcookie ... - ... - -kernel inet_dist_listen_min 9100 - -kernel inet_dist_listen_max 9200 - -You need 2 servers with working hostnames. Let us call them server1 and server2. - -On server1: - -.. code-block:: bash - - erl -sname bus -setcookie 'brumbrum' -kernel inet_dist_listen_min 9100 -kernel inet_dist_listen_max 9200 - -Then on server2: - -.. code-block:: bash - - erl -sname car -setcookie 'brumbrum' -kernel inet_dist_listen_min 9100 -kernel inet_dist_listen_max 9200 - -An explanation to the commands: - * ``erl`` the Erlang shell. - * ``-sname bus`` the name of the Erlang node. - * ``-setcookie 'brumbrum'`` the "password" used when nodes connect to each - other. - * ``-kernel inet_dist_listen_min 9100`` the lowest port in the rage. - * ``-kernel inet_dist_listen_max 9200`` the highest port in the rage. - -This gives us 2 Erlang shells. shell1 on server1, shell2 on server2. -Time to connect them. The ``.`` is to Erlang what ``;`` is to C. - -In shell1: - -.. code-block:: erlang - - net_kernel:connect_node(car@server2). - -This will connect to the node called ``car`` on the server called ``server2``. - -If that returns true, then you have an Erlang cluster, and the firewalls are -open. If you get false or nothing at all, then you have a problem with the -firewall. - -First time in Erlang? 
Time to play! ------------------------------------ - -Run in both shells: - -.. code-block:: erlang - - register(shell, self()). - -shell1: - -.. code-block:: erlang - - {shell, car@server2} ! {hello, from, self()}. - -shell2: - -.. code-block:: erlang - - flush(). - {shell, bus@server1} ! {"It speaks!", from, self()}. - -shell1: - -.. code-block:: erlang - - flush(). - -To close the shells, run in both: - -.. code-block:: erlang - - q(). - -.. _cluster/setup/wizard: - -The Cluster Setup Wizard -======================== - -Setting up a cluster of Erlang applications correctly can be a daunting -task. Luckily, CouchDB 2.0 comes with a convenient Cluster Setup Wizard -as part of the Fauxton web administration interface. - -After installation and initial start-up, visit Fauxton at -``http://127.0.0.1:5984/_utils#setup``. You will be asked to set up -CouchDB as a single-node instance or set up a cluster. - -When you click "setup cluster" you are asked for admin credentials again and -then to add nodes by IP address. To get more nodes, go through the same install -procedure on other machines. Be sure to specify the total number of nodes you -expect to add to the cluster before adding nodes. - -Before you can add nodes to form a cluster, you must have them listening on an -IP address accessible from the other nodes in the cluster. -Do this once per node: - -.. code-block:: bash - - curl -X PUT http://127.0.0.1:5984/_node/couchdb@/_config/admins/admin -d '"password"' - curl -X PUT http://127.0.0.1:5984/_node/couchdb@/_config/chttpd/bind_address -d '"0.0.0.0"' - -Now you can enter their IP addresses in the setup screen on your first -node. And make sure to put in the admin username and password. And use -the same admin username and password on all nodes. - -Once you added all nodes, click "Setup" and Fauxton will finish the -cluster configuration for you. - -See http://127.0.0.1:5984/_membership to get a list of all the nodes in -your cluster. 
- -Now your cluster is ready and available. You can send requests to any -one of the nodes and get to all the data. - -For a proper production setup, you'd now set up an HTTP proxy in front -of the nodes, that does load balancing. We recommend `HAProxy`_. See -our `example configuration for HAProxy`_. All you need is to adjust the -ip addresses and ports. - -.. _cluster/setup/api: - -The Cluster Setup API -===================== - -If you would prefer to manually configure your CouchDB cluster, CouchDB exposes -the ``_cluster_setup`` endpoint for that. After installation and initial setup, -we can set up the cluster. On each node we need to run the following command to -set up the node: - -.. code-block:: bash - - curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"password", "node_count":"3"}' - -After that we can join all the nodes together. Choose one node -as the "setup coordination node" to run all these commands on. -This is a "setup coordination node" that manages the setup and -requires all other nodes to be able to see it and vice versa. -Set up will not work with unavailable nodes. -The notion of "setup coordination node" will be gone once the setup is finished. -From then on, the cluster will no longer have a "setup coordination node". -To add a node run these commands for each node you want to add: - -.. 
code-block:: bash - - curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"password", "port": 15984, "node_count": "3", "remote_node": "", "remote_current_user": "", "remote_current_password": "" }' - curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "add_node", "host":"", "port": , "username": "admin", "password":"password"}' - -This will join the two nodes together. -Keep running the above commands for each -node you want to add to the cluster. Once this is done run the -following command to complete the setup and add the missing databases: - -.. code-block:: bash - - curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "finish_cluster"}' - -Verify install: - -.. code-block:: bash - - curl http://admin:password@127.0.0.1:5984/_cluster_setup - -Response: - -.. code-block:: bash - - {"state":"cluster_finished"} - -Verify cluster nodes: - -.. code-block:: bash - - curl http://admin:password@127.0.0.1:5984/_membership - -Response: - -.. code-block:: bash - - { - "all_nodes": [ - "couchdb@couch1", - "couchdb@couch2", - ], - "cluster_nodes": [ - "couchdb@couch1", - "couchdb@couch2", - ] - } - -You CouchDB cluster is now set up. - -.. _HAProxy: http://haproxy.org/ -.. 
_example configuration for HAProxy: https://github.com/apache/couchdb/blob/master/rel/haproxy.cfg diff --git a/src/index.rst b/src/index.rst index 041ac7c2..03b10321 100644 --- a/src/index.rst +++ b/src/index.rst @@ -26,6 +26,7 @@ Table of Contents intro/index install/index + setup/index config/index replication/index maintenance/index diff --git a/src/install/docker.rst b/src/install/docker.rst index 08b8098c..1d6e8a40 100644 --- a/src/install/docker.rst +++ b/src/install/docker.rst @@ -32,8 +32,8 @@ use of a Docker volume for data at ``/opt/couchdb/data``. Note that you can also use the ``NODENAME`` environment variable to set the name of the CouchDB node inside the container. -**Be sure to complete the** :ref:`First-time Setup ` **steps for -a single node or clustered installation.** +**Your installation is not complete. Be sure to complete the** +:ref:`Setup ` **steps for a single node or clustered installation.** Further details on the Docker configuration are available in our `couchdb-docker git repository`_. diff --git a/src/install/freebsd.rst b/src/install/freebsd.rst index 028d43bb..3eb3b4f1 100644 --- a/src/install/freebsd.rst +++ b/src/install/freebsd.rst @@ -52,8 +52,8 @@ Administrators should use ``default.ini`` as reference and only modify the Post install ------------ -**Be sure to complete the** :ref:`First-time Setup ` **steps for -a single node or clustered installation.** +**Your installation is not complete. Be sure to complete the** +:ref:`Setup ` **steps for a single node or clustered installation.** In case the install script fails to install a non-interactive user "couchdb" to be used for the database, the user needs to be created manually: diff --git a/src/install/index.rst b/src/install/index.rst index cc4d2b45..94166240 100644 --- a/src/install/index.rst +++ b/src/install/index.rst @@ -12,9 +12,9 @@ .. 
_install: -=============================== -Installation & First-Time Setup -=============================== +============ +Installation +============ .. toctree:: :maxdepth: 2 @@ -25,6 +25,5 @@ Installation & First-Time Setup freebsd docker snap - setup upgrading troubleshooting diff --git a/src/install/mac.rst b/src/install/mac.rst index 69440c8f..b2aa362d 100644 --- a/src/install/mac.rst +++ b/src/install/mac.rst @@ -37,8 +37,8 @@ That's all, now CouchDB is installed on your Mac: #. Run Apache CouchDB application #. `Open up Fauxton`_, the CouchDB admin interface #. Verify the install by clicking on `Verify`, then `Verify Installation`. -#. **Be sure to complete the** :ref:`First-time Setup ` **steps - for a single node or clustered installation.** +#. **Your installation is not complete. Be sure to complete the** + :ref:`Setup ` **steps for a single node or clustered installation.** #. Time to Relax! .. _Open up Fauxton: http://localhost:5984/_utils diff --git a/src/install/snap.rst b/src/install/snap.rst index f932af1e..0e0e8fd5 100644 --- a/src/install/snap.rst +++ b/src/install/snap.rst @@ -30,9 +30,11 @@ After `installing snapd`_, the CouchDB snap can be installed via:: CouchDB will be installed at ``/snap/couchdb``. Data will be stored at ``/var/snap/couchdb/``. -.. _installing snapd: https://snapcraft.io/docs/core/install +**Your installation is not complete. Be sure to complete the** +:ref:`Setup ` **steps for a single node or clustered installation.** Further details on the snap build process are available in our `couchdb-pkg git repository`_. +.. _installing snapd: https://snapcraft.io/docs/core/install .. _couchdb-pkg git repository: https://github.com/apache/couchdb-pkg diff --git a/src/install/troubleshooting.rst b/src/install/troubleshooting.rst index ae6d6a6e..acb5f8b1 100644 --- a/src/install/troubleshooting.rst +++ b/src/install/troubleshooting.rst @@ -141,7 +141,7 @@ Quick Build Having problems getting CouchDB to run for the first time? 
Follow this simple procedure and report back to the user mailing list or IRC with the output of each step. Please put the output of these steps into a paste service (such -as https://paste.apache.org/) rather than including the output of your entire +as https://paste.ee/) rather than including the output of your entire run in IRC or the mailing list directly. 1. Note down the name and version of your operating system and your processor @@ -183,7 +183,7 @@ run in IRC or the mailing list directly. Upgrading ========= -Are you upgrading from CouchDB 2.0? Install CouchDB into a fresh directory. +Are you upgrading from CouchDB 1.x? Install CouchDB into a fresh directory. CouchDB's directory layout has changed and may be confused by libraries present from previous releases. diff --git a/src/install/unix.rst b/src/install/unix.rst index 5c7c08a3..31d7f45f 100644 --- a/src/install/unix.rst +++ b/src/install/unix.rst @@ -95,8 +95,8 @@ Installing the Apache CouchDB packages $ sudo yum -y install epel-release && yum install couchdb -**Be sure to complete the** :ref:`First-time Setup ` **steps for -a single node or clustered installation.** +**Your installation is not complete. Be sure to complete the** +:ref:`Setup ` **steps for a single node or clustered installation.** **Debian/Ubuntu**: First, install the repository key:: @@ -109,9 +109,8 @@ Then update the repository cache and install the package:: Debian/Ubuntu installs from binaries will be pre-configured for single node or clustered installations. For clusters, multiple nodes will still need to be -joined together; **follow the** -:ref:`Cluster Setup Wizard ` **steps** to complete the -process. +joined together and configured consistently across all machines; **follow the** +:ref:`Cluster Setup ` **walkthrough** to complete the process. Relax! CouchDB is installed and running. 
@@ -335,8 +334,8 @@ From here you should verify your installation by pointing your web browser to:: http://localhost:5984/_utils/index.html#verifyinstall -**Be sure to complete the** :ref:`First-time Setup ` **steps for -a single node or clustered installation.** +**Your installation is not complete. Be sure to complete the** +:ref:`Setup ` **steps for a single node or clustered installation.** Running as a Daemon =================== diff --git a/src/install/upgrading.rst b/src/install/upgrading.rst index 1836257c..8f6040bc 100644 --- a/src/install/upgrading.rst +++ b/src/install/upgrading.rst @@ -117,7 +117,7 @@ proper read/write access, then use commands similar to the following: $ couchup list # Should show no remaining databases! The same process works for moving from a single 1.x node to a cluster of -2.x nodes; the only difference is that you must complete cluster setup +2.x nodes; the only difference is that you must :ref:`complete cluster setup ` prior to running the couchup commands. Special Features diff --git a/src/install/windows.rst b/src/install/windows.rst index 6514d54d..121ca503 100644 --- a/src/install/windows.rst +++ b/src/install/windows.rst @@ -29,10 +29,12 @@ This is the simplest way to go. #. Follow the installation wizard steps. **Be sure to install CouchDB to a path with no spaces, such as** ``C:\CouchDB``. +#. **Your installation is not complete. Be sure to complete the** + :ref:`Setup ` **steps for a single node or clustered installation.** + #. `Open up Fauxton`_ -#. It's time to Relax! **Be sure to complete the** :ref:`First-time Setup - ` **steps for a single node or clustered installation.** +#. It's time to Relax! .. note:: In some cases you might been asked to reboot Windows to complete diff --git a/src/setup/cluster.rst b/src/setup/cluster.rst new file mode 100644 index 00000000..c40cbdff --- /dev/null +++ b/src/setup/cluster.rst @@ -0,0 +1,364 @@ +.. Licensed under the Apache License, Version 2.0 (the "License"); you may not +.. 
use this file except in compliance with the License. You may obtain a copy of +.. the License at +.. +.. http://www.apache.org/licenses/LICENSE-2.0 +.. +.. Unless required by applicable law or agreed to in writing, software +.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +.. License for the specific language governing permissions and limitations under +.. the License. + +.. _setup/cluster: + +============== +Cluster Set Up +============== + +This section describes everything you need to know to prepare, install, and +set up your first CouchDB 2.x cluster. + +Ports and Firewalls +=================== + +CouchDB uses the following ports: + ++-------------+----------+-----------------------+----------------------+ +| Port Number | Protocol | Recommended binding | Usage | ++=============+==========+=======================+======================+ +| 5984 | tcp | As desired, by | Standard clustered | +| | | default ``localhost`` | port for all HTTP | +| | | | API requests | ++-------------+----------+-----------------------+----------------------+ +| 5986 | tcp | ``localhost`` or | Administrative tasks | +| | | private network | such as node and | +| | | **ONLY** | shard management | ++-------------+----------+-----------------------+----------------------+ +| 4369 | tcp | All interfaces | Erlang port mapper | +| | | by default | daemon (epmd) | ++-------------+----------+-----------------------+----------------------+ +| Random | tcp | Automatic | Communication with | +| above 1024 | | | other CouchDB nodes | +| (see below) | | | in the cluster | ++-------------+----------+-----------------------+----------------------+ + +CouchDB in clustered mode uses the port ``5984``, just as in a standalone +configuration, but it also uses ``5986`` for node-local APIs. These APIs are +administrative tools only, such as node and shard management. 
Do not use +port ``5986`` for any other purpose. The port is slated to be deprecated in a +future CouchDB release. + +.. warning:: + **Never expose the node-local port to the public Internet.** + + By default, CouchDB exposes port ``5986`` **only** on localhost. + If you have a secondary network connection on nodes for management purposes + only, it is acceptable to expose the port on that network as well. + +CouchDB uses Erlang-native clustering functionality to achieve a clustered +installation. Erlang uses TCP port ``4369`` (EPMD) to find other nodes, so all +servers must be able to speak to each other on this port. In an Erlang cluster, +all nodes are connected to all other nodes, in a mesh network configuration. + +.. warning:: + If you expose the port ``4369`` to the Internet or any other untrusted + network, then the only thing protecting you is the Erlang + `cookie`_. + +.. _cookie: http://erlang.org/doc/reference_manual/distributed.html + +Every Erlang application running on that machine (such as CouchDB) then uses +automatically assigned ports for communication with other nodes. Yes, this +means random ports. This will obviously not work with a firewall, but it is +possible to force an Erlang application to use a specific port range. + +This documentation will use the range TCP ``9100-9200``, but this range is +unnecessarily broad. If you only have a single Erlang application running on a +machine, the range can be limited to a single port: ``9100-9100``, since the +ports epmd assigns are for *inbound connections* only. Three CouchDB nodes +running on a single machine, as in a development cluster scenario, would need +three ports in this range. 
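Before going further, it can save time to confirm from a shell that each node can actually reach the others on these ports. The sketch below is not part of CouchDB itself; the host name ``server2`` and the probed ports are placeholders to adjust for your own cluster, and it assumes a ``netcat`` (``nc``) binary that supports the common ``-z`` (scan) and ``-w`` (timeout) options:

```shell
#!/bin/sh
# Probe one port on another cluster node and print a status line.
probe() {
    # $1 = host, $2 = port
    if nc -z -w 3 "$1" "$2" 2>/dev/null; then
        echo "$1:$2 reachable"
    else
        echo "$1:$2 NOT reachable"
    fi
}

# "server2" is a placeholder host name -- substitute each of your nodes.
# 4369 = epmd, 5984 = clustered HTTP API, 9100 = start of the Erlang
# distribution range configured below.
for port in 4369 5984 9100; do
    probe server2 "$port"
done
```

Run it from every node against every other node before installing CouchDB; any ``NOT reachable`` line indicates a firewall or routing rule that needs attention.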
+ +Configure and Test the Communication with Erlang +================================================ + +Make CouchDB use correct IP|FQDN and the open ports +---------------------------------------------------- + +In file ``etc/vm.args`` change the line ``-name couchdb@127.0.0.1`` to +``-name couchdb@`` which defines +the name of the node. Each node must have an identifier that allows remote +systems to talk to it. The node name is of the form +``@``. + +The name portion can be couchdb on all nodes, unless you are running more than +1 CouchDB node on the same server with the same IP address or domain name. In +that case, we recommend names of ``couchdb1``, ``couchdb2``, etc. + +The second portion of the node name must be an identifier by which other nodes +can access this node -- either the node's fully qualified domain name (FQDN) or +the node's IP address. The FQDN is preferred so that you can renumber the node's +IP address without disruption to the cluster. (This is common in cloud-hosted +environments.) + +Open ``etc/vm.args``, on all nodes, and add ``-kernel inet_dist_listen_min 9100`` +and ``-kernel inet_dist_listen_max 9200`` like below: + +.. code-block:: erlang + + -name ... + -setcookie ... + ... + -kernel inet_dist_listen_min 9100 + -kernel inet_dist_listen_max 9200 + +Again, a small range is fine, down to a single port (set both to ``9100``) if you +only ever run a single CouchDB node on each machine. + +Confirming connectivity between nodes +------------------------------------- + +For this test, you need 2 servers with working hostnames. Let us call them +server1 and server2. + +On server1: + +.. code-block:: bash + + erl -name bus@192.168.0.1 -setcookie 'brumbrum' -kernel inet_dist_listen_min 9100 -kernel inet_dist_listen_max 9200 + +Then on server2: + +.. 
code-block:: bash + + erl -name car@192.168.0.2 -setcookie 'brumbrum' -kernel inet_dist_listen_min 9100 -kernel inet_dist_listen_max 9200 + +An explanation of the commands: + * ``erl`` the Erlang shell. + * ``-name bus@192.168.0.1`` the name of the Erlang node and its IP address or FQDN. + * ``-setcookie 'brumbrum'`` the "password" used when nodes connect to each + other. + * ``-kernel inet_dist_listen_min 9100`` the lowest port in the range. + * ``-kernel inet_dist_listen_max 9200`` the highest port in the range. + +This gives us two Erlang shells: shell1 on server1 and shell2 on server2. +Time to connect them. Enter the following, being sure to end the line with a +period (``.``): + +In shell1: + +.. code-block:: erlang + + net_kernel:connect_node('car@192.168.0.2'). + +This will connect to the node called ``car`` on the server at ``192.168.0.2``. + +If that returns true, then you have an Erlang cluster, and the firewalls are +open. This means that two CouchDB nodes on these two servers will be able to +communicate with each other successfully. If you get false or nothing at all, +then you have a problem with the firewall, DNS, or your settings. Try again. + +If you're concerned about firewall issues, or having trouble connecting all +nodes of your cluster later on, repeat the above test between all pairs of +servers to confirm connectivity and system configuration is correct. + +.. _cluster/setup/prepare: + +Preparing CouchDB nodes to be joined into a cluster +=================================================== + +Before you can add nodes to form a cluster, you must have them listening on an +IP address accessible from the other nodes in the cluster. You should also ensure +that a few critical settings are identical across all nodes before joining them. + +The settings we recommend you set now, before joining the nodes into a cluster, +are: + +1. ``etc/vm.args`` settings as described in the + :ref:`previous two sections` +2. 
At least one :ref:`server administrator` + user (and password) +3. Bind the node's clustered interface (port ``5984``) to a reachable IP address +4. A consistent :config:option:`UUID `. The UUID is used in identifying + the cluster when replicating. If this value is not consistent across all nodes + in the cluster, replications may be forced to rewind the changes feed to zero, + leading to excessive memory, CPU and network use. +5. A consistent :config:option:`httpd secret `. The secret + is used in calculating and evaluating cookie and proxy authentication, and should + be set consistently to avoid unnecessary repeated session cookie requests. + +If you use a configuration management tool, such as Chef, Ansible, Puppet, etc., +then you can place these settings in a ``.ini`` file and distribute them to all +nodes ahead of time. If you use this route, be sure to pre-encrypt the password +(cutting and pasting from a test instance is easiest) to avoid CouchDB rewriting +the file. + +If you do not use configuration management, or are just experimenting with +CouchDB for the first time, use these commands *once per server* to perform +steps 2-5 above. Be sure to change the ``password`` to something secure, and +again, use the same password on all nodes. You may have to run these commands +locally on each node; if so, replace ```` below with ``127.0.0.1``. + +.. code-block:: bash + + # First, get two UUIDs to use later on. Be sure to use the SAME UUIDs on all nodes. + curl http://:5984/_uuids?count=2 + + # CouchDB will respond with something like: + # {"uuids":["60c9e8234dfba3e2fdab04bf92001142","60c9e8234dfba3e2fdab04bf92001cc2"]} + # Copy the provided UUIDs into your clipboard or a text editor for later use. + # Use the first UUID as the cluster UUID. + # Use the second UUID as the cluster shared http secret. 
+ + # Create the admin user and password: + curl -X PUT http://:5984/_node/_local/_config/admins/admin -d '"password"' + + # Now, bind the clustered interface to all IP addresses available on this machine + curl -X PUT http://:5984/_node/_local/_config/chttpd/bind_address -d '"0.0.0.0"' + + # Set the UUID of the node to the first UUID you previously obtained: + curl -X PUT http://:5984/_node/_local/_config/couchdb/uuid -d '"FIRST-UUID-GOES-HERE"' + + # Finally, set the shared http secret for cookie creation to the second UUID: + curl -X PUT http://:5984/_node/_local/_config/couch_httpd_auth/secret -d '"SECOND-UUID-GOES-HERE"' + +.. _cluster/setup/wizard: + +The Cluster Setup Wizard +======================== + +CouchDB 2.x comes with a convenient Cluster Setup Wizard as part of the Fauxton +web administration interface. For first-time cluster setup, and for +experimentation, this is your best option. + +It is **strongly recommended** that the minimum number of nodes in a cluster is +3. For more explanation, see the :ref:`Cluster Theory ` section +of this documentation. + +After installation and initial start-up of all nodes in your cluster, ensuring +that all nodes are reachable and that the pre-configuration steps above have been +completed, visit Fauxton at ``http://:5984/_utils#setup``. You will be asked to set up +CouchDB as a single-node instance or set up a cluster. + +When you click "Setup Cluster" you are asked for admin credentials again, and +then to add nodes by IP address. To get more nodes, go through the same install +procedure on other machines. Be sure to specify the total number of nodes you +expect to add to the cluster before adding nodes. + +Now enter each node's IP address or FQDN in the setup wizard, ensuring you also +enter the previously set server admin username and password. + +Once you have added all nodes, click "Setup" and Fauxton will finish the +cluster configuration for you. 
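Because these settings must be identical everywhere, it can help to script the per-node preparation rather than typing each command on each machine. The sketch below uses the same ``_node/_local/_config`` endpoints shown earlier; the node names ``couch1``-``couch3``, the ``admin``/``password`` credentials, and the UUID placeholders are all values you must substitute with your own:

```shell
#!/bin/sh
# Apply the identical pre-cluster settings to every node in one pass.
# Node names, credentials, and UUIDs below are placeholders.

# Build the node-local config URL for a given node, section, and key.
config_url() {
    echo "http://admin:password@$1:5984/_node/_local/_config/$2/$3"
}

for node in couch1 couch2 couch3; do
    # The admin user must exist before authenticated requests succeed,
    # so create it first (this request is unauthenticated on a fresh node).
    curl -s -X PUT "http://$node:5984/_node/_local/_config/admins/admin" -d '"password"'
    curl -s -X PUT "$(config_url "$node" chttpd bind_address)" -d '"0.0.0.0"'
    curl -s -X PUT "$(config_url "$node" couchdb uuid)" -d '"FIRST-UUID-GOES-HERE"'
    curl -s -X PUT "$(config_url "$node" couch_httpd_auth secret)" -d '"SECOND-UUID-GOES-HERE"'
done
```

As with the manual commands, the admin user is created first, since every later request authenticates with those credentials.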
+ +To check that all nodes have been joined correctly, visit +``http://:5984/_membership`` on each node. The returned list +should show all of the nodes in your cluster: + +.. code-block:: javascript + + { + "all_nodes": [ + "couchdb@server1", + "couchdb@server2", + "couchdb@server3" + ], + "cluster_nodes": [ + "couchdb@server1", + "couchdb@server2", + "couchdb@server3" + ] + } + +The ``all_nodes`` section is the list of *expected* nodes; the ``cluster_nodes`` +section is the list of *actually connected* nodes. Be sure the two lists match. + +Now your cluster is ready and available! You can send requests to any one of +the nodes, and all three will respond as if you are working with a single +CouchDB cluster. + +For a proper production setup, you'd now set up an HTTP proxy in front of the +nodes to provide load balancing and SSL termination, if desired. We recommend +`HAProxy`_. See our `example configuration for HAProxy`_. All you need is to +adjust the IP addresses or hostnames and ports. + +.. _cluster/setup/api: + +The Cluster Setup API +===================== + +If you would prefer to manually configure your CouchDB cluster, CouchDB exposes +the ``_cluster_setup`` endpoint for that purpose. After installation and +initial setup and configuration, we can set up the cluster. On each node we need to run +the following command to set up the node: + +.. code-block:: bash + + curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"password", "node_count":"3"}' + +After that we can join all the nodes together. Choose one node as the "setup +coordination node" to run all these commands on. This "setup coordination +node" only manages the setup and requires all other nodes to be able to see it +and vice versa. 
*It has no special purpose beyond the setup process; CouchDB +does not have the concept of a "master" node in a cluster.* + +Setup will not work with unavailable nodes. All nodes must be online and properly +preconfigured before the cluster setup process can begin. + +To join a node to the cluster, run these commands for each node you want to add: + +.. code-block:: bash + + curl -X POST -H "Content-Type: application/json" http://admin:password@:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"password", "port": 5984, "node_count": "3", "remote_node": "", "remote_current_user": "", "remote_current_password": "" }' + curl -X POST -H "Content-Type: application/json" http://admin:password@:5984/_cluster_setup -d '{"action": "add_node", "host":"", "port": , "username": "admin", "password":"password"}' + +This will join the two nodes together. Keep running the above commands for each +node you want to add to the cluster. Once this is done, run the following +command to complete the cluster setup and add the system databases: + +.. code-block:: bash + + curl -X POST -H "Content-Type: application/json" http://admin:password@:5984/_cluster_setup -d '{"action": "finish_cluster"}' + +Verify install: + +.. code-block:: bash + + curl http://admin:password@:5984/_cluster_setup + +Response: + +.. code-block:: bash + + {"state":"cluster_finished"} + +Verify all cluster nodes are connected: + +.. code-block:: bash + + curl http://admin:password@:5984/_membership + +Response: + +.. code-block:: bash + + { + "all_nodes": [ + "couchdb@couch1", + "couchdb@couch2", + "couchdb@couch3" + ], + "cluster_nodes": [ + "couchdb@couch1", + "couchdb@couch2", + "couchdb@couch3" + ] + } + +Ensure the ``all_nodes`` and ``cluster_nodes`` lists match. + +Your CouchDB cluster is now set up. + +.. _HAProxy: http://haproxy.org/ +.. 
_example configuration for HAProxy: https://github.com/apache/couchdb/blob/master/rel/haproxy.cfg diff --git a/src/setup/index.rst b/src/setup/index.rst new file mode 100644 index 00000000..c3f8aa81 --- /dev/null +++ b/src/setup/index.rst @@ -0,0 +1,27 @@ +.. Licensed under the Apache License, Version 2.0 (the "License"); you may not +.. use this file except in compliance with the License. You may obtain a copy of +.. the License at +.. +.. http://www.apache.org/licenses/LICENSE-2.0 +.. +.. Unless required by applicable law or agreed to in writing, software +.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +.. License for the specific language governing permissions and limitations under +.. the License. + +.. _setup: + +===== +Setup +===== + +CouchDB 2.x can be deployed in either a single-node or a clustered +configuration. This section covers the first-time setup steps required for each +of these configurations. + +.. toctree:: + :maxdepth: 2 + + single-node + cluster diff --git a/src/install/setup.rst b/src/setup/single-node.rst similarity index 53% rename from src/install/setup.rst rename to src/setup/single-node.rst index 6cd38612..169f2762 100644 --- a/src/install/setup.rst +++ b/src/setup/single-node.rst @@ -10,41 +10,33 @@ .. License for the specific language governing permissions and limitations under .. the License. -.. _install/setup: - -================ -First-Time Setup -================ - -CouchDB 2.0 can be used in a single-node or clustered configuration. -Below are the first-time setup steps required for each of these -configurations. - -.. _install/setup/single: +.. _setup/single-node: Single Node Setup ================= -A single node CouchDB 2.0 installation is what most users will be using. -It is roughly equivalent to the CouchDB 1.x-series. 
Note that a -single-node setup obviously doesn't take any advantage of the new -scaling and fault-tolerance features in CouchDB 2.0. +Many users simply need a single-node CouchDB 2.x installation. Operationally, +it is roughly equivalent to the CouchDB 1.x series. Note that a single-node +setup obviously doesn't take any advantage of the new scaling and +fault-tolerance features in CouchDB 2.x. After installation and initial startup, visit Fauxton at ``http://127.0.0.1:5984/_utils#setup``. You will be asked to set up CouchDB as a single-node instance or set up a cluster. When you click “Single-Node-Setup”, you will get asked for an admin username and -password. Choose them well and remember them. You can also bind CouchDB -to a public address, so it is accessible within your LAN or the public, if -you are doing this on a public VM. Or, you can keep the installation private -by binding only to 127.0.0.1 (localhost). The wizard then configures your admin +password. Choose them well and remember them. + +You can also bind CouchDB to a public address, so it is accessible within your +LAN or the public, if you are doing this on a public VM. Or, you can keep the +installation private by binding only to 127.0.0.1 (localhost). Binding to +0.0.0.0 will bind to all addresses. The wizard then configures your admin username and password and creates the three system databases ``_users``, ``_replicator`` and ``_global_changes`` for you. -Alternatively, if you don't want to use the “Single-Node-Setup” wizard -and run 2.0 as a single node with admin username and password already -configured, make sure to create the three three system databases manually -on startup: +Alternatively, if you don't want to use the Setup Wizard, and run 2.x as a +single node with a server administrator already configured via +:ref:`config file`, make sure to +create the three system databases manually on startup: .. 
code-block:: sh @@ -58,13 +50,3 @@ Note that the last of these is not necessary if you do not expect to be using the global changes feed. Feel free to delete this database if you have created it, it has grown in size, and you do not need the function (and do not wish to waste system resources on compacting it regularly.) - -See the next section for the cluster setup instructions. - -.. _install/setup/cluster: - -Cluster Setup -============= - -As configuration has many steps, see the :ref:`Cluster Reference Setup -` for full details.
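The per-node join commands in the patched cluster documentation are repetitive and lend themselves to a small script. The sketch below is illustrative only and is not part of the documentation itself: the helper name `add_node_payload`, the `couchN.example.com` hostnames, and the `admin`/`password` credentials are hypothetical placeholders you would replace with your own values.

```shell
#!/bin/sh
# Illustrative helper for driving the _cluster_setup endpoint.
# All hostnames and credentials here are hypothetical examples.

# Build the JSON body for the "add_node" action against one remote node.
add_node_payload() {
    host="$1"; user="$2"; pass="$3"
    printf '{"action": "add_node", "host": "%s", "port": 5984, "username": "%s", "password": "%s"}' \
        "$host" "$user" "$pass"
}

# Sketch of the join loop, run from the setup coordination node.
# (Commented out: it requires a running, preconfigured cluster.)
# for node in couch2.example.com couch3.example.com; do
#     curl -s -X POST -H "Content-Type: application/json" \
#         "http://admin:password@couch1.example.com:5984/_cluster_setup" \
#         -d "$(add_node_payload "$node" admin password)"
# done

# Print one payload so the helper can be inspected in isolation.
add_node_payload couch2.example.com admin password
```

Because the payload is built by a function, the same script can finish with a single `finish_cluster` POST and a `_membership` check, mirroring the order of operations the documentation describes.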