
Fix fabric_doc_open_revs #1862

Merged
davisp merged 1 commit into master from fix-open-doc-revs
Jan 15, 2019

Conversation

@davisp
Member

@davisp davisp commented Jan 15, 2019

There was a subtle bug when opening specific revisions in
`fabric_doc_open_revs` due to a race condition between updates being
applied across a cluster.

The underlying cause was revision stemming after a document had been
updated more than `revs_limit` times, combined with concurrent reads
to a node that had not yet applied the latest update. To illustrate,
let's consider a document A with a revision history from `{N, RevN}` to
`{N+1000, RevN+1000}` (assuming `revs_limit` is the default 1000). From
a single node's perspective, when an update comes in we add the new
revision and stem the oldest revision. The revisions on the node would
then be `{N+1, RevN+1}` to `{N+1001, RevN+1001}`.
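The add-then-stem step can be sketched in Python (a toy model with a
small `revs_limit` for readability; the names are illustrative, not the
Erlang implementation):

```python
def stem(rev_path, revs_limit):
    """Keep only the newest `revs_limit` revisions.

    `rev_path` is a list of (Pos, RevId) pairs ordered newest-first,
    mirroring how a revision path identifies a document revision.
    """
    return rev_path[:revs_limit]

# With revs_limit = 3, the stored path after revisions 1..3:
path = [(3, "c"), (2, "b"), (1, "a")]

# A fourth update prepends (4, "d") and stems the oldest revision (1, "a"):
path = stem([(4, "d")] + path, 3)
assert path == [(4, "d"), (3, "c"), (2, "b")]
```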

The bug surfaces when we attempt to open revisions on a different node
that has yet to apply the new update. In this case
`fabric_doc_open_revs` could be called with `{N+1000, RevN+1000}`. This
results in a response from `fabric_doc_open_revs` that includes two
different `{ok, Doc}` results instead of the expected single instance.
The reason is that the node that has applied the update reports
revisions `{N+1, RevN+1}` to `{N+1000, RevN+1000}`, while the node
without the update responds with revisions `{N, RevN}` to
`{N+1000, RevN+1000}`.

To rephrase: a node that has applied an update can end up returning a
revision path that contains `revs_limit - 1` revisions, while a node
without the update returns all `revs_limit` revisions. This slight
difference in the path prevented the responses from being properly
combined into a single response.
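Why the replies fail to combine can be sketched in Python: comparing the
full paths treats the two replies as distinct documents, while comparing
the leaf `{Pos, RevId}` pair shows they describe the same revision (the
data and helper are illustrative, not the actual fix in
`fabric_doc_open_revs.erl`):

```python
def leaf(rev_path):
    """A revision path is ordered newest-first; its head identifies
    the revision being returned."""
    return rev_path[0]

# Reply from the node that applied the update (oldest ancestor stemmed):
with_update    = [(1000, "r1000"), (999, "r999"), (998, "r998")]
# Reply from the node that has not applied it yet (one extra ancestor):
without_update = [(1000, "r1000"), (999, "r999"), (998, "r998"), (997, "r997")]

# Comparing whole paths wrongly yields two distinct results...
assert with_update != without_update
# ...but both replies describe the same revision, so grouping by the
# leaf pair collapses them into a single {ok, Doc} response.
assert leaf(with_update) == leaf(without_update)
```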

This bug has existed for many years. However, read repair effectively
prevents it from being a significant issue by immediately fixing the
revision history discrepancy. It was discovered due to the recent bug
in read repair during a mixed-cluster upgrade to a release including
clustered purge. In that situation we end up crashing the design
document cache, which then turns all design document requests into
direct reads, which in turn can cause cluster nodes to OOM and die.
The conditions require a significant number of design document edits
coupled with already significant load on those modified design
documents. The most direct example observed was a cluster that had a
significant number of filtered replications in and out of the cluster.

Testing recommendations

make check

Related Issues or Pull Requests

This was discovered due to issues caused by #1860

Checklist

  • Code is written and works correctly;
  • Changes are covered by tests;
  • Documentation reflects the changes;

@davisp davisp force-pushed the fix-open-doc-revs branch from 0ecc373 to 4252c3d on January 15, 2019 18:45
Comment thread src/fabric/src/fabric_doc_open_revs.erl
@davisp davisp merged commit 5269d79 into master Jan 15, 2019
@davisp davisp deleted the fix-open-doc-revs branch January 15, 2019 19:37
@wohali wohali mentioned this pull request Feb 7, 2019
