
Conversation

@nickvergessen (Member)

Not sure where this originates from, but it broke federated shares for me.

  1. Do a federated share
  2. Gather the token from the database
  3. Get the share info:

With the patch:

{
  "data": {
    "id": 165,
    "parentId": 2,
    "mtime": 1557132779,
    "name": "wat",
    "permissions": 31,
    "mimetype": "httpd\/unix-directory",
    "size": 0,
    "type": "dir",
    "etag": "5ccff5eb2c45d",
    "children": [
      
    ]
  },
  "status": "success"
}

Without the patch:

{
  "data": {
    "data": {
      "id": 165,
      "parentId": 2,
      "mtime": 1557132779,
      "name": "wat",
      "permissions": 31,
      "mimetype": "httpd\/unix-directory",
      "size": 0,
      "type": "dir",
      "etag": "5ccff5eb2c45d",
      "children": [
        
      ]
    },
    "status": "success"
  },
  "status": "success"
}

The double nesting causes broken SQL queries when trying to store the cache information for status and data.
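For illustration only, here is a small sketch of how a consumer of this endpoint could detect the double wrapping shown above. The helper name is made up, and the array shape is taken from the two responses; this is not the actual Nextcloud code.

<?php

// Hypothetical helper: decode the share-info response and unwrap the
// accidental extra "data"/"status" layer produced by the broken server.
function unwrapShareInfo(string $json): array {
	$response = json_decode($json, true);
	if (!is_array($response) || !isset($response['data']) || !is_array($response['data'])) {
		throw new \RuntimeException('Unexpected share info response');
	}

	$data = $response['data'];

	// Broken case: the response was wrapped twice, so the file metadata
	// (id, mtime, permissions, size, etag, children, ...) sits one level deeper.
	if (isset($data['data'], $data['status'])) {
		$data = $data['data'];
	}

	return $data;
}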

Signed-off-by: Joas Schilling <coding@schilljs.com>
 * @return boolean|null
 */
public function registerMiddleWare($middleWare) {
	if (in_array($middleWare, $this->middleWares, true) !== false) {
Member

So we registered the same middleware twice?

Member Author

Yeah, all sharing middlewares were there twice.

Contributor

Is there a performance impact from this change? Could we use a key to reference the middleware and just replace it?

Member Author

In a default installation there are 14 (or with this bug 17) middlewares. Not really a huge impact.
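For context, a minimal sketch of the guarded registration the hunk above implies; the method body and the return values are assumptions here, since only the signature, the @return line, and the in_array check appear in the diff:

/**
 * @return boolean|null
 */
public function registerMiddleWare($middleWare) {
	if (in_array($middleWare, $this->middleWares, true) !== false) {
		// Already registered (the sharing middlewares ended up in here twice),
		// so skip the duplicate instead of running it a second time per request.
		return false;
	}
	$this->middleWares[] = $middleWare;
}

Registering middlewares under a key, as suggested above, would avoid the linear in_array scan, but with only 14 (or, with this bug, 17) entries in a default installation the difference is negligible.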

@blizzz (Member) left a comment

Fixes #15211

@blizzz merged commit 0e6e2a9 into master May 6, 2019
@blizzz deleted the bugfix/noid/broken-federated-shares branch May 6, 2019 21:37
@blizzz (Member) commented May 6, 2019

/backport to stable16

@backportbot-nextcloud

backport to stable16 in #15399

@hcderaad commented May 7, 2019

Can confirm manually applying the patch does fix the symptoms described in #15393 and #15245 for me.

@Whissi commented May 7, 2019

@hcderaad Did you run another command after applying the fix to get the broken shares back into a working state?

@geokh commented May 8, 2019

I am still facing the issue after applying the changes to lib/private/AppFramework/DependencyInjection/DIContainer.php.
Please let us know what to do next.

@nickvergessen (Member Author)

The problem is that the patch needs to be applied on the server which sent you the federated share.

@Whissi commented May 8, 2019

I can confirm @nickvergessen's comment. Once the NC instance owning the federated share was patched, my NC instance returned to a working state.

However, I am concerned that another instance is able to bring down my instance. For example, I had to stop the NC desktop and mobile clients (the desktop client at least allowed me to uncheck the shared folder so I could keep using it), or they would keep crashing or stop working due to that error. I am lucky that the owner of the other instance was able to patch it. Now think about an instance where the user has no control...

Also, things like occ files:scan --all should never fail. Instead, they should remove the federated share causing the problems, shouldn't they?

@hcderaad

> Also, things like occ files:scan --all should never fail. Instead, they should remove the federated share causing the problems, shouldn't they?

See #15765
