Test multiple filesystems with Vagrant #1359

Closed
PlasmaPower wants to merge 1 commit into borgbackup:master from PlasmaPower:vagrant-multiple-fs

Conversation

@PlasmaPower
Contributor

@PlasmaPower PlasmaPower commented Jul 22, 2016

Fixes #1289

Both kernel module and FUSE filesystems are supported.

TODO:
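
For context, the kernel-module filesystems can each be exercised on a small file-backed image; a minimal sketch of the approach, with an illustrative path, size and filesystem (not the PR's actual script):

```shell
#!/bin/sh
set -e

# Create a sparse file to back the test filesystem
# (path and size are illustrative).
image=/tmp/borg-testfs.img
truncate -s 128M "$image"

# Format it with the filesystem under test; running mkfs on a
# plain file needs no root privileges.
if command -v mkfs.ext2 >/dev/null 2>&1; then
    mkfs.ext2 -q -F "$image"
fi

# Mounting the image does require root (it attaches a loop device):
#   sudo mount -o loop "$image" /mnt/testfs
```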

@PlasmaPower PlasmaPower force-pushed the vagrant-multiple-fs branch 3 times, most recently from 0b43240 to c6db6e5 Compare July 23, 2016 00:38
@PlasmaPower
Contributor Author

Okay, the first network-based FS is implemented (SSHFS). The implementation is a bit hacky (e.g. rootTestingDir is hardcoded in sshd_config; I couldn't get ChrootDirectory working), but I've made sure to comment it well and it should work fine as-is.
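
Unlike the kernel-module filesystems, a FUSE filesystem like SSHFS is mounted entirely from userspace. A hedged sketch of the sort of invocation involved — user, host and paths here are made up for illustration, not the PR's actual (hardcoded) values:

```shell
#!/bin/sh
# Build the sshfs invocation as a string so it can be inspected;
# all names are illustrative.
sshfs_cmd() {
    user=$1; host=$2; remote=$3; mountpoint=$4
    # StrictHostKeyChecking=no avoids an interactive host-key prompt
    # on a throwaway test VM; don't use it outside of testing.
    echo "sshfs -o StrictHostKeyChecking=no ${user}@${host}:${remote} ${mountpoint}"
}

sshfs_cmd vagrant localhost /vagrant/testing /mnt/sshfs
```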

@PlasmaPower
Contributor Author

nfsd is... interesting. It seems to be implemented as a kernel module. It also has no way to specify a custom config file location, from what I can tell, meaning that I can't easily automate it. I'm not going to test NFS, at least for now.
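
For the record: Linux nfsd reads its export table from the fixed path /etc/exports (entries can be added at runtime with exportfs, but the file location itself isn't configurable), which is what makes it hard to sandbox for testing. A typical entry looks like:

```
# /etc/exports -- one exported directory per line
/home/vagrant/testing  localhost(rw,sync,no_subtree_check)
```

The path and options above are illustrative, not taken from the PR.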

@PlasmaPower PlasmaPower force-pushed the vagrant-multiple-fs branch from c6db6e5 to 9f81903 Compare July 23, 2016 02:07
@PlasmaPower
Contributor Author

Okay, CIFS is done after a lot of trouble with configuration. That means all the filesystems I was planning to test (except NFS, which I couldn't) are done! Any more recommendations?

I'm now going to test whether borg actually succeeds on these filesystems (I'm expecting there'll be bugs :/).
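
The CIFS setup needs both a Samba export and a client-side kernel mount. A rough sketch of the mount half — share name, mountpoint and options are invented for illustration, not the PR's actual configuration:

```shell
#!/bin/sh
# Build the mount.cifs invocation as a string so it can be inspected.
cifs_mount_cmd() {
    share=$1; mountpoint=$2
    # 'guest' pairs with an anonymous-access Samba setup on the
    # server side; uid/gid make the mounted tree writable by the
    # current (non-root) test user.
    echo "mount -t cifs -o guest,uid=$(id -u),gid=$(id -g) ${share} ${mountpoint}"
}

cifs_mount_cmd //localhost/testing /mnt/cifs
```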

@PlasmaPower PlasmaPower force-pushed the vagrant-multiple-fs branch 3 times, most recently from 9ed8620 to f57427c Compare July 23, 2016 03:14
@PlasmaPower PlasmaPower force-pushed the vagrant-multiple-fs branch from f57427c to 302f1ce Compare July 23, 2016 03:44
@PlasmaPower PlasmaPower changed the title Test multiple filesystems with Vagrant [WIP] Test multiple filesystems with Vagrant Jul 23, 2016
@PlasmaPower PlasmaPower force-pushed the vagrant-multiple-fs branch from 302f1ce to ca38a06 Compare July 23, 2016 04:09
@PlasmaPower
Contributor Author

Okay, I've added all the drivers I could find to the Vagrantfile, but I might have missed some, and some that I thought existed might not (or might have different names). I'm going to wait for the FS-specific issues to be fixed before moving forward, though; I wouldn't want to have to redo everything.

fi
# otherwise: just use the system python

testingPartitionSize=128M
Member

what is the minimum that would safely work?

Contributor Author

I don't know, and I would like to support future tests. The minimum FS size is usually small, around 10M, so I don't think that's a big problem. Maybe we should drop this to 16M? I'll check.

Contributor Author

Looks like 16M works. Switching to that now.

Contributor Author

16M fails on NTFS, switching to 32M.

Contributor Author

I've switched it back to 16M, with a custom value of 128M for NTFS because it still runs out at 32M.
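
The back-and-forth above boils down to a default size plus a per-filesystem override. A sketch of that logic — the function name and layout are hypothetical, not the PR's actual script:

```shell
#!/bin/sh
# Default partition size, with an override for filesystems whose
# metadata overhead exceeds it.
partition_size() {
    case $1 in
        ntfs) echo 128M ;;  # NTFS ran out of space even at 32M
        *)    echo 16M ;;   # 16M is enough for the other filesystems
    esac
}

partition_size ext4
partition_size ntfs
```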

@PlasmaPower PlasmaPower force-pushed the vagrant-multiple-fs branch 4 times, most recently from a9d06de to f02d87d Compare July 23, 2016 14:41
@PlasmaPower
Contributor Author

I think I'm setting some naming conventions here, @ThomasWaldmann do you agree with them? I can change them of course.

map to guest = Bad User

# Who thought it was a good idea
# to put printing in a file sharing protocol?
Contributor

lol :D

@PlasmaPower PlasmaPower force-pushed the vagrant-multiple-fs branch from f02d87d to e4ce1f3 Compare July 23, 2016 17:04
@ThomasWaldmann
Member

Names should follow PEP 8.

@PlasmaPower
Contributor Author

PlasmaPower commented Jul 24, 2016

@ThomasWaldmann Should the names of files/directories also be snake_case? I'm currently using dashes as delimiters, but I can easily switch.

@ThomasWaldmann
Member

dashes in file/dirnames are fine imho.

@PlasmaPower
Contributor Author

That seems like it'll eliminate a number of problems, and enable more tests. I'll switch all the provision commands over to privileged in the Vagrantfile, remove the sudo -u vagrant in the test driver, and do some testing.

@PlasmaPower PlasmaPower force-pushed the vagrant-multiple-fs branch 2 times, most recently from c55dba5 to ac7afa9 Compare July 31, 2016 14:28
@PlasmaPower
Contributor Author

I'm getting really odd errors on CIFS. I've put the debugging info in this gist because it's a bit long.

@PlasmaPower
Contributor Author

I'd appreciate help debugging that error. I'm not really sure what's causing it.

@ThomasWaldmann
Member

Well, seems like a crash in pytest itself.

@PlasmaPower
Contributor Author

It does, but I find it odd that it occurs right after a failure in one of our tests.

Maybe I should just disable CIFS?

@PlasmaPower
Contributor Author

PlasmaPower commented Aug 2, 2016

I can't figure this issue out, so for now at least I'm just disabling CIFS. We're already testing a network-based FS anyway.

@ThomasWaldmann
Member

Pity. Maybe we can enable it again later; CIFS is quite popular, even for internet storage boxes.

@PlasmaPower
Contributor Author

Yeah, in my local changeset (I'll push after testing FreeBSD), CIFS is just skipped over with a comment explaining the problem. Hopefully that means it'll be easy to re-enable if the problem gets fixed.

Also, I'm getting some errors on FreeBSD; an issue will be up with detailed information soon.

@PlasmaPower
Contributor Author

Once #1416 and #1434 have been resolved, I'll rerun the tests, and then this should be good to merge.

@PlasmaPower PlasmaPower force-pushed the vagrant-multiple-fs branch from 90e0af6 to b8ba248 Compare August 3, 2016 18:48
@PlasmaPower
Contributor Author

Okay, I'm going to revisit the PR to rerun tests now that some of them should have been fixed.

@enkore enkore added this to the 1.1.0b2 milestone Aug 27, 2016
@codecov-io

codecov-io commented Sep 7, 2016

Current coverage is 85.43% (diff: 100%)

Merging #1359 into master will increase coverage by 0.04%

@@             master      #1359   diff @@
==========================================
  Files            19         19          
  Lines          6125       6125          
  Methods           0          0          
  Messages          0          0          
  Branches       1033       1033          
==========================================
+ Hits           5230       5233     +3   
+ Misses          645        643     -2   
+ Partials        250        249     -1   

Powered by Codecov. Last update d3cea70...21e6384

@PlasmaPower
Contributor Author

I've been really busy, but I currently have some time to work on this (and I should be relatively free for a while). I'm running the tests for wheezy32 now; I'll post logs if anything comes up, of course.

It's unfortunate that the whole thing has to be rebuilt and dependencies have to be reinstalled for each filesystem. Is there a workaround that I could use?

@ThomasWaldmann ThomasWaldmann modified the milestones: 1.1.0b3, 1.1.0b2 Sep 16, 2016
@ThomasWaldmann
Member

@PlasmaPower not sure what you mean by rebuilt/reinstalled for each filesystem.

But as a general remark: we'll soon have 1.1.0b3, and likely that will be the last beta, followed by one or more release candidates. So it would be good to get your PR finished soon, so it can be in the 1.1.0 release.

@ThomasWaldmann
Member

Another thing: I'd like to keep the number of top-level items in the repo low, so how about moving the stuff from vagrant-tools/... to scripts/vagrant/...?

@PlasmaPower
Contributor Author

If anyone else wants to pick up this PR (and claim the bounty), that's fine. The test time is simply too long for me to chase down a lot of small issues. Each time tox is run with a different TMPDIR it seems to reinstall the dependencies; I haven't looked into preventing this, but a symlink in the TMPDIR to a central location might fix it.
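
The symlink idea mentioned above could be sketched roughly as follows — an untested suggestion with illustrative directory names, not something from the PR:

```shell
#!/bin/sh
set -e
# Share one tox environment across per-filesystem TMPDIRs by
# symlinking each TMPDIR's .tox to a central cache
# (directory names are illustrative).
central=/tmp/tox-cache
mkdir -p "$central"

for fs_tmpdir in /tmp/fs-ext4 /tmp/fs-xfs; do
    mkdir -p "$fs_tmpdir"
    # -f: replace an existing link; -n: treat an existing symlink
    # to a directory as a file rather than descending into it
    ln -sfn "$central" "$fs_tmpdir/.tox"
done
```

Whether tox actually honors this depends on where it creates its work directory, so it would need verification.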

@PlasmaPower
Contributor Author

And yes, tests are failing on various filesystems, but a lot of the time it's hard to tell where the problem is coming from, and whether it's the actual tests or just my test driver. It also seems to be somewhat OS-dependent.

@ThomasWaldmann ThomasWaldmann self-assigned this Nov 10, 2016
@ThomasWaldmann
Member

continued in #1820.

@ThomasWaldmann ThomasWaldmann removed this from the 1.1.0b3 milestone Nov 10, 2016
@PlasmaPower
Contributor Author

Great! I've followed that issue; I'll be happy to help if you're wondering why something's there or what it does.

Development

Successfully merging this pull request may close these issues.

Test on many file systems automatically

4 participants