
@ROpdebee
Owner

@ROpdebee ROpdebee commented Sep 26, 2021

WIP, but sharing early, especially because this PR might grow to quite a large size.

Run tests using npm run test, npm t, or npx jest.

The order of jobs in CI is currently a bit backwards (test -> build -> release makes little sense), but it works for now. Once tests that are run in an actual browser are added, that order will have to be adapted.
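For illustration only, the current ordering in GitHub Actions terms; the job names here are assumptions, not the repo's actual workflow:

```yaml
jobs:
  test:
    # currently runs first, even though a failing build would make test results moot
  build:
    needs: test
  release:
    needs: build
    # future in-browser tests would need to slot in between build and release
```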

This also fixes the remaining linter warnings about using `any` in
the catch clauses. We're using ts-custom-error since extending the
native `Error` class in ES6 is tricky.
The custom error class names get minified, so if we just use the
stringified version of an error, it shows up as "<name>: <message>"
where <name> is the minified class name, which doesn't look nice. We
could fix that by manually assigning the name in each custom error
class, but then the error messages would read like
"NetworkError: network error", which again isn't ideal.
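The ES6 pitfall referenced here can be sketched as follows. This is an illustration of what ts-custom-error handles for us, not the project's actual error classes; the class name and message are made up:

```typescript
// When TS targets ES5, subclassing the built-in Error breaks the
// prototype chain, so `instanceof` checks fail. ts-custom-error repairs
// this internally; the manual equivalent looks roughly like this:
class NetworkError extends Error {
    constructor(message: string) {
        super(message);
        // Restore the prototype chain that a down-levelled `super()` may break.
        Object.setPrototypeOf(this, new.target.prototype);
        this.name = 'NetworkError';
    }
}

const err = new NetworkError('request timed out');
console.log(err instanceof NetworkError); // true
console.log(String(err)); // "NetworkError: request timed out"
```

The last line shows the "<name>: <message>" stringification discussed above: without the manual `name` assignment, <name> would be the minified class name.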
* Don't do nullish coalescing or optional chaining for values that
  are never null, e.g. string.split always returns an array with at
  least one element, even if the string is empty.
* More strict checking of previous footers.
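The split() guarantee mentioned in the first bullet, as a standalone snippet (not project code):

```typescript
// String.prototype.split never returns an empty array: even for the
// empty string it yields [''], so indexing [0] is always safe and needs
// no optional chaining or nullish coalescing.
const parts = ''.split('/');
console.log(parts.length);       // 1
console.log(parts[0] === '');    // true
```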
@ROpdebee
Owner Author

Every time I add tests to a module, I find stuff to refactor, so I'm half-tempted to merge this already and then work on tests and refactoring simultaneously in smaller PRs.

For example, the gmxhr refactoring (#69) already conflicts with the tests added here...

@kellnerd
Collaborator

You don't have to create a separate branch for each change just for the purpose of easing reviews, do whatever feels most convenient for you. As long as there are separate commits for the different changes, I have no problem with a huge "refactoring & tests" PR. However, I think that I won't have time to review any PR in detail until the end of the week.

@ROpdebee
Owner Author

I decided against refactoring qs/qsa/qsMaybe because both qs and qsa do useful things (asserting existence and converting to a real array, respectively). qsMaybe does nothing but wrap querySelector though, so it's a bit useless, but IMO it makes sense to keep it for the sake of consistency.
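For reference, a hypothetical reconstruction of the three helpers as described; the actual signatures in the repo may differ:

```typescript
// qs: asserts that a matching element exists, so callers get a non-null type.
function qs<E extends Element>(selector: string, parent: ParentNode = document): E {
    const result = parent.querySelector<E>(selector);
    if (result === null) throw new Error(`Expected to find element matching ${selector}`);
    return result;
}

// qsa: converts the returned NodeList into a real array with array methods.
function qsa<E extends Element>(selector: string, parent: ParentNode = document): E[] {
    return [...parent.querySelectorAll<E>(selector)];
}

// qsMaybe: nothing but a wrapper around querySelector, kept for consistency.
function qsMaybe<E extends Element>(selector: string, parent: ParentNode = document): E | null {
    return parent.querySelector<E>(selector);
}
```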

Also changing private members to use TS `private` instead of ES `#`
since we can bypass TS's `private`, but not ES's `#`
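A minimal illustration of the difference (made-up classes, not the project's actual code):

```typescript
// TypeScript's `private` is erased at compile time; a cast to `any`
// bypasses it, which is handy when tests need to inspect internals.
class WithTsPrivate {
    private secret = 42;
}
console.log((new WithTsPrivate() as any).secret); // 42

// An ES `#` field is a hard runtime barrier: it cannot be reached from
// outside the class, casts included.
class WithEsPrivate {
    #secret = 42;
    reveal(): number { return this.#secret; }
}
console.log((new WithEsPrivate() as any).secret); // undefined
console.log(new WithEsPrivate().reveal());        // 42
```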
@ROpdebee
Owner Author

I've done some experimentation, and it appears that we'll run into quite a lot of problems if we try to test files containing JSX. We'd need to run them through Babel first, but we can't do that with nativejsx currently. I don't feel like changing the way we handle JSX at the moment. I could write a Babel plugin, but that's of course not going to be straightforward, and I doubt it's worth it. On top of all that, the SCSS imports will be quite a challenge to deal with too.

Instead, I'd propose to split off the UI elements into separate TSX files whose sole responsibility will be UI stuff. I should be able to mock those imports in the unit tests, and besides, it's good software design to separate UI from logic anyway :)

These are leftovers from the blind votes port, which isn't finished
yet. They're saved in some other WIP branch currently. Removing these
will lead to merge conflicts whenever that WIP branch gets merged,
but I'm not sure when that will ever happen, so let's just clean up
some of the clutter for now.
@ROpdebee
Owner Author

@kellnerd Marking this one as ready, as the lib and build stuff is fully covered with these tests, where possible*. I'm going to branch off to a separate PR to test the ECAU script source, since this one has already grown sufficiently.

Feel free to take a look through some of the changes at your own pace if you'd like; merging these changes isn't urgent. I'll likely go over the history once more myself soon-ish. Just let me know if you intend to review, or if I should just merge whenever I complete my double-checking pass. I'm fine with either. :)

*In terms of build, I'm only really testing the userscript metadata generator, not anything else. I'd rather not touch the build scripts too much, since I can't fully remember how they work 😅 Large swaths of their code are very difficult to properly test, and most (if not all) of that code is exercised by the CI build anyway. That of course doesn't verify that the scripts run properly when built, but I intend to add real E2E browser tests at some point in the near future anyway, after unit testing the script code itself.

@ROpdebee ROpdebee marked this pull request as ready for review September 30, 2021 17:49
@kellnerd
Collaborator

I will have a closer look over the weekend and might leave you some comments 😄


@kellnerd kellnerd left a comment


Looks good to me, I have run the tests locally with jest --verbose and jest --coverage. Lots of tests, which should cover all the code in src/lib if I can trust jest. I definitely would have been too lazy to write most of them ("this method is unlikely to break, that one too..." 😅).

P.S. Just to make it clear: while I know the theory of unit testing (in JS and Python), I have never actually written tests using a framework, since my projects are usually too small for me to bother, or die before they reach a state that would require a proper testing framework.

node_modules
dist/
.DS_Store
coverage/
Collaborator


I had just wondered what the origin of this directory could be, until I found out that I can get code coverage reports with jest --coverage.

Owner Author


Yeah, I should add a package script for coverage, and maybe integrate some external service to visualise coverage after CI too.

ROpdebee and others added 2 commits October 3, 2021 19:22
Co-authored-by: David Kellner <52860029+kellnerd@users.noreply.github.com>
Clarify test names and reorder test cases to highlight the difference
between two test cases
@ROpdebee
Copy link
Owner Author

ROpdebee commented Oct 3, 2021

> I definitely would have been too lazy to write most of them ("this method is unlikely to break, that one too..." 😅).

I always tell students that 100% code coverage doesn't mean much more than e.g. 90% code coverage. Your tests can achieve 100% coverage, but if you've got code like

`function abs(x) { return x; }`

and you only test `abs(1)`, `abs(0)`, and `abs(100)`, you'll get 100% code coverage, but your code will still do the wrong thing on `abs(-1)`.
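Spelled out as a runnable snippet (purely illustrative, not code from this PR):

```typescript
// The deliberately buggy function from the example above.
function abs(x: number): number {
    return x; // bug: should be x < 0 ? -x : x
}

// These three calls execute every statement, so line coverage is 100%...
console.log(abs(1));   // 1
console.log(abs(0));   // 0
console.log(abs(100)); // 100

// ...yet the function is still wrong for negative inputs:
console.log(abs(-1));  // -1, not the expected 1
```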

Nevertheless, 100% code coverage looks nice :) I use coverage mainly as guidance to check what I might have forgotten to test. If everything is green in the coverage overview now, and next week I make a change and suddenly it isn't anymore, I'll immediately know that certain parts need to be tested again. But I'm not going to go out of my way to cover some statement that is nearly impossible to reach; I'll just add an `/* istanbul ignore next */` comment before it (istanbul being the coverage provider) if I'm reasonably sure that 1) the statement is correct, or 2) there will be another way to fail the deployment if it isn't.

Thanks for the review!

@ROpdebee ROpdebee merged commit 6d06e4e into main Oct 3, 2021
@ROpdebee ROpdebee deleted the add-tests branch October 3, 2021 17:40
@ROpdebee ROpdebee restored the add-tests branch October 17, 2021 12:16
@ROpdebee ROpdebee deleted the add-tests branch October 17, 2021 12:17