Rationalize and expand testing model #24
Merged
Commits (7):
c2a1cc6  Added OneHopTests and StandardsValidation to TestObjectiveEnum; rege… (RichardBruskiewich)
d469c1a  Subclass KnowledgeGraphNavigationTestCase from ComplianceTestCase (RichardBruskiewich)
180869f  Refactored runner_settings into test_run_settings inside TestEntity; … (RichardBruskiewich)
15b6163  Repaired example data to fix unit tests (RichardBruskiewich)
4b38ace  ditto (RichardBruskiewich)
51964bb  Fixed TestObjectiveEnum values as per feedback from Max (RichardBruskiewich)
f1268b3  Merge branch 'main' into add-new-test-object-enum and regenerated all… (RichardBruskiewich)
I don't think `components` should be all the way up at the TestRunSession level; I think it needs to stay down at the TestCase level. We could conceivably want to run tests against different components in the same Test Run (Session).
A use case could be TRAPI validation: we would hit all levels (KPs, ARAs, the ARS) and make sure that none of them is breaking the standard.
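The mixed-component session described above can be sketched in a few lines of Python. This is a sketch only, not the actual model: the class and slot names (TestCase, TestRunSession, component) echo the thread, while everything else, including the specific infores identifiers, is purely illustrative.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    name: str
    component: str  # infores identifier of the target KP, ARA, or ARS (illustrative)

@dataclass
class TestRunSession:
    test_cases: List[TestCase]

# One TRAPI-validation run that mixes component levels in a single session,
# which is only possible when `component` lives on the TestCase:
session = TestRunSession(test_cases=[
    TestCase("kp_trapi_check", "infores:automat-robokop"),
    TestCase("ara_trapi_check", "infores:aragorn"),
    TestCase("ars_trapi_check", "infores:ars"),
])
targets = sorted({case.component for case in session.test_cases})
print(targets)  # ['infores:aragorn', 'infores:ars', 'infores:automat-robokop']
```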
Hi @maximusunc,
`components` are a `List[str]` of infores identifiers of target components. The implicit assumption that was made (perhaps incorrectly, I guess) was that all of the `test_entities` (test data: TestAssets/Cases/Suites) of a given TestRunSession will be run against all of the specified `components`. If a TestRunSession is a heterogeneous mix of testing targets for which the data is not mapped one-to-one, then of course, this assumption is violated. I guess it just depends how one decides to cut the cake.
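The implicit fan-out assumption just described (every test entity run against every listed component) amounts to a cross product. A minimal sketch, assuming the slot names `components` and `test_entities` from the thread and treating test entities as plain identifier strings for brevity:

```python
from dataclasses import dataclass, field
from itertools import product
from typing import List, Tuple

@dataclass
class TestRunSession:
    components: List[str] = field(default_factory=list)      # infores identifiers
    test_entities: List[str] = field(default_factory=list)   # TestAsset/Case/Suite ids

    def planned_runs(self) -> List[Tuple[str, str]]:
        # The implicit assumption under discussion: every test entity is run
        # against every specified component, i.e. a full cross product.
        return list(product(self.test_entities, self.components))

# Illustrative identifiers only:
session = TestRunSession(
    components=["infores:aragorn", "infores:arax"],
    test_entities=["asset_1", "asset_2"],
)
print(len(session.planned_runs()))  # 2 entities x 2 components = 4 runs
```

A heterogeneous session whose data is not meant for every target violates exactly this cross-product expansion, which is the mismatch the comment above points out.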
As promised @maximusunc (cc: @sierra-moxon), I will try to express my sense of the testing landscape. I suspect there is no "right answer" to this... that your current implementation is equally tenable... Just trying to "think out loud" here.
Anyhow, I would see the planning of the test process as an ordered series of test configuration decisions. Here's an order I have sort of followed in earlier testing work:
My general sense is that the answers to all of the above questions could be aggregated within a top level TestRunSession declaration.
What I understand, however, is that from your perspective, a TestRunSession is mainly a scheduled (timestamped) container for any ad hoc set of test data (TestAssets/Cases/Suites) to be processed, and that the test data itself will specify where it is consumed (i.e. which component); thus, different members of the `test_entities` array of test data could be routed to different target components, albeit all processed by the same specified TestRunner. Nothing wrong with that, I guess... one is simply embedding the answer to question 1 above inside the test data returned in response to question 5. On that note, would the `test_env` also belong there as well, or does that parameter tend to be more global to a given test run?

Just an alternate way of organizing the testing... "cutting the cake..." LOL. In terms of granularity, I might tend to suggest that the granularity for this is the TestSuite - in that components might "own" a TestSuite, right (even if in some cases, there is duplication of test data in TestSuite data across components)?
I guess that is actually what the legacy SRI_Testing test harness did to some extent, since the test data was resolved from KPs using their Translator SmartAPI `info.x-trapi.test_data_location` property. That said, the top-level test run specification decision of SRI_Testing was still to specify which component(s) were to be tested... OK... just a different way of cutting the cake.

In the context of the above list of questions, I should probably ask your opinion about question 4. In particular, I suppose that TestRunner-specific parameters like TRAPI version and Biolink version (e.g. for a standards validation test runner) could be pushed down into the TestSuite level, if it is the case that specific TestSuite instances are run as checks on standards. Alternately, if one expects to generally run the standards validation as an overlay test using many of the test data sets used in other tests, then where is it best specified? The TestRunSession seemed to be the best location, but if so, should these Translator-level declarations be specified at the TestHarness level CLI (perhaps defaulting to the 'current' consortium releases, if not specified)?
Come to think of it, the TestRunner also currently seems to be specified at the TestCase level. This seems very granular. Is there a possible rationale for moving this up to the TestSuite level, or even the TestRunSession level (actually, there is already a `test_runner_name` slot in TestRunSession which reflects this idea...)? More alternative ways of cutting the cake?

Any further thoughts @maximusunc and @sierra-moxon on any or all of the above?