Conversation
Is there a preview of the documentation report available somewhere?
The documentation presented online only reflects the status of the main branch. For a preview on your own machine, you can run the shell script generate_documentation.sh, located in the folder TSF/scripts. The script generates the documentation and publishes it to a locally hosted website.
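The invocation described above can be sketched as a small guard, so the build is only attempted when run from the repository root. This is a hedged sketch: only the path TSF/scripts/generate_documentation.sh comes from the answer above; the helper name and fallback message are illustrative.

```python
from pathlib import Path
import subprocess

def generate_docs(root: Path = Path(".")) -> str:
    """Run generate_documentation.sh if present (path per the answer above);
    return a status message when invoked outside the repository root."""
    script = root / "TSF" / "scripts" / "generate_documentation.sh"
    if script.exists():
        # Delegate to the project's shell script, failing loudly on errors.
        subprocess.run(["bash", str(script)], check=True)
        return "documentation generated"
    return f"expected {script} - run this from the repository root"

print(generate_docs())
```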
* Add reference to nodes
* add custom reference ItemReference
* document ItemReference
* add minor tests for ItemReference
* add table for scoring rationale to documentation
* add ItemReferences: for each item also referring to references of other nodes, add the corresponding ItemReferences
* change ItemReference: since each reference's content was calculated, performance was quite bad; the change of the content is therefore now only detected indirectly
* fix orphan-nodes
* PJD-04 already covered by NPFs -> removed PJD-04
* linked TA-UPDATES and AOU-27
* remove unused code
* re-formulate documentation for clarification
* adapt formulation to be precise
* remove unused pydot
* change import of LocalFileReference to suppress warnings
* add clarification of scope
* improve function-reference
* correct section counting
* fix typo SCORE -> S-CORE
* fix function-reference
* add support for multiline function declarations
* add test for multiline function declaration
* fix error-messages
Force-pushed 8a42aa9 to e8ea224
* remove reference ListOfTestCases
* add folder artifacts to gitignore
* add generator for the list of test environments
* generate list of test environments
* add validator for list of test environments
* document generator
* implement evidence
* remove obsolete tests
* add tests for ListOfTestsGenerator
* repair pytest execution

---------

Signed-off-by: Erik Hu <135733975+Erikhu1@users.noreply.github.com>
Signed-off-by: Jonas-Kirchhoff <jonas.kirchhoff@d-fine.com>
Co-authored-by: Erik Hu <erik.hu@d-fine.com>
Co-authored-by: Erik Hu <135733975+Erikhu1@users.noreply.github.com>
The local generation of the documentation previously required the database MemoryEfficientTestResults.db, which is generated in the ubuntu workflow, to be present in the folder artifacts in the root of the repository. Otherwise, the reference class ListOfTestCases would raise an error on execution, which prevented trudag from running successfully. We have removed this hurdle, so that the local execution of generate_documentation.sh does not require any further modification.
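The fix described above amounts to degrading gracefully when the workflow artifact is absent instead of raising. A minimal sketch of that pattern, assuming the artifact path from the comment; the function name and return contract are illustrative, not the actual ListOfTestCases implementation:

```python
from pathlib import Path
import sqlite3

def load_test_results(db_path: Path):
    """Return rows from the workflow artifact when it is present,
    or None so the local documentation build can proceed without it."""
    if not db_path.is_file():
        return None  # artifact missing locally: skip instead of raising
    with sqlite3.connect(db_path) as conn:
        # sqlite_master always exists; the real schema is not shown here.
        return conn.execute("SELECT name FROM sqlite_master").fetchall()

print(load_test_results(Path("artifacts/MemoryEfficientTestResults.db")))
```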
Force-pushed c21ec61 to 9301785
To ensure that the correct version of trudag (2025.7.23 or later) is used, and that all imports work as expected, it is recommended to execute the scripts only from within the virtual environment supplied with the project. In case the virtual environment does not activate automatically (for example, when working outside the devcontainer), please execute the script .devcontainer/post_create_script.sh.
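The "2025.7.23 or later" requirement above can be checked with a plain tuple comparison on the dotted version string. This is a hedged sketch: the minimum version comes from the comment, while the helper name is an assumption (trudag's own version-reporting interface is not shown here).

```python
MIN_TRUDAG = (2025, 7, 23)  # minimum version named in the comment above

def is_supported(version: str) -> bool:
    """True when a dotted version string is 2025.7.23 or later."""
    # Tuples compare element-wise, so (2025, 7, 23) >= (2025, 6, 1) holds.
    return tuple(int(part) for part in version.split(".")) >= MIN_TRUDAG

print(is_supported("2025.7.23"))  # the minimum itself qualifies
print(is_supported("2025.6.1"))
```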
Signed-off-by: Erik Hu <135733975+Erikhu1@users.noreply.github.com>
TSF/scripts/README.md
Outdated
## generate_list_of_tests.py

The python script [generate_list_of_tests.py](generate_list_of_tests.py) is used to generate list_of_test_cases.md, listing the expected test cases with their execution environments.
To run this script, the database `MemoryEfficientTestResults.db` must be downloaded from the artifact of the most recent successful ubuntu workflow and placed in the folder `artifacts` in the root of the repository.
A comment in the PR suggests that this is no longer true.
This is still true, albeit badly explained. The file is stored persistently, and a validator cross-checks a file generated from the most recent test results against the stored file. Therefore, the script needs to be run locally whenever anything in the tests changes, so that list_of_test_environments.md remains valid. More explanation of this has been added.
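The cross-check described above boils down to comparing the persisted list with one regenerated from the latest test results. A minimal sketch under that assumption; the function name and line-based normalization are illustrative, not the project's actual validator:

```python
def lists_match(stored: str, regenerated: str) -> bool:
    """Compare the persisted list of test environments with a freshly
    regenerated one, ignoring trailing whitespace and blank edges."""
    def normalize(text: str) -> list[str]:
        return [line.rstrip() for line in text.strip().splitlines()]
    return normalize(stored) == normalize(regenerated)

print(lists_match("env: ubuntu\n", "env: ubuntu"))  # equivalent content passes
print(lists_match("env: ubuntu", "env: windows"))
```

If the comparison fails, the stored file is stale and the generator must be re-run locally, as the comment explains.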
aschemmel-tech
left a comment
I managed to create a document with some manual support. One comment may be worth considering. I could also approve.
* library version
* AOU-03 shall
* AOU-25 'used' standard library
* fix AOU-26
* add context diagram
* fix introduction
* Review round 2 jk (#93)
* Review round 2 jk (#96)
* Review round 2 jk (#101)
* add root to syspath
* remove @classmethods from init
* clean up
* clean up
* explain generate_list_of_tests.py

---------

Signed-off-by: Erik Hu <135733975+Erikhu1@users.noreply.github.com>
Signed-off-by: Jonas-Kirchhoff <jonas.kirchhoff@d-fine.com>
Co-authored-by: Jonas-Kirchhoff <jonas.kirchhoff@d-fine.com>
Addressing all feedback in #73 (comment)