From 8c57a68ff555d6fcd84b9ca12a51845070a20e58 Mon Sep 17 00:00:00 2001
From: J Boddey
Date: Mon, 21 Aug 2023 09:25:16 +0100
Subject: [PATCH 01/33] Merge dev into main (Sprint 10 and 11) (#86)

* Implement test orchestrator (#4) * Initial work on test-orchestrator * Ignore runtime folder * Update runtime directory for test modules * Fix logging Add initial framework for running tests * logging and misc cleanup * logging changes * Add a stop hook after all tests complete * Refactor test_orc code * Add arg passing Add option to use locally cloned via install or remote via main project network orchestrator * Fix baseline module Fix orchestrator exiting only after timeout * Add result file to baseline test module Change result format to match closer to design doc * Refactor pylint * Skip test module if it failed to start * Refactor * Check for valid log level --------- Co-authored-by: Jacob Boddey * Add issue report templates (#7) * Add issue templates * Update README.md * Discover devices on the network (#5) * Test run sync (#8) * Initial work on test-orchestrator * Ignore runtime folder * Update runtime directory for test modules * Fix logging Add initial framework for running tests * logging and misc cleanup * logging changes * Add a stop hook after all tests complete * Refactor test_orc code * Add arg passing Add option to use locally cloned via install or remote via main project network orchestrator * Fix baseline module Fix orchestrator exiting only after timeout * Add result file to baseline test module Change result format to match closer to design doc * Refactor pylint * Skip test module if it failed to start * Refactor * Check for valid log level * Add config file arg Misc changes to network start procedure * fix merge issues * Update runner and test orch procedure Add useful runtime args * Restructure test run startup process Misc updates to work with net orch updates * Refactor --------- * Quick refactor (#9) * Fix duplicate sleep calls * Add net orc (#11) * Add network orchestrator repository * cleanup duplicate start and install scripts * Temporary fix for python dependencies * Remove duplicate python requirements * remove duplicate conf files * remove remote-net option * clean up unnecessary files * Add the DNS test module (#12) * Add network orchestrator repository * cleanup duplicate start and install scripts * Temporary fix for python dependencies * Remove duplicate python requirements * remove duplicate conf files * remove remote-net option * clean up unnecessary files * Add dns test module Fix test module build process * Add mac address of device under test to test container Update dns test to use mac address filter * Update dns module tests * Change result output * logging update * Update test module for better reusability * Load in module config to test module * logging cleanup * Update baseline module to new template Misc cleanup * Add ability to disable individual tests * remove duplicate readme * Update device directories * Remove local folder * Update device template Update test module to work with new device config file format * Change test module network config options Do not start network services for modules not configured for network * Refactor --------- * Add baseline and pylint tests (#25) * Discover devices on the network (#22) * Discover devices on the network * Add defaults when missing from config Implement monitor wait period from config * Add steady state monitor Remove duplicate callback registrations * Load devices into network orchestrator during testrun start
--------- Co-authored-by: jhughesbiot * Build dependencies first (#21) * Build dependencies first * Remove debug message * Add depend on option to test modules * Re-add single interface option * Import subprocess --------- Co-authored-by: jhughesbiot * Port scan test module (#23) * Add network orchestrator repository * cleanup duplicate start and install scripts * Temporary fix for python dependencies * Remove duplicate python requirements * remove duplicate conf files * remove remote-net option * cleanp unecessary files * Add dns test module Fix test module build process * Add mac address of device under test to test container Update dns test to use mac address filter * Update dns module tests * Change result output * logging update * Update test module for better reusability * Load in module config to test module * logging cleanup * Update baseline module to new template Misc cleanup * Add ability to disable individual tests * remove duplicate readme * Update device directories * Remove local folder * Update device template Update test module to work with new device config file format * Change test module network config options Do not start network services for modules not configured for network * Initial nmap test module add Add device ip resolving to base module Add network mounting for test modules * Update ipv4 device resolving in test modules * Map in ip subnets and remove hard coded references * Add ftp port test * Add ability to pass config for individual tests within a module Update nmap module scan to run tests based on config * Add full module check for compliance * Add all tcp port scans to config * Update nmap commands to match existing DAQ tests Add udp scanning and tests * logging cleanup * Update TCP port scanning range Update logging * Merge device config into module config Update device template * fix merge issues * Update timeouts Add multi-threading for multiple scanns to run simultaneously Add option to use scan scripts for services * Fix merge issues * Fix device configs * Remove unecessary files * Cleanup duplicate properties * Cleanup install script * Formatting (#26) * Fix pylint issues in net orc * more pylint fixes * fix listener lint issues * fix logger lint issues * fix validator lint issues * fix util lint issues * Update base network module linting issues * Cleanup linter issues for dhcp modules Remove old code testing code * change to single quote delimeter * Cleanup linter issues for ntp module * Cleanup linter issues for radius module * Cleanup linter issues for template module * fix linter issues with faux-dev * Test results (#27) * Collect all module test results * Fix test modules without config options * Add timestamp to test results * Test results (#28) * Collect all module test results * Fix test modules without config options * Add timestamp to test results * Add attempt timing and device info to test results * Ignore disabled test containers when generating results * Fully skip modules that are disabled * Fix pylint test and skip internet tests so CI passes (#29) * disable internet checks for pass * fix pylint test * Increase pylint score (#31) * More formatting fixes * More formatting fixes * More formatting fixes * More formatting fixes * Misc pylint fixes Fix test module logger --------- Co-authored-by: jhughesbiot * Pylint (#32) * More formatting fixes * More formatting fixes * More formatting fixes * More formatting fixes * Misc pylint fixes Fix test module logger * remove unused files * more formatting * revert breaking pylint changes * more 
formatting * fix results file * More formatting * ovs module formatting --------- Co-authored-by: Jacob Boddey * Add license header (#36) * More formatting fixes * More formatting fixes * More formatting fixes * More formatting fixes * Misc pylint fixes Fix test module logger * remove unused files * more formatting * revert breaking pylint changes * more formatting * fix results file * More formatting * ovs module formatting * Add ovs control into network orchestrator * Add verification methods for the base network * Add network validation and misc logging updates * remove ovs module * add license header to all python files --------- Co-authored-by: Jacob Boddey Co-authored-by: SuperJonotron * Ovs (#35) * More formatting fixes * More formatting fixes * More formatting fixes * More formatting fixes * Misc pylint fixes Fix test module logger * remove unused files * more formatting * revert breaking pylint changes * more formatting * fix results file * More formatting * ovs module formatting * Add ovs control into network orchestrator * Add verification methods for the base network * Add network validation and misc logging updates * remove ovs module --------- Co-authored-by: Jacob Boddey Co-authored-by: SuperJonotron * remove ovs files added back in during merge * Nmap (#38) * More formatting fixes * More formatting fixes * More formatting fixes * More formatting fixes * Misc pylint fixes Fix test module logger * remove unused files * more formatting * revert breaking pylint changes * more formatting * fix results file * More formatting * ovs module formatting * Add ovs control into network orchestrator * Add verification methods for the base network * Add network validation and misc logging updates * remove ovs module * add license header to all python files * Update tcp scans to speed up full port range scan Add version checking Implement ssh version checking * Add unknown port checks Match unknown ports to existing services Add unknown ports without existing services to results file --------- Co-authored-by: Jacob Boddey Co-authored-by: SuperJonotron * Create startup capture (#37) * Connection (#40) * Initial add of connection test module with ping test * Update host user resolving * Update host user resolving for validator * add get user method to validator * Conn mac oui (#42) * Initial add of connection test module with ping test * Update host user resolving * Update host user resolving for validator * add get user method to validator * Add mac_oui test Add option to return test result and details of test for reporting * Con mac address (#43) * Initial add of connection test module with ping test * Update host user resolving * Update host user resolving for validator * add get user method to validator * Add mac_oui test Add option to return test result and details of test for reporting * Add connection.mac_address test * Dns (#44) * Add MDNS test * Update existing mdns logging to be more consistent with other tests * Add startup and monitor captures * File permissions (#45) * Fix validator file permissions * Fix test module permissions * Fix device capture file permissions * Fix device results permissions * Add connection single ip test (#47) * Nmap results (#49) * Update processing of nmap results to use xml output and json conversions for stability * Update matching with regex to prevent wrong service matches and duplicate processing for partial matches * Update max port scan range * Framework restructure (#50) * Restructure framework and modules * Fix CI paths * Fix base module * 
Add build script * Remove build logs * Update base and template docker files to fit the new format Implement a template option on network modules Fix skipping of base image build * remove base image build in ci * Remove group from chown --------- Co-authored-by: jhughesbiot * Ip control (#51) * Add initial work for ip control module * Implement ip control module with additional cleanup methods * Update link check to not use error stream * Add error checking around container network configurations * Add network cleanup for namespaces and links * formatting * Move config to /local (#52) * Move config to /local * Fix testing config * Fix ovs_control config location * Fix faux dev config location * Add documentation (#53) * Sync dev to main (#56) * Merge dev into main (Sprint 7 and 8) (#33) --------- Co-authored-by: jhughesbiot <50999916+jhughesbiot@users.noreply.github.com> Co-authored-by: jhughesbiot Co-authored-by: Noureddine Co-authored-by: SuperJonotron * Sprint 8 Hotfix (#54) * Fix connection results.json * Re add try/catch * Fix log level * Debug test module load order * Add depends on to nmap module * Remove logging change --------- Co-authored-by: jhughesbiot <50999916+jhughesbiot@users.noreply.github.com> Co-authored-by: jhughesbiot Co-authored-by: Noureddine Co-authored-by: SuperJonotron * Fix missing results on udp tests when tcp ports are also defined (#59) * Add licence header (#61) * Resolve merge conflict * Add network docs (#63) * Add network docs * Rename to readme * Add link to template module * Dhcp (#64) * Add initial work for ip control module * Implement ip control module with additional cleanup methods * Update link check to not use error stream * Add error checking around container network configurations * Add network cleanup for namespaces and links * formatting * initial work on adding grpc functions for dhcp tests * rework code to allow for better usage and unit
testing * working poc for test containers and grpc client to dhcp-1 * Move grpc client code into base image * Move grpc proto builds outside of dockerfile into module startup script * Setup pythonpath var in test module base startup process misc cleanup * pylinting and logging updates * Add python path resolving to network modules Update grpc path to prevent conflicts misc pylinting * Change lease resolving method to fix pylint issue * cleanup unit tests * cleanup unit tests * Add grpc updates to dhcp2 module Update dhcp_config to deal with missing optional variables * Add grpc updates to dhcp2 module Update dhcp_config to deal with missing optional variables * fix line endings * misc cleanup * Move isc-dhcp-server and radvd to services Move DHCP server monitoring and booting to python script * Add grpc methods to interact with dhcp_server module Update dhcp_server to control radvd server directly from calls Fix radvd service status method * Add updates to dhcp2 module Update radvd service * Add license headers * Add connection.dhcp_address test (#68) * Add NTP tests (#60) * Add ntp support test * Add extra log message * Modify descriptions * Pylint * Pylint (#69) --------- Co-authored-by: jhughesbiot <50999916+jhughesbiot@users.noreply.github.com> * Add ipv6 tests (#65) * Add ipv6 tests * Check for ND_NS * Connection private address (#71) * Add ntp support test * Add extra log message * Modify descriptions * Pylint * formatting * Change isc-dhcp service setup Fix dhcpd logging Add start and stop methods to grpc dhcp client Add dhcp2 client Inttial private_addr test * Add max lease time Add unit tests * fix last commit * finish initial work on test * pylinting * Breakup test and allow better failure reporting * restore network after test * Wait for device to get a lease from original dhcp range after network restore * pylinting * Fix ipv6 tests --------- Co-authored-by: Jacob Boddey * fix windows line ending * Fix python import * move isc-dhcp service commands to their own class update logging pylinting * fix dhcp1 * Initial CI testing for tests (#72) * Fix radvd conf * Fix individual test disable * Add NTP Pass CI test (#76) * add shared address test (#75) * Fix single ip test (#58) * Fix single ip test from detecting faux-device during validation as a failure * remove dhcp server capture file from scan --------- Co-authored-by: J Boddey * Merge API into dev (#70) * Start API * Write interfaces * Get current configuration * Set versions * Add more API methods * Correct no-ui flag * Do not launch API on baseline test * Move loading devices back to Test Run core * Merge dev into api (#74) * Merge dev into main (Add license header) (#62) Add license header * Add network docs (#63) * Add network docs * Rename to readme * Add link to template module * Dhcp (#64) * Add initial work for ip control module * Implement ip control module with additional cleanup methods * Update link check to not use error stream * Add error checking around container network configurations * Add network cleanup for namespaces and links * formatting * initial work on adding grpc functions for dhcp tests * rework code to allow for better usage and unit testing * working poc for test containers and grpc client to dhcp-1 * Move grpc client code into base image * Move grpc proto builds outside of dockerfile into module startup script * Setup pythonpath var in test module base startup process misc cleanup * pylinting and logging updates * Add python path resolving to network modules Update grpc path to prevent conflicts 
misc pylinting * Change lease resolving method to fix pylint issue * cleanup unit tests * cleanup unit tests * Add grpc updates to dhcp2 module Update dhcp_config to deal with missing optional variables * Add grpc updates to dhcp2 module Update dhcp_config to deal with missing optional variables * fix line endings * misc cleanup * Dhcp (#67) * Add initial work for ip control module * Implement ip control module with additional cleanup methods * Update link check to not use error stream * Add error checking around container network configurations * Add network cleanup for namespaces and links * formatting * initial work on adding grpc functions for dhcp tests * rework code to allow for better usage and unit testing * working poc for test containers and grpc client to dhcp-1 * Move grpc client code into base image * Move grpc proto builds outside of dockerfile into module startup script * Setup pythonpath var in test module base startup process misc cleanup * pylinting and logging updates * Add python path resolving to network modules Update grpc path to prevent conflicts misc pylinting * Change lease resolving method to fix pylint issue * cleanup unit tests * cleanup unit tests * Add grpc updates to dhcp2 module Update dhcp_config to deal with missing optional variables * Add grpc updates to dhcp2 module Update dhcp_config to deal with missing optional variables * fix line endings * misc cleanup * Move isc-dhcp-server and radvd to services Move DHCP server monitoring and booting to python script * Add grpc methods to interact with dhcp_server module Update dhcp_server to control radvd server directly from calls Fix radvd service status method * Add updates to dhcp2 module Update radvd service * Add license headers * Add connection.dhcp_address test (#68) * Add NTP tests (#60) * Add ntp support test * Add extra log message * Modify descriptions * Pylint * Pylint (#69) --------- Co-authored-by: jhughesbiot <50999916+jhughesbiot@users.noreply.github.com> * Add ipv6 tests (#65) * Add ipv6 tests * Check for ND_NS * Merge dev into main (Sprint 9) (#66) --------- Co-authored-by: jhughesbiot <50999916+jhughesbiot@users.noreply.github.com> Co-authored-by: jhughesbiot Co-authored-by: Noureddine Co-authored-by: SuperJonotron * Fix testing command * Disable API on testing * Add API session * Remove old method * Remove local vars * Replace old var * Add device config * Add device configs * Fix paths * Change MAC address * Revert mac * Fix copy path * Debug loading devices * Remove reference * Changes * Re-add checks to prevent null values * Fix variable * Fix * Use dict instead of string * Try without json conversion * Container output to log * Undo changes to nmap module * Add post devices route --------- Co-authored-by: jhughesbiot <50999916+jhughesbiot@users.noreply.github.com> Co-authored-by: jhughesbiot Co-authored-by: Noureddine Co-authored-by: SuperJonotron * Dhcp tests (#81) * Separate dhcp control methods into their own module Implement ip change test Add place holder for dhcp failover test * Stabilize network before leaving ip_change test Add dhcp_failover test * fix regression issue with individual test enable/disable setting * fix gitignore * Merge tls tests into dev (#80) * initial add of security module and tls tests * Fix server test and implement 1.3 version * pylinting * More work on client tests * Add client tls tests Add unit tests Add common python code to base test module * re-enable dhcp unit tests disabled during dev * rename module to tls * fix renaming * Fix unit tests broken by module rename Add TLS 1.3 tests to config * Add TLS 1.3 tests to config fix unit tests * Add certificate signature checks * Add local cert mounting for signature validation Fix test results * Update tls 1.2 server to pass with tls 1.3 compliance Add unit tests around tls 1.2 server Misc updates and cleanup * pylinting * Update cipher checks and add test * Fix test results when None is returned with details * Fix duplicate results --------- Co-authored-by: jhughesbiot * Test output restructure (#79) * Change runtime test structure to allow for multiple old tests * fix current test move * logging changes * Add device test count to device config * Change max report naming Add optional default value to system.json * Copy current test instead of moving to keep a consistent location of the most recent test * fix merge issue * pylint * Use local device folder and use session for config --------- Co-authored-by: Jacob Boddey * Keep test results in memory (#82) * Keep results in memory * More useful debug message * Fix file path * Result descriptions (#92) * Add short descriptions to conn module Add result_description to results with shorter wording for UI usage * Add result details to ipv6 tests * Update test descriptions in baseline module * Update dns test result details * Update skip results to include details when present * dns module formatting * add result details to nmap tests * add result details to ntp tests * Add short descriptions to tls module
and formatting * misc test module formatting * fix typo * Misc cleanup (#93) * Fix network request from module config Misc formatting issues in test orchestrator * fix misc network orchestrator formatting issues * fix misc ovs control formatting issues * fix misc ip control formatting issues * Allow CORS (#91) * Allow CORS * Fix add device * Configurable API port * Add /history and device config endpoints (#88) * Add /history and device config endpoints * Add total tests * Add report to device * Only run tests if baseline passes * Re-enable actions, fix conn module (#89) * Re-enable actions, fix conn module * Fix net_orc init * Update report file name in testing * Add required result to module configs (#95) --------- Co-authored-by: jhughesbiot <50999916+jhughesbiot@users.noreply.github.com> Co-authored-by: jhughesbiot Co-authored-by: Noureddine Co-authored-by: SuperJonotron --- .github/workflows/testing.yml | 8 +- .gitignore | 3 +- cmd/start => bin/testrun | 17 +- framework/python/src/api/api.py | 222 ++++++++++ framework/python/src/common/device.py | 62 +++ framework/python/src/common/session.py | 231 ++++++++++ framework/python/src/common/testreport.py | 84 ++++ framework/python/src/core/test_runner.py | 18 +- framework/python/src/core/testrun.py | 316 +++++++++++--- framework/python/src/net_orc/ip_control.py | 20 +- framework/python/src/net_orc/listener.py | 8 +- .../src/net_orc/network_orchestrator.py | 245 ++++++----- .../python/src/net_orc/network_validator.py | 7 +- framework/python/src/net_orc/ovs_control.py | 51 +-- framework/python/src/test_orc/module.py | 12 +- .../{core/device.py => test_orc/test_case.py} | 53 ++- .../python/src/test_orc/test_orchestrator.py | 266 +++++++++--- framework/requirements.txt | 8 +- local/.gitignore | 3 +- local/system.json.example | 3 +- modules/test/base/base.Dockerfile | 6 +- .../test/base/bin/start | 41 +- modules/test/base/bin/start_module | 2 +- modules/test/base/python/requirements.txt | 3 +- modules/test/base/python/src/test_module.py | 29 +- modules/test/baseline/conf/module_config.json | 9 +- .../baseline/python/src/baseline_module.py | 19 +- modules/test/conn/bin/start_test_module | 2 +- modules/test/conn/conf/module_config.json | 83 +++- modules/test/conn/python/requirements.txt | 1 + .../test/conn/python/src/connection_module.py | 272 ++++++++---- modules/test/conn/python/src/dhcp_util.py | 214 ++++++++++ modules/test/dns/conf/module_config.json | 12 +- modules/test/dns/python/src/dns_module.py | 88 ++-- modules/test/nmap/conf/module_config.json | 51 +-- modules/test/nmap/python/src/nmap_module.py | 9 +- modules/test/nmap/python/src/run.py | 4 +- modules/test/ntp/conf/module_config.json | 6 +- modules/test/ntp/python/src/ntp_module.py | 45 +- modules/test/tls/bin/check_cert_signature.sh | 11 + modules/test/tls/bin/get_ciphers.sh | 10 + .../test/tls/bin/get_client_hello_packets.sh | 19 + .../test/tls/bin/get_handshake_complete.sh | 19 + modules/test/tls/bin/start_test_module | 56 +++ modules/test/tls/conf/module_config.json | 41 ++ modules/test/tls/python/requirements.txt | 2 + modules/test/tls/python/src/run.py | 68 +++ modules/test/tls/python/src/tls_module.py | 108 +++++ .../test/tls/python/src/tls_module_test.py | 285 +++++++++++++ modules/test/tls/python/src/tls_util.py | 393 ++++++++++++++++++ modules/test/tls/tls.Dockerfile | 48 +++ modules/ui/conf/nginx.conf | 13 + modules/ui/ui.Dockerfile | 19 + resources/devices/template/device_config.json | 168 +------- testing/{ => baseline}/test_baseline | 4 +- testing/{ => 
baseline}/test_baseline.py | 0 .../device_configs/tester1/device_config.json | 22 + .../device_configs/tester2/device_config.json | 22 + testing/docker/ci_test_device1/Dockerfile | 6 +- testing/docker/ci_test_device1/entrypoint.sh | 20 + testing/{ => pylint}/test_pylint | 0 testing/tests/example/mac | 0 testing/tests/example/mac1/results.json | 252 +++++++++++ testing/{ => tests}/test_tests | 11 +- testing/{ => tests}/test_tests.json | 6 +- testing/{ => tests}/test_tests.py | 23 +- testing/{unit_test => unit}/run_tests.sh | 4 + ui/index.html | 1 + 68 files changed, 3404 insertions(+), 760 deletions(-) rename cmd/start => bin/testrun (72%) create mode 100644 framework/python/src/api/api.py create mode 100644 framework/python/src/common/device.py create mode 100644 framework/python/src/common/session.py create mode 100644 framework/python/src/common/testreport.py rename framework/python/src/{core/device.py => test_orc/test_case.py} (68%) rename framework/python/src/net_orc/network_device.py => modules/test/base/bin/start (71%) mode change 100644 => 100755 create mode 100644 modules/test/conn/python/src/dhcp_util.py create mode 100644 modules/test/tls/bin/check_cert_signature.sh create mode 100644 modules/test/tls/bin/get_ciphers.sh create mode 100644 modules/test/tls/bin/get_client_hello_packets.sh create mode 100644 modules/test/tls/bin/get_handshake_complete.sh create mode 100644 modules/test/tls/bin/start_test_module create mode 100644 modules/test/tls/conf/module_config.json create mode 100644 modules/test/tls/python/requirements.txt create mode 100644 modules/test/tls/python/src/run.py create mode 100644 modules/test/tls/python/src/tls_module.py create mode 100644 modules/test/tls/python/src/tls_module_test.py create mode 100644 modules/test/tls/python/src/tls_util.py create mode 100644 modules/test/tls/tls.Dockerfile create mode 100644 modules/ui/conf/nginx.conf create mode 100644 modules/ui/ui.Dockerfile rename testing/{ => baseline}/test_baseline (95%) rename testing/{ => baseline}/test_baseline.py (100%) create mode 100644 testing/device_configs/tester1/device_config.json create mode 100644 testing/device_configs/tester2/device_config.json rename testing/{ => pylint}/test_pylint (100%) create mode 100644 testing/tests/example/mac create mode 100644 testing/tests/example/mac1/results.json rename testing/{ => tests}/test_tests (90%) rename testing/{ => tests}/test_tests.json (67%) rename testing/{ => tests}/test_tests.py (82%) rename testing/{unit_test => unit}/run_tests.sh (84%) create mode 100644 ui/index.html diff --git a/.github/workflows/testing.yml b/.github/workflows/testing.yml index c981dbd56..87c8a814a 100644 --- a/.github/workflows/testing.yml +++ b/.github/workflows/testing.yml @@ -16,18 +16,20 @@ jobs: uses: actions/checkout@v2.3.4 - name: Run tests shell: bash {0} - run: testing/test_baseline + run: testing/baseline/test_baseline testrun_tests: name: Tests runs-on: ubuntu-20.04 + needs: testrun_baseline timeout-minutes: 40 steps: - name: Checkout source uses: actions/checkout@v2.3.4 - name: Run tests shell: bash {0} - run: testing/test_tests + run: testing/tests/test_tests + pylint: name: Pylint runs-on: ubuntu-22.04 @@ -37,4 +39,4 @@ jobs: uses: actions/checkout@v2.3.4 - name: Run tests shell: bash {0} - run: testing/test_pylint + run: testing/pylint/test_pylint diff --git a/.gitignore b/.gitignore index e168ec07a..7ef392c5e 100644 --- a/.gitignore +++ b/.gitignore @@ -4,4 +4,5 @@ venv/ error pylint.out __pycache__/ -build/ \ No newline at end of file +build/ 
+testing/unit_test/temp/ diff --git a/cmd/start b/bin/testrun similarity index 72% rename from cmd/start rename to bin/testrun index 64ac197eb..9281c1ac6 100755 --- a/cmd/start +++ b/bin/testrun @@ -15,23 +15,26 @@ # limitations under the License. if [[ "$EUID" -ne 0 ]]; then - echo "Must run as root. Use sudo cmd/start" + echo "Must run as root. Use sudo testrun" exit 1 fi +# TODO: Obtain TESTRUNPATH from user environment variables +# TESTRUNPATH="/home/boddey/Desktop/test-run" +# cd $TESTRUNPATH + # Ensure that /var/run/netns folder exists -mkdir -p /var/run/netns +sudo mkdir -p /var/run/netns -# Clear up existing runtime files -rm -rf runtime +# Create device folder if it doesn't exist +mkdir -p local/devices -# Check if python modules exist. Install if not -[ ! -d "venv" ] && cmd/install +# Check if Python modules exist. Install if not +[ ! -d "venv" ] && sudo cmd/install # Activate Python virtual environment source venv/bin/activate -# TODO: Execute python code # Set the PYTHONPATH to include the "src" directory export PYTHONPATH="$PWD/framework/python/src" python -u framework/python/src/core/test_runner.py $@ diff --git a/framework/python/src/api/api.py b/framework/python/src/api/api.py new file mode 100644 index 000000000..6b89da795 --- /dev/null +++ b/framework/python/src/api/api.py @@ -0,0 +1,222 @@ +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
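# --- Illustrative sketch (not part of this patch): exercising the REST API that the
# new api.py below defines. It assumes the default api_port of 8000 from
# local/system.json, a device already registered under local/devices, and uses the
# third-party `requests` library purely for illustration; the MAC address and
# firmware values are placeholders.
import requests

BASE = 'http://localhost:8000'

# Start a test run against a known device (mac_addr and firmware are required
# by the /system/start handler)
resp = requests.post(f'{BASE}/system/start',
                     json={'device': {'mac_addr': '00:11:22:33:44:55',
                                      'firmware': '1.0'}})
print(resp.status_code, resp.json())

# Poll overall status while tests run
print(requests.get(f'{BASE}/system/status').json())
# --- End of illustrative sketch ---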
+ +from fastapi import FastAPI, APIRouter, Response, Request, status +from fastapi.middleware.cors import CORSMiddleware +import json +from json import JSONDecodeError +import psutil +import threading +import uvicorn + +from common import logger +from common.device import Device + +LOGGER = logger.get_logger("api") + +DEVICE_MAC_ADDR_KEY = "mac_addr" +DEVICE_MANUFACTURER_KEY = "manufacturer" +DEVICE_MODEL_KEY = "model" + +class Api: + """Provide REST endpoints to manage Test Run""" + + def __init__(self, test_run): + + self._test_run = test_run + self._name = "TestRun API" + self._router = APIRouter() + + self._session = self._test_run.get_session() + + self._router.add_api_route("/system/interfaces", self.get_sys_interfaces) + self._router.add_api_route("/system/config", self.post_sys_config, + methods=["POST"]) + self._router.add_api_route("/system/config", self.get_sys_config) + self._router.add_api_route("/system/start", self.start_test_run, + methods=["POST"]) + self._router.add_api_route("/system/stop", self.stop_test_run, + methods=["POST"]) + self._router.add_api_route("/system/status", self.get_status) + self._router.add_api_route("/history", self.get_history) + self._router.add_api_route("/devices", self.get_devices) + self._router.add_api_route("/device", self.save_device, methods=["POST"]) + + # TODO: Make this configurable in system.json + origins = ["http://localhost:4200"] + + self._app = FastAPI() + self._app.include_router(self._router) + self._app.add_middleware( + CORSMiddleware, + allow_origins=origins, + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], + ) + + self._api_thread = threading.Thread(target=self._start, + name="Test Run API", + daemon=True) + + def start(self): + LOGGER.info("Starting API") + self._api_thread.start() + LOGGER.info("API waiting for requests") + + def _start(self): + uvicorn.run(self._app, log_config=None, port=self._session.get_api_port()) + + def stop(self): + LOGGER.info("Stopping API") + + async def get_sys_interfaces(self): + addrs = psutil.net_if_addrs() + ifaces = [] + for iface in addrs: + ifaces.append(iface) + return ifaces + + async def post_sys_config(self, request: Request, response: Response): + try: + config = (await request.body()).decode("UTF-8") + config_json = json.loads(config) + self._session.set_config(config_json) + # Catch JSON Decode error etc + except JSONDecodeError: + response.status_code = status.HTTP_400_BAD_REQUEST + return self._generate_msg(False, "Invalid JSON received") + return self._session.get_config() + + async def get_sys_config(self): + return self._session.get_config() + + async def get_devices(self): + return self._session.get_device_repository() + + async def start_test_run(self, request: Request, response: Response): + + LOGGER.debug("Received start command") + + # Check request is valid + body = (await request.body()).decode("UTF-8") + body_json = None + + try: + body_json = json.loads(body) + except JSONDecodeError: + response.status_code = status.HTTP_400_BAD_REQUEST + return self._generate_msg(False, "Invalid JSON received") + + if "device" not in body_json or not ( + "mac_addr" in body_json["device"] and + "firmware" in body_json["device"]): + response.status_code = status.HTTP_400_BAD_REQUEST + return self._generate_msg(False, "Invalid request received") + + device = self._session.get_device(body_json["device"]["mac_addr"]) + + # Check Test Run is not already running + if self._test_run.get_session().get_status() != "Idle": + LOGGER.debug("Test Run is already running. 
Cannot start another instance") + response.status_code = status.HTTP_409_CONFLICT + return self._generate_msg(False, "Test Run is already running") + + # Check if requested device is known in the device repository + if device is None: + response.status_code = status.HTTP_404_NOT_FOUND + return self._generate_msg(False, + "A device with that MAC address could not be found") + + device.firmware = body_json["device"]["firmware"] + + # Check Test Run is able to start + if self._test_run.get_net_orc().check_config() is False: + response.status_code = status.HTTP_500_INTERNAL_SERVER_ERROR + return self._generate_msg(False,"Configured interfaces are not ready for use. Ensure required interfaces are connected.") + + self._test_run.get_session().reset() + self._test_run.get_session().set_target_device(device) + LOGGER.info(f"Starting Test Run with device target {device.manufacturer} {device.model} with MAC address {device.mac_addr}") + + thread = threading.Thread(target=self._start_test_run, + name="Test Run") + thread.start() + return self._test_run.get_session().to_json() + + def _generate_msg(self, success, message): + msg_type = "success" + if not success: + msg_type = "error" + return json.loads('{"' + msg_type + '": "' + message + '"}') + + def _start_test_run(self): + self._test_run.start() + + async def stop_test_run(self): + LOGGER.debug("Received stop command. Stopping Test Run") + self._test_run.stop() + return self._generate_msg(True, "Test Run stopped") + + async def get_status(self): + return self._test_run.get_session().to_json() + + async def get_history(self): + LOGGER.debug("Received history list request") + return self._session.get_all_reports() + + async def save_device(self, request: Request, response: Response): + LOGGER.debug("Received device post request") + + try: + device_raw = (await request.body()).decode("UTF-8") + device_json = json.loads(device_raw) + + if not self._validate_device_json(device_json): + response.status_code = status.HTTP_400_BAD_REQUEST + return self._generate_msg(False, "Invalid request received") + + device = self._session.get_device(device_json.get(DEVICE_MAC_ADDR_KEY)) + + if device is None: + + # Create new device + device = Device() + device.mac_addr = device_json.get(DEVICE_MAC_ADDR_KEY) + device.manufacturer = device_json.get(DEVICE_MANUFACTURER_KEY) + device.model = device_json.get(DEVICE_MODEL_KEY) + device.device_folder = device.manufacturer + " " + device.model + + self._test_run.create_device(device) + response.status_code = status.HTTP_201_CREATED + + else: + + self._test_run.save_device(device, device_json) + response.status_code = status.HTTP_200_OK + + return device.to_config_json() + + # Catch JSON Decode error etc + except JSONDecodeError: + response.status_code = status.HTTP_400_BAD_REQUEST + return self._generate_msg(False, "Invalid JSON received") + + def _validate_device_json(self, json_obj): + if not (DEVICE_MAC_ADDR_KEY in json_obj and + DEVICE_MANUFACTURER_KEY in json_obj and + DEVICE_MODEL_KEY in json_obj + ): + return False + return True diff --git a/framework/python/src/common/device.py b/framework/python/src/common/device.py new file mode 100644 index 000000000..5d41fbef1 --- /dev/null +++ b/framework/python/src/common/device.py @@ -0,0 +1,62 @@ +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Track device object information.""" + +from typing import Dict +from dataclasses import dataclass, field + +@dataclass +class Device(): + """Represents a physical device and it's configuration.""" + + folder_url: str = None + mac_addr: str = None + manufacturer: str = None + model: str = None + test_modules: Dict = field(default_factory=dict) + ip_addr: str = None + firmware: str = None + device_folder: str = None + reports = [] + max_device_reports: int = None + + def add_report(self, report): + self.reports.append(report) + + def get_reports(self): + return self.reports + + # TODO: Add ability to remove reports once test reports have been cleaned up + + def to_dict(self): + """Returns the device as a python dictionary. This is used for the + system status API endpoint and in the report.""" + device_json = {} + device_json['mac_addr'] = self.mac_addr + device_json['manufacturer'] = self.manufacturer + device_json['model'] = self.model + if self.firmware is not None: + device_json['firmware'] = self.firmware + return device_json + + def to_config_json(self): + """Returns the device as a python dictionary. Fields relevant to the device + config json file are exported.""" + device_json = {} + device_json['mac_addr'] = self.mac_addr + device_json['manufacturer'] = self.manufacturer + device_json['model'] = self.model + device_json['test_modules'] = self.test_modules + return device_json diff --git a/framework/python/src/common/session.py b/framework/python/src/common/session.py new file mode 100644 index 000000000..f8c8d04b5 --- /dev/null +++ b/framework/python/src/common/session.py @@ -0,0 +1,231 @@ +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
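# --- Illustrative sketch (not part of this patch): example contents of
# local/system.json as consumed by the TestRunSession class added below. The keys
# mirror _get_default_config(); any key omitted from the file keeps its default.
# The interface names and values here are placeholders, not recommended settings.
example_system_config = {
    'network': {'device_intf': 'enp0s3', 'internet_intf': 'enp0s8'},
    'log_level': 'DEBUG',
    'startup_timeout': 60,
    'monitor_period': 30,
    'runtime': 120,
    'api_port': 8000,
    'max_device_reports': 5
}
# --- End of illustrative sketch ---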
+ +"""Track testing status.""" + +import datetime +import json +import os + +NETWORK_KEY = 'network' +DEVICE_INTF_KEY = 'device_intf' +INTERNET_INTF_KEY = 'internet_intf' +RUNTIME_KEY = 'runtime' +MONITOR_PERIOD_KEY = 'monitor_period' +STARTUP_TIMEOUT_KEY = 'startup_timeout' +LOG_LEVEL_KEY = 'log_level' +API_PORT_KEY = 'api_port' +MAX_DEVICE_REPORTS_KEY = 'max_device_reports' + +class TestRunSession(): + """Represents the current session of Test Run.""" + + def __init__(self, config_file): + self._status = 'Idle' + self._device = None + self._started = None + self._finished = None + self._results = [] + self._runtime_params = [] + self._device_repository = [] + self._total_tests = 0 + self._config_file = config_file + self._config = self._get_default_config() + self._load_config() + + def start(self): + self._status = 'Waiting for device' + self._started = datetime.datetime.now() + + def get_started(self): + return self._started + + def get_finished(self): + return self._finished + + def stop(self): + self._finished = datetime.datetime.now() + + def _get_default_config(self): + return { + 'network': { + 'device_intf': '', + 'internet_intf': '' + }, + 'log_level': 'INFO', + 'startup_timeout': 60, + 'monitor_period': 30, + 'runtime': 120, + 'max_device_reports': 5, + 'api_port': 8000 + } + + def get_config(self): + return self._config + + def _load_config(self): + + if not os.path.isfile(self._config_file): + return + + with open(self._config_file, 'r', encoding='utf-8') as f: + config_file_json = json.load(f) + + # Network interfaces + if (NETWORK_KEY in config_file_json + and DEVICE_INTF_KEY in config_file_json.get(NETWORK_KEY) + and INTERNET_INTF_KEY in config_file_json.get(NETWORK_KEY)): + self._config[NETWORK_KEY][DEVICE_INTF_KEY] = config_file_json.get(NETWORK_KEY, {}).get(DEVICE_INTF_KEY) + self._config[NETWORK_KEY][INTERNET_INTF_KEY] = config_file_json.get(NETWORK_KEY, {}).get(INTERNET_INTF_KEY) + + if RUNTIME_KEY in config_file_json: + self._config[RUNTIME_KEY] = config_file_json.get(RUNTIME_KEY) + + if STARTUP_TIMEOUT_KEY in config_file_json: + self._config[STARTUP_TIMEOUT_KEY] = config_file_json.get(STARTUP_TIMEOUT_KEY) + + if MONITOR_PERIOD_KEY in config_file_json: + self._config[MONITOR_PERIOD_KEY] = config_file_json.get(MONITOR_PERIOD_KEY) + + if LOG_LEVEL_KEY in config_file_json: + self._config[LOG_LEVEL_KEY] = config_file_json.get(LOG_LEVEL_KEY) + + if API_PORT_KEY in config_file_json: + self._config[API_PORT_KEY] = config_file_json.get(API_PORT_KEY) + + if MAX_DEVICE_REPORTS_KEY in config_file_json: + self._config[MAX_DEVICE_REPORTS_KEY] = config_file_json.get(MAX_DEVICE_REPORTS_KEY) + + def _save_config(self): + with open(self._config_file, 'w', encoding='utf-8') as f: + f.write(json.dumps(self._config, indent=2)) + + def get_runtime(self): + return self._config.get(RUNTIME_KEY) + + def get_log_level(self): + return self._config.get(LOG_LEVEL_KEY) + + def get_runtime_params(self): + return self._runtime_params + + def add_runtime_param(self, param): + self._runtime_params.append(param) + + def get_device_interface(self): + return self._config.get(NETWORK_KEY, {}).get(DEVICE_INTF_KEY) + + def get_internet_interface(self): + return self._config.get(NETWORK_KEY, {}).get(INTERNET_INTF_KEY) + + def get_monitor_period(self): + return self._config.get(MONITOR_PERIOD_KEY) + + def get_startup_timeout(self): + return self._config.get(STARTUP_TIMEOUT_KEY) + + def get_api_port(self): + return self._config.get(API_PORT_KEY) + + def get_max_device_reports(self): + return 
self._config.get(MAX_DEVICE_REPORTS_KEY) + + def set_config(self, config_json): + self._config = config_json + self._save_config() + + def set_target_device(self, device): + self._device = device + + def get_target_device(self): + return self._device + + def get_device_repository(self): + return self._device_repository + + def add_device(self, device): + self._device_repository.append(device) + + def clear_device_repository(self): + self._device_repository = [] + + def get_device(self, mac_addr): + for device in self._device_repository: + if device.mac_addr == mac_addr: + return device + return None + + def get_status(self): + return self._status + + def set_status(self, status): + self._status = status + + def get_test_results(self): + return self._results + + def get_report_tests(self): + return { + 'total': self.get_total_tests(), + 'results': self.get_test_results() + } + + def add_test_result(self, test_result): + self._results.append(test_result) + + def get_all_reports(self): + + reports = [] + + for device in self.get_device_repository(): + device_reports = device.get_reports() + for device_report in device_reports: + reports.append(device_report.to_json()) + + return reports + + def add_total_tests(self, no_tests): + self._total_tests += no_tests + + def get_total_tests(self): + return self._total_tests + + def reset(self): + self.set_status('Idle') + self.set_target_device(None) + self._tests = { + 'total': 0, + 'results': [] + } + self._started = None + self._finished = None + + def to_json(self): + + # TODO: Add report URL + + results = { + 'total': self.get_total_tests(), + 'results': self.get_test_results() + } + + session_json = { + 'status': self.get_status(), + 'device': self.get_target_device(), + 'started': self.get_started(), + 'finished': self.get_finished(), + 'tests': results + } + + return session_json diff --git a/framework/python/src/common/testreport.py b/framework/python/src/common/testreport.py new file mode 100644 index 000000000..ba35ff27a --- /dev/null +++ b/framework/python/src/common/testreport.py @@ -0,0 +1,84 @@ +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
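# --- Illustrative sketch (not part of this patch): the report.json shape that
# TestReport.from_json() below expects (and to_json() produces), as loaded per
# device by _load_test_reports() in testrun.py. Timestamps follow
# DATE_TIME_FORMAT; the device values and the fields inside each result entry
# are placeholders/assumptions, not a documented schema.
example_report = {
    'device': {'mac_addr': '00:11:22:33:44:55',
               'manufacturer': 'Example',
               'model': 'Device-1'},
    'status': 'Compliant',
    'started': '2023-08-21 09:00:00',
    'finished': '2023-08-21 09:10:00',
    'tests': {'total': 2,
              'results': [{'name': 'example.test', 'result': 'Compliant'},
                          {'name': 'example.test2', 'result': 'Non-Compliant'}]}
}
# report = TestReport().from_json(example_report)
# --- End of illustrative sketch ---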
+ +"""Store previous test run information.""" + +from datetime import datetime + +DATE_TIME_FORMAT = '%Y-%m-%d %H:%M:%S' + +class TestReport(): + """Represents a previous Test Run report.""" + + def __init__(self, + status='Non-Compliant', + started=None, + finished=None, + total_tests=0 + ): + self._device = {} + self._status: str = status + self._started = started + self._finished = finished + self._total_tests = total_tests + self._results = [] + + def get_status(self): + return self._status + + def get_started(self): + return self._started + + def get_finished(self): + return self._finished + + def get_duration_seconds(self): + diff = self._finished - self._started + return diff.total_seconds() + + def get_duration(self): + return str(datetime.timedelta(seconds=self.get_duration_seconds())) + + def add_test(self, test): + self._results.append(test) + + def to_json(self): + report_json = {} + report_json['device'] = self._device + report_json['status'] = self._status + report_json['started'] = self._started.strftime(DATE_TIME_FORMAT) + report_json['finished'] = self._finished.strftime(DATE_TIME_FORMAT) + report_json['tests'] = {'total': self._total_tests, + 'results': self._results} + return report_json + + def from_json(self, json_file): + + self._device['mac_addr'] = json_file['device']['mac_addr'] + self._device['manufacturer'] = json_file['device']['manufacturer'] + self._device['model'] = json_file['device']['model'] + + if 'firmware' in self._device: + self._device['firmware'] = json_file['device']['firmware'] + + self._status = json_file['status'] + self._started = datetime.strptime(json_file['started'], DATE_TIME_FORMAT) + self._finished = datetime.strptime(json_file['finished'], DATE_TIME_FORMAT) + self._total_tests = json_file['tests']['total'] + + # Loop through test results + for test_result in json_file['tests']['results']: + self.add_test(test_result) + + return self diff --git a/framework/python/src/core/test_runner.py b/framework/python/src/core/test_runner.py index 226f874cc..9962c3995 100644 --- a/framework/python/src/core/test_runner.py +++ b/framework/python/src/core/test_runner.py @@ -36,12 +36,14 @@ def __init__(self, config_file=None, validate=True, net_only=False, - single_intf=False): + single_intf=False, + no_ui=False): self._register_exits() self.test_run = TestRun(config_file=config_file, validate=validate, net_only=net_only, - single_intf=single_intf) + single_intf=single_intf, + no_ui=no_ui) def _register_exits(self): signal.signal(signal.SIGINT, self._exit_handler) @@ -62,10 +64,6 @@ def _exit_handler(self, signum, arg): # pylint: disable=unused-argument def stop(self, kill=False): self.test_run.stop(kill) - def start(self): - self.test_run.start() - LOGGER.info("Test Run has finished") - def parse_args(): parser = argparse.ArgumentParser( @@ -88,6 +86,10 @@ def parse_args(): parser.add_argument("--single-intf", action="store_true", help="Single interface mode (experimental)") + parser.add_argument("--no-ui", + default=False, + action="store_true", + help="Do not launch the user interface") parsed_args = parser.parse_known_args()[0] return parsed_args @@ -97,5 +99,5 @@ def parse_args(): runner = TestRunner(config_file=args.config_file, validate=not args.no_validate, net_only=args.net_only, - single_intf=args.single_intf) - runner.start() + single_intf=args.single_intf, + no_ui=args.no_ui) diff --git a/framework/python/src/core/testrun.py b/framework/python/src/core/testrun.py index a91736e95..9034f5796 100644 --- a/framework/python/src/core/testrun.py +++ 
b/framework/python/src/core/testrun.py @@ -20,36 +20,41 @@ Run using the provided command scripts in the cmd folder. E.g sudo cmd/start """ +import json import os import sys -import json import signal import time from common import logger, util +from common.device import Device +from common.session import TestRunSession +from common.testreport import TestReport +from api.api import Api +from net_orc.listener import NetworkEvent +from net_orc import network_orchestrator as net_orc +from test_orc import test_orchestrator as test_orc # Locate parent directory current_dir = os.path.dirname(os.path.realpath(__file__)) # Locate the test-run root directory, 4 levels, src->python->framework->test-run -root_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(current_dir)))) - -from net_orc.listener import NetworkEvent -from test_orc import test_orchestrator as test_orc -from net_orc import network_orchestrator as net_orc -from device import Device +root_dir = os.path.dirname(os.path.dirname( + os.path.dirname(os.path.dirname(current_dir)))) LOGGER = logger.get_logger('test_run') -CONFIG_FILE = 'local/system.json' + +DEFAULT_CONFIG_FILE = 'local/system.json' EXAMPLE_CONFIG_FILE = 'local/system.json.example' -RUNTIME = 120 LOCAL_DEVICES_DIR = 'local/devices' RESOURCE_DEVICES_DIR = 'resources/devices' + DEVICE_CONFIG = 'device_config.json' DEVICE_MANUFACTURER = 'manufacturer' DEVICE_MODEL = 'model' DEVICE_MAC_ADDR = 'mac_addr' DEVICE_TEST_MODULES = 'test_modules' +MAX_DEVICE_REPORTS_KEY = 'max_device_reports' class TestRun: # pylint: disable=too-few-public-methods """Test Run controller. @@ -59,74 +64,250 @@ class TestRun: # pylint: disable=too-few-public-methods """ def __init__(self, - config_file=CONFIG_FILE, + config_file, validate=True, net_only=False, - single_intf=False): - self._devices = [] + single_intf=False, + no_ui=False): + + if config_file is None: + self._config_file = self._get_config_abs(DEFAULT_CONFIG_FILE) + else: + self._config_file = self._get_config_abs(config_file) + self._net_only = net_only self._single_intf = single_intf + self._no_ui = no_ui # Catch any exit signals self._register_exits() - # Expand the config file to absolute pathing - config_file_abs = self._get_config_abs(config_file=config_file) + # Create session + self._session = TestRunSession(config_file=self._config_file) + + # Register runtime parameters + if single_intf: + self._session.add_runtime_param('single_intf') + if net_only: + self._session.add_runtime_param('net_only') + if not validate: + self._session.add_runtime_param('no-validate') + + self.load_all_devices() self._net_orc = net_orc.NetworkOrchestrator( - config_file=config_file_abs, - validate=validate, - single_intf = self._single_intf) + session=self._session) + self._test_orc = test_orc.TestOrchestrator( + self._session, + self._net_orc) + + if self._no_ui: + + # Check Test Run is able to start + if self.get_net_orc().check_config() is False: + return + + # Any additional checks that need to be performed go here + + self.start() + + else: + + # Build UI image + self._api = Api(self) + self._api.start() + # Start UI container + + # Hold until API ends + while True: + time.sleep(1) + + def load_all_devices(self): + self._session.clear_device_repository() + self._load_devices(device_dir=LOCAL_DEVICES_DIR) + + # Temporarily removing loading of template device + # configs (feature not required yet) + # self._load_devices(device_dir=RESOURCE_DEVICES_DIR) + return self.get_session().get_device_repository() + + def _load_devices(self, 
device_dir): + LOGGER.debug('Loading devices from ' + device_dir) + + util.run_command(f'chown -R {util.get_host_user()} {device_dir}') + + for device_folder in os.listdir(device_dir): + + device_config_file_path = os.path.join(device_dir, + device_folder, + DEVICE_CONFIG) + + # Check if device config file exists before loading + if not os.path.exists(device_config_file_path): + LOGGER.error(f'Device configuration file missing from device {device_folder}') + continue + + # Open device config file + with open(device_config_file_path, + encoding='utf-8') as device_config_file: + device_config_json = json.load(device_config_file) + + device_manufacturer = device_config_json.get(DEVICE_MANUFACTURER) + device_model = device_config_json.get(DEVICE_MODEL) + mac_addr = device_config_json.get(DEVICE_MAC_ADDR) + test_modules = device_config_json.get(DEVICE_TEST_MODULES) + max_device_reports = None + if 'max_device_reports' in device_config_json: + max_device_reports = device_config_json.get(MAX_DEVICE_REPORTS_KEY) + + device = Device(folder_url=os.path.join(device_dir, device_folder), + manufacturer=device_manufacturer, + model=device_model, + mac_addr=mac_addr, + test_modules=test_modules, + max_device_reports=max_device_reports, + device_folder=device_folder) + + # Load reports for this device + self._load_test_reports(device) + + # Add device to device repository + self.get_session().add_device(device) + LOGGER.debug(f'Loaded device {device.manufacturer} ' + + f'{device.model} with MAC address {device.mac_addr}') - self._test_orc = test_orc.TestOrchestrator(self._net_orc) + def _load_test_reports(self, device: Device): + + LOGGER.debug(f'Loading test reports for device {device.model}') + + # Locate reports folder + reports_folder = os.path.join(root_dir, + LOCAL_DEVICES_DIR, + device.device_folder, 'reports') + + # Check if reports folder exists (device may have no reports) + if not os.path.exists(reports_folder): + return + + for report_folder in os.listdir(reports_folder): + report_json_file_path = os.path.join( + reports_folder, + report_folder, + 'report.json') + + # Check if the report.json file exists + if not os.path.isfile(report_json_file_path): + # Some error may have occured during this test run + continue + + with open(report_json_file_path, encoding='utf-8') as report_json_file: + report_json = json.load(report_json_file) + test_report = TestReport().from_json(report_json) + device.add_report(test_report) + + def create_device(self, device: Device): + + # Define the device folder location + device_folder_path = os.path.join(root_dir, + LOCAL_DEVICES_DIR, + device.device_folder) + + # Create the directory + os.makedirs(device_folder_path) + + config_file_path = os.path.join(device_folder_path, + DEVICE_CONFIG) + + with open(config_file_path, 'w', encoding='utf-8') as config_file: + config_file.writelines(json.dumps(device.to_config_json(), indent=4)) + + # Ensure new folder has correct permissions + util.run_command(f"chown -R {util.get_host_user()} '{device_folder_path}'") + + # Add new device to the device repository + self._session.add_device(device) + + return device.to_config_json() + + def save_device(self, device: Device, device_json): + """Edit and save an existing device config.""" + + # Update device properties + device.manufacturer = device_json['manufacturer'] + device.model = device_json['model'] + + if 'test_modules' in device_json: + device.test_modules = device_json['test_modules'] + else: + device.test_modules = {} + + # Obtain the config file path + config_file_path 
= os.path.join(root_dir, + LOCAL_DEVICES_DIR, + device.device_folder, + DEVICE_CONFIG) + + with open(config_file_path, 'w+', encoding='utf-8') as config_file: + config_file.writelines(json.dumps(device.to_config_json(), indent=4)) + + return device.to_config_json() def start(self): - self._load_all_devices() + self._session.start() self._start_network() if self._net_only: LOGGER.info('Network only option configured, no tests will be run') - self._net_orc.listener.register_callback( + self.get_net_orc().listener.register_callback( self._device_discovered, [NetworkEvent.DEVICE_DISCOVERED] ) - self._net_orc.start_listener() + self.get_net_orc().start_listener() LOGGER.info('Waiting for devices on the network...') while True: - time.sleep(RUNTIME) + time.sleep(self._session.get_runtime()) else: self._test_orc.start() - self._net_orc.listener.register_callback( + self.get_net_orc().get_listener().register_callback( self._device_stable, [NetworkEvent.DEVICE_STABLE] ) - self._net_orc.listener.register_callback( + self.get_net_orc().get_listener().register_callback( self._device_discovered, [NetworkEvent.DEVICE_DISCOVERED] ) - self._net_orc.start_listener() + self.get_net_orc().start_listener() + self._set_status('Waiting for device') LOGGER.info('Waiting for devices on the network...') - time.sleep(RUNTIME) + time.sleep(self._session.get_runtime()) - if not (self._test_orc.test_in_progress() or self._net_orc.monitor_in_progress()): - LOGGER.info('Timed out whilst waiting for device or stopping due to test completion') + if not (self._test_orc.test_in_progress() or + self.get_net_orc().monitor_in_progress()): + LOGGER.info('''Timed out whilst waiting for + device or stopping due to test completion''') else: - while self._test_orc.test_in_progress() or self._net_orc.monitor_in_progress(): + while (self._test_orc.test_in_progress() or + self.get_net_orc().monitor_in_progress()): time.sleep(5) - self.stop() + self.stop() def stop(self, kill=False): + + # Prevent discovering new devices whilst stopping + if self.get_net_orc().get_listener() is not None: + self.get_net_orc().get_listener().stop_listener() + self._stop_tests() self._stop_network(kill=kill) @@ -146,65 +327,62 @@ def _exit_handler(self, signum, arg): # pylint: disable=unused-argument def _get_config_abs(self, config_file=None): if config_file is None: # If not defined, use relative pathing to local file - config_file = os.path.join(root_dir, CONFIG_FILE) + config_file = os.path.join(root_dir, self._config_file) # Expand the config file to absolute pathing return os.path.abspath(config_file) + def get_config_file(self): + return self._get_config_abs() + + def get_net_orc(self): + return self._net_orc + def _start_network(self): # Start the network orchestrator - self._net_orc.start() + if not self.get_net_orc().start(): + self.stop(kill=True) + sys.exit(1) def _stop_network(self, kill=False): - self._net_orc.stop(kill=kill) + self.get_net_orc().stop(kill=kill) def _stop_tests(self): self._test_orc.stop() - def _load_all_devices(self): - self._load_devices(device_dir=LOCAL_DEVICES_DIR) - self._load_devices(device_dir=RESOURCE_DEVICES_DIR) - - def _load_devices(self, device_dir): - LOGGER.debug('Loading devices from ' + device_dir) - - os.makedirs(device_dir, exist_ok=True) - util.run_command(f'chown -R {util.get_host_user()} {device_dir}') - - for device_folder in os.listdir(device_dir): - with open(os.path.join(device_dir, device_folder, DEVICE_CONFIG), - encoding='utf-8') as device_config_file: - device_config_json = 
json.load(device_config_file) - - device_manufacturer = device_config_json.get(DEVICE_MANUFACTURER) - device_model = device_config_json.get(DEVICE_MODEL) - mac_addr = device_config_json.get(DEVICE_MAC_ADDR) - test_modules = device_config_json.get(DEVICE_TEST_MODULES) - - device = Device(manufacturer=device_manufacturer, - model=device_model, - mac_addr=mac_addr, - test_modules=json.dumps(test_modules)) - self._devices.append(device) - def get_device(self, mac_addr): """Returns a loaded device object from the device mac address.""" - for device in self._devices: + for device in self._session.get_device_repository(): if device.mac_addr == mac_addr: return device + return None def _device_discovered(self, mac_addr): - device = self.get_device(mac_addr) + + device = self.get_session().get_target_device() + if device is not None: - LOGGER.info( - f'Discovered {device.manufacturer} {device.model} on the network') + if mac_addr != device.mac_addr: + # Ignore discovered device because it is not the target device + return else: - device = Device(mac_addr=mac_addr) - self._devices.append(device) - LOGGER.info( - f'A new device has been discovered with mac address {mac_addr}') + device = self.get_device(mac_addr) + if device is None: + return + + self.get_session().set_target_device(device) + + LOGGER.info( + f'Discovered {device.manufacturer} {device.model} on the network. Waiting for device to obtain IP') def _device_stable(self, mac_addr): - device = self.get_device(mac_addr) LOGGER.info(f'Device with mac address {mac_addr} is ready for testing.') - self._test_orc.run_test_modules(device) + self._set_status('In progress') + self._test_orc.run_test_modules() + self._set_status('Complete') + + def _set_status(self, status): + self._session.set_status(status) + + def get_session(self): + return self._session diff --git a/framework/python/src/net_orc/ip_control.py b/framework/python/src/net_orc/ip_control.py index eb683c46b..5c9f86d18 100644 --- a/framework/python/src/net_orc/ip_control.py +++ b/framework/python/src/net_orc/ip_control.py @@ -34,7 +34,7 @@ def add_link(self, interface_name, peer_name): def add_namespace(self, namespace): """Add a network namespace""" exists = self.namespace_exists(namespace) - LOGGER.info("Namespace exists: " + str(exists)) + LOGGER.info('Namespace exists: ' + str(exists)) if exists: return True else: @@ -58,14 +58,11 @@ def link_exists(self, link_name): def namespace_exists(self, namespace): """Check if a namespace already exists""" namespaces = self.get_namespaces() - if namespace in namespaces: - return True - else: - return False + return namespace in namespaces def get_links(self): - stdout, stderr = util.run_command('ip link list') - links = stdout.strip().split('\n') + result = util.run_command('ip link list') + links = result[0].strip().split('\n') netns_links = [] for link in links: match = re.search(r'\d+:\s+(\S+)', link) @@ -78,9 +75,9 @@ def get_links(self): return netns_links def get_namespaces(self): - stdout, stderr = util.run_command('ip netns list') + result = util.run_command('ip netns list') #Strip ID's from the namespace results - namespaces = re.findall(r'(\S+)(?:\s+\(id: \d+\))?', stdout) + namespaces = re.findall(r'(\S+)(?:\s+\(id: \d+\))?', result[0]) return namespaces def set_namespace(self, interface_name, namespace): @@ -187,9 +184,8 @@ def configure_container_interface(self, # Rename container interface name if not self.rename_interface(container_intf, namespace, namespace_intf): - LOGGER.error( - f'Failed to rename container interface 
{container_intf} to {namespace_intf}' - ) + LOGGER.error((f'Failed to rename container interface {container_intf} ' + + 'to {namespace_intf}')) return False # Set MAC address of container interface diff --git a/framework/python/src/net_orc/listener.py b/framework/python/src/net_orc/listener.py index 4f8e1961f..83805f908 100644 --- a/framework/python/src/net_orc/listener.py +++ b/framework/python/src/net_orc/listener.py @@ -31,8 +31,9 @@ class Listener: """Methods to start and stop the network listener.""" - def __init__(self, device_intf): - self._device_intf = device_intf + def __init__(self, session): + self._session = session + self._device_intf = self._session.get_device_interface() self._device_intf_mac = get_if_hwaddr(self._device_intf) self._sniffer = AsyncSniffer(iface=self._device_intf, @@ -47,7 +48,8 @@ def start_listener(self): def stop_listener(self): """Stop sniffing packets on the device interface.""" - self._sniffer.stop() + if self._sniffer.running: + self._sniffer.stop() def is_running(self): """Determine whether the sniffer is running.""" diff --git a/framework/python/src/net_orc/network_orchestrator.py b/framework/python/src/net_orc/network_orchestrator.py index 499ce954b..4abdb9651 100644 --- a/framework/python/src/net_orc/network_orchestrator.py +++ b/framework/python/src/net_orc/network_orchestrator.py @@ -13,7 +13,6 @@ # limitations under the License. """Network orchestrator is responsible for managing all of the virtual network services""" -import getpass import ipaddress import json import os @@ -23,57 +22,38 @@ import sys import docker from docker.types import Mount -from common import logger -from common import util +from common import logger, util from net_orc.listener import Listener -from net_orc.network_device import NetworkDevice from net_orc.network_event import NetworkEvent from net_orc.network_validator import NetworkValidator from net_orc.ovs_control import OVSControl from net_orc.ip_control import IPControl LOGGER = logger.get_logger('net_orc') -CONFIG_FILE = 'local/system.json' -EXAMPLE_CONFIG_FILE = 'local/system.json.example' RUNTIME_DIR = 'runtime' TEST_DIR = 'test' -MONITOR_PCAP = 'monitor.pcap' NET_DIR = 'runtime/network' NETWORK_MODULES_DIR = 'modules/network' + +MONITOR_PCAP = 'monitor.pcap' NETWORK_MODULE_METADATA = 'conf/module_config.json' + DEVICE_BRIDGE = 'tr-d' INTERNET_BRIDGE = 'tr-c' PRIVATE_DOCKER_NET = 'tr-private-net' CONTAINER_NAME = 'network_orchestrator' -RUNTIME_KEY = 'runtime' -MONITOR_PERIOD_KEY = 'monitor_period' -STARTUP_TIMEOUT_KEY = 'startup_timeout' -DEFAULT_STARTUP_TIMEOUT = 60 -DEFAULT_RUNTIME = 1200 -DEFAULT_MONITOR_PERIOD = 300 class NetworkOrchestrator: """Manage and controls a virtual testing network.""" def __init__(self, - config_file=CONFIG_FILE, - validate=True, - single_intf=False): + session): - self._runtime = DEFAULT_RUNTIME - self._startup_timeout = DEFAULT_STARTUP_TIMEOUT - self._monitor_period = DEFAULT_MONITOR_PERIOD + self._session = session self._monitor_in_progress = False - - self._int_intf = None - self._dev_intf = None - self._single_intf = single_intf - - self.listener = None + self._listener = None self._net_modules = [] - self._devices = [] - self.validate = validate self._path = os.path.dirname( os.path.dirname( @@ -83,8 +63,7 @@ def __init__(self, self.validator = NetworkValidator() shutil.rmtree(os.path.join(os.getcwd(), NET_DIR), ignore_errors=True) self.network_config = NetworkConfig() - self.load_config(config_file) - self._ovs = OVSControl() + self._ovs = OVSControl(self._session) 
self._ip_ctrl = IPControl() def start(self): @@ -92,8 +71,6 @@ def start(self): LOGGER.debug('Starting network orchestrator') - self._host_user = util.get_host_user() - # Get all components ready self.load_network_modules() @@ -102,23 +79,58 @@ def start(self): self.start_network() + return True + + def check_config(self): + + device_interface_ready = util.interface_exists( + self._session.get_device_interface()) + internet_interface_ready = util.interface_exists( + self._session.get_internet_interface()) + + if 'single_intf' in self._session.get_runtime_params(): + # Check for device interface only + if not device_interface_ready: + LOGGER.error('Device interface is not ready for use. ' + + 'Ensure device interface is connected.') + return False + else: + if not device_interface_ready and not internet_interface_ready: + LOGGER.error( + 'Both device and internet interfaces are not ready for use. ' + + 'Ensure both interfaces are connected.') + return False + elif not device_interface_ready: + LOGGER.error('Device interface is not ready for use. ' + + 'Ensure device interface is connected.') + return False + elif not internet_interface_ready: + LOGGER.error('Internet interface is not ready for use. ' + + 'Ensure internet interface is connected.') + return False + return True + def start_network(self): """Start the virtual testing network.""" LOGGER.info('Starting network') self.build_network_modules() + self.create_net() self.start_network_services() - if self.validate: + if 'no-validate' not in self._session.get_runtime_params(): # Start the validator after network is ready self.validator.start() # Get network ready (via Network orchestrator) LOGGER.debug('Network is ready') + def get_listener(self): + return self._listener + def start_listener(self): - self.listener.start_listener() + self.get_listener().start_listener() def stop(self, kill=False): """Stop the network orchestrator.""" @@ -136,44 +148,39 @@ def stop_network(self, kill=False): self.stop_networking_services(kill=kill) self.restore_net() - def load_config(self, config_file=None): - if config_file is None: - # If not defined, use relative pathing to local file - self._config_file = os.path.join(self._path, CONFIG_FILE) - else: - # If defined, use as provided - self._config_file = config_file - - if not os.path.isfile(self._config_file): - LOGGER.error('Configuration file is not present at ' + config_file) - LOGGER.info('An example is present in ' + EXAMPLE_CONFIG_FILE) - sys.exit(1) + def _device_discovered(self, mac_addr): - LOGGER.info('Loading config file: ' + os.path.abspath(self._config_file)) - with open(self._config_file, encoding='UTF-8') as config_json_file: - config_json = json.load(config_json_file) - self.import_config(config_json) + device = self._session.get_device(mac_addr) - def _device_discovered(self, mac_addr): + if self._session.get_target_device() is not None: + if mac_addr != self._session.get_target_device().mac_addr: + # Ignore discovered device + return self._monitor_in_progress = True LOGGER.debug( f'Discovered device {mac_addr}. 
Waiting for device to obtain IP') - device = self._get_device(mac_addr=mac_addr) + if device is None: + LOGGER.debug(f'Device with MAC address {mac_addr} does not exist' + + ' in device repository') + # Ignore device if not registered + return device_runtime_dir = os.path.join(RUNTIME_DIR, TEST_DIR, - device.mac_addr.replace(':', '')) - os.makedirs(device_runtime_dir) - util.run_command(f'chown -R {self._host_user} {device_runtime_dir}') + mac_addr.replace(':', '')) - packet_capture = sniff(iface=self._dev_intf, - timeout=self._startup_timeout, + # Cleanup any old current test files + shutil.rmtree(device_runtime_dir, ignore_errors=True) + os.makedirs(device_runtime_dir, exist_ok=True) + + util.run_command(f'chown -R {util.get_host_user()} {device_runtime_dir}') + + packet_capture = sniff(iface=self._session.get_device_interface(), + timeout=self._session.get_startup_timeout(), stop_filter=self._device_has_ip) - wrpcap( - os.path.join(RUNTIME_DIR, TEST_DIR, device.mac_addr.replace(':', ''), - 'startup.pcap'), packet_capture) + wrpcap(os.path.join(device_runtime_dir, 'startup.pcap'), packet_capture) if device.ip_addr is None: LOGGER.info( @@ -189,49 +196,38 @@ def monitor_in_progress(self): return self._monitor_in_progress def _device_has_ip(self, packet): - device = self._get_device(mac_addr=packet.src) + device = self._session.get_device(mac_addr=packet.src) if device is None or device.ip_addr is None: return False return True def _dhcp_lease_ack(self, packet): mac_addr = packet[BOOTP].chaddr.hex(':')[0:17] - device = self._get_device(mac_addr=mac_addr) + device = self._session.get_device(mac_addr=mac_addr) + + # Ignore devices that are not registered + if device is None: + return + + # TODO: Check if device is None device.ip_addr = packet[BOOTP].yiaddr def _start_device_monitor(self, device): """Start a timer until the steady state has been reached and callback the steady state method for this device.""" LOGGER.info(f'Monitoring device with mac addr {device.mac_addr} ' - f'for {str(self._monitor_period)} seconds') - - packet_capture = sniff(iface=self._dev_intf, timeout=self._monitor_period) - wrpcap( - os.path.join(RUNTIME_DIR, TEST_DIR, device.mac_addr.replace(':', ''), - 'monitor.pcap'), packet_capture) + f'for {str(self._session.get_monitor_period())} seconds') - self._monitor_in_progress = False - self.listener.call_callback(NetworkEvent.DEVICE_STABLE, device.mac_addr) - - def _get_device(self, mac_addr): - for device in self._devices: - if device.mac_addr == mac_addr: - return device - - device = NetworkDevice(mac_addr=mac_addr) - self._devices.append(device) - return device + device_runtime_dir = os.path.join(RUNTIME_DIR, TEST_DIR, + device.mac_addr.replace(':', '')) - def import_config(self, json_config): - self._int_intf = json_config['network']['internet_intf'] - self._dev_intf = json_config['network']['device_intf'] + packet_capture = sniff(iface=self._session.get_device_interface(), + timeout=self._session.get_monitor_period()) + wrpcap(os.path.join(device_runtime_dir, 'monitor.pcap'), packet_capture) - if RUNTIME_KEY in json_config: - self._runtime = json_config[RUNTIME_KEY] - if STARTUP_TIMEOUT_KEY in json_config: - self._startup_timeout = json_config[STARTUP_TIMEOUT_KEY] - if MONITOR_PERIOD_KEY in json_config: - self._monitor_period = json_config[MONITOR_PERIOD_KEY] + self._monitor_in_progress = False + self.get_listener().call_callback(NetworkEvent.DEVICE_STABLE, + device.mac_addr) def _check_network_services(self): LOGGER.debug('Checking network modules...') @@ -278,30 
+274,38 @@ def _ci_pre_network_create(self): """ self._ethmac = subprocess.check_output( - f'cat /sys/class/net/{self._int_intf}/address', + f'cat /sys/class/net/{self._session.get_internet_interface()}/address', shell=True).decode('utf-8').strip() self._gateway = subprocess.check_output( 'ip route | head -n 1 | awk \'{print $3}\'', shell=True).decode('utf-8').strip() self._ipv4 = subprocess.check_output( - f'ip a show {self._int_intf} | grep \"inet \" | awk \'{{print $2}}\'', + (f'ip a show {self._session.get_internet_interface()} | ' + + 'grep \"inet \" | awk \'{{print $2}}\''), shell=True).decode('utf-8').strip() self._ipv6 = subprocess.check_output( - f'ip a show {self._int_intf} | grep inet6 | awk \'{{print $2}}\'', + (f'ip a show {self._session.get_internet_interface()} | grep inet6 | ' + + 'awk \'{{print $2}}\''), shell=True).decode('utf-8').strip() self._brd = subprocess.check_output( - f'ip a show {self._int_intf} | grep \"inet \" | awk \'{{print $4}}\'', + (f'ip a show {self._session.get_internet_interface()} | grep \"inet \" ' + + '| awk \'{{print $4}}\''), shell=True).decode('utf-8').strip() def _ci_post_network_create(self): """ Restore network connection in CI environment """ LOGGER.info('post cr') - util.run_command(f'ip address del {self._ipv4} dev {self._int_intf}') - util.run_command(f'ip -6 address del {self._ipv6} dev {self._int_intf}') + util.run_command(((f'ip address del {self._ipv4} ' + + 'dev {self._session.get_internet_interface()}'))) + util.run_command((f'ip -6 address del {self._ipv6} ' + + 'dev {self._session.get_internet_interface()}')) + util.run_command( + (f'ip link set dev {self._session.get_internet_interface()} ' + + 'address 00:B0:D0:63:C2:26')) + util.run_command( + f'ip addr flush dev {self._session.get_internet_interface()}') util.run_command( - f'ip link set dev {self._int_intf} address 00:B0:D0:63:C2:26') - util.run_command(f'ip addr flush dev {self._int_intf}') - util.run_command(f'ip addr add dev {self._int_intf} 0.0.0.0') + f'ip addr add dev {self._session.get_internet_interface()} 0.0.0.0') util.run_command( f'ip addr add dev {INTERNET_BRIDGE} {self._ipv4} broadcast {self._brd}') util.run_command(f'ip -6 addr add {self._ipv6} dev {INTERNET_BRIDGE} ') @@ -316,34 +320,25 @@ def _ci_post_network_create(self): def create_net(self): LOGGER.info('Creating baseline network') - if not util.interface_exists(self._int_intf) or not util.interface_exists( - self._dev_intf): - LOGGER.error('Configured interfaces are not ready for use. 
' + - 'Ensure both interfaces are connected.') - sys.exit(1) - - if self._single_intf: + if os.getenv('GITHUB_ACTIONS'): self._ci_pre_network_create() - # Remove IP from internet adapter - util.run_command('ifconfig ' + self._int_intf + ' 0.0.0.0') - # Setup the virtual network if not self._ovs.create_baseline_net(verify=True): LOGGER.error('Baseline network validation failed.') self.stop() sys.exit(1) - if self._single_intf: + if os.getenv("GITHUB_ACTIONS"): self._ci_post_network_create() self._create_private_net() - self.listener = Listener(self._dev_intf) - self.listener.register_callback(self._device_discovered, - [NetworkEvent.DEVICE_DISCOVERED]) - self.listener.register_callback(self._dhcp_lease_ack, - [NetworkEvent.DHCP_LEASE_ACK]) + self._listener = Listener(self._session) + self.get_listener().register_callback(self._device_discovered, + [NetworkEvent.DEVICE_DISCOVERED]) + self.get_listener().register_callback(self._dhcp_lease_ack, + [NetworkEvent.DHCP_LEASE_ACK]) def load_network_modules(self): """Load network modules from module_config.json.""" @@ -468,7 +463,7 @@ def _start_network_service(self, net_module): privileged=True, detach=True, mounts=net_module.mounts, - environment={'HOST_USER': self._host_user}) + environment={'HOST_USER': util.get_host_user()}) except docker.errors.ContainerError as error: LOGGER.error('Container run error') LOGGER.error(error) @@ -618,7 +613,7 @@ def _attach_service_to_network(self, net_module): # Add and configure the interface container if not self._ip_ctrl.configure_container_interface( - bridge_intf, container_intf, "veth0", container_net_ns, mac_addr, + bridge_intf, container_intf, 'veth0', container_net_ns, mac_addr, net_module.container_name, ipv4_addr, ipv6_addr): LOGGER.error('Failed to configure local networking for ' + net_module.name + '. Exiting.') @@ -644,7 +639,7 @@ def _attach_service_to_network(self, net_module): container_intf = 'tr-cti-' + net_module.dir_name if not self._ip_ctrl.configure_container_interface( - bridge_intf, container_intf, "eth1", container_net_ns, mac_addr): + bridge_intf, container_intf, 'eth1', container_net_ns, mac_addr): LOGGER.error('Failed to configure internet networking for ' + net_module.name + '. 
Exiting.') sys.exit(1) @@ -661,9 +656,9 @@ def restore_net(self): LOGGER.info('Clearing baseline network') - if hasattr(self, 'listener' - ) and self.listener is not None and self.listener.is_running(): - self.listener.stop_listener() + if hasattr(self, 'listener') and self.get_listener( + ) is not None and self.get_listener().is_running(): + self.get_listener().stop_listener() client = docker.from_env() @@ -681,10 +676,12 @@ def restore_net(self): # Clean up any existing network artifacts self._ip_ctrl.clean_all() + internet_intf = self._session.get_internet_interface() + # Restart internet interface - if util.interface_exists(self._int_intf): - util.run_command('ip link set ' + self._int_intf + ' down') - util.run_command('ip link set ' + self._int_intf + ' up') + if util.interface_exists(internet_intf): + util.run_command('ip link set ' + internet_intf + ' down') + util.run_command('ip link set ' + internet_intf + ' up') LOGGER.info('Network is restored') @@ -713,9 +710,6 @@ def __init__(self): self.net_config = NetworkModuleNetConfig() -# The networking configuration for a network module - - class NetworkModuleNetConfig: """Define all the properties of the network config for a network module""" @@ -739,9 +733,6 @@ def get_ipv6_addr_with_prefix(self): return format(self.ipv6_address) + '/' + str(self.ipv6_network.prefixlen) -# Represents the current configuration of the network for the device bridge - - class NetworkConfig: """Define all the properties of the network configuration""" diff --git a/framework/python/src/net_orc/network_validator.py b/framework/python/src/net_orc/network_validator.py index f82787af5..2a4112764 100644 --- a/framework/python/src/net_orc/network_validator.py +++ b/framework/python/src/net_orc/network_validator.py @@ -30,7 +30,7 @@ DEVICE_BRIDGE = 'tr-d' CONF_DIR = 'local' CONF_FILE = 'system.json' - +TR_CONTAINER_MAC_PREFIX = '9a:02:57:1e:8f:' class NetworkValidator: """Perform validation of network services.""" @@ -238,6 +238,10 @@ def _attach_device_to_network(self, device): util.run_command('ip link add ' + bridge_intf + ' type veth peer name ' + container_intf) + mac_addr = TR_CONTAINER_MAC_PREFIX + '10' + + util.run_command('ip link set dev ' + container_intf + ' address ' + mac_addr) + # Add bridge interface to device bridge util.run_command('ovs-vsctl add-port ' + DEVICE_BRIDGE + ' ' + bridge_intf) @@ -258,6 +262,7 @@ def _attach_device_to_network(self, device): util.run_command('ip netns exec ' + container_net_ns + ' ip link set dev ' + container_intf + ' name veth0') + # Set interfaces up util.run_command('ip link set dev ' + bridge_intf + ' up') util.run_command('ip netns exec ' + container_net_ns + diff --git a/framework/python/src/net_orc/ovs_control.py b/framework/python/src/net_orc/ovs_control.py index 83823e8fa..80f76e85f 100644 --- a/framework/python/src/net_orc/ovs_control.py +++ b/framework/python/src/net_orc/ovs_control.py @@ -11,14 +11,10 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
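The device-attachment plumbing above reduces to a short sequence of ip and ovs-vsctl commands: create a veth pair, pin a MAC carrying the Testrun container prefix on the container-side end, plug the bridge-side end into the tr-d OVS bridge, then hand the container end to the container's network namespace and rename it veth0. A minimal standalone sketch of that sequence, with illustrative interface and namespace names (the namespace is created here only for demonstration, and plain subprocess stands in for util.run_command):

    import subprocess

    TR_CONTAINER_MAC_PREFIX = '9a:02:57:1e:8f:'
    DEVICE_BRIDGE = 'tr-d'

    def attach_to_device_bridge(ns_name='tr-ns-demo', index='10'):
      """Wire a veth pair between the OVS device bridge and a namespace."""
      bridge_intf = 'tr-dbr-demo'     # bridge-side end (illustrative name)
      container_intf = 'tr-cbr-demo'  # container-side end (illustrative name)
      mac_addr = TR_CONTAINER_MAC_PREFIX + index
      cmds = [
          f'ip netns add {ns_name}',  # stand-in for the container's namespace
          f'ip link add {bridge_intf} type veth peer name {container_intf}',
          # A fixed Testrun MAC prefix lets test modules filter out framework
          # traffic when inspecting packet captures
          f'ip link set dev {container_intf} address {mac_addr}',
          f'ovs-vsctl add-port {DEVICE_BRIDGE} {bridge_intf}',
          f'ip link set {container_intf} netns {ns_name}',
          f'ip netns exec {ns_name} ip link set dev {container_intf} name veth0',
          f'ip link set dev {bridge_intf} up',
          f'ip netns exec {ns_name} ip link set dev veth0 up',
      ]
      for cmd in cmds:
        subprocess.run(cmd, shell=True, check=True)

The same 9a:02:57:1e:8f: prefix is what the connection module later uses to ignore DHCPREQUESTs generated by Testrun's own containers in connection.single_ip.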
- """OVS Control Module""" -import json -import os from common import logger from common import util -CONFIG_FILE = 'local/system.json' DEVICE_BRIDGE = 'tr-d' INTERNET_BRIDGE = 'tr-c' LOGGER = logger.get_logger('ovs_ctrl') @@ -27,10 +23,8 @@ class OVSControl: """OVS Control""" - def __init__(self): - self._int_intf = None - self._dev_intf = None - self._load_config() + def __init__(self, session): + self._session = session def add_bridge(self, bridge_name): LOGGER.debug('Adding OVS bridge: ' + bridge_name) @@ -79,13 +73,19 @@ def validate_baseline_network(self): # Verify the OVS setup of the virtual network LOGGER.debug('Validating baseline network') + dev_bridge = True + int_bridge = True + # Verify the device bridge - dev_bridge = self.verify_bridge(DEVICE_BRIDGE, [self._dev_intf]) + dev_bridge = self.verify_bridge(DEVICE_BRIDGE, + [self._session.get_device_interface()]) LOGGER.debug('Device bridge verified: ' + str(dev_bridge)) # Verify the internet bridge - int_bridge = self.verify_bridge(INTERNET_BRIDGE, [self._int_intf]) - LOGGER.debug('Internet bridge verified: ' + str(int_bridge)) + if 'single_intf' not in self._session.get_runtime_params(): + int_bridge = self.verify_bridge(INTERNET_BRIDGE, + [self._session.get_internet_interface()]) + LOGGER.debug('Internet bridge verified: ' + str(int_bridge)) return dev_bridge and int_bridge @@ -106,21 +106,20 @@ def verify_bridge(self, bridge_name, ports): def create_baseline_net(self, verify=True): LOGGER.debug('Creating baseline network') - # Remove IP from internet adapter - self.set_interface_ip(interface=self._int_intf, ip_addr='0.0.0.0') - # Create data plane self.add_bridge(DEVICE_BRIDGE) # Create control plane self.add_bridge(INTERNET_BRIDGE) - # Remove IP from internet adapter - self.set_interface_ip(self._int_intf, '0.0.0.0') - # Add external interfaces to data and control plane - self.add_port(self._dev_intf, DEVICE_BRIDGE) - self.add_port(self._int_intf, INTERNET_BRIDGE) + self.add_port(self._session.get_device_interface(), DEVICE_BRIDGE) + + # Remove IP from internet adapter + if not 'single_intf' in self._session.get_runtime_params(): + self.set_interface_ip(interface=self._session.get_internet_interface(), + ip_addr='0.0.0.0') + self.add_port(self._session.get_internet_interface(), INTERNET_BRIDGE) # Enable forwarding of eapol packets self.add_flow(bridge_name=DEVICE_BRIDGE, @@ -145,20 +144,6 @@ def delete_bridge(self, bridge_name): success = util.run_command('ovs-vsctl --if-exists del-br ' + bridge_name) return success - def _load_config(self): - path = os.path.dirname(os.path.dirname( - os.path.dirname( - os.path.dirname(os.path.dirname(os.path.realpath(__file__)))))) - config_file = os.path.join(path, CONFIG_FILE) - LOGGER.debug('Loading configuration: ' + config_file) - with open(config_file, 'r', encoding='utf-8') as conf_file: - config_json = json.load(conf_file) - self._int_intf = config_json['network']['internet_intf'] - self._dev_intf = config_json['network']['device_intf'] - LOGGER.debug('Configuration loaded') - LOGGER.debug('Internet interface: ' + self._int_intf) - LOGGER.debug('Device interface: ' + self._dev_intf) - def restore_net(self): LOGGER.debug('Restoring network...') # Delete data plane diff --git a/framework/python/src/test_orc/module.py b/framework/python/src/test_orc/module.py index 185940dd8..6f3c544a1 100644 --- a/framework/python/src/test_orc/module.py +++ b/framework/python/src/test_orc/module.py @@ -12,31 +12,33 @@ # See the License for the specific language governing permissions and # limitations 
under the License. -"""Represemts a test module.""" -from dataclasses import dataclass +"""Represents a test module.""" +from dataclasses import dataclass, field from docker.models.containers import Container - @dataclass class TestModule: # pylint: disable=too-few-public-methods,too-many-instance-attributes """Represents a test module.""" + # General test module information name: str = None display_name: str = None description: str = None + tests: list = field(default_factory=lambda: []) + # Docker settings build_file: str = None container: Container = None container_name: str = None image_name: str = None enable_container: bool = True network: bool = True - + total_tests: int = 0 timeout: int = 60 # Absolute path dir: str = None dir_name: str = None - #Set IP Index for all test modules + # Set IP Index for all test modules ip_index: str = 9 diff --git a/framework/python/src/core/device.py b/framework/python/src/test_orc/test_case.py similarity index 68% rename from framework/python/src/core/device.py rename to framework/python/src/test_orc/test_case.py index efce2dba1..7c9eb6c20 100644 --- a/framework/python/src/core/device.py +++ b/framework/python/src/test_orc/test_case.py @@ -1,27 +1,26 @@ -# Copyright 2023 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Track device object information.""" - -from net_orc.network_device import NetworkDevice -from dataclasses import dataclass - - -@dataclass -class Device(NetworkDevice): - """Represents a physical device and it's configuration.""" - - manufacturer: str = None - model: str = None - test_modules: str = None +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Represents an individual test case.""" +from dataclasses import dataclass + + +@dataclass +class TestCase: # pylint: disable=too-few-public-methods,too-many-instance-attributes + """Represents a test case.""" + + name: str = "test.undefined" + description: str = "" + expected_behavior: str = "" + required_result: str = "Recommended" diff --git a/framework/python/src/test_orc/test_orchestrator.py b/framework/python/src/test_orc/test_orchestrator.py index fef4e5bb5..eb5676e17 100644 --- a/framework/python/src/test_orc/test_orchestrator.py +++ b/framework/python/src/test_orc/test_orchestrator.py @@ -11,47 +11,47 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
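With test cases now carrying a required_result, the orchestrator changes that follow can reduce a whole device run to one verdict: only a failing test whose case is marked Required makes the report non-compliant, while Recommended and Roadmap failures leave the verdict unchanged. A rough sketch of that rule, with result dictionaries shaped the way the session stores them (function and variable names here are illustrative):

    def overall_verdict(test_cases, test_results):
      """Return the report-level verdict from per-test results."""
      # Names of every test case a module declares as 'Required'
      required = {case.name for case in test_cases
                  if case.required_result.lower() == 'required'}
      for result in test_results:
        # e.g. {'name': 'baseline.pass', 'result': 'Compliant'}
        if (result['name'] in required
            and result['result'].lower() == 'non-compliant'):
          return 'Non-Compliant'
      return 'Compliant'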
- """Provides high level management of the test orchestrator.""" import os import json +import re import time import shutil import docker +from datetime import datetime from docker.types import Mount from common import logger, util +from common.testreport import TestReport from test_orc.module import TestModule +from test_orc.test_case import TestCase LOG_NAME = "test_orc" LOGGER = logger.get_logger("test_orc") RUNTIME_DIR = "runtime/test" TEST_MODULES_DIR = "modules/test" MODULE_CONFIG = "conf/module_config.json" +LOG_REGEX = r"^[A-Z][a-z]{2} [0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2} test_" +SAVED_DEVICE_REPORTS = "local/devices/{device_folder}/reports" +DEVICE_ROOT_CERTS = "local/root_certs" class TestOrchestrator: """Manages and controls the test modules.""" - def __init__(self, net_orc): + def __init__(self, session, net_orc): self._test_modules = [] - self._module_config = None + self._session = session self._net_orc = net_orc self._test_in_progress = False + self._path = os.path.dirname( + os.path.dirname( + os.path.dirname( + os.path.dirname(os.path.dirname(os.path.realpath(__file__)))))) - self._path = os.path.dirname(os.path.dirname( - os.path.dirname( - os.path.dirname(os.path.dirname(os.path.realpath(__file__)))))) - - # Resolve the path to the test-run folder - #self._root_path = os.path.abspath(os.path.join(self._path, os.pardir)) - - - self._root_path = os.path.dirname(os.path.dirname( - os.path.dirname( - os.path.dirname(os.path.dirname(os.path.realpath(__file__)))))) - - shutil.rmtree(os.path.join(self._root_path, RUNTIME_DIR), - ignore_errors=True) + self._root_path = os.path.dirname( + os.path.dirname( + os.path.dirname( + os.path.dirname(os.path.dirname(os.path.realpath(__file__)))))) def start(self): LOGGER.debug("Starting test orchestrator") @@ -61,6 +61,9 @@ def start(self): os.makedirs(RUNTIME_DIR, exist_ok=True) util.run_command(f"chown -R {self._host_user} {RUNTIME_DIR}") + # Setup the root_certs folder + os.makedirs(DEVICE_ROOT_CERTS, exist_ok=True) + self._load_test_modules() self.build_test_modules() @@ -68,48 +71,118 @@ def stop(self): """Stop any running tests""" self._stop_modules() - def run_test_modules(self, device): + def run_test_modules(self): """Iterates through each test module and starts the container.""" + + device = self._session.get_target_device() self._test_in_progress = True LOGGER.info( f"Running test modules on device with mac addr {device.mac_addr}") for module in self._test_modules: - self._run_test_module(module, device) + self._run_test_module(module) LOGGER.info("All tests complete") - self._generate_results(device) + self._session.stop() + report = TestReport().from_json(self._generate_report()) + device.add_report(report) + self._test_in_progress = False + self._timestamp_results(device) - def _generate_results(self, device): - results = {} - results["device"] = {} - if device.manufacturer is not None: - results["device"]["manufacturer"] = device.manufacturer - if device.model is not None: - results["device"]["model"] = device.model - results["device"]["mac_addr"] = device.mac_addr - for module in self._test_modules: - if module.enable_container and self._is_module_enabled(module, device): - container_runtime_dir = os.path.join( - self._root_path, "runtime/test/" + - device.mac_addr.replace(":", "") + "/" + module.name) - results_file = f"{container_runtime_dir}/{module.name}-result.json" - try: - with open(results_file, "r", encoding="utf-8-sig") as f: - module_results = json.load(f) - results[module.name] = module_results - except 
(FileNotFoundError, PermissionError, - json.JSONDecodeError) as results_error: - LOGGER.error(f"Error occured whilst obbtaining results for module {module.name}") - LOGGER.debug(results_error) + LOGGER.debug("Cleaning old test results...") + self._cleanup_old_test_results(device) + + LOGGER.debug("Old test results cleaned") + self._test_in_progress = False + + def _generate_report(self): + report = {} + report["device"] = self._session.get_target_device().to_dict() + report["started"] = self._session.get_started().strftime( + "%Y-%m-%d %H:%M:%S") + report["finished"] = self._session.get_finished().strftime( + "%Y-%m-%d %H:%M:%S") + report["status"] = self._calculate_result() + report["tests"] = self._session.get_report_tests() out_file = os.path.join( - self._root_path, - "runtime/test/" + device.mac_addr.replace(":", "") + "/results.json") + self._root_path, RUNTIME_DIR, + self._session.get_target_device().mac_addr.replace(":", ""), + "report.json") + with open(out_file, "w", encoding="utf-8") as f: - json.dump(results, f, indent=2) + json.dump(report, f, indent=2) util.run_command(f"chown -R {self._host_user} {out_file}") - return results + return report + + def _calculate_result(self): + result = "Compliant" + for test_result in self._session.get_test_results(): + test_case = self.get_test_case(test_result["name"]) + if (test_case.required_result.lower() == "required" + and test_result["result"].lower() == "non-compliant"): + result = "non-compliant" + return result + + def _cleanup_old_test_results(self, device): + + if device.max_device_reports is not None: + max_device_reports = device.max_device_reports + else: + max_device_reports = self._session.get_max_device_reports() + + completed_results_dir = os.path.join( + self._root_path, + SAVED_DEVICE_REPORTS.replace("{device_folder}", device.device_folder)) + + completed_tests = os.listdir(completed_results_dir) + cur_test_count = len(completed_tests) + if cur_test_count > max_device_reports: + LOGGER.debug("Current device has more than max tests results allowed: " + + str(cur_test_count) + ">" + str(max_device_reports)) + + # Find and delete the oldest test + oldest_test = self._find_oldest_test(completed_results_dir) + if oldest_test is not None: + LOGGER.debug("Oldest test found, removing: " + str(oldest_test)) + shutil.rmtree(oldest_test, ignore_errors=True) + # Confirm the delete was succesful + new_test_count = len(os.listdir(completed_results_dir)) + if (new_test_count != cur_test_count + and new_test_count > max_device_reports): + # Continue cleaning up until we're under the max + self._cleanup_old_test_results(device) + + def _find_oldest_test(self, completed_tests_dir): + oldest_timestamp = None + oldest_directory = None + for completed_test in os.listdir(completed_tests_dir): + timestamp = datetime.strptime(str(completed_test), "%Y-%m-%dT%H:%M:%S") + if oldest_timestamp is None or timestamp < oldest_timestamp: + oldest_timestamp = timestamp + oldest_directory = completed_test + if oldest_directory: + return os.path.join(completed_tests_dir, oldest_directory) + else: + return None + + def _timestamp_results(self, device): + + # Define the current device results directory + cur_results_dir = os.path.join(self._root_path, RUNTIME_DIR, + device.mac_addr.replace(":", "")) + + # Define the destination results directory with timestamp + cur_time = datetime.now().strftime("%Y-%m-%dT%H:%M:%S") + completed_results_dir = os.path.join( + SAVED_DEVICE_REPORTS.replace("{device_folder}", device.device_folder), + cur_time) + + # Copy the 
results to the timestamp directory + # leave current copy in place for quick reference to + # most recent test + shutil.copytree(cur_results_dir, completed_results_dir) + util.run_command(f"chown -R {self._host_user} '{completed_results_dir}'") def test_in_progress(self): return self._test_in_progress @@ -117,15 +190,17 @@ def test_in_progress(self): def _is_module_enabled(self, module, device): enabled = True if device.test_modules is not None: - test_modules = json.loads(device.test_modules) + test_modules = device.test_modules if module.name in test_modules: if "enabled" in test_modules[module.name]: enabled = test_modules[module.name]["enabled"] return enabled - def _run_test_module(self, module, device): + def _run_test_module(self, module): """Start the test container and extract the results.""" + device = self._session.get_target_device() + if module is None or not module.enable_container: return @@ -135,21 +210,19 @@ def _run_test_module(self, module, device): LOGGER.info("Running test module " + module.name) try: - container_runtime_dir = os.path.join( - self._root_path, "runtime/test/" + device.mac_addr.replace(":", "") + - "/" + module.name) - os.makedirs(container_runtime_dir) + + device_test_dir = os.path.join(self._root_path, RUNTIME_DIR, + device.mac_addr.replace(":", "")) + + container_runtime_dir = os.path.join(device_test_dir, module.name) + os.makedirs(container_runtime_dir, exist_ok=True) network_runtime_dir = os.path.join(self._root_path, "runtime/network") - device_startup_capture = os.path.join( - self._root_path, "runtime/test/" + device.mac_addr.replace(":", "") + - "/startup.pcap") + device_startup_capture = os.path.join(device_test_dir, "startup.pcap") util.run_command(f"chown -R {self._host_user} {device_startup_capture}") - device_monitor_capture = os.path.join( - self._root_path, "runtime/test/" + device.mac_addr.replace(":", "") + - "/monitor.pcap") + device_monitor_capture = os.path.join(device_test_dir, "monitor.pcap") util.run_command(f"chown -R {self._host_user} {device_monitor_capture}") client = docker.from_env() @@ -182,7 +255,7 @@ def _run_test_module(self, module, device): environment={ "HOST_USER": self._host_user, "DEVICE_MAC": device.mac_addr, - "DEVICE_TEST_MODULES": device.test_modules, + "DEVICE_TEST_MODULES": json.dumps(device.test_modules), "IPV4_SUBNET": self._net_orc.network_config.ipv4_network, "IPV6_SUBNET": self._net_orc.network_config.ipv6_network }) @@ -201,10 +274,36 @@ def _run_test_module(self, module, device): test_module_timeout = time.time() + module.timeout status = self._get_module_status(module) - while time.time() < test_module_timeout and status == "running": - time.sleep(1) + log_stream = module.container.logs(stream=True, stdout=True, stderr=True) + while (time.time() < test_module_timeout and status == "running" + and self._session.get_status() == "In progress"): + try: + line = next(log_stream).decode("utf-8").strip() + if re.search(LOG_REGEX, line): + print(line) + except Exception: # pylint: disable=W0718 + time.sleep(1) status = self._get_module_status(module) + # Get test results from module + container_runtime_dir = os.path.join( + self._root_path, + "runtime/test/" + device.mac_addr.replace(":", "") + "/" + module.name) + results_file = f"{container_runtime_dir}/{module.name}-result.json" + try: + with open(results_file, "r", encoding="utf-8-sig") as f: + module_results_json = json.load(f) + module_results = module_results_json["results"] + for test_result in module_results: + 
self._session.add_test_result(test_result) + except (FileNotFoundError, PermissionError, + json.JSONDecodeError) as results_error: + LOGGER.error( + f"Error occured whilst obbtaining results for module {module.name}") + LOGGER.debug(results_error) + + self._session.add_total_tests(module.total_tests) + LOGGER.info("Test module " + module.name + " has finished") def _get_module_status(self, module): @@ -251,7 +350,7 @@ def _load_test_modules(self): def _load_test_module(self, module_dir): """Import module configuration from module_config.json.""" - LOGGER.debug("Loading test module " + module_dir) + LOGGER.debug(f"Loading test module {module_dir}") modules_dir = os.path.join(self._path, TEST_MODULES_DIR) @@ -270,6 +369,22 @@ def _load_test_module(self, module_dir): module.container_name = "tr-ct-" + module.dir_name + "-test" module.image_name = "test-run/" + module.dir_name + "-test" + # Load test cases + if "tests" in module_json["config"]: + module.total_tests = len(module_json["config"]["tests"]) + for test_case_json in module_json["config"]["tests"]: + try: + test_case = TestCase( + name=test_case_json["name"], + description=test_case_json["description"], + expected_behavior=test_case_json["expected_behavior"], + required_result=test_case_json["required_result"] + ) + module.tests.append(test_case) + except Exception as error: + LOGGER.debug("Failed to load test case. See error for details") + LOGGER.error(error) + if "timeout" in module_json["config"]["docker"]: module.timeout = module_json["config"]["docker"]["timeout"] @@ -278,6 +393,11 @@ def _load_test_module(self, module_dir): module.enable_container = module_json["config"]["docker"][ "enable_container"] + # Determine if this module needs network access + if "network" in module_json["config"]: + module.network = module_json["config"]["network"] + + # Ensure container is built after any dependencies if "depends_on" in module_json["config"]["docker"]: depends_on_module = module_json["config"]["docker"]["depends_on"] if self._get_test_module(depends_on_module) is None: @@ -328,3 +448,25 @@ def _stop_module(self, module, kill=False): LOGGER.debug("Container stopped:" + module.container_name) except docker.errors.NotFound: pass + + def get_test_modules(self): + return self._test_modules + + def get_test_module(self, name): + for test_module in self.get_test_modules(): + if test_module.name == name: + return test_module + return None + + def get_test_cases(self): + test_cases = [] + for test_module in self.get_test_modules(): + for test_case in test_module.tests: + test_cases.append(test_case) + return test_cases + + def get_test_case(self, name): + for test_case in self.get_test_cases(): + if test_case.name == name: + return test_case + return None diff --git a/framework/requirements.txt b/framework/requirements.txt index 03eab9796..560c2baf9 100644 --- a/framework/requirements.txt +++ b/framework/requirements.txt @@ -5,4 +5,10 @@ requests<2.29.0 docker ipaddress netifaces -scapy \ No newline at end of file +scapy + +# Requirements for the API +fastapi==0.99.1 +psutil +uvicorn +pydantic==1.10.11 \ No newline at end of file diff --git a/local/.gitignore b/local/.gitignore index 4fb365c03..06f79c1ca 100644 --- a/local/.gitignore +++ b/local/.gitignore @@ -1,2 +1,3 @@ system.json -devices \ No newline at end of file +devices +root_certs diff --git a/local/system.json.example b/local/system.json.example index e99e013f3..17e5b0891 100644 --- a/local/system.json.example +++ b/local/system.json.example @@ -6,5 +6,6 @@ "log_level": "INFO", 
"startup_timeout": 60, "monitor_period": 300, - "runtime": 1200 + "runtime": 1200, + "max_device_reports": 5 } \ No newline at end of file diff --git a/modules/test/base/base.Dockerfile b/modules/test/base/base.Dockerfile index 10344cbc7..707136f6d 100644 --- a/modules/test/base/base.Dockerfile +++ b/modules/test/base/base.Dockerfile @@ -17,10 +17,14 @@ FROM ubuntu:jammy ARG MODULE_NAME=base ARG MODULE_DIR=modules/test/$MODULE_NAME +ARG COMMON_DIR=framework/python/src/common # Install common software RUN apt-get update && apt-get install -y net-tools iputils-ping tcpdump iproute2 jq python3 python3-pip dos2unix nmap --fix-missing +# Install common python modules +COPY $COMMON_DIR/ /testrun/python/src/common + # Setup the base python requirements COPY $MODULE_DIR/python /testrun/python @@ -45,4 +49,4 @@ COPY $NET_MODULE_DIR/dhcp-1/$NET_MODULE_PROTO_DIR $CONTAINER_PROTO_DIR/dhcp1/ COPY $NET_MODULE_DIR/dhcp-2/$NET_MODULE_PROTO_DIR $CONTAINER_PROTO_DIR/dhcp2/ # Start the test module -ENTRYPOINT [ "/testrun/bin/start_module" ] \ No newline at end of file +ENTRYPOINT [ "/testrun/bin/start" ] \ No newline at end of file diff --git a/framework/python/src/net_orc/network_device.py b/modules/test/base/bin/start old mode 100644 new mode 100755 similarity index 71% rename from framework/python/src/net_orc/network_device.py rename to modules/test/base/bin/start index f17ac0f0d..37902b868 --- a/framework/python/src/net_orc/network_device.py +++ b/modules/test/base/bin/start @@ -1,24 +1,17 @@ -# Copyright 2023 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Track device object information.""" -from dataclasses import dataclass - - -@dataclass -class NetworkDevice: - """Represents a physical device and it's configuration.""" - - mac_addr: str - ip_addr: str = None +#!/bin/bash + +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +/testrun/bin/start_module \ No newline at end of file diff --git a/modules/test/base/bin/start_module b/modules/test/base/bin/start_module index 82c9d26bf..69f399feb 100644 --- a/modules/test/base/bin/start_module +++ b/modules/test/base/bin/start_module @@ -99,4 +99,4 @@ fi sleep 3 # Start the networking service -$BIN_DIR/start_test_module $MODULE_NAME $IFACE \ No newline at end of file +$BIN_DIR/start_test_module $MODULE_NAME $IFACE > /runtime/output/container.log \ No newline at end of file diff --git a/modules/test/base/python/requirements.txt b/modules/test/base/python/requirements.txt index 9c4e2b056..9d9473d74 100644 --- a/modules/test/base/python/requirements.txt +++ b/modules/test/base/python/requirements.txt @@ -1,2 +1,3 @@ grpcio -grpcio-tools \ No newline at end of file +grpcio-tools +netifaces \ No newline at end of file diff --git a/modules/test/base/python/src/test_module.py b/modules/test/base/python/src/test_module.py index b0898aa20..519fb2433 100644 --- a/modules/test/base/python/src/test_module.py +++ b/modules/test/base/python/src/test_module.py @@ -11,7 +11,6 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. - """Base class for all core test module functions""" import json import logger @@ -61,8 +60,10 @@ def _get_device_tests(self, device_test_module): if 'tests' in device_test_module: if test['name'] in device_test_module['tests']: dev_test_config = device_test_module['tests'][test['name']] - if 'config' in test: - test['config'].update(dev_test_config) + if 'enabled' in dev_test_config: + test['enabled'] = dev_test_config['enabled'] + if 'config' in test and 'config' in dev_test_config: + test['config'].update(dev_test_config['config']) return module_tests def _get_device_test_module(self): @@ -80,9 +81,9 @@ def run_tests(self): for test in tests: test_method_name = '_' + test['name'].replace('.', '_') result = None + test['start'] = datetime.now().isoformat() if ('enabled' in test and test['enabled']) or 'enabled' not in test: LOGGER.info('Attempting to run test: ' + test['name']) - test['start'] = datetime.now().isoformat() # Resolve the correct python method by test name and run test if hasattr(self, test_method_name): if 'config' in test: @@ -98,10 +99,28 @@ def run_tests(self): if isinstance(result, bool): test['result'] = 'compliant' if result else 'non-compliant' else: - test['result'] = 'compliant' if result[0] else 'non-compliant' + if result[0] is None: + test['result'] = 'skipped' + if len(result)>1: + test['result_details'] = result[1] + else: + test['result'] = 'compliant' if result[0] else 'non-compliant' test['result_details'] = result[1] else: test['result'] = 'skipped' + + # Generate the short result description based on result value + if test['result'] == 'compliant': + test['result_description'] = test[ + 'short_description'] if 'short_description' in test else test[ + 'name'] + ' passed - see result details for more info' + elif test['result'] == 'non-compliant': + test['result_description'] = test[ + 'name'] + ' failed - see result details for more info' + else: + test['result_description'] = test[ + 'name'] + ' skipped - see result details for more info' + test['end'] = datetime.now().isoformat() duration = datetime.fromisoformat(test['end']) - datetime.fromisoformat( test['start']) diff --git a/modules/test/baseline/conf/module_config.json b/modules/test/baseline/conf/module_config.json index 
4c0cd08d8..83b920ea6 100644 --- a/modules/test/baseline/conf/module_config.json +++ b/modules/test/baseline/conf/module_config.json @@ -15,17 +15,20 @@ { "name": "baseline.pass", "description": "Simulate a compliant test", - "expected_behavior": "A compliant test result is generated" + "expected_behavior": "A compliant test result is generated", + "required_result": "Required" }, { "name": "baseline.fail", "description": "Simulate a non-compliant test", - "expected_behavior": "A non-compliant test result is generated" + "expected_behavior": "A non-compliant test result is generated", + "required_result": "Recommended" }, { "name": "baseline.skip", "description": "Simulate a skipped test", - "expected_behavior": "A skipped test result is generated" + "expected_behavior": "A skipped test result is generated", + "required_result": "Roadmap" } ] } diff --git a/modules/test/baseline/python/src/baseline_module.py b/modules/test/baseline/python/src/baseline_module.py index 22555d369..978f916fe 100644 --- a/modules/test/baseline/python/src/baseline_module.py +++ b/modules/test/baseline/python/src/baseline_module.py @@ -15,7 +15,7 @@ """Baseline test module""" from test_module import TestModule -LOG_NAME = "test_baseline" +LOG_NAME = 'test_baseline' LOGGER = None @@ -28,15 +28,16 @@ def __init__(self, module): LOGGER = self._get_logger() def _baseline_pass(self): - LOGGER.info("Running baseline pass test") - LOGGER.info("Baseline pass test finished") - return True + LOGGER.info('Running baseline pass test') + LOGGER.info('Baseline pass test finished') + return True, 'Baseline pass test ran successfully' def _baseline_fail(self): - LOGGER.info("Running baseline pass test") - LOGGER.info("Baseline pass test finished") - return False + LOGGER.info('Running baseline fail test') + LOGGER.info('Baseline fail test finished') + return False, 'Baseline fail test ran successfully' def _baseline_skip(self): - LOGGER.info("Running baseline pass test") - LOGGER.info("Baseline pass test finished") + LOGGER.info('Running baseline skip test') + LOGGER.info('Baseline skip test finished') + return None, 'Baseline skip test ran successfully' diff --git a/modules/test/conn/bin/start_test_module b/modules/test/conn/bin/start_test_module index 0df510b86..d85ae7d6b 100644 --- a/modules/test/conn/bin/start_test_module +++ b/modules/test/conn/bin/start_test_module @@ -45,7 +45,7 @@ touch $RESULT_FILE chown $HOST_USER $LOG_FILE chown $HOST_USER $RESULT_FILE -# Run the python scrip that will execute the tests for this module +# Run the python script that will execute the tests for this module # -u flag allows python print statements # to be logged by docker by running unbuffered python3 -u $PYTHON_SRC_DIR/run.py "-m $MODULE_NAME" diff --git a/modules/test/conn/conf/module_config.json b/modules/test/conn/conf/module_config.json index b82879544..c358ba1c2 100644 --- a/modules/test/conn/conf/module_config.json +++ b/modules/test/conn/conf/module_config.json @@ -6,31 +6,84 @@ "description": "Connection tests" }, "network": true, + "interface_control": true, "docker": { "depends_on": "base", "enable_container": true, "timeout": 600 }, "tests": [ + { + "name": "connection.dhcp.disconnect", + "description": "The device under test has received an IP address from the DHCP server and responds to an ICMP echo (ping) request", + "expected_behavior": "The device is not setup with a static IP address. 
The device accepts an IP address from a DHCP server (RFC 2131) and responds succesfully to an ICMP echo (ping) request.", + "required_result": "Required" + }, + { + "name": "connection.dhcp.disconnect_ip_change", + "description": "Update device IP on the DHCP server and reconnect the device. Does the device receive the new IP address?", + "expected_behavior": "Device recieves a new IP address within the range that is specified on the DHCP server. Device should respond to aping on this new address.", + "required_result": "Required" + }, { "name": "connection.dhcp_address", "description": "The device under test has received an IP address from the DHCP server and responds to an ICMP echo (ping) request", - "expected_behavior": "The device is not setup with a static IP address. The device accepts an IP address from a DHCP server (RFC 2131) and responds succesfully to an ICMP echo (ping) request." + "expected_behavior": "The device is not setup with a static IP address. The device accepts an IP address from a DHCP server (RFC 2131) and responds succesfully to an ICMP echo (ping) request.", + "required_result": "Required" }, { "name": "connection.mac_address", "description": "Check and note device physical address.", - "expected_behavior": "N/A" + "expected_behavior": "N/A", + "required_result": "Required" }, { "name": "connection.mac_oui", "description": "The device under test hs a MAC address prefix that is registered against a known manufacturer.", - "expected_behavior": "The MAC address prefix is registered in the IEEE Organizationally Unique Identifier database." + "expected_behavior": "The MAC address prefix is registered in the IEEE Organizationally Unique Identifier database.", + "required_result": "Required" + }, + { + "name": "connection.private_address", + "description": "The device under test accepts an IP address that is compliant with RFC 1918 Address Allocation for Private Internets.", + "expected_behavior": "The device under test accepts IP addresses within all ranges specified in RFC 1918 and communicates using these addresses. The Internet Assigned Numbers Authority (IANA) has reserved the following three blocks of the IP address space for private internets. 10.0.0.0 - 10.255.255.255.255 (10/8 prefix). 172.16.0.0 - 172.31.255.255 (172.16/12 prefix). 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)", + "required_result": "Required", + "config": { + "ranges": [ + { + "start": "10.0.0.100", + "end": "10.0.0.200" + }, + { + "start": "172.16.0.0", + "end": "172.16.255.255" + }, + { + "start": "192.168.0.0", + "end": "192.168.255.255" + } + ] + } + }, + { + "name": "connection.shared_address", + "description": "Ensure the device supports RFC 6598 IANA-Reserved IPv4 Prefix for Shared Address Space", + "expected_behavior": "The device under test accepts IP addresses within the ranges specified in RFC 6598 and communicates using these addresses", + "required_result": "Required", + "config": { + "ranges": [ + { + "start": "100.64.0.1", + "end": "100.64.255.254" + } + ] + } }, { "name": "connection.private_address", "description": "The device under test accepts an IP address that is compliant with RFC 1918 Address Allocation for Private Internets.", "expected_behavior": "The device under test accepts IP addresses within all ranges specified in RFC 1918 and communicates using these addresses. The Internet Assigned Numbers Authority (IANA) has reserved the following three blocks of the IP address space for private internets. 10.0.0.0 - 10.255.255.255.255 (10/8 prefix). 
172.16.0.0 - 172.31.255.255 (172.16/12 prefix). 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)", + "required_result": "Required", "config": [ { "start": "10.0.0.100", @@ -49,22 +102,38 @@ { "name": "connection.single_ip", "description": "The network switch port connected to the device reports only one IP address for the device under test.", - "expected_behavior": "The device under test does not behave as a network switch and only requets one IP address. This test is to avoid that devices implement network switches that allow connecting strings of daisy chained devices to one single network port, as this would not make 802.1x port based authentication possible." + "expected_behavior": "The device under test does not behave as a network switch and only requets one IP address. This test is to avoid that devices implement network switches that allow connecting strings of daisy chained devices to one single network port, as this would not make 802.1x port based authentication possible.", + "required_result": "Required" }, { "name": "connection.target_ping", "description": "The device under test responds to an ICMP echo (ping) request.", - "expected_behavior": "The device under test responds to an ICMP echo (ping) request." + "expected_behavior": "The device under test responds to an ICMP echo (ping) request.", + "required_result": "Required" + }, + { + "name": "connection.ipaddr.ip_change", + "description": "The device responds to a ping (ICMP echo request) to the new IP address it has received after the initial DHCP lease has expired.", + "expected_behavior": "If the lease expires before the client receiveds a DHCPACK, the client moves to INIT state, MUST immediately stop any other network processing and requires network initialization parameters as if the client were uninitialized. If the client then receives a DHCPACK allocating the client its previous network addres, the client SHOULD continue network processing. 
If the client is given a new network address, it MUST NOT continue using the previous network address and SHOULD notify the local users of the problem.", + "required_result": "Required" + }, + { + "name": "connection.ipaddr.dhcp_failover", + "description": "The device has requested a DHCPREQUEST/REBIND to the DHCP failover server after the primary DHCP server has been brought down.", + "expected_behavior": "", + "required_result": "Required" }, { "name": "connection.ipv6_slaac", "description": "The device forms a valid IPv6 address as a combination of the IPv6 router prefix and the device interface identifier", - "expected_behavior": "The device under test complies with RFC4862 and forms a valid IPv6 SLAAC address" + "expected_behavior": "The device under test complies with RFC4862 and forms a valid IPv6 SLAAC address", + "required_result": "Required" }, { "name": "connection.ipv6_ping", "description": "The device responds to an IPv6 ping (ICMPv6 Echo) request to the SLAAC address", - "expected_behavior": "The device responds to the ping as per RFC4443" + "expected_behavior": "The device responds to the ping as per RFC4443", + "required_result": "Required" } ] } diff --git a/modules/test/conn/python/requirements.txt b/modules/test/conn/python/requirements.txt index 93b351f44..c2275b3e0 100644 --- a/modules/test/conn/python/requirements.txt +++ b/modules/test/conn/python/requirements.txt @@ -1 +1,2 @@ +pyOpenSSL scapy \ No newline at end of file diff --git a/modules/test/conn/python/src/connection_module.py b/modules/test/conn/python/src/connection_module.py index da8754608..248edc536 100644 --- a/modules/test/conn/python/src/connection_module.py +++ b/modules/test/conn/python/src/connection_module.py @@ -20,14 +20,15 @@ from test_module import TestModule from dhcp1.client import Client as DHCPClient1 from dhcp2.client import Client as DHCPClient2 +from dhcp_util import DHCPUtil LOG_NAME = 'test_connection' LOGGER = None OUI_FILE = '/usr/local/etc/oui.txt' -DHCP_SERVER_CAPTURE_FILE = '/runtime/network/dhcp-1.pcap' STARTUP_CAPTURE_FILE = '/runtime/device/startup.pcap' MONITOR_CAPTURE_FILE = '/runtime/device/monitor.pcap' SLAAC_PREFIX = 'fd10:77be:4186' +TR_CONTAINER_MAC_PREFIX = '9a:02:57:1e:8f:' class ConnectionModule(TestModule): @@ -39,6 +40,7 @@ def __init__(self, module): LOGGER = self._get_logger() self.dhcp1_client = DHCPClient1() self.dhcp2_client = DHCPClient2() + self._dhcp_util = DHCPUtil(self.dhcp1_client, self.dhcp2_client, LOGGER) # ToDo: Move this into some level of testing, leave for # reference until tests are implemented with these calls @@ -68,71 +70,12 @@ def __init__(self, module): # print("Set Range: " + str(response)) def _connection_private_address(self, config): - # Shutdown the secondary DHCP Server LOGGER.info('Running connection.private_address') - response = self.dhcp1_client.get_dhcp_range() - cur_range = {} - if response.code == 200: - cur_range['start'] = response.start - cur_range['end'] = response.end - LOGGER.info('Current DHCP subnet range: ' + str(cur_range)) - else: - LOGGER.error('Failed to resolve current subnet range required ' - 'for restoring network') - return None, ('Failed to resolve current subnet range required ' - 'for restoring network') - - results = [] - dhcp_setup = self.setup_single_dhcp_server() - if dhcp_setup[0]: - LOGGER.info(dhcp_setup[1]) - lease = self._get_cur_lease() - if lease is not None: - if self._is_lease_active(lease): - results = self.test_subnets(config) - else: - return None, 'Failed to confirm a valid active lease for 
the device' - else: - LOGGER.error(dhcp_setup[1]) - return None, 'Failed to setup DHCP server for test' + return self._run_subnet_test(config) - # Process and return final results - final_result = None - final_result_details = '' - for result in results: - if final_result is None: - final_result = result['result'] - else: - final_result &= result['result'] - final_result_details += result['details'] + '\n' - - try: - # Restore failover configuration of DHCP servers - self.restore_failover_dhcp_server(cur_range) - - # Wait for the current lease to expire - self._wait_for_lease_expire(self._get_cur_lease()) - - # Wait for a new lease to be provided before exiting test - # to prevent other test modules from failing - for _ in range(5): - LOGGER.info('Checking for new lease') - lease = self._get_cur_lease() - if lease is not None: - LOGGER.info('New Lease found: ' + str(lease)) - LOGGER.info('Validating subnet for new lease...') - in_range = self.is_ip_in_range(lease['ip'], cur_range['start'], - cur_range['end']) - LOGGER.info('Lease within subnet: ' + str(in_range)) - break - else: - LOGGER.info('New lease not found. Waiting to check again') - time.sleep(5) - - except Exception as e: # pylint: disable=W0718 - LOGGER.error('Failed to restore DHCP server configuration: ' + str(e)) - - return final_result, final_result_details + def _connection_shared_address(self, config): + LOGGER.info('Running connection.shared_address') + return self._run_subnet_test(config) def _connection_dhcp_address(self): LOGGER.info('Running connection.dhcp_address') @@ -182,8 +125,7 @@ def _connection_single_ip(self): return result, 'No MAC address found.' # Read all the pcap files containing DHCP packet information - packets = rdpcap(DHCP_SERVER_CAPTURE_FILE) - packets.append(rdpcap(STARTUP_CAPTURE_FILE)) + packets = rdpcap(STARTUP_CAPTURE_FILE) packets.append(rdpcap(MONITOR_CAPTURE_FILE)) # Extract MAC addresses from DHCP packets @@ -193,7 +135,8 @@ def _connection_single_ip(self): # Option[1] = message-type, option 3 = DHCPREQUEST if DHCP in packet and packet[DHCP].options[0][1] == 3: mac_address = packet[Ether].src - mac_addresses.add(mac_address.upper()) + if not mac_address.startswith(TR_CONTAINER_MAC_PREFIX): + mac_addresses.add(mac_address.upper()) # Check if the device mac address is in the list of DHCPREQUESTs result = self._device_mac.upper() in mac_addresses @@ -210,7 +153,7 @@ def _connection_target_ping(self): # If the ipv4 address wasn't resolved yet, try again if self._device_ipv4_addr is None: - self._device_ipv4_addr = self._get_device_ipv4(self) + self._device_ipv4_addr = self._get_device_ipv4() if self._device_ipv4_addr is None: LOGGER.error('No device IP could be resolved') @@ -218,6 +161,85 @@ def _connection_target_ping(self): else: return self._ping(self._device_ipv4_addr) + def _connection_ipaddr_ip_change(self): + result = None + LOGGER.info('Running connection.ipaddr.ip_change') + if self._dhcp_util.setup_single_dhcp_server(): + lease = self._dhcp_util.get_cur_lease(self._device_mac) + if lease is not None: + LOGGER.info('Current device lease resolved: ' + str(lease)) + # Figure out how to calculate a valid IP address + ip_address = '10.10.10.30' + if self._dhcp_util.add_reserved_lease(lease['hostname'], + lease['hw_addr'], ip_address): + self._dhcp_util.wait_for_lease_expire(lease) + LOGGER.info('Checking device accepted new ip') + for _ in range(5): + LOGGER.info('Pinging device at IP: ' + ip_address) + if self._ping(ip_address): + LOGGER.info('Ping Success') + LOGGER.info('Reserved lease 
confirmed active in device') + result = True, 'Device has accepted an IP address change' + LOGGER.info('Restoring DHCP failover configuration') + break + else: + LOGGER.info('Device did not respond to ping') + result = False, 'Device did not accept IP address change' + time.sleep(5) # Wait 5 seconds before trying again + self._dhcp_util.delete_reserved_lease(lease['hw_addr']) + else: + result = None, 'Failed to create reserved lease for device' + else: + result = None, 'Device has no current DHCP lease' + # Restore the network + self._dhcp_util.restore_failover_dhcp_server() + LOGGER.info("Waiting 30 seconds for reserved lease to expire") + time.sleep(30) + self._dhcp_util.get_new_lease(self._device_mac) + else: + result = None, 'Failed to configure network for test' + return result + + def _connection_ipaddr_dhcp_failover(self): + result = None + # Confirm that both servers are online + primary_status = self._dhcp_util.get_dhcp_server_status( + dhcp_server_primary=True) + secondary_status = self._dhcp_util.get_dhcp_server_status( + dhcp_server_primary=False) + if primary_status and secondary_status: + lease = self._dhcp_util.get_cur_lease(self._device_mac) + if lease is not None: + LOGGER.info('Current device lease resolved: ' + str(lease)) + if self._dhcp_util.is_lease_active(lease): + # Shutdown the primary server + if self._dhcp_util.stop_dhcp_server(dhcp_server_primary=True): + # Wait until the current lease is expired + self._dhcp_util.wait_for_lease_expire(lease) + # Make sure the device has received a new lease from the + # secondary server + if self._dhcp_util.get_new_lease(self._device_mac, + dhcp_server_primary=False): + if self._dhcp_util.is_lease_active(lease): + result = True, ('Secondary DHCP server lease confirmed active ' + 'in device') + else: + result = False, 'Could not validate lease is active in device' + else: + result = False, ('Device did not recieve a new lease from ' + 'secondary DHCP server') + self._dhcp_util.start_dhcp_server(dhcp_server_primary=True) + else: + result = None, 'Failed to shutdown primary DHCP server' + else: + result = False, 'Device did not respond to ping' + else: + result = None, 'Device has no current DHCP lease' + else: + LOGGER.error('Network is not ready for this test. Skipping') + result = None, 'Network is not ready for this test' + return result + def _get_oui_manufacturer(self, mac_address): # Do some quick fixes on the format of the mac_address # to match the oui file pattern @@ -231,6 +253,7 @@ def _get_oui_manufacturer(self, mac_address): def _connection_ipv6_slaac(self): LOGGER.info('Running connection.ipv6_slaac') + result = None packet_capture = rdpcap(MONITOR_CAPTURE_FILE) sends_ipv6 = False @@ -243,30 +266,34 @@ def _connection_ipv6_slaac(self): if ipv6_addr.startswith(SLAAC_PREFIX): self._device_ipv6_addr = ipv6_addr LOGGER.info(f'Device has formed SLAAC address {ipv6_addr}') - return True - - if sends_ipv6: - LOGGER.info('Device does not support IPv6 SLAAC') - else: - LOGGER.info('Device does not support IPv6') - return False + result = True, f'Device has formed SLAAC address {ipv6_addr}' + if result is None: + if sends_ipv6: + LOGGER.info('Device does not support IPv6 SLAAC') + result = False, 'Device does not support IPv6 SLAAC' + else: + LOGGER.info('Device does not support IPv6') + result = False, 'Device does not support IPv6' + return result def _connection_ipv6_ping(self): LOGGER.info('Running connection.ipv6_ping') - + result = None + if self._device_ipv6_addr is None: LOGGER.info('No IPv6 SLAAC address found. 
Cannot ping') - return - - if self._ping(self._device_ipv6_addr): - LOGGER.info(f'Device responds to IPv6 ping on {self._device_ipv6_addr}') - return True + result = None, 'No IPv6 SLAAc address found. Cannot ping' else: - LOGGER.info('Device does not respond to IPv6 ping') - return False + if self._ping(self._device_ipv6_addr): + LOGGER.info(f'Device responds to IPv6 ping on {self._device_ipv6_addr}') + result = True, f'Device responds to IPv6 ping on {self._device_ipv6_addr}' + else: + LOGGER.info('Device does not respond to IPv6 ping') + result = False, 'Device does not respond to IPv6 ping' + return result def _ping(self, host): - cmd = "ping -c 1 " + str(host) + cmd = 'ping -c 1 ' + str(host) success = util.run_command(cmd, output=False) return success @@ -334,6 +361,79 @@ def is_ip_in_range(self, ip, start_ip, end_ip): return start_int <= ip_int <= end_int + def _run_subnet_test(self, config): + # Resolve the configured dhcp subnet ranges + ranges = None + if 'ranges' in config: + ranges = config['ranges'] + else: + LOGGER.error('No subnet ranges configured for test. Skipping') + return None, 'No subnet ranges configured for test. Skipping' + + response = self.dhcp1_client.get_dhcp_range() + cur_range = {} + if response.code == 200: + cur_range['start'] = response.start + cur_range['end'] = response.end + LOGGER.info('Current DHCP subnet range: ' + str(cur_range)) + else: + LOGGER.error('Failed to resolve current subnet range required ' + 'for restoring network') + return None, ('Failed to resolve current subnet range required ' + 'for restoring network') + + results = [] + dhcp_setup = self.setup_single_dhcp_server() + if dhcp_setup[0]: + LOGGER.info(dhcp_setup[1]) + lease = self._get_cur_lease() + if lease is not None: + if self._is_lease_active(lease): + results = self.test_subnets(ranges) + else: + return None, 'Failed to confirm a valid active lease for the device' + else: + LOGGER.error(dhcp_setup[1]) + return None, 'Failed to setup DHCP server for test' + + # Process and return final results + final_result = None + final_result_details = '' + for result in results: + if final_result is None: + final_result = result['result'] + else: + final_result &= result['result'] + final_result_details += result['details'] + '\n' + + try: + # Restore failover configuration of DHCP servers + self.restore_failover_dhcp_server(cur_range) + + # Wait for the current lease to expire + self._wait_for_lease_expire(self._get_cur_lease()) + + # Wait for a new lease to be provided before exiting test + # to prevent other test modules from failing + for _ in range(5): + LOGGER.info('Checking for new lease') + lease = self._get_cur_lease() + if lease is not None: + LOGGER.info('New Lease found: ' + str(lease)) + LOGGER.info('Validating subnet for new lease...') + in_range = self.is_ip_in_range(lease['ip'], cur_range['start'], + cur_range['end']) + LOGGER.info('Lease within subnet: ' + str(in_range)) + break + else: + LOGGER.info('New lease not found. 
Waiting to check again') + time.sleep(5) + + except Exception as e: # pylint: disable=W0718 + LOGGER.error('Failed to restore DHCP server configuration: ' + str(e)) + + return final_result, final_result_details + def _test_subnet(self, subnet, lease): if self._change_subnet(subnet): expiration = datetime.strptime(lease['expires'], '%Y-%m-%d %H:%M:%S') @@ -387,7 +487,7 @@ def _get_cur_lease(self): LOGGER.info('Checking current device lease') response = self.dhcp1_client.get_lease(self._device_mac) if response.code == 200: - lease = eval(response.message) # pylint: disable=W0123 + lease = eval(response.message) # pylint: disable=W0123 if lease: # Check if non-empty lease return lease else: @@ -425,7 +525,7 @@ def test_subnets(self, subnets): 'details': 'Subnet ' + subnet['start'] + '-' + subnet['end'] + ' failed' } - except Exception as e: # pylint: disable=W0718 + except Exception as e: # pylint: disable=W0718 result = {'result': False, 'details': 'Subnet test failed: ' + str(e)} results.append(result) return results diff --git a/modules/test/conn/python/src/dhcp_util.py b/modules/test/conn/python/src/dhcp_util.py new file mode 100644 index 000000000..6bc4d8401 --- /dev/null +++ b/modules/test/conn/python/src/dhcp_util.py @@ -0,0 +1,214 @@ +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
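The DHCPUtil class introduced below gathers the DHCP-server choreography the connection tests above keep repeating: drop to a single DHCP server, manipulate or watch a lease, then restore the failover pair. Roughly, a test drives it like this (condensed from connection.ipaddr.ip_change; the reserved address and the return handling are illustrative):

    from dhcp_util import DHCPUtil

    def reserved_lease_roundtrip(dhcp1_client, dhcp2_client, device_mac, logger):
      """Check a device picks up a reserved lease once its old one expires."""
      dhcp_util = DHCPUtil(dhcp1_client, dhcp2_client, logger)
      if not dhcp_util.setup_single_dhcp_server():
        return None  # network not ready for this test
      lease = dhcp_util.get_cur_lease(device_mac)
      accepted = None
      if lease is not None and dhcp_util.add_reserved_lease(
          lease['hostname'], lease['hw_addr'], '10.10.10.30'):
        dhcp_util.wait_for_lease_expire(lease)
        accepted = dhcp_util.ping('10.10.10.30')
        dhcp_util.delete_reserved_lease(lease['hw_addr'])
      # Always restore the failover pair so later modules see a normal network
      dhcp_util.restore_failover_dhcp_server()
      return accepted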
+
+"""Module that contains various methods for validating the DHCP
+device behaviors"""
+
+import time
+from datetime import datetime
+import util
+
+LOG_NAME = 'dhcp_util'
+LOGGER = None
+
+class DHCPUtil():
+  """Helper class for various tests concerning DHCP behavior"""
+
+  def __init__(self, dhcp_primary_client, dhcp_secondary_client, logger):
+    global LOGGER
+    LOGGER = logger
+    self._dhcp1_client = dhcp_primary_client
+    self._dhcp2_client = dhcp_secondary_client
+
+  # Move primary DHCP server from failover into a single DHCP server config
+  def disable_failover(self, dhcp_server_primary=True):
+    LOGGER.info('Disabling primary DHCP server failover')
+    response = self.get_dhcp_client(dhcp_server_primary).disable_failover()
+    if response.code == 200:
+      LOGGER.info('Primary DHCP server failover disabled')
+      return True
+    else:
+      LOGGER.error('Failed to disable primary DHCP server failover')
+      return False
+
+  # Move primary DHCP server to primary failover
+  def enable_failover(self, dhcp_server_primary=True):
+    LOGGER.info('Enabling primary failover DHCP server')
+    response = self.get_dhcp_client(dhcp_server_primary).enable_failover()
+    if response.code == 200:
+      LOGGER.info('Primary DHCP server failover enabled')
+      return True
+    else:
+      LOGGER.error('Failed to enable primary DHCP server failover')
+      return False
+
+  # Resolve the requested dhcp client
+  def get_dhcp_client(self, dhcp_server_primary=True):
+    if dhcp_server_primary:
+      return self._dhcp1_client
+    else:
+      return self._dhcp2_client
+
+  # Read the DHCP range
+  def get_dhcp_range(self, dhcp_server_primary=True):
+    response = self.get_dhcp_client(dhcp_server_primary).get_dhcp_range()
+    cur_range = None
+    if response.code == 200:
+      cur_range = {}
+      cur_range['start'] = response.start
+      cur_range['end'] = response.end
+      LOGGER.info('Current DHCP subnet range: ' + str(cur_range))
+    else:
+      LOGGER.error('Failed to resolve current subnet range required '
+                   'for restoring network')
+    return cur_range
+
+  def restore_failover_dhcp_server(self):
+    if self.enable_failover():
+      response = self.get_dhcp_client(False).start_dhcp_server()
+      if response.code == 200:
+        LOGGER.info('Secondary DHCP server started')
+        return True
+      else:
+        LOGGER.error('Failed to start secondary DHCP server')
+        return False
+    else:
+      LOGGER.error('Failed to enable failover in primary DHCP server')
+      return False
+
+  # Start the requested DHCP server
+  def start_dhcp_server(self, dhcp_server_primary=True):
+    LOGGER.info('Starting DHCP server')
+    response = self.get_dhcp_client(dhcp_server_primary).start_dhcp_server()
+    if response.code == 200:
+      LOGGER.info('DHCP server start command success')
+      return True
+    else:
+      LOGGER.error('DHCP server start command failed')
+      return False
+
+  # Stop the requested DHCP server
+  def stop_dhcp_server(self, dhcp_server_primary=True):
+    LOGGER.info('Stopping DHCP server')
+    response = self.get_dhcp_client(dhcp_server_primary).stop_dhcp_server()
+    if response.code == 200:
+      LOGGER.info('DHCP server stop command success')
+      return True
+    else:
+      LOGGER.error('DHCP server stop command failed')
+      return False
+
+  def get_dhcp_server_status(self, dhcp_server_primary=True):
+    LOGGER.info('Checking DHCP server status')
+    response = self.get_dhcp_client(dhcp_server_primary).get_status()
+    if response.code == 200:
+      LOGGER.info('DHCP server status: ' + str(response.message))
+      status = eval(response.message)  # pylint: disable=W0123
+      return status['dhcpStatus']
+    else:
+      return False
+
+  def get_cur_lease(self, mac_address, dhcp_server_primary=True):
+    LOGGER.info('Checking current device lease')
+    response = self.get_dhcp_client(dhcp_server_primary).get_lease(mac_address)
+    if response.code == 200:
+      lease = eval(response.message)  # pylint: disable=W0123
+      if lease:  # Check if non-empty lease
+        return lease
+      else:
+        return None
+
+  def get_new_lease(self, mac_address, dhcp_server_primary=True):
+    lease = None
+    for _ in range(5):
+      LOGGER.info('Checking for new lease')
+      lease = self.get_cur_lease(mac_address, dhcp_server_primary)
+      if lease is not None:
+        LOGGER.info('New lease found: ' + str(lease))
+        break
+      else:
+        LOGGER.info('New lease not found. Waiting to check again')
+        time.sleep(5)
+    return lease
+
+  def is_lease_active(self, lease):
+    ping_success = False
+    if 'ip' in lease:
+      ip_addr = lease['ip']
+      LOGGER.info('Lease IP Resolved: ' + ip_addr)
+      LOGGER.info('Attempting to ping device...')
+      ping_success = self.ping(ip_addr)
+      LOGGER.info('Ping Success: ' + str(ping_success))
+      if ping_success:
+        LOGGER.info('Current lease confirmed active in device')
+    else:
+      LOGGER.error('Failed to confirm a valid active lease for the device')
+    return ping_success
+
+  def ping(self, host):
+    cmd = 'ping -c 1 ' + str(host)
+    success = util.run_command(cmd, output=False)
+    return success
+
+  def add_reserved_lease(self,
+                         hostname,
+                         mac_address,
+                         ip_address,
+                         dhcp_server_primary=True):
+    response = self.get_dhcp_client(dhcp_server_primary).add_reserved_lease(
+        hostname, mac_address, ip_address)
+    if response.code == 200:
+      LOGGER.info('Reserved lease ' + ip_address + ' added for ' + mac_address)
+      return True
+    else:
+      LOGGER.error('Failed to add reserved lease for ' + mac_address)
+      return False
+
+  def delete_reserved_lease(self, mac_address, dhcp_server_primary=True):
+    response = self.get_dhcp_client(dhcp_server_primary).delete_reserved_lease(
+        mac_address)
+    if response.code == 200:
+      LOGGER.info('Reserved lease deleted for ' + mac_address)
+      return True
+    else:
+      LOGGER.error('Failed to delete reserved lease for ' + mac_address)
+      return False
+
+  def setup_single_dhcp_server(self):
+    # Shutdown the secondary DHCP Server
+    LOGGER.info('Stopping secondary DHCP server')
+    if self.stop_dhcp_server(False):
+      LOGGER.info('Secondary DHCP server stop command success')
+      time.sleep(3)  # Give some time for the server to stop
+      if not self.get_dhcp_server_status(False):
+        LOGGER.info('Secondary DHCP server stopped')
+        if self.disable_failover(True):
+          LOGGER.info('Primary DHCP server failover disabled')
+          return True
+        else:
+          LOGGER.error('Failed to disable primary DHCP server failover')
+          return False
+      else:
+        LOGGER.error('Secondary DHCP server still running')
+        return False
+    else:
+      LOGGER.error('Failed to stop secondary DHCP server')
+      return False
+
+  def wait_for_lease_expire(self, lease):
+    expiration = datetime.strptime(lease['expires'], '%Y-%m-%d %H:%M:%S')
+    time_to_expire = expiration - datetime.now()
+    LOGGER.info('Time until lease expiration: ' + str(time_to_expire))
+    LOGGER.info('Waiting for current lease to expire: ' + str(expiration))
+    if time_to_expire.total_seconds() > 0:
+      time.sleep(time_to_expire.total_seconds() +
+                 5)  # Wait until the expiration time and pad 5 seconds
+    LOGGER.info('Current lease expired.')
diff --git a/modules/test/dns/conf/module_config.json b/modules/test/dns/conf/module_config.json
index 177537b69..e00061047 100644
--- a/modules/test/dns/conf/module_config.json
+++ b/modules/test/dns/conf/module_config.json
@@ -13,18 +13,22 @@
     },
     "tests":[
       {
-        "name": "dns.network.from_device",
+        "name": "dns.network.hostname_resolution",
         "description": "Verify the device sends DNS requests",
-        "expected_behavior": "The device sends DNS requests."
+        "expected_behavior": "The device sends DNS requests.",
+        "required_result": "Required"
       },
       {
         "name": "dns.network.from_dhcp",
         "description": "Verify the device allows for a DNS server to be entered automatically",
-        "expected_behavior": "The device sends DNS requests to the DNS server provided by the DHCP server"
+        "expected_behavior": "The device sends DNS requests to the DNS server provided by the DHCP server",
+        "required_result": "Roadmap"
      },
      {
        "name": "dns.mdns",
-        "description": "If the device has MDNS (or any kind of IP multicast), can it be disabled"
+        "description": "If the device has MDNS (or any kind of IP multicast), can it be disabled",
+        "expected_behavior": "Device may send MDNS requests",
+        "required_result": "Recommended"
      }
    ]
  }
diff --git a/modules/test/dns/python/src/dns_module.py b/modules/test/dns/python/src/dns_module.py
index 8d32d4dfb..bc56c3718 100644
--- a/modules/test/dns/python/src/dns_module.py
+++ b/modules/test/dns/python/src/dns_module.py
@@ -11,7 +11,6 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-
 """DNS test module"""
 import subprocess
 from test_module import TestModule
@@ -32,61 +31,84 @@ def __init__(self, module):
     global LOGGER
     LOGGER = self._get_logger()
 
-  def _check_dns_traffic(self, tcpdump_filter):
-    dns_server_queries = self._exec_tcpdump(tcpdump_filter,DNS_SERVER_CAPTURE_FILE)
+  def _has_dns_traffic(self, tcpdump_filter):
+    dns_server_queries = self._exec_tcpdump(tcpdump_filter,
+                                            DNS_SERVER_CAPTURE_FILE)
     LOGGER.info('DNS Server queries found: ' + str(len(dns_server_queries)))
-    dns_startup_queries = self._exec_tcpdump(tcpdump_filter,STARTUP_CAPTURE_FILE)
+    dns_startup_queries = self._exec_tcpdump(tcpdump_filter,
+                                             STARTUP_CAPTURE_FILE)
     LOGGER.info('Startup DNS queries found: ' + str(len(dns_startup_queries)))
-    dns_monitor_queries = self._exec_tcpdump(tcpdump_filter,MONITOR_CAPTURE_FILE)
+    dns_monitor_queries = self._exec_tcpdump(tcpdump_filter,
+                                             MONITOR_CAPTURE_FILE)
     LOGGER.info('Monitor DNS queries found: ' + str(len(dns_monitor_queries)))
-    num_query_dns = len(dns_server_queries) + len(dns_startup_queries) + len(dns_monitor_queries)
-
+    num_query_dns = len(dns_server_queries) + len(dns_startup_queries) + len(
+        dns_monitor_queries)
     LOGGER.info('DNS queries found: ' + str(num_query_dns))
-    dns_traffic_detected = num_query_dns > 0
-    LOGGER.info('DNS traffic detected: ' + str(dns_traffic_detected))
-    return dns_traffic_detected
+
+    return num_query_dns > 0
 
   def _dns_network_from_dhcp(self):
-    LOGGER.info("Running dns.network.from_dhcp")
+    LOGGER.info('Running dns.network.from_dhcp')
+    result = None
     LOGGER.info('Checking DNS traffic for configured DHCP DNS server: ' +
                 self._dns_server)
 
-    # Check if the device DNS traffic is to appropriate server
-    tcpdump_filter = (f'dst port 53 and dst host {self._dns_server}',
-                      f' and ether src {self._device_mac}')
-
-    result = self._check_dns_traffic(tcpdump_filter=tcpdump_filter)
-
-    LOGGER.info('DNS traffic detected to configured DHCP DNS server: ' +
-                str(result))
+    # Check if the device DNS traffic is to appropriate local
+    # DHCP provided server
+    tcpdump_filter = (f'dst port 53 and dst host {self._dns_server} ' +
+                      f'and ether src {self._device_mac}')
+    dns_packets_local = self._has_dns_traffic(tcpdump_filter=tcpdump_filter)
+ # Check if the device sends any DNS traffic to non-DHCP provided server + tcpdump_filter = (f'dst port 53 and dst not host {self._dns_server} ' + + 'ether src {self._device_mac}') + dns_packets_not_local = self._has_dns_traffic(tcpdump_filter=tcpdump_filter) + + if dns_packets_local or dns_packets_not_local: + if dns_packets_not_local: + result = False, 'DNS traffic detected to non-DHCP provided server' + else: + LOGGER.info('DNS traffic detected only to configured DHCP DNS server') + result = True, 'DNS traffic detected only to DHCP provided server' + else: + LOGGER.info('No DNS traffic detected from the device') + result = None, 'No DNS traffic detected from the device' return result def _dns_network_from_device(self): - LOGGER.info("Running dns.network.from_device") + LOGGER.info('Running dns.network.from_device') + result = None LOGGER.info('Checking DNS traffic from device: ' + self._device_mac) - # Check if the device DNS traffic is to appropriate server + # Check if the device DNS traffic tcpdump_filter = f'dst port 53 and ether src {self._device_mac}' - - result = self._check_dns_traffic(tcpdump_filter=tcpdump_filter) - - LOGGER.info('DNS traffic detected from device: ' + str(result)) + dns_packetes = self._has_dns_traffic(tcpdump_filter=tcpdump_filter) + + if dns_packetes: + LOGGER.info('DNS traffic detected from device') + result = True, 'DNS traffic detected from device' + else: + LOGGER.info('No DNS traffic detected from the device') + result = False, 'No DNS traffic detected from the device' return result def _dns_mdns(self): - LOGGER.info("Running dns.mdns") - + LOGGER.info('Running dns.mdns') + result = None # Check if the device sends any MDNS traffic tcpdump_filter = f'udp port 5353 and ether src {self._device_mac}' - - result = self._check_dns_traffic(tcpdump_filter=tcpdump_filter) - - LOGGER.info('MDNS traffic detected from device: ' + str(result)) - return not result - + dns_packetes = self._has_dns_traffic(tcpdump_filter=tcpdump_filter) + + if dns_packetes: + LOGGER.info('MDNS traffic detected from device') + result = True, 'MDNS traffic detected from device' + else: + LOGGER.info('No MDNS traffic detected from the device') + result = None, 'No MDNS traffic detected from the device' + return result def _exec_tcpdump(self, tcpdump_filter, capture_file): """ diff --git a/modules/test/nmap/conf/module_config.json b/modules/test/nmap/conf/module_config.json index 292eced8b..8a90febc1 100644 --- a/modules/test/nmap/conf/module_config.json +++ b/modules/test/nmap/conf/module_config.json @@ -29,9 +29,10 @@ } }, "description": "Check FTP port 20/21 is disabled and FTP is not running on any port", - "expected_behavior": "There is no FTP service running on any port" + "expected_behavior": "There is no FTP service running on any port", + "required_result": "Required" }, - "security.services.ssh": { + "security.ssh.version": { "tcp_ports": { "22": { "allowed": true, @@ -39,8 +40,9 @@ "version": "2.0" } }, - "description": "Check TELNET port 23 is disabled and TELNET is not running on any port", - "expected_behavior": "There is no FTP service running on any port" + "description": "If the device is running a SSH server ensure it is SSHv2", + "expected_behavior": "SSH server is not running or server is SSHv2", + "required_result": "Required" }, "security.services.telnet": { "tcp_ports": { @@ -50,7 +52,8 @@ } }, "description": "Check TELNET port 23 is disabled and TELNET is not running on any port", - "expected_behavior": "There is no FTP service running on any port" + 
"expected_behavior": "There is no FTP service running on any port", + "required_result": "Required" }, "security.services.smtp": { "tcp_ports": { @@ -67,8 +70,9 @@ "description": "Simple Mail Transfer Protocol via TLS (SMTPS) Server" } }, - "description": "Check SMTP port 25 is disabled and ports 465 or 587 with SSL encryption are (not?) enabled and SMTP is not running on any port.", - "expected_behavior": "There is no smtp service running on any port" + "description": "Check SMTP ports 25, 465 and 587 are not enabled and SMTP is not running on any port.", + "expected_behavior": "There is no smtp service running on any port", + "required_result": "Required" }, "security.services.http": { "tcp_ports": { @@ -81,7 +85,8 @@ } }, "description": "Check that there is no HTTP server running on any port", - "expected_behavior": "Device is unreachable on port 80 (or any other port) and only responds to HTTPS requests on port 443 (or any other port if HTTP is used at all)" + "expected_behavior": "Device is unreachable on port 80 (or any other port) and only responds to HTTPS requests on port 443 (or any other port if HTTP is used at all)", + "required_result": "Required" }, "security.services.pop": { "tcp_ports": { @@ -91,7 +96,8 @@ } }, "description": "Check POP port 110 is disalbed and POP is not running on any port", - "expected_behavior": "There is no pop service running on any port" + "expected_behavior": "There is no pop service running on any port", + "required_result": "Required" }, "security.services.imap": { "tcp_ports": { @@ -101,7 +107,8 @@ } }, "description": "Check IMAP port 143 is disabled and IMAP is not running on any port", - "expected_behavior": "There is no imap service running on any port" + "expected_behavior": "There is no imap service running on any port", + "required_result": "Required" }, "security.services.snmpv3": { "tcp_ports": { @@ -125,17 +132,8 @@ } }, "description": "Check SNMP port 161/162 is disabled. If SNMP is an essential service, check it supports version 3", - "expected_behavior": "Device is unreachable on port 161 (or any other port) and device is unreachable on port 162 (or any other port) unless SNMP is essential in which case it is SNMPv3 is used." 
- }, - "security.services.https": { - "tcp_ports": { - "80": { - "allowed": false, - "description": "Administrative Secure Web-Server" - } - }, - "description": "Check that if there is a web server running it is running on a secure port.", - "expected_behavior": "Device only responds to HTTPS requests on port 443 (or any other port if HTTP is used at all)" + "expected_behavior": "Device is unreachable on port 161 (or any other port) and device is unreachable on port 162 (or any other port) unless SNMP is essential in which case it is SNMPv3 is used.", + "required_result": "Required" }, "security.services.vnc": { "tcp_ports": { @@ -149,7 +147,8 @@ } }, "description": "Check VNC is disabled on any port", - "expected_behavior": "Device cannot be accessed /connected to via VNc on any port" + "expected_behavior": "Device cannot be accessed /connected to via VNC on any port", + "required_result": "Required" }, "security.services.tftp": { "udp_ports": { @@ -159,9 +158,10 @@ } }, "description": "Check TFTP port 69 is disabled (UDP)", - "expected_behavior": "There is no tftp service running on any port" + "expected_behavior": "There is no tftp service running on any port", + "required_result": "Required" }, - "security.services.ntp": { + "ntp.network.ntp_server": { "udp_ports": { "123": { "allowed": false, @@ -171,7 +171,8 @@ "description": "Check NTP port 123 is disabled and the device is not operating as an NTP server", "expected_behavior": "The device dos not respond to NTP requests when it's IP is set as the NTP server on another device" } - } + }, + "required_result": "Required" } ] } diff --git a/modules/test/nmap/python/src/nmap_module.py b/modules/test/nmap/python/src/nmap_module.py index f998f302a..6bcbd141a 100644 --- a/modules/test/nmap/python/src/nmap_module.py +++ b/modules/test/nmap/python/src/nmap_module.py @@ -40,6 +40,7 @@ def __init__(self, module): def _security_nmap_ports(self, config): LOGGER.info("Running security.nmap.ports test") + result = None # Delete the enabled key from the config if it exists # to prevent it being treated as a test key @@ -74,10 +75,14 @@ def _security_nmap_ports(self, config): LOGGER.info("Unallowed Ports Detected: " + str(self._unallowed_ports)) self._check_unallowed_port(self._unallowed_ports,config) LOGGER.info("Unallowed Ports: " + str(self._unallowed_ports)) - return len(self._unallowed_ports) == 0 + if len(self._unallowed_ports) > 0: + result = False, 'Some allowed ports detected: ' + str(self._unallowed_ports) + else: + result = True, 'No unallowed ports detected' else: LOGGER.info("Device ip address not resolved, skipping") - return None + result = None, "Device ip address not resolved" + return result def _process_port_results(self, tests): scan_results = {} diff --git a/modules/test/nmap/python/src/run.py b/modules/test/nmap/python/src/run.py index 5e33451d9..e68b52525 100644 --- a/modules/test/nmap/python/src/run.py +++ b/modules/test/nmap/python/src/run.py @@ -20,7 +20,7 @@ from nmap_module import NmapModule -LOG_NAME = "nmap_runner" +LOG_NAME = 'nmap_runner' LOGGER = logger.get_logger(LOG_NAME) class NmapModuleRunner: @@ -39,7 +39,7 @@ def __init__(self, module): self._test_module = NmapModule(module) self._test_module.run_tests() - LOGGER.info("nmap test module finished") + LOGGER.info('nmap test module finished') def add_logger(self, module): global LOGGER diff --git a/modules/test/ntp/conf/module_config.json b/modules/test/ntp/conf/module_config.json index 288474868..a1a297f06 100644 --- a/modules/test/ntp/conf/module_config.json +++ 
b/modules/test/ntp/conf/module_config.json @@ -15,12 +15,14 @@ { "name": "ntp.network.ntp_support", "description": "Does the device request network time sync as client as per RFC 5905 - Network Time Protocol Version 4: Protocol and Algorithms Specification", - "expected_behavior": "The device sends an NTPv4 request to the configured NTP server." + "expected_behavior": "The device sends an NTPv4 request to the configured NTP server.", + "required_result": "Required" }, { "name": "ntp.network.ntp_dhcp", "description": "Accept NTP address over DHCP", - "expected_behavior": "Device can accept NTP server address, provided by the DHCP server (DHCP OFFER PACKET)" + "expected_behavior": "Device can accept NTP server address, provided by the DHCP server (DHCP OFFER PACKET)", + "required_result": "Roadmap" } ] } diff --git a/modules/test/ntp/python/src/ntp_module.py b/modules/test/ntp/python/src/ntp_module.py index 4053ce98a..6a577d1a6 100644 --- a/modules/test/ntp/python/src/ntp_module.py +++ b/modules/test/ntp/python/src/ntp_module.py @@ -11,7 +11,6 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. - """NTP test module""" from test_module import TestModule from scapy.all import rdpcap, NTP, IP @@ -22,6 +21,7 @@ MONITOR_CAPTURE_FILE = '/runtime/device/monitor.pcap' LOGGER = None + class NTPModule(TestModule): """NTP Test module""" @@ -35,7 +35,7 @@ def __init__(self, module): def _ntp_network_ntp_support(self): LOGGER.info('Running ntp.network.ntp_support') - + result = None packet_capture = rdpcap(STARTUP_CAPTURE_FILE) + rdpcap(MONITOR_CAPTURE_FILE) device_sends_ntp4 = False @@ -52,28 +52,47 @@ def _ntp_network_ntp_support(self): LOGGER.info(f'Device sent NTPv3 request to {packet[IP].dst}') if not (device_sends_ntp3 or device_sends_ntp4): - LOGGER.info('Device has not sent any NTP requests') - - return device_sends_ntp4 and not device_sends_ntp3 + result = False, 'Device has not sent any NTP requests' + elif device_sends_ntp3 and device_sends_ntp4: + result = False, ('Device sent NTPv3 and NTPv4 packets. ' + + 'NTPv3 is not allowed.') + elif device_sends_ntp3: + result = False, ('Device sent NTPv3 packets. ' + 'NTPv3 is not allowed.') + elif device_sends_ntp4: + result = True, 'Device sent NTPv4 packets.' 
+ LOGGER.info(result[1]) + return result def _ntp_network_ntp_dhcp(self): LOGGER.info('Running ntp.network.ntp_dhcp') - + result = None packet_capture = rdpcap(STARTUP_CAPTURE_FILE) + rdpcap(MONITOR_CAPTURE_FILE) device_sends_ntp = False + ntp_to_local = False + ntp_to_remote = False for packet in packet_capture: - if NTP in packet and packet.src == self._device_mac: device_sends_ntp = True if packet[IP].dst == self._ntp_server: LOGGER.info('Device sent NTP request to DHCP provided NTP server') - return True - - if not device_sends_ntp: - LOGGER.info('Device has not sent any NTP requests') + ntp_to_local = True + else: + LOGGER.info('Device sent NTP request to non-DHCP provided NTP server') + ntp_to_remote = True + + if device_sends_ntp: + if ntp_to_local and ntp_to_remote: + result = False, ('Device sent NTP request to DHCP provided ' + + 'server and non-DHCP provided server') + elif ntp_to_remote: + result = False, 'Device sent NTP request to non-DHCP provided server' + elif ntp_to_local: + result = True, 'Device sent NTP request to DHCP provided server' else: - LOGGER.info('Device has not sent NTP requests to DHCP provided NTP server') + result = False, 'Device has not sent any NTP requests' - return False + LOGGER.info(result[1]) + return result diff --git a/modules/test/tls/bin/check_cert_signature.sh b/modules/test/tls/bin/check_cert_signature.sh new file mode 100644 index 000000000..ebd4a7549 --- /dev/null +++ b/modules/test/tls/bin/check_cert_signature.sh @@ -0,0 +1,11 @@ +#!/bin/bash + +ROOT_CERT=$1 +DEVICE_CERT=$2 + +echo "ROOT: $ROOT_CERT" +echo "DEVICE_CERT: $DEVICE_CERT" + +response=$(openssl verify -CAfile $ROOT_CERT $DEVICE_CERT) + +echo "$response" diff --git a/modules/test/tls/bin/get_ciphers.sh b/modules/test/tls/bin/get_ciphers.sh new file mode 100644 index 000000000..e82bbc180 --- /dev/null +++ b/modules/test/tls/bin/get_ciphers.sh @@ -0,0 +1,10 @@ +#!/bin/bash + +CAPTURE_FILE=$1 +DST_IP=$2 +DST_PORT=$3 + +TSHARK_FILTER="ssl.handshake.ciphersuites and ip.dst==$DST_IP and tcp.dstport==$DST_PORT" +response=$(tshark -r $CAPTURE_FILE -Y "$TSHARK_FILTER" -Vx | grep 'Cipher Suite:' | awk '{$1=$1};1' | sed 's/Cipher Suite: //') + +echo "$response" diff --git a/modules/test/tls/bin/get_client_hello_packets.sh b/modules/test/tls/bin/get_client_hello_packets.sh new file mode 100644 index 000000000..13e42f791 --- /dev/null +++ b/modules/test/tls/bin/get_client_hello_packets.sh @@ -0,0 +1,19 @@ +#!/bin/bash + +CAPTURE_FILE=$1 +SRC_IP=$2 +TLS_VERSION=$3 + +TSHARK_OUTPUT="-T json -e ip.src -e tcp.dstport -e ip.dst" +TSHARK_FILTER="ssl.handshake.type==1 and ip.src==$SRC_IP" + +if [[ $TLS_VERSION == '1.2' || -z $TLS_VERSION ]];then + TSHARK_FILTER=$TSHARK_FILTER "and ssl.handshake.version==0x0303" +elif [ $TLS_VERSION == '1.2' ];then + TSHARK_FILTER=$TSHARK_FILTER "and ssl.handshake.version==0x0304" +fi + +response=$(tshark -r $CAPTURE_FILE $TSHARK_OUTPUT $TSHARK_FILTER) + +echo "$response" + \ No newline at end of file diff --git a/modules/test/tls/bin/get_handshake_complete.sh b/modules/test/tls/bin/get_handshake_complete.sh new file mode 100644 index 000000000..de1eb887d --- /dev/null +++ b/modules/test/tls/bin/get_handshake_complete.sh @@ -0,0 +1,19 @@ +#!/bin/bash + +CAPTURE_FILE=$1 +SRC_IP=$2 +DST_IP=$3 +TLS_VERSION=$4 + +TSHARK_FILTER="ip.src==$SRC_IP and ip.dst==$DST_IP " + +if [[ $TLS_VERSION == '1.2' || -z $TLS_VERSION ]];then + TSHARK_FILTER=$TSHARK_FILTER " and ssl.handshake.type==2 and tls.handshake.type==14 " +elif [ $TLS_VERSION == '1.2' ];then + 
TSHARK_FILTER=$TSHARK_FILTER "and ssl.handshake.type==2 and tls.handshake.extensions.supported_version==0x0304" +fi + +response=$(tshark -r $CAPTURE_FILE $TSHARK_FILTER) + +echo "$response" + \ No newline at end of file diff --git a/modules/test/tls/bin/start_test_module b/modules/test/tls/bin/start_test_module new file mode 100644 index 000000000..d8cede486 --- /dev/null +++ b/modules/test/tls/bin/start_test_module @@ -0,0 +1,56 @@ +#!/bin/bash + +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# An example startup script that does the bare minimum to start +# a test module via a pyhon script. Each test module should include a +# start_test_module file that overwrites this one to boot all of its +# specific requirements to run. + +# Define where the python source files are located +PYTHON_SRC_DIR=/testrun/python/src + +# Fetch module name +MODULE_NAME=$1 + +# Default interface should be veth0 for all containers +DEFAULT_IFACE=veth0 + +# Allow a user to define an interface by passing it into this script +DEFINED_IFACE=$2 + +# Select which interace to use +if [[ -z $DEFINED_IFACE || "$DEFINED_IFACE" == "null" ]] +then + echo "No interface defined, defaulting to veth0" + INTF=$DEFAULT_IFACE +else + INTF=$DEFINED_IFACE +fi + +# Create and set permissions on the log files +LOG_FILE=/runtime/output/$MODULE_NAME.log +RESULT_FILE=/runtime/output/$MODULE_NAME-result.json +touch $LOG_FILE +touch $RESULT_FILE +chown $HOST_USER $LOG_FILE +chown $HOST_USER $RESULT_FILE + +# Run the python scrip that will execute the tests for this module +# -u flag allows python print statements +# to be logged by docker by running unbuffered +python3 -u $PYTHON_SRC_DIR/run.py "-m $MODULE_NAME" + +echo Module has finished \ No newline at end of file diff --git a/modules/test/tls/conf/module_config.json b/modules/test/tls/conf/module_config.json new file mode 100644 index 000000000..7f0305d19 --- /dev/null +++ b/modules/test/tls/conf/module_config.json @@ -0,0 +1,41 @@ +{ + "config": { + "meta": { + "name": "tls", + "display_name": "TLS", + "description": "TLS tests" + }, + "network": true, + "docker": { + "depends_on": "base", + "enable_container": true, + "timeout": 300 + }, + "tests":[ + { + "name": "security.tls.v1_2_server", + "description": "Check the device web server TLS 1.2 & certificate is valid", + "expected_behavior": "TLS 1.2 certificate is issued to the web browser client when accessed", + "required_result": "Required" + }, + { + "name": "security.tls.v1_3_server", + "description": "Check the device web server TLS 1.3 & certificate is valid", + "expected_behavior": "TLS 1.3 certificate is issued to the web browser client when accessed", + "required_result": "Recommended" + }, + { + "name": "security.tls.v1_2_client", + "description": "Device uses TLS with connection to an external service on port 443 (or any other port which could be running the webserver-HTTPS)", + "expected_behavior": "The packet indicates a TLS connection with at least TLS 1.2 and support for ECDH and ECDSA 
ciphers", + "required_result": "Required" + }, + { + "name": "security.tls.v1_3_client", + "description": "Device uses TLS with connection to an external service on port 443 (or any other port which could be running the webserver-HTTPS)", + "expected_behavior": "The packet indicates a TLS connection with at least TLS 1.3", + "required_result": "Recommended" + } + ] + } +} \ No newline at end of file diff --git a/modules/test/tls/python/requirements.txt b/modules/test/tls/python/requirements.txt new file mode 100644 index 000000000..432116ff2 --- /dev/null +++ b/modules/test/tls/python/requirements.txt @@ -0,0 +1,2 @@ +cryptography +pyOpenSSL \ No newline at end of file diff --git a/modules/test/tls/python/src/run.py b/modules/test/tls/python/src/run.py new file mode 100644 index 000000000..51bc82f8f --- /dev/null +++ b/modules/test/tls/python/src/run.py @@ -0,0 +1,68 @@ +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +"""Run Baseline module""" +import argparse +import signal +import sys +import logger + +from tls_module import TLSModule + +LOGGER = logger.get_logger('test_module') +RUNTIME = 1500 + + +class TLSModuleRunner: + """An example runner class for test modules.""" + + def __init__(self, module): + + signal.signal(signal.SIGINT, self._handler) + signal.signal(signal.SIGTERM, self._handler) + signal.signal(signal.SIGABRT, self._handler) + signal.signal(signal.SIGQUIT, self._handler) + + LOGGER.info('Starting TLS Module') + + self._test_module = TLSModule(module) + self._test_module.run_tests() + + def _handler(self, signum): + LOGGER.debug('SigtermEnum: ' + str(signal.SIGTERM)) + LOGGER.debug('Exit signal received: ' + str(signum)) + if signum in (2, signal.SIGTERM): + LOGGER.info('Exit signal received. Stopping test module...') + LOGGER.info('Test module stopped') + sys.exit(1) + + +def run(): + parser = argparse.ArgumentParser( + description='Security Module Help', + formatter_class=argparse.ArgumentDefaultsHelpFormatter) + + parser.add_argument( + '-m', + '--module', + help='Define the module name to be used to create the log file') + + args = parser.parse_args() + + # For some reason passing in the args from bash adds an extra + # space before the argument so we'll just strip out extra space + TLSModuleRunner(args.module.strip()) + + +if __name__ == '__main__': + run() diff --git a/modules/test/tls/python/src/tls_module.py b/modules/test/tls/python/src/tls_module.py new file mode 100644 index 000000000..d58163266 --- /dev/null +++ b/modules/test/tls/python/src/tls_module.py @@ -0,0 +1,108 @@ +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +"""Baseline test module""" +from test_module import TestModule +from tls_util import TLSUtil + +LOG_NAME = 'test_tls' +LOGGER = None +STARTUP_CAPTURE_FILE = '/runtime/device/startup.pcap' +MONITOR_CAPTURE_FILE = '/runtime/device/monitor.pcap' + + +class TLSModule(TestModule): + """An example testing module.""" + + def __init__(self, module): + super().__init__(module_name=module, log_name=LOG_NAME) + global LOGGER + LOGGER = self._get_logger() + self._tls_util = TLSUtil(LOGGER) + + def _security_tls_v1_2_server(self): + LOGGER.info('Running security.tls.v1_2_server') + self._resolve_device_ip() + # If the ipv4 address wasn't resolved yet, try again + if self._device_ipv4_addr is not None: + tls_1_2_results = self._tls_util.validate_tls_server( + self._device_ipv4_addr, tls_version='1.2') + tls_1_3_results = self._tls_util.validate_tls_server( + self._device_ipv4_addr, tls_version='1.3') + return self._tls_util.process_tls_server_results(tls_1_2_results, + tls_1_3_results) + else: + LOGGER.error('Could not resolve device IP address. Skipping') + return None, 'Could not resolve device IP address. Skipping' + + def _security_tls_v1_3_server(self): + LOGGER.info('Running security.tls.v1_3_server') + self._resolve_device_ip() + # If the ipv4 address wasn't resolved yet, try again + if self._device_ipv4_addr is not None: + return self._tls_util.validate_tls_server(self._device_ipv4_addr, + tls_version='1.3') + else: + LOGGER.error('Could not resolve device IP address. Skipping') + return None, 'Could not resolve device IP address. Skipping' + + def _security_tls_v1_2_client(self): + LOGGER.info('Running security.tls.v1_2_client') + self._resolve_device_ip() + # If the ipv4 address wasn't resolved yet, try again + if self._device_ipv4_addr is not None: + return self._validate_tls_client(self._device_ipv4_addr, '1.2') + else: + LOGGER.error('Could not resolve device IP address. Skipping') + return None, 'Could not resolve device IP address. Skipping' + + def _security_tls_v1_3_client(self): + LOGGER.info('Running security.tls.v1_3_client') + self._resolve_device_ip() + # If the ipv4 address wasn't resolved yet, try again + if self._device_ipv4_addr is not None: + return self._validate_tls_client(self._device_ipv4_addr, '1.3') + else: + LOGGER.error('Could not resolve device IP address. Skipping') + return None, 'Could not resolve device IP address. 
Skipping' + + def _validate_tls_client(self, client_ip, tls_version): + monitor_result = self._tls_util.validate_tls_client( + client_ip=client_ip, + tls_version=tls_version, + capture_file=MONITOR_CAPTURE_FILE) + startup_result = self._tls_util.validate_tls_client( + client_ip=client_ip, + tls_version=tls_version, + capture_file=STARTUP_CAPTURE_FILE) + + LOGGER.info('Montor: ' + str(monitor_result)) + LOGGER.info('Startup: ' + str(startup_result)) + + if (not monitor_result[0] and monitor_result[0] is not None) or ( + not startup_result[0] and startup_result[0] is not None): + result = False, startup_result[1] + monitor_result[1] + elif monitor_result[0] and startup_result[0]: + result = True, startup_result[1] + monitor_result[1] + elif monitor_result[0] and startup_result[0] is None: + result = True, monitor_result[1] + elif startup_result[0] and monitor_result[0] is None: + result = True, monitor_result[1] + else: + result = None, startup_result[1] + return result + + def _resolve_device_ip(self): + # If the ipv4 address wasn't resolved yet, try again + if self._device_ipv4_addr is None: + self._device_ipv4_addr = self._get_device_ipv4() diff --git a/modules/test/tls/python/src/tls_module_test.py b/modules/test/tls/python/src/tls_module_test.py new file mode 100644 index 000000000..099956f4e --- /dev/null +++ b/modules/test/tls/python/src/tls_module_test.py @@ -0,0 +1,285 @@ +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+"""Module run all the TLS related unit tests""" +from tls_util import TLSUtil +import unittest +from common import logger +from scapy.all import sniff, wrpcap +import os +import threading +import time +import netifaces +import ssl +import http.client + +CAPTURE_DIR = 'testing/unit_test/temp' +MODULE_NAME = 'tls_module_test' +TLS_UTIL = None +PACKET_CAPTURE = None + + +class TLSModuleTest(unittest.TestCase): + """Contains and runs all the unit tests concerning TLS behaviors""" + + @classmethod + def setUpClass(cls): + log = logger.get_logger(MODULE_NAME) + global TLS_UTIL + TLS_UTIL = TLSUtil(log, + bin_dir='modules/test/tls/bin', + cert_out_dir='testing/unit_test/temp', + root_certs_dir='local/root_certs') + + # Test 1.2 server when only 1.2 connection is established + def security_tls_v1_2_server_test(self): + tls_1_2_results = TLS_UTIL.validate_tls_server('google.com', + tls_version='1.2') + tls_1_3_results = None, 'No TLS 1.3' + test_results = TLS_UTIL.process_tls_server_results(tls_1_2_results, + tls_1_3_results) + self.assertTrue(test_results[0]) + + # Test 1.2 server when 1.3 connection is established + def security_tls_v1_2_for_1_3_server_test(self): + tls_1_2_results = None, 'No TLS 1.2' + tls_1_3_results = TLS_UTIL.validate_tls_server('google.com', + tls_version='1.3') + test_results = TLS_UTIL.process_tls_server_results(tls_1_2_results, + tls_1_3_results) + self.assertTrue(test_results[0]) + + # Test 1.2 server when 1.2 and 1.3 connection is established + def security_tls_v1_2_for_1_2_and_1_3_server_test(self): + tls_1_2_results = TLS_UTIL.validate_tls_server('google.com', + tls_version='1.2') + tls_1_3_results = TLS_UTIL.validate_tls_server('google.com', + tls_version='1.3') + test_results = TLS_UTIL.process_tls_server_results(tls_1_2_results, + tls_1_3_results) + self.assertTrue(test_results[0]) + + # Test 1.2 server when 1.2 and failed 1.3 connection is established + def security_tls_v1_2_for_1_2_and_1_3_fail_server_test(self): + tls_1_2_results = TLS_UTIL.validate_tls_server('google.com', + tls_version='1.2') + tls_1_3_results = False, 'Signature faild' + test_results = TLS_UTIL.process_tls_server_results(tls_1_2_results, + tls_1_3_results) + self.assertTrue(test_results[0]) + + # Test 1.2 server when 1.3 and failed 1.2 connection is established + def security_tls_v1_2_for_1_3_and_1_2_fail_server_test(self): + tls_1_3_results = TLS_UTIL.validate_tls_server('google.com', + tls_version='1.3') + tls_1_2_results = False, 'Signature faild' + test_results = TLS_UTIL.process_tls_server_results(tls_1_2_results, + tls_1_3_results) + self.assertTrue(test_results[0]) + + # Test 1.2 server when 1.3 and 1.2 failed connection is established + def security_tls_v1_2_fail_server_test(self): + tls_1_2_results = False, 'Signature faild' + tls_1_3_results = False, 'Signature faild' + test_results = TLS_UTIL.process_tls_server_results(tls_1_2_results, + tls_1_3_results) + self.assertFalse(test_results[0]) + + # Test 1.2 server when 1.3 and 1.2 failed connection is established + def security_tls_v1_2_none_server_test(self): + tls_1_2_results = None, 'No cert' + tls_1_3_results = None, 'No cert' + test_results = TLS_UTIL.process_tls_server_results(tls_1_2_results, + tls_1_3_results) + self.assertIsNone(test_results[0]) + + def security_tls_v1_3_server_test(self): + test_results = TLS_UTIL.validate_tls_server('google.com', tls_version='1.3') + self.assertTrue(test_results[0]) + + def security_tls_v1_2_client_test(self): + test_results = self.test_client_tls('1.2') + print(str(test_results)) + 
self.assertTrue(test_results[0]) + + def security_tls_v1_2_client_cipher_fail_test(self): + test_results = self.test_client_tls('1.2', disable_valid_ciphers=True) + print(str(test_results)) + self.assertFalse(test_results[0]) + + def security_tls_client_skip_test(self): + # 1.1 will fail to connect and so no hello client will exist + # which should result in a skip result + test_results = self.test_client_tls('1.2', tls_generate='1.1') + print(str(test_results)) + self.assertIsNone(test_results[0]) + + def security_tls_v1_3_client_test(self): + test_results = self.test_client_tls('1.3') + print(str(test_results)) + self.assertTrue(test_results[0]) + + def client_hello_packets_test(self): + packet_fail = { + 'dst_ip': '10.10.10.1', + 'src_ip': '10.10.10.14', + 'dst_port': '443', + 'cipher_support': { + 'ecdh': False, + 'ecdsa': True + } + } + packet_success = { + 'dst_ip': '10.10.10.1', + 'src_ip': '10.10.10.14', + 'dst_port': '443', + 'cipher_support': { + 'ecdh': True, + 'ecdsa': True + } + } + hello_packets = [packet_fail, packet_success] + hello_results = TLS_UTIL.process_hello_packets(hello_packets, '1.2') + print('Hello packets test results: ' + str(hello_results)) + expected = {'valid': [packet_success], 'invalid': []} + self.assertEqual(hello_results, expected) + + def test_client_tls(self, + tls_version, + tls_generate=None, + disable_valid_ciphers=False): + # Make the capture file + os.makedirs(CAPTURE_DIR, exist_ok=True) + capture_file = CAPTURE_DIR + '/client_tls.pcap' + + # Resolve the client ip used + client_ip = self.get_interface_ip('eth0') + + # Genrate TLS outbound traffic + if tls_generate is None: + tls_generate = tls_version + self.generate_tls_traffic(capture_file, tls_generate, disable_valid_ciphers) + + # Run the client test + return TLS_UTIL.validate_tls_client(client_ip=client_ip, + tls_version=tls_version, + capture_file=capture_file) + + def generate_tls_traffic(self, + capture_file, + tls_version, + disable_valid_ciphers=False): + capture_thread = self.start_capture_thread(10) + print('Capture Started') + + # Generate some TLS 1.2 outbound traffic + while capture_thread.is_alive(): + self.make_tls_connection('www.google.com', 443, tls_version, + disable_valid_ciphers) + time.sleep(1) + + # Save the captured packets to the file. 
+ wrpcap(capture_file, PACKET_CAPTURE) + + def make_tls_connection(self, + hostname, + port, + tls_version, + disable_valid_ciphers=False): + # Create the SSL context with the desired TLS version and options + context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) + context.check_hostname = False + context.verify_mode = ssl.CERT_NONE + context.options |= ssl.PROTOCOL_TLS + + if disable_valid_ciphers: + # Create a list of ciphers that do not use ECDH or ECDSA + ciphers_str = [ + 'TLS_AES_256_GCM_SHA384', 'TLS_CHACHA20_POLY1305_SHA256', + 'TLS_AES_128_GCM_SHA256', 'AES256-GCM-SHA384', + 'PSK-AES256-GCM-SHA384', 'PSK-CHACHA20-POLY1305', + 'RSA-PSK-AES128-GCM-SHA256', 'DHE-PSK-AES128-GCM-SHA256', + 'AES128-GCM-SHA256', 'PSK-AES128-GCM-SHA256', 'AES256-SHA256', + 'AES128-SHA' + ] + context.set_ciphers(':'.join(ciphers_str)) + + if tls_version != '1.1': + context.options |= ssl.OP_NO_TLSv1 # Disable TLS 1.0 + context.options |= ssl.OP_NO_TLSv1_1 # Disable TLS 1.1 + else: + context.options |= ssl.OP_NO_TLSv1_2 # Disable TLS 1.2 + context.options |= ssl.OP_NO_TLSv1_3 # Disable TLS 1.3 + + if tls_version == '1.3': + context.options |= ssl.OP_NO_TLSv1_2 # Disable TLS 1.2 + elif tls_version == '1.2': + context.options |= ssl.OP_NO_TLSv1_3 # Disable TLS 1.3 + + # Create the HTTPS connection with the SSL context + connection = http.client.HTTPSConnection(hostname, port, context=context) + + # Perform the TLS handshake manually + try: + connection.connect() + except ssl.SSLError as e: + print('Failed to make connection: ' + str(e)) + + # At this point, the TLS handshake is complete. + # You can do any further processing or just close the connection. + connection.close() + + def start_capture(self, timeout): + global PACKET_CAPTURE + PACKET_CAPTURE = sniff(iface='eth0', timeout=timeout) + + def start_capture_thread(self, timeout): + # Start the packet capture in a separate thread to avoid blocking. 
+ capture_thread = threading.Thread(target=self.start_capture, + args=(timeout, )) + capture_thread.start() + + return capture_thread + + def get_interface_ip(self, interface_name): + try: + addresses = netifaces.ifaddresses(interface_name) + ipv4 = addresses[netifaces.AF_INET][0]['addr'] + return ipv4 + except (ValueError, KeyError) as e: + print(f'Error: {e}') + return None + + +if __name__ == '__main__': + suite = unittest.TestSuite() + suite.addTest(TLSModuleTest('client_hello_packets_test')) + # TLS 1.2 server tests + suite.addTest(TLSModuleTest('security_tls_v1_2_server_test')) + suite.addTest(TLSModuleTest('security_tls_v1_2_for_1_3_server_test')) + suite.addTest(TLSModuleTest('security_tls_v1_2_for_1_2_and_1_3_server_test')) + suite.addTest( + TLSModuleTest('security_tls_v1_2_for_1_2_and_1_3_fail_server_test')) + suite.addTest( + TLSModuleTest('security_tls_v1_2_for_1_3_and_1_2_fail_server_test')) + suite.addTest(TLSModuleTest('security_tls_v1_2_fail_server_test')) + suite.addTest(TLSModuleTest('security_tls_v1_2_none_server_test')) + # # TLS 1.3 server tests + suite.addTest(TLSModuleTest('security_tls_v1_3_server_test')) + # TLS client tests + suite.addTest(TLSModuleTest('security_tls_v1_2_client_test')) + suite.addTest(TLSModuleTest('security_tls_v1_3_client_test')) + suite.addTest(TLSModuleTest('security_tls_client_skip_test')) + suite.addTest(TLSModuleTest('security_tls_v1_2_client_cipher_fail_test')) + runner = unittest.TextTestRunner() + runner.run(suite) diff --git a/modules/test/tls/python/src/tls_util.py b/modules/test/tls/python/src/tls_util.py new file mode 100644 index 000000000..c83c131af --- /dev/null +++ b/modules/test/tls/python/src/tls_util.py @@ -0,0 +1,393 @@ +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+"""Module that contains various metehods for validating TLS communications""" +import ssl +import socket +from datetime import datetime +from OpenSSL import crypto +import json +import os +from common import util + +LOG_NAME = 'tls_util' +LOGGER = None +DEFAULT_BIN_DIR = '/testrun/bin' +DEFAULT_CERTS_OUT_DIR = '/runtime/output' +DEFAULT_ROOT_CERTS_DIR = '/testrun/root_certs' + + +class TLSUtil(): + """Helper class for various tests concerning TLS communications""" + + def __init__(self, + logger, + bin_dir=DEFAULT_BIN_DIR, + cert_out_dir=DEFAULT_CERTS_OUT_DIR, + root_certs_dir=DEFAULT_ROOT_CERTS_DIR): + global LOGGER + LOGGER = logger + self._bin_dir = bin_dir + self._dev_cert_file = cert_out_dir + '/device_cert.crt' + self._root_certs_dir = root_certs_dir + + def get_public_certificate(self, + host, + port=443, + validate_cert=False, + tls_version='1.2'): + try: + #context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) + context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) + context.check_hostname = False + if not validate_cert: + # Disable certificate verification + context.verify_mode = ssl.CERT_NONE + else: + # Use host CA certs for validation + context.load_default_certs() + context.verify_mode = ssl.CERT_REQUIRED + + # Set the correct TLS version + context.options |= ssl.PROTOCOL_TLS + context.options |= ssl.OP_NO_TLSv1 # Disable TLS 1.0 + context.options |= ssl.OP_NO_TLSv1_1 # Disable TLS 1.1 + if tls_version == '1.3': + context.options |= ssl.OP_NO_TLSv1_2 # Disable TLS 1.2 + elif tls_version == '1.2': + context.options |= ssl.OP_NO_TLSv1_3 # Disable TLS 1.3 + + # Create an SSL/TLS socket + with socket.create_connection((host, port), timeout=5) as sock: + with context.wrap_socket(sock, server_hostname=host) as secure_sock: + # Get the server's certificate in PEM format + cert_pem = ssl.DER_cert_to_PEM_cert(secure_sock.getpeercert(True)) + + except ConnectionRefusedError: + LOGGER.info(f'Connection to {host}:{port} was refused.') + return None + except socket.gaierror: + LOGGER.info(f'Failed to resolve the hostname {host}.') + return None + except ssl.SSLError as e: + LOGGER.info(f'SSL error occurred: {e}') + return None + + return cert_pem + + def get_public_key(self, public_cert): + # Extract and return the public key from the certificate + public_key = public_cert.get_pubkey() + return public_key + + def verify_certificate_timerange(self, public_cert): + # Extract the notBefore and notAfter dates from the certificate + not_before = datetime.strptime(public_cert.get_notBefore().decode(), + '%Y%m%d%H%M%SZ') + not_after = datetime.strptime(public_cert.get_notAfter().decode(), + '%Y%m%d%H%M%SZ') + + LOGGER.info('Certificate valid from: ' + str(not_before) + ' To ' + + str(not_after)) + + # Get the current date + current_date = datetime.utcnow() + + # Check if today's date is within the certificate's validity range + if not_before <= current_date <= not_after: + return True, 'Certificate has a valid time range' + elif current_date <= not_before: + return False, 'Certificate is not yet valid' + else: + return False, 'Certificate has expired' + + def verify_public_key(self, public_key): + + # Get the key length based bits + key_length = public_key.bits() + LOGGER.info('Key Length: ' + str(key_length)) + + # Check the key type + key_type = 'Unknown' + if public_key.type() == crypto.TYPE_RSA: + key_type = 'RSA' + elif public_key.type() == crypto.TYPE_EC: + key_type = 'EC' + elif public_key.type() == crypto.TYPE_DSA: + key_type = 'DSA' + elif public_key.type() == crypto.TYPE_DH: + 
key_type = 'Diffie-Hellman' + LOGGER.info('Key Type: ' + key_type) + + # Check if the public key is of RSA type + if key_type == 'RSA': + if key_length >= 2048: + return True, 'RSA key length passed: ' + str(key_length) + ' >= 2048' + else: + return False, 'RSA key length too short: ' + str(key_length) + ' < 2048' + + # Check if the public key is of EC type + elif key_type == 'EC': + if key_length >= 224: + return True, 'EC key length passed: ' + str(key_length) + ' >= 224' + else: + return False, 'EC key length too short: ' + str(key_length) + ' < 224' + else: + return False, 'Key is not RSA or EC type' + + def validate_signature(self, host): + # Reconnect to the device but with validate signature option + # set to true which will check for proper cert chains + # within the valid CA root certs stored on the server + LOGGER.info( + 'Checking for valid signature from authorized Certificate Authorities') + public_cert = self.get_public_certificate(host, + validate_cert=True, + tls_version='1.2') + if public_cert: + LOGGER.info('Authorized Certificate Authority signature confirmed') + return True, 'Authorized Certificate Authority signature confirmed' + else: + LOGGER.info('Authorized Certificate Authority signature not present') + LOGGER.info('Resolving configured root certificates') + bin_file = self._bin_dir + '/check_cert_signature.sh' + # Get a list of all root certificates + root_certs = os.listdir(self._root_certs_dir) + LOGGER.info('Root Certs Found: ' + str(len(root_certs))) + for root_cert in root_certs: + try: + # Create the file path + root_cert_path = os.path.join(self._root_certs_dir, root_cert) + LOGGER.info('Checking root cert: ' + str(root_cert_path)) + args = f'{root_cert_path} {self._dev_cert_file}' + command = f'{bin_file} {args}' + response = util.run_command(command) + if 'device_cert.crt: OK' in str(response): + LOGGER.info('Device signed by cert:' + root_cert) + return True, 'Device signed by cert:' + root_cert + else: + LOGGER.info('Device not signed by cert: ' + root_cert) + except Exception as e: # pylint: disable=W0718 + LOGGER.error('Failed to check cert:' + root_cert) + LOGGER.error(str(e)) + return False, 'Device certificate has not been signed' + + def process_tls_server_results(self, tls_1_2_results, tls_1_3_results): + results = '' + if tls_1_2_results[0] is None and tls_1_3_results[0]: + results = True, 'TLS 1.3 validated:\n' + tls_1_3_results[1] + elif tls_1_3_results[0] is None and tls_1_2_results[0]: + results = True, 'TLS 1.2 validated:\n' + tls_1_2_results[1] + elif tls_1_2_results[0] and tls_1_3_results[0]: + description = 'TLS 1.2 validated:\n' + tls_1_2_results[1] + description += '\nTLS 1.3 validated:\n' + tls_1_3_results[1] + results = True, description + elif tls_1_2_results[0] and not tls_1_3_results[0]: + description = 'TLS 1.2 validated:\n' + tls_1_2_results[1] + description += '\nTLS 1.3 not validated:\n' + tls_1_3_results[1] + results = True, description + elif tls_1_3_results[0] and not tls_1_2_results[0]: + description = 'TLS 1.2 not validated:\n' + tls_1_2_results[1] + description += '\nTLS 1.3 validated:\n' + tls_1_3_results[1] + results = True, description + elif not tls_1_3_results[0] and not tls_1_2_results[0] and tls_1_2_results[ + 0] is not None and tls_1_3_results is not None: + description = 'TLS 1.2 not validated:\n' + tls_1_2_results[1] + description += '\nTLS 1.3 not validated:\n' + tls_1_3_results[1] + results = False, description + else: + description = 'TLS 1.2 not validated:\n' + tls_1_2_results[1] + description += '\nTLS 
1.3 not validated:\n' + tls_1_3_results[1] + results = None, description + LOGGER.info('TLS 1.2 server test results: ' + str(results)) + return results + + def validate_tls_server(self, host, tls_version): + cert_pem = self.get_public_certificate(host, + validate_cert=False, + tls_version=tls_version) + if cert_pem: + + # Write pem encoding to a file + self.write_cert_to_file(cert_pem) + + # Load pem encoding into a certifiate so we can process the contents + public_cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem) + + # Print the certificate information + cert_text = crypto.dump_certificate(crypto.FILETYPE_TEXT, + public_cert).decode() + LOGGER.info('Device Certificate:\n' + cert_text) + + # Validate the certificates time range + tr_valid = self.verify_certificate_timerange(public_cert) + + # Resolve the public key + public_key = self.get_public_key(public_cert) + if public_key: + key_valid = self.verify_public_key(public_key) + + sig_valid = self.validate_signature(host) + + # Check results + cert_valid = tr_valid[0] and key_valid[0] and sig_valid[0] + test_details = tr_valid[1] + '\n' + key_valid[1] + '\n' + sig_valid[1] + LOGGER.info('Certificate validated: ' + str(cert_valid)) + LOGGER.info('Test Details:\n' + test_details) + return cert_valid, test_details + else: + LOGGER.info('Failed to resolve public certificate') + return None, 'Failed to resolve public certificate' + + def write_cert_to_file(self, pem_cert): + with open(self._dev_cert_file, 'w', encoding='UTF-8') as f: + f.write(pem_cert) + + def get_ciphers(self, capture_file, dst_ip, dst_port): + bin_file = self._bin_dir + '/get_ciphers.sh' + args = f'{capture_file} {dst_ip} {dst_port}' + command = f'{bin_file} {args}' + response = util.run_command(command) + ciphers = response[0].split('\n') + return ciphers + + def get_hello_packets(self, capture_file, src_ip, tls_version): + bin_file = self._bin_dir + '/get_client_hello_packets.sh' + args = f'{capture_file} {src_ip} {tls_version}' + command = f'{bin_file} {args}' + response = util.run_command(command) + packets = response[0].strip() + return self.parse_hello_packets(json.loads(packets), capture_file) + + def get_handshake_complete(self, capture_file, src_ip, dst_ip, tls_version): + bin_file = self._bin_dir + '/get_handshake_complete.sh' + args = f'{capture_file} {src_ip} {dst_ip} {tls_version}' + command = f'{bin_file} {args}' + response = util.run_command(command) + return response + + def parse_hello_packets(self, packets, capture_file): + hello_packets = [] + for packet in packets: + # Extract all the basic IP information about the packet + packet_layers = packet['_source']['layers'] + dst_ip = packet_layers['ip.dst'][0] if 'ip.dst' in packet_layers else '' + src_ip = packet_layers['ip.src'][0] if 'ip.src' in packet_layers else '' + dst_port = packet_layers['tcp.dstport'][ + 0] if 'tcp.dstport' in packet_layers else '' + + # Resolve the ciphers used in this packet and validate expected ones exist + ciphers = self.get_ciphers(capture_file, dst_ip, dst_port) + cipher_support = self.is_ecdh_and_ecdsa(ciphers) + + # Put result together + hello_packet = {} + hello_packet['dst_ip'] = dst_ip + hello_packet['src_ip'] = src_ip + hello_packet['dst_port'] = dst_port + hello_packet['cipher_support'] = cipher_support + + hello_packets.append(hello_packet) + return hello_packets + + def process_hello_packets(self,hello_packets, tls_version = '1.2'): + # Validate the ciphers only for tls 1.2 + client_hello_results = {'valid': [], 'invalid': []} + if tls_version == '1.2': + 
for packet in hello_packets: + if packet['dst_ip'] not in str(client_hello_results['valid']): + LOGGER.info('Checking client ciphers: ' + str(packet)) + if packet['cipher_support']['ecdh'] and packet['cipher_support'][ + 'ecdsa']: + LOGGER.info('Valid ciphers detected') + client_hello_results['valid'].append(packet) + # If a previous hello packet to the same destination failed, + # we can now remove it as it has passed on a different attempt + if packet['dst_ip'] in str(client_hello_results['invalid']): + LOGGER.info(str(client_hello_results['invalid'])) + for invalid_packet in client_hello_results['invalid']: + if packet['dst_ip'] in str(invalid_packet): + client_hello_results['invalid'].remove(invalid_packet) + else: + LOGGER.info('Invalid ciphers detected') + if packet['dst_ip'] not in str(client_hello_results['invalid']): + client_hello_results['invalid'].append(packet) + else: + # No cipher check for TLS 1.3 + client_hello_results['valid'] = hello_packets + return client_hello_results + + def validate_tls_client(self, client_ip, tls_version, capture_file): + LOGGER.info('Validating client for TLS: ' + tls_version) + hello_packets = self.get_hello_packets(capture_file, client_ip, tls_version) + client_hello_results = self.process_hello_packets(hello_packets,tls_version) + + handshakes = {'complete': [], 'incomplete': []} + for packet in client_hello_results['valid']: + # Filter out already tested IP's since only 1 handshake success is needed + if not packet['dst_ip'] in handshakes['complete'] and not packet[ + 'dst_ip'] in handshakes['incomplete']: + handshake_complete = self.get_handshake_complete( + capture_file, packet['src_ip'], packet['dst_ip'], tls_version) + + # One of the responses will be a complaint about running as root so + # we have to have at least 2 entries to consider a completed handshake + if len(handshake_complete) > 1: + LOGGER.info('TLS handshake completed from: ' + packet['dst_ip']) + handshakes['complete'].append(packet['dst_ip']) + else: + LOGGER.warning('No TLS handshakes completed from: ' + + packet['dst_ip']) + handshakes['incomplete'].append(packet['dst_ip']) + + for handshake in handshakes['complete']: + LOGGER.info('Valid TLS client connection to server: ' + str(handshake)) + + # Process and return the results + tls_client_details = '' + tls_client_valid = None + if len(hello_packets) > 0: + if len(client_hello_results['invalid']) > 0: + tls_client_valid = False + for result in client_hello_results['invalid']: + tls_client_details += 'Client hello packet to ' + result[ + 'dst_ip'] + ' did not have expected ciphers:' + if not result['cipher_support']['ecdh']: + tls_client_details += ' ecdh ' + if not result['cipher_support']['ecdsa']: + tls_client_details += 'ecdsa' + tls_client_details += '\n' + if len(handshakes['incomplete']) > 0: + for result in handshakes['incomplete']: + tls_client_details += 'Incomplete handshake detected from server: ' + tls_client_details += result + '\n' + if len(handshakes['complete']) > 0: + # If we haven't already failed the test from previous checks + # allow a passing result + if tls_client_valid is None: + tls_client_valid = True + for result in handshakes['complete']: + tls_client_details += 'Completed handshake detected from server: ' + tls_client_details += result + '\n' + else: + LOGGER.info('No client hello packets detected. Skipping') + tls_client_details = 'No client hello packets detected. 
Skipping' + return tls_client_valid, tls_client_details + + def is_ecdh_and_ecdsa(self, ciphers): + ecdh = False + ecdsa = False + for cipher in ciphers: + ecdh |= 'ECDH' in cipher + ecdsa |= 'ECDSA' in cipher + return {'ecdh': ecdh, 'ecdsa': ecdsa} diff --git a/modules/test/tls/tls.Dockerfile b/modules/test/tls/tls.Dockerfile new file mode 100644 index 000000000..92fa6028c --- /dev/null +++ b/modules/test/tls/tls.Dockerfile @@ -0,0 +1,48 @@ +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Image name: test-run/tls-test +FROM test-run/base-test:latest + +# Set DEBIAN_FRONTEND to noninteractive mode +ENV DEBIAN_FRONTEND=noninteractive + +# Install required software +RUN apt-get update && apt-get install -y tshark + +ARG MODULE_NAME=tls +ARG MODULE_DIR=modules/test/$MODULE_NAME +ARG CERTS_DIR=local/root_certs + +# Copy over all configuration files +COPY $MODULE_DIR/conf /testrun/conf + +# Copy over all binary files +COPY $MODULE_DIR/bin /testrun/bin + +# Copy over all python files +COPY $MODULE_DIR/python /testrun/python + +#Install all python requirements for the module +RUN pip3 install -r /testrun/python/requirements.txt + +# Create a directory inside the container to store the root certificates +RUN mkdir -p /testrun/root_certs + +# Copy over all the local certificates for device signature +# checks if the folder exists +COPY $CERTS_DIR /testrun/root_certs + + + diff --git a/modules/ui/conf/nginx.conf b/modules/ui/conf/nginx.conf new file mode 100644 index 000000000..ade6ad17a --- /dev/null +++ b/modules/ui/conf/nginx.conf @@ -0,0 +1,13 @@ +events{} +http { + include /etc/nginx/mime.types; + server { + listen 80; + server_name localhost; + root /usr/share/nginx/html; + index index.html; + location / { + try_files $uri $uri/ /index.html; + } + } +} \ No newline at end of file diff --git a/modules/ui/ui.Dockerfile b/modules/ui/ui.Dockerfile new file mode 100644 index 000000000..f65f4c48b --- /dev/null +++ b/modules/ui/ui.Dockerfile @@ -0,0 +1,19 @@ +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +# Image name: test-run/ui +FROM nginx:1.25.1 + +COPY modules/ui/conf/nginx.conf /etc/nginx/nginx.conf +COPY ui /usr/share/nginx/html \ No newline at end of file diff --git a/resources/devices/template/device_config.json b/resources/devices/template/device_config.json index 1e92de25d..ac8ff197c 100644 --- a/resources/devices/template/device_config.json +++ b/resources/devices/template/device_config.json @@ -2,178 +2,22 @@ "manufacturer": "Manufacturer X", "model": "Device X", "mac_addr": "aa:bb:cc:dd:ee:ff", + "max_device_tests":5, "test_modules": { "dns": { - "enabled": true, - "tests": { - "dns.network.from_device": { - "enabled": true - }, - "dns.network.from_dhcp": { - "enabled": true - } - } + "enabled": true }, "connection": { - "enabled": true, - "tests": { - "connection.mac_address": { - "enabled": true - }, - "connection.mac_oui": { - "enabled": true - }, - "connection.target_ping": { - "enabled": true - } - , - "connection.single_ip": { - "enabled": true - } - } + "enabled": true }, "ntp": { - "enabled": true, - "tests": { - "ntp.network.ntp_support": { - "enabled": true - }, - "ntp.network.ntp_dhcp": { - "enabled": true - } - } + "enabled": true }, "baseline": { - "enabled": false, - "tests": { - "baseline.non-compliant": { - "enabled": true - }, - "baseline.pass": { - "enabled": true - }, - "baseline.skip": { - "enabled": true - } - } + "enabled": false }, "nmap": { - "enabled": true, - "tests": { - "security.nmap.ports": { - "enabled": true, - "security.services.ftp": { - "tcp_ports": { - "20": { - "allowed": false - }, - "21": { - "allowed": false - } - } - }, - "security.services.ssh": { - "tcp_ports": { - "22": { - "allowed": true - } - } - }, - "security.services.telnet": { - "tcp_ports": { - "23": { - "allowed": false - } - } - }, - "security.services.smtp": { - "tcp_ports": { - "25": { - "allowed": false - }, - "465": { - "allowed": false - }, - "587": { - "allowed": false - } - } - }, - "security.services.http": { - "tcp_ports": { - "80": { - "allowed": false - }, - "443": { - "allowed": true - } - } - }, - "security.services.pop": { - "tcp_ports": { - "110": { - "allowed": false - } - } - }, - "security.services.imap": { - "tcp_ports": { - "143": { - "allowed": false - } - } - }, - "security.services.snmpv3": { - "tcp_ports": { - "161": { - "allowed": false - }, - "162": { - "allowed": false - } - }, - "udp_ports": { - "161": { - "allowed": false - }, - "162": { - "allowed": false - } - } - }, - "security.services.https": { - "tcp_ports": { - "80": { - "allowed": false - } - } - }, - "security.services.vnc": { - "tcp_ports": { - "5500": { - "allowed": false - }, - "5800": { - "allowed": false - } - } - }, - "security.services.tftp": { - "udp_ports": { - "69": { - "allowed": false - } - } - }, - "security.services.ntp": { - "udp_ports": { - "123": { - "allowed": false - } - } - } - } - } + "enabled": true } } } diff --git a/testing/test_baseline b/testing/baseline/test_baseline similarity index 95% rename from testing/test_baseline rename to testing/baseline/test_baseline index 2b95ded23..61d0f9b56 100755 --- a/testing/test_baseline +++ b/testing/baseline/test_baseline @@ -48,7 +48,7 @@ EOF sudo cmd/install -sudo cmd/start --single-intf > $TESTRUN_OUT 2>&1 & +sudo bin/testrun --single-intf --no-ui > $TESTRUN_OUT 2>&1 & TPID=$! # Time to wait for testrun to be ready @@ -80,6 +80,6 @@ echo "Done baseline test" more $TESTRUN_OUT -pytest testing/test_baseline.py +pytest testing/baseline/test_baseline.py exit $? 
\ No newline at end of file diff --git a/testing/test_baseline.py b/testing/baseline/test_baseline.py similarity index 100% rename from testing/test_baseline.py rename to testing/baseline/test_baseline.py diff --git a/testing/device_configs/tester1/device_config.json b/testing/device_configs/tester1/device_config.json new file mode 100644 index 000000000..268399b72 --- /dev/null +++ b/testing/device_configs/tester1/device_config.json @@ -0,0 +1,22 @@ +{ + "manufacturer": "Google", + "model": "Tester 1", + "mac_addr": "02:42:aa:00:00:01", + "test_modules": { + "dns": { + "enabled": false + }, + "connection": { + "enabled": false + }, + "ntp": { + "enabled": false + }, + "baseline": { + "enabled": false + }, + "nmap": { + "enabled": true + } + } +} diff --git a/testing/device_configs/tester2/device_config.json b/testing/device_configs/tester2/device_config.json new file mode 100644 index 000000000..8b090d80a --- /dev/null +++ b/testing/device_configs/tester2/device_config.json @@ -0,0 +1,22 @@ +{ + "manufacturer": "Google", + "model": "Tester 2", + "mac_addr": "02:42:aa:00:00:02", + "test_modules": { + "dns": { + "enabled": false + }, + "connection": { + "enabled": false + }, + "ntp": { + "enabled": true + }, + "baseline": { + "enabled": false + }, + "nmap": { + "enabled": true + } + } +} diff --git a/testing/docker/ci_test_device1/Dockerfile b/testing/docker/ci_test_device1/Dockerfile index 0bb697509..a362e2a4d 100644 --- a/testing/docker/ci_test_device1/Dockerfile +++ b/testing/docker/ci_test_device1/Dockerfile @@ -1,10 +1,12 @@ FROM ubuntu:jammy -#Update and get all additional requirements not contained in the base image +ENV DEBIAN_FRONTEND=noninteractive + +# Update and get all additional requirements not contained in the base image RUN apt-get update && apt-get -y upgrade -RUN apt-get update && apt-get install -y isc-dhcp-client ntpdate coreutils moreutils inetutils-ping curl jq dnsutils openssl netcat-openbsd +RUN apt-get update && apt-get install -y isc-dhcp-client ntpdate coreutils moreutils inetutils-ping curl jq dnsutils openssl netcat-openbsd COPY entrypoint.sh /entrypoint.sh diff --git a/testing/docker/ci_test_device1/entrypoint.sh b/testing/docker/ci_test_device1/entrypoint.sh index 8113704be..9152af0c8 100755 --- a/testing/docker/ci_test_device1/entrypoint.sh +++ b/testing/docker/ci_test_device1/entrypoint.sh @@ -88,4 +88,24 @@ elif [ -n "${options[sshv1]}" ]; then /usr/local/sbin/sshd fi +# still testing - using fixed +if [ -n "${options[ntpv4_dhcp]}" ]; then + (while true; do + dhcp_ntp=$(fgrep NTPSERVERS= /run/ntpdate.dhcp) + if [ -n "${dhcp_ntp}" ]; then + ntp_server=`echo $dhcp_ntp | cut -d "'" -f 2` + echo NTP server from DHCP $ntp_server + fi + ntpdate -q -p 1 $ntp_server + sleep 5 + done) & +fi + +if [ -n "${options[ntpv3_time_google_com]}" ]; then + (while true; do + ntpdate -q -p 1 -o 3 time.google.com + sleep 5 + done) & +fi + tail -f /dev/null \ No newline at end of file diff --git a/testing/test_pylint b/testing/pylint/test_pylint similarity index 100% rename from testing/test_pylint rename to testing/pylint/test_pylint diff --git a/testing/tests/example/mac b/testing/tests/example/mac new file mode 100644 index 000000000..e69de29bb diff --git a/testing/tests/example/mac1/results.json b/testing/tests/example/mac1/results.json new file mode 100644 index 000000000..e1b837225 --- /dev/null +++ b/testing/tests/example/mac1/results.json @@ -0,0 +1,252 @@ +{ + "device": { + "mac_addr": "7e:41:12:d2:35:6a" + }, + "dns": { + "results": [ + { + "name": 
"dns.network.from_device", + "description": "Verify the device sends DNS requests", + "expected_behavior": "The device sends DNS requests.", + "start": "2023-07-03T13:35:48.990574", + "result": "compliant", + "end": "2023-07-03T13:35:49.035528", + "duration": "0:00:00.044954" + }, + { + "name": "dns.network.from_dhcp", + "description": "Verify the device allows for a DNS server to be entered automatically", + "expected_behavior": "The device sends DNS requests to the DNS server provided by the DHCP server", + "start": "2023-07-03T13:35:49.035701", + "result": "non-compliant", + "end": "2023-07-03T13:35:49.041532", + "duration": "0:00:00.005831" + }, + { + "name": "dns.mdns", + "description": "If the device has MDNS (or any kind of IP multicast), can it be disabled", + "start": "2023-07-03T13:35:49.041679", + "result": "non-compliant", + "end": "2023-07-03T13:35:49.057430", + "duration": "0:00:00.015751" + } + ] + }, + "nmap": { + "results": [ + { + "name": "security.nmap.ports", + "description": "Run an nmap scan of open ports", + "expected_behavior": "Report all open ports", + "config": { + "security.services.ftp": { + "tcp_ports": { + "20": { + "allowed": false, + "description": "File Transfer Protocol (FTP) Server Data Transfer", + "result": "compliant" + }, + "21": { + "allowed": false, + "description": "File Transfer Protocol (FTP) Server Data Transfer", + "result": "compliant" + } + }, + "description": "Check FTP port 20/21 is disabled and FTP is not running on any port", + "expected_behavior": "There is no FTP service running on any port" + }, + "security.services.ssh": { + "tcp_ports": { + "22": { + "allowed": true, + "description": "Secure Shell (SSH) server", + "version": "2.0", + "result": "compliant" + } + }, + "description": "Check TELNET port 23 is disabled and TELNET is not running on any port", + "expected_behavior": "There is no FTP service running on any port" + }, + "security.services.telnet": { + "tcp_ports": { + "23": { + "allowed": false, + "description": "Telnet Server", + "result": "compliant" + } + }, + "description": "Check TELNET port 23 is disabled and TELNET is not running on any port", + "expected_behavior": "There is no FTP service running on any port" + }, + "security.services.smtp": { + "tcp_ports": { + "25": { + "allowed": false, + "description": "Simple Mail Transfer Protocol (SMTP) Server", + "result": "compliant" + }, + "465": { + "allowed": false, + "description": "Simple Mail Transfer Protocol over SSL (SMTPS) Server", + "result": "compliant" + }, + "587": { + "allowed": false, + "description": "Simple Mail Transfer Protocol via TLS (SMTPS) Server", + "result": "compliant" + } + }, + "description": "Check SMTP port 25 is disabled and ports 465 or 587 with SSL encryption are (not?) 
enabled and SMTP is not running on any port.", + "expected_behavior": "There is no smtp service running on any port" + }, + "security.services.http": { + "tcp_ports": { + "80": { + "service_scan": { + "script": "http-methods" + }, + "allowed": false, + "description": "Administrative Insecure Web-Server", + "result": "compliant" + } + }, + "description": "Check that there is no HTTP server running on any port", + "expected_behavior": "Device is unreachable on port 80 (or any other port) and only responds to HTTPS requests on port 443 (or any other port if HTTP is used at all)" + }, + "security.services.pop": { + "tcp_ports": { + "110": { + "allowed": false, + "description": "Post Office Protocol v3 (POP3) Server", + "result": "compliant" + } + }, + "description": "Check POP port 110 is disalbed and POP is not running on any port", + "expected_behavior": "There is no pop service running on any port" + }, + "security.services.imap": { + "tcp_ports": { + "143": { + "allowed": false, + "description": "Internet Message Access Protocol (IMAP) Server", + "result": "compliant" + } + }, + "description": "Check IMAP port 143 is disabled and IMAP is not running on any port", + "expected_behavior": "There is no imap service running on any port" + }, + "security.services.snmpv3": { + "tcp_ports": { + "161": { + "allowed": false, + "description": "Simple Network Management Protocol (SNMP)", + "result": "compliant" + }, + "162": { + "allowed": false, + "description": "Simple Network Management Protocol (SNMP) Trap", + "result": "compliant" + } + }, + "udp_ports": { + "161": { + "allowed": false, + "description": "Simple Network Management Protocol (SNMP)" + }, + "162": { + "allowed": false, + "description": "Simple Network Management Protocol (SNMP) Trap" + } + }, + "description": "Check SNMP port 161/162 is disabled. If SNMP is an essential service, check it supports version 3", + "expected_behavior": "Device is unreachable on port 161 (or any other port) and device is unreachable on port 162 (or any other port) unless SNMP is essential in which case it is SNMPv3 is used." 
+ }, + "security.services.https": { + "tcp_ports": { + "80": { + "allowed": false, + "description": "Administrative Secure Web-Server", + "result": "compliant" + } + }, + "description": "Check that if there is a web server running it is running on a secure port.", + "expected_behavior": "Device only responds to HTTPS requests on port 443 (or any other port if HTTP is used at all)" + }, + "security.services.vnc": { + "tcp_ports": { + "5800": { + "allowed": false, + "description": "Virtual Network Computing (VNC) Remote Frame Buffer Protocol Over HTTP", + "result": "compliant" + }, + "5500": { + "allowed": false, + "description": "Virtual Network Computing (VNC) Remote Frame Buffer Protocol", + "result": "compliant" + } + }, + "description": "Check VNC is disabled on any port", + "expected_behavior": "Device cannot be accessed /connected to via VNc on any port" + }, + "security.services.tftp": { + "udp_ports": { + "69": { + "allowed": false, + "description": "Trivial File Transfer Protocol (TFTP) Server", + "result": "compliant" + } + }, + "description": "Check TFTP port 69 is disabled (UDP)", + "expected_behavior": "There is no tftp service running on any port" + }, + "security.services.ntp": { + "udp_ports": { + "123": { + "allowed": false, + "description": "Network Time Protocol (NTP) Server", + "result": "compliant" + } + }, + "description": "Check NTP port 123 is disabled and the device is not operating as an NTP server", + "expected_behavior": "The device dos not respond to NTP requests when it's IP is set as the NTP server on another device" + } + }, + "start": "2023-07-03T13:36:26.923704", + "result": "compliant", + "end": "2023-07-03T13:36:52.965535", + "duration": "0:00:26.041831" + } + ] + }, + "baseline": { + "results": [ + { + "name": "baseline.pass", + "description": "Simulate a compliant test", + "expected_behavior": "A compliant test result is generated", + "start": "2023-07-03T13:37:29.100681", + "result": "compliant", + "end": "2023-07-03T13:37:29.100869", + "duration": "0:00:00.000188" + }, + { + "name": "baseline.fail", + "description": "Simulate a non-compliant test", + "expected_behavior": "A non-compliant test result is generated", + "start": "2023-07-03T13:37:29.100961", + "result": "non-compliant", + "end": "2023-07-03T13:37:29.101089", + "duration": "0:00:00.000128" + }, + { + "name": "baseline.skip", + "description": "Simulate a skipped test", + "expected_behavior": "A skipped test result is generated", + "start": "2023-07-03T13:37:29.101164", + "result": "skipped", + "end": "2023-07-03T13:37:29.101283", + "duration": "0:00:00.000119" + } + ] + } + } \ No newline at end of file diff --git a/testing/test_tests b/testing/tests/test_tests similarity index 90% rename from testing/test_tests rename to testing/tests/test_tests index 6ba9fef94..04f76daee 100755 --- a/testing/test_tests +++ b/testing/tests/test_tests @@ -17,7 +17,7 @@ set -o xtrace ip a TEST_DIR=/tmp/results -MATRIX=testing/test_tests.json +MATRIX=testing/tests/test_tests.json mkdir -p $TEST_DIR @@ -50,6 +50,9 @@ cat <local/system.json } EOF +mkdir -p local/devices +cp -r testing/device_configs/* local/devices + sudo cmd/install TESTERS=$(jq -r 'keys[]' $MATRIX) @@ -62,7 +65,7 @@ for tester in $TESTERS; do args=$(jq -r .$tester.args $MATRIX) touch $testrun_log - sudo timeout 900 cmd/start --single-intf > $testrun_log 2>&1 & + sudo timeout 900 bin/testrun --single-intf --no-ui --no-validate > $testrun_log 2>&1 & TPID=$! 
# Time to wait for testrun to be ready @@ -109,12 +112,12 @@ for tester in $TESTERS; do sudo docker kill $tester sudo docker logs $tester | cat - cp runtime/test/${ethmac//:/}/results.json $TEST_DIR/$tester.json + cp runtime/test/${ethmac//:/}/report.json $TEST_DIR/$tester.json more $TEST_DIR/$tester.json more $testrun_log done -pytest -s testing/test_tests.py +pytest -v testing/tests/test_tests.py exit $? diff --git a/testing/test_tests.json b/testing/tests/test_tests.json similarity index 67% rename from testing/test_tests.json rename to testing/tests/test_tests.json index 076e9149e..179a3f7fc 100644 --- a/testing/test_tests.json +++ b/testing/tests/test_tests.json @@ -9,10 +9,12 @@ }, "tester2": { "image": "test-run/ci_test1", - "args": "", + "args": "ntpv4_dhcp", "ethmac": "02:42:aa:00:00:02", "expected_results": { - "security.nmap.ports": "compliant" + "security.nmap.ports": "compliant", + "ntp.network.ntp_support": "compliant", + "ntp.network.ntp_dhcp": "compliant" } } diff --git a/testing/test_tests.py b/testing/tests/test_tests.py similarity index 82% rename from testing/test_tests.py rename to testing/tests/test_tests.py index 7c60484f0..1f484647a 100644 --- a/testing/test_tests.py +++ b/testing/tests/test_tests.py @@ -29,6 +29,7 @@ TEST_MATRIX = 'test_tests.json' RESULTS_PATH = '/tmp/results/*.json' +#TODO add reason @dataclass(frozen=True) class TestResult: name: str @@ -79,24 +80,30 @@ def test_list_tests(capsys, results, test_matrix): all_tests = set(itertools.chain.from_iterable( [collect_actual_results(results[x]) for x in results.keys()])) - ci_pass = set([test - for testers in test_matrix.values() - for test, result in testers['expected_results'].items() + ci_pass = set([test + for testers in test_matrix.values() + for test, result in testers['expected_results'].items() if result == 'compliant']) - ci_fail = set([test - for testers in test_matrix.values() - for test, result in testers['expected_results'].items() + ci_fail = set([test + for testers in test_matrix.values() + for test, result in testers['expected_results'].items() if result == 'non-compliant']) with capsys.disabled(): + #TODO print matching the JSON schema for easy copy/paste print('============') print('============') print('tests seen:') print('\n'.join([x.name for x in all_tests])) print('\ntesting for pass:') print('\n'.join(ci_pass)) - print('\ntesting for pass:') - print('\n'.join(ci_pass)) + print('\ntesting for fail:') + print('\n'.join(ci_fail)) + print('\ntester results') + for tester in test_matrix.keys(): + print(f'\n{tester}:') + for test in collect_actual_results(results[tester]): + print(f'{test.name}: {test.result}') assert True diff --git a/testing/unit_test/run_tests.sh b/testing/unit/run_tests.sh similarity index 84% rename from testing/unit_test/run_tests.sh rename to testing/unit/run_tests.sh index 5b1ed6257..5fa1179b1 100644 --- a/testing/unit_test/run_tests.sh +++ b/testing/unit/run_tests.sh @@ -15,4 +15,8 @@ export PYTHONPATH="$PWD/framework/python/src" python3 -u $PWD/modules/network/dhcp-1/python/src/grpc_server/dhcp_config_test.py python3 -u $PWD/modules/network/dhcp-2/python/src/grpc_server/dhcp_config_test.py +# Run the Security Module Unit Tests +python3 -u $PWD/modules/test/tls/python/src/tls_module_test.py + + popd >/dev/null 2>&1 \ No newline at end of file diff --git a/ui/index.html b/ui/index.html new file mode 100644 index 000000000..285fce5ad --- /dev/null +++ b/ui/index.html @@ -0,0 +1 @@ +Test Run \ No newline at end of file From c44d09c929b35890ae520c98504f780b47811c23 
Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Wed, 30 Aug 2023 21:04:12 +0100 Subject: [PATCH 02/33] Create package --- .gitignore | 1 + bin/testrun | 8 +++---- cmd/install | 6 +++++ cmd/package | 40 +++++++++++++++++++++++++++++++++ docs/configure_device.md | 16 ++----------- docs/get_started.md | 20 +++++++---------- docs/network/add_new_service.md | 4 ++-- make/.gitignore | 2 ++ make/DEBIAN/control | 5 +++++ make/DEBIAN/postinst | 31 +++++++++++++++++++++++++ 10 files changed, 101 insertions(+), 32 deletions(-) create mode 100755 cmd/package create mode 100644 make/.gitignore create mode 100644 make/DEBIAN/control create mode 100755 make/DEBIAN/postinst diff --git a/.gitignore b/.gitignore index 7ef392c5e..336202f24 100644 --- a/.gitignore +++ b/.gitignore @@ -6,3 +6,4 @@ pylint.out __pycache__/ build/ testing/unit_test/temp/ +*.deb diff --git a/bin/testrun b/bin/testrun index 9281c1ac6..a48e997ad 100755 --- a/bin/testrun +++ b/bin/testrun @@ -26,17 +26,17 @@ fi # Ensure that /var/run/netns folder exists sudo mkdir -p /var/run/netns +export TESTRUNPATH=/usr/local/testrun +cd $TESTRUNPATH + # Create device folder if it doesn't exist mkdir -p local/devices -# Check if Python modules exist. Install if not -[ ! -d "venv" ] && sudo cmd/install - # Activate Python virtual environment source venv/bin/activate # Set the PYTHONPATH to include the "src" directory -export PYTHONPATH="$PWD/framework/python/src" +export PYTHONPATH="$TESTRUNPATH/framework/python/src" python -u framework/python/src/core/test_runner.py $@ deactivate \ No newline at end of file diff --git a/cmd/install b/cmd/install index 4e8639a66..bdb50f509 100755 --- a/cmd/install +++ b/cmd/install @@ -14,6 +14,9 @@ # See the License for the specific language governing permissions and # limitations under the License. +TESTRUN_DIR=/usr/local/testrun +cd $TESTRUN_DIR + python3 -m venv venv source venv/bin/activate @@ -21,3 +24,6 @@ source venv/bin/activate pip3 install -r framework/requirements.txt deactivate + +# Copy the default configuration +cp -u local/system.json.example local/system.json diff --git a/cmd/package b/cmd/package new file mode 100755 index 000000000..b4ff3685e --- /dev/null +++ b/cmd/package @@ -0,0 +1,40 @@ +#!/bin/bash -e + +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +MAKE_SRC_DIR=make + +# Copy testrun script to /bin +mkdir -p $MAKE_SRC_DIR/bin +cp bin/testrun $MAKE_SRC_DIR/bin/testrun + +# Create testrun folder +mkdir -p $MAKE_SRC_DIR/usr/local/testrun + +# Create postinst script +cp cmd/install $MAKE_SRC_DIR/DEBIAN/postinst + +# Create local folder +mkdir -p $MAKE_SRC_DIR/usr/local/testrun/local +cp local/system.json.example $MAKE_SRC_DIR/usr/local/testrun/local/system.json.example + +# Copy framework and modules into testrun folder +cp -r {framework,modules} $MAKE_SRC_DIR/usr/local/testrun + +# Build .deb file +dpkg-deb --build --root-owner-group make + +# Rename the .deb file +mv make.deb testrun_1-0_amd64.deb \ No newline at end of file diff --git a/docs/configure_device.md b/docs/configure_device.md index ad58521a4..9eefcd866 100644 --- a/docs/configure_device.md +++ b/docs/configure_device.md @@ -8,24 +8,12 @@ The device information section includes the manufacturer, model, and MAC address ## Test Modules -Test modules are groups of tests that can be enabled or disabled as needed. You can choose which test modules to include for your device. The device configuration file contains the following test module: - -- DNS Test Module +Test modules are groups of tests that can be enabled or disabled as needed. You can choose which test modules to run on your device. ### Enabling and Disabling Test Modules To enable or disable a test module, modify the `enabled` field within the respective module. Setting it to `true` enables the module, while setting it to `false` disables the module. -## Individual Tests - -Within the DNS test module, there are individual tests that can be enabled or disabled. These tests focus on specific aspects of network behavior. You can customize the tests based on your device and testing requirements. - -### Enabling and Disabling Tests - -To enable or disable an individual test, modify the `enabled` field within the respective test. Setting it to `true` enables the test, while setting it to `false` disables the test. - -> Note: The example device configuration file (`resources/devices/template/device_config.json`) provides a complete usage example, including the structure and configuration options for the DNS test module and its tests. You can refer to this file to understand how to configure your device tests effectively. - ## Customizing the Device Configuration To customize the device configuration for your specific device, follow these steps: @@ -38,4 +26,4 @@ This ensures that you have a copy of the default configuration file, which you c > Note: Ensure that the device configuration file is properly formatted, and the changes made align with the intended test behavior. Incorrect settings or syntax may lead to unexpected results during testing. -If you encounter any issues or need assistance with the device configuration, refer to the Test Run documentation or ask a question on the Issues page. +If you encounter any issues or need assistance with the device configuration, refer to the Testrun documentation or ask a question on the Issues page. 
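For reference, the per-module enable/disable flags described in the configuration docs above can also be toggled with a short script rather than edited by hand. The sketch below is illustrative only and is not part of this patch: it assumes the simplified device_config.json schema introduced in this series ({"test_modules": {"<module>": {"enabled": bool}}}) and the local/devices/<device>/ layout used by the CI testers; the tester1 path is an example, not a file shipped by the package.

```python
import json
from pathlib import Path

# Hypothetical device folder; adjust to a real directory under local/devices/
config_path = Path("local/devices/tester1/device_config.json")

config = json.loads(config_path.read_text(encoding="utf-8"))

# Toggle modules using the simplified schema: {"test_modules": {"<module>": {"enabled": bool}}}
config.setdefault("test_modules", {}).setdefault("dns", {})["enabled"] = False
config["test_modules"].setdefault("nmap", {})["enabled"] = True

config_path.write_text(json.dumps(config, indent=2) + "\n", encoding="utf-8")
print("Updated", config_path)
```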
diff --git a/docs/get_started.md b/docs/get_started.md index 7b8cf9e13..fcdd21fe7 100644 --- a/docs/get_started.md +++ b/docs/get_started.md @@ -4,7 +4,7 @@ ### Hardware -Before starting with Test Run, ensure you have the following hardware: +Before starting with Testrun, ensure you have the following hardware: - PC running Ubuntu LTS (laptop or desktop) - 2x USB Ethernet adapter (one may be a built-in Ethernet port) @@ -20,15 +20,9 @@ Ensure the following software is installed on your Ubuntu LTS PC: ## Installation -1. Download Test Run from the releases page or the appropriate source. +1. Download the latest version of Testrun from the [releases page](https://github.com/google/test-run/releases) -2. Run the install script. - -## Configuration - -1. Copy the default configuration file. - -2. Open the `local/system.json` file and modify the configuration as needed. Specify the interface names for the internet and device interfaces. +2. Install the package using ``sudo dpkg -i testrun_*.deb`` ## Test Your Device @@ -37,9 +31,11 @@ Ensure the following software is installed on your Ubuntu LTS PC: - Connect one USB Ethernet adapter to the internet source (e.g., router or switch) using an Ethernet cable. - Connect the other USB Ethernet adapter directly to the IoT device you want to test using an Ethernet cable. -2. Start Test Run. +2. Start Testrun. + +Start Testrun with the command `sudo testrun` - - To run Test Run in network-only mode (without running any tests), use the `--net-only` option. + - To run Testrun in network-only mode (without running any tests), use the `--net-only` option. - To skip network validation before use and not launch the faux device on startup, use the `--no-validate` option. @@ -49,5 +45,5 @@ If you encounter any issues or need assistance, consider the following: - Ensure that all hardware and software prerequisites are met. - Verify that the network interfaces are connected correctly. -- Check the configuration in the `local/system.json` file. +- Check the configuration settings. - Refer to the Test Run documentation or ask for further assistance from the support team. diff --git a/docs/network/add_new_service.md b/docs/network/add_new_service.md index 1ad07b60d..5f7b470cd 100644 --- a/docs/network/add_new_service.md +++ b/docs/network/add_new_service.md @@ -1,8 +1,8 @@ # Adding a New Network Service -The Test Run framework allows users to add their own network services with ease. A template network service can be used to get started quickly, this can be found at [modules/network/template](../../modules/network/template). Otherwise, see below for details of the requirements for new network services. +The Testrun framework allows users to add their own network services with ease. A template network service can be used to get started quickly, this can be found at [modules/network/template](../../modules/network/template). Otherwise, see below for details of the requirements for new network services. -To add a new network service to Test Run, follow the procedure below: +To add a new network service to Testrun, follow the procedure below: 1. Create a folder under `modules/network/` with the name of the network service in lowercase, using only alphanumeric characters and hyphens (`-`). 2. 
Inside the created folder, include the following files and folders: diff --git a/make/.gitignore b/make/.gitignore new file mode 100644 index 000000000..1be953b79 --- /dev/null +++ b/make/.gitignore @@ -0,0 +1,2 @@ +usr/ +bin/ \ No newline at end of file diff --git a/make/DEBIAN/control b/make/DEBIAN/control new file mode 100644 index 000000000..481f87a9f --- /dev/null +++ b/make/DEBIAN/control @@ -0,0 +1,5 @@ +Package: Testrun +Version: 1.0 +Architecture: amd64 +Maintainer: Google +Description: Automatically verify IoT device network behavior diff --git a/make/DEBIAN/postinst b/make/DEBIAN/postinst new file mode 100755 index 000000000..d5897c18e --- /dev/null +++ b/make/DEBIAN/postinst @@ -0,0 +1,31 @@ +#!/bin/bash -e + +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +TESTRUN_DIR=/usr/local/testrun +cd $TESTRUN_DIR + +python3 -m venv venv + +source venv/bin/activate + +pip3 install -r framework/requirements.txt + +deactivate + +# Copy the default configuration +cp -u local/system.json.example local/system.json + +echo Successfully installed Testrun From 528b33cc34391f040db1846b23a53a92d0bef06b Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Wed, 30 Aug 2023 21:32:55 +0100 Subject: [PATCH 03/33] Update baseline test --- .github/workflows/testing.yml | 6 ++++++ testing/baseline/test_baseline | 6 ++---- 2 files changed, 8 insertions(+), 4 deletions(-) diff --git a/.github/workflows/testing.yml b/.github/workflows/testing.yml index 87c8a814a..c1362e18b 100644 --- a/.github/workflows/testing.yml +++ b/.github/workflows/testing.yml @@ -14,6 +14,12 @@ jobs: steps: - name: Checkout source uses: actions/checkout@v2.3.4 + - name: Package Testrun + shell: bash {0} + run: cmd/package + - name: Install Testrun + shell: bash {0} + run: dpkg -i testrun*.deb - name: Run tests shell: bash {0} run: testing/baseline/test_baseline diff --git a/testing/baseline/test_baseline b/testing/baseline/test_baseline index 61d0f9b56..9e1cfe39a 100755 --- a/testing/baseline/test_baseline +++ b/testing/baseline/test_baseline @@ -36,7 +36,7 @@ sudo /usr/share/openvswitch/scripts/ovs-ctl start # Build Test Container sudo docker build ./testing/docker/ci_baseline -t ci1 -f ./testing/docker/ci_baseline/Dockerfile -cat <local/system.json +cat <usr/local/testrun/local/system.json { "network": { "device_intf": "endev0a", @@ -46,9 +46,7 @@ cat <local/system.json } EOF -sudo cmd/install - -sudo bin/testrun --single-intf --no-ui > $TESTRUN_OUT 2>&1 & +sudo testrun --single-intf --no-ui > $TESTRUN_OUT 2>&1 & TPID=$! 
# Time to wait for testrun to be ready From 4bf2adfac9d74d4673890cb767d2a2c9321e4ec4 Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Wed, 30 Aug 2023 21:40:29 +0100 Subject: [PATCH 04/33] Add sudo --- .github/workflows/testing.yml | 2 +- framework/python/src/common/session.py | 2 +- framework/python/src/core/testrun.py | 2 +- .../src/net_orc/network_orchestrator.py | 2 +- .../python/src/test_orc/test_orchestrator.py | 2 +- .../test/conn/python/src/connection_module.py | 4 +- modules/test/nmap/python/src/nmap_module.py | 47 +++++++++---------- 7 files changed, 30 insertions(+), 31 deletions(-) diff --git a/.github/workflows/testing.yml b/.github/workflows/testing.yml index c1362e18b..46a34c456 100644 --- a/.github/workflows/testing.yml +++ b/.github/workflows/testing.yml @@ -19,7 +19,7 @@ jobs: run: cmd/package - name: Install Testrun shell: bash {0} - run: dpkg -i testrun*.deb + run: sudo dpkg -i testrun*.deb - name: Run tests shell: bash {0} run: testing/baseline/test_baseline diff --git a/framework/python/src/common/session.py b/framework/python/src/common/session.py index f8c8d04b5..35385c529 100644 --- a/framework/python/src/common/session.py +++ b/framework/python/src/common/session.py @@ -83,7 +83,7 @@ def _load_config(self): config_file_json = json.load(f) # Network interfaces - if (NETWORK_KEY in config_file_json + if (NETWORK_KEY in config_file_json and DEVICE_INTF_KEY in config_file_json.get(NETWORK_KEY) and INTERNET_INTF_KEY in config_file_json.get(NETWORK_KEY)): self._config[NETWORK_KEY][DEVICE_INTF_KEY] = config_file_json.get(NETWORK_KEY, {}).get(DEVICE_INTF_KEY) diff --git a/framework/python/src/core/testrun.py b/framework/python/src/core/testrun.py index 9034f5796..be6162d1a 100644 --- a/framework/python/src/core/testrun.py +++ b/framework/python/src/core/testrun.py @@ -126,7 +126,7 @@ def load_all_devices(self): self._session.clear_device_repository() self._load_devices(device_dir=LOCAL_DEVICES_DIR) - # Temporarily removing loading of template device + # Temporarily removing loading of template device # configs (feature not required yet) # self._load_devices(device_dir=RESOURCE_DEVICES_DIR) return self.get_session().get_device_repository() diff --git a/framework/python/src/net_orc/network_orchestrator.py b/framework/python/src/net_orc/network_orchestrator.py index 4abdb9651..bcb7022e4 100644 --- a/framework/python/src/net_orc/network_orchestrator.py +++ b/framework/python/src/net_orc/network_orchestrator.py @@ -329,7 +329,7 @@ def create_net(self): self.stop() sys.exit(1) - if os.getenv("GITHUB_ACTIONS"): + if os.getenv('GITHUB_ACTIONS'): self._ci_post_network_create() self._create_private_net() diff --git a/framework/python/src/test_orc/test_orchestrator.py b/framework/python/src/test_orc/test_orchestrator.py index eb5676e17..0dd1aef8e 100644 --- a/framework/python/src/test_orc/test_orchestrator.py +++ b/framework/python/src/test_orc/test_orchestrator.py @@ -451,7 +451,7 @@ def _stop_module(self, module, kill=False): def get_test_modules(self): return self._test_modules - + def get_test_module(self, name): for test_module in self.get_test_modules(): if test_module.name == name: diff --git a/modules/test/conn/python/src/connection_module.py b/modules/test/conn/python/src/connection_module.py index 248edc536..b30217809 100644 --- a/modules/test/conn/python/src/connection_module.py +++ b/modules/test/conn/python/src/connection_module.py @@ -193,7 +193,7 @@ def _connection_ipaddr_ip_change(self): result = None, 'Device has no current DHCP lease' # Restore the network 
self._dhcp_util.restore_failover_dhcp_server() - LOGGER.info("Waiting 30 seconds for reserved lease to expire") + LOGGER.info('Waiting 30 seconds for reserved lease to expire') time.sleep(30) self._dhcp_util.get_new_lease(self._device_mac) else: @@ -279,7 +279,7 @@ def _connection_ipv6_slaac(self): def _connection_ipv6_ping(self): LOGGER.info('Running connection.ipv6_ping') result = None - + if self._device_ipv6_addr is None: LOGGER.info('No IPv6 SLAAC address found. Cannot ping') result = None, 'No IPv6 SLAAc address found. Cannot ping' diff --git a/modules/test/nmap/python/src/nmap_module.py b/modules/test/nmap/python/src/nmap_module.py index 6bcbd141a..94597f03e 100644 --- a/modules/test/nmap/python/src/nmap_module.py +++ b/modules/test/nmap/python/src/nmap_module.py @@ -109,10 +109,10 @@ def _check_unknown_ports(self,tests,scan_results): for test in tests: if "tcp_ports" in tests[test]: for port in tests[test]['tcp_ports']: - known_ports.append(port) + known_ports.append(port) if "udp_ports" in tests[test]: for port in tests[test]['udp_ports']: - known_ports.append(port) + known_ports.append(port) for port_result in scan_results: if not port_result in known_ports: @@ -134,7 +134,7 @@ def _add_unknown_ports(self,tests,unallowed_port): LOGGER.info("Unknown Port Service: " + unallowed_port['service']) for test in tests: LOGGER.debug("Checking for known service: " + test) - # Create a regular expression pattern to match the variable at the + # Create a regular expression pattern to match the variable at the # end of the string port_service = r"\b" + re.escape(unallowed_port['service']) + r"\b$" service_match = re.search(port_service, test) @@ -166,7 +166,6 @@ def _check_scan_results(self,test_config,scan_results): if "udp_ports" in test_config: port_config = test_config["udp_ports"] self._check_scan_result(port_config=port_config,scan_results=scan_results) - def _check_scan_result(self,port_config,scan_results): if port_config is not None: @@ -213,16 +212,16 @@ def _check_unallowed_port(self,unallowed_ports,tests): version = None service = None for port in unallowed_ports: - LOGGER.info('Checking unallowed port: ' + port['port']) - LOGGER.info('Looking for service: ' + port['service']) - LOGGER.debug('Unallowed Port Config: ' + str(port)) - if port['tcp_udp'] == 'tcp': - port_style = 'tcp_ports' - elif port['tcp_udp'] == 'udp': - port_style = 'udp_ports' + LOGGER.info("Checking unallowed port: " + port["port"]) + LOGGER.info("Looking for service: " + port["service"]) + LOGGER.debug("Unallowed Port Config: " + str(port)) + if port["tcp_udp"] == "tcp": + port_style = "tcp_ports" + elif port["tcp_udp"] == "udp": + port_style = "udp_ports" for test in tests: - LOGGER.debug('Checking test: ' + str(test)) - # Create a regular expression pattern to match the variable at the + LOGGER.debug("Checking test: " + str(test)) + # Create a regular expression pattern to match the variable at the # end of the string port_service = r"\b" + re.escape(port['service']) + r"\b$" service_match = re.search(port_service, test) @@ -247,7 +246,7 @@ def _check_unallowed_port(self,unallowed_ports,tests): for u_port in self._unallowed_ports: if port['port'] in u_port['port']: self._unallowed_ports.remove(u_port) - break + break break def _check_version(self,service,version_detected,version_expected): @@ -259,8 +258,8 @@ def _check_version(self,service,version_detected,version_expected): result. 
""" LOGGER.info("Checking version for service: " + service) - LOGGER.info("NMAP Version Detected: " + version_detected) - LOGGER.info("Version Expected: " + version_expected) + LOGGER.info("NMAP Version Detected: " + version_detected) + LOGGER.info("Version Expected: " + version_expected) version_check = None match service: case "ssh": @@ -355,12 +354,12 @@ def _scan_udp_ports(self, tests): def _nmap_results_to_json(self,nmap_results): try: - xml_data = xmltodict.parse(nmap_results) - json_data = json.dumps(xml_data, indent=4) - return json.loads(json_data) + xml_data = xmltodict.parse(nmap_results) + json_data = json.dumps(xml_data, indent=4) + return json.loads(json_data) except Exception as e: - LOGGER.error(f"Error parsing Nmap output: {e}") + LOGGER.error(f"Error parsing Nmap output: {e}") def _process_nmap_json_results(self,nmap_results_json): LOGGER.debug("nmap results\n" + json.dumps(nmap_results_json,indent=2)) @@ -369,10 +368,10 @@ def _process_nmap_json_results(self,nmap_results_json): ports = nmap_results_json["nmaprun"]["host"]["ports"] # Checking if an object is a JSON object if isinstance(ports["port"], dict): - results.update(self._json_port_to_dict(ports["port"])) + results.update(self._json_port_to_dict(ports["port"])) elif isinstance(ports["port"], list): - for port in ports["port"]: - results.update(self._json_port_to_dict(port)) + for port in ports["port"]: + results.update(self._json_port_to_dict(port)) return results def _json_port_to_dict(self,port_json): @@ -387,4 +386,4 @@ def _json_port_to_dict(self,port_json): if "@extrainfo" in port_json["service"]: port["version"] += " " + port_json["service"]["@extrainfo"] port_result = {port_json["@portid"]:port} - return port_result \ No newline at end of file + return port_result From 82c4f8e790a01c2b74a3fe30a8dcedb41538e30a Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Wed, 30 Aug 2023 21:43:12 +0100 Subject: [PATCH 05/33] Correct file url --- testing/baseline/test_baseline | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/testing/baseline/test_baseline b/testing/baseline/test_baseline index 9e1cfe39a..4c1589c65 100755 --- a/testing/baseline/test_baseline +++ b/testing/baseline/test_baseline @@ -36,7 +36,7 @@ sudo /usr/share/openvswitch/scripts/ovs-ctl start # Build Test Container sudo docker build ./testing/docker/ci_baseline -t ci1 -f ./testing/docker/ci_baseline/Dockerfile -cat <usr/local/testrun/local/system.json +cat </usr/local/testrun/local/system.json { "network": { "device_intf": "endev0a", From 444e6e4a750316588de321f159235161b4569c8d Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Wed, 30 Aug 2023 21:45:37 +0100 Subject: [PATCH 06/33] Add sudo --- testing/baseline/test_baseline | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/testing/baseline/test_baseline b/testing/baseline/test_baseline index 4c1589c65..3c91c3603 100755 --- a/testing/baseline/test_baseline +++ b/testing/baseline/test_baseline @@ -36,7 +36,7 @@ sudo /usr/share/openvswitch/scripts/ovs-ctl start # Build Test Container sudo docker build ./testing/docker/ci_baseline -t ci1 -f ./testing/docker/ci_baseline/Dockerfile -cat </usr/local/testrun/local/system.json +sudo cat </usr/local/testrun/local/system.json { "network": { "device_intf": "endev0a", From 66b0569ce2aca427e72c4b74932a9137044d1d67 Mon Sep 17 00:00:00 2001 From: J Boddey Date: Thu, 31 Aug 2023 22:42:22 +0100 Subject: [PATCH 07/33] Update README.md --- README.md | 29 ++++++++++++++++------------- 1 file changed, 16 insertions(+), 13 deletions(-) diff 
--git a/README.md b/README.md index 41c559499..5ed2d03de 100644 --- a/README.md +++ b/README.md @@ -1,20 +1,20 @@ - Testrun logo + Testrun logo ## Introduction :wave: -Test Run is a tool to automate the validation of network-based functionality of IoT devices. Any device which is capable of receiving an IP address via DHCP is considered an IoT device by Test Run and can be tested. +Testrun is a tool to automate the validation of network-based functionality of IoT devices. Any device which is capable of receiving an IP address via DHCP is considered an IoT device by Testrun and can be tested. ## Motivation :bulb: -Without tools like Test Run, testing labs may be maintaining a large and complex network using equipment such as: A managed layer 3 switch, an enterprise-grade network router, virtualized or physical servers to provide DNS, NTP, 802.1x etc. With this amount of moving parts, all with dynamic configuration files and constant software updates, more time is likely to be spent on preparation and clean up of functinality or penetration testing - not forgetting the number of software tools required to perform the testing. The major issues which can and should be solved: +Without tools like Testrun, testing labs may be maintaining a large and complex network using equipment such as: A managed layer 3 switch, an enterprise-grade network router, virtualized or physical servers to provide DNS, NTP, 802.1x etc. With this amount of moving parts, all with dynamic configuration files and constant software updates, more time is likely to be spent on preparation and clean up of functinality or penetration testing - not forgetting the number of software tools required to perform the testing. The major issues which can and should be solved: 1) The complexity of managing a testing network 2) The time required to perform testing of network functionality 3) The accuracy and consistency of testing network functionality ## How it works :triangular_ruler: -Test Run creates an isolated and controlled network environment to fully simulate enterprise network deployments in your device testing lab. +Testrun creates an isolated and controlled network environment to fully simulate enterprise network deployments in your device testing lab. This removes the necessity for complex hardware, advanced knowledge and networking experience whilst enabling semi-technical engineers to validate device behaviour against industry cyber standards. -Two runtime modes will be supported by Test Run: +Two runtime modes will be supported by Testrun: 1) Automated Testing @@ -22,7 +22,7 @@ Once the device has become operational (steady state), automated testing of the 2) Lab network -Test Run cannot automate everything, and so additional manual testing may be required (or configuration changes may be required on the device). Rather than having to maintain a separate but idential lab network, Test Run will provide the network and some tools to assist an engineer performing the additional testing. At the same time, packet captures of the device behaviour will be recorded, alongside logs for each network service, for further debugging. +Testrun cannot automate everything, and so additional manual testing may be required (or configuration changes may be required on the device). Rather than having to maintain a separate but idential lab network, Testrun will provide the network and some tools to assist an engineer performing the additional testing. 
At the same time, packet captures of the device behaviour will be recorded, alongside logs for each network service, for further debugging. ## Minimum Requirements :computer: ### Hardware @@ -34,8 +34,11 @@ Test Run cannot automate everything, and so additional manual testing may be req - Docker - [Install guide](https://docs.docker.com/engine/install/ubuntu/) - Open vSwitch ``sudo apt-get install openvswitch-common openvswitch-switch`` +## Get started ▶️ +Once you have met the hardware and software requirements, you can get started with Testrun by following the [Get started guide](docs/get_started.md). + ## Roadmap :chart_with_upwards_trend: -Test Run will constantly evolve to further support end-users by automating device network behaviour against industry standards. +Testrun will constantly evolve to further support end-users by automating device network behaviour against industry standards. ## Issue reporting :triangular_flag_on_post: If the application has come across a problem at any point during setup or use, please raise an issue under the [issues tab](https://github.com/auto-iot/test-run/issues). Issue templates exist for both bug reports and feature requests. If neither of these are appropriate for your issue, raise a blank issue instead. @@ -44,11 +47,11 @@ If the application has come across a problem at any point during setup or use, p The contributing requirements can be found in [CONTRIBUTING.md](CONTRIBUTING.md). In short, checkout the [Google CLA](https://cla.developers.google.com/) site to get started. ## FAQ :raising_hand: -1) What device networking functionality is validated by Test Run? +1) What device networking functionality is validated by Testrun? Best practices and requirements for IoT devices are constantly changing due to technological advances and discovery of vulnerabilities. The current expectations for IoT devices on Google deployments can be found in the [Application Security Requirements for IoT Devices](https://partner-security.withgoogle.com/docs/iot_requirements). - Test Run aims to automate as much of the Application Security Requirements as possible. + Testrun aims to automate as much of the Application Security Requirements as possible. 2) What services are provided on the virtual network? @@ -58,11 +61,11 @@ The contributing requirements can be found in [CONTRIBUTING.md](CONTRIBUTING.md) - NTPv4 - 802.1x Port Based Authentication -3) Can I run Test Run on a virtual machine? +3) Can I run Testrun on a virtual machine? - Probably. Provided that the required 2x USB ethernet adapters are passed to the virtual machine as USB devices rather than network adapters, Test Run should - still work. We will look to test and approve the use of virtualisation to run Test Run in the future. + Probably. Provided that the required 2x USB ethernet adapters are passed to the virtual machine as USB devices rather than network adapters, Testrun should + still work. We will look to test and approve the use of virtualisation to run Testrun in the future. - 4) Can I connect multiple devices to Test Run? + 4) Can I connect multiple devices to Testrun? In short, Yes you can. The way in which multiple devices could be tested simultaneously is yet to be decided. However, if you simply want to add field/peer devices during runtime (even another laptop performing manual testing) then you may connect the USB ethernet adapter to an unmanaged switch. 
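For reference, the per-module report format used throughout this series (each module carries a "results" list of name/result entries, as in testing/tests/example/mac1/results.json) can be summarised with a short script. The sketch below is illustrative only and is not part of the patch; the default report path is an assumption and should be pointed at the report.json or results.json written under runtime/test/<mac>/.

```python
import json
import sys
from pathlib import Path

# Path is an assumption; pass the report produced under runtime/test/<mac>/
report_path = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("results.json")
report = json.loads(report_path.read_text(encoding="utf-8"))

for module, data in report.items():
    # Skip non-module sections such as "device"
    if not isinstance(data, dict) or "results" not in data:
        continue
    for test in data["results"]:
        print(f'{module}: {test["name"]} -> {test.get("result", "unknown")}')
```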
From c707fe7913f376cebe798af844029e1cb8c07844 Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Fri, 1 Sep 2023 20:33:41 +0100 Subject: [PATCH 08/33] Correct sudo user --- make/DEBIAN/postinst | 8 +++++++- modules/ui/.gitignore | 3 +++ ui/index.html | 1 - 3 files changed, 10 insertions(+), 2 deletions(-) create mode 100644 modules/ui/.gitignore delete mode 100644 ui/index.html diff --git a/make/DEBIAN/postinst b/make/DEBIAN/postinst index d5897c18e..c71119e8b 100755 --- a/make/DEBIAN/postinst +++ b/make/DEBIAN/postinst @@ -21,11 +21,17 @@ python3 -m venv venv source venv/bin/activate -pip3 install -r framework/requirements.txt +echo Installing python dependencies +pip3 install -r framework/requirements.txt >> /dev/null deactivate # Copy the default configuration +echo Copying default configuration cp -u local/system.json.example local/system.json +# Set owner of local dir to sudo user +echo Correcting file ownership +chown -R $SUDO_USER /usr/local/testrun/local + echo Successfully installed Testrun diff --git a/modules/ui/.gitignore b/modules/ui/.gitignore new file mode 100644 index 000000000..3d59b2240 --- /dev/null +++ b/modules/ui/.gitignore @@ -0,0 +1,3 @@ +node_modules/ +.angular/ +dist/ \ No newline at end of file diff --git a/ui/index.html b/ui/index.html deleted file mode 100644 index 285fce5ad..000000000 --- a/ui/index.html +++ /dev/null @@ -1 +0,0 @@ -Test Run \ No newline at end of file From 893a9f95c6438cef06d217f273c528104739ec01 Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Fri, 1 Sep 2023 20:40:13 +0100 Subject: [PATCH 09/33] Create temporary file --- testing/baseline/test_baseline | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/testing/baseline/test_baseline b/testing/baseline/test_baseline index 3c91c3603..ef5bde909 100755 --- a/testing/baseline/test_baseline +++ b/testing/baseline/test_baseline @@ -36,7 +36,7 @@ sudo /usr/share/openvswitch/scripts/ovs-ctl start # Build Test Container sudo docker build ./testing/docker/ci_baseline -t ci1 -f ./testing/docker/ci_baseline/Dockerfile -sudo cat </usr/local/testrun/local/system.json +cat <system.json { "network": { "device_intf": "endev0a", @@ -46,6 +46,9 @@ sudo cat </usr/local/testrun/local/system.json } EOF +# Copy configuration to testrun +sudo mv system.json /usr/local/testrun/local/system.json + sudo testrun --single-intf --no-ui > $TESTRUN_OUT 2>&1 & TPID=$! 
From ae180b9a63aa1eae72ed770054dbb409ee8aaaf3 Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Fri, 1 Sep 2023 21:33:06 +0100 Subject: [PATCH 10/33] Fix baseline test --- modules/network/dhcp-1/dhcp-1.Dockerfile | 6 +++--- modules/network/dhcp-2/dhcp-2.Dockerfile | 2 +- testing/baseline/system.json | 7 +++++++ testing/baseline/test_baseline | 14 ++------------ 4 files changed, 13 insertions(+), 16 deletions(-) create mode 100644 testing/baseline/system.json diff --git a/modules/network/dhcp-1/dhcp-1.Dockerfile b/modules/network/dhcp-1/dhcp-1.Dockerfile index 6b941d878..272405ccd 100644 --- a/modules/network/dhcp-1/dhcp-1.Dockerfile +++ b/modules/network/dhcp-1/dhcp-1.Dockerfile @@ -19,13 +19,13 @@ ARG MODULE_NAME=dhcp-1 ARG MODULE_DIR=modules/network/$MODULE_NAME # Install all necessary packages -RUN apt-get install -y wget +RUN apt-get update && apt-get install -y wget apt-transport-https -#Update the oui.txt file from ieee +# Update the oui.txt file from ieee RUN wget http://standards-oui.ieee.org/oui.txt -P /usr/local/etc/ # Install dhcp server -RUN apt-get install -y isc-dhcp-server radvd systemd +RUN apt-get install -y --fix-missing isc-dhcp-server radvd systemd # Copy over all configuration files COPY $MODULE_DIR/conf /testrun/conf diff --git a/modules/network/dhcp-2/dhcp-2.Dockerfile b/modules/network/dhcp-2/dhcp-2.Dockerfile index 153aa50e7..2601f49b8 100644 --- a/modules/network/dhcp-2/dhcp-2.Dockerfile +++ b/modules/network/dhcp-2/dhcp-2.Dockerfile @@ -19,7 +19,7 @@ ARG MODULE_NAME=dhcp-2 ARG MODULE_DIR=modules/network/$MODULE_NAME # Install all necessary packages -RUN apt-get install -y wget +RUN apt-get update && apt-get install -y wget apt-transport-https #Update the oui.txt file from ieee RUN wget http://standards-oui.ieee.org/oui.txt -P /usr/local/etc/ diff --git a/testing/baseline/system.json b/testing/baseline/system.json new file mode 100644 index 000000000..1bc6587e1 --- /dev/null +++ b/testing/baseline/system.json @@ -0,0 +1,7 @@ +{ + "network": { + "device_intf": "endev0a", + "internet_intf": "eth0" + }, + "log_level": "DEBUG" +} \ No newline at end of file diff --git a/testing/baseline/test_baseline b/testing/baseline/test_baseline index ef5bde909..d68309acb 100755 --- a/testing/baseline/test_baseline +++ b/testing/baseline/test_baseline @@ -22,7 +22,7 @@ ifconfig sudo apt-get update sudo apt-get install openvswitch-common openvswitch-switch tcpdump jq moreutils coreutils isc-dhcp-client -pip3 install pytest +#pip3 install pytest # Setup device network sudo ip link add dev endev0a type veth peer name endev0b @@ -36,18 +36,8 @@ sudo /usr/share/openvswitch/scripts/ovs-ctl start # Build Test Container sudo docker build ./testing/docker/ci_baseline -t ci1 -f ./testing/docker/ci_baseline/Dockerfile -cat <system.json -{ - "network": { - "device_intf": "endev0a", - "internet_intf": "eth0" - }, - "log_level": "DEBUG" -} -EOF - # Copy configuration to testrun -sudo mv system.json /usr/local/testrun/local/system.json +sudo cp testing/baseline/system.json /usr/local/testrun/local/system.json sudo testrun --single-intf --no-ui > $TESTRUN_OUT 2>&1 & TPID=$! 
From a4ced7abdddef10a2c13230b78dfe5f3aa0f7302 Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Mon, 4 Sep 2023 09:45:58 +0100 Subject: [PATCH 11/33] Copy device configs --- testing/baseline/test_baseline | 10 +++++++--- testing/baseline/test_baseline.py | 5 ++--- 2 files changed, 9 insertions(+), 6 deletions(-) diff --git a/testing/baseline/test_baseline b/testing/baseline/test_baseline index d68309acb..3674df86a 100755 --- a/testing/baseline/test_baseline +++ b/testing/baseline/test_baseline @@ -15,6 +15,7 @@ # limitations under the License. TESTRUN_OUT=/tmp/testrun.log +TESTRUN_DIR=/usr/local/testrun ifconfig @@ -37,7 +38,10 @@ sudo /usr/share/openvswitch/scripts/ovs-ctl start sudo docker build ./testing/docker/ci_baseline -t ci1 -f ./testing/docker/ci_baseline/Dockerfile # Copy configuration to testrun -sudo cp testing/baseline/system.json /usr/local/testrun/local/system.json +sudo cp testing/baseline/system.json $TESTRUN_DIR/local/system.json + +# Copy device configs to testrun +sudo cp -r testing/device_configs/* $TESTRUN_DIR/local/devices sudo testrun --single-intf --no-ui > $TESTRUN_OUT 2>&1 & TPID=$! @@ -51,7 +55,7 @@ for i in `seq 1 $WAITING`; do if [[ ! -d /proc/$TPID ]]; then cat $TESTRUN_OUT - echo "error encountered starting test run" + echo "Error encountered starting test run" exit 1 fi @@ -60,7 +64,7 @@ done if [[ $i -eq $WAITING ]]; then cat $TESTRUN_OUT - echo "failed after waiting $WAITING seconds for test-run start" + echo "Failed after waiting $WAITING seconds for testrun to start" exit 1 fi diff --git a/testing/baseline/test_baseline.py b/testing/baseline/test_baseline.py index 520f909f7..ed3bb17a1 100644 --- a/testing/baseline/test_baseline.py +++ b/testing/baseline/test_baseline.py @@ -26,6 +26,7 @@ DNS_SERVER = '10.10.10.4' CI_BASELINE_OUT = '/tmp/testrun_ci.json' +TESTRUN_DIR = '/usr/local/testrun' @pytest.fixture def container_data(): @@ -34,9 +35,7 @@ def container_data(): @pytest.fixture def validator_results(): - basedir = os.path.dirname(os.path.abspath(__file__)) - with open(os.path.join(basedir, - '../', + with open(os.path.join(TESTRUN_DIR, 'runtime/validation/faux-dev/result.json'), encoding='utf-8') as f: return json.load(f) From dbc5d69ad06b47c9027ed2831f1f2f557dcc827d Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Mon, 4 Sep 2023 09:47:21 +0100 Subject: [PATCH 12/33] Install pytest --- testing/baseline/test_baseline | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/testing/baseline/test_baseline b/testing/baseline/test_baseline index 3674df86a..d7d362de9 100755 --- a/testing/baseline/test_baseline +++ b/testing/baseline/test_baseline @@ -23,7 +23,7 @@ ifconfig sudo apt-get update sudo apt-get install openvswitch-common openvswitch-switch tcpdump jq moreutils coreutils isc-dhcp-client -#pip3 install pytest +pip3 install pytest # Setup device network sudo ip link add dev endev0a type veth peer name endev0b From dab596b88f21433282ea27af8f38bc090a380a6a Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Mon, 4 Sep 2023 09:53:14 +0100 Subject: [PATCH 13/33] Create devices folder --- cmd/package | 3 +++ make/DEBIAN/postinst | 6 +----- 2 files changed, 4 insertions(+), 5 deletions(-) diff --git a/cmd/package b/cmd/package index b4ff3685e..c60f0926f 100755 --- a/cmd/package +++ b/cmd/package @@ -33,6 +33,9 @@ cp local/system.json.example $MAKE_SRC_DIR/usr/local/testrun/local/system.json.e # Copy framework and modules into testrun folder cp -r {framework,modules} $MAKE_SRC_DIR/usr/local/testrun +# Create device repository +mkdir 
$MAKE_SRC_DIR/usr/local/testrun/local/devices + # Build .deb file dpkg-deb --build --root-owner-group make diff --git a/make/DEBIAN/postinst b/make/DEBIAN/postinst index c71119e8b..b79285491 100755 --- a/make/DEBIAN/postinst +++ b/make/DEBIAN/postinst @@ -28,10 +28,6 @@ deactivate # Copy the default configuration echo Copying default configuration -cp -u local/system.json.example local/system.json - -# Set owner of local dir to sudo user -echo Correcting file ownership -chown -R $SUDO_USER /usr/local/testrun/local +cp -u $TESTRUN_DIR/local/system.json.example $TESTRUN_DIR/local/system.json echo Successfully installed Testrun From 2c80242d2f695ab846e2fbeb41feccc23dbc5586 Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Mon, 4 Sep 2023 10:16:53 +0100 Subject: [PATCH 14/33] Update test_tests --- testing/tests/system.json | 8 ++++++++ testing/tests/test_tests | 33 ++++++++++++--------------------- 2 files changed, 20 insertions(+), 21 deletions(-) create mode 100644 testing/tests/system.json diff --git a/testing/tests/system.json b/testing/tests/system.json new file mode 100644 index 000000000..8717cdbfe --- /dev/null +++ b/testing/tests/system.json @@ -0,0 +1,8 @@ +{ + "network": { + "device_intf": "endev0a", + "internet_intf": "eth0" + }, + "log_level": "DEBUG", + "monitor_period": 30 +} \ No newline at end of file diff --git a/testing/tests/test_tests b/testing/tests/test_tests index 04f76daee..75c2dc5cb 100755 --- a/testing/tests/test_tests +++ b/testing/tests/test_tests @@ -17,6 +17,7 @@ set -o xtrace ip a TEST_DIR=/tmp/results +TESTRUN_DIR=/usr/local/testrun MATRIX=testing/tests/test_tests.json mkdir -p $TEST_DIR @@ -39,21 +40,11 @@ sudo /usr/share/openvswitch/scripts/ovs-ctl start # Build Test Container sudo docker build ./testing/docker/ci_test_device1 -t ci_test_device1 -f ./testing/docker/ci_test_device1/Dockerfile -cat <local/system.json -{ - "network": { - "device_intf": "endev0a", - "internet_intf": "eth0" - }, - "log_level": "DEBUG", - "monitor_period": 30 -} -EOF +# Copy configuration to testrun +sudo cp testing/tests/system.json $TESTRUN_DIR/local/system.json -mkdir -p local/devices -cp -r testing/device_configs/* local/devices - -sudo cmd/install +# Copy device configs to testrun +sudo cp -r testing/device_configs/* $TESTRUN_DIR/local/devices TESTERS=$(jq -r 'keys[]' $MATRIX) for tester in $TESTERS; do @@ -65,7 +56,7 @@ for tester in $TESTERS; do args=$(jq -r .$tester.args $MATRIX) touch $testrun_log - sudo timeout 900 bin/testrun --single-intf --no-ui --no-validate > $testrun_log 2>&1 & + sudo timeout 900 testrun --single-intf --no-ui --no-validate > $testrun_log 2>&1 & TPID=$! # Time to wait for testrun to be ready @@ -78,7 +69,7 @@ for tester in $TESTERS; do if [[ ! -d /proc/$TPID ]]; then cat $testrun_log - echo "error encountered starting test run" + echo "Error encountered starting test run" exit 1 fi @@ -87,7 +78,7 @@ for tester in $TESTERS; do if [[ $i -eq $WAITING ]]; then cat $testrun_log - echo "failed after waiting $WAITING seconds for test-run start" + echo "Failed after waiting $WAITING seconds for testrun to start" exit 1 fi @@ -104,15 +95,15 @@ for tester in $TESTERS; do wait $TPID # Following line indicates that tests are completed but wait till it exits # Completed running test modules on device with mac addr 7e:41:12:d2:35:6a - #Change this line! - LOGGER.info(f"""Completed running test modules on device + # Change this line! 
- LOGGER.info(f"""Completed running test modules on device # with mac addr {device.mac_addr}""") - ls runtime - more runtime/network/*.log + ls $TESTRUN_DIR/runtime + more $TESTRUN_DIR/runtime/network/*.log sudo docker kill $tester sudo docker logs $tester | cat - cp runtime/test/${ethmac//:/}/report.json $TEST_DIR/$tester.json + cp $TESTRUN_DIR/runtime/test/${ethmac//:/}/report.json $TEST_DIR/$tester.json more $TEST_DIR/$tester.json more $testrun_log From d6f5c34dea0b6d3c3e6d4885237ee22560d45971 Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Mon, 4 Sep 2023 10:30:14 +0100 Subject: [PATCH 15/33] Install testrun --- .github/workflows/testing.yml | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/.github/workflows/testing.yml b/.github/workflows/testing.yml index 46a34c456..8a187a9c9 100644 --- a/.github/workflows/testing.yml +++ b/.github/workflows/testing.yml @@ -32,6 +32,11 @@ jobs: steps: - name: Checkout source uses: actions/checkout@v2.3.4 + shell: bash {0} + run: cmd/package + - name: Install Testrun + shell: bash {0} + run: sudo dpkg -i testrun*.deb - name: Run tests shell: bash {0} run: testing/tests/test_tests From 47c74262cae8f740238ed60d3c83803980c58489 Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Mon, 4 Sep 2023 10:37:37 +0100 Subject: [PATCH 16/33] Add missing name --- .github/workflows/testing.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/.github/workflows/testing.yml b/.github/workflows/testing.yml index 8a187a9c9..88430b2b6 100644 --- a/.github/workflows/testing.yml +++ b/.github/workflows/testing.yml @@ -32,6 +32,7 @@ jobs: steps: - name: Checkout source uses: actions/checkout@v2.3.4 + - name: Package Testrun shell: bash {0} run: cmd/package - name: Install Testrun From bef97f653719b13eb23d42f12fb2d92f791962e6 Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Mon, 4 Sep 2023 10:58:11 +0100 Subject: [PATCH 17/33] Allow more time for Testrun to start --- testing/tests/test_tests | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/testing/tests/test_tests b/testing/tests/test_tests index 75c2dc5cb..0506ea191 100755 --- a/testing/tests/test_tests +++ b/testing/tests/test_tests @@ -60,7 +60,7 @@ for tester in $TESTERS; do TPID=$! # Time to wait for testrun to be ready - WAITING=600 + WAITING=700 for i in `seq 1 $WAITING`; do tail -1 $testrun_log if [[ -n $(fgrep "Waiting for devices on the network" $testrun_log) ]]; then From b10bb29a8b18c937348fd191048ec1dd2138325a Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Wed, 6 Sep 2023 15:24:54 +0100 Subject: [PATCH 18/33] Add depends --- cmd/install | 8 ++++---- cmd/package | 2 +- make/DEBIAN/control | 1 + make/DEBIAN/postinst | 14 +++++++------- 4 files changed, 13 insertions(+), 12 deletions(-) diff --git a/cmd/install b/cmd/install index 6b5e75e7e..25f60b26d 100755 --- a/cmd/install +++ b/cmd/install @@ -14,6 +14,8 @@ # See the License for the specific language governing permissions and # limitations under the License. 
+echo Installing application dependencies + TESTRUN_DIR=/usr/local/testrun cd $TESTRUN_DIR @@ -26,8 +28,6 @@ pip3 install -r framework/requirements.txt # Copy the default configuration cp -u local/system.json.example local/system.json -# Dependency for printing reports to pdf -# required by python package weasyprint -sudo apt-get install libpangocairo-1.0-0 - deactivate + +echo Finished installing Testrun diff --git a/cmd/package b/cmd/package index c60f0926f..dcb16198b 100755 --- a/cmd/package +++ b/cmd/package @@ -34,7 +34,7 @@ cp local/system.json.example $MAKE_SRC_DIR/usr/local/testrun/local/system.json.e cp -r {framework,modules} $MAKE_SRC_DIR/usr/local/testrun # Create device repository -mkdir $MAKE_SRC_DIR/usr/local/testrun/local/devices +mkdir -p $MAKE_SRC_DIR/usr/local/testrun/local/devices # Build .deb file dpkg-deb --build --root-owner-group make diff --git a/make/DEBIAN/control b/make/DEBIAN/control index 481f87a9f..20463e996 100644 --- a/make/DEBIAN/control +++ b/make/DEBIAN/control @@ -3,3 +3,4 @@ Version: 1.0 Architecture: amd64 Maintainer: Google Description: Automatically verify IoT device network behavior +Depends: libpangocairo-1.0-0, openvswitch-common, openvswitch-switch, python3 diff --git a/make/DEBIAN/postinst b/make/DEBIAN/postinst index b79285491..25f60b26d 100755 --- a/make/DEBIAN/postinst +++ b/make/DEBIAN/postinst @@ -14,6 +14,8 @@ # See the License for the specific language governing permissions and # limitations under the License. +echo Installing application dependencies + TESTRUN_DIR=/usr/local/testrun cd $TESTRUN_DIR @@ -21,13 +23,11 @@ python3 -m venv venv source venv/bin/activate -echo Installing python dependencies -pip3 install -r framework/requirements.txt >> /dev/null - -deactivate +pip3 install -r framework/requirements.txt # Copy the default configuration -echo Copying default configuration -cp -u $TESTRUN_DIR/local/system.json.example $TESTRUN_DIR/local/system.json +cp -u local/system.json.example local/system.json + +deactivate -echo Successfully installed Testrun +echo Finished installing Testrun From 90bb33e1a326374ca8105c040cbe6ae60c20c62a Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Wed, 6 Sep 2023 15:56:07 +0100 Subject: [PATCH 19/33] Install dependencies --- .github/workflows/testing.yml | 10 ++++++++-- cmd/prepare | 21 +++++++++++++++++++++ 2 files changed, 29 insertions(+), 2 deletions(-) create mode 100755 cmd/prepare diff --git a/.github/workflows/testing.yml b/.github/workflows/testing.yml index 88430b2b6..093c7fc14 100644 --- a/.github/workflows/testing.yml +++ b/.github/workflows/testing.yml @@ -14,13 +14,16 @@ jobs: steps: - name: Checkout source uses: actions/checkout@v2.3.4 + - name: Install dependencies + shell: bash {0} + run: cmd/prepare - name: Package Testrun shell: bash {0} run: cmd/package - name: Install Testrun shell: bash {0} run: sudo dpkg -i testrun*.deb - - name: Run tests + - name: Run baseline tests shell: bash {0} run: testing/baseline/test_baseline @@ -32,6 +35,9 @@ jobs: steps: - name: Checkout source uses: actions/checkout@v2.3.4 + - name: Install dependencies + shell: bash {0} + run: cmd/prepare - name: Package Testrun shell: bash {0} run: cmd/package @@ -49,6 +55,6 @@ jobs: steps: - name: Checkout source uses: actions/checkout@v2.3.4 - - name: Run tests + - name: Run pylint shell: bash {0} run: testing/pylint/test_pylint diff --git a/cmd/prepare b/cmd/prepare new file mode 100755 index 000000000..17dd026c9 --- /dev/null +++ b/cmd/prepare @@ -0,0 +1,21 @@ +#!/bin/bash -e + +# Copyright 2023 Google LLC +# 
+# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +echo Installing system dependencies + +sudo apt-get install openvswitch-common openvswitch-switch python3 libpangocairo-1.0-0 + +echo Finished installing system dependencies From 25df08c11450f8292ee84926c70251a9d75a45a0 Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Thu, 7 Sep 2023 14:14:08 +0100 Subject: [PATCH 20/33] Build containers separately --- cmd/build | 52 +++++++++++++++++++ cmd/install | 3 ++ cmd/package | 15 ++++-- cmd/prepare | 3 ++ framework/python/src/core/testrun.py | 21 -------- .../src/net_orc/network_orchestrator.py | 2 +- .../python/src/net_orc/network_validator.py | 2 +- .../python/src/test_orc/test_orchestrator.py | 2 +- make/DEBIAN/postinst | 3 ++ modules/ui/ui.Dockerfile | 8 ++- 10 files changed, 83 insertions(+), 28 deletions(-) create mode 100755 cmd/build diff --git a/cmd/build b/cmd/build new file mode 100755 index 000000000..17e61921f --- /dev/null +++ b/cmd/build @@ -0,0 +1,52 @@ +#!/bin/bash -e + +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Builds all docker images +echo Building docker images + +# Build user interface +echo Building user interface +mkdir -p build/ui +docker build -t test-run/ui -f modules/ui/ui.Dockerfile . > build/ui/ui.log 2>&1 + +# Build network modules +echo Building network modules +mkdir -p build/network +for dir in modules/network/* ; do + module=$(basename $dir) + echo Building network module $module... + docker build -f modules/network/$module/$module.Dockerfile -t test-run/$module . > build/network/$module.log 2>&1 +done + +# Build validators +echo Building network validators +mkdir -p build/devices +for dir in modules/devices/* ; do + module=$(basename $dir) + echo Building validator module $module... + docker build -f modules/devices/$module/$module.Dockerfile -t test-run/$module-dev . > build/devices/$module.log 2>&1 +done + +# Build test modules +echo Building test modules +mkdir -p build/test +for dir in modules/test/* ; do + module=$(basename $dir) + echo Building test module $module... + docker build -f modules/test/$module/$module.Dockerfile -t test-run/$module-test . 
> build/test/$module.log 2>&1 +done + +echo Finished building modules \ No newline at end of file diff --git a/cmd/install b/cmd/install index 25f60b26d..929f9136c 100755 --- a/cmd/install +++ b/cmd/install @@ -30,4 +30,7 @@ cp -u local/system.json.example local/system.json deactivate +# Build docker images +sudo cmd/build + echo Finished installing Testrun diff --git a/cmd/package b/cmd/package index dcb16198b..7af76a3bf 100755 --- a/cmd/package +++ b/cmd/package @@ -14,6 +14,8 @@ # See the License for the specific language governing permissions and # limitations under the License. +# Creates a package for Testrun + MAKE_SRC_DIR=make # Copy testrun script to /bin @@ -26,16 +28,23 @@ mkdir -p $MAKE_SRC_DIR/usr/local/testrun # Create postinst script cp cmd/install $MAKE_SRC_DIR/DEBIAN/postinst +# Copy other commands +mkdir -p $MAKE_SRC_DIR/usr/local/testrun/cmd +cp cmd/{prepare,build} $MAKE_SRC_DIR/usr/local/testrun/cmd + # Create local folder mkdir -p $MAKE_SRC_DIR/usr/local/testrun/local cp local/system.json.example $MAKE_SRC_DIR/usr/local/testrun/local/system.json.example -# Copy framework and modules into testrun folder -cp -r {framework,modules} $MAKE_SRC_DIR/usr/local/testrun - # Create device repository mkdir -p $MAKE_SRC_DIR/usr/local/testrun/local/devices +# Copy root_certs folder +cp -r local/root_certs $MAKE_SRC_DIR/usr/local/testrun/local/root_certs + +# Copy framework and modules into testrun folder +cp -r {framework,modules} $MAKE_SRC_DIR/usr/local/testrun + # Build .deb file dpkg-deb --build --root-owner-group make diff --git a/cmd/prepare b/cmd/prepare index 17dd026c9..950051bd3 100755 --- a/cmd/prepare +++ b/cmd/prepare @@ -14,6 +14,9 @@ # See the License for the specific language governing permissions and # limitations under the License. +# Optional script to prepare your system for use with Testrun. 
+# Installs system dependencies + echo Installing system dependencies sudo apt-get install openvswitch-common openvswitch-switch python3 libpangocairo-1.0-0 diff --git a/framework/python/src/core/testrun.py b/framework/python/src/core/testrun.py index 28a31e35d..e10c888ae 100644 --- a/framework/python/src/core/testrun.py +++ b/framework/python/src/core/testrun.py @@ -389,15 +389,10 @@ def get_session(self): def _set_status(self, status): self.get_session().set_status(status) - def get_session(self): - return self._session - def start_ui(self): LOGGER.info('Starting UI') - self._build_ui() - client = docker.from_env() client.containers.run( @@ -414,22 +409,6 @@ def start_ui(self): # TODO: Make port configurable LOGGER.info('User interface is ready on http://localhost:8080') - def _build_ui(self): - - # TODO: Improve this process - build_file = os.path.join(root_dir, - 'modules', - 'ui', - 'ui.Dockerfile') - client = docker.from_env() - - LOGGER.debug('Building user interface') - - client.images.build(dockerfile=build_file, - path=root_dir, - forcerm=True, - tag='test-run/ui') - def _stop_ui(self): client = docker.from_env() try: diff --git a/framework/python/src/net_orc/network_orchestrator.py b/framework/python/src/net_orc/network_orchestrator.py index e0c99d0ac..edf2e6fcd 100644 --- a/framework/python/src/net_orc/network_orchestrator.py +++ b/framework/python/src/net_orc/network_orchestrator.py @@ -114,7 +114,7 @@ def start_network(self): """Start the virtual testing network.""" LOGGER.info('Starting network') - self.build_network_modules() + #self.build_network_modules() self.create_net() self.start_network_services() diff --git a/framework/python/src/net_orc/network_validator.py b/framework/python/src/net_orc/network_validator.py index 2a4112764..3866bd3ae 100644 --- a/framework/python/src/net_orc/network_validator.py +++ b/framework/python/src/net_orc/network_validator.py @@ -56,7 +56,7 @@ def start(self): util.run_command(f'chown -R {host_user} {OUTPUT_DIR}') self._load_devices() - self._build_network_devices() + #self._build_network_devices() self._start_network_devices() def stop(self, kill=False): diff --git a/framework/python/src/test_orc/test_orchestrator.py b/framework/python/src/test_orc/test_orchestrator.py index 61dfb2e19..8fb0b1c85 100644 --- a/framework/python/src/test_orc/test_orchestrator.py +++ b/framework/python/src/test_orc/test_orchestrator.py @@ -65,7 +65,7 @@ def start(self): os.makedirs(DEVICE_ROOT_CERTS, exist_ok=True) self._load_test_modules() - self.build_test_modules() + #self.build_test_modules() def stop(self): """Stop any running tests""" diff --git a/make/DEBIAN/postinst b/make/DEBIAN/postinst index 25f60b26d..929f9136c 100755 --- a/make/DEBIAN/postinst +++ b/make/DEBIAN/postinst @@ -30,4 +30,7 @@ cp -u local/system.json.example local/system.json deactivate +# Build docker images +sudo cmd/build + echo Finished installing Testrun diff --git a/modules/ui/ui.Dockerfile b/modules/ui/ui.Dockerfile index 8fefa5293..3d8e9071b 100644 --- a/modules/ui/ui.Dockerfile +++ b/modules/ui/ui.Dockerfile @@ -13,9 +13,15 @@ # limitations under the License. # Image name: test-run/ui +FROM node:latest as build + +WORKDIR modules/ui +COPY modules/ui/ . 
+RUN npm install && npm run build + FROM nginx:1.25.1 -COPY modules/ui/dist/ /usr/share/nginx/html +COPY --from=build modules/ui/dist/ /usr/share/nginx/html EXPOSE 8080 From 3916c599ec3f822bc8881e9faf83b6d0cc2f11cc Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Thu, 7 Sep 2023 14:15:18 +0100 Subject: [PATCH 21/33] Create root_certs folder --- cmd/package | 1 + 1 file changed, 1 insertion(+) diff --git a/cmd/package b/cmd/package index 7af76a3bf..d134896d3 100755 --- a/cmd/package +++ b/cmd/package @@ -40,6 +40,7 @@ cp local/system.json.example $MAKE_SRC_DIR/usr/local/testrun/local/system.json.e mkdir -p $MAKE_SRC_DIR/usr/local/testrun/local/devices # Copy root_certs folder +mkdir -p local/root_certs cp -r local/root_certs $MAKE_SRC_DIR/usr/local/testrun/local/root_certs # Copy framework and modules into testrun folder From e972e231b85be431098cf284f474e2bcfd395a4c Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Thu, 7 Sep 2023 14:23:07 +0100 Subject: [PATCH 22/33] Correct tag name --- cmd/build | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cmd/build b/cmd/build index 17e61921f..7e69393c8 100755 --- a/cmd/build +++ b/cmd/build @@ -37,7 +37,7 @@ mkdir -p build/devices for dir in modules/devices/* ; do module=$(basename $dir) echo Building validator module $module... - docker build -f modules/devices/$module/$module.Dockerfile -t test-run/$module-dev . > build/devices/$module.log 2>&1 + docker build -f modules/devices/$module/$module.Dockerfile -t test-run/$module . > build/devices/$module.log 2>&1 done # Build test modules From b001259cfd4314c2786272ecde29ac0a747b746b Mon Sep 17 00:00:00 2001 From: Jacob Boddey Date: Mon, 11 Sep 2023 14:07:40 +0100 Subject: [PATCH 23/33] Update test api --- cmd/install | 2 - modules/ui/src/app/app.component.html | 1 + modules/ui/src/app/app.component.spec.ts | 10 +- modules/ui/src/app/app.component.ts | 5 +- modules/ui/src/app/app.module.ts | 4 +- .../device-tests/device-tests.component.html | 1 + .../device-tests/device-tests.component.scss | 9 ++ .../download-report.component.scss | 4 + .../device-form/device-form.component.html | 3 + .../device-form/device-form.component.spec.ts | 42 +++---- .../device-form/device-form.component.ts | 18 ++- ...rmat.validator.ts => device.validators.ts} | 18 ++- .../device-repository.component.html | 2 +- .../device-repository.component.scss | 7 +- .../ui/src/app/history/history.component.html | 41 +++--- .../ui/src/app/history/history.component.scss | 12 +- .../src/app/history/history.component.spec.ts | 34 ++++- .../ui/src/app/history/history.component.ts | 24 +++- modules/ui/src/app/mocks/progress.mock.ts | 14 ++- modules/ui/src/app/model/device.ts | 3 +- .../ui/src/app/notification.service.spec.ts | 46 +++++++ modules/ui/src/app/notification.service.ts | 17 +++ .../progress-breadcrumbs.component.html | 3 - .../progress-breadcrumbs.component.scss | 14 ++- .../progress-initiate-form.component.html | 2 +- .../progress-initiate-form.component.spec.ts | 118 ++++++++++++++++-- .../progress-initiate-form.component.ts | 69 +++++++++- .../progress-table.component.scss | 5 +- .../src/app/progress/progress.component.html | 4 +- .../src/app/progress/progress.component.scss | 20 ++- .../app/progress/progress.component.spec.ts | 7 +- .../ui/src/app/progress/progress.component.ts | 8 +- modules/ui/src/app/test-run.service.spec.ts | 96 +++++--------- modules/ui/src/app/test-run.service.ts | 19 ++- modules/ui/src/index.html | 3 + modules/ui/src/styles.scss | 31 +++++ modules/ui/src/theming/theme.scss | 10 +- 
testing/api/system.json | 7 ++ testing/api/test_api | 16 +-- testing/api/test_api.py | 2 +- 40 files changed, 536 insertions(+), 215 deletions(-) rename modules/ui/src/app/device-repository/device-form/{device-string-format.validator.ts => device.validators.ts} (54%) create mode 100644 modules/ui/src/app/notification.service.spec.ts create mode 100644 modules/ui/src/app/notification.service.ts create mode 100644 testing/api/system.json diff --git a/cmd/install b/cmd/install index beab3d3d1..929f9136c 100755 --- a/cmd/install +++ b/cmd/install @@ -34,5 +34,3 @@ deactivate sudo cmd/build echo Finished installing Testrun - -deactivate diff --git a/modules/ui/src/app/app.component.html b/modules/ui/src/app/app.component.html index de0baf85f..2edb798b0 100644 --- a/modules/ui/src/app/app.component.html +++ b/modules/ui/src/app/app.component.html @@ -24,6 +24,7 @@