From 45e531cddc961dfacbc989f9e9f386f8b01b8907 Mon Sep 17 00:00:00 2001 From: Brian Rosenberg Date: Fri, 26 May 2023 13:57:42 -0400 Subject: [PATCH 1/2] Remove "Task Response Aggregation Route" --- docs/docs/Workflow-Manager-Architecture.md | 8 +-- .../Workflow-Manager-Architecture/index.html | 8 +-- docs/site/index.html | 2 +- docs/site/search/search_index.json | 11 ++-- docs/site/sitemap.xml | 50 +++++++++---------- 5 files changed, 33 insertions(+), 46 deletions(-) diff --git a/docs/docs/Workflow-Manager-Architecture.md b/docs/docs/Workflow-Manager-Architecture.md index 6bc5a88bd2e4..76f964ad1efa 100644 --- a/docs/docs/Workflow-Manager-Architecture.md +++ b/docs/docs/Workflow-Manager-Architecture.md @@ -144,11 +144,7 @@ Once the job is completed, this route converts the aggregated track and detectio ## Detection Response Route -The **Detection Response Route** is the re-entry point to the WFM. It unmarshals the protobuf responses from components and converts them into the Track and Detection objects used within the WFM. It then optionally performs each of the following actions: track merging, detection padding, detecting moving objects, and artifact extraction. It stores the track and detection data in the Redis database and optionally uploads artifacts to an object storage server. - -## Task Response Aggregation Route (Not Shown) - -The **Task Response Aggregation Route** is one of the exit points for the Detection Response Route. It waits until all of the sub-job responses have been retrieved for the current stage (task) of the pipeline, then it invokes the Job Router Route to see if any additional processing needs to be done. +The **Detection Response Route** is the re-entry point to the WFM. It unmarshals the protobuf responses from components and converts them into the Track and Detection objects used within the WFM. It then optionally performs each of the following actions: track merging, detection padding, detecting moving objects, and artifact extraction. It stores the track and detection data in the Redis database and optionally uploads artifacts to an object storage server. Then, it invokes the Job Router Route to see if any additional processing needs to be done. ## Markup Response Route (Not Shown) @@ -156,6 +152,6 @@ Markup files are copies of the initial media with any detections visually highli ## Other Routes (Not Shown) -Additionally, there is a **Detection Cancellation Route** for cancelling detection requests sent to components, and a **Markup Cancellation Route** for cancelling requests sent to the Markup component. +Additionally, there is a **Detection Cancellation Route** for cancelling detection requests sent to components, and a **Markup Cancellation Route** for cancelling requests sent to the Markup component. Also, there is a **DLQ Route** for handling messages that appear in the ActiveMQ Dead Letter Queue (DLQ), which usually indicates a component failure or inability to deliver a message. In these cases, the job is terminated with an error condition. diff --git a/docs/site/Workflow-Manager-Architecture/index.html b/docs/site/Workflow-Manager-Architecture/index.html index 1ec7ebeab29a..40b5fc116802 100644 --- a/docs/site/Workflow-Manager-Architecture/index.html +++ b/docs/site/Workflow-Manager-Architecture/index.html @@ -206,8 +206,6 @@
  • Detection Response Route
  • - Task Response Aggregation Route (Not Shown)
  • Markup Response Route (Not Shown)
  • Other Routes (Not Shown)
@@ -419,13 +417,11 @@

    Job Router Route

    This route may be invoked multiple times as future routes redirect back to the Job Router so that the job can be processed by the next component in the pipeline.

    Once the job is completed, this route converts the aggregated track and detection data in Redis into a JSON output format. It then clears out all data in Redis for the job, updates the final job status in the SQL database, optionally uploads the JSON output object to an object storage server, and optionally makes a callback to the endpoint listed in the job request.
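
    For illustration only, a route of this kind might be expressed in the Camel Java DSL roughly as follows. The queue URIs and the header name are invented placeholders, not the actual WFM route definitions.

```java
import org.apache.camel.builder.RouteBuilder;

// Hypothetical sketch of re-entrant, URI-based routing: each time a stage
// completes, the message returns to the job router, which either dispatches
// the next stage or hands off to job completion.
public class JobRouterSketch extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:MPF.JOB_ROUTER")                        // placeholder queue name
            .choice()
                .when(header("REMAINING_TASKS").isGreaterThan(0))
                    .to("activemq:MPF.DETECTION_REQUESTS")     // dispatch next stage to components
                .otherwise()
                    .to("activemq:MPF.JOB_COMPLETION");        // assemble JSON output and clean up
    }
}
```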

    Detection Response Route

    - The Detection Response Route is the re-entry point to the WFM. It unmarshals the protobuf responses from components and converts them into the Track and Detection objects used within the WFM. It then optionally performs each of the following actions: track merging, detection padding, detecting moving objects, and artifact extraction. It stores the track and detection data in the Redis database and optionally uploads artifacts to an object storage server.

    - Task Response Aggregation Route (Not Shown)

    - The Task Response Aggregation Route is one of the exit points for the Detection Response Route. It waits until all of the sub-job responses have been retrieved for the current stage (task) of the pipeline, then it invokes the Job Router Route to see if any additional processing needs to be done.

    + The Detection Response Route is the re-entry point to the WFM. It unmarshals the protobuf responses from components and converts them into the Track and Detection objects used within the WFM. It then optionally performs each of the following actions: track merging, detection padding, detecting moving objects, and artifact extraction. It stores the track and detection data in the Redis database and optionally uploads artifacts to an object storage server. Then, it invokes the Job Router Route to see if any additional processing needs to be done.
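
    To make the track-merging step concrete, here is a simplified, self-contained sketch that merges tracks whose frame ranges touch or overlap across segment boundaries. The Track type and the merge rule are stand-ins; the real WFM track objects carry detections, confidence values, and properties, and the merge behavior is configurable.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Simplified illustration only; not the actual WFM merging code.
public class TrackMergeSketch {
    record Track(int startFrame, int stopFrame) {}

    // Merge tracks whose frame ranges overlap or abut after sorting by start frame.
    static List<Track> merge(List<Track> tracks) {
        List<Track> sorted = new ArrayList<>(tracks);
        sorted.sort(Comparator.comparingInt(Track::startFrame));
        List<Track> merged = new ArrayList<>();
        for (Track t : sorted) {
            Track last = merged.isEmpty() ? null : merged.get(merged.size() - 1);
            if (last != null && t.startFrame() <= last.stopFrame() + 1) {
                merged.set(merged.size() - 1,
                        new Track(last.startFrame(), Math.max(last.stopFrame(), t.stopFrame())));
            } else {
                merged.add(t);
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        // Tracks [100,150] and [0,99] abut across a segment boundary and merge into [0,150].
        System.out.println(merge(List.of(new Track(100, 150), new Track(0, 99))));
    }
}
```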

    Markup Response Route (Not Shown)

    Markup files are copies of the initial media with any detections visually highlighted in the frame. The Markup Response Route optionally uploads the markup files generated by the Markup component to an object storage server and persists the locations of these markup files in the SQL database.

    Other Routes (Not Shown)

    - Additionally, there is a Detection Cancellation Route for cancelling detection requests sent to components, and a Markup Cancellation Route for cancelling requests sent to the Markup component.

    + Additionally, there is a Detection Cancellation Route for cancelling detection requests sent to components, and a Markup Cancellation Route for cancelling requests sent to the Markup component.

    + Also, there is a DLQ Route for handling messages that appear in the ActiveMQ Dead Letter Queue (DLQ), which usually indicates a component failure or inability to deliver a message. In these cases, the job is terminated with an error condition.
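
    A DLQ consumer of this kind might look roughly like the sketch below. ActiveMQ's default dead letter queue is named "ActiveMQ.DLQ", but the job-ID header and the status-updating bean are hypothetical placeholders.

```java
import org.apache.camel.builder.RouteBuilder;

// Hypothetical sketch, not the actual WFM DLQ route.
public class DlqRouteSketch extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:queue:ActiveMQ.DLQ")                       // ActiveMQ's default DLQ
            .log("Dead-lettered message for job ${header.JobId}") // placeholder header name
            .to("bean:jobStatusService?method=failJob");          // placeholder bean and method
    }
}
```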

    diff --git a/docs/site/index.html b/docs/site/index.html index 9be26810f18d..53218998d3a3 100644 --- a/docs/site/index.html +++ b/docs/site/index.html @@ -380,5 +380,5 @@

    Overview

    diff --git a/docs/site/search/search_index.json b/docs/site/search/search_index.json index 4941279f129d..180d9ad857c5 100644 --- a/docs/site/search/search_index.json +++ b/docs/site/search/search_index.json @@ -1302,7 +1302,7 @@ }, { "location": "/Workflow-Manager-Architecture/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nINFO:\n This document describes the Workflow Manager architecture for C++ and Java batch processing. The Python batch processing architecture and C++ stream processing architecture use many of the same elements and concepts.\n\n\n\nWorkflow Manager Overview\n\n\nOpenMPF consists of three major pieces:\n\n\n\n\nA collection of \nComponents\n which process media\n\n\nA \nNode Manager\n, which launches and monitors running components in the system in a non-Docker deployment\n\n\nThe \nWorkflow Manager\n (WFM), which allows for the creation of jobs and manages the flow through active components\n\n\n\n\nThese pieces are supported by a number of modules which provide shared functionality, as shown in the dependency diagram below:\n\n\n\n\nThere are three general functional areas in the WFM:\n\n\n\n\nThe \nControllers\n are the primary entry point, accepting REST requests which trigger actions by the WFM\n\n\nThe \nWFM Services\n, which handle administrative tasks such as pipeline creation, node management, and log retrieval\n\n\nJob Management\n, which uses Camel routes to pass a job through the levels of processing\n\n\n\n\nThere are two different databases used by the WFM:\n\n\n\n\nA \nSQL database\n stores persistent data about jobs. This data includes:\n\n\nThe job ID\n\n\nThe start and stop time of the job\n\n\nThe exit status of the job\n\n\nJob priority\n\n\nJob input/outputs\n\n\n\n\n\n\nA \nRedis database\n for storing track and detection data generated by components as they process parts of the job in various stages of the pipeline.\n\n\n\n\nThe diagram below shows the functional areas of the WFM, the databases used by the WFM, and communication with components:\n\n\n\n\nControllers / Services\n\n\nThe controllers are all located \nhere\n.\n\n\nEvery controller provides a collection of REST endpoints which allow access either to a WFM service or to the job management flow. 
Only the JobController enters the job management flow.\n\n\nBasic Controllers\n\n\nThe table below lists the controllers:\n\n\n\n\n\n\n\n\nController Class\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nAdminComponentRegistrationController\n\n\nHandles component registration\n\n\n\n\n\n\nAdminLogsController\n\n\nAccesses the log content via REST\n\n\n\n\n\n\nAdminPropertySettingsController\n\n\nAllows access and modification of system properties\n\n\n\n\n\n\nAdminStatisticsController\n\n\nGenerates job statistics\n\n\n\n\n\n\nAtmosphereController\n\n\nUses Atmosphere to manage server-side push\n\n\n\n\n\n\nBootoutController\n\n\nHandles bootouts when a second session is opened by the same user\n\n\n\n\n\n\nHomeController\n\n\nManages index page and version information\n\n\n\n\n\n\nJobController\n\n\nManages job creation and interaction\n\n\n\n\n\n\nLoginController\n\n\nManages login/logout and authentication\n\n\n\n\n\n\nMarkupController\n\n\nHandles retrieving markup results\n\n\n\n\n\n\nMediaController\n\n\nHandles uploading and downloading media files\n\n\n\n\n\n\nNodeController\n\n\nManages component services across nodes in a non-Docker deployment\n\n\n\n\n\n\nPipelineController\n\n\nHandles the creation and deletion of actions, tasks, and pipelines\n\n\n\n\n\n\nServerMediaController\n\n\nEnables selection and deselection of files at a directory level\n\n\n\n\n\n\nSystemMessageController\n\n\nManages system level messages, such as notifying users that a server restart is needed\n\n\n\n\n\n\nTimeoutController\n\n\nManages session timeouts\n\n\n\n\n\n\n\n\nThe following sections describe some of the controllers in more detail.\n\n\nAdminComponentRegistrationController\n\n\nIn a non-Docker deployment, components can be uploaded as tar.gz packages containing all necessary component data. For more information on components, read \nOpenMPF Component API Overview\n.\n\n\nThe \nAdminComponentRegistrationController\n provides endpoints which allow:\n\n\n\n\nAccess to current component information\n\n\nUpload of new components\n\n\nRegistration and unregistration of components (note that components must be registered to be included in pipelines)\n\n\nDeletion of components\n\n\n\n\nJobController\n\n\nA job is a specific pipeline's tasks and actions applied to a set of media. The \nJobController\n allows:\n\n\n\n\nAccess to information about jobs in the system\n\n\nCreation of new jobs\n\n\nCancellation of existing jobs\n\n\nDownload of job output data\n\n\nResubmission of jobs (regardless of initial job status)\n\n\n\n\nMarkupController\n\n\nMarkup files are copies of the initial media input to a job with detections visually highlighted in the image or video frames. The \nMarkupController\n can provide lists of available Markup files, or it can download a specific file.\n\n\nMediaController\n\n\nThe \nMediaController\n enables upload and organization of media files within the WFM. It provides endpoints for media upload, and also for creation of folders to organize media files in the system. At this time, there are no endpoints which allow for deletion or reorganization of media files, since all media is shared by all users.\n\n\nNodeController\n\n\nOpenMPF uses multiple hosts to enable scalability and parallel processing. The \nNodeController\n provides access to host information and allows components to be deployed on nodes in a non-Docker deployment. One or more components can be installed on a node. The same component can be installed on multiple nodes. 
Each node can manage one or more services for each component.\n\n\nThe \nNodeController\n provides host information and component service deployment status. It also provides an endpoint to deploy a service on a node and an endpoint to stop a service.\n\n\nFor more information on nodes, please read the \nNode Configuration and Status\n section in the Development Environment Guide.\n\n\nPipelineController\n\n\nThe \nPipeline Controller\n allows for the creation, retrieval, and deletion of pipelines or any of their constituent parts. While actions, tasks, and pipelines may not be directly modified, they may be deleted and recreated.\n\n\nFor more information on pipelines, please read the \nCreate Custom Pipelines\n section in the User Guide.\n\n\nJob Management\n\n\nThe request to create a job begins at the \nJobController\n. From there, it is transformed and passed through multiple flows on its way to the component services. These services process the job then return information to the WFM for JSON output generation.\n\n\nThe diagram below shows the sequence of WFM operations for a job executing a single-stage pipeline.\n\n\n\n\nAfter the job request is validated and saved to the SQL database, it passes through multiple Apache Camel routes, each of which checks that the job is still valid (with no fatal errors or cancellations), and then invokes a series of transformations and processors specific to the route.\n\n\nApache Camel\n is an open-source framework that allows developers to build rule-based routing engines. Within OpenMPF, we use a \nJava DSL\n to define the routes. Every route functions independently, and communication between the routes is URI-based. OpenMPF uses ActiveMQ to handle its message traffic.\n\n\nMedia Retriever Route\n\n\nThe \nMedia Retriever Route\n ensures that the media for the job can all be found and accessed. It stores the media information on the server to ensure continued access.\n\n\nMedia Inspection Route\n\n\nThe \nMedia Inspection Route\n splits a single job with multiple media inputs into separate messages, one for each piece of media. For each piece of media, it collects metadata about the media, including MIME type, duration, frame rate, and orientation data.\n\n\nJob Router Route\n\n\nBy the time the \nJob Router Route\n route is invoked, the job has been persisted in the permanent SQL database.\n\n\nThis route uses the pipeline's flow to create the messages that are sent to the components. For large media files, it splits the job into smaller sub-jobs by logically breaking the media up into segments. Each segment has a start point and an end point (specified as a frame or time offset).\n\n\nThis route compiles properties for the job, media, and algorithm, and determines the next component that needs to be invoked. It then marshals the job into a serialized protobuf format and sends the message off to the component for processing.\n\n\nThis route may be invoked multiple times as future routes redirect back to the Job Router so that the job can be processed by the next component in the pipeline.\n\n\nOnce the job is completed, this route converts the aggregated track and detection data in Redis into a JSON output format. 
It then clears out all data in Redis for the job, updates the final job status in the SQL database, optionally uploads the JSON output object to an object storage server, and optionally makes a callback to the endpoint listed in the job request.\n\n\nDetection Response Route\n\n\nThe \nDetection Response Route\n is the re-entry point to the WFM. It unmarshals the protobuf responses from components and converts them into the Track and Detection objects used within the WFM. It then optionally performs each of the following actions: track merging, detection padding, detecting moving objects, and artifact extraction. It stores the track and detection data in the Redis database and optionally uploads artifacts to an object storage server.\n\n\nTask Response Aggregation Route \n(Not Shown)\n\n\nThe \nTask Response Aggregation Route\n is one of the exit points for the Detection Response Route. It waits until all of the sub-job responses have been retrieved for the current stage (task) of the pipeline, then it invokes the Job Router Route to see if any additional processing needs to be done.\n\n\nMarkup Response Route \n(Not Shown)\n\n\nMarkup files are copies of the initial media with any detections visually highlighted in the frame. The \nMarkup Response Route\n optionally uploads the markup files generated by the Markup component to an object storage server and persists the locations of these markup files in the SQL database.\n\n\nOther Routes \n(Not Shown)\n\n\nAdditionally, there is a \nDetection Cancellation Route\n for cancelling detection requests sent to components, and a \nMarkup Cancellation Route\n for cancelling requests sent to the Markup component. \n\n\nAlso, there is a \nDLQ Route\n for handling messages that appear in the ActiveMQ Dead Letter Queue (DLQ), which usually indicates a component failure or inability to deliver a message. In these cases, the job is terminated with an error condition.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nINFO:\n This document describes the Workflow Manager architecture for C++ and Java batch processing. The Python batch processing architecture and C++ stream processing architecture use many of the same elements and concepts.\n\n\n\nWorkflow Manager Overview\n\n\nOpenMPF consists of three major pieces:\n\n\n\n\nA collection of \nComponents\n which process media\n\n\nA \nNode Manager\n, which launches and monitors running components in the system in a non-Docker deployment\n\n\nThe \nWorkflow Manager\n (WFM), which allows for the creation of jobs and manages the flow through active components\n\n\n\n\nThese pieces are supported by a number of modules which provide shared functionality, as shown in the dependency diagram below:\n\n\n\n\nThere are three general functional areas in the WFM:\n\n\n\n\nThe \nControllers\n are the primary entry point, accepting REST requests which trigger actions by the WFM\n\n\nThe \nWFM Services\n, which handle administrative tasks such as pipeline creation, node management, and log retrieval\n\n\nJob Management\n, which uses Camel routes to pass a job through the levels of processing\n\n\n\n\nThere are two different databases used by the WFM:\n\n\n\n\nA \nSQL database\n stores persistent data about jobs. 
This data includes:\n\n\nThe job ID\n\n\nThe start and stop time of the job\n\n\nThe exit status of the job\n\n\nJob priority\n\n\nJob input/outputs\n\n\n\n\n\n\nA \nRedis database\n for storing track and detection data generated by components as they process parts of the job in various stages of the pipeline.\n\n\n\n\nThe diagram below shows the functional areas of the WFM, the databases used by the WFM, and communication with components:\n\n\n\n\nControllers / Services\n\n\nThe controllers are all located \nhere\n.\n\n\nEvery controller provides a collection of REST endpoints which allow access either to a WFM service or to the job management flow. Only the JobController enters the job management flow.\n\n\nBasic Controllers\n\n\nThe table below lists the controllers:\n\n\n\n\n\n\n\n\nController Class\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nAdminComponentRegistrationController\n\n\nHandles component registration\n\n\n\n\n\n\nAdminLogsController\n\n\nAccesses the log content via REST\n\n\n\n\n\n\nAdminPropertySettingsController\n\n\nAllows access and modification of system properties\n\n\n\n\n\n\nAdminStatisticsController\n\n\nGenerates job statistics\n\n\n\n\n\n\nAtmosphereController\n\n\nUses Atmosphere to manage server-side push\n\n\n\n\n\n\nBootoutController\n\n\nHandles bootouts when a second session is opened by the same user\n\n\n\n\n\n\nHomeController\n\n\nManages index page and version information\n\n\n\n\n\n\nJobController\n\n\nManages job creation and interaction\n\n\n\n\n\n\nLoginController\n\n\nManages login/logout and authentication\n\n\n\n\n\n\nMarkupController\n\n\nHandles retrieving markup results\n\n\n\n\n\n\nMediaController\n\n\nHandles uploading and downloading media files\n\n\n\n\n\n\nNodeController\n\n\nManages component services across nodes in a non-Docker deployment\n\n\n\n\n\n\nPipelineController\n\n\nHandles the creation and deletion of actions, tasks, and pipelines\n\n\n\n\n\n\nServerMediaController\n\n\nEnables selection and deselection of files at a directory level\n\n\n\n\n\n\nSystemMessageController\n\n\nManages system level messages, such as notifying users that a server restart is needed\n\n\n\n\n\n\nTimeoutController\n\n\nManages session timeouts\n\n\n\n\n\n\n\n\nThe following sections describe some of the controllers in more detail.\n\n\nAdminComponentRegistrationController\n\n\nIn a non-Docker deployment, components can be uploaded as tar.gz packages containing all necessary component data. For more information on components, read \nOpenMPF Component API Overview\n.\n\n\nThe \nAdminComponentRegistrationController\n provides endpoints which allow:\n\n\n\n\nAccess to current component information\n\n\nUpload of new components\n\n\nRegistration and unregistration of components (note that components must be registered to be included in pipelines)\n\n\nDeletion of components\n\n\n\n\nJobController\n\n\nA job is a specific pipeline's tasks and actions applied to a set of media. The \nJobController\n allows:\n\n\n\n\nAccess to information about jobs in the system\n\n\nCreation of new jobs\n\n\nCancellation of existing jobs\n\n\nDownload of job output data\n\n\nResubmission of jobs (regardless of initial job status)\n\n\n\n\nMarkupController\n\n\nMarkup files are copies of the initial media input to a job with detections visually highlighted in the image or video frames. 
The \nMarkupController\n can provide lists of available Markup files, or it can download a specific file.\n\n\nMediaController\n\n\nThe \nMediaController\n enables upload and organization of media files within the WFM. It provides endpoints for media upload, and also for creation of folders to organize media files in the system. At this time, there are no endpoints which allow for deletion or reorganization of media files, since all media is shared by all users.\n\n\nNodeController\n\n\nOpenMPF uses multiple hosts to enable scalability and parallel processing. The \nNodeController\n provides access to host information and allows components to be deployed on nodes in a non-Docker deployment. One or more components can be installed on a node. The same component can be installed on multiple nodes. Each node can manage one or more services for each component.\n\n\nThe \nNodeController\n provides host information and component service deployment status. It also provides an endpoint to deploy a service on a node and an endpoint to stop a service.\n\n\nFor more information on nodes, please read the \nNode Configuration and Status\n section in the Development Environment Guide.\n\n\nPipelineController\n\n\nThe \nPipeline Controller\n allows for the creation, retrieval, and deletion of pipelines or any of their constituent parts. While actions, tasks, and pipelines may not be directly modified, they may be deleted and recreated.\n\n\nFor more information on pipelines, please read the \nCreate Custom Pipelines\n section in the User Guide.\n\n\nJob Management\n\n\nThe request to create a job begins at the \nJobController\n. From there, it is transformed and passed through multiple flows on its way to the component services. These services process the job then return information to the WFM for JSON output generation.\n\n\nThe diagram below shows the sequence of WFM operations for a job executing a single-stage pipeline.\n\n\n\n\nAfter the job request is validated and saved to the SQL database, it passes through multiple Apache Camel routes, each of which checks that the job is still valid (with no fatal errors or cancellations), and then invokes a series of transformations and processors specific to the route.\n\n\nApache Camel\n is an open-source framework that allows developers to build rule-based routing engines. Within OpenMPF, we use a \nJava DSL\n to define the routes. Every route functions independently, and communication between the routes is URI-based. OpenMPF uses ActiveMQ to handle its message traffic.\n\n\nMedia Retriever Route\n\n\nThe \nMedia Retriever Route\n ensures that the media for the job can all be found and accessed. It stores the media information on the server to ensure continued access.\n\n\nMedia Inspection Route\n\n\nThe \nMedia Inspection Route\n splits a single job with multiple media inputs into separate messages, one for each piece of media. For each piece of media, it collects metadata about the media, including MIME type, duration, frame rate, and orientation data.\n\n\nJob Router Route\n\n\nBy the time the \nJob Router Route\n route is invoked, the job has been persisted in the permanent SQL database.\n\n\nThis route uses the pipeline's flow to create the messages that are sent to the components. For large media files, it splits the job into smaller sub-jobs by logically breaking the media up into segments. 
Each segment has a start point and an end point (specified as a frame or time offset).\n\n\nThis route compiles properties for the job, media, and algorithm, and determines the next component that needs to be invoked. It then marshals the job into a serialized protobuf format and sends the message off to the component for processing.\n\n\nThis route may be invoked multiple times as future routes redirect back to the Job Router so that the job can be processed by the next component in the pipeline.\n\n\nOnce the job is completed, this route converts the aggregated track and detection data in Redis into a JSON output format. It then clears out all data in Redis for the job, updates the final job status in the SQL database, optionally uploads the JSON output object to an object storage server, and optionally makes a callback to the endpoint listed in the job request.\n\n\nDetection Response Route\n\n\nThe \nDetection Response Route\n is the re-entry point to the WFM. It unmarshals the protobuf responses from components and converts them into the Track and Detection objects used within the WFM. It then optionally performs each of the following actions: track merging, detection padding, detecting moving objects, and artifact extraction. It stores the track and detection data in the Redis database and optionally uploads artifacts to an object storage server. Then, it invokes the Job Router Route to see if any additional processing needs to be done.\n\n\nMarkup Response Route \n(Not Shown)\n\n\nMarkup files are copies of the initial media with any detections visually highlighted in the frame. The \nMarkup Response Route\n optionally uploads the markup files generated by the Markup component to an object storage server and persists the locations of these markup files in the SQL database.\n\n\nOther Routes \n(Not Shown)\n\n\nAdditionally, there is a \nDetection Cancellation Route\n for cancelling detection requests sent to components, and a \nMarkup Cancellation Route\n for cancelling requests sent to the Markup component.\n\n\nAlso, there is a \nDLQ Route\n for handling messages that appear in the ActiveMQ Dead Letter Queue (DLQ), which usually indicates a component failure or inability to deliver a message. In these cases, the job is terminated with an error condition.", "title": "Workflow Manager Architecture" }, { @@ -1372,14 +1372,9 @@ }, { "location": "/Workflow-Manager-Architecture/index.html#detection-response-route", - "text": "The Detection Response Route is the re-entry point to the WFM. It unmarshals the protobuf responses from components and converts them into the Track and Detection objects used within the WFM. It then optionally performs each of the following actions: track merging, detection padding, detecting moving objects, and artifact extraction. It stores the track and detection data in the Redis database and optionally uploads artifacts to an object storage server.", + "text": "The Detection Response Route is the re-entry point to the WFM. It unmarshals the protobuf responses from components and converts them into the Track and Detection objects used within the WFM. It then optionally performs each of the following actions: track merging, detection padding, detecting moving objects, and artifact extraction. It stores the track and detection data in the Redis database and optionally uploads artifacts to an object storage server. 
Then, it invokes the Job Router Route to see if any additional processing needs to be done.", "title": "Detection Response Route" }, - { - "location": "/Workflow-Manager-Architecture/index.html#task-response-aggregation-route-not-shown", - "text": "The Task Response Aggregation Route is one of the exit points for the Detection Response Route. It waits until all of the sub-job responses have been retrieved for the current stage (task) of the pipeline, then it invokes the Job Router Route to see if any additional processing needs to be done.", - "title": "Task Response Aggregation Route (Not Shown)" - }, { "location": "/Workflow-Manager-Architecture/index.html#markup-response-route-not-shown", "text": "Markup files are copies of the initial media with any detections visually highlighted in the frame. The Markup Response Route optionally uploads the markup files generated by the Markup component to an object storage server and persists the locations of these markup files in the SQL database.", @@ -1387,7 +1382,7 @@ }, { "location": "/Workflow-Manager-Architecture/index.html#other-routes-not-shown", - "text": "Additionally, there is a Detection Cancellation Route for cancelling detection requests sent to components, and a Markup Cancellation Route for cancelling requests sent to the Markup component. Also, there is a DLQ Route for handling messages that appear in the ActiveMQ Dead Letter Queue (DLQ), which usually indicates a component failure or inability to deliver a message. In these cases, the job is terminated with an error condition.", + "text": "Additionally, there is a Detection Cancellation Route for cancelling detection requests sent to components, and a Markup Cancellation Route for cancelling requests sent to the Markup component. Also, there is a DLQ Route for handling messages that appear in the ActiveMQ Dead Letter Queue (DLQ), which usually indicates a component failure or inability to deliver a message. 
In these cases, the job is terminated with an error condition.", "title": "Other Routes (Not Shown)" }, { diff --git a/docs/site/sitemap.xml b/docs/site/sitemap.xml index 68c3f6d36066..0db2d197b764 100644 --- a/docs/site/sitemap.xml +++ b/docs/site/sitemap.xml @@ -2,127 +2,127 @@ /index.html - 2023-05-10 + 2023-05-26 daily /Release-Notes/index.html - 2023-05-10 + 2023-05-26 daily /License-And-Distribution/index.html - 2023-05-10 + 2023-05-26 daily /Acknowledgements/index.html - 2023-05-10 + 2023-05-26 daily /Install-Guide/index.html - 2023-05-10 + 2023-05-26 daily /Admin-Guide/index.html - 2023-05-10 + 2023-05-26 daily /User-Guide/index.html - 2023-05-10 + 2023-05-26 daily /Media-Segmentation-Guide/index.html - 2023-05-10 + 2023-05-26 daily /Feed-Forward-Guide/index.html - 2023-05-10 + 2023-05-26 daily /Derivative-Media-Guide/index.html - 2023-05-10 + 2023-05-26 daily /Object-Storage-Guide/index.html - 2023-05-10 + 2023-05-26 daily /Markup-Guide/index.html - 2023-05-10 + 2023-05-26 daily /TiesDb-Guide/index.html - 2023-05-10 + 2023-05-26 daily /REST-API/index.html - 2023-05-10 + 2023-05-26 daily /Component-API-Overview/index.html - 2023-05-10 + 2023-05-26 daily /Component-Descriptor-Reference/index.html - 2023-05-10 + 2023-05-26 daily /CPP-Batch-Component-API/index.html - 2023-05-10 + 2023-05-26 daily /Python-Batch-Component-API/index.html - 2023-05-10 + 2023-05-26 daily /Java-Batch-Component-API/index.html - 2023-05-10 + 2023-05-26 daily /GPU-Support-Guide/index.html - 2023-05-10 + 2023-05-26 daily /Contributor-Guide/index.html - 2023-05-10 + 2023-05-26 daily /Development-Environment-Guide/index.html - 2023-05-10 + 2023-05-26 daily /Node-Guide/index.html - 2023-05-10 + 2023-05-26 daily /Workflow-Manager-Architecture/index.html - 2023-05-10 + 2023-05-26 daily /CPP-Streaming-Component-API/index.html - 2023-05-10 + 2023-05-26 daily \ No newline at end of file From 35827a3a4a95e285819058f29ea4dc071a01c306 Mon Sep 17 00:00:00 2001 From: Brian Rosenberg Date: Wed, 7 Jun 2023 14:27:23 -0400 Subject: [PATCH 2/2] Address PR issues --- docs/docs/User-Guide.md | 6 --- docs/docs/Workflow-Manager-Architecture.md | 2 - docs/site/User-Guide/index.html | 3 -- .../Workflow-Manager-Architecture/index.html | 8 --- docs/site/index.html | 2 +- docs/site/search/search_index.json | 8 +-- docs/site/sitemap.xml | 50 +++++++++---------- 7 files changed, 30 insertions(+), 49 deletions(-) diff --git a/docs/docs/User-Guide.md b/docs/docs/User-Guide.md index c982d28abdba..d7d8cfe3606f 100644 --- a/docs/docs/User-Guide.md +++ b/docs/docs/User-Guide.md @@ -27,12 +27,6 @@ The landing page for a user is the Job Status page: ![Landing Page](img/mpf_landing.png "Landing Page") -Logging in starts a user session. By default, after 30 minutes of inactivity the user will automatically be logged out of the system. Within one minute of logging out the user will be prompted to extend or end their session. Note that the timeout period can be configured by any admin user with the admin role. - -A given user can only be logged into the OpenMPF from one machine using one browser at a time. If the same user attempts to log in from another machine, or another browser on the same machine, then the first user session will be terminated immediately and redirected back to the login page. This feature ensures that the user will be able to immediately log in again if the user accidentally closes the browser window or shuts down their machine without properly logging out first. 
- -A user may have multiple browser tabs or windows open for the same session, for example, to view the Jobs Status page and Logs page at the same time. It is not recommended that two users login using the same browser at the same time in different tabs or windows. Technically, the second user to login will take precedence, but the first user session will not appear to be terminated. Instead the first user will appear to share recent jobs and other information with the second user. Also, when one of the users logs out in this scenario, they will both be logged out. - ## Logging out To log out a user can click the down arrow associated with the user icon at the top right hand corner of the page and then select "Logout": diff --git a/docs/docs/Workflow-Manager-Architecture.md b/docs/docs/Workflow-Manager-Architecture.md index 76f964ad1efa..b0c9281a5529 100644 --- a/docs/docs/Workflow-Manager-Architecture.md +++ b/docs/docs/Workflow-Manager-Architecture.md @@ -53,7 +53,6 @@ The table below lists the controllers: | AdminPropertySettingsController | Allows access and modification of system properties | | AdminStatisticsController | Generates job statistics | | AtmosphereController | Uses Atmosphere to manage server-side push | -| BootoutController | Handles bootouts when a second session is opened by the same user | | HomeController | Manages index page and version information | | JobController | Manages job creation and interaction | | LoginController | Manages login/logout and authentication | @@ -63,7 +62,6 @@ The table below lists the controllers: | PipelineController | Handles the creation and deletion of actions, tasks, and pipelines | | ServerMediaController | Enables selection and deselection of files at a directory level | | SystemMessageController | Manages system level messages, such as notifying users that a server restart is needed | -| TimeoutController | Manages session timeouts | The following sections describe some of the controllers in more detail. diff --git a/docs/site/User-Guide/index.html b/docs/site/User-Guide/index.html index 23dba03030bb..b1acbcf31399 100644 --- a/docs/site/User-Guide/index.html +++ b/docs/site/User-Guide/index.html @@ -270,9 +270,6 @@

    Logging In

    Login Page

    The landing page for a user is the Job Status page:

    Landing Page

    - Logging in starts a user session. By default, after 30 minutes of inactivity the user will automatically be logged out of the system. Within one minute of logging out the user will be prompted to extend or end their session. Note that the timeout period can be configured by any admin user with the admin role.

    - A given user can only be logged into the OpenMPF from one machine using one browser at a time. If the same user attempts to log in from another machine, or another browser on the same machine, then the first user session will be terminated immediately and redirected back to the login page. This feature ensures that the user will be able to immediately log in again if the user accidentally closes the browser window or shuts down their machine without properly logging out first.

    - A user may have multiple browser tabs or windows open for the same session, for example, to view the Jobs Status page and Logs page at the same time. It is not recommended that two users login using the same browser at the same time in different tabs or windows. Technically, the second user to login will take precedence, but the first user session will not appear to be terminated. Instead the first user will appear to share recent jobs and other information with the second user. Also, when one of the users logs out in this scenario, they will both be logged out.

    Logging out

    To log out, a user can click the down arrow associated with the user icon at the top right-hand corner of the page and then select "Logout":

    Logout Button

    diff --git a/docs/site/Workflow-Manager-Architecture/index.html b/docs/site/Workflow-Manager-Architecture/index.html index 40b5fc116802..5800a002ac17 100644 --- a/docs/site/Workflow-Manager-Architecture/index.html +++ b/docs/site/Workflow-Manager-Architecture/index.html @@ -325,10 +325,6 @@

    Basic Controllers

    Uses Atmosphere to manage server-side push -BootoutController -Handles bootouts when a second session is opened by the same user - - HomeController Manages index page and version information @@ -364,10 +360,6 @@

    Basic Controllers

    SystemMessageController Manages system level messages, such as notifying users that a server restart is needed - -TimeoutController -Manages session timeouts -

    The following sections describe some of the controllers in more detail.
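
    As a generic illustration of the pattern these controllers share (a thin REST layer that delegates to a WFM service), consider the hypothetical sketch below, which mirrors the documented "/rest/jobs/{id}" endpoint shape. The DTO and service interface are placeholders, not the actual WFM classes.

```java
import org.springframework.web.bind.annotation.*;

// Hypothetical sketch of the controller pattern; not the actual JobController.
@RestController
@RequestMapping("/rest/jobs")
public class JobControllerSketch {
    public record JobInfo(long id, String status) {}              // placeholder DTO

    public interface JobService { JobInfo getJobInfo(long id); }  // assumed service interface

    private final JobService jobService;

    public JobControllerSketch(JobService jobService) {
        this.jobService = jobService;
    }

    // Returns a JSON representation of a previously run job's information.
    @GetMapping("/{id}")
    public JobInfo getJob(@PathVariable("id") long id) {
        return jobService.getJobInfo(id);
    }
}
```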

    diff --git a/docs/site/index.html b/docs/site/index.html index 53218998d3a3..7593b7c90a58 100644 --- a/docs/site/index.html +++ b/docs/site/index.html @@ -380,5 +380,5 @@

    Overview

    diff --git a/docs/site/search/search_index.json b/docs/site/search/search_index.json index 180d9ad857c5..26a8068c906f 100644 --- a/docs/site/search/search_index.json +++ b/docs/site/search/search_index.json @@ -182,7 +182,7 @@ }, { "location": "/User-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nGeneral\n\n\nThe Open Media Processing Framework (OpenMPF) can be used in three ways:\n\n\n\n\nThrough the OpenMPF Web user interface (UI)\n\n\nThrough the \nREST API endpoints\n\n\nThrough the \nCLI Runner\n\n\n\n\nAccessing the Web UI\n\n\nOn the server hosting the Open Media Processing Framework, the Web UI is accessible at http://localhost:8080/workflow-manager. To access it from other machines, substitute the hostname or IP address of the master node server in place of \"localhost\".\n\n\nThe OpenMPF user interface was designed and tested for use with Chrome and Firefox. It has not been tested with other browsers. Attempting to use an unsupported browser will result in a warning.\n\n\nLogging In\n\n\nThe OpenMPF Web UI requires user authentication and provides two default accounts: \"mpf\" and \"admin\". The password for the \"mpf\" user is \"mpf123\". These accounts are used to assign user or admin roles for OpenMPF cluster management. Note that an administrator can remove these accounts and/or add new ones using a command line tool. Refer to the \nAdmin Guide\n for features available to an admin user.\n\n\n\n\nThe landing page for a user is the Job Status page:\n\n\n\n\nLogging in starts a user session. By default, after 30 minutes of inactivity the user will automatically be logged out of the system. Within one minute of logging out the user will be prompted to extend or end their session. Note that the timeout period can be configured by any admin user with the admin role.\n\n\nA given user can only be logged into the OpenMPF from one machine using one browser at a time. If the same user attempts to log in from another machine, or another browser on the same machine, then the first user session will be terminated immediately and redirected back to the login page. This feature ensures that the user will be able to immediately log in again if the user accidentally closes the browser window or shuts down their machine without properly logging out first.\n\n\nA user may have multiple browser tabs or windows open for the same session, for example, to view the Jobs Status page and Logs page at the same time. It is not recommended that two users login using the same browser at the same time in different tabs or windows. Technically, the second user to login will take precedence, but the first user session will not appear to be terminated. Instead the first user will appear to share recent jobs and other information with the second user. 
Also, when one of the users logs out in this scenario, they will both be logged out.\n\n\nLogging out\n\n\nTo log out a user can click the down arrow associated with the user icon at the top right hand corner of the page and then select \"Logout\":\n\n\n\n\nUser (Non-Admin) Features\n\n\nThe remainder of this document will describe the features available to a non-admin user.\n\n\nCreating Workflow Manager Jobs\n\n\nA \"job\" consists of a set of image, video, or audio files and a set of exploitation algorithms that will operate on those files. A job is created by assigning input media file(s) to a pipeline. A pipeline specifies the order in which processing steps are performed. Each step consists of a single task and each task consists of one or more actions which may be performed in parallel. The following sections describe the UI views associated with the different aspects of job creation and job execution.\n\n\nCreate Job\n\n\nThis is the primary page for creating jobs. Creating a job consists of uploading and selecting files as well as a pipeline and job priority.\n\n\n\n\nUploading Files\n\n\nSelecting a directory in the File Manager will display all files in that directory. The user can use previously uploaded files, or to choose from the icon bar at the bottom of the panel:\n\n\n Create New Folder\n\n Add Local Files\n\n Upload from URL\n\n Refresh\n\n\nNote that the first three options are only available if the \"remote-media\" directory or one of its subdirectories is selected. That directory resides in the OpenMPF share directory. The full path is shown in the footer of the File Manager section.\n\n\nClicking the \"Add Local Files\" icon will display a file browser dialog so that the user can select and upload one or more files from their local machine. The files will be uploaded to the selected directory. The upload progress dialog will display a preview of each file (if possible) and whether or not each file is uploaded successfully.\n\n\nClicking the \"Create New Folder\" icon will allow the user to create a new directory within the one currently selected. If the user has selected \"remote-media\", then adding a directory called \"Test Data\" will place it within \"remote-media\". \"Test Data\" will appear as a subdirectory in the directory tree shown in the web UI. If the user then clicks on \"Test Data\" and then the \"Add Local Files\" button the user can upload files to that specific directory. In the screenshot below, \"lena.png\" has been uploaded to the parent \"remote-media\" directory.\n\n\n\n\nClicking the \"Upload from URL\" icon enables the user to specify URLs pointing to remote media. Each URL must appear on a new line. Note that if a URL to a video is submitted then it must be a direct link to the video file. Specifying a URL to a YouTube HTML page, for example, will not work.\n\n\n\n\nClicking the \"Refresh\" icon updates the displayed file tree from the file system. Use this if an external process has added or removed files to or from the underlying file system.\n\n\nCreating Jobs\n\n\nCreating a job consists of selecting files as well as a pipeline and job priority.\n\n\n\n\nFiles are selected by first clicking the name of a directory to populate the files table in the center of the UI and then clicking the checkbox next to the file. Multiple files can be selected, including files from different directories. Also, the contents of an entire directory, and its subdirectories, can be selected by clicking the checkbox next to the parent directory name. 
To review which files have been selected, click the \"View\" button shown to the right of the \"# Files\" indicator. If there are many files in a directory, you may need to page through the directory using the page number buttons at the bottom of the center pane.\n\n\nYou can remove a file from the selected files by clicking on the red \"X\" for the individual file. You can also remove multiple files by first selecting the files using the checkboxes and then clicking on the \"Remove Checked\" button.\n\n\n\n\nThe media properties can be adjusted for individual files by clicking on the \"Set Properties\" button for that file. You can modify the properties of a group of files by clicking on the \"Set properties for Checked\" after selecting multiple files.\n\n\n\n\nAfter files have been selected it's time to assign a pipeline and job priority. The \"Select a pipeline and job priority\" section is located on the right side of the screen. Clicking on the down-arrow on the far right of the \"Select a pipeline\" area displays a drop-down menu containing the available pipelines. Click on the desired pipeline to select it. Existing pipelines provided with the system are listed in the Default Pipelines section of this document.\n\n\n\"Select job priority\" is immediately below \"Select a pipeline\" and has a similar drop-down menu. Clicking on the down-arrow on the right hand side of the \"Select job priority\" area displays the drop-down menu of available priorities. Clicking on the desired priority selects it. Priority 4 is the default value used if no priority is selected by the user. Priority 0 is the lowest priority, and priority 9 is the highest priority. When a job is executed it's divided into tasks that are each executed by a component service running on one of the nodes in the OpenMPF cluster. Each service executes tasks with the highest priority first. Note that a service will first complete the task it's currently processing before moving on to the next task. Thus, a long-running low-priority task may delay the execution of a high-priority task.\n\n\nAfter files have been selected and a pipeline and priority are assigned, clicking on the \"Create Job\" icon will start the job. When the job starts, the user will be shown the \"Job Status\" view.\n\n\nJob Status\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\n\n\nWhen a job is COMPLETE a user can view the generated JSON output object data by clicking the \"Output Objects\" button for that job. A new tab/window will open with the detection output. 
The detection object output displays a formatted JSON representation of the detection results.\n\n\n{\n \"jobId\": \"localhost-11\",\n \"errors\": [],\n \"warnings\": [],\n \"objectId\": \"ef027349-8e6a-4472-a459-eba9463787f3\",\n \"pipeline\": {\n \"name\": \"OCV FACE DETECTION PIPELINE\",\n \"description\": \"Performs OpenCV face detection.\",\n \"tasks\": [\n {\n \"actionType\": \"DETECTION\",\n \"name\": \"OCV FACE DETECTION TASK\",\n \"description\": \"Performs OpenCV face detection.\",\n \"actions\": [\n {\n \"algorithm\": \"FACECV\",\n \"name\": \"OCV FACE DETECTION ACTION\",\n \"description\": \"Executes the OpenCV face detection algorithm using the default parameters.\",\n \"properties\": {}\n }\n ]\n }\n ]\n },\n \"priority\": 4,\n \"siteId\": \"mpf1\",\n \"externalJobId\": null,\n \"timeStart\": \"2021-09-07T20:57:01.073Z\",\n \"timeStop\": \"2021-09-07T20:57:02.946Z\",\n \"status\": \"COMPLETE\",\n \"algorithmProperties\": {},\n \"jobProperties\": {},\n \"environmentVariableProperties\": {},\n \"media\": [\n {\n \"mediaId\": 3,\n \"path\": \"file:///opt/mpf/share/remote-media/faces.jpg\",\n \"sha256\": \"184e9b04369248ae8a97ec2a20b1409a016e2895686f90a2a1910a0bef763d56\",\n \"mimeType\": \"image/jpeg\",\n \"mediaType\": \"IMAGE\",\n \"length\": 1,\n \"mediaMetadata\": {\n \"FRAME_HEIGHT\": \"1275\",\n \"FRAME_WIDTH\": \"1920\",\n \"MIME_TYPE\": \"image/jpeg\"\n },\n \"mediaProperties\": {},\n \"status\": \"COMPLETE\",\n \"detectionProcessingErrors\": {},\n \"markupResult\": null,\n \"output\": {\n \"FACE\": [\n {\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"d4b4a6e870c1378a3bc85a234b6f4c881f81a14edcf858d6d256d04ad40bc175\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"confidence\": 5,\n \"trackProperties\": {},\n \"exemplar\": {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n },\n \"detections\": [\n {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n }\n ]\n }\n ]\n }\n ]\n }\n }\n ]\n}\n\n\n\nA user can click the \"Cancel\" button to attempt to cancel the execution of a job before it completes. Note that if a service is currently processing part of a job, for example, a video segment that's part of a larger video file, then it will continue to process that part of the job until it completes or there is an error. The act of cancelling a job will prevent other parts of that job from being processed. Thus, if the \"Cancel\" button is clicked late into the job execution, or if each part of the job is already being processed by services executing in parallel, it may have no effect. Also, if the video segment size is set to a very large number, and the detection being performed is slow, then cancelling a job could take awhile.\n\n\nA user can click the \"Resubmit\" button to execute a job again. The new job execution will retain the same job id and all generated artifacts, marked up media, and detection objects will be replaced with the new results. The results of the previous job execution will no longer be available. 
Note that the user has the option to change the job priority when resubmitting a job.\n\n\nYou can view the results of any Media Markup by clicking on the \"Media\" button for that job. This view will display the path of the source medium and the marked up output path of any media processed using a pipeline that contains a markup action. Clicking an image will display a popup with the marked up image. You cannot view a preview for marked up videos. In any case, the marked up data can be downloaded to the machine running the web browser by clicking the \"Download\" button.\n\n\n\n\nCreate Custom Pipelines\n\n\nA pipeline consists of a series of tasks executed sequentially. A task consists of a single action or a set of two or more actions performed in parallel. An action is the execution of an algorithm. The ability to arrange tasks and actions in various ways provides a great deal of flexibility when creating pipelines. Users may combine pre-existing tasks in different ways, or create new tasks based on the pre-existing actions.\n\n\nSelecting \"Pipelines\" from the \"Configuration\" dropdown menu in the top menu bar brings up the Pipeline Creation View, which enables users to create new pipelines. To create a new action, the user can scroll to the \"Create A New Action\" section of the page and select the desired algorithm from the \"Select an Algorithm\" dropdown menu:\n\n\n\n\nSelecting an algorithm will bring up a scrollable table of properties associated with the algorithm, including each property's name, description, data type, and an editable field allowing the user to set a custom value. The user may enter values for only those properties that they wish to change; any property value fields left blank will result in default values being used for those properties. For example, a custom action may be created based on the OpenCV face detection component to scan for faces equal to or exceeding a size of 100x100 pixels.\n\n\nWhen done editing the property values, the user can click the \"Create Action\" button, enter a name and description for the action (both are required), and then click the \"Create\" button. The action will then be listed in the \"Available Actions\" table and also in the \"Select an Action\" dropdown menu used for task creation.\n\n\n\n\nTo create a new task, the user can scroll to the \"Create A New Task\" section of the page:\n\n\n\n\nThe user can use the \"Select an Action\" dropdown menu to select the desired action and then click \"Add Action to Task\". The user can follow this procedure to add additional actions to the task, if desired. Clicking on the \"Remove\" button next to an added action will remove it from the task. When the user is finished adding actions the user can click \"Create Task\", enter a name and description for the task (both are required), and then click the \"Create\" button. The task will be listed in the \"Available Tasks\" table as well as in the \"Select a Task\" dropdown menu used for pipeline creation.\n\n\n\n\nTo build a new pipeline, the user can scroll down to the \"Create A New Pipeline\" section of the page:\n\n\n\n\nThe user can use the \"Select a Task\" dropdown menu to select the first task and then click \"Add Task to Pipeline\". The user can follow this procedure to add additional tasks to the pipeline, if desired. Clicking on the \"Remove\" button next to an added task will remove it from the pipeline. 
When the user is finished adding tasks the user can click \"Create Pipeline\", enter a name and description for the pipeline (both are required), and then click the \"Create\" button. The pipeline will be listed in the \"Available Pipelines\" table.\n\n\n\n\nAll pipelines successfully created in this view will also appear in the pipeline drop down selection menus on any job creation page:\n\n\n\n\n\n\nNOTE: Pipeline, task, and action names are case-insensitive. All letters will be converted to uppercase.\n\n\n\n\nLogs\n\n\nThis page allows a user to view the various log files that are generated by system processes running on the various nodes in the OpenMPF cluster. A log file can be selected by first selecting a host from the \"Available Hosts\" drop-down and then selecting a log file from the \"Available Logs\" drop-down. The information in the log can be filtered for display based on the following log levels: ALL, TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. Choosing a successive log level displays all information at that level and levels below (e.g., choosing WARN will cause all WARN, INFO, DEBUG, and TRACE information to be displayed, but will filter out ERROR and FATAL information).\n\n\n\n\nIn general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same \"ocv-face-detection\" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different \"ocv-face-detection\" log file.\n\n\nNote that only the master node will have the \"workflow-manager\" log. This is because the workflow manager only runs on the master node. The same is true for the \"activemq\" logs.\n\n\nThe \"node-manager-startup\" and \"node-manager\" logs will appear for every node in a non-Docker OpenMPF cluster. The \"node-manager-startup\" log captures information about the nodemanager startup process, such as if any errors occurred. The \"node-manager\" log captures information about node manager execution, such as starting and stopping services.\n\n\nThe \"detection\" log captures information about initializing C++ detection components and how they handle job request and response messages.\n\n\nProperties Settings\n\n\nThis page allows a user to view the various OpenMPF properties configured automatically or by an admin user:\n\n\n\n\nStatistics\n\n\nThe \"Jobs\" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. Job statistics are preserved when the workflow manager is restarted.\n\n\n\n\nFor example, the DLIB FACE DETECTION PIPELINE was run twice. Note that the Y-axis in the bar graph has a logarithmic scale. Hovering the mouse over any bar in the graph will show more information. Information about each pipeline is listed below the graph.\n\n\nThe \"Processes\" tab on this page allows a user to view a table with information about the runtime of various internal workflow manager operations. The \"Count\" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the workflow manager is restarted.\n\n\n\n\nREST API\n\n\nThis page allows a user to try out the \nvarious REST API endpoints\n provided by the workflow manager. 
It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF.\n\n\nAfter selecting a functional category, such as \"meta\", \"jobs\", \"statistics\", \"nodes\", \"pipelines\", or \"system-message\", each REST endpoint for that category is shown in a list. Selecting one of them will cause it to expand and reveal more information about the request and response structures. If the request takes any parameters then a section will appear that allows the user to manually specify them.\n\n\n\n\nIn the example above, the \"/rest/jobs/{id}\" endpoint was selected. It takes a required \"id\" parameter that corresponds to a previously run job and returns a JSON representation of that job's information. The screenshot below shows the result of specifying an \"id\" of \"1\", providing the \"mpf\" user credentials when prompted, and then clicking the \"Try it out!\" button:\n\n\n\n\nThe HTTP response information is shown below the \"Try it out!\" button. Note that the structure of the \"Response Body\" is the same as the response model shown in the \"Response Class\" directly underneath the \"/rest/jobs/{id}\" label.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nGeneral\n\n\nThe Open Media Processing Framework (OpenMPF) can be used in three ways:\n\n\n\n\nThrough the OpenMPF Web user interface (UI)\n\n\nThrough the \nREST API endpoints\n\n\nThrough the \nCLI Runner\n\n\n\n\nAccessing the Web UI\n\n\nOn the server hosting the Open Media Processing Framework, the Web UI is accessible at http://localhost:8080/workflow-manager. To access it from other machines, substitute the hostname or IP address of the master node server in place of \"localhost\".\n\n\nThe OpenMPF user interface was designed and tested for use with Chrome and Firefox. It has not been tested with other browsers. Attempting to use an unsupported browser will result in a warning.\n\n\nLogging In\n\n\nThe OpenMPF Web UI requires user authentication and provides two default accounts: \"mpf\" and \"admin\". The password for the \"mpf\" user is \"mpf123\". These accounts are used to assign user or admin roles for OpenMPF cluster management. Note that an administrator can remove these accounts and/or add new ones using a command line tool. Refer to the \nAdmin Guide\n for features available to an admin user.\n\n\n\n\nThe landing page for a user is the Job Status page:\n\n\n\n\nLogging out\n\n\nTo log out a user can click the down arrow associated with the user icon at the top right hand corner of the page and then select \"Logout\":\n\n\n\n\nUser (Non-Admin) Features\n\n\nThe remainder of this document will describe the features available to a non-admin user.\n\n\nCreating Workflow Manager Jobs\n\n\nA \"job\" consists of a set of image, video, or audio files and a set of exploitation algorithms that will operate on those files. A job is created by assigning input media file(s) to a pipeline. A pipeline specifies the order in which processing steps are performed. 
Each step consists of a single task and each task consists of one or more actions which may be performed in parallel. The following sections describe the UI views associated with the different aspects of job creation and job execution.\n\n\nCreate Job\n\n\nThis is the primary page for creating jobs. Creating a job consists of uploading and selecting files as well as a pipeline and job priority.\n\n\n\n\nUploading Files\n\n\nSelecting a directory in the File Manager will display all files in that directory. The user can use previously uploaded files, or choose from the icon bar at the bottom of the panel:\n\n\n Create New Folder\n\n Add Local Files\n\n Upload from URL\n\n Refresh\n\n\nNote that the first three options are only available if the \"remote-media\" directory or one of its subdirectories is selected. That directory resides in the OpenMPF share directory. The full path is shown in the footer of the File Manager section.\n\n\nClicking the \"Add Local Files\" icon will display a file browser dialog so that the user can select and upload one or more files from their local machine. The files will be uploaded to the selected directory. The upload progress dialog will display a preview of each file (if possible) and whether or not each file is uploaded successfully.\n\n\nClicking the \"Create New Folder\" icon will allow the user to create a new directory within the one currently selected. If the user has selected \"remote-media\", then adding a directory called \"Test Data\" will place it within \"remote-media\". \"Test Data\" will appear as a subdirectory in the directory tree shown in the web UI. If the user then clicks on \"Test Data\" and then the \"Add Local Files\" button, the user can upload files to that specific directory. In the screenshot below, \"lena.png\" has been uploaded to the parent \"remote-media\" directory.\n\n\n\n\nClicking the \"Upload from URL\" icon enables the user to specify URLs pointing to remote media. Each URL must appear on a new line. Note that if a URL to a video is submitted then it must be a direct link to the video file. Specifying a URL to a YouTube HTML page, for example, will not work.\n\n\n\n\nClicking the \"Refresh\" icon updates the displayed file tree from the file system. Use this if an external process has added or removed files to or from the underlying file system.\n\n\nCreating Jobs\n\n\nCreating a job consists of selecting files as well as a pipeline and job priority.\n\n\n\n\nFiles are selected by first clicking the name of a directory to populate the files table in the center of the UI and then clicking the checkbox next to the file. Multiple files can be selected, including files from different directories. Also, the contents of an entire directory, and its subdirectories, can be selected by clicking the checkbox next to the parent directory name. To review which files have been selected, click the \"View\" button shown to the right of the \"# Files\" indicator. If there are many files in a directory, you may need to page through the directory using the page number buttons at the bottom of the center pane.\n\n\nYou can remove a file from the selected files by clicking on the red \"X\" for the individual file. You can also remove multiple files by first selecting the files using the checkboxes and then clicking on the \"Remove Checked\" button.\n\n\n\n\nThe media properties can be adjusted for individual files by clicking on the \"Set Properties\" button for that file. 
You can modify the properties of a group of files by clicking on the \"Set properties for Checked\" button after selecting multiple files.\n\n\n\n\nAfter files have been selected, it's time to assign a pipeline and job priority. The \"Select a pipeline and job priority\" section is located on the right side of the screen. Clicking on the down-arrow on the far right of the \"Select a pipeline\" area displays a drop-down menu containing the available pipelines. Click on the desired pipeline to select it. Existing pipelines provided with the system are listed in the Default Pipelines section of this document.\n\n\n\"Select job priority\" is immediately below \"Select a pipeline\" and has a similar drop-down menu. Clicking on the down-arrow on the right-hand side of the \"Select job priority\" area displays the drop-down menu of available priorities. Clicking on the desired priority selects it. Priority 4 is the default value used if no priority is selected by the user. Priority 0 is the lowest priority, and priority 9 is the highest priority. When a job is executed, it's divided into tasks that are each executed by a component service running on one of the nodes in the OpenMPF cluster. Each service executes tasks with the highest priority first. Note that a service will first complete the task it's currently processing before moving on to the next task. Thus, a long-running low-priority task may delay the execution of a high-priority task.\n\n\nAfter files have been selected and a pipeline and priority are assigned, clicking on the \"Create Job\" icon will start the job. When the job starts, the user will be shown the \"Job Status\" view.\n\n\nJob Status\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\n\n\nWhen a job is COMPLETE, a user can view the generated JSON output object data by clicking the \"Output Objects\" button for that job. A new tab/window will open with the detection output. 
The detection object output displays a formatted JSON representation of the detection results.\n\n\n{\n \"jobId\": \"localhost-11\",\n \"errors\": [],\n \"warnings\": [],\n \"objectId\": \"ef027349-8e6a-4472-a459-eba9463787f3\",\n \"pipeline\": {\n \"name\": \"OCV FACE DETECTION PIPELINE\",\n \"description\": \"Performs OpenCV face detection.\",\n \"tasks\": [\n {\n \"actionType\": \"DETECTION\",\n \"name\": \"OCV FACE DETECTION TASK\",\n \"description\": \"Performs OpenCV face detection.\",\n \"actions\": [\n {\n \"algorithm\": \"FACECV\",\n \"name\": \"OCV FACE DETECTION ACTION\",\n \"description\": \"Executes the OpenCV face detection algorithm using the default parameters.\",\n \"properties\": {}\n }\n ]\n }\n ]\n },\n \"priority\": 4,\n \"siteId\": \"mpf1\",\n \"externalJobId\": null,\n \"timeStart\": \"2021-09-07T20:57:01.073Z\",\n \"timeStop\": \"2021-09-07T20:57:02.946Z\",\n \"status\": \"COMPLETE\",\n \"algorithmProperties\": {},\n \"jobProperties\": {},\n \"environmentVariableProperties\": {},\n \"media\": [\n {\n \"mediaId\": 3,\n \"path\": \"file:///opt/mpf/share/remote-media/faces.jpg\",\n \"sha256\": \"184e9b04369248ae8a97ec2a20b1409a016e2895686f90a2a1910a0bef763d56\",\n \"mimeType\": \"image/jpeg\",\n \"mediaType\": \"IMAGE\",\n \"length\": 1,\n \"mediaMetadata\": {\n \"FRAME_HEIGHT\": \"1275\",\n \"FRAME_WIDTH\": \"1920\",\n \"MIME_TYPE\": \"image/jpeg\"\n },\n \"mediaProperties\": {},\n \"status\": \"COMPLETE\",\n \"detectionProcessingErrors\": {},\n \"markupResult\": null,\n \"output\": {\n \"FACE\": [\n {\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"d4b4a6e870c1378a3bc85a234b6f4c881f81a14edcf858d6d256d04ad40bc175\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"confidence\": 5,\n \"trackProperties\": {},\n \"exemplar\": {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n },\n \"detections\": [\n {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n }\n ]\n }\n ]\n }\n ]\n }\n }\n ]\n}\n\n\n\nA user can click the \"Cancel\" button to attempt to cancel the execution of a job before it completes. Note that if a service is currently processing part of a job, for example, a video segment that's part of a larger video file, then it will continue to process that part of the job until it completes or there is an error. The act of cancelling a job will prevent other parts of that job from being processed. Thus, if the \"Cancel\" button is clicked late into the job execution, or if each part of the job is already being processed by services executing in parallel, it may have no effect. Also, if the video segment size is set to a very large number, and the detection being performed is slow, then cancelling a job could take a while.\n\n\nA user can click the \"Resubmit\" button to execute a job again. The new job execution will retain the same job ID; all generated artifacts, marked up media, and detection objects will be replaced with the new results. The results of the previous job execution will no longer be available. 
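\n\n\nDownstream systems often consume this output object programmatically rather than through the browser. The following is a minimal sketch, not part of OpenMPF itself; it assumes the Jackson library is on the classpath and uses an illustrative local file name for a saved copy of the output object shown above:\n\n\nimport com.fasterxml.jackson.databind.JsonNode;\nimport com.fasterxml.jackson.databind.ObjectMapper;\nimport java.io.File;\n\npublic class OutputObjectReader {\n    public static void main(String[] args) throws Exception {\n        // \"job-output.json\" is an illustrative name for a saved output object.\n        JsonNode root = new ObjectMapper().readTree(new File(\"job-output.json\"));\n        System.out.println(\"Job \" + root.get(\"jobId\").asText()\n                + \" status: \" + root.get(\"status\").asText());\n        // Walk media -> output -> FACE -> tracks, mirroring the structure above.\n        for (JsonNode media : root.get(\"media\")) {\n            for (JsonNode action : media.path(\"output\").path(\"FACE\")) {\n                for (JsonNode track : action.get(\"tracks\")) {\n                    JsonNode ex = track.get(\"exemplar\");\n                    System.out.println(\"FACE at (\" + ex.get(\"x\").asInt() + \", \"\n                            + ex.get(\"y\").asInt() + \") size \" + ex.get(\"width\").asInt()\n                            + \"x\" + ex.get(\"height\").asInt());\n                }\n            }\n        }\n    }\n}\n\n\n\n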
Note that the user has the option to change the job priority when resubmitting a job.\n\n\nYou can view the results of any Media Markup by clicking on the \"Media\" button for that job. This view will display the path of the source medium and the marked up output path of any media processed using a pipeline that contains a markup action. Clicking an image will display a popup with the marked up image. You cannot view a preview for marked up videos. In any case, the marked up data can be downloaded to the machine running the web browser by clicking the \"Download\" button.\n\n\n\n\nCreate Custom Pipelines\n\n\nA pipeline consists of a series of tasks executed sequentially. A task consists of a single action or a set of two or more actions performed in parallel. An action is the execution of an algorithm. The ability to arrange tasks and actions in various ways provides a great deal of flexibility when creating pipelines. Users may combine pre-existing tasks in different ways, or create new tasks based on the pre-existing actions.\n\n\nSelecting \"Pipelines\" from the \"Configuration\" dropdown menu in the top menu bar brings up the Pipeline Creation View, which enables users to create new pipelines. To create a new action, the user can scroll to the \"Create A New Action\" section of the page and select the desired algorithm from the \"Select an Algorithm\" dropdown menu:\n\n\n\n\nSelecting an algorithm will bring up a scrollable table of properties associated with the algorithm, including each property's name, description, data type, and an editable field allowing the user to set a custom value. The user may enter values for only those properties that they wish to change; any property value fields left blank will result in default values being used for those properties. For example, a custom action may be created based on the OpenCV face detection component to scan for faces equal to or exceeding a size of 100x100 pixels.\n\n\nWhen done editing the property values, the user can click the \"Create Action\" button, enter a name and description for the action (both are required), and then click the \"Create\" button. The action will then be listed in the \"Available Actions\" table and also in the \"Select an Action\" dropdown menu used for task creation.\n\n\n\n\nTo create a new task, the user can scroll to the \"Create A New Task\" section of the page:\n\n\n\n\nThe user can use the \"Select an Action\" dropdown menu to select the desired action and then click \"Add Action to Task\". The user can follow this procedure to add additional actions to the task, if desired. Clicking on the \"Remove\" button next to an added action will remove it from the task. When the user is finished adding actions, they can click \"Create Task\", enter a name and description for the task (both are required), and then click the \"Create\" button. The task will be listed in the \"Available Tasks\" table as well as in the \"Select a Task\" dropdown menu used for pipeline creation.\n\n\n\n\nTo build a new pipeline, the user can scroll down to the \"Create A New Pipeline\" section of the page:\n\n\n\n\nThe user can use the \"Select a Task\" dropdown menu to select the first task and then click \"Add Task to Pipeline\". The user can follow this procedure to add additional tasks to the pipeline, if desired. Clicking on the \"Remove\" button next to an added task will remove it from the pipeline. 
When the user is finished adding tasks, they can click \"Create Pipeline\", enter a name and description for the pipeline (both are required), and then click the \"Create\" button. The pipeline will be listed in the \"Available Pipelines\" table.\n\n\n\n\nAll pipelines successfully created in this view will also appear in the pipeline drop-down selection menus on any job creation page:\n\n\n\n\n\n\nNOTE: Pipeline, task, and action names are case-insensitive. All letters will be converted to uppercase.\n\n\n\n\nLogs\n\n\nThis page allows a user to view the various log files that are generated by system processes running on the various nodes in the OpenMPF cluster. A log file can be selected by first selecting a host from the \"Available Hosts\" drop-down and then selecting a log file from the \"Available Logs\" drop-down. The information in the log can be filtered for display based on the following log levels: ALL, TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. Choosing a successive log level displays all information at that level and levels below (e.g., choosing WARN will cause all WARN, INFO, DEBUG, and TRACE information to be displayed, but will filter out ERROR and FATAL information).\n\n\n\n\nIn general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same \"ocv-face-detection\" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different \"ocv-face-detection\" log file.\n\n\nNote that only the master node will have the \"workflow-manager\" log. This is because the workflow manager only runs on the master node. The same is true for the \"activemq\" logs.\n\n\nThe \"node-manager-startup\" and \"node-manager\" logs will appear for every node in a non-Docker OpenMPF cluster. The \"node-manager-startup\" log captures information about the node manager startup process, such as whether any errors occurred. The \"node-manager\" log captures information about node manager execution, such as starting and stopping services.\n\n\nThe \"detection\" log captures information about initializing C++ detection components and how they handle job request and response messages.\n\n\nProperties Settings\n\n\nThis page allows a user to view the various OpenMPF properties configured automatically or by an admin user:\n\n\n\n\nStatistics\n\n\nThe \"Jobs\" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. Job statistics are preserved when the workflow manager is restarted.\n\n\n\n\nFor example, the DLIB FACE DETECTION PIPELINE was run twice. Note that the Y-axis in the bar graph has a logarithmic scale. Hovering the mouse over any bar in the graph will show more information. Information about each pipeline is listed below the graph.\n\n\nThe \"Processes\" tab on this page allows a user to view a table with information about the runtime of various internal workflow manager operations. The \"Count\" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the workflow manager is restarted.\n\n\n\n\nREST API\n\n\nThis page allows a user to try out the \nvarious REST API endpoints\n provided by the workflow manager. 
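\n\n\nThese endpoints can also be called from outside the browser. As a minimal sketch (not an official client), assuming Java 11 or later, the default \"mpf\" account described earlier, and a job with ID 1:\n\n\nimport java.net.URI;\nimport java.net.http.HttpClient;\nimport java.net.http.HttpRequest;\nimport java.net.http.HttpResponse;\nimport java.util.Base64;\n\npublic class JobInfoClient {\n    public static void main(String[] args) throws Exception {\n        // Default host and credentials from this guide; substitute your own deployment's values.\n        String auth = Base64.getEncoder().encodeToString(\"mpf:mpf123\".getBytes());\n        HttpRequest request = HttpRequest.newBuilder()\n                .uri(URI.create(\"http://localhost:8080/workflow-manager/rest/jobs/1\"))\n                .header(\"Authorization\", \"Basic \" + auth)\n                .GET()\n                .build();\n        HttpResponse<String> response = HttpClient.newHttpClient()\n                .send(request, HttpResponse.BodyHandlers.ofString());\n        System.out.println(response.statusCode()); // expect 200 for a valid job ID\n        System.out.println(response.body());       // JSON representation of the job\n    }\n}\n\n\n\nThis page simply exercises those same endpoints interactively. 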
It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF.\n\n\nAfter selecting a functional category, such as \"meta\", \"jobs\", \"statistics\", \"nodes\", \"pipelines\", or \"system-message\", each REST endpoint for that category is shown in a list. Selecting one of them will cause it to expand and reveal more information about the request and response structures. If the request takes any parameters then a section will appear that allows the user to manually specify them.\n\n\n\n\nIn the example above, the \"/rest/jobs/{id}\" endpoint was selected. It takes a required \"id\" parameter that corresponds to a previously run job and returns a JSON representation of that job's information. The screenshot below shows the result of specifying an \"id\" of \"1\", providing the \"mpf\" user credentials when prompted, and then clicking the \"Try it out!\" button:\n\n\n\n\nThe HTTP response information is shown below the \"Try it out!\" button. Note that the structure of the \"Response Body\" is the same as the response model shown in the \"Response Class\" directly underneath the \"/rest/jobs/{id}\" label.", "title": "User Guide" }, { @@ -197,7 +197,7 @@ }, { "location": "/User-Guide/index.html#logging-in", - "text": "The OpenMPF Web UI requires user authentication and provides two default accounts: \"mpf\" and \"admin\". The password for the \"mpf\" user is \"mpf123\". These accounts are used to assign user or admin roles for OpenMPF cluster management. Note that an administrator can remove these accounts and/or add new ones using a command line tool. Refer to the Admin Guide for features available to an admin user. The landing page for a user is the Job Status page: Logging in starts a user session. By default, after 30 minutes of inactivity the user will automatically be logged out of the system. Within one minute of logging out the user will be prompted to extend or end their session. Note that the timeout period can be configured by any admin user with the admin role. A given user can only be logged into the OpenMPF from one machine using one browser at a time. If the same user attempts to log in from another machine, or another browser on the same machine, then the first user session will be terminated immediately and redirected back to the login page. This feature ensures that the user will be able to immediately log in again if the user accidentally closes the browser window or shuts down their machine without properly logging out first. A user may have multiple browser tabs or windows open for the same session, for example, to view the Jobs Status page and Logs page at the same time. It is not recommended that two users login using the same browser at the same time in different tabs or windows. Technically, the second user to login will take precedence, but the first user session will not appear to be terminated. Instead the first user will appear to share recent jobs and other information with the second user. Also, when one of the users logs out in this scenario, they will both be logged out.", + "text": "The OpenMPF Web UI requires user authentication and provides two default accounts: \"mpf\" and \"admin\". The password for the \"mpf\" user is \"mpf123\". These accounts are used to assign user or admin roles for OpenMPF cluster management. Note that an administrator can remove these accounts and/or add new ones using a command line tool. Refer to the Admin Guide for features available to an admin user. 
The landing page for a user is the Job Status page:", "title": "Logging In" }, { @@ -1302,7 +1302,7 @@ }, { "location": "/Workflow-Manager-Architecture/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nINFO:\n This document describes the Workflow Manager architecture for C++ and Java batch processing. The Python batch processing architecture and C++ stream processing architecture use many of the same elements and concepts.\n\n\n\nWorkflow Manager Overview\n\n\nOpenMPF consists of three major pieces:\n\n\n\n\nA collection of \nComponents\n which process media\n\n\nA \nNode Manager\n, which launches and monitors running components in the system in a non-Docker deployment\n\n\nThe \nWorkflow Manager\n (WFM), which allows for the creation of jobs and manages the flow through active components\n\n\n\n\nThese pieces are supported by a number of modules which provide shared functionality, as shown in the dependency diagram below:\n\n\n\n\nThere are three general functional areas in the WFM:\n\n\n\n\nThe \nControllers\n are the primary entry point, accepting REST requests which trigger actions by the WFM\n\n\nThe \nWFM Services\n, which handle administrative tasks such as pipeline creation, node management, and log retrieval\n\n\nJob Management\n, which uses Camel routes to pass a job through the levels of processing\n\n\n\n\nThere are two different databases used by the WFM:\n\n\n\n\nA \nSQL database\n stores persistent data about jobs. This data includes:\n\n\nThe job ID\n\n\nThe start and stop time of the job\n\n\nThe exit status of the job\n\n\nJob priority\n\n\nJob input/outputs\n\n\n\n\n\n\nA \nRedis database\n for storing track and detection data generated by components as they process parts of the job in various stages of the pipeline.\n\n\n\n\nThe diagram below shows the functional areas of the WFM, the databases used by the WFM, and communication with components:\n\n\n\n\nControllers / Services\n\n\nThe controllers are all located \nhere\n.\n\n\nEvery controller provides a collection of REST endpoints which allow access either to a WFM service or to the job management flow. 
Only the JobController enters the job management flow.\n\n\nBasic Controllers\n\n\nThe table below lists the controllers:\n\n\n\n\n\n\n\n\nController Class\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nAdminComponentRegistrationController\n\n\nHandles component registration\n\n\n\n\n\n\nAdminLogsController\n\n\nAccesses the log content via REST\n\n\n\n\n\n\nAdminPropertySettingsController\n\n\nAllows access and modification of system properties\n\n\n\n\n\n\nAdminStatisticsController\n\n\nGenerates job statistics\n\n\n\n\n\n\nAtmosphereController\n\n\nUses Atmosphere to manage server-side push\n\n\n\n\n\n\nBootoutController\n\n\nHandles bootouts when a second session is opened by the same user\n\n\n\n\n\n\nHomeController\n\n\nManages index page and version information\n\n\n\n\n\n\nJobController\n\n\nManages job creation and interaction\n\n\n\n\n\n\nLoginController\n\n\nManages login/logout and authentication\n\n\n\n\n\n\nMarkupController\n\n\nHandles retrieving markup results\n\n\n\n\n\n\nMediaController\n\n\nHandles uploading and downloading media files\n\n\n\n\n\n\nNodeController\n\n\nManages component services across nodes in a non-Docker deployment\n\n\n\n\n\n\nPipelineController\n\n\nHandles the creation and deletion of actions, tasks, and pipelines\n\n\n\n\n\n\nServerMediaController\n\n\nEnables selection and deselection of files at a directory level\n\n\n\n\n\n\nSystemMessageController\n\n\nManages system level messages, such as notifying users that a server restart is needed\n\n\n\n\n\n\nTimeoutController\n\n\nManages session timeouts\n\n\n\n\n\n\n\n\nThe following sections describe some of the controllers in more detail.\n\n\nAdminComponentRegistrationController\n\n\nIn a non-Docker deployment, components can be uploaded as tar.gz packages containing all necessary component data. For more information on components, read \nOpenMPF Component API Overview\n.\n\n\nThe \nAdminComponentRegistrationController\n provides endpoints which allow:\n\n\n\n\nAccess to current component information\n\n\nUpload of new components\n\n\nRegistration and unregistration of components (note that components must be registered to be included in pipelines)\n\n\nDeletion of components\n\n\n\n\nJobController\n\n\nA job is a specific pipeline's tasks and actions applied to a set of media. The \nJobController\n allows:\n\n\n\n\nAccess to information about jobs in the system\n\n\nCreation of new jobs\n\n\nCancellation of existing jobs\n\n\nDownload of job output data\n\n\nResubmission of jobs (regardless of initial job status)\n\n\n\n\nMarkupController\n\n\nMarkup files are copies of the initial media input to a job with detections visually highlighted in the image or video frames. The \nMarkupController\n can provide lists of available Markup files, or it can download a specific file.\n\n\nMediaController\n\n\nThe \nMediaController\n enables upload and organization of media files within the WFM. It provides endpoints for media upload, and also for creation of folders to organize media files in the system. At this time, there are no endpoints which allow for deletion or reorganization of media files, since all media is shared by all users.\n\n\nNodeController\n\n\nOpenMPF uses multiple hosts to enable scalability and parallel processing. The \nNodeController\n provides access to host information and allows components to be deployed on nodes in a non-Docker deployment. One or more components can be installed on a node. The same component can be installed on multiple nodes. 
Each node can manage one or more services for each component.\n\n\nThe \nNodeController\n provides host information and component service deployment status. It also provides an endpoint to deploy a service on a node and an endpoint to stop a service.\n\n\nFor more information on nodes, please read the \nNode Configuration and Status\n section in the Development Environment Guide.\n\n\nPipelineController\n\n\nThe \nPipeline Controller\n allows for the creation, retrieval, and deletion of pipelines or any of their constituent parts. While actions, tasks, and pipelines may not be directly modified, they may be deleted and recreated.\n\n\nFor more information on pipelines, please read the \nCreate Custom Pipelines\n section in the User Guide.\n\n\nJob Management\n\n\nThe request to create a job begins at the \nJobController\n. From there, it is transformed and passed through multiple flows on its way to the component services. These services process the job then return information to the WFM for JSON output generation.\n\n\nThe diagram below shows the sequence of WFM operations for a job executing a single-stage pipeline.\n\n\n\n\nAfter the job request is validated and saved to the SQL database, it passes through multiple Apache Camel routes, each of which checks that the job is still valid (with no fatal errors or cancellations), and then invokes a series of transformations and processors specific to the route.\n\n\nApache Camel\n is an open-source framework that allows developers to build rule-based routing engines. Within OpenMPF, we use a \nJava DSL\n to define the routes. Every route functions independently, and communication between the routes is URI-based. OpenMPF uses ActiveMQ to handle its message traffic.\n\n\nMedia Retriever Route\n\n\nThe \nMedia Retriever Route\n ensures that the media for the job can all be found and accessed. It stores the media information on the server to ensure continued access.\n\n\nMedia Inspection Route\n\n\nThe \nMedia Inspection Route\n splits a single job with multiple media inputs into separate messages, one for each piece of media. For each piece of media, it collects metadata about the media, including MIME type, duration, frame rate, and orientation data.\n\n\nJob Router Route\n\n\nBy the time the \nJob Router Route\n route is invoked, the job has been persisted in the permanent SQL database.\n\n\nThis route uses the pipeline's flow to create the messages that are sent to the components. For large media files, it splits the job into smaller sub-jobs by logically breaking the media up into segments. Each segment has a start point and an end point (specified as a frame or time offset).\n\n\nThis route compiles properties for the job, media, and algorithm, and determines the next component that needs to be invoked. It then marshals the job into a serialized protobuf format and sends the message off to the component for processing.\n\n\nThis route may be invoked multiple times as future routes redirect back to the Job Router so that the job can be processed by the next component in the pipeline.\n\n\nOnce the job is completed, this route converts the aggregated track and detection data in Redis into a JSON output format. 
It then clears out all data in Redis for the job, updates the final job status in the SQL database, optionally uploads the JSON output object to an object storage server, and optionally makes a callback to the endpoint listed in the job request.\n\n\nDetection Response Route\n\n\nThe \nDetection Response Route\n is the re-entry point to the WFM. It unmarshals the protobuf responses from components and converts them into the Track and Detection objects used within the WFM. It then optionally performs each of the following actions: track merging, detection padding, detecting moving objects, and artifact extraction. It stores the track and detection data in the Redis database and optionally uploads artifacts to an object storage server. Then, it invokes the Job Router Route to see if any additional processing needs to be done.\n\n\nMarkup Response Route \n(Not Shown)\n\n\nMarkup files are copies of the initial media with any detections visually highlighted in the frame. The \nMarkup Response Route\n optionally uploads the markup files generated by the Markup component to an object storage server and persists the locations of these markup files in the SQL database.\n\n\nOther Routes \n(Not Shown)\n\n\nAdditionally, there is a \nDetection Cancellation Route\n for cancelling detection requests sent to components, and a \nMarkup Cancellation Route\n for cancelling requests sent to the Markup component.\n\n\nAlso, there is a \nDLQ Route\n for handling messages that appear in the ActiveMQ Dead Letter Queue (DLQ), which usually indicates a component failure or inability to deliver a message. In these cases, the job is terminated with an error condition.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nINFO:\n This document describes the Workflow Manager architecture for C++ and Java batch processing. The Python batch processing architecture and C++ stream processing architecture use many of the same elements and concepts.\n\n\n\nWorkflow Manager Overview\n\n\nOpenMPF consists of three major pieces:\n\n\n\n\nA collection of \nComponents\n which process media\n\n\nA \nNode Manager\n, which launches and monitors running components in the system in a non-Docker deployment\n\n\nThe \nWorkflow Manager\n (WFM), which allows for the creation of jobs and manages the flow through active components\n\n\n\n\nThese pieces are supported by a number of modules which provide shared functionality, as shown in the dependency diagram below:\n\n\n\n\nThere are three general functional areas in the WFM:\n\n\n\n\nThe \nControllers\n are the primary entry point, accepting REST requests which trigger actions by the WFM\n\n\nThe \nWFM Services\n, which handle administrative tasks such as pipeline creation, node management, and log retrieval\n\n\nJob Management\n, which uses Camel routes to pass a job through the levels of processing\n\n\n\n\nThere are two different databases used by the WFM:\n\n\n\n\nA \nSQL database\n stores persistent data about jobs. 
This data includes:\n\n\nThe job ID\n\n\nThe start and stop time of the job\n\n\nThe exit status of the job\n\n\nJob priority\n\n\nJob input/outputs\n\n\n\n\n\n\nA \nRedis database\n for storing track and detection data generated by components as they process parts of the job in various stages of the pipeline.\n\n\n\n\nThe diagram below shows the functional areas of the WFM, the databases used by the WFM, and communication with components:\n\n\n\n\nControllers / Services\n\n\nThe controllers are all located \nhere\n.\n\n\nEvery controller provides a collection of REST endpoints which allow access either to a WFM service or to the job management flow. Only the JobController enters the job management flow.\n\n\nBasic Controllers\n\n\nThe table below lists the controllers:\n\n\n\n\n\n\n\n\nController Class\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nAdminComponentRegistrationController\n\n\nHandles component registration\n\n\n\n\n\n\nAdminLogsController\n\n\nAccesses the log content via REST\n\n\n\n\n\n\nAdminPropertySettingsController\n\n\nAllows access and modification of system properties\n\n\n\n\n\n\nAdminStatisticsController\n\n\nGenerates job statistics\n\n\n\n\n\n\nAtmosphereController\n\n\nUses Atmosphere to manage server-side push\n\n\n\n\n\n\nHomeController\n\n\nManages index page and version information\n\n\n\n\n\n\nJobController\n\n\nManages job creation and interaction\n\n\n\n\n\n\nLoginController\n\n\nManages login/logout and authentication\n\n\n\n\n\n\nMarkupController\n\n\nHandles retrieving markup results\n\n\n\n\n\n\nMediaController\n\n\nHandles uploading and downloading media files\n\n\n\n\n\n\nNodeController\n\n\nManages component services across nodes in a non-Docker deployment\n\n\n\n\n\n\nPipelineController\n\n\nHandles the creation and deletion of actions, tasks, and pipelines\n\n\n\n\n\n\nServerMediaController\n\n\nEnables selection and deselection of files at a directory level\n\n\n\n\n\n\nSystemMessageController\n\n\nManages system level messages, such as notifying users that a server restart is needed\n\n\n\n\n\n\n\n\nThe following sections describe some of the controllers in more detail.\n\n\nAdminComponentRegistrationController\n\n\nIn a non-Docker deployment, components can be uploaded as tar.gz packages containing all necessary component data. For more information on components, read \nOpenMPF Component API Overview\n.\n\n\nThe \nAdminComponentRegistrationController\n provides endpoints which allow:\n\n\n\n\nAccess to current component information\n\n\nUpload of new components\n\n\nRegistration and unregistration of components (note that components must be registered to be included in pipelines)\n\n\nDeletion of components\n\n\n\n\nJobController\n\n\nA job is a specific pipeline's tasks and actions applied to a set of media. The \nJobController\n allows:\n\n\n\n\nAccess to information about jobs in the system\n\n\nCreation of new jobs\n\n\nCancellation of existing jobs\n\n\nDownload of job output data\n\n\nResubmission of jobs (regardless of initial job status)\n\n\n\n\nMarkupController\n\n\nMarkup files are copies of the initial media input to a job with detections visually highlighted in the image or video frames. The \nMarkupController\n can provide lists of available Markup files, or it can download a specific file.\n\n\nMediaController\n\n\nThe \nMediaController\n enables upload and organization of media files within the WFM. It provides endpoints for media upload, and also for creation of folders to organize media files in the system. 
At this time, there are no endpoints which allow for deletion or reorganization of media files, since all media is shared by all users.\n\n\nNodeController\n\n\nOpenMPF uses multiple hosts to enable scalability and parallel processing. The \nNodeController\n provides access to host information and allows components to be deployed on nodes in a non-Docker deployment. One or more components can be installed on a node. The same component can be installed on multiple nodes. Each node can manage one or more services for each component.\n\n\nThe \nNodeController\n provides host information and component service deployment status. It also provides an endpoint to deploy a service on a node and an endpoint to stop a service.\n\n\nFor more information on nodes, please read the \nNode Configuration and Status\n section in the Development Environment Guide.\n\n\nPipelineController\n\n\nThe \nPipeline Controller\n allows for the creation, retrieval, and deletion of pipelines or any of their constituent parts. While actions, tasks, and pipelines may not be directly modified, they may be deleted and recreated.\n\n\nFor more information on pipelines, please read the \nCreate Custom Pipelines\n section in the User Guide.\n\n\nJob Management\n\n\nThe request to create a job begins at the \nJobController\n. From there, it is transformed and passed through multiple flows on its way to the component services. These services process the job and then return information to the WFM for JSON output generation.\n\n\nThe diagram below shows the sequence of WFM operations for a job executing a single-stage pipeline.\n\n\n\n\nAfter the job request is validated and saved to the SQL database, it passes through multiple Apache Camel routes, each of which checks that the job is still valid (with no fatal errors or cancellations), and then invokes a series of transformations and processors specific to the route.\n\n\nApache Camel\n is an open-source framework that allows developers to build rule-based routing engines. Within OpenMPF, we use a \nJava DSL\n to define the routes. Every route functions independently, and communication between the routes is URI-based. OpenMPF uses ActiveMQ to handle its message traffic.\n\n\nMedia Retriever Route\n\n\nThe \nMedia Retriever Route\n ensures that the media for the job can all be found and accessed. It stores the media information on the server to ensure continued access.\n\n\nMedia Inspection Route\n\n\nThe \nMedia Inspection Route\n splits a single job with multiple media inputs into separate messages, one for each piece of media. For each piece of media, it collects metadata about the media, including MIME type, duration, frame rate, and orientation data.\n\n\nJob Router Route\n\n\nBy the time the \nJob Router Route\n is invoked, the job has been persisted in the permanent SQL database.\n\n\nThis route uses the pipeline's flow to create the messages that are sent to the components. For large media files, it splits the job into smaller sub-jobs by logically breaking the media up into segments. Each segment has a start point and an end point (specified as a frame or time offset).\n\n\nThis route compiles properties for the job, media, and algorithm, and determines the next component that needs to be invoked. 
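\n\n\nTo make the Java DSL mentioned above concrete, the sketch below shows the general shape of such a route. It is illustrative only: the queue names and processing logic are placeholders, not the WFM's actual definitions.\n\n\nimport org.apache.camel.builder.RouteBuilder;\n\n// Illustrative sketch of a Camel route in the style the WFM uses.\npublic class ExampleRouterRoute extends RouteBuilder {\n    @Override\n    public void configure() {\n        from(\"activemq:MPF.EXAMPLE_JOB_REQUESTS\") // URI-based entry point\n            .routeId(\"Example Router Route\")\n            .process(exchange -> {\n                // Check that the job is still valid, compile its properties,\n                // and pick the next component in the pipeline.\n            })\n            .to(\"activemq:MPF.EXAMPLE_COMPONENT_REQUESTS\"); // hand off via ActiveMQ\n    }\n}\n\n\n\nThe actual route follows this same pattern for each sub-job. 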
It then marshals the job into a serialized protobuf format and sends the message off to the component for processing.\n\n\nThis route may be invoked multiple times as future routes redirect back to the Job Router so that the job can be processed by the next component in the pipeline.\n\n\nOnce the job is completed, this route converts the aggregated track and detection data in Redis into a JSON output format. It then clears out all data in Redis for the job, updates the final job status in the SQL database, optionally uploads the JSON output object to an object storage server, and optionally makes a callback to the endpoint listed in the job request.\n\n\nDetection Response Route\n\n\nThe \nDetection Response Route\n is the re-entry point to the WFM. It unmarshals the protobuf responses from components and converts them into the Track and Detection objects used within the WFM. It then optionally performs each of the following actions: track merging, detection padding, detecting moving objects, and artifact extraction. It stores the track and detection data in the Redis database and optionally uploads artifacts to an object storage server. Then, it invokes the Job Router Route to see if any additional processing needs to be done.\n\n\nMarkup Response Route \n(Not Shown)\n\n\nMarkup files are copies of the initial media with any detections visually highlighted in the frame. The \nMarkup Response Route\n optionally uploads the markup files generated by the Markup component to an object storage server and persists the locations of these markup files in the SQL database.\n\n\nOther Routes \n(Not Shown)\n\n\nAdditionally, there is a \nDetection Cancellation Route\n for cancelling detection requests sent to components, and a \nMarkup Cancellation Route\n for cancelling requests sent to the Markup component.\n\n\nAlso, there is a \nDLQ Route\n for handling messages that appear in the ActiveMQ Dead Letter Queue (DLQ), which usually indicates a component failure or inability to deliver a message. 
In these cases, the job is terminated with an error condition.", "title": "Workflow Manager Architecture" }, { @@ -1317,7 +1317,7 @@ }, { "location": "/Workflow-Manager-Architecture/index.html#basic-controllers", - "text": "The table below lists the controllers: Controller Class Description AdminComponentRegistrationController Handles component registration AdminLogsController Accesses the log content via REST AdminPropertySettingsController Allows access and modification of system properties AdminStatisticsController Generates job statistics AtmosphereController Uses Atmosphere to manage server-side push BootoutController Handles bootouts when a second session is opened by the same user HomeController Manages index page and version information JobController Manages job creation and interaction LoginController Manages login/logout and authentication MarkupController Handles retrieving markup results MediaController Handles uploading and downloading media files NodeController Manages component services across nodes in a non-Docker deployment PipelineController Handles the creation and deletion of actions, tasks, and pipelines ServerMediaController Enables selection and deselection of files at a directory level SystemMessageController Manages system level messages, such as notifying users that a server restart is needed TimeoutController Manages session timeouts The following sections describe some of the controllers in more detail.", + "text": "The table below lists the controllers: Controller Class Description AdminComponentRegistrationController Handles component registration AdminLogsController Accesses the log content via REST AdminPropertySettingsController Allows access and modification of system properties AdminStatisticsController Generates job statistics AtmosphereController Uses Atmosphere to manage server-side push HomeController Manages index page and version information JobController Manages job creation and interaction LoginController Manages login/logout and authentication MarkupController Handles retrieving markup results MediaController Handles uploading and downloading media files NodeController Manages component services across nodes in a non-Docker deployment PipelineController Handles the creation and deletion of actions, tasks, and pipelines ServerMediaController Enables selection and deselection of files at a directory level SystemMessageController Manages system level messages, such as notifying users that a server restart is needed The following sections describe some of the controllers in more detail.", "title": "Basic Controllers" }, { diff --git a/docs/site/sitemap.xml b/docs/site/sitemap.xml index 0db2d197b764..a9a7098e585b 100644 --- a/docs/site/sitemap.xml +++ b/docs/site/sitemap.xml @@ -2,127 +2,127 @@ /index.html - 2023-05-26 + 2023-06-07 daily /Release-Notes/index.html - 2023-05-26 + 2023-06-07 daily /License-And-Distribution/index.html - 2023-05-26 + 2023-06-07 daily /Acknowledgements/index.html - 2023-05-26 + 2023-06-07 daily /Install-Guide/index.html - 2023-05-26 + 2023-06-07 daily /Admin-Guide/index.html - 2023-05-26 + 2023-06-07 daily /User-Guide/index.html - 2023-05-26 + 2023-06-07 daily /Media-Segmentation-Guide/index.html - 2023-05-26 + 2023-06-07 daily /Feed-Forward-Guide/index.html - 2023-05-26 + 2023-06-07 daily /Derivative-Media-Guide/index.html - 2023-05-26 + 2023-06-07 daily /Object-Storage-Guide/index.html - 2023-05-26 + 2023-06-07 daily /Markup-Guide/index.html - 2023-05-26 + 2023-06-07 daily /TiesDb-Guide/index.html - 2023-05-26 + 
2023-06-07 daily /REST-API/index.html - 2023-05-26 + 2023-06-07 daily /Component-API-Overview/index.html - 2023-05-26 + 2023-06-07 daily /Component-Descriptor-Reference/index.html - 2023-05-26 + 2023-06-07 daily /CPP-Batch-Component-API/index.html - 2023-05-26 + 2023-06-07 daily /Python-Batch-Component-API/index.html - 2023-05-26 + 2023-06-07 daily /Java-Batch-Component-API/index.html - 2023-05-26 + 2023-06-07 daily /GPU-Support-Guide/index.html - 2023-05-26 + 2023-06-07 daily /Contributor-Guide/index.html - 2023-05-26 + 2023-06-07 daily /Development-Environment-Guide/index.html - 2023-05-26 + 2023-06-07 daily /Node-Guide/index.html - 2023-05-26 + 2023-06-07 daily /Workflow-Manager-Architecture/index.html - 2023-05-26 + 2023-06-07 daily /CPP-Streaming-Component-API/index.html - 2023-05-26 + 2023-06-07 daily \ No newline at end of file